In mathematics, a field F is algebraically closed if every non-constant polynomial in F[x] (the univariate polynomial ring with coefficients in F) has a root in F. In other words, a field is algebraically closed if the fundamental theorem of algebra holds for it.
Every field $K$ is contained in an algebraically closed field $C$, and the roots in $C$ of the polynomials with coefficients in $K$ form an algebraically closed field called an algebraic closure of $K$. Given two algebraic closures of $K$ there are isomorphisms between them that fix the elements of $K$.
Algebraically closed fields appear in the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
== Examples ==
As an example, the field of real numbers is not algebraically closed, because the polynomial equation $x^{2}+1=0$
has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers.
No finite field F is algebraically closed, because if a1, a2, ..., an are the elements of F, then the polynomial (x − a1)(x − a2) ⋯ (x − an) + 1
has no zero in F. However, the union of all finite fields of a fixed characteristic p (p prime) is an algebraically closed field, which is, in fact, the algebraic closure of the field
$\mathbb {F} _{p}$ with $p$ elements.
The field $\mathbb {C} (x)$ of rational functions with complex coefficients is not algebraically closed; for example, the polynomial $y^{2}-x$ has roots $\pm {\sqrt {x}}$, which are not elements of $\mathbb {C} (x)$.
== Equivalent properties ==
Given a field F, the assertion "F is algebraically closed" is equivalent to other assertions:
=== The only irreducible polynomials are those of degree one ===
The field F is algebraically closed if and only if the only irreducible polynomials in the polynomial ring F[x] are those of degree one.
The assertion "the polynomials of degree one are irreducible" is trivially true for any field. If F is algebraically closed and p(x) is an irreducible polynomial of F[x], then it has some root a and therefore p(x) is a multiple of x − a. Since p(x) is irreducible, this means that p(x) = k(x − a), for some k ∈ F \ {0} . On the other hand, if F is not algebraically closed, then there is some non-constant polynomial p(x) in F[x] without roots in F. Let q(x) be some irreducible factor of p(x). Since p(x) has no roots in F, q(x) also has no roots in F. Therefore, q(x) has degree greater than one, since every first degree polynomial has one root in F.
=== Every polynomial is a product of first degree polynomials ===
The field F is algebraically closed if and only if every polynomial p(x) of degree n ≥ 1, with coefficients in F, splits into linear factors. In other words, there are elements k, x1, x2, ..., xn of the field F such that p(x) = k(x − x1)(x − x2) ⋯ (x − xn).
If F has this property, then clearly every non-constant polynomial in F[x] has some root in F; in other words, F is algebraically closed. On the other hand, that the property stated here holds for F if F is algebraically closed follows from the previous property together with the fact that, for any field K, any polynomial in K[x] can be written as a product of irreducible polynomials.
=== Polynomials of prime degree have roots ===
If every polynomial over F of prime degree has a root in F, then every non-constant polynomial has a root in F. It follows that a field is algebraically closed if and only if every polynomial over F of prime degree has a root in F.
=== The field has no proper algebraic extension ===
The field F is algebraically closed if and only if it has no proper algebraic extension.
If F has no proper algebraic extension, let p(x) be some irreducible polynomial in F[x]. Then the quotient of F[x] modulo the ideal generated by p(x) is an algebraic extension of F whose degree is equal to the degree of p(x). Since it is not a proper extension, its degree is 1 and therefore the degree of p(x) is 1.
On the other hand, if F has some proper algebraic extension K, then the minimal polynomial of an element in K \ F is irreducible and its degree is greater than 1.
=== The field has no proper finite extension ===
The field F is algebraically closed if and only if it has no proper finite extension because if, within the previous proof, the term "algebraic extension" is replaced by the term "finite extension", then the proof is still valid. (Finite extensions are necessarily algebraic.)
=== Every endomorphism of $F^{n}$ has some eigenvector ===
The field F is algebraically closed if and only if, for each natural number n, every linear map from $F^{n}$ into itself has some eigenvector.
An endomorphism of $F^{n}$ has an eigenvector if and only if its characteristic polynomial has some root. Therefore, when F is algebraically closed, every endomorphism of $F^{n}$ has some eigenvector. On the other hand, if every endomorphism of $F^{n}$ has an eigenvector, let p(x) be an element of F[x]. Dividing by its leading coefficient, we get another polynomial q(x) which has roots if and only if p(x) has roots. But if $q(x)=x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0}$, then q(x) is the characteristic polynomial of the n×n companion matrix
$${\begin{pmatrix}0&0&\cdots &0&-a_{0}\\1&0&\cdots &0&-a_{1}\\0&1&\cdots &0&-a_{2}\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &1&-a_{n-1}\end{pmatrix}}.$$
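To make the equivalence concrete, here is a minimal numerical sketch (not part of the original text; the helper name `companion` and the sample coefficients are chosen for illustration) showing that the eigenvalues of this companion matrix are exactly the roots of q(x), so the matrix has an eigenvector precisely when q has a root:

```python
import numpy as np

def companion(a):
    """Companion matrix of q(x) = x^n + a_{n-1} x^{n-1} + ... + a_0,
    where a = [a_0, a_1, ..., a_{n-1}]."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(a)         # last column holds -a_0, ..., -a_{n-1}
    return C

a = [2.0, 0.0, 1.0]                   # sample: q(x) = x^3 + x^2 + 2
C = companion(a)
eigenvalues = np.linalg.eigvals(C)
roots = np.roots([1.0] + a[::-1])     # np.roots expects highest degree first
print(np.sort_complex(eigenvalues))   # agrees with the roots of q, up to rounding
print(np.sort_complex(roots))
```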
=== Decomposition of rational expressions ===
The field F is algebraically closed if and only if every rational function in one variable x, with coefficients in F, can be written as the sum of a polynomial function with rational functions of the form $a/(x-b)^{n}$, where n is a natural number, and a and b are elements of F.
If F is algebraically closed then, since the irreducible polynomials in F[x] are all of degree 1, the property stated above holds by the theorem on partial fraction decomposition.
On the other hand, suppose that the property stated above holds for the field F. Let p(x) be an irreducible element in F[x]. Then the rational function 1/p can be written as the sum of a polynomial function q with rational functions of the form $a/(x-b)^{n}$. Therefore, the rational expression
$${\frac {1}{p(x)}}-q(x)={\frac {1-p(x)q(x)}{p(x)}}$$
can be written as a quotient of two polynomials in which the denominator is a product of first degree polynomials. Since p(x) is irreducible, it must divide this product and, therefore, it must also be a first degree polynomial.
=== Relatively prime polynomials and roots ===
For any field F, if two polynomials p(x), q(x) ∈ F[x] are relatively prime then they do not have a common root, for if a ∈ F was a common root, then p(x) and q(x) would both be multiples of x − a and therefore they would not be relatively prime. The fields for which the reverse implication holds (that is, the fields such that whenever two polynomials have no common root then they are relatively prime) are precisely the algebraically closed fields.
If the field F is algebraically closed, let p(x) and q(x) be two polynomials which are not relatively prime and let r(x) be their greatest common divisor. Then, since r(x) is not constant, it will have some root a, which will be then a common root of p(x) and q(x).
If F is not algebraically closed, let p(x) be a polynomial whose degree is at least 1 without roots. Then p(x) and p(x) are not relatively prime, but they have no common roots (since none of them has roots).
== Other properties ==
If F is an algebraically closed field and n is a natural number, then F contains all nth roots of unity, because these are (by definition) the n (not necessarily distinct) zeroes of the polynomial $x^{n}-1$. A field extension that is contained in an extension generated by the roots of unity is a cyclotomic extension, and the extension of a field generated by all roots of unity is sometimes called its cyclotomic closure. Thus algebraically closed fields are cyclotomically closed. The converse is not true. Even assuming that every polynomial of the form $x^{n}-a$ splits into linear factors is not enough to assure that the field is algebraically closed.
If a proposition which can be expressed in the language of first-order logic is true for an algebraically closed field, then it is true for every algebraically closed field with the same characteristic. Furthermore, if such a proposition is valid for an algebraically closed field with characteristic 0, then not only is it valid for all other algebraically closed fields with characteristic 0, but there is some natural number N such that the proposition is valid for every algebraically closed field with characteristic p when p > N.
Every field F has some extension which is algebraically closed. Such an extension is called an algebraically closed extension. Among all such extensions there is one and only one (up to isomorphism, but not unique isomorphism) which is an algebraic extension of F; it is called the algebraic closure of F.
The theory of algebraically closed fields has quantifier elimination.
== Notes ==
== References ==
In the study of the representation theory of Lie groups, the study of representations of SU(2) is fundamental to the study of representations of semisimple Lie groups. It is the first case of a Lie group that is both a compact group and a non-abelian group. The first condition implies the representation theory is discrete: representations are direct sums of a collection of basic irreducible representations (governed by the Peter–Weyl theorem). The second means that there will be irreducible representations in dimensions greater than 1.
SU(2) is the universal covering group of SO(3), and so its representation theory includes that of the latter, by dint of a surjective homomorphism to it. This underlies the significance of SU(2) for the description of non-relativistic spin in theoretical physics; see below for other physical and historical context.
As shown below, the finite-dimensional irreducible representations of SU(2) are indexed by a non-negative integer $m$ and have dimension $m+1$. In the physics literature, the representations are labeled by the quantity $l=m/2$, where $l$ is then either an integer or a half-integer, and the dimension is $2l+1$.
== Lie algebra representations ==
The representations of the group are found by considering representations of ${\mathfrak {su}}(2)$, the Lie algebra of SU(2). Since the group SU(2) is simply connected, every representation of its Lie algebra can be integrated to a group representation; we will give an explicit construction of the representations at the group level below.
=== Real and complexified Lie algebras ===
The real Lie algebra ${\mathfrak {su}}(2)$ has a basis given by
$$u_{1}={\begin{bmatrix}0&i\\i&0\end{bmatrix}},\qquad u_{2}={\begin{bmatrix}0&-1\\1&0\end{bmatrix}},\qquad u_{3}={\begin{bmatrix}i&0\\0&-i\end{bmatrix}}.$$
(These basis matrices are related to the Pauli matrices by $u_{1}=+i\sigma _{1}$, $u_{2}=-i\sigma _{2}$, and $u_{3}=+i\sigma _{3}$.)
The matrices are a representation of the quaternions:
$$u_{1}\,u_{1}=-I,\qquad u_{2}\,u_{2}=-I,\qquad u_{3}\,u_{3}=-I,$$
$$u_{1}\,u_{2}=+u_{3},\qquad u_{2}\,u_{3}=+u_{1},\qquad u_{3}\,u_{1}=+u_{2},$$
$$u_{2}\,u_{1}=-u_{3},\qquad u_{3}\,u_{2}=-u_{1},\qquad u_{1}\,u_{3}=-u_{2},$$
where I is the conventional 2×2 identity matrix:
$$I={\begin{bmatrix}1&0\\0&1\end{bmatrix}}.$$
Consequently, the commutator brackets of the matrices satisfy
$$[u_{1},u_{2}]=2u_{3},\qquad [u_{2},u_{3}]=2u_{1},\qquad [u_{3},u_{1}]=2u_{2}.$$
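The relations above are easy to confirm numerically. The following sketch (illustrative, not from the article) builds the three matrices and checks both the quaternion products and the commutator brackets:

```python
import numpy as np

u1 = np.array([[0, 1j], [1j, 0]])
u2 = np.array([[0, -1], [1, 0]], dtype=complex)
u3 = np.array([[1j, 0], [0, -1j]])
I = np.eye(2)
comm = lambda A, B: A @ B - B @ A   # commutator bracket

# Quaternion relations: squares and cyclic products.
assert all(np.allclose(u @ u, -I) for u in (u1, u2, u3))
assert np.allclose(u1 @ u2, u3) and np.allclose(u2 @ u3, u1) and np.allclose(u3 @ u1, u2)

# Commutator brackets of the basis.
assert np.allclose(comm(u1, u2), 2 * u3)
assert np.allclose(comm(u2, u3), 2 * u1)
assert np.allclose(comm(u3, u1), 2 * u2)
print("quaternion and commutator relations hold")
```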
It is then convenient to pass to the complexified Lie algebra
$${\mathfrak {su}}(2)+i\,{\mathfrak {su}}(2)={\mathfrak {sl}}(2;\mathbb {C} ).$$
(Skew self-adjoint matrices with trace zero plus self-adjoint matrices with trace zero gives all matrices with trace zero.) As long as we are working with representations over $\mathbb {C} $, this passage from real to complexified Lie algebra is harmless. The reason for passing to the complexification is that it allows us to construct a nice basis of a type that does not exist in the real Lie algebra ${\mathfrak {su}}(2)$.
The complexified Lie algebra is spanned by three elements $X$, $Y$, and $H$, given by
$$H={\frac {1}{i}}u_{3},\qquad X={\frac {1}{2i}}\left(u_{1}-iu_{2}\right),\qquad Y={\frac {1}{2i}}(u_{1}+iu_{2});$$
or, explicitly,
$$H={\begin{bmatrix}1&0\\0&-1\end{bmatrix}},\qquad X={\begin{bmatrix}0&1\\0&0\end{bmatrix}},\qquad Y={\begin{bmatrix}0&0\\1&0\end{bmatrix}}.$$
The non-trivial part of the multiplication table of these matrices is
$$HX=X,\qquad HY=-Y,\qquad XY={\tfrac {1}{2}}\left(I+H\right),$$
$$XH=-X,\qquad YH=Y,\qquad YX={\tfrac {1}{2}}\left(I-H\right),$$
$$HH=I,\qquad XX=O,\qquad YY=O,$$
where O is the 2×2 all-zero matrix.
Hence their commutation relations are
$$[H,X]=2X,\qquad [H,Y]=-2Y,\qquad [X,Y]=H.$$
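A short check (again illustrative) of the multiplication table and the resulting commutation relations:

```python
import numpy as np

H = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
I, O = np.eye(2), np.zeros((2, 2))

assert np.allclose(H @ X, X) and np.allclose(X @ H, -X)
assert np.allclose(H @ Y, -Y) and np.allclose(Y @ H, Y)
assert np.allclose(X @ Y, (I + H) / 2) and np.allclose(Y @ X, (I - H) / 2)
assert np.allclose(H @ H, I) and np.allclose(X @ X, O) and np.allclose(Y @ Y, O)

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(H, X), 2 * X)
assert np.allclose(comm(H, Y), -2 * Y)
assert np.allclose(comm(X, Y), H)
print("multiplication table and commutation relations hold")
```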
Up to a factor of 2, the elements $H$, $X$ and $Y$ may be identified with the angular momentum operators $J_{z}$, $J_{+}$, and $J_{-}$, respectively. The factor of 2 is a discrepancy between conventions in math and physics; we will attempt to mention both conventions in the results that follow.
=== Weights and the structure of the representation ===
In this setting, the eigenvalues for $H$ are referred to as the weights of the representation. The following elementary result is a key step in the analysis. Suppose that $v$ is an eigenvector for $H$ with eigenvalue $\alpha $; that is, that $Hv=\alpha v.$
Then
$$H(Xv)=(XH+[H,X])v=(\alpha +2)Xv,$$
$$H(Yv)=(YH+[H,Y])v=(\alpha -2)Yv.$$
In other words, $Xv$ is either the zero vector or an eigenvector for $H$ with eigenvalue $\alpha +2$, and $Yv$ is either zero or an eigenvector for $H$ with eigenvalue $\alpha -2$. Thus, the operator $X$ acts as a raising operator, increasing the weight by 2, while $Y$ acts as a lowering operator.
Suppose now that $V$ is an irreducible, finite-dimensional representation of the complexified Lie algebra. Then $H$ can have only finitely many eigenvalues. In particular, there must be some final eigenvalue $\lambda \in \mathbb {C} $ with the property that $\lambda +2$ is not an eigenvalue. Let $v_{0}$ be an eigenvector for $H$ with that eigenvalue $\lambda $:
$$Hv_{0}=\lambda v_{0},$$
then we must have $Xv_{0}=0$, or else the above identity would tell us that $Xv_{0}$ is an eigenvector with eigenvalue $\lambda +2$.
Now define a "chain" of vectors $v_{0},v_{1},\ldots $ by $v_{k}=Y^{k}v_{0}$.
A simple argument by induction then shows that
$$Xv_{k}=k(\lambda -(k-1))v_{k-1}$$
for all $k=1,2,\ldots .$
Now, if $v_{k}$ is not the zero vector, it is an eigenvector for $H$ with eigenvalue $\lambda -2k$. Since $H$ has only finitely many eigenvalues, we conclude that $v_{\ell }$ must be zero for some $\ell $ (and then $v_{k}=0$ for all $k>\ell $).
Let $v_{m}$ be the last nonzero vector in the chain; that is, $v_{m}\neq 0$ but $v_{m+1}=0$. Then of course $Xv_{m+1}=0$ and by the above identity with $k=m+1$, we have
$$0=Xv_{m+1}=(m+1)(\lambda -m)v_{m}.$$
Since $m+1$ is at least one and $v_{m}\neq 0$, we conclude that $\lambda $ must be equal to the non-negative integer $m$.
We thus obtain a chain of $m+1$ vectors, $v_{0},v_{1},\ldots ,v_{m}$, such that $Y$ acts as
$$Yv_{m}=0,\qquad Yv_{k}=v_{k+1}\quad (k<m)$$
and $X$ acts as
$$Xv_{0}=0,\qquad Xv_{k}=k(m-(k-1))v_{k-1}\quad (k\geq 1)$$
and $H$ acts as
$$Hv_{k}=(m-2k)v_{k}.$$
(We have replaced $\lambda $ with its currently known value of $m$ in the formulas above.)
Since the vectors $v_{k}$ are eigenvectors for $H$ with distinct eigenvalues, they must be linearly independent. Furthermore, the span of $v_{0},\ldots ,v_{m}$ is clearly invariant under the action of the complexified Lie algebra. Since $V$ is assumed irreducible, this span must be all of $V$.
We thus obtain a complete description of what an irreducible representation must look like; that is, a basis for the space and a complete description of how the generators of the Lie algebra act. Conversely, for any $m\geq 0$ we can construct a representation by simply using the above formulas and checking that the commutation relations hold. This representation can then be shown to be irreducible.
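As a sanity check, one can carry out this construction numerically. The sketch below (the helper `irrep` and the basis ordering v_0, ..., v_m are our own choices, following the formulas above) builds the matrices of H, X, Y for a given highest weight m and verifies the commutation relations:

```python
import numpy as np

def irrep(m):
    """Matrices of H, X, Y on the representation of highest weight m,
    in the basis v_0, ..., v_m from the text."""
    n = m + 1
    H = np.diag([m - 2 * k for k in range(n)]).astype(float)
    X = np.zeros((n, n))
    Y = np.zeros((n, n))
    for k in range(1, n):
        X[k - 1, k] = k * (m - (k - 1))   # X v_k = k(m - (k-1)) v_{k-1}
        Y[k, k - 1] = 1.0                 # Y v_{k-1} = v_k
    return H, X, Y

comm = lambda A, B: A @ B - B @ A
for m in range(6):
    H, X, Y = irrep(m)
    assert np.allclose(comm(H, X), 2 * X)
    assert np.allclose(comm(H, Y), -2 * Y)
    assert np.allclose(comm(X, Y), H)
print("commutation relations hold for m = 0, ..., 5")
```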
Conclusion: For each non-negative integer $m$, there is a unique irreducible representation with highest weight $m$. Each irreducible representation is equivalent to one of these. The representation with highest weight $m$ has dimension $m+1$ with weights
$$m,\ m-2,\ \ldots ,\ -(m-2),\ -m,$$
each having multiplicity one.
=== The Casimir element ===
We now introduce the (quadratic) Casimir element, $C$, given by
$$C=-\left(u_{1}^{2}+u_{2}^{2}+u_{3}^{2}\right).$$
We can view $C$ as an element of the universal enveloping algebra or as an operator in each irreducible representation. Viewing $C$ as an operator on the representation with highest weight $m$, we may easily compute that $C$ commutes with each $u_{i}$. Thus, by Schur's lemma, $C$ acts as a scalar multiple $c_{m}$ of the identity for each $m$.
We can write $C$ in terms of the $\{H,X,Y\}$ basis as follows:
$$C=(X+Y)^{2}-(-X+Y)^{2}+H^{2},$$
which can be reduced to
$$C=4YX+H^{2}+2H.$$
The eigenvalue of $C$ in the representation with highest weight $m$ can be computed by applying $C$ to the highest weight vector, which is annihilated by $X$; thus, we get
$$c_{m}=m^{2}+2m=m(m+2).$$
In the physics literature, the Casimir is normalized as $C'={\frac {1}{4}}C$. Labeling things in terms of $\ell ={\frac {1}{2}}m$, the eigenvalue $d_{\ell }$ of $C'$ is then computed as
$$d_{\ell }={\frac {1}{4}}(2\ell )(2\ell +2)=\ell (\ell +1).$$
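Using the representation matrices constructed earlier, the eigenvalue m(m+2) can be confirmed directly; the following sketch (illustrative, reusing the same `irrep` construction) checks that 4YX + H² + 2H is the scalar matrix m(m+2)·I:

```python
import numpy as np

def irrep(m):
    n = m + 1
    H = np.diag([m - 2 * k for k in range(n)]).astype(float)
    X = np.zeros((n, n)); Y = np.zeros((n, n))
    for k in range(1, n):
        X[k - 1, k] = k * (m - (k - 1))
        Y[k, k - 1] = 1.0
    return H, X, Y

for m in range(6):
    H, X, Y = irrep(m)
    C = 4 * Y @ X + H @ H + 2 * H        # Casimir in the {H, X, Y} basis
    assert np.allclose(C, m * (m + 2) * np.eye(m + 1))
print("Casimir acts as m(m+2) times the identity for m = 0, ..., 5")
```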
== The group representations ==
=== Action on polynomials ===
Since SU(2) is simply connected, a general result shows that every representation of its (complexified) Lie algebra gives rise to a representation of SU(2) itself. It is desirable, however, to give an explicit realization of the representations at the group level. The group representations can be realized on spaces of polynomials in two complex variables. That is, for each non-negative integer $m$, we let $V_{m}$ denote the space of homogeneous polynomials $p$ of degree $m$ in two complex variables. Then the dimension of $V_{m}$ is $m+1$. There is a natural action of SU(2) on each $V_{m}$, given by
$$[U\cdot p](z)=p\left(U^{-1}z\right),\qquad z\in \mathbb {C} ^{2},\ U\in \mathrm {SU} (2).$$
The associated Lie algebra representation is simply the one described in the previous section. (See here for an explicit formula for the action of the Lie algebra on the space of polynomials.)
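The group action can also be computed symbolically. Here is a small sketch (our own illustration: the one-parameter family of matrices and the degree m = 2 are arbitrary choices) that expands U · p in the monomial basis of V_m:

```python
import sympy as sp

z1, z2, t = sp.symbols('z1 z2 t', real=True)
U = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])        # a curve of matrices in SU(2)

m = 2
basis = [z1**k * z2**(m - k) for k in range(m + 1)]
w = sp.simplify(U.inv()) * sp.Matrix([z1, z2])  # U^{-1} z

rep = sp.zeros(m + 1, m + 1)                    # matrix of the action on V_m
for j, p in enumerate(basis):
    q = sp.expand(sp.simplify(p.subs({z1: w[0], z2: w[1]}, simultaneous=True)))
    poly = sp.Poly(q, z1, z2)
    for i, b in enumerate(basis):
        rep[i, j] = sp.simplify(poly.coeff_monomial(b))

sp.pprint(rep)   # a 3x3 matrix depending on t: the representation on V_2
```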
=== The characters ===
The character of a representation
Π
:
G
→
GL
(
V
)
{\displaystyle \Pi :G\rightarrow \operatorname {GL} (V)}
is the function
X
:
G
→
C
{\displaystyle \mathrm {X} :G\rightarrow \mathbb {C} }
given by
X
(
g
)
=
trace
(
Π
(
g
)
)
{\displaystyle \mathrm {X} (g)=\operatorname {trace} (\Pi (g))}
.
Characters play an important role in the representation theory of compact groups. The character is easily seen to be a class function, that is, invariant under conjugation.
In the SU(2) case, the fact that the character is a class function means it is determined by its value on the maximal torus $T$ consisting of the diagonal matrices in SU(2), since the elements of SU(2) are unitarily diagonalizable by the spectral theorem. Since the irreducible representation with highest weight $m$ has weights $m,\ m-2,\ \ldots ,\ -(m-2),\ -m$, it is easy to see that the associated character satisfies
$$\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)=e^{im\theta }+e^{i(m-2)\theta }+\cdots +e^{-i(m-2)\theta }+e^{-im\theta }.$$
This expression is a finite geometric series that can be simplified to
$$\mathrm {X} \left({\begin{pmatrix}e^{i\theta }&0\\0&e^{-i\theta }\end{pmatrix}}\right)={\frac {\sin((m+1)\theta )}{\sin(\theta )}}.$$
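The simplification can be spot-checked numerically; this brief sketch (sample angle chosen arbitrarily) compares the weight sum with the closed form:

```python
import numpy as np

theta = 0.7   # arbitrary sample angle, not a multiple of pi
for m in range(6):
    lhs = sum(np.exp(1j * (m - 2 * k) * theta) for k in range(m + 1))
    rhs = np.sin((m + 1) * theta) / np.sin(theta)
    assert np.isclose(lhs, rhs)
print("sum of e^{i(m-2k)theta} equals sin((m+1)theta)/sin(theta)")
```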
This last expression is just the statement of the Weyl character formula for the SU(2) case.
Actually, following Weyl's original analysis of the representation theory of compact groups, one can classify the representations entirely from the group perspective, without using Lie algebra representations at all. In this approach, the Weyl character formula plays an essential part in the classification, along with the Peter–Weyl theorem. The SU(2) case of this story is described here.
=== Relation to the representations of SO(3) ===
Note that either all of the weights of the representation are even (if $m$ is even) or all of the weights are odd (if $m$ is odd). In physical terms, this distinction is important: the representations with even weights correspond to ordinary representations of the rotation group SO(3). By contrast, the representations with odd weights correspond to double-valued (spinorial) representations of SO(3), also known as projective representations.
In the physics conventions, $m$ being even corresponds to $l$ being an integer while $m$ being odd corresponds to $l$ being a half-integer. These two cases are described as integer spin and half-integer spin, respectively. The representations with odd, positive values of $m$ are faithful representations of SU(2), while the representations of SU(2) with non-negative, even $m$ are not faithful.
== Another approach ==
See under the example for Borel–Weil–Bott theorem.
== Most important irreducible representations and their applications ==
Representations of SU(2) describe non-relativistic spin, due to SU(2) being a double covering of the rotation group of Euclidean 3-space. Relativistic spin is described by the representation theory of SL2(C), a supergroup of SU(2), which in a similar way covers SO+(1;3), the relativistic version of the rotation group. SU(2) symmetry also supports concepts of isobaric spin and weak isospin, collectively known as isospin.
The representation with $m=1$ (i.e., $l=1/2$ in the physics convention) is the 2 representation, the fundamental representation of SU(2). When an element of SU(2) is written as a complex 2 × 2 matrix, it is simply a multiplication of column 2-vectors. It is known in physics as the spin-1/2 representation and, historically, as the multiplication of quaternions (more precisely, multiplication by a unit quaternion). This representation can also be viewed as a double-valued projective representation of the rotation group SO(3).
The representation with $m=2$ (i.e., $l=1$) is the 3 representation, the adjoint representation. It describes 3-d rotations, the standard representation of SO(3), so real numbers are sufficient for it. Physicists use it for the description of massive spin-1 particles, such as vector mesons, but its importance for spin theory is much higher because it anchors spin states to the geometry of the physical 3-space. This representation emerged simultaneously with the 2 when William Rowan Hamilton introduced versors, his term for elements of SU(2). Note that Hamilton did not use standard group theory terminology since his work preceded Lie group developments.
The $m=3$ (i.e., $l=3/2$) representation is used in particle physics for certain baryons, such as the Δ.
== See also ==
Rotation operator (vector space)
Rotation operator (quantum mechanics)
Representation theory of SO(3)
Connection between SO(3) and SU(2)
Representation theory of SL2(R)
Electroweak interaction
Rotation group SO(3) § A note on Lie algebras
== References ==
In mathematics, a Lie algebra (pronounced LEE) is a vector space ${\mathfrak {g}}$ together with an operation called the Lie bracket, an alternating bilinear map ${\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}$, that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors $x$ and $y$ is denoted $[x,y]$. A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, $[x,y]=xy-yx$.
Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra.
In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space ${\mathfrak {g}}$ to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give ${\mathfrak {g}}$ the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces.
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics.
An elementary example (not directly coming from an associative algebra) is the 3-dimensional space ${\mathfrak {g}}=\mathbb {R} ^{3}$ with Lie bracket defined by the cross product $[x,y]=x\times y.$ This is skew-symmetric since $x\times y=-y\times x$, and instead of associativity it satisfies the Jacobi identity:
$$x\times (y\times z)+y\times (z\times x)+z\times (x\times y)=0.$$
This is the Lie algebra of the Lie group of rotations of space, and each vector $v\in \mathbb {R} ^{3}$ may be pictured as an infinitesimal rotation around the axis $v$, with angular speed equal to the magnitude of $v$. The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property $[x,x]=x\times x=0$.
== History ==
Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used.
== Definition of a Lie algebra ==
A Lie algebra is a vector space ${\mathfrak {g}}$ over a field $F$ together with a binary operation $[\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}$ called the Lie bracket, satisfying the following axioms:
Bilinearity,
$$[ax+by,z]=a[x,z]+b[y,z],\qquad [z,ax+by]=a[z,x]+b[z,y]$$
for all scalars $a,b$ in $F$ and all elements $x,y,z$ in ${\mathfrak {g}}$.
The alternating property,
$$[x,x]=0$$
for all $x$ in ${\mathfrak {g}}$.
The Jacobi identity,
$$[x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0$$
for all $x,y,z$ in ${\mathfrak {g}}$.
Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation.
Using bilinearity to expand the Lie bracket $[x+y,x+y]$ and using the alternating property shows that $[x,y]+[y,x]=0$ for all $x,y$ in ${\mathfrak {g}}$. Thus bilinearity and the alternating property together imply
Anticommutativity,
$$[x,y]=-[y,x]$$
for all $x,y$ in ${\mathfrak {g}}$. If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies $[x,x]=-[x,x].$
It is customary to denote a Lie algebra by a lower-case fraktur letter such as ${\mathfrak {g,h,b,n}}$. If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is ${\mathfrak {su}}(n)$.
=== Generators and dimension ===
The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra ${\mathfrak {g}}$ means a subset of ${\mathfrak {g}}$ such that any Lie subalgebra (as defined below) that contains S must be all of ${\mathfrak {g}}$. Equivalently, ${\mathfrak {g}}$ is spanned (as a vector space) by all iterated brackets of elements of S.
== Basic examples ==
=== Abelian Lie algebras ===
A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space $V$ endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket.
=== The Lie algebra of matrices ===
On an associative algebra $A$ over a field $F$ with multiplication written as $xy$, a Lie bracket may be defined by the commutator $[x,y]=xy-yx$. With this bracket, $A$ is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on $A$.)
The endomorphism ring of an $F$-vector space $V$ with the above Lie bracket is denoted ${\mathfrak {gl}}(V)$.
For a field F and a positive integer n, the space of n × n matrices over F, denoted ${\mathfrak {gl}}(n,F)$ or ${\mathfrak {gl}}_{n}(F)$, is a Lie algebra with bracket given by the commutator of matrices: $[X,Y]=XY-YX$. This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra.
When F is the real numbers, ${\mathfrak {gl}}(n,\mathbb {R} )$ is the Lie algebra of the general linear group $\mathrm {GL} (n,\mathbb {R} )$, the group of invertible n × n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, ${\mathfrak {gl}}(n,\mathbb {C} )$ is the Lie algebra of the complex Lie group $\mathrm {GL} (n,\mathbb {C} )$. The Lie bracket on ${\mathfrak {gl}}(n,\mathbb {R} )$ describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, ${\mathfrak {gl}}(n,F)$ can be viewed as the Lie algebra of the algebraic group $\mathrm {GL} (n)$ over F.
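Since the Jacobi identity for the commutator follows from associativity, it can be spot-checked on arbitrary matrices; a minimal sketch (random 4 × 4 matrices as a sample from gl(4, R)):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = rng.standard_normal((3, 4, 4))
br = lambda A, B: A @ B - B @ A            # the commutator bracket

jacobi = br(X, br(Y, Z)) + br(Z, br(X, Y)) + br(Y, br(Z, X))
assert np.allclose(jacobi, 0)
assert np.allclose(br(X, X), 0)
print("the commutator bracket satisfies the Jacobi identity on this sample")
```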
== Definitions ==
=== Subalgebras, ideals and homomorphisms ===
The Lie bracket is not required to be associative, meaning that $[[x,y],z]$ need not equal $[x,[y,z]]$. Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace ${\mathfrak {h}}\subseteq {\mathfrak {g}}$ which is closed under the Lie bracket. An ideal ${\mathfrak {i}}\subseteq {\mathfrak {g}}$ is a linear subspace that satisfies the stronger condition:
$$[{\mathfrak {g}},{\mathfrak {i}}]\subseteq {\mathfrak {i}}.$$
In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals.
A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets:
$$\phi \colon {\mathfrak {g}}\to {\mathfrak {h}},\qquad \phi ([x,y])=[\phi (x),\phi (y)]\ {\text{for all}}\ x,y\in {\mathfrak {g}}.$$
An isomorphism of Lie algebras is a bijective homomorphism.
As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. Given a Lie algebra ${\mathfrak {g}}$ and an ideal ${\mathfrak {i}}$ in it, the quotient Lie algebra ${\mathfrak {g}}/{\mathfrak {i}}$ is defined, with a surjective homomorphism ${\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}$ of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism $\phi \colon {\mathfrak {g}}\to {\mathfrak {h}}$ of Lie algebras, the image of $\phi $ is a Lie subalgebra of ${\mathfrak {h}}$ that is isomorphic to ${\mathfrak {g}}/\ker(\phi )$.
For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements $x,y\in {\mathfrak {g}}$ are said to commute if their bracket vanishes: $[x,y]=0$.
The centralizer subalgebra of a subset $S\subset {\mathfrak {g}}$ is the set of elements commuting with $S$: that is, ${\mathfrak {z}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]=0\ {\text{for all }}s\in S\}$. The centralizer of ${\mathfrak {g}}$ itself is the center ${\mathfrak {z}}({\mathfrak {g}})$. Similarly, for a subspace S, the normalizer subalgebra of $S$ is ${\mathfrak {n}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]\in S\ {\text{for all }}s\in S\}$. If $S$ is a Lie subalgebra, ${\mathfrak {n}}_{\mathfrak {g}}(S)$ is the largest subalgebra such that $S$ is an ideal of ${\mathfrak {n}}_{\mathfrak {g}}(S)$.
==== Example ====
The subspace ${\mathfrak {t}}_{n}$ of diagonal matrices in ${\mathfrak {gl}}(n,F)$ is an abelian Lie subalgebra. (It is a Cartan subalgebra of ${\mathfrak {gl}}(n)$, analogous to a maximal torus in the theory of compact Lie groups.) Here ${\mathfrak {t}}_{n}$ is not an ideal in ${\mathfrak {gl}}(n)$ for $n\geq 2$. For example, when $n=2$, this follows from the calculation:
$$\left[{\begin{bmatrix}a&b\\c&d\end{bmatrix}},{\begin{bmatrix}x&0\\0&y\end{bmatrix}}\right]={\begin{bmatrix}ax&by\\cx&dy\end{bmatrix}}-{\begin{bmatrix}ax&bx\\cy&dy\end{bmatrix}}={\begin{bmatrix}0&b(y-x)\\c(x-y)&0\end{bmatrix}}$$
(which is not always in ${\mathfrak {t}}_{2}$).
Every one-dimensional linear subspace of a Lie algebra ${\mathfrak {g}}$ is an abelian Lie subalgebra, but it need not be an ideal.
=== Product and semidirect product ===
For two Lie algebras ${\mathfrak {g}}$ and ${\mathfrak {g'}}$, the product Lie algebra is the vector space ${\mathfrak {g}}\times {\mathfrak {g'}}$ consisting of all ordered pairs $(x,x')$, $x\in {\mathfrak {g}}$, $x'\in {\mathfrak {g'}}$, with Lie bracket
$$[(x,x'),(y,y')]=([x,y],[x',y']).$$
This is the product in the category of Lie algebras. Note that the copies of ${\mathfrak {g}}$ and ${\mathfrak {g}}'$ in ${\mathfrak {g}}\times {\mathfrak {g'}}$ commute with each other: $[(x,0),(0,x')]=0.$
Let ${\mathfrak {g}}$ be a Lie algebra and ${\mathfrak {i}}$ an ideal of ${\mathfrak {g}}$. If the canonical map ${\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}$ splits (i.e., admits a section ${\mathfrak {g}}/{\mathfrak {i}}\to {\mathfrak {g}}$, as a homomorphism of Lie algebras), then ${\mathfrak {g}}$ is said to be a semidirect product of ${\mathfrak {i}}$ and ${\mathfrak {g}}/{\mathfrak {i}}$, written ${\mathfrak {g}}={\mathfrak {g}}/{\mathfrak {i}}\ltimes {\mathfrak {i}}$. See also semidirect sum of Lie algebras.
=== Derivations ===
For an algebra A over a field F, a derivation of A over F is a linear map $D\colon A\to A$ that satisfies the Leibniz rule
$$D(xy)=D(x)y+xD(y)$$
for all $x,y\in A$. (The definition makes sense for a possibly non-associative algebra.) Given two derivations $D_{1}$ and $D_{2}$, their commutator $[D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}$ is again a derivation. This operation makes the space ${\text{Der}}_{F}(A)$ of all derivations of A over F into a Lie algebra.
Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that
$$(1+\epsilon D)(xy)\equiv (1+\epsilon D)(x)\cdot (1+\epsilon D)(y){\pmod {\epsilon ^{2}}}$$
(where 1 denotes the identity map on A) gives exactly the definition of D being a derivation.
Example: the Lie algebra of vector fields. Let A be the ring $C^{\infty }(X)$ of smooth functions on a smooth manifold X. Then a derivation of A over $\mathbb {R} $ is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space ${\text{Vect}}(X)$ of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, ${\text{Vect}}(X)$ is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras ${\mathfrak {g}}\to {\text{Vect}}(X)$. (An example is illustrated below.)
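On the real line, this can be made concrete symbolically. A sketch (the functions f, g, h are arbitrary choices) treating vector fields f(x) d/dx as derivations, with bracket [f d/dx, g d/dx] = (f g' − g f') d/dx:

```python
import sympy as sp

x = sp.symbols('x')

def apply_field(f, h):
    """Apply the derivation f d/dx to a function h."""
    return sp.expand(f * sp.diff(h, x))

f, g, h = x**2, sp.sin(x), sp.exp(x)   # two sample fields and a test function

# Commutator of the two derivations, applied to h ...
lhs = apply_field(f, apply_field(g, h)) - apply_field(g, apply_field(f, h))
# ... equals the single vector field (f g' - g f') d/dx applied to h.
rhs = apply_field(f * sp.diff(g, x) - g * sp.diff(f, x), h)
assert sp.simplify(lhs - rhs) == 0
print("[f d/dx, g d/dx] = (f g' - g f') d/dx on this sample")
```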
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra ${\mathfrak {g}}$ over a field F determines its Lie algebra of derivations, ${\text{Der}}_{F}({\mathfrak {g}})$. That is, a derivation of ${\mathfrak {g}}$ is a linear map $D\colon {\mathfrak {g}}\to {\mathfrak {g}}$ such that
$$D([x,y])=[D(x),y]+[x,D(y)].$$
The inner derivation associated to any $x\in {\mathfrak {g}}$ is the adjoint mapping $\mathrm {ad} _{x}$ defined by $\mathrm {ad} _{x}(y):=[x,y]$. (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, $\operatorname {ad} \colon {\mathfrak {g}}\to {\text{Der}}_{F}({\mathfrak {g}})$. The image ${\text{Inn}}_{F}({\mathfrak {g}})$ is an ideal in ${\text{Der}}_{F}({\mathfrak {g}})$, and the Lie algebra of outer derivations is defined as the quotient Lie algebra, ${\text{Out}}_{F}({\mathfrak {g}})={\text{Der}}_{F}({\mathfrak {g}})/{\text{Inn}}_{F}({\mathfrak {g}})$. (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite.
In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space $V$ with Lie bracket zero, the Lie algebra ${\text{Out}}_{F}(V)$ can be identified with ${\mathfrak {gl}}(V)$.
== Examples ==
=== Matrix Lie algebras ===
A matrix group is a Lie group consisting of invertible matrices, $G\subset \mathrm {GL} (n,\mathbb {R} )$, where the group operation of G is matrix multiplication. The corresponding Lie algebra ${\mathfrak {g}}$ is the space of matrices which are tangent vectors to G inside the linear space $M_{n}(\mathbb {R} )$: this consists of derivatives of smooth curves in G at the identity matrix $I$:
$${\mathfrak {g}}=\{X=c'(0)\in M_{n}(\mathbb {R} ):c:\mathbb {R} \to G{\text{ smooth}},\ c(0)=I\}.$$
The Lie bracket of ${\mathfrak {g}}$ is given by the commutator of matrices, $[X,Y]=XY-YX$. Given a Lie algebra ${\mathfrak {g}}\subset {\mathfrak {gl}}(n,\mathbb {R} )$, one can recover the Lie group as the subgroup generated by the matrix exponential of elements of ${\mathfrak {g}}$. (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping $\exp :M_{n}(\mathbb {R} )\to M_{n}(\mathbb {R} )$ is defined by
$$\exp(X)=I+X+{\tfrac {1}{2!}}X^{2}+{\tfrac {1}{3!}}X^{3}+\cdots ,$$
which converges for every matrix $X$.
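For example, exponentiating a skew-symmetric matrix lands in the rotation group; a short sketch (sample entries chosen by hand, using SciPy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -0.3,  0.1],
              [0.3,  0.0, -0.5],
              [-0.1, 0.5,  0.0]])       # X^T = -X, an element of so(3)
R = expm(X)

assert np.allclose(R.T @ R, np.eye(3))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # with determinant 1, so R is in SO(3)
print("exp maps this element of so(3) into SO(3)")
```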
The same comments apply to complex Lie subgroups of $\mathrm {GL} (n,\mathbb {C} )$ and the complex matrix exponential, $\exp :M_{n}(\mathbb {C} )\to M_{n}(\mathbb {C} )$ (defined by the same formula).
Here are some matrix Lie groups and their Lie algebras.
For a positive integer n, the special linear group $\mathrm {SL} (n,\mathbb {R} )$ consists of all real n × n matrices with determinant 1. This is the group of linear maps from $\mathbb {R} ^{n}$ to itself that preserve volume and orientation. More abstractly, $\mathrm {SL} (n,\mathbb {R} )$ is the commutator subgroup of the general linear group $\mathrm {GL} (n,\mathbb {R} )$. Its Lie algebra ${\mathfrak {sl}}(n,\mathbb {R} )$ consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group $\mathrm {SL} (n,\mathbb {C} )$ and its Lie algebra ${\mathfrak {sl}}(n,\mathbb {C} )$.
The orthogonal group $\mathrm {O} (n)$ plays a basic role in geometry: it is the group of linear maps from $\mathbb {R} ^{n}$ to itself that preserve the length of vectors. For example, rotations and reflections belong to $\mathrm {O} (n)$. Equivalently, this is the group of n × n orthogonal matrices, meaning that $A^{\mathrm {T} }=A^{-1}$, where $A^{\mathrm {T} }$ denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group $\mathrm {SO} (n)$, consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra ${\mathfrak {so}}(n)$, the subspace of skew-symmetric matrices in ${\mathfrak {gl}}(n,\mathbb {R} )$ ($X^{\mathrm {T} }=-X$). See also infinitesimal rotations with skew-symmetric matrices.
The complex orthogonal group $\mathrm {O} (n,\mathbb {C} )$, its identity component $\mathrm {SO} (n,\mathbb {C} )$, and the Lie algebra ${\mathfrak {so}}(n,\mathbb {C} )$ are given by the same formulas applied to n × n complex matrices. Equivalently, $\mathrm {O} (n,\mathbb {C} )$ is the subgroup of $\mathrm {GL} (n,\mathbb {C} )$ that preserves the standard symmetric bilinear form on $\mathbb {C} ^{n}$.
The unitary group $\mathrm {U} (n)$ is the subgroup of $\mathrm {GL} (n,\mathbb {C} )$ that preserves the length of vectors in $\mathbb {C} ^{n}$ (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying $A^{*}=A^{-1}$, where $A^{*}$ denotes the conjugate transpose of a matrix). Its Lie algebra ${\mathfrak {u}}(n)$ consists of the skew-hermitian matrices in ${\mathfrak {gl}}(n,\mathbb {C} )$ ($X^{*}=-X$). This is a Lie algebra over $\mathbb {R} $, not over $\mathbb {C} $. (Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group $\mathrm {U} (n)$ is a real Lie subgroup of the complex Lie group $\mathrm {GL} (n,\mathbb {C} )$. For example, $\mathrm {U} (1)$ is the circle group, and its Lie algebra (from this point of view) is $i\mathbb {R} \subset \mathbb {C} ={\mathfrak {gl}}(1,\mathbb {C} )$.
The special unitary group $\mathrm {SU} (n)$ is the subgroup of matrices with determinant 1 in $\mathrm {U} (n)$. Its Lie algebra ${\mathfrak {su}}(n)$ consists of the skew-hermitian matrices with trace zero.
The symplectic group $\mathrm {Sp} (2n,\mathbb {R} )$ is the subgroup of $\mathrm {GL} (2n,\mathbb {R} )$ that preserves the standard alternating bilinear form on $\mathbb {R} ^{2n}$. Its Lie algebra is the symplectic Lie algebra ${\mathfrak {sp}}(2n,\mathbb {R} )$.
The classical Lie algebras are those listed above, along with variants over any field.
=== Two dimensions ===
Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples.
There is a unique nonabelian Lie algebra ${\mathfrak {g}}$ of dimension 2 over any field F, up to isomorphism. Here ${\mathfrak {g}}$ has a basis $X,Y$ for which the bracket is given by $[X,Y]=Y$. (This determines the Lie bracket completely, because the axioms imply that $[X,X]=0$ and $[Y,Y]=0$.) Over the real numbers, ${\mathfrak {g}}$ can be viewed as the Lie algebra of the Lie group $G=\mathrm {Aff} (1,\mathbb {R} )$ of affine transformations of the real line, $x\mapsto ax+b$.
The affine group G can be identified with the group of matrices
$${\begin{pmatrix}a&b\\0&1\end{pmatrix}}$$
under matrix multiplication, with $a,b\in \mathbb {R} $, $a\neq 0$. Its Lie algebra is the Lie subalgebra ${\mathfrak {g}}$ of ${\mathfrak {gl}}(2,\mathbb {R} )$ consisting of all matrices
$${\begin{pmatrix}c&d\\0&0\end{pmatrix}}.$$
In these terms, the basis above for ${\mathfrak {g}}$ is given by the matrices
$$X={\begin{pmatrix}1&0\\0&0\end{pmatrix}},\qquad Y={\begin{pmatrix}0&1\\0&0\end{pmatrix}}.$$
For any field $F$, the 1-dimensional subspace $F\cdot Y$ is an ideal in the 2-dimensional Lie algebra ${\mathfrak {g}}$, by the formula $[X,Y]=Y\in F\cdot Y$. Both of the Lie algebras $F\cdot Y$ and ${\mathfrak {g}}/(F\cdot Y)$ are abelian (because 1-dimensional). In this sense, ${\mathfrak {g}}$ can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below.
=== Three dimensions ===
The Heisenberg algebra ${\mathfrak {h}}_{3}(F)$ over a field F is the three-dimensional Lie algebra with a basis $X,Y,Z$ such that
$$[X,Y]=Z,\qquad [X,Z]=0,\qquad [Y,Z]=0.$$
It can be viewed as the Lie algebra of 3×3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis
$$X={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}},\qquad Y={\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}},\qquad Z={\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}}.$$
Over the real numbers, ${\mathfrak{h}}_{3}(\mathbb{R})$ is the Lie algebra of the Heisenberg group $\mathrm{H}_{3}(\mathbb{R})$, that is, the group of matrices $\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\end{pmatrix}$ under matrix multiplication.
For any field F, the center of
h
3
(
F
)
{\displaystyle {\mathfrak {h}}_{3}(F)}
is the 1-dimensional ideal
F
⋅
Z
{\displaystyle F\cdot Z}
, and the quotient
h
3
(
F
)
/
(
F
⋅
Z
)
{\displaystyle {\mathfrak {h}}_{3}(F)/(F\cdot Z)}
is abelian, isomorphic to
F
2
{\displaystyle F^{2}}
. In the terminology below, it follows that
h
3
(
F
)
{\displaystyle {\mathfrak {h}}_{3}(F)}
is nilpotent (though not abelian).
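The defining relations, including the centrality of Z, can be checked directly in the matrix realization just given; a minimal NumPy sketch (with the same ad hoc `bracket` helper):

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Strictly upper-triangular basis of the Heisenberg algebra h_3
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

# Defining relations: [X, Y] = Z, while Z is central
assert np.allclose(bracket(X, Y), Z)
assert np.allclose(bracket(X, Z), 0 * Z)
assert np.allclose(bracket(Y, Z), 0 * Z)

# Nilpotency is visible here: every iterated bracket of length three vanishes
assert np.allclose(bracket(bracket(X, Y), X), 0 * Z)
```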
The Lie algebra ${\mathfrak{so}}(3)$ of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over $\mathbb{R}$. A basis is given by the three matrices
$$F_{1}=\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\quad F_{2}=\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\quad F_{3}=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix}.$$
The commutation relations among these generators are
$$[F_{1},F_{2}]=F_{3},\qquad [F_{2},F_{3}]=F_{1},\qquad [F_{3},F_{1}]=F_{2}.$$
The cross product of vectors in $\mathbb{R}^{3}$ is given by the same formula in terms of the standard basis, so that Lie algebra is isomorphic to ${\mathfrak{so}}(3)$. Also, ${\mathfrak{so}}(3)$ is equivalent to the algebra of spin angular-momentum component operators for spin-1 particles in quantum mechanics.
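Both the cyclic commutation relations and the match with the cross product can be verified numerically; a short NumPy sketch (helper names are illustrative only):

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Skew-symmetric basis of so(3)
F1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
F2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
F3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

# Cyclic commutation relations
assert np.allclose(bracket(F1, F2), F3)
assert np.allclose(bracket(F2, F3), F1)
assert np.allclose(bracket(F3, F1), F2)

# The cross product has the same structure constants on the standard basis:
e1, e2, e3 = np.eye(3)
assert np.allclose(np.cross(e1, e2), e3)  # e1 x e2 = e3, matching [F1, F2] = F3
```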
The Lie algebra ${\mathfrak{so}}(3)$ cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of ${\mathfrak{so}}(3)$.
Another simple Lie algebra of dimension 3, in this case over $\mathbb{C}$, is the space ${\mathfrak{sl}}(2,\mathbb{C})$ of 2 × 2 matrices of trace zero. A basis is given by the three matrices
$$H=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad E=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad F=\begin{pmatrix}0&0\\1&0\end{pmatrix}.$$
The Lie bracket is given by
$$[H,E]=2E,\qquad [H,F]=-2F,\qquad [E,F]=H.$$
Using these formulas, one can show that the Lie algebra ${\mathfrak{sl}}(2,\mathbb{C})$ is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of ${\mathfrak{sl}}(2,\mathbb{C})$, the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the $(c+2)$-eigenspace, while F maps the c-eigenspace into the $(c-2)$-eigenspace.
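Both the bracket relations and the raising behavior of E can be checked numerically on the defining representation; a minimal NumPy sketch:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

H = np.array([[1, 0], [0, -1]], dtype=complex)
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)

# Defining relations of sl(2, C)
assert np.allclose(bracket(H, E), 2 * E)
assert np.allclose(bracket(H, F), -2 * F)
assert np.allclose(bracket(E, F), H)

# Raising behavior: if H v = c v, then H (E v) = (c + 2) (E v)
v = np.array([0, 1], dtype=complex)  # eigenvector of H with eigenvalue c = -1
w = E @ v                            # E sends it to the (c + 2) = +1 eigenspace
assert np.allclose(H @ w, 1 * w)
```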
The Lie algebra ${\mathfrak{sl}}(2,\mathbb{C})$ is isomorphic to the complexification of ${\mathfrak{so}}(3)$, meaning the tensor product ${\mathfrak{so}}(3)\otimes_{\mathbb{R}}\mathbb{C}$. The formulas for the Lie bracket are easier to analyze in the case of ${\mathfrak{sl}}(2,\mathbb{C})$. As a result, it is common to analyze complex representations of the group $\mathrm{SO}(3)$ by relating them to representations of the Lie algebra ${\mathfrak{sl}}(2,\mathbb{C})$.
=== Infinite dimensions ===
The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over $\mathbb{R}$.
The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over $\mathbb{C}$, with structure much like that of the finite-dimensional simple Lie algebras (such as ${\mathfrak{sl}}(n,\mathbb{C})$).
The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras.
The Virasoro algebra is important in string theory.
The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint $V\mapsto L(V)$, called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra $L(V)$ is infinite-dimensional for V of dimension at least 2.
== Representations ==
=== Definitions ===
Given a vector space V, let ${\mathfrak{gl}}(V)$ denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by $[X,Y]=XY-YX$. A representation of a Lie algebra ${\mathfrak{g}}$ on V is a Lie algebra homomorphism
$$\pi\colon{\mathfrak{g}}\to{\mathfrak{gl}}(V).$$
That is, $\pi$ sends each element of ${\mathfrak{g}}$ to a linear map from V to itself, in such a way that the Lie bracket on ${\mathfrak{g}}$ corresponds to the commutator of linear maps.
A representation is said to be faithful if its kernel is zero. Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of
${\mathfrak{gl}}(n,F)$ for some positive integer n.
=== Adjoint representation ===
For any Lie algebra ${\mathfrak{g}}$, the adjoint representation is the representation
$$\operatorname{ad}\colon{\mathfrak{g}}\to{\mathfrak{gl}}({\mathfrak{g}})$$
given by $\operatorname{ad}(x)(y)=[x,y]$. (This is a representation of ${\mathfrak{g}}$ by the Jacobi identity.)
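A small sketch of how the adjoint representation is computed in practice: once a basis is fixed, ad(x) becomes a matrix, and the homomorphism property $\operatorname{ad}([x,y])=[\operatorname{ad}(x),\operatorname{ad}(y)]$ (the Jacobi identity in disguise) can be verified. The basis of ${\mathfrak{so}}(3)$ and the helper names below are choices made for illustration:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Basis of so(3) (as in the examples above)
F1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
F2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
F3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
basis = [F1, F2, F3]

def coords(M):
    """Coordinates of a skew-symmetric 3x3 matrix in the basis F1, F2, F3."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def ad(x):
    """Matrix of ad(x): y -> [x, y] with respect to the chosen basis."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

# ad is a Lie algebra homomorphism: ad([x, y]) = [ad(x), ad(y)]
for x in basis:
    for y in basis:
        assert np.allclose(ad(bracket(x, y)), bracket(ad(x), ad(y)))
```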
=== Goals of representation theory ===
One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra ${\mathfrak{g}}$. Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of ${\mathfrak{g}}$. For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula.
=== Universal enveloping algebra ===
The functor that takes an associative algebra A over a field F to A as a Lie algebra (by $[X,Y]:=XY-YX$) has a left adjoint ${\mathfrak{g}}\mapsto U({\mathfrak{g}})$, called the universal enveloping algebra. To construct this: given a Lie algebra ${\mathfrak{g}}$ over F, let
$$T({\mathfrak{g}})=F\oplus{\mathfrak{g}}\oplus({\mathfrak{g}}\otimes{\mathfrak{g}})\oplus({\mathfrak{g}}\otimes{\mathfrak{g}}\otimes{\mathfrak{g}})\oplus\cdots$$
be the tensor algebra on ${\mathfrak{g}}$, also called the free associative algebra on the vector space ${\mathfrak{g}}$. Here $\otimes$ denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in $T({\mathfrak{g}})$ generated by the elements $XY-YX-[X,Y]$ for $X,Y\in{\mathfrak{g}}$; then the universal enveloping algebra is the quotient ring $U({\mathfrak{g}})=T({\mathfrak{g}})/I$. It satisfies the Poincaré–Birkhoff–Witt theorem: if $e_{1},\ldots,e_{n}$ is a basis for ${\mathfrak{g}}$ as an F-vector space, then a basis for $U({\mathfrak{g}})$ is given by all ordered products $e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}$ with $i_{1},\ldots,i_{n}$ natural numbers. In particular, the map ${\mathfrak{g}}\to U({\mathfrak{g}})$ is injective.
Representations of ${\mathfrak{g}}$ are equivalent to modules over the universal enveloping algebra. The fact that ${\mathfrak{g}}\to U({\mathfrak{g}})$ is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on $U({\mathfrak{g}})$. This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra.
=== Representation theory in physics ===
The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem—specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra
${\mathfrak{so}}(3)$ of the rotation group $\mathrm{SO}(3)$. Typically, the space of states is far from being irreducible under the pertinent operators, but
one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra
${\mathfrak{so}}(3)$.
== Structure theory and classification ==
Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups.
=== Abelian, nilpotent, and solvable ===
Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras.
A Lie algebra ${\mathfrak{g}}$ is abelian if the Lie bracket vanishes; that is, $[x,y]=0$ for all x and y in ${\mathfrak{g}}$. In particular, the Lie algebra of an abelian Lie group (such as the group $\mathbb{R}^{n}$ under addition or the torus group $\mathbb{T}^{n}$) is abelian. Every finite-dimensional abelian Lie algebra over a field $F$ is isomorphic to $F^{n}$ for some $n\geq 0$, meaning an n-dimensional vector space with Lie bracket zero.
A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra ${\mathfrak{g}}$ is $[{\mathfrak{g}},{\mathfrak{g}}]$, meaning the linear subspace spanned by all brackets $[x,y]$ with $x,y\in{\mathfrak{g}}$. The commutator subalgebra is an ideal in ${\mathfrak{g}}$, in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group.
A Lie algebra ${\mathfrak{g}}$ is nilpotent if the lower central series
$${\mathfrak{g}}\supseteq[{\mathfrak{g}},{\mathfrak{g}}]\supseteq[[{\mathfrak{g}},{\mathfrak{g}}],{\mathfrak{g}}]\supseteq[[[{\mathfrak{g}},{\mathfrak{g}}],{\mathfrak{g}}],{\mathfrak{g}}]\supseteq\cdots$$
becomes zero after finitely many steps. Equivalently, ${\mathfrak{g}}$ is nilpotent if there is a finite sequence of ideals in ${\mathfrak{g}}$,
$$0={\mathfrak{a}}_{0}\subseteq{\mathfrak{a}}_{1}\subseteq\cdots\subseteq{\mathfrak{a}}_{r}={\mathfrak{g}},$$
such that ${\mathfrak{a}}_{j}/{\mathfrak{a}}_{j-1}$ is central in ${\mathfrak{g}}/{\mathfrak{a}}_{j-1}$ for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in ${\mathfrak{g}}$ the adjoint endomorphism
$$\operatorname{ad}(u):{\mathfrak{g}}\to{\mathfrak{g}},\qquad\operatorname{ad}(u)v=[u,v]$$
is nilpotent.
More generally, a Lie algebra ${\mathfrak{g}}$ is said to be solvable if the derived series
$${\mathfrak{g}}\supseteq[{\mathfrak{g}},{\mathfrak{g}}]\supseteq[[{\mathfrak{g}},{\mathfrak{g}}],[{\mathfrak{g}},{\mathfrak{g}}]]\supseteq[[[{\mathfrak{g}},{\mathfrak{g}}],[{\mathfrak{g}},{\mathfrak{g}}]],[[{\mathfrak{g}},{\mathfrak{g}}],[{\mathfrak{g}},{\mathfrak{g}}]]]\supseteq\cdots$$
becomes zero after finitely many steps. Equivalently, ${\mathfrak{g}}$ is solvable if there is a finite sequence of Lie subalgebras,
$$0={\mathfrak{m}}_{0}\subseteq{\mathfrak{m}}_{1}\subseteq\cdots\subseteq{\mathfrak{m}}_{r}={\mathfrak{g}},$$
such that ${\mathfrak{m}}_{j-1}$ is an ideal in ${\mathfrak{m}}_{j}$ with ${\mathfrak{m}}_{j}/{\mathfrak{m}}_{j-1}$ abelian for each j.
Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over $\mathbb{R}$.
For example, for a positive integer n and a field F of characteristic zero, the radical of ${\mathfrak{gl}}(n,F)$ is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space ${\mathfrak{b}}_{n}$ of upper-triangular matrices in ${\mathfrak{gl}}(n)$; this is not nilpotent when $n\geq 2$. An example of a nilpotent Lie algebra is the space ${\mathfrak{u}}_{n}$ of strictly upper-triangular matrices in ${\mathfrak{gl}}(n)$; this is not abelian when $n\geq 3$.
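These properties can be computed mechanically: the sketch below builds the span of all brackets with NumPy and watches the derived series of ${\mathfrak{b}}_{2}$ reach zero while its lower central series stabilizes at a nonzero subalgebra. The helper functions are ad hoc illustrations, not library routines:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

def span_basis(mats, tol=1e-10):
    """Orthonormal basis (as matrices) of the linear span of a list of matrices."""
    if not mats:
        return []
    V = np.array([m.flatten() for m in mats])
    U, s, Vt = np.linalg.svd(V)
    return [Vt[i].reshape(mats[0].shape) for i in range(int(np.sum(s > tol)))]

def bracket_span(A_basis, B_basis):
    """Basis of the span of all brackets [a, b] with a in span(A), b in span(B)."""
    return span_basis([bracket(a, b) for a in A_basis for b in B_basis])

# b_2: upper-triangular 2x2 matrices (solvable but not nilpotent)
E11 = np.array([[1, 0], [0, 0]], dtype=float)
E12 = np.array([[0, 1], [0, 0]], dtype=float)
E22 = np.array([[0, 0], [0, 1]], dtype=float)
b2 = [E11, E12, E22]

# Derived series reaches 0 after two steps, so b_2 is solvable
d1 = bracket_span(b2, b2)        # spanned by E12 alone
d2 = bracket_span(d1, d1)        # empty
print(len(b2), len(d1), len(d2))  # 3 1 0

# Lower central series stabilizes at span{E12} != 0, so b_2 is not nilpotent
c1 = bracket_span(b2, b2)
c2 = bracket_span(c1, b2)
print(len(c1), len(c2))           # 1 1
```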
=== Simple and semisimple ===
A Lie algebra ${\mathfrak{g}}$ is called simple if it is not abelian and the only ideals in ${\mathfrak{g}}$ are 0 and ${\mathfrak{g}}$. (In particular, a one-dimensional Lie algebra ${\mathfrak{g}}$, which is necessarily abelian, is by definition not simple, even though its only ideals are 0 and ${\mathfrak{g}}$.) A finite-dimensional Lie algebra ${\mathfrak{g}}$ is called semisimple if the only solvable ideal in ${\mathfrak{g}}$ is 0. In characteristic zero, a Lie algebra ${\mathfrak{g}}$ is semisimple if and only if it is isomorphic to a product of simple Lie algebras, ${\mathfrak{g}}\cong{\mathfrak{g}}_{1}\times\cdots\times{\mathfrak{g}}_{r}$.
For example, the Lie algebra ${\mathfrak{sl}}(n,F)$ is simple for every $n\geq 2$ and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra ${\mathfrak{su}}(n)$ over $\mathbb{R}$ is simple for every $n\geq 2$. The Lie algebra ${\mathfrak{so}}(n)$ over $\mathbb{R}$ is simple if $n=3$ or $n\geq 5$. (There are "exceptional isomorphisms" ${\mathfrak{so}}(3)\cong{\mathfrak{su}}(2)$ and ${\mathfrak{so}}(4)\cong{\mathfrak{su}}(2)\times{\mathfrak{su}}(2)$.)
The concept of semisimplicity for Lie algebras is closely related to the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations).
A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra.
For example, ${\mathfrak{gl}}(n,F)$ is reductive for F of characteristic zero: for $n\geq 2$, it is isomorphic to the product
$${\mathfrak{gl}}(n,F)\cong F\times{\mathfrak{sl}}(n,F),$$
where F denotes the center of ${\mathfrak{gl}}(n,F)$, the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra ${\mathfrak{sl}}(n,F)$ is simple, ${\mathfrak{gl}}(n,F)$ contains few ideals: only 0, the center F, ${\mathfrak{sl}}(n,F)$, and all of ${\mathfrak{gl}}(n,F)$.
=== Cartan's criterion ===
Cartan's criterion (by Élie Cartan) gives conditions for a finite-dimensional Lie algebra over a field of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on ${\mathfrak{g}}$ defined by
$$K(u,v)=\operatorname{tr}(\operatorname{ad}(u)\operatorname{ad}(v)),$$
where tr denotes the trace of a linear operator. Namely: a Lie algebra ${\mathfrak{g}}$ is semisimple if and only if the Killing form is nondegenerate. A Lie algebra ${\mathfrak{g}}$ is solvable if and only if $K({\mathfrak{g}},[{\mathfrak{g}},{\mathfrak{g}}])=0.$
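The Killing form is straightforward to compute from a basis. The following sketch evaluates its Gram matrix on the standard basis H, E, F of ${\mathfrak{sl}}(2,\mathbb{R})$ (a basis chosen here for illustration) and confirms nondegeneracy, hence semisimplicity by Cartan's criterion:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# sl(2, R) basis
H = np.array([[1, 0], [0, -1]], dtype=float)
E = np.array([[0, 1], [0, 0]], dtype=float)
F = np.array([[0, 0], [1, 0]], dtype=float)
basis = [H, E, F]

def coords(M):
    """Coordinates in the basis H, E, F (valid for trace-zero 2x2 matrices)."""
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

def ad(x):
    return np.column_stack([coords(bracket(x, b)) for b in basis])

# Killing form K(u, v) = tr(ad(u) ad(v)), assembled as a Gram matrix
K = np.array([[np.trace(ad(u) @ ad(v)) for v in basis] for u in basis])
print(K)  # [[8, 0, 0], [0, 0, 4], [0, 4, 0]]
# Nonzero determinant means the form is nondegenerate, so sl(2, R) is semisimple
assert abs(np.linalg.det(K)) > 1e-10
```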
=== Classification ===
The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras.
The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is
${\mathfrak{sl}}(n+1,F)$, Bn is ${\mathfrak{so}}(2n+1,F)$, Cn is ${\mathfrak{sp}}(2n,F)$, and Dn is ${\mathfrak{so}}(2n,F)$. The other five are known as the exceptional Lie algebras.
The classification of finite-dimensional simple Lie algebras over $\mathbb{R}$ is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra ${\mathfrak{g}}$ over $\mathbb{R}$ by considering its complexification ${\mathfrak{g}}\otimes_{\mathbb{R}}\mathbb{C}$.
In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic
$p>3$ were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade (see the classification of simple Lie algebras in the article on restricted Lie algebras). It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero.
== Relation to Lie groups ==
Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups.
The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over
$\mathbb{R}$ (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra ${\mathfrak{g}}$, there is a connected Lie group $G$ with Lie algebra ${\mathfrak{g}}$. This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3).
For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over $\mathbb{R}$.
The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra.
Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood.
For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group.
Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group
$G=\mathbb{R}$, an infinite-dimensional representation of $G$ usually cannot be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules provides a more subtle relation between infinite-dimensional representations of groups and Lie algebras.
== Real form and complexification ==
Given a complex Lie algebra ${\mathfrak{g}}$, a real Lie algebra ${\mathfrak{g}}_{0}$ is said to be a real form of ${\mathfrak{g}}$ if the complexification ${\mathfrak{g}}_{0}\otimes_{\mathbb{R}}\mathbb{C}$ is isomorphic to ${\mathfrak{g}}$. A real form need not be unique; for example, ${\mathfrak{sl}}(2,\mathbb{C})$ has two real forms up to isomorphism, ${\mathfrak{sl}}(2,\mathbb{R})$ and ${\mathfrak{su}}(2)$.
Given a semisimple complex Lie algebra ${\mathfrak{g}}$
, a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism.
== Lie algebra with additional structures ==
A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex.
For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers
$\mathbb{Q}$ to describe rational homotopy theory in algebraic terms.
== Lie ring ==
The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R. Namely, a Lie algebra ${\mathfrak{g}}$ over R is an R-module with an alternating R-bilinear map $[\ ,\ ]\colon{\mathfrak{g}}\times{\mathfrak{g}}\to{\mathfrak{g}}$ that satisfies the Jacobi identity. A Lie algebra over the ring $\mathbb{Z}$ of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.)
Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below.
p-adic Lie groups are related to Lie algebras over the field $\mathbb{Q}_{p}$ of p-adic numbers as well as over the ring $\mathbb{Z}_{p}$ of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers.
=== Examples ===
Here is a construction of Lie rings arising from the study of abstract groups. For elements $x,y$ of a group, define the commutator $[x,y]=x^{-1}y^{-1}xy$. Let
$$G=G_{1}\supseteq G_{2}\supseteq G_{3}\supseteq\cdots\supseteq G_{n}\supseteq\cdots$$
be a filtration of a group $G$, that is, a chain of subgroups such that $[G_{i},G_{j}]$ is contained in $G_{i+j}$ for all $i,j$. (For the Lazard correspondence, one takes the filtration to be the lower central series of G.)
Then
$$L=\bigoplus_{i\geq 1}G_{i}/G_{i+1}$$
is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group $G_{i}/G_{i+1}$), and with Lie bracket
$$G_{i}/G_{i+1}\times G_{j}/G_{j+1}\to G_{i+j}/G_{i+j+1}$$
given by commutators in the group:
$$[xG_{i+1},yG_{j+1}]:=[x,y]G_{i+j+1}.$$
For example, the Lie ring associated to the lower central series of the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field $\mathbb{Z}/2\mathbb{Z}$.
== Definition using category-theoretic notation ==
The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.)
For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism $\tau:A\otimes A\to A\otimes A$ is defined by
$$\tau(x\otimes y)=y\otimes x.$$
The cyclic-permutation braiding $\sigma:A\otimes A\otimes A\to A\otimes A\otimes A$ is defined as
$$\sigma=(\mathrm{id}\otimes\tau)\circ(\tau\otimes\mathrm{id}),$$
where $\mathrm{id}$ is the identity morphism. Equivalently, $\sigma$ is defined by
$$\sigma(x\otimes y\otimes z)=y\otimes z\otimes x.$$
With this notation, a Lie algebra can be defined as an object $A$ in the category of vector spaces together with a morphism $[\cdot,\cdot]\colon A\otimes A\to A$ that satisfies the two morphism equalities
$$[\cdot,\cdot]\circ(\mathrm{id}+\tau)=0$$
and
$$[\cdot,\cdot]\circ([\cdot,\cdot]\otimes\mathrm{id})\circ(\mathrm{id}+\sigma+\sigma^{2})=0.$$
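Once a basis is fixed, these two morphism equalities become concrete tensor identities: the bracket is a structure tensor C, and τ and σ act as index permutations. The sketch below checks both equalities for the cross-product bracket on $\mathbb{R}^{3}$ (the tensor D and the axis permutations are illustrative bookkeeping, not a standard API):

```python
import numpy as np

# Structure tensor C[i, j, k] of a bracket on a basis: [e_i, e_j] = sum_k C[i,j,k] e_k.
# Here: the cross product on R^3, i.e. C is the Levi-Civita symbol.
C = np.zeros((3, 3, 3))
for (i, j, k), sign in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                        (1, 0, 2): -1, (2, 1, 0): -1, (0, 2, 1): -1}.items():
    C[i, j, k] = sign

# First equality, [.,.] o (id + tau) = 0: tau swaps the two tensor factors,
# so this is antisymmetry of C in its two input slots
assert np.allclose(C + C.transpose(1, 0, 2), 0)

# The composite [.,.] o ([.,.] (x) id) as a tensor D[i,j,k,l]:
# e_i (x) e_j (x) e_k  ->  [[e_i, e_j], e_k] = sum_l D[i,j,k,l] e_l
D = np.einsum('ijm,mkl->ijkl', C, C)

# Second equality, [.,.] o ([.,.] (x) id) o (id + sigma + sigma^2) = 0:
# sigma cyclically permutes the three factors, so this is the Jacobi identity
assert np.allclose(D + D.transpose(1, 2, 0, 3) + D.transpose(2, 0, 1, 3), 0)
```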
== Generalization ==
Several generalizations of a Lie algebra have been proposed, many of them coming from physics. Among them are graded Lie algebras, Lie superalgebras, and Lie n-algebras.
== See also ==
== Remarks ==
== References ==
== Sources ==
Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312.
Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355.
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229.
Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562.
Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927.
Khukhro, E. I. (1998). p-Automorphisms of Finite p-Groups. Cambridge University Press. doi:10.1017/CBO9780511526008. ISBN 0-521-59717-X. MR 1615819.
Knapp, Anthony W. (2001) [1986]. Representation Theory of Semisimple Groups: an Overview Based on Examples. Princeton University Press. ISBN 0-691-09089-0. MR 1880691.
Milnor, John (2010) [1986]. "Remarks on infinite-dimensional Lie groups". Collected Papers of John Milnor. Vol. 5. American Mathematical Society. pp. 91–141. ISBN 978-0-8218-4876-0. MR 0830252.
O'Connor, J.J; Robertson, E.F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive.
O'Connor, J.J; Robertson, E.F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive.
Quillen, Daniel (1969). "Rational homotopy theory". Annals of Mathematics. 90 (2): 205–295. doi:10.2307/1970725. JSTOR 1970725. MR 0258031.
Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691.
Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308.
Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. ISBN 978-0127505503. MR 0106711.
== External links ==
Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. Archived from the original on 2010-04-20.
"Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists".
The Lorentz group is a Lie group of symmetries of the spacetime of special relativity. This group can be realized as a collection of matrices, linear transformations, or unitary operators on some Hilbert space; it has a variety of representations. This group is significant because special relativity and quantum mechanics are the two physical theories that are most thoroughly established, and the conjunction of these two theories is the study of the infinite-dimensional unitary representations of the Lorentz group. These have both historical importance in mainstream physics and connections to more speculative present-day theories.
== Development ==
The full theory of the finite-dimensional representations of the Lie algebra of the Lorentz group is deduced using the general framework of the representation theory of semisimple Lie algebras. The finite-dimensional representations of the connected component $\mathrm{SO}(3;1)^{+}$ of the full Lorentz group O(3; 1) are obtained by employing the Lie correspondence and the matrix exponential. The full finite-dimensional representation theory of the universal covering group (and also the spin group, a double cover) $\mathrm{SL}(2,\mathbb{C})$ of $\mathrm{SO}(3;1)^{+}$ is obtained, and explicitly given in terms of action on a function space in representations of $\mathrm{SL}(2,\mathbb{C})$ and ${\mathfrak{sl}}(2,\mathbb{C})$. The representatives of time reversal and space inversion are given in space inversion and time reversal, completing the finite-dimensional theory for the full Lorentz group. The general properties of the (m, n) representations are outlined. Action on function spaces is considered, with the action on spherical harmonics and the Riemann P-functions appearing as examples. The infinite-dimensional case of irreducible unitary representations is realized for the $\mathrm{SL}(2,\mathbb{C})$ principal series and the complementary series. Finally, the Plancherel formula for $\mathrm{SL}(2,\mathbb{C})$ is given, and representations of SO(3, 1) are classified and realized for Lie algebras.
The development of the representation theory has historically followed the development of the more general theory of representation theory of semisimple groups, largely due to Élie Cartan and Hermann Weyl, but the Lorentz group has also received special attention due to its importance in physics. Notable contributors are physicist E. P. Wigner and mathematician Valentine Bargmann with their Bargmann–Wigner program, one conclusion of which is, roughly, that a classification of all unitary representations of the inhomogeneous Lorentz group amounts to a classification of all possible relativistic wave equations. The classification of the irreducible infinite-dimensional representations of the Lorentz group was established by Paul Dirac's doctoral student in theoretical physics, Harish-Chandra, later turned mathematician, in 1947. The corresponding classification for $\mathrm{SL}(2,\mathbb{C})$ was published independently by Bargmann and Israel Gelfand together with Mark Naimark in the same year.
== Applications ==
Many of the representations, both finite-dimensional and infinite-dimensional, are important in theoretical physics. Representations appear in the description of fields in classical field theory, most importantly the electromagnetic field, and of particles in relativistic quantum mechanics, as well as of both particles and quantum fields in quantum field theory and of various objects in string theory and beyond. The representation theory also provides the theoretical ground for the concept of spin. The theory enters into general relativity in the sense that in small enough regions of spacetime, physics is that of special relativity.
The finite-dimensional irreducible non-unitary representations, together with the irreducible infinite-dimensional unitary representations of the inhomogeneous Lorentz group (the Poincaré group), are the representations that have direct physical relevance.
Infinite-dimensional unitary representations of the Lorentz group appear by restriction of the irreducible infinite-dimensional unitary representations of the Poincaré group acting on the Hilbert spaces of relativistic quantum mechanics and quantum field theory. But these are also of mathematical interest and of potential direct physical relevance in other roles than that of a mere restriction. There were speculative theories (tensors and spinors have infinite counterparts in the expansors of Dirac and the expinors of Harish-Chandra) consistent with relativity and quantum mechanics, but these have found no proven physical application. Modern speculative theories potentially have similar ingredients, as described below.
=== Classical field theory ===
While the electromagnetic field together with the gravitational field are the only classical fields providing accurate descriptions of nature, other types of classical fields are important too. In the approach to quantum field theory (QFT) referred to as second quantization, the starting point is one or more classical fields, where e.g. the wave functions solving the Dirac equation are considered as classical fields prior to (second) quantization. While second quantization and the Lagrangian formalism associated with it is not a fundamental aspect of QFT, it is the case that so far all quantum field theories can be approached this way, including the standard model. In these cases, there are classical versions of the field equations following from the Euler–Lagrange equations derived from the Lagrangian using the principle of least action. These field equations must be relativistically invariant, and their solutions (which will qualify as relativistic wave functions according to the definition below) must transform under some representation of the Lorentz group.
The action of the Lorentz group on the space of field configurations (a field configuration is the spacetime history of a particular solution, e.g. the electromagnetic field in all of space over all time is one field configuration) resembles the action on the Hilbert spaces of quantum mechanics, except that the commutator brackets are replaced by field theoretical Poisson brackets.
=== Relativistic quantum mechanics ===
For the present purposes the following definition is made: A relativistic wave function is a set of n functions ψα on spacetime which transforms under an arbitrary proper Lorentz transformation Λ as
$$\psi'^{\alpha}(x)=D[\Lambda]^{\alpha}{}_{\beta}\,\psi^{\beta}\left(\Lambda^{-1}x\right),$$
where D[Λ] is an n-dimensional matrix representative of Λ belonging to some direct sum of the (m, n) representations to be introduced below.
The most useful relativistic quantum mechanics one-particle theories (there are no fully consistent such theories) are the Klein–Gordon equation and the Dirac equation in their original setting. They are relativistically invariant and their solutions transform under the Lorentz group as Lorentz scalars ((m, n) = (0, 0)) and bispinors ((0, 1/2) ⊕ (1/2, 0)) respectively. The electromagnetic field is a relativistic wave function according to this definition, transforming under (1, 0) ⊕ (0, 1).
The infinite-dimensional representations may be used in the analysis of scattering.
=== Quantum field theory ===
In quantum field theory, the demand for relativistic invariance enters, among other ways, in that the S-matrix must necessarily be Poincaré invariant. This has the implication that there are one or more infinite-dimensional representations of the Lorentz group acting on Fock space. One way to guarantee the existence of such representations is the existence of a Lagrangian description (with modest requirements imposed, see the reference) of the system using the canonical formalism, from which a realization of the generators of the Lorentz group may be deduced.
The transformations of field operators illustrate the complementary role played by the finite-dimensional representations of the Lorentz group and the infinite-dimensional unitary representations of the Poincaré group, witnessing the deep unity between mathematics and physics. For illustration, consider the definition of an n-component field operator: A relativistic field operator is a set of n operator-valued functions on spacetime which transforms under proper Poincaré transformations (Λ, a) according to
$$\Psi^{\alpha}(x)\to\Psi'^{\alpha}(x)=U[\Lambda,a]\,\Psi^{\alpha}(x)\,U^{-1}[\Lambda,a]=D\left[\Lambda^{-1}\right]^{\alpha}{}_{\beta}\,\Psi^{\beta}(\Lambda x+a)$$
Here U[Λ, a] is the unitary operator representing (Λ, a) on the Hilbert space on which Ψ is defined and D is an n-dimensional representation of the Lorentz group. The transformation rule is the second Wightman axiom of quantum field theory.
By considerations of differential constraints that the field operator must be subjected to in order to describe a single particle with definite mass m and spin s (or helicity), it is deduced that the field operator is a superposition of plane-wave modes,
$$\Psi^{\alpha}(x)=\sum_{\sigma}\int d^{3}p\left[a(\mathbf{p},\sigma)\,u^{\alpha}(\mathbf{p},\sigma)e^{ip\cdot x}+a^{\dagger}(\mathbf{p},\sigma)\,v^{\alpha}(\mathbf{p},\sigma)e^{-ip\cdot x}\right],\qquad\text{(X1)}$$
where a†, a are interpreted as creation and annihilation operators respectively. The creation operator a† transforms according to
$$a^{\dagger}(\mathbf{p},\sigma)\to a'^{\dagger}(\mathbf{p},\sigma)=U[\Lambda]\,a^{\dagger}(\mathbf{p},\sigma)\,U\left[\Lambda^{-1}\right]=a^{\dagger}(\Lambda\mathbf{p},\rho)\,D^{(s)}\left[R(\Lambda,\mathbf{p})^{-1}\right]^{\rho}{}_{\sigma},$$
and similarly for the annihilation operator. The point to be made is that the field operator transforms according to a finite-dimensional non-unitary representation of the Lorentz group, while the creation operator transforms under the infinite-dimensional unitary representation of the Poincaré group characterized by the mass and spin (m, s) of the particle. The connection between the two is given by the wave functions, also called coefficient functions,
$$u^{\alpha}(\mathbf{p},\sigma)e^{ip\cdot x},\qquad v^{\alpha}(\mathbf{p},\sigma)e^{-ip\cdot x},$$
that carry both the indices (x, α) operated on by Lorentz transformations and the indices (p, σ) operated on by Poincaré transformations. This may be called the Lorentz–Poincaré connection. To exhibit the connection, subject both sides of equation (X1) to a Lorentz transformation, resulting in, for example for u,
$$D[\Lambda]^{\alpha}{}_{\alpha'}\,u^{\alpha'}(\mathbf{p},\lambda)=D^{(s)}[R(\Lambda,\mathbf{p})]^{\lambda'}{}_{\lambda}\,u^{\alpha}\left(\Lambda\mathbf{p},\lambda'\right),$$
where D is the non-unitary Lorentz group representative of Λ and D(s) is a unitary representative of the so-called Wigner rotation R associated to Λ and p that derives from the representation of the Poincaré group, and s is the spin of the particle.
All of the above formulas, including the definition of the field operator in terms of creation and annihilation operators, as well as the differential equations satisfied by the field operator for a particle with specified mass, spin and the (m, n) representation under which it is supposed to transform, and also that of the wave function, can be derived from group-theoretical considerations alone once the frameworks of quantum mechanics and special relativity are given.
=== Speculative theories ===
In theories in which spacetime can have more than D = 4 dimensions, the generalized Lorentz groups O(D − 1; 1) of the appropriate dimension take the place of O(3; 1).
The requirement of Lorentz invariance takes on perhaps its most dramatic effect in string theory. Classical relativistic strings can be handled in the Lagrangian framework by using the Nambu–Goto action. This results in a relativistically invariant theory in any spacetime dimension. But as it turns out, the theory of open and closed bosonic strings (the simplest string theory) is impossible to quantize in such a way that the Lorentz group is represented on the space of states (a Hilbert space) unless the dimension of spacetime is 26. The corresponding result for superstring theory is again deduced demanding Lorentz invariance, but now with supersymmetry. In these theories the Poincaré algebra is replaced by a supersymmetry algebra which is a Z2-graded Lie algebra extending the Poincaré algebra. The structure of such an algebra is to a large degree fixed by the demands of Lorentz invariance. In particular, the fermionic operators (grade 1) belong to a (0, 1/2) or (1/2, 0) representation space of the (ordinary) Lorentz Lie algebra. The only possible dimension of spacetime in such theories is 10.
== Finite-dimensional representations ==
Representation theory of groups in general, and Lie groups in particular, is a very rich subject. The Lorentz group has some properties that make it "agreeable" and others that make it "not very agreeable" within the context of representation theory; the group is simple and thus semisimple, but is not connected, and none of its components are simply connected. Furthermore, the Lorentz group is not compact.
For finite-dimensional representations, the presence of semisimplicity means that the Lorentz group can be dealt with the same way as other semisimple groups, using a well-developed theory. In addition, all representations are built from the irreducible ones, since the Lie algebra possesses the complete reducibility property. But the non-compactness of the Lorentz group, in combination with its lack of simple connectedness, means that it cannot be dealt with in all respects within the simple framework that applies to simply connected, compact groups. Non-compactness implies, for a connected simple Lie group, that no nontrivial finite-dimensional unitary representations exist. Lack of simple connectedness gives rise to spin representations of the group. The non-connectedness means that, for representations of the full Lorentz group, time reversal and reversal of spatial orientation have to be dealt with separately.
=== History ===
The development of the finite-dimensional representation theory of the Lorentz group mostly follows that of representation theory in general. Lie theory originated with Sophus Lie in 1873. By 1888 the classification of simple Lie algebras was essentially completed by Wilhelm Killing. In 1913 the theorem of highest weight for representations of simple Lie algebras, the path that will be followed here, was completed by Élie Cartan. Richard Brauer was during the period of 1935–38 largely responsible for the development of the Weyl-Brauer matrices describing how spin representations of the Lorentz Lie algebra can be embedded in Clifford algebras. The Lorentz group has also historically received special attention in representation theory, see History of infinite-dimensional unitary representations below, due to its exceptional importance in physics. Mathematicians Hermann Weyl and Harish-Chandra and physicists Eugene Wigner and Valentine Bargmann made substantial contributions both to general representation theory and in particular to the Lorentz group. Physicist Paul Dirac was perhaps the first to manifestly knit everything together in a practical application of major lasting importance with the Dirac equation in 1928.
=== The Lie algebra ===
This section addresses the irreducible complex linear representations of the complexification ${\mathfrak{so}}(3;1)_{\mathbb{C}}$ of the Lie algebra ${\mathfrak{so}}(3;1)$ of the Lorentz group. A convenient basis for ${\mathfrak{so}}(3;1)$ is given by the three generators Ji of rotations and the three generators Ki of boosts. They are explicitly given in conventions and Lie algebra bases.
The Lie algebra is complexified, and the basis is changed to the components of its two ideals
$$\mathbf{A}=\frac{\mathbf{J}+i\mathbf{K}}{2},\qquad\mathbf{B}=\frac{\mathbf{J}-i\mathbf{K}}{2}.$$
The components of A = (A1, A2, A3) and B = (B1, B2, B3) separately satisfy the commutation relations of the Lie algebra ${\mathfrak{su}}(2)$ and, moreover, they commute with each other,
$$\left[A_{i},A_{j}\right]=i\varepsilon_{ijk}A_{k},\qquad\left[B_{i},B_{j}\right]=i\varepsilon_{ijk}B_{k},\qquad\left[A_{i},B_{j}\right]=0,$$
where i, j, k are indices which each take values 1, 2, 3, and εijk is the three-dimensional Levi-Civita symbol. Let $\mathbf{A}_{\mathbb{C}}$ and $\mathbf{B}_{\mathbb{C}}$ denote the complex linear span of A and B respectively.
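The decomposition can be illustrated numerically. The sketch below realizes A and B as two commuting copies of the spin-1/2 matrices (a choice made purely for illustration; the article's explicit J and K are given in conventions and Lie algebra bases), recovers J and K, and checks the commutation relations, including that boosts close into rotations:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Spin-1/2 su(2) generators (halved Pauli matrices)
s1 = np.array([[0, 1], [1, 0]], dtype=complex) / 2
s2 = np.array([[0, -1j], [1j, 0]]) / 2
s3 = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Two commuting copies: A acts on the first tensor factor, B on the second
A = [np.kron(s, I2) for s in (s1, s2, s3)]
B = [np.kron(I2, s) for s in (s1, s2, s3)]

# Recover J and K from A = (J + iK)/2 and B = (J - iK)/2
J = [A[i] + B[i] for i in range(3)]
K = [-1j * (A[i] - B[i]) for i in range(3)]

third = lambda i, j: 3 - i - j  # index of the remaining axis for cyclic (i, j)
for i, j in [(0, 1), (1, 2), (2, 0)]:
    k = third(i, j)
    assert np.allclose(bracket(A[i], A[j]), 1j * A[k])
    assert np.allclose(bracket(B[i], B[j]), 1j * B[k])
    assert np.allclose(bracket(A[i], B[j]), 0 * A[k])   # the two ideals commute
    assert np.allclose(bracket(J[i], J[j]), 1j * J[k])  # rotations
    assert np.allclose(bracket(K[i], K[j]), -1j * J[k]) # boosts close into rotations
```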
One has the isomorphisms
$${\mathfrak{so}}(3;1)\hookrightarrow{\mathfrak{so}}(3;1)_{\mathbb{C}}\cong\mathbf{A}_{\mathbb{C}}\oplus\mathbf{B}_{\mathbb{C}}\cong{\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})\cong{\mathfrak{sl}}(2,\mathbb{C})_{\mathbb{C}}\hookleftarrow{\mathfrak{sl}}(2,\mathbb{C}),\qquad\text{(A1)}$$
where ${\mathfrak{sl}}(2,\mathbb{C})$ is the complexification of ${\mathfrak{su}}(2)\cong\mathbf{A}\cong\mathbf{B}$.
The utility of these isomorphisms comes from the fact that all irreducible representations of ${\mathfrak{su}}(2)$, and hence all irreducible complex linear representations of ${\mathfrak{sl}}(2,\mathbb{C})$, are known. Every irreducible complex linear representation of ${\mathfrak{sl}}(2,\mathbb{C})$ is isomorphic to one of the highest weight representations. These are explicitly given in complex linear representations of ${\mathfrak{sl}}(2,\mathbb{C})$.
==== The unitarian trick ====
The Lie algebra ${\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})$ is the Lie algebra of $\mathrm{SL}(2,\mathbb{C})\times\mathrm{SL}(2,\mathbb{C})$.
It contains the compact subgroup SU(2) × SU(2) with Lie algebra ${\mathfrak{su}}(2)\oplus{\mathfrak{su}}(2)$. The latter is a compact real form of ${\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})$.
Thus from the first statement of the unitarian trick, representations of SU(2) × SU(2) are in one-to-one correspondence with holomorphic representations of $\mathrm{SL}(2,\mathbb{C})\times\mathrm{SL}(2,\mathbb{C})$. By compactness, the Peter–Weyl theorem applies to SU(2) × SU(2), and hence orthonormality of irreducible characters may be appealed to. The irreducible unitary representations of SU(2) × SU(2) are precisely the tensor products of irreducible unitary representations of SU(2).
By appeal to simple connectedness, the second statement of the unitarian trick is applied. The objects in the following list are in one-to-one correspondence:
Holomorphic representations of $\mathrm{SL}(2,\mathbb{C})\times\mathrm{SL}(2,\mathbb{C})$
Smooth representations of SU(2) × SU(2)
Real linear representations of ${\mathfrak{su}}(2)\oplus{\mathfrak{su}}(2)$
Complex linear representations of ${\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})$
Tensor products of representations appear at the Lie algebra level as either of
$$(\pi_{1}\otimes\pi_{2})(X)=\pi_{1}(X)\otimes\operatorname{Id}+\operatorname{Id}\otimes\pi_{2}(X)\quad\text{or}\quad(\pi_{1}\otimes\pi_{2})(X,Y)=\pi_{1}(X)\otimes\operatorname{Id}+\operatorname{Id}\otimes\pi_{2}(Y),\qquad\text{(A0)}$$
where Id is the identity operator. Here, the latter interpretation, which follows from (G6), is intended. The highest weight representations of ${\mathfrak{sl}}(2,\mathbb{C})$ are indexed by μ for μ = 0, 1/2, 1, .... (The highest weights are actually 2μ = 0, 1, 2, ..., but the notation here is adapted to that of ${\mathfrak{so}}(3;1)$.) The tensor products of two such complex linear factors then form the irreducible complex linear representations of ${\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})$.
Finally, the $\mathbb{R}$-linear representations of the real forms of the far left, ${\mathfrak{so}}(3;1)$, and the far right, ${\mathfrak{sl}}(2,\mathbb{C})$, in (A1) are obtained from the $\mathbb{C}$-linear representations of ${\mathfrak{sl}}(2,\mathbb{C})\oplus{\mathfrak{sl}}(2,\mathbb{C})$ characterized in the previous paragraph.
==== The (μ, ν)-representations of sl(2, C) ====
The complex linear representations of the complexification of ${\mathfrak{sl}}(2,\mathbb{C})$, namely ${\mathfrak{sl}}(2,\mathbb{C})_{\mathbb{C}}$, obtained via the isomorphisms in (A1), stand in one-to-one correspondence with the real linear representations of ${\mathfrak{sl}}(2,\mathbb{C})$.
The set of all real linear irreducible representations of ${\mathfrak{sl}}(2,\mathbb{C})$ is thus indexed by a pair (μ, ν). The complex linear ones, corresponding precisely to the complexification of the real linear ${\mathfrak{su}}(2)$ representations, are of the form (μ, 0), while the conjugate linear ones are the (0, ν). All others are real linear only. The linearity properties follow from the canonical injection, the far right in (A1), of ${\mathfrak{sl}}(2,\mathbb{C})$ into its complexification. Representations of the form (ν, ν) or (μ, ν) ⊕ (ν, μ) are given by real matrices (the latter are not irreducible). Explicitly, the real linear (μ, ν)-representations of ${\mathfrak{sl}}(2,\mathbb{C})$ are
$$\varphi_{\mu,\nu}(X)=\left(\varphi_{\mu}\otimes{\overline{\varphi_{\nu}}}\right)(X)=\varphi_{\mu}(X)\otimes\operatorname{Id}_{\nu+1}+\operatorname{Id}_{\mu+1}\otimes{\overline{\varphi_{\nu}(X)}},\qquad X\in{\mathfrak{sl}}(2,\mathbb{C}),$$
where $\varphi_{\mu},\ \mu=0,\tfrac{1}{2},1,\tfrac{3}{2},\ldots$ are the complex linear irreducible representations of ${\mathfrak{sl}}(2,\mathbb{C})$ and ${\overline{\varphi_{\nu}}},\ \nu=0,\tfrac{1}{2},1,\tfrac{3}{2},\ldots$ their complex conjugate representations. (In the mathematics literature the labeling is usually 0, 1, 2, ..., but half-integers are chosen here to conform with the labeling for the ${\mathfrak{so}}(3;1)$ Lie algebra.) Here the tensor product is interpreted in the former sense of (A0). These representations are concretely realized below.
==== The (m, n)-representations of so(3; 1) ====
Via the displayed isomorphisms in (A1) and knowledge of the complex linear irreducible representations of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )\oplus {\mathfrak {sl}}(2,\mathbb {C} )}
upon solving for J and K, all irreducible representations of
{\displaystyle {\mathfrak {so}}(3;1)_{\mathbb {C} },}
and, by restriction, those of
{\displaystyle {\mathfrak {so}}(3;1)}
are obtained. The representations of
{\displaystyle {\mathfrak {so}}(3;1)}
obtained this way are real linear (and not complex or conjugate linear) because the algebra is not closed upon conjugation, but they are still irreducible. Since
{\displaystyle {\mathfrak {so}}(3;1)}
is semisimple, all its representations can be built up as direct sums of the irreducible ones.
Thus the finite dimensional irreducible representations of the Lorentz algebra are classified by an ordered pair of half-integers m = μ and n = ν, conventionally written as one of
{\displaystyle (m,n)\equiv \pi _{(m,n)}:{\mathfrak {so}}(3;1)\to {\mathfrak {gl}}(V),}
where V is a finite-dimensional vector space. These are, up to a similarity transformation, uniquely given by
where 1n is the n-dimensional unit matrix and
{\displaystyle \mathbf {J} ^{(n)}=\left(J_{1}^{(n)},J_{2}^{(n)},J_{3}^{(n)}\right)}
are the (2n + 1)-dimensional irreducible representations of
{\displaystyle {\mathfrak {so}}(3)\cong {\mathfrak {su}}(2)}
also termed spin matrices or angular momentum matrices. These are explicitly given as
{\displaystyle {\begin{aligned}\left(J_{1}^{(j)}\right)_{a'a}&={\frac {1}{2}}\left({\sqrt {(j-a)(j+a+1)}}\delta _{a',a+1}+{\sqrt {(j+a)(j-a+1)}}\delta _{a',a-1}\right)\\\left(J_{2}^{(j)}\right)_{a'a}&={\frac {1}{2i}}\left({\sqrt {(j-a)(j+a+1)}}\delta _{a',a+1}-{\sqrt {(j+a)(j-a+1)}}\delta _{a',a-1}\right)\\\left(J_{3}^{(j)}\right)_{a'a}&=a\delta _{a',a}\end{aligned}}}
where δ denotes the Kronecker delta. In components, with −m ≤ a, a′ ≤ m, −n ≤ b, b′ ≤ n, the representations are given by
{\displaystyle {\begin{aligned}\left(\pi _{(m,n)}\left(J_{i}\right)\right)_{a'b',ab}&=\delta _{b'b}\left(J_{i}^{(m)}\right)_{a'a}+\delta _{a'a}\left(J_{i}^{(n)}\right)_{b'b}\\\left(\pi _{(m,n)}\left(K_{i}\right)\right)_{a'b',ab}&=-i\left(\delta _{b'b}\left(J_{i}^{(m)}\right)_{a'a}-\delta _{a'a}\left(J_{i}^{(n)}\right)_{b'b}\right)\end{aligned}}}
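A minimal NumPy sketch of this construction (the helper names spin_matrices and pi_mn are mine): it builds the spin matrices from the explicit matrix elements above, assembles π(m, n)(Ji) and π(m, n)(Ki) as Kronecker products, and checks the Lorentz algebra relations numerically.

```python
import numpy as np

def spin_matrices(j):
    """The (2j+1)-dimensional spin matrices J1, J2, J3, built from the
    explicit matrix elements above (basis ordered a = j, j-1, ..., -j)."""
    d = int(round(2 * j)) + 1
    a = np.array([j - k for k in range(d)])
    Jplus = np.zeros((d, d), dtype=complex)          # raising operator J+
    for r in range(d):
        for c in range(d):
            if np.isclose(a[r], a[c] + 1):           # delta_{a', a+1}
                Jplus[r, c] = np.sqrt((j - a[c]) * (j + a[c] + 1))
    Jminus = Jplus.conj().T
    J1 = (Jplus + Jminus) / 2
    J2 = (Jplus - Jminus) / (2 * 1j)
    J3 = np.diag(a).astype(complex)
    return [J1, J2, J3]

def pi_mn(m, n):
    """pi_(m,n)(J_i) and pi_(m,n)(K_i), following the component formula
    above: J_i = J_i^(m) (x) 1 + 1 (x) J_i^(n), K_i = -i(... - ...)."""
    Jm, Jn = spin_matrices(m), spin_matrices(n)
    Im, In = np.eye(Jm[0].shape[0]), np.eye(Jn[0].shape[0])
    J = [np.kron(A, In) + np.kron(Im, B) for A, B in zip(Jm, Jn)]
    K = [-1j * (np.kron(A, In) - np.kron(Im, B)) for A, B in zip(Jm, Jn)]
    return J, K

def comm(A, B):
    return A @ B - B @ A

J, K = pi_mn(0.5, 0.5)    # the four-vector representation, dimension 4
assert np.allclose(comm(J[0], J[1]), 1j * J[2])    # [J1, J2] = i J3
assert np.allclose(comm(J[0], K[1]), 1j * K[2])    # [J1, K2] = i K3
assert np.allclose(comm(K[0], K[1]), -1j * J[2])   # [K1, K2] = -i J3
```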
==== Common representations ====
The (0, 0) representation is the one-dimensional trivial representation and is carried by relativistic scalar field theories.
Fermionic supersymmetry generators transform under one of the (0, 1/2) or (1/2, 0) representations (Weyl spinors).
The four-momentum of a particle (either massless or massive) transforms under the (1/2, 1/2) representation, a four-vector.
A physical example of a (1,1) traceless symmetric tensor field is the traceless part of the energy–momentum tensor Tμν.
==== Off-diagonal direct sums ====
Since any irreducible representation for which m ≠ n must be realized over the field of complex numbers, the direct sums of the representations (m, n) and (n, m) have particular relevance to physics, since they permit the use of linear operators over the real numbers.
(1/2, 0) ⊕ (0, 1/2) is the bispinor representation. See also Dirac spinor and Weyl spinors and bispinors below.
(1, 1/2) ⊕ (1/2, 1) is the Rarita–Schwinger field representation.
(3/2, 0) ⊕ (0, 3/2) would be the symmetry of the hypothesized gravitino. It can be obtained from the (1, 1/2) ⊕ (1/2, 1) representation.
(1, 0) ⊕ (0, 1) is the representation of a parity-invariant 2-form field (a.k.a. curvature form). The electromagnetic field tensor transforms under this representation.
=== The group ===
The approach in this section is based on theorems that, in turn, are based on the fundamental Lie correspondence. The Lie correspondence is in essence a dictionary between connected Lie groups and Lie algebras. The link between them is the exponential mapping from the Lie algebra to the Lie group, denoted
{\displaystyle \exp :{\mathfrak {g}}\to G.}
If
{\displaystyle \pi :{\mathfrak {g}}\to {\mathfrak {gl}}(V)}
for some vector space V is a representation, a representation Π of the connected component of G is defined by
This definition applies whether the resulting representation is projective or not.
==== Surjectiveness of exponential map for SO(3, 1) ====
From a practical point of view, it is important whether the first formula in (G2) can be used for all elements of the group. It holds for all
{\displaystyle X\in {\mathfrak {g}}}
, however, in the general case, e.g. for
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
, not all g ∈ G are in the image of exp.
But
{\displaystyle \exp :{\mathfrak {so}}(3;1)\to {\text{SO}}(3;1)^{+}}
is surjective. One way to show this is to make use of the isomorphism
{\displaystyle {\text{SO}}(3;1)^{+}\cong {\text{PGL}}(2,\mathbb {C} ),}
the latter being the Möbius group. It is a quotient of
{\displaystyle {\text{GL}}(n,\mathbb {C} )}
(see the linked article). The quotient map is denoted by
{\displaystyle p:{\text{GL}}(n,\mathbb {C} )\to {\text{PGL}}(2,\mathbb {C} ).}
The map
{\displaystyle \exp :{\mathfrak {gl}}(n,\mathbb {C} )\to {\text{GL}}(n,\mathbb {C} )}
is onto. Apply (Lie) with π being the differential of p at the identity. Then
{\displaystyle \forall X\in {\mathfrak {gl}}(n,\mathbb {C} ):\quad p(\exp(iX))=\exp(i\pi (X)).}
Since the left hand side is surjective (both exp and p are), the right hand side is surjective and hence
{\displaystyle \exp :{\mathfrak {pgl}}(2,\mathbb {C} )\to {\text{PGL}}(2,\mathbb {C} )}
is surjective. Finally, recycle the argument once more, but now with the known isomorphism between SO(3; 1)+ and
{\displaystyle {\text{PGL}}(2,\mathbb {C} )}
to find that exp is onto for the connected component of the Lorentz group.
==== Fundamental group ====
The Lorentz group is doubly connected, i.e. π1(SO(3; 1)) is a group with two equivalence classes of loops as its elements.
==== Projective representations ====
Since π1(SO(3; 1)+) has two elements, some representations of the Lie algebra will yield projective representations. Once it is known whether a representation is projective, formula (G2) applies to all group elements and all representations, including the projective ones — with the understanding that the representative of a group element will depend on which element in the Lie algebra (the X in (G2)) is used to represent the group element in the standard representation.
For the Lorentz group, the (m, n)-representation is projective when m + n is a half-integer. See § Spinors.
For a projective representation Π of SO(3; 1)+, it holds that
since any loop in SO(3; 1)+ traversed twice, due to the double connectedness, is contractible to a point, so that its homotopy class is that of a constant map. It follows that Π is a double-valued function. It is not possible to consistently choose a sign to obtain a continuous representation of all of SO(3; 1)+, but this is possible locally around any point.
=== The covering group SL(2, C) ===
Consider
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
as a real Lie algebra with basis
{\displaystyle \left({\frac {1}{2}}\sigma _{1},{\frac {1}{2}}\sigma _{2},{\frac {1}{2}}\sigma _{3},{\frac {i}{2}}\sigma _{1},{\frac {i}{2}}\sigma _{2},{\frac {i}{2}}\sigma _{3}\right)\equiv (j_{1},j_{2},j_{3},k_{1},k_{2},k_{3}),}
where the sigmas are the Pauli matrices. From the relations
is obtained
which are exactly of the form of the 3-dimensional version of the commutation relations for
{\displaystyle {\mathfrak {so}}(3;1)}
(see conventions and Lie algebra bases below). Thus, the map Ji ↔ ji, Ki ↔ ki, extended by linearity is an isomorphism. Since
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
is simply connected, it is the universal covering group of SO(3; 1)+.
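The claimed isomorphism can be checked at the level of structure constants. A short sketch (assuming the physics-convention relations quoted in the conventions section below) verifying that ji = σi/2 and ki = iσi/2 satisfy the so(3; 1) commutation relations:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
j = [s / 2 for s in (s1, s2, s3)]         # j_i = sigma_i / 2
k = [1j * s / 2 for s in (s1, s2, s3)]    # k_i = i sigma_i / 2

eps = np.zeros((3, 3, 3))                 # Levi-Civita symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

def comm(A, B):
    return A @ B - B @ A

for a in range(3):
    for b in range(3):
        assert np.allclose(comm(j[a], j[b]), sum(1j * eps[a, b, c] * j[c] for c in range(3)))
        assert np.allclose(comm(j[a], k[b]), sum(1j * eps[a, b, c] * k[c] for c in range(3)))
        assert np.allclose(comm(k[a], k[b]), sum(-1j * eps[a, b, c] * j[c] for c in range(3)))
```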
==== Non-surjectiveness of exponential mapping for SL(2, C) ====
The exponential mapping
{\displaystyle \exp :{\mathfrak {sl}}(2,\mathbb {C} )\to {\text{SL}}(2,\mathbb {C} )}
is not onto. The matrix
is in
{\displaystyle {\text{SL}}(2,\mathbb {C} ),}
but there is no
{\displaystyle Q\in {\mathfrak {sl}}(2,\mathbb {C} )}
such that q = exp(Q).
In general, if g is an element of a connected Lie group G with Lie algebra
{\displaystyle {\mathfrak {g}},}
then, by (Lie),
The matrix q can be written
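The displayed matrix is not reproduced in this copy; a standard counterexample, used below as an assumption, is q = −I plus a nilpotent part. The sketch exhibits one logarithm of q explicitly and shows it has nonzero trace; since q is a single Jordan block with eigenvalue −1, every logarithm of q has both eigenvalues equal to an odd multiple of iπ, so no traceless logarithm (i.e. no Q in sl(2, C)) exists.

```python
import numpy as np
from scipy.linalg import expm

# Assumed concrete counterexample: q = -I + N with N nilpotent.
q = np.array([[-1.0, 1.0], [0.0, -1.0]], dtype=complex)
N = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)

# One logarithm: since q = -(I - N) and N @ N = 0,
# L = i*pi*I + log(I - N) = i*pi*I - N satisfies expm(L) = q.
L = 1j * np.pi * np.eye(2) - N
assert np.allclose(expm(L), q)

# But tr L = 2*pi*i != 0, so L is not traceless: L is not in sl(2,C).
print(np.trace(L))   # approximately 6.2832j
```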
=== Realization of representations of SL(2, C) and sl(2, C) and their Lie algebras ===
The complex linear representations of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
and
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
are more straightforward to obtain than the
{\displaystyle {\mathfrak {so}}(3;1)}
representations. They can be (and usually are) written down from scratch. The holomorphic group representations (meaning the corresponding Lie algebra representation is complex linear) are related to the complex linear Lie algebra representations by exponentiation. The real linear representations of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
are exactly the (μ, ν)-representations. They can be exponentiated too. The (μ, 0)-representations are complex linear and are (isomorphic to) the highest weight-representations. These are usually indexed with only one integer (but half-integers are used here).
The mathematics convention is used in this section for convenience. Lie algebra elements differ by a factor of i and there is no factor of i in the exponential mapping compared to the physics convention used elsewhere. Let the basis of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
be
This choice of basis, and the notation, is standard in the mathematical literature.
==== Complex linear representations ====
The irreducible holomorphic (n + 1)-dimensional representations of
{\displaystyle {\text{SL}}(2,\mathbb {C} ),n\geqslant 2,}
can be realized on the space of homogeneous polynomials of degree n in 2 variables
{\displaystyle \mathbf {P} _{n}^{2},}
the elements of which are
{\displaystyle P{\begin{pmatrix}z_{1}\\z_{2}\\\end{pmatrix}}=c_{n}z_{1}^{n}+c_{n-1}z_{1}^{n-1}z_{2}+\cdots +c_{0}z_{2}^{n},\quad c_{0},c_{1},\ldots ,c_{n}\in \mathbb {C} .}
The action of
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
is given by
The associated
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
-action is, using (G6) and the definition above, for the basis elements of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} ),}
With a choice of basis for
{\displaystyle P\in \mathbf {P} _{n}^{2}}
, these representations become matrix Lie algebras.
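As an illustration, the following SymPy sketch realizes the Lie algebra action on homogeneous polynomials and checks the bracket relation [π(X), π(Y)] = π(H) for the standard basis H, X, Y. The sign convention (π(X)P)(z) = −(Xz) · ∇P, induced by (Π(g)P)(z) = P(g⁻¹z), is an assumption; other conventions differ by a sign, but the bracket check is unaffected.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def pi(X, P):
    """Induced Lie algebra action on polynomials in z1, z2:
    (pi(X) P)(z) = -(X z) . grad P  (assumed sign convention)."""
    v1 = -(X[0, 0] * z1 + X[0, 1] * z2)
    v2 = -(X[1, 0] * z1 + X[1, 1] * z2)
    return sp.expand(v1 * sp.diff(P, z1) + v2 * sp.diff(P, z2))

H = sp.Matrix([[1, 0], [0, -1]])
X = sp.Matrix([[0, 1], [0, 0]])
Y = sp.Matrix([[0, 0], [1, 0]])

# Check [pi(X), pi(Y)] = pi(H) on a generic monomial of P_n^2:
n, k = 5, 2
P = z1**k * z2**(n - k)
lhs = pi(X, pi(Y, P)) - pi(Y, pi(X, P))
assert sp.simplify(lhs - pi(H, P)) == 0
```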
==== Real linear representations ====
The (μ, ν)-representations are realized on a space of polynomials
{\displaystyle \mathbf {P} _{\mu ,\nu }^{2}}
in
{\displaystyle z_{1},{\overline {z_{1}}},z_{2},{\overline {z_{2}}},}
homogeneous of degree μ in
{\displaystyle z_{1},z_{2}}
and homogeneous of degree ν in
{\displaystyle {\overline {z_{1}}},{\overline {z_{2}}}.}
The representations are given by
By employing (G6) again it is found that
In particular for the basis elements,
=== Properties of the (m, n) representations ===
The (m, n) representations, defined above via (A1) (as restrictions to the real form
{\displaystyle {\mathfrak {so}}(3;1)}
) of tensor products of irreducible complex linear representations πm = μ and πn = ν of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} ),}
are irreducible, and they are the only irreducible representations.
Irreducibility follows from the unitarian trick and the fact that a representation Π of SU(2) × SU(2) is irreducible if and only if Π = Πμ ⊗ Πν, where Πμ, Πν are irreducible representations of SU(2).
Uniqueness follows from the fact that the Πm are the only irreducible representations of SU(2), which is one of the conclusions of the theorem of the highest weight.
==== Dimension ====
The (m, n) representations are (2m + 1)(2n + 1)-dimensional. This follows most easily by counting the dimensions in any concrete realization, such as the one given in the section on representations of
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
and
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )}
. For a general Lie algebra
{\displaystyle {\mathfrak {g}}}
the Weyl dimension formula,
{\displaystyle \dim \pi _{\rho }={\frac {\Pi _{\alpha \in R^{+}}\langle \alpha ,\rho +\delta \rangle }{\Pi _{\alpha \in R^{+}}\langle \alpha ,\delta \rangle }},}
applies, where R+ is the set of positive roots, ρ is the highest weight, and δ is half the sum of the positive roots. The inner product
{\displaystyle \langle \cdot ,\cdot \rangle }
is that of the Lie algebra
{\displaystyle {\mathfrak {g}},}
invariant under the action of the Weyl group on
{\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}},}
the Cartan subalgebra. The roots (really elements of
{\displaystyle {\mathfrak {h}}^{*}}
) are via this inner product identified with elements of
{\displaystyle {\mathfrak {h}}.}
For
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} ),}
the formula reduces to dim πμ = 2μ + 1 = 2m + 1, where the present notation must be taken into account. The highest weight is 2μ. By taking tensor products, the result follows.
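A small numerical check of the Weyl dimension formula for the A1 × A1 root system, assuming the standard root data (two orthogonal positive roots of squared length 2, each fundamental weight half the corresponding root):

```python
import numpy as np

pos_roots = [np.array([np.sqrt(2), 0.0]), np.array([0.0, np.sqrt(2)])]
delta = sum(pos_roots) / 2          # half the sum of the positive roots

def weyl_dim(rho):
    num = np.prod([a @ (rho + delta) for a in pos_roots])
    den = np.prod([a @ delta for a in pos_roots])
    return num / den

def highest_weight(m, n):
    # highest weight (2m, 2n) in the fundamental-weight basis
    return m * pos_roots[0] + n * pos_roots[1]

for m, n in [(0, 0), (0.5, 0), (0.5, 0.5), (1, 0.5), (1.5, 1)]:
    assert np.isclose(weyl_dim(highest_weight(m, n)), (2*m + 1) * (2*n + 1))
```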
==== Faithfulness ====
If a representation Π of a Lie group G is not faithful, then N = ker Π is a nontrivial normal subgroup. There are three relevant cases.
N is non-discrete and abelian.
N is non-discrete and non-abelian.
N is discrete. In this case N ⊂ Z, where Z is the center of G.
In the case of SO(3; 1)+, the first case is excluded since SO(3; 1)+ is semi-simple. The second case (and the first case) is excluded because SO(3; 1)+ is simple. For the third case, SO(3; 1)+ is isomorphic to the quotient
{\displaystyle {\text{SL}}(2,\mathbb {C} )/\{\pm I\}.}
But
{\displaystyle \{\pm I\}}
is the center of
{\displaystyle {\text{SL}}(2,\mathbb {C} ).}
It follows that the center of SO(3; 1)+ is trivial, and this excludes the third case. The conclusion is that every representation Π : SO(3; 1)+ → GL(V) and every projective representation Π : SO(3; 1)+ → PGL(W) for V, W finite-dimensional vector spaces are faithful.
By using the fundamental Lie correspondence, the statements and the reasoning above translate directly to Lie algebras with (abelian) nontrivial non-discrete normal subgroups replaced by (one-dimensional) nontrivial ideals in the Lie algebra, and the center of SO(3; 1)+ replaced by the center of
{\displaystyle {\mathfrak {so}}(3;1).}
The center of any semisimple Lie algebra is trivial and
{\displaystyle {\mathfrak {so}}(3;1)}
is semi-simple and simple, and hence has no non-trivial ideals.
A related fact is that if the corresponding representation of
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
is faithful, then the representation is projective. Conversely, if the representation is non-projective, then the corresponding
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
representation is not faithful, but is 2:1.
==== Non-unitarity ====
The (m, n) Lie algebra representation is not Hermitian. Accordingly, the corresponding (projective) representation of the group is never unitary. This is due to the non-compactness of the Lorentz group. In fact, a connected simple non-compact Lie group cannot have any nontrivial unitary finite-dimensional representations. There is a topological proof of this. Let u : G → GL(V), where V is finite-dimensional, be a continuous unitary representation of the non-compact connected simple Lie group G. Then u(G) ⊂ U(V) ⊂ GL(V), where U(V) is the compact subgroup of GL(V) consisting of unitary transformations of V. The kernel of u is a normal subgroup of G. Since G is simple, ker u is either all of G, in which case u is trivial, or ker u is trivial, in which case u is faithful. In the latter case u is a diffeomorphism onto its image, u(G) ≅ G, and u(G) is a Lie group. This would mean that u(G) is an embedded non-compact Lie subgroup of the compact group U(V). This is impossible with the subspace topology on u(G) ⊂ U(V), since all embedded Lie subgroups of a Lie group are closed. If u(G) were closed, it would be compact, and then G would be compact, contrary to assumption.
In the case of the Lorentz group, this can also be seen directly from the definitions. The representations of A and B used in the construction are Hermitian. This means that J is Hermitian, but K is anti-Hermitian. The non-unitarity is not a problem in quantum field theory, since the objects of concern are not required to have a Lorentz-invariant positive definite norm.
==== Restriction to SO(3) ====
The (m, n) representation is, however, unitary when restricted to the rotation subgroup SO(3), but these representations are not irreducible as representations of SO(3). A Clebsch–Gordan decomposition can be applied showing that an (m, n) representation has SO(3)-invariant subspaces of highest weight (spin) m + n, m + n − 1, ..., |m − n|, where each possible highest weight (spin) occurs exactly once. A weight subspace of highest weight (spin) j is (2j + 1)-dimensional. So for example, the (1/2, 1/2) representation has spin 1 and spin 0 subspaces of dimension 3 and 1 respectively.
Since the angular momentum operator is given by J = A + B, the highest spin in quantum mechanics of the rotation sub-representation will be (m + n)ℏ and the "usual" rules of addition of angular momenta and the formalism of 3-j symbols, 6-j symbols, etc. applies.
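The bookkeeping of this decomposition can be checked in a few lines: the dimensions of the spin subspaces j = |m − n|, ..., m + n must add up to (2m + 1)(2n + 1).

```python
def spins(m, n):
    """SO(3) spin content of (m, n): j = |m-n|, ..., m+n, each once."""
    j, out = abs(m - n), []
    while j <= m + n:
        out.append(j)
        j += 1
    return out

for m, n in [(0.5, 0.5), (1, 0.5), (1, 1), (1.5, 1)]:
    assert sum(2*j + 1 for j in spins(m, n)) == (2*m + 1) * (2*n + 1)

print(spins(0.5, 0.5))   # [0.0, 1.0]: the spin-0 and spin-1 subspaces
```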
==== Spinors ====
It is the SO(3)-invariant subspaces of the irreducible representations that determine whether a representation has spin. From the above paragraph, it is seen that the (m, n) representation has spin if m + n is half-integer. The simplest are (1/2, 0) and (0, 1/2), the Weyl-spinors of dimension 2. Then, for example, (0, 3/2) and (1, 1/2) are spin representations of dimensions 2⋅3/2 + 1 = 4 and (2 + 1)(2⋅1/2 + 1) = 6 respectively. According to the above paragraph, there are subspaces with spin both 3/2 and 1/2 in the last two cases, so these representations are unlikely to represent a single physical particle, which must be well-behaved under SO(3). It cannot be ruled out in general, however, that representations with multiple SO(3) subrepresentations with different spin can represent physical particles with well-defined spin. It may be that there is a suitable relativistic wave equation that projects out unphysical components, leaving only a single spin.
Construction of pure spin n/2 representations for any n (under SO(3)) from the irreducible representations involves taking tensor products of the Dirac-representation with a non-spin representation, extraction of a suitable subspace, and finally imposing differential constraints.
==== Dual representations ====
The following theorems are applied to examine whether the dual representation of an irreducible representation is isomorphic to the original representation:
The set of weights of the dual representation of an irreducible representation of a semisimple Lie algebra is, including multiplicities, the negative of the set of weights for the original representation.
Two irreducible representations are isomorphic if and only if they have the same highest weight.
For each semisimple Lie algebra there exists a unique element w0 of the Weyl group such that if μ is a dominant integral weight, then w0 ⋅ (−μ) is again a dominant integral weight.
If
{\displaystyle \pi _{\mu _{0}}}
is an irreducible representation with highest weight μ0, then
{\displaystyle \pi _{\mu _{0}}^{*}}
has highest weight w0 ⋅ (−μ0).
Here, the elements of the Weyl group are considered as orthogonal transformations, acting by matrix multiplication, on the real vector space of roots. If −I is an element of the Weyl group of a semisimple Lie algebra, then w0 = −I. In the case of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} ),}
the Weyl group is W = {I, −I}. It follows that each πμ, μ = 0, 1, ... is isomorphic to its dual
{\displaystyle \pi _{\mu }^{*}.}
The root system of
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )\oplus {\mathfrak {sl}}(2,\mathbb {C} )}
is shown in the figure to the right. The Weyl group is generated by
{\displaystyle \{w_{\gamma }\}}
where
{\displaystyle w_{\gamma }}
is reflection in the plane orthogonal to γ as γ ranges over all roots. Inspection shows that wα ⋅ wβ = −I so −I ∈ W. Using the fact that if π, σ are Lie algebra representations and π ≅ σ, then Π ≅ Σ, the conclusion for SO(3; 1)+ is
{\displaystyle \pi _{m,n}^{*}\cong \pi _{m,n},\quad \Pi _{m,n}^{*}\cong \Pi _{m,n},\quad 2m,2n\in \mathbf {N} .}
==== Complex conjugate representations ====
If π is a representation of a Lie algebra, then
{\displaystyle {\overline {\pi }}}
is a representation, where the bar denotes entry-wise complex conjugation in the representative matrices. This follows from the fact that complex conjugation commutes with addition and multiplication. In general, every irreducible representation π of
{\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )}
can be written uniquely as π = π+ + π−, where
{\displaystyle \pi ^{\pm }(X)={\frac {1}{2}}\left(\pi (X)\pm i\pi \left(i^{-1}X\right)\right),}
with
{\displaystyle \pi ^{+}}
holomorphic (complex linear) and
{\displaystyle \pi ^{-}}
anti-holomorphic (conjugate linear). For
{\displaystyle {\mathfrak {sl}}(2,\mathbb {C} ),}
since
{\displaystyle \pi _{\mu }}
is holomorphic,
{\displaystyle {\overline {\pi _{\mu }}}}
is anti-holomorphic. Direct examination of the explicit expressions for
{\displaystyle \pi _{\mu ,0}}
and
{\displaystyle \pi _{0,\nu }}
in equation (S8) below shows that they are holomorphic and anti-holomorphic respectively. Closer examination of the expression (S8) also allows for identification of
{\displaystyle \pi ^{+}}
and
{\displaystyle \pi ^{-}}
for
{\displaystyle \pi _{\mu ,\nu }}
as
{\displaystyle \pi _{\mu ,\nu }^{+}=\pi _{\mu }^{\oplus _{\nu +1}},\qquad \pi _{\mu ,\nu }^{-}={\overline {\pi _{\nu }^{\oplus _{\mu +1}}}}.}
Using the above identities (interpreted as pointwise addition of functions), for SO(3; 1)+ yields
{\displaystyle {\begin{aligned}{\overline {\pi _{m,n}}}&={\overline {\pi _{m,n}^{+}+\pi _{m,n}^{-}}}={\overline {\pi _{m}^{\oplus _{2n+1}}}}+{\overline {{\overline {\pi _{n}}}^{\oplus _{2m+1}}}}\\&=\pi _{n}^{\oplus _{2m+1}}+{\overline {\pi _{m}}}^{\oplus _{2n+1}}=\pi _{n,m}^{+}+\pi _{n,m}^{-}=\pi _{n,m}\\&&&2m,2n\in \mathbb {N} \\{\overline {\Pi _{m,n}}}&=\Pi _{n,m}\end{aligned}}}
where the statement for the group representations follows from the identity {\textstyle \exp({\overline {X}})={\overline {\exp(X)}}}. It follows that the irreducible representations (m, n) have real matrix representatives if and only if m = n. Reducible representations of the form (m, n) ⊕ (n, m) have real matrices too.
=== The adjoint representation, the Clifford algebra, and the Dirac spinor representation ===
In general representation theory, if (π, V) is a representation of a Lie algebra
{\displaystyle {\mathfrak {g}},}
then there is an associated representation of
{\displaystyle {\mathfrak {g}},}
on End(V), also denoted π, given by
Likewise, a representation (Π, V) of a group G yields a representation Π on End(V) of G, still denoted Π, given by
If π and Π are the standard representations on
{\displaystyle \mathbb {R} ^{4}}
and if the action is restricted to
{\displaystyle {\mathfrak {so}}(3,1)\subset {\text{End}}(\mathbb {R} ^{4}),}
then the two above representations are the adjoint representation of the Lie algebra and the adjoint representation of the group respectively. The corresponding representations (some
{\displaystyle \mathbb {R} ^{n}}
or
{\displaystyle \mathbb {C} ^{n}}
) always exist for any matrix Lie group, and are paramount for investigation of the representation theory in general, and for any given Lie group in particular.
Applying this to the Lorentz group, if (Π, V) is a projective representation, then direct calculation using (G5) shows that the induced representation on End(V) is a proper representation, i.e. a representation without phase factors.
In quantum mechanics this means that if (π, H) or (Π, H) is a representation acting on some Hilbert space H, then the corresponding induced representation acts on the set of linear operators on H. As an example, the induced representation of the projective spin (1/2, 0) ⊕ (0, 1/2) representation on End(H) is the non-projective 4-vector (1/2, 1/2) representation.
For simplicity, consider only the "discrete part" of End(H), that is, given a basis for H, the set of constant matrices of various dimension, including possibly infinite dimensions. The induced 4-vector representation of above on this simplified End(H) has an invariant 4-dimensional subspace that is spanned by the four gamma matrices. (The metric convention is different in the linked article.) In a corresponding way, the complete Clifford algebra of spacetime,
{\displaystyle {\mathcal {Cl}}_{3,1}(\mathbb {R} ),}
whose complexification is
{\displaystyle {\text{M}}(4,\mathbb {C} ),}
generated by the gamma matrices, decomposes as a direct sum of representation spaces of a scalar irreducible representation (irrep), the (0, 0), a pseudoscalar irrep, also the (0, 0), but with parity inversion eigenvalue −1 (see the next section below), the already mentioned vector irrep, (1/2, 1/2), a pseudovector irrep, (1/2, 1/2) with parity inversion eigenvalue +1 (not −1), and a tensor irrep, (1, 0) ⊕ (0, 1). The dimensions add up to 1 + 1 + 4 + 4 + 6 = 16. In other words,
where, as is customary, a representation is confused with its representation space.
==== The (1/2, 0) ⊕ (0, 1/2) spin representation ====
The six-dimensional representation space of the tensor (1, 0) ⊕ (0, 1)-representation inside
{\displaystyle {\mathcal {Cl}}_{3,1}(\mathbb {R} )}
has two roles. The
where
{\displaystyle \gamma ^{0},\ldots ,\gamma ^{3}\in {\mathcal {Cl}}_{3,1}(\mathbb {R} )}
are the gamma matrices. The sigmas, only 6 of which are non-zero due to the antisymmetry of the bracket, span the tensor representation space. Moreover, they have the commutation relations of the Lorentz Lie algebra,
and hence constitute a representation (in addition to spanning a representation space) sitting inside
{\displaystyle {\mathcal {Cl}}_{3,1}(\mathbb {R} ),}
the (1/2, 0) ⊕ (0, 1/2) spin representation. For details, see bispinor and Dirac algebra.
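A hedged numerical illustration: the sketch below uses the Dirac-basis gamma matrices with metric η = diag(1, −1, −1, −1), a common quantum field theory convention that is not necessarily the one used elsewhere in this article. It verifies the Clifford relation, counts the six independent sigmas, and checks that σμν = (i/4)[γμ, γν] satisfies the Lorentz algebra as written in that convention.

```python
import numpy as np
from itertools import product

I2, Z = np.eye(2), np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
g0 = np.block([[I2, Z], [Z, -I2]])
g = [g0] + [np.block([[Z, si], [-si, Z]]) for si in s]

# Clifford relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
for mu, nu in product(range(4), repeat=2):
    assert np.allclose(g[mu] @ g[nu] + g[nu] @ g[mu], 2 * eta[mu, nu] * np.eye(4))

# sigma^{mu nu} = (i/4)[gamma^mu, gamma^nu]
sig = [[0.25j * (g[mu] @ g[nu] - g[nu] @ g[mu]) for nu in range(4)] for mu in range(4)]
nonzero = [(mu, nu) for mu in range(4) for nu in range(mu + 1, 4)
           if not np.allclose(sig[mu][nu], 0)]
assert len(nonzero) == 6   # six independent sigmas, by antisymmetry

# Lorentz algebra in this convention:
# [s^{mn}, s^{rs}] = i(eta^{nr} s^{ms} - eta^{mr} s^{ns} - eta^{ns} s^{mr} + eta^{ms} s^{nr})
for m_, n_, r_, s_ in product(range(4), repeat=4):
    lhs = sig[m_][n_] @ sig[r_][s_] - sig[r_][s_] @ sig[m_][n_]
    rhs = 1j * (eta[n_, r_] * sig[m_][s_] - eta[m_, r_] * sig[n_][s_]
                - eta[n_, s_] * sig[m_][r_] + eta[m_, s_] * sig[n_][r_])
    assert np.allclose(lhs, rhs)
```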
The conclusion is that every element of the complexified
{\displaystyle {\mathcal {Cl}}_{3,1}(\mathbb {R} )}
in End(H) (i.e. every complex 4×4 matrix) has well defined Lorentz transformation properties. In addition, it has a spin-representation of the Lorentz Lie algebra, which upon exponentiation becomes a spin representation of the group, acting on
{\displaystyle \mathbb {C} ^{4},}
making it a space of bispinors.
=== Reducible representations ===
There is a multitude of other representations that can be deduced from the irreducible ones, such as those obtained by taking direct sums, tensor products, and quotients of the irreducible representations. Other methods of obtaining representations include the restriction of a representation of a larger group containing the Lorentz group, e.g.
{\displaystyle {\text{GL}}(n,\mathbb {R} )}
and the Poincaré group. These representations are in general not irreducible.
The Lorentz group and its Lie algebra have the complete reducibility property. This means that every representation reduces to a direct sum of irreducible representations. The reducible representations will therefore not be discussed.
=== Space inversion and time reversal ===
The (possibly projective) (m, n) representation is irreducible as a representation of SO(3; 1)+, the identity component of the Lorentz group, in physics terminology the proper orthochronous Lorentz group. If m = n it can be extended to a representation of all of O(3; 1), the full Lorentz group, including space parity inversion and time reversal. The representations (m, n) ⊕ (n, m) can be extended likewise.
==== Space parity inversion ====
For space parity inversion, the adjoint action AdP of P ∈ SO(3; 1) on
s
o
(
3
;
1
)
{\displaystyle {\mathfrak {so}}(3;1)}
is considered, where P is the standard representative of space parity inversion, P = diag(1, −1, −1, −1), given by
It is these properties of K and J under P that motivate the terms vector for K and pseudovector or axial vector for J. In a similar way, if π is any representation of
{\displaystyle {\mathfrak {so}}(3;1)}
and Π is its associated group representation, then Π(SO(3; 1)+) acts on the representation of π by the adjoint action, π(X) ↦ Π(g) π(X) Π(g)−1 for
{\displaystyle X\in {\mathfrak {so}}(3;1),}
g ∈ SO(3; 1)+. If P is to be included in Π, then consistency with (F1) requires that
holds, where A and B are defined as in the first section. This can hold only if Ai and Bi have the same dimensions, i.e. only if m = n. When m ≠ n then (m, n) ⊕ (n, m) can be extended to an irreducible representation of O(3; 1)+, the orthochronous Lorentz group. The parity reversal representative Π(P) does not come automatically with the general construction of the (m, n) representations. It must be specified separately. The matrix β = i γ0 (or a multiple of modulus 1 times it) may be used in the (1/2, 0) ⊕ (0, 1/2) representation.
If parity is included with a minus sign (the 1×1 matrix [−1]) in the (0,0) representation, it is called a pseudoscalar representation.
==== Time reversal ====
Time reversal T = diag(−1, 1, 1, 1) acts similarly on
{\displaystyle {\mathfrak {so}}(3;1)}
by
By explicitly including a representative for T, as well as one for P, a representation of the full Lorentz group O(3; 1) is obtained. A subtle problem appears however in application to physics, in particular quantum mechanics. When considering the full Poincaré group, four more generators, the Pμ, in addition to the Ji and Ki generate the group. These are interpreted as generators of translations. The time-component P0 is the Hamiltonian H. The operator T satisfies the relation
in analogy to the relations above with
{\displaystyle {\mathfrak {so}}(3;1)}
replaced by the full Poincaré algebra. By just cancelling the i's, the result THT−1 = −H would imply that for every state Ψ with positive energy E in a Hilbert space of quantum states with time-reversal invariance, there would be a state Π(T−1)Ψ with negative energy −E. Such states do not exist. The operator Π(T) is therefore chosen antilinear and antiunitary, so that it anticommutes with i, resulting in THT−1 = H, and its action on Hilbert space likewise becomes antilinear and antiunitary. It may be expressed as the composition of complex conjugation with multiplication by a unitary matrix. This is mathematically sound, see Wigner's theorem, but with very strict requirements on terminology, Π is not a representation.
When constructing theories such as QED, which is invariant under space parity and time reversal, Dirac spinors may be used, while theories that are not, such as the electroweak theory, must be formulated in terms of Weyl spinors. The Dirac representation, (1/2, 0) ⊕ (0, 1/2), is usually taken to include both space parity inversion and time reversal. Without space parity inversion, it is not an irreducible representation.
The third discrete symmetry entering in the CPT theorem along with P and T, charge conjugation symmetry C, has nothing directly to do with Lorentz invariance.
== Action on function spaces ==
If V is a vector space of functions of a finite number of variables n, then the action on a scalar function
{\displaystyle f\in V}
given by
produces another function Πf ∈ V. Here Πx is an n-dimensional representation, and Π is a possibly infinite-dimensional representation. A special case of this construction is when V is a space of functions defined on a linear group G itself, viewed as an n-dimensional manifold embedded in
{\displaystyle \mathbb {R} ^{m^{2}}}
(with m the dimension of the matrices). This is the setting in which the Peter–Weyl theorem and the Borel–Weil theorem are formulated. The former demonstrates the existence of a Fourier decomposition of functions on a compact group into characters of finite-dimensional representations. The latter theorem, providing more explicit representations, makes use of the unitarian trick to yield representations of complex non-compact groups, e.g.
SL
(
2
,
C
)
.
{\displaystyle {\text{SL}}(2,\mathbb {C} ).}
The following exemplifies action of the Lorentz group and the rotation subgroup on some function spaces.
=== Euclidean rotations ===
The subgroup SO(3) of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space
{\displaystyle L^{2}\left(\mathbb {S} ^{2}\right)=\operatorname {span} \left\{Y_{m}^{l},l\in \mathbb {N} ^{+},-l\leqslant m\leqslant l\right\},}
where
{\displaystyle Y_{m}^{l}}
are the spherical harmonics. An arbitrary square integrable function f on the unit sphere can be expressed as
where the flm are generalized Fourier coefficients.
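A short SciPy sketch computing such generalized Fourier coefficients by direct integration (the helper coeff is mine, and not from any particular reference; f = cos θ is proportional to the l = 1, m = 0 spherical harmonic, so only that coefficient survives):

```python
import numpy as np
from scipy.special import sph_harm      # older SciPy API; newer releases offer sph_harm_y
from scipy.integrate import dblquad

def coeff(f, l, m):
    """f_{lm} = <Y_m^l, f>; theta = polar angle, phi = azimuthal angle."""
    integrand = lambda theta, phi: (
        np.conj(sph_harm(m, l, phi, theta)) * f(theta, phi) * np.sin(theta)
    )
    re = dblquad(lambda t, p: integrand(t, p).real, 0, 2 * np.pi, 0, np.pi)[0]
    im = dblquad(lambda t, p: integrand(t, p).imag, 0, 2 * np.pi, 0, np.pi)[0]
    return re + 1j * im

f = lambda theta, phi: np.cos(theta)
print(coeff(f, 1, 0))   # ~ 2.047 = sqrt(4*pi/3), since cos(theta) ~ Y_0^1
print(coeff(f, 0, 0))   # ~ 0
```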
The Lorentz group action restricts to that of SO(3) and is expressed as
where the Dl are obtained from the representatives of odd dimension of the generators of rotation.
=== The Möbius group ===
The identity component of the Lorentz group is isomorphic to the Möbius group M. This group can be thought of as conformal mappings of either the complex plane or, via stereographic projection, the Riemann sphere. In this way, the Lorentz group itself can be thought of as acting conformally on the complex plane or on the Riemann sphere.
In the plane, a Möbius transformation characterized by the complex numbers a, b, c, d acts on the plane according to
and can be represented by complex matrices
since multiplication by a nonzero complex scalar does not change f. These are elements of
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
and are unique up to a sign (since ±Πf give the same f), hence
{\displaystyle {\text{SL}}(2,\mathbb {C} )/\{\pm I\}\cong {\text{SO}}(3;1)^{+}.}
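A minimal sketch of this two-to-one correspondence: an SL(2, C) matrix and its negative define the same Möbius transformation.

```python
import numpy as np

def mobius(M):
    """The Mobius transformation z -> (az + b)/(cz + d) encoded by
    the matrix M = [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    return lambda z: (a * z + b) / (c * z + d)

M = np.array([[2, 1j], [0, 0.5]])          # det M = 1, so M is in SL(2,C)
assert np.isclose(np.linalg.det(M), 1)

f, g = mobius(M), mobius(-M)               # M and -M give the same map
for z in [0.3 + 0.7j, -2.0, 5j]:
    assert np.isclose(f(z), g(z))
```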
=== The Riemann P-functions ===
The Riemann P-functions, solutions of Riemann's differential equation, are an example of a set of functions that transform among themselves under the action of the Lorentz group. The Riemann P-functions are expressed as
where the a, b, c, α, β, γ, α′, β′, γ′ are complex constants. The P-function on the right hand side can be expressed using standard hypergeometric functions. The connection is
The set of constants 0, ∞, 1 in the upper row on the left hand side are the regular singular points of Gauss's hypergeometric equation. Its exponents, i.e. solutions of the indicial equation, for expansion around the singular point 0 are 0 and 1 − c, corresponding to the two linearly independent solutions, and for expansion around the singular point 1 they are 0 and c − a − b. Similarly, the exponents for ∞ are a and b for the two solutions.
One has thus
where the condition (sometimes called Riemann's identity)
{\displaystyle \alpha +\alpha '+\beta +\beta '+\gamma +\gamma '=1}
on the exponents of the solutions of Riemann's differential equation has been used to define γ′.
The first set of constants on the left hand side in (T1), a, b, c denotes the regular singular points of Riemann's differential equation. The second set, α, β, γ, are the corresponding exponents at a, b, c for one of the two linearly independent solutions, and, accordingly, α′, β′, γ′ are exponents at a, b, c for the second solution.
Define an action of the Lorentz group on the set of all Riemann P-functions by first setting
where A, B, C, D are the entries in
for Λ = p(λ) ∈ SO(3; 1)+ a Lorentz transformation.
Define
where P is a Riemann P-function. The resulting function is again a Riemann P-function. The effect of the Möbius transformation of the argument is that of shifting the poles to new locations, hence changing the critical points, but there is no change in the exponents of the differential equation the new function satisfies. The new function is expressed as
where
== Infinite-dimensional unitary representations ==
=== History ===
The Lorentz group SO(3; 1)+ and its double cover
SL
(
2
,
C
)
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
also have infinite dimensional unitary representations, studied independently by Bargmann (1947), Gelfand & Naimark (1947) and Harish-Chandra (1947) at the instigation of Paul Dirac. This trail of development began with Dirac (1936), in which he devised matrices U and B necessary for the description of higher spin (compare Dirac matrices), elaborated upon by Fierz (1939), see also Fierz & Pauli (1939), and proposed precursors of the Bargmann-Wigner equations. In Dirac (1945) he proposed a concrete infinite-dimensional representation space whose elements were called expansors as a generalization of tensors. These ideas were incorporated by Harish–Chandra and expanded with expinors as an infinite-dimensional generalization of spinors in his 1947 paper.
The Plancherel formula for these groups was first obtained by Gelfand and Naimark through involved calculations. The treatment was subsequently considerably simplified by Harish-Chandra (1951) and Gelfand & Graev (1953), based on an analogue for
SL
(
2
,
C
)
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
of the integration formula of Hermann Weyl for compact Lie groups. Elementary accounts of this approach can be found in Rühl (1970) and Knapp (2001).
The theory of spherical functions for the Lorentz group, required for harmonic analysis on the hyperboloid model of 3-dimensional hyperbolic space sitting in Minkowski space is considerably easier than the general theory. It only involves representations from the spherical principal series and can be treated directly, because in radial coordinates the Laplacian on the hyperboloid is equivalent to the Laplacian on
{\displaystyle \mathbb {R} .}
This theory is discussed in Takahashi (1963), Helgason (1968), Helgason (2000) and the posthumous text of Jorgenson & Lang (2008).
=== Principal series for SL(2, C) ===
The principal series, or unitary principal series, are the unitary representations induced from the one-dimensional representations of the lower triangular subgroup B of
{\displaystyle G={\text{SL}}(2,\mathbb {C} ).}
Since the one-dimensional representations of B correspond to the representations of the diagonal matrices, with non-zero complex entries z and z−1, they thus have the form
{\displaystyle \chi _{\nu ,k}{\begin{pmatrix}z&0\\c&z^{-1}\end{pmatrix}}=r^{i\nu }e^{ik\theta },}
for k an integer, ν real and with z = reiθ. The representations are irreducible; the only repetitions, i.e. isomorphisms of representations, occur when k is replaced by −k. By definition the representations are realized on L2 sections of line bundles on
{\displaystyle G/B=\mathbb {S} ^{2},}
which is isomorphic to the Riemann sphere. When k = 0, these representations constitute the so-called spherical principal series.
The restriction of a principal series to the maximal compact subgroup K = SU(2) of G can also be realized as an induced representation of K using the identification G/B = K/T, where T = B ∩ K is the maximal torus in K consisting of diagonal matrices with |z| = 1. It is the representation induced from the 1-dimensional representation zk of T, and is independent of ν. By Frobenius reciprocity, on K they decompose as a direct sum of the irreducible representations of K with dimensions |k| + 2m + 1 with m a non-negative integer.
Using the identification between the Riemann sphere minus a point and
{\displaystyle \mathbb {C} ,}
the principal series can be defined directly on
{\displaystyle L^{2}(\mathbb {C} )}
by the formula
{\displaystyle \pi _{\nu ,k}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}^{-1}f(z)=|cz+d|^{-2-i\nu }\left({cz+d \over |cz+d|}\right)^{-k}f\left({az+b \over cz+d}\right).}
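A direct transcription of this formula as a sketch (the function name principal_series is mine; the matrix passed in plays the role of the inverted group element in the formula above). Note that the modulus of |cz + d|^(−2−iν) is |cz + d|^(−2), which is exactly the Jacobian factor making the action unitary on L2(C).

```python
import numpy as np

def principal_series(nu, k, g_inv, f):
    """The action pi_{nu,k} on f in L^2(C), per the formula above;
    g_inv = [[a, b], [c, d]] is the matrix appearing inverted."""
    (a, b), (c, d) = g_inv
    def pf(z):
        w = c * z + d
        return abs(w) ** (-2 - 1j * nu) * (w / abs(w)) ** (-k) * f((a * z + b) / w)
    return pf

# Example: a unipotent element acting on a Gaussian
f = lambda z: np.exp(-abs(z) ** 2)
g_inv = np.array([[1, 1j], [0, 1]])      # here w = 1, so only a translation acts
pf = principal_series(0.7, 2, g_inv, f)
print(pf(0.5))                           # = f(0.5 + 1j) = exp(-1.25)
```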
Irreducibility can be checked in a variety of ways:
The representation is already irreducible on B. This can be seen directly, but is also a special case of general results on irreducibility of induced representations due to François Bruhat and George Mackey, relying on the Bruhat decomposition G = B ∪ BsB where s is the Weyl group element
{\displaystyle {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}}.
The action of the Lie algebra
{\displaystyle {\mathfrak {g}}}
of G on the algebraic direct sum of the irreducible subspaces of K can be computed explicitly, and it can be verified directly that the lowest-dimensional subspace generates this direct sum as a
{\displaystyle {\mathfrak {g}}}
-module.
=== Complementary series for SL(2, C) ===
For 0 < t < 2, the complementary series is defined on
{\displaystyle L^{2}(\mathbb {C} )}
with the inner product
{\displaystyle (f,g)_{t}=\iint {\frac {f(z){\overline {g(w)}}}{|z-w|^{2-t}}}\,dz\,dw,}
with the action given by
{\displaystyle \pi _{t}{\begin{pmatrix}a&b\\c&d\end{pmatrix}}^{-1}f(z)=|cz+d|^{-2-t}f\left({az+b \over cz+d}\right).}
The representations in the complementary series are irreducible and pairwise non-isomorphic. As a representation of K, each is isomorphic to the Hilbert space direct sum of all the odd dimensional irreducible representations of K = SU(2). Irreducibility can be proved by analyzing the action of
{\displaystyle {\mathfrak {g}}}
on the algebraic sum of these subspaces or directly without using the Lie algebra.
=== Plancherel theorem for SL(2, C) ===
The only irreducible unitary representations of
{\displaystyle {\text{SL}}(2,\mathbb {C} )}
are the principal series, the complementary series and the trivial representation.
Since −I acts as (−1)k on the principal series and trivially on the remainder, these will give all the irreducible unitary representations of the Lorentz group, provided k is taken to be even.
To decompose the left regular representation of G on
{\displaystyle L^{2}(G)}
only the principal series are required. This immediately yields the decomposition on the subrepresentations
{\displaystyle L^{2}(G/\{\pm I\}),}
the left regular representation of the Lorentz group, and
{\displaystyle L^{2}(G/K),}
the regular representation on 3-dimensional hyperbolic space. (The former only involves principal series representations with k even and the latter only those with k = 0.)
The left and right regular representation λ and ρ are defined on
{\displaystyle L^{2}(G)}
by
{\displaystyle {\begin{aligned}(\lambda (g)f)(x)&=f\left(g^{-1}x\right)\\(\rho (g)f)(x)&=f(xg)\end{aligned}}}
Now if f is an element of Cc(G), the operator
{\displaystyle \pi _{\nu ,k}(f)}
defined by
{\displaystyle \pi _{\nu ,k}(f)=\int _{G}f(g)\pi (g)\,dg}
is Hilbert–Schmidt. Define a Hilbert space H by
{\displaystyle H=\bigoplus _{k\geqslant 0}{\text{HS}}\left(L^{2}(\mathbb {C} )\right)\otimes L^{2}\left(\mathbb {R} ,c_{k}{\sqrt {\nu ^{2}+k^{2}}}d\nu \right),}
where
{\displaystyle c_{k}={\begin{cases}{\frac {1}{4\pi ^{3/2}}}&k=0\\{\frac {1}{(2\pi )^{3/2}}}&k\neq 0\end{cases}}}
and
{\displaystyle {\text{HS}}\left(L^{2}(\mathbb {C} )\right)}
denotes the Hilbert space of Hilbert–Schmidt operators on
{\displaystyle L^{2}(\mathbb {C} ).}
Then the map U defined on Cc(G) by
{\displaystyle U(f)(\nu ,k)=\pi _{\nu ,k}(f)}
extends to a unitary of
{\displaystyle L^{2}(G)}
onto H.
The map U satisfies the intertwining property
{\displaystyle U(\lambda (x)\rho (y)f)(\nu ,k)=\pi _{\nu ,k}(x)^{-1}\pi _{\nu ,k}(f)\pi _{\nu ,k}(y).}
If f1, f2 are in Cc(G) then by unitarity
{\displaystyle (f_{1},f_{2})=\sum _{k\geqslant 0}c_{k}^{2}\int _{-\infty }^{\infty }\operatorname {Tr} \left(\pi _{\nu ,k}(f_{1})\pi _{\nu ,k}(f_{2})^{*}\right)\left(\nu ^{2}+k^{2}\right)\,d\nu .}
Thus if
{\displaystyle f=f_{1}*f_{2}^{*}}
denotes the convolution of
{\displaystyle f_{1}}
and
{\displaystyle f_{2}^{*},}
and
{\displaystyle f_{2}^{*}(g)={\overline {f_{2}(g^{-1})}},}
then
{\displaystyle f(1)=\sum _{k\geqslant 0}c_{k}^{2}\int _{-\infty }^{\infty }\operatorname {Tr} \left(\pi _{\nu ,k}(f)\right)\left(\nu ^{2}+k^{2}\right)\,d\nu .}
The last two displayed formulas are usually referred to as the Plancherel formula and the Fourier inversion formula respectively.
The Plancherel formula extends to all
{\displaystyle f_{i}\in L^{2}(G).}
By a theorem of Jacques Dixmier and Paul Malliavin, every smooth compactly supported function on
{\displaystyle G}
is a finite sum of convolutions of similar functions, so the inversion formula holds for all such f. It can be extended to much wider classes of functions satisfying mild differentiability conditions.
=== Classification of representations of SO(3, 1) ===
The strategy followed in the classification of the irreducible infinite-dimensional representations is, in analogy to the finite-dimensional case, to assume they exist, and to investigate their properties. Thus first assume that an irreducible strongly continuous infinite-dimensional representation ΠH on a Hilbert space H of SO(3; 1)+ is at hand. Since SO(3) is a subgroup, ΠH is a representation of it as well. Each irreducible subrepresentation of SO(3) is finite-dimensional, and the SO(3) representation is reducible into a direct sum of irreducible finite-dimensional unitary representations of SO(3) if ΠH is unitary.
The steps are the following:
Choose a suitable basis of common eigenvectors of J2 and J3.
Compute matrix elements of J1, J2, J3 and K1, K2, K3.
Enforce Lie algebra commutation relations.
Require unitarity together with orthonormality of the basis.
==== Step 1 ====
One suitable choice of basis and labeling is given by
{\displaystyle \left|j_{0}\,j_{1};j\,m\right\rangle .}
If this were a finite-dimensional representation, then j0 would correspond to the lowest occurring eigenvalue j(j + 1) of J2 in the representation, equal to |m − n|, and j1 would correspond to the highest occurring eigenvalue, equal to m + n. In the infinite-dimensional case, j0 ≥ 0 retains this meaning, but j1 does not. For simplicity, it is assumed that a given j occurs at most once in a given representation (this is the case for finite-dimensional representations), and it can be shown that this assumption can be avoided (with a slightly more complicated calculation) with the same results.
==== Step 2 ====
The next step is to compute the matrix elements of the operators J1, J2, J3 and K1, K2, K3 forming the basis of the Lie algebra of
{\displaystyle {\mathfrak {so}}(3;1).}
The matrix elements of
{\displaystyle J_{\pm }=J_{1}\pm iJ_{2}}
and
{\displaystyle J_{3}}
(the complexified Lie algebra is understood) are known from the representation theory of the rotation group, and are given by
{\displaystyle {\begin{aligned}\left\langle j\,m\right|J_{+}\left|j\,m-1\right\rangle =\left\langle j\,m-1\right|J_{-}\left|j\,m\right\rangle &={\sqrt {(j+m)(j-m+1)}},\\\left\langle j\,m\right|J_{3}\left|j\,m\right\rangle &=m,\end{aligned}}}
where the labels j0 and j1 have been dropped since they are the same for all basis vectors in the representation.
Due to the commutation relations
{\displaystyle [J_{i},K_{j}]=i\epsilon _{ijk}K_{k},}
the triple (K1, K2, K3) ≡ K is a vector operator and the Wigner–Eckart theorem applies for computation of matrix elements between the states represented by the chosen basis. The matrix elements of
{\displaystyle {\begin{aligned}K_{0}^{(1)}&=K_{3},\\K_{\pm 1}^{(1)}&=\mp {\frac {1}{\sqrt {2}}}(K_{1}\pm iK_{2}),\end{aligned}}}
where the superscript (1) signifies that the defined quantities are the components of a spherical tensor operator of rank k = 1 (which explains the factor √2 as well) and the subscripts 0, ±1 are referred to as q in formulas below, are given by
{\displaystyle {\begin{aligned}\left\langle j'm'\left|K_{0}^{(1)}\right|j\,m\right\rangle &=\left\langle j'\,m'\,k=1\,q=0|j\,m\right\rangle \left\langle j\left\|K^{(1)}\right\|j'\right\rangle ,\\\left\langle j'm'\left|K_{\pm 1}^{(1)}\right|j\,m\right\rangle &=\left\langle j'\,m'\,k=1\,q=\pm 1|j\,m\right\rangle \left\langle j\left\|K^{(1)}\right\|j'\right\rangle .\end{aligned}}}
Here the first factors on the right hand sides are Clebsch–Gordan coefficients for coupling j′ with k to get j. The second factors are the reduced matrix elements. They do not depend on m, m′ or q, but depend on j, j′ and, of course, K. For a complete list of non-vanishing equations, see Harish-Chandra (1947, p. 375).
==== Step 3 ====
The next step is to demand that the Lie algebra relations hold, i.e. that
{\displaystyle [K_{\pm },K_{3}]=\pm J_{\pm },\quad [K_{+},K_{-}]=-2J_{3}.}
This results in a set of equations for which the solutions are
{\displaystyle {\begin{aligned}\left\langle j\left\|K^{(1)}\right\|j\right\rangle &=i{\frac {j_{1}j_{0}}{\sqrt {j(j+1)}}},\\\left\langle j\left\|K^{(1)}\right\|j-1\right\rangle &=-B_{j}\xi _{j}{\sqrt {j(2j-1)}},\\\left\langle j-1\left\|K^{(1)}\right\|j\right\rangle &=B_{j}\xi _{j}^{-1}{\sqrt {j(2j+1)}},\end{aligned}}}
where
{\displaystyle B_{j}={\sqrt {\frac {(j^{2}-j_{0}^{2})(j^{2}-j_{1}^{2})}{j^{2}(4j^{2}-1)}}},\quad j_{0}=0,{\tfrac {1}{2}},1,\ldots \quad {\text{and}}\quad j_{1},\xi _{j}\in \mathbb {C} .}
==== Step 4 ====
The imposition of the requirement of unitarity of the corresponding representation of the group restricts the possible values for the arbitrary complex numbers j0 and ξj. Unitarity of the group representation translates to the requirement of the Lie algebra representatives being Hermitian, meaning
{\displaystyle K_{\pm }^{\dagger }=K_{\mp },\quad K_{3}^{\dagger }=K_{3}.}
This translates to
{\displaystyle {\begin{aligned}\left\langle j\left\|K^{(1)}\right\|j\right\rangle &={\overline {\left\langle j\left\|K^{(1)}\right\|j\right\rangle }},\\\left\langle j\left\|K^{(1)}\right\|j-1\right\rangle &=-{\overline {\left\langle j-1\left\|K^{(1)}\right\|j\right\rangle }},\end{aligned}}}
leading to
{\displaystyle {\begin{aligned}j_{0}\left(j_{1}+{\overline {j_{1}}}\right)&=0,\\\left|B_{j}\right|\left(\left|\xi _{j}\right|^{2}-e^{-2i\beta _{j}}\right)&=0,\end{aligned}}}
where βj is the angle of Bj in polar form. For |Bj| ≠ 0, it follows that
{\displaystyle \left|\xi _{j}\right|^{2}=1}
and
{\displaystyle \xi _{j}=1}
is chosen by convention. There are two possible cases:
{\displaystyle {\underline {j_{1}+{\overline {j_{1}}}=0.}}}
In this case j1 = − iν, ν real,
{\displaystyle \left\langle j\left\|K^{(1)}\right\|j\right\rangle ={\frac {\nu j_{0}}{j(j+1)}}\quad {\text{and}}\quad B_{j}={\sqrt {\frac {(j^{2}-j_{0}^{2})(j^{2}+\nu ^{2})}{4j^{2}-1}}}}
This is the principal series. Its elements are denoted
{\displaystyle (j_{0},\nu ),2j_{0}\in \mathbb {N} ,\nu \in \mathbb {R} .}
{\displaystyle {\underline {j_{0}=0.}}}
It follows:
{\displaystyle \left\langle j\left\|K^{(1)}\right\|j\right\rangle =0\quad {\text{and}}\quad B_{j}={\sqrt {\frac {j^{2}-\nu ^{2}}{4j^{2}-1}}}}
Since B0 = Bj0, B2j is real and positive for j = 1, 2, ..., leading to −1 ≤ ν ≤ 1. This is the complementary series. Its elements are denoted (0, ν), −1 ≤ ν ≤ 1.
This shows that the representations above are all infinite-dimensional irreducible unitary representations.
== Explicit formulas ==
=== Conventions and Lie algebra bases ===
The metric of choice is given by η = diag(−1, 1, 1, 1), and the physics convention for Lie algebras and the exponential mapping is used. These choices are arbitrary, but once they are made, fixed. One possible choice of basis for the Lie algebra is, in the 4-vector representation, given by:
{\displaystyle {\begin{aligned}J_{1}=J^{23}=-J^{32}&=i{\begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix}},&K_{1}=J^{01}=-J^{10}&=i{\begin{pmatrix}0&1&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}},\\[8pt]J_{2}=J^{31}=-J^{13}&=i{\begin{pmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\\0&-1&0&0\end{pmatrix}},&K_{2}=J^{02}=-J^{20}&=i{\begin{pmatrix}0&0&1&0\\0&0&0&0\\1&0&0&0\\0&0&0&0\end{pmatrix}},\\[8pt]J_{3}=J^{12}=-J^{21}&=i{\begin{pmatrix}0&0&0&0\\0&0&-1&0\\0&1&0&0\\0&0&0&0\end{pmatrix}},&K_{3}=J^{03}=-J^{30}&=i{\begin{pmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0\end{pmatrix}}.\\[8pt]\end{aligned}}}
The commutation relations of the Lie algebra
{\displaystyle {\mathfrak {so}}(3;1)}
are:
{\displaystyle \left[J^{\mu \nu },J^{\rho \sigma }\right]=i\left(\eta ^{\sigma \mu }J^{\rho \nu }+\eta ^{\nu \sigma }J^{\mu \rho }-\eta ^{\rho \mu }J^{\sigma \nu }-\eta ^{\nu \rho }J^{\mu \sigma }\right).}
In three-dimensional notation, these are
{\displaystyle \left[J_{i},J_{j}\right]=i\epsilon _{ijk}J_{k},\quad \left[J_{i},K_{j}\right]=i\epsilon _{ijk}K_{k},\quad \left[K_{i},K_{j}\right]=-i\epsilon _{ijk}J_{k}.}
The choice of basis above satisfies the relations, but other choices are possible. The multiple use of the symbol J above and in the sequel should be observed.
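These relations can be checked directly on the matrices above. The following is a minimal sketch (assuming NumPy is available; the helper names are illustrative) that verifies one instance of each of the three bracket types.

```python
# Check that the explicit 4x4 generators satisfy the so(3;1) relations.
import numpy as np

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# The generators in the 4-vector representation, as given above.
J1 = 1j * np.array([[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]])
J2 = 1j * np.array([[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]])
J3 = 1j * np.array([[0,0,0,0],[0,0,-1,0],[0,1,0,0],[0,0,0,0]])
K1 = 1j * np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]])
K2 = 1j * np.array([[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]])
K3 = 1j * np.array([[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]])

# [J_i, J_j] = i eps_ijk J_k, [J_i, K_j] = i eps_ijk K_k, [K_i, K_j] = -i eps_ijk J_k
assert np.allclose(comm(J1, J2), 1j * J3)
assert np.allclose(comm(J1, K2), 1j * K3)
assert np.allclose(comm(K1, K2), -1j * J3)
print("so(3;1) commutation relations verified")
```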
For example, a typical boost and a typical rotation exponentiate as,
{\displaystyle \exp(-i\xi K_{1})={\begin{pmatrix}\cosh \xi &\sinh \xi &0&0\\\sinh \xi &\cosh \xi &0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}},\qquad \exp(-i\theta J_{1})={\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&\cos \theta &-\sin \theta \\0&0&\sin \theta &\cos \theta \end{pmatrix}},}
symmetric and orthogonal, respectively.
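A short numerical check (assuming NumPy and SciPy; the parameter values ξ and θ below are arbitrary samples) that matrix exponentiation reproduces these boost and rotation matrices:

```python
import numpy as np
from scipy.linalg import expm

K1 = 1j * np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]])
J1 = 1j * np.array([[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]])

xi, theta = 0.7, 1.2
boost = expm(-1j * xi * K1)        # exp(-i xi K_1)
rotation = expm(-1j * theta * J1)  # exp(-i theta J_1)

expected_boost = np.array([
    [np.cosh(xi), np.sinh(xi), 0, 0],
    [np.sinh(xi), np.cosh(xi), 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1]])
expected_rot = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, np.cos(theta), -np.sin(theta)],
    [0, 0, np.sin(theta), np.cos(theta)]])
assert np.allclose(boost, expected_boost)
assert np.allclose(rotation, expected_rot)
```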
=== Weyl spinors and bispinors ===
By taking, in turn, m = 1/2, n = 0 and m = 0, n = 1/2 and by setting
{\displaystyle J_{i}^{\left({\frac {1}{2}}\right)}={\frac {1}{2}}\sigma _{i}}
in the general expression (G1), and by using the trivial relations 11 = 1 and J(0) = 0, it follows
These are the left-handed and right-handed Weyl spinor representations. They act by matrix multiplication on 2-dimensional complex vector spaces (with a choice of basis) VL and VR, whose elements ΨL and ΨR are called left- and right-handed Weyl spinors respectively. Given
{\displaystyle \left(\pi _{\left({\frac {1}{2}},0\right)},V_{\text{L}}\right)\quad {\text{and}}\quad \left(\pi _{\left(0,{\frac {1}{2}}\right)},V_{\text{R}}\right)}
their direct sum as representations is formed,
This is, up to a similarity transformation, the (1/2,0) ⊕ (0,1/2) Dirac spinor representation of
{\displaystyle {\mathfrak {so}}(3;1).}
It acts on the 4-component elements (ΨL, ΨR) of (VL ⊕ VR), called bispinors, by matrix multiplication. The representation may be obtained in a more general and basis independent way using Clifford algebras. These expressions for bispinors and Weyl spinors all extend by linearity of Lie algebras and representations to all of
{\displaystyle {\mathfrak {so}}(3;1).}
Expressions for the group representations are obtained by exponentiation.
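A concrete sketch (assuming NumPy) of the left-handed choice, in which the rotation generators are σᵢ/2 and the boost generators are −iσᵢ/2; the right-handed representation flips the sign of the boosts. This is one common convention, not the only one:

```python
# The (1/2, 0) Weyl representation: J_i = sigma_i / 2, K_i = -i sigma_i / 2.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

J = [s / 2 for s in sigma]          # rotation generators
K = [-1j * s / 2 for s in sigma]    # boost generators (left-handed choice)

def comm(a, b):
    return a @ b - b @ a

# The so(3;1) relations hold in this 2-dimensional representation too.
assert np.allclose(comm(J[0], J[1]), 1j * J[2])
assert np.allclose(comm(J[0], K[1]), 1j * K[2])
assert np.allclose(comm(K[0], K[1]), -1j * J[2])

# The Dirac (bispinor) generators are the block-diagonal direct sums,
# e.g. for the boosts (the right-handed block carries the opposite sign):
K_dirac = [np.block([[k, np.zeros((2, 2))],
                     [np.zeros((2, 2)), -k]]) for k in K]
```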
== Open problems ==
The classification and characterization of the representation theory of the Lorentz group was completed in 1947. But in association with the Bargmann–Wigner programme, there remain unresolved purely mathematical problems, linked to the infinite-dimensional unitary representations.
The irreducible infinite-dimensional unitary representations may have indirect relevance to physical reality in speculative modern theories since the (generalized) Lorentz group appears as the little group of the Poincaré group of spacelike vectors in higher spacetime dimension. The corresponding infinite-dimensional unitary representations of the (generalized) Poincaré group are the so-called tachyonic representations. Tachyons appear in the spectrum of bosonic strings and are associated with instability of the vacuum. Even though tachyons may not be realized in nature, these representations must be mathematically understood in order to understand string theory. This is so since tachyon states turn out to appear in superstring theories too in attempts to create realistic models.
One open problem is the completion of the Bargmann–Wigner programme for the isometry group SO(D − 2, 1) of the de Sitter spacetime dSD−2. Ideally, the physical components of wave functions would be realized on the hyperboloid dSD−2 of radius μ > 0 embedded in
{\displaystyle \mathbb {R} ^{D-2,1}}
and the corresponding O(D−2, 1)-covariant wave equations of the infinite-dimensional unitary representation would be known.
== See also ==
Bargmann–Wigner equations
Dirac algebra
Gamma matrices
Lorentz group
Möbius transformation
Poincaré group
Representation theory of the Poincaré group
Symmetry in quantum mechanics
Wigner's classification
== Remarks ==
== Notes ==
== Freely available online references ==
Bekaert, X.; Boulanger, N. (2006), "The unitary representations of the Poincare group in any spacetime dimension", arXiv:hep-th/0611263 Expanded version of the lectures presented at the second Modave summer school in mathematical physics (Belgium, August 2006).
Curtright, T L; Fairlie, D B; Zachos, C K (2014), "A compact formula for rotations as spin matrix polynomials", SIGMA, 10: 084, arXiv:1402.3541, Bibcode:2014SIGMA..10..084C, doi:10.3842/SIGMA.2014.084, S2CID 18776942 Group elements of SU(2) are expressed in closed form as finite polynomials of the Lie algebra generators, for all definite spin representations of the rotation group.
== References ==
Abramowitz, M.; Stegun, I. A. (1965), Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, Dover Books on Mathematics, New York: Dover Publications, ISBN 978-0486612720
Bargmann, V. (1947), "Irreducible unitary representations of the Lorentz group", Ann. of Math., 48 (3): 568–640, doi:10.2307/1969129, JSTOR 1969129 (the representation theory of SO(2,1) and SL(2, R); the second part on SO(3; 1) and SL(2, C), described in the introduction, was never published).
Bargmann, V.; Wigner, E. P. (1948), "Group theoretical discussion of relativistic wave equations", Proc. Natl. Acad. Sci. USA, 34 (5): 211–23, Bibcode:1948PNAS...34..211B, doi:10.1073/pnas.34.5.211, PMC 1079095, PMID 16578292
Bourbaki, N. (1998), Lie Groups and Lie Algebras: Chapters 1-3, Springer, ISBN 978-3-540-64242-8
Brauer, R.; Weyl, H. (1935), "Spinors in n dimensions", Amer. J. Math., 57 (2): 425–449, doi:10.2307/2371218, JSTOR 2371218
Bäuerle, G.G.A; de Kerf, E.A. (1990), A. van Groesen; E.M. de Jager (eds.), Finite and infinite dimensional Lie algebras and their application in physics, Studies in mathematical physics, vol. 1, North-Holland, ISBN 978-0-444-88776-4
Bäuerle, G.G.A; de Kerf, E.A.; ten Kroode, A.P.E. (1997), A. van Groesen; E.M. de Jager (eds.), Finite and infinite dimensional Lie algebras and their application in physics, Studies in mathematical physics, vol. 7, North-Holland, ISBN 978-0-444-82836-1
Cartan, Élie (1913), "Les groupes projectifs qui ne laissent invariante aucune multiplicité plane", Bull. Soc. Math. Fr. (in French), 41: 53–96, doi:10.24033/bsmf.916
Churchill, R. V.; Brown, J. W. (2014) [1948], Complex Variables and Applications (9th ed.), New York: McGraw–Hill, ISBN 978-0073-383-170
Coleman, A. J. (1989), "The Greatest Mathematical Paper of All Time", The Mathematical Intelligencer, 11 (3): 29–38, doi:10.1007/BF03025189, ISSN 0343-6993, S2CID 35487310
Dalitz, R. H.; Peierls, Rudolf (1986), "Paul Adrien Maurice Dirac. 8 August 1902–20 October 1984", Biogr. Mem. Fellows R. Soc., 32: 138–185, doi:10.1098/rsbm.1986.0006, S2CID 74547263
Delbourgo, R.; Salam, A.; Strathdee, J. (1967), "Harmonic analysis in terms of the homogeneous Lorentz group", Physics Letters B, 25 (3): 230–32, Bibcode:1967PhLB...25..230D, doi:10.1016/0370-2693(67)90050-0
Dirac, P. A. M. (1928), "The Quantum Theory of the Electron", Proc. R. Soc. A, 117 (778): 610–624, Bibcode:1928RSPSA.117..610D, doi:10.1098/rspa.1928.0023 (free access)
Dirac, P. A. M. (1936), "Relativistic wave equations", Proc. R. Soc. A, 155 (886): 447–459, Bibcode:1936RSPSA.155..447D, doi:10.1098/rspa.1936.0111
Dirac, P. A. M. (1945), "Unitary representations of the Lorentz group", Proc. R. Soc. A, 183 (994): 284–295, Bibcode:1945RSPSA.183..284D, doi:10.1098/rspa.1945.0003, S2CID 202575171
Dixmier, J.; Malliavin, P. (1978), "Factorisations de fonctions et de vecteurs indéfiniment différentiables", Bull. Sci. Math. (in French), 102: 305–330
Fierz, M. (1939), "Über die relativistische Theorie kräftefreier Teilchen mit beliebigem Spin", Helv. Phys. Acta (in German), 12 (1): 3–37, Bibcode:1939AcHPh..12....3F, doi:10.5169/seals-110930 (pdf download available)
Fierz, M.; Pauli, W. (1939), "On relativistic wave equations for particles of arbitrary spin in an electromagnetic field", Proc. R. Soc. A, 173 (953): 211–232, Bibcode:1939RSPSA.173..211F, doi:10.1098/rspa.1939.0140
Folland, G. (2015), A Course in Abstract Harmonic Analysis (2nd ed.), CRC Press, ISBN 978-1498727136
Fulton, W.; Harris, J. (1991), Representation theory. A first course, Graduate Texts in Mathematics, vol. 129, New York: Springer-Verlag, ISBN 978-0-387-97495-8, MR 1153249
Gelfand, I. M.; Graev, M. I. (1953), "On a general method of decomposition of the regular representation of a Lie group into irreducible representations", Doklady Akademii Nauk SSSR, 92: 221–224
Gelfand, I. M.; Graev, M. I.; Vilenkin, N. Ya. (1966), "Harmonic analysis on the group of complex unimodular matrices in two dimensions", Generalized functions. Vol. 5: Integral geometry and representation theory, translated by Eugene Saletan, Academic Press, pp. 202–267, ISBN 978-1-4832-2975-1
Gelfand, I. M.; Graev, M. I.; Pyatetskii-Shapiro, I. I. (1969), Representation theory and automorphic functions, Academic Press, ISBN 978-0-12-279506-0
Gelfand, I.M.; Minlos, R.A.; Shapiro, Z. Ya. (1963), Representations of the Rotation and Lorentz Groups and their Applications, New York: Pergamon Press
Gelfand, I. M.; Naimark, M. A. (1947), "Unitary representations of the Lorentz group" (PDF), Izvestiya Akad. Nauk SSSR. Ser. Mat. (in Russian), 11 (5): 411–504, retrieved 2014-12-15 (pdf from Math.net.ru)
Green, J. A. (1998), "Richard Dagobert Brauer" (PDF), Biographical Memoirs, vol. 75, National Academy Press, pp. 70–95, ISBN 978-0309062954
Greiner, W.; Müller, B. (1994), Quantum Mechanics: Symmetries (2nd ed.), Springer, ISBN 978-3540580805
Greiner, W.; Reinhardt, J. (1996), Field Quantization, Springer, ISBN 978-3-540-59179-5
Harish-Chandra (1947), "Infinite irreducible representations of the Lorentz group", Proc. R. Soc. A, 189 (1018): 372–401, Bibcode:1947RSPSA.189..372H, doi:10.1098/rspa.1947.0047, S2CID 124917518
Harish-Chandra (1951), "Plancherel formula for complex semi-simple Lie groups", Proc. Natl. Acad. Sci. U.S.A., 37 (12): 813–818, Bibcode:1951PNAS...37..813H, doi:10.1073/pnas.37.12.813, PMC 1063477, PMID 16589034
Hall, Brian C. (2003), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (1st ed.), Springer, ISBN 978-0-387-40122-5
Hall, Brian C. (2015), Lie groups, Lie algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, doi:10.1007/978-3-319-13467-3, ISBN 978-3319134666, ISSN 0072-5285
Helgason, S. (1968), Lie groups and symmetric spaces, Battelle Rencontres, Benjamin, pp. 1–71 (a general introduction for physicists)
Helgason, S. (2000), Groups and geometric analysis. Integral geometry, invariant differential operators, and spherical functions (corrected reprint of the 1984 original), Mathematical Surveys and Monographs, vol. 83, American Mathematical Society, ISBN 978-0-8218-2673-7
Jorgenson, J.; Lang, S. (2008), The heat kernel and theta inversion on SL(2,C), Springer Monographs in Mathematics, Springer, ISBN 978-0-387-38031-5
Killing, Wilhelm (1888), "Die Zusammensetzung der stetigen/endlichen Transformationsgruppen", Mathematische Annalen (in German), 31 (2 (June)): 252–290, doi:10.1007/bf01211904, S2CID 120501356
Kirillov, A. (2008), An Introduction to Lie Groups and Lie Algebras, Cambridge Studies in Advanced Mathematics, vol. 113, Cambridge University Press, ISBN 978-0521889698
Klauder, J. R. (1999), "Valentine Bargmann" (PDF), Biographical Memoirs, vol. 76, National Academy Press, pp. 37–50, ISBN 978-0-309-06434-7
Knapp, Anthony W. (2001), Representation theory of semisimple groups. An overview based on examples., Princeton Landmarks in Mathematics, Princeton University Press, ISBN 978-0-691-09089-4 (elementary treatment for SL(2,C))
Langlands, R. P. (1985), "Harish-Chandra", Biogr. Mem. Fellows R. Soc., 31: 198–225, doi:10.1098/rsbm.1985.0008, S2CID 61332822
Lee, J. M. (2003), Introduction to Smooth manifolds, Springer Graduate Texts in Mathematics, vol. 218, ISBN 978-0-387-95448-6
Lie, Sophus (1888), Theorie der Transformationsgruppen I(1888), II(1890), III(1893) (in German)
Misner, Charles W.; Thorne, Kip. S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 978-0-7167-0344-0
Naimark, M.A. (1964), Linear representations of the Lorentz group (translated from the Russian original by Ann Swinfen and O. J. Marstrand), Macmillan
Rossmann, Wulf (2002), Lie Groups – An Introduction Through Linear Groups, Oxford Graduate Texts in Mathematics, Oxford Science Publications, ISBN 0-19-859683-9
Rühl, W. (1970), The Lorentz group and harmonic analysis, Benjamin (a detailed account for physicists)
Simmons, G. F. (1972), Differential Equations with Applications and Historical Notes (T M H ed.), New Delhi: Tata McGraw–Hill Publishing Company Ltd, ISBN 978-0-07-099572-7
Stein, Elias M. (1970), "Analytic continuation of group representations", Advances in Mathematics, 4 (2): 172–207, doi:10.1016/0001-8708(70)90022-8 (James K. Whittemore Lectures in Mathematics given at Yale University, 1967)
Takahashi, R. (1963), "Sur les représentations unitaires des groupes de Lorentz généralisés", Bull. Soc. Math. France (in French), 91: 289–433, doi:10.24033/bsmf.1598
Taylor, M. E. (1986), Noncommutative harmonic analysis, Mathematical Surveys and Monographs, vol. 22, American Mathematical Society, ISBN 978-0-8218-1523-6, Chapter 9, SL(2, C) and more general Lorentz groups
Tung, Wu-Ki (1985), Group Theory in Physics (1st ed.), New Jersey·London·Singapore·Hong Kong: World Scientific, ISBN 978-9971966577
Varadarajan, V. S. (1989), An Introduction to Harmonic Analysis on Semisimple Lie Groups, Cambridge University Press, ISBN 978-0521663625
Weinberg, S. (2002) [1995], Foundations, The Quantum Theory of Fields, vol. 1, Cambridge: Cambridge University Press, ISBN 978-0-521-55001-7
Weinberg, S. (2000), Supersymmetry, The Quantum Theory of Fields, vol. 3 (1st ed.), Cambridge: Cambridge University Press, ISBN 978-0521670555
Weyl, H. (1953), The Classical Groups. Their Invariants and Representations (2nd ed.), Princeton University Press, ISBN 978-0-691-05756-9, MR 0000255
Weyl, H. (1931), The Theory of Groups and Quantum Mechanics, Methuen and Company; reprinted, Dover Publications, 1950, ISBN 978-0-486-60269-1
Wigner, E. P. (1939), "On unitary representations of the inhomogeneous Lorentz group", Annals of Mathematics, 40 (1): 149–204, Bibcode:1939AnMat..40..149W, doi:10.2307/1968551, JSTOR 1968551, MR 1503456, S2CID 121773411.
Zwiebach, B. (2004), A First Course in String Theory, Cambridge University Press, ISBN 0-521-83143-1
In abstract algebra, a representation of a Hopf algebra is a representation of its underlying associative algebra. That is, a representation of a Hopf algebra H over a field K is a K-vector space V with an action H × V → V usually denoted by juxtaposition (that is, the image of (h, v) is written hv). The vector space V is called an H-module.
== Properties ==
The module structure of a representation of a Hopf algebra H is simply its structure as a module for the underlying associative algebra. The main use of considering the additional structure of a Hopf algebra is when considering all H-modules as a category. The additional structure is also used to define invariant elements of an H-module V. An element v in V is invariant under H if for all h in H, hv = ε(h)v, where ε is the counit of H. The subset of all invariant elements of V forms a submodule of V.
== Categories of representations as a motivation for Hopf algebras ==
For an associative algebra H, the tensor product V1 ⊗ V2 of two H-modules V1 and V2 is a vector space, but not necessarily an H-module. For the tensor product to be a functorial product operation on H-modules, there must be a linear binary operation Δ : H → H ⊗ H such that for any v in V1 ⊗ V2 and any h in H,
{\displaystyle hv=\Delta h(v_{(1)}\otimes v_{(2)})=h_{(1)}v_{(1)}\otimes h_{(2)}v_{(2)},}
and for any v in V1 ⊗ V2 and a and b in H,
{\displaystyle \Delta (ab)(v_{(1)}\otimes v_{(2)})=(ab)v=a[b[v]]=\Delta a[\Delta b(v_{(1)}\otimes v_{(2)})]=(\Delta a)(\Delta b)(v_{(1)}\otimes v_{(2)}).}
using sumless Sweedler notation, which is somewhat like an index-free form of the Einstein summation convention. This is satisfied if there is a Δ such that Δ(ab) = Δ(a)Δ(b) for all a, b in H.
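As a concrete illustration (a standard example, not specific to the discussion above): in the group algebra K[G] of a finite group, setting Δ(g) = g ⊗ g on basis elements and extending linearly gives exactly such a multiplicative Δ. A minimal pure-Python sketch for G = Z/2, with elements of the group algebra stored as coefficient dictionaries (all helper names are illustrative):

```python
# Check Delta(ab) = Delta(a)Delta(b) for the group algebra Q[Z/2].
from itertools import product

mul = lambda g, h: (g + h) % 2               # group law of Z/2, written additively

def alg_mul(a, b):
    """Multiply two group-algebra elements (dicts {group element: coefficient})."""
    out = {}
    for (g, x), (h, y) in product(a.items(), b.items()):
        k = mul(g, h)
        out[k] = out.get(k, 0) + x * y
    return out

def coproduct(a):
    """Delta(g) = g (x) g on basis elements, extended linearly."""
    return {(g, g): x for g, x in a.items()}

def tensor_mul(s, t):
    """Componentwise product on H (x) H (dicts keyed by pairs)."""
    out = {}
    for ((g1, g2), x), ((h1, h2), y) in product(s.items(), t.items()):
        k = (mul(g1, h1), mul(g2, h2))
        out[k] = out.get(k, 0) + x * y
    return out

a = {0: 2, 1: 3}                             # 2*e + 3*g
b = {0: 1, 1: -1}                            # e - g
assert coproduct(alg_mul(a, b)) == tensor_mul(coproduct(a), coproduct(b))
```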
For the category of H-modules to be a strict monoidal category with respect to ⊗,
{\displaystyle V_{1}\otimes (V_{2}\otimes V_{3})}
and
{\displaystyle (V_{1}\otimes V_{2})\otimes V_{3}}
must be equivalent, and there must be a unit object εH, called the trivial module, such that εH ⊗ V, V and V ⊗ εH are equivalent.
This means that for any v in
{\displaystyle V_{1}\otimes (V_{2}\otimes V_{3})=(V_{1}\otimes V_{2})\otimes V_{3}}
and for h in H,
{\displaystyle ((\operatorname {id} \otimes \Delta )\Delta h)(v_{(1)}\otimes v_{(2)}\otimes v_{(3)})=h_{(1)}v_{(1)}\otimes h_{(2)(1)}v_{(2)}\otimes h_{(2)(2)}v_{(3)}=hv=((\Delta \otimes \operatorname {id} )\Delta h)(v_{(1)}\otimes v_{(2)}\otimes v_{(3)}).}
This will hold for any three H-modules if Δ satisfies
{\displaystyle (\operatorname {id} \otimes \Delta )\Delta A=(\Delta \otimes \operatorname {id} )\Delta A.}
The trivial module must be one-dimensional, and so an algebra homomorphism ε : H → F may be defined such that hv = ε(h)v for all v in εH. The trivial module may be identified with F, with 1 being the element such that 1 ⊗ v = v = v ⊗ 1 for all v. It follows that for any v in any H-module V, any c in εH and any h in H,
{\displaystyle (\varepsilon (h_{(1)})h_{(2)})cv=h_{(1)}c\otimes h_{(2)}v=h(c\otimes v)=h(cv)=(h_{(1)}\varepsilon (h_{(2)}))cv.}
The existence of an algebra homomorphism ε satisfying
{\displaystyle \varepsilon (h_{(1)})h_{(2)}=h=h_{(1)}\varepsilon (h_{(2)})}
is a sufficient condition for the existence of the trivial module.
It follows that in order for the category of H-modules to be a monoidal category with respect to the tensor product, it is sufficient for H to have maps Δ and ε satisfying these conditions. This is the motivation for the definition of a bialgebra, where Δ is called the comultiplication and ε is called the counit.
In order for each H-module V to have a dual representation V* such that the underlying vector spaces are dual and the operation * is functorial over the monoidal category of H-modules, there must be a linear map S : H → H such that for any h in H, x in V and y in V*,
{\displaystyle \langle y,S(h)x\rangle =\langle hy,x\rangle .}
where
{\displaystyle \langle \cdot ,\cdot \rangle }
is the usual pairing of dual vector spaces. If the map
{\displaystyle \varphi :V\otimes V^{*}\rightarrow \varepsilon _{H}}
induced by the pairing is to be an H-homomorphism, then for any h in H, x in V and y in V*,
{\displaystyle \varphi \left(h(x\otimes y)\right)=\varphi \left(x\otimes S(h_{(1)})h_{(2)}y\right)=\varphi \left(S(h_{(2)})h_{(1)}x\otimes y\right)=h\varphi (x\otimes y)=\varepsilon (h)\varphi (x\otimes y),}
which is satisfied if
{\displaystyle S(h_{(1)})h_{(2)}=\varepsilon (h)=h_{(1)}S(h_{(2)})}
for all h in H.
If there is such a map S, then it is called an antipode, and H is a Hopf algebra. The desire for a monoidal category of modules with functorial tensor products and dual representations is therefore one motivation for the concept of a Hopf algebra.
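For the same group-algebra example sketched earlier, ε(g) = 1 and S(g) = g⁻¹ supply a counit and an antipode satisfying the condition above, since Δ(g) = g ⊗ g. A tiny continuation of that pure-Python sketch (names illustrative):

```python
# Counit and antipode for Q[Z/2]: eps(g) = 1, S(g) = g^{-1}.
G = [0, 1]                          # Z/2 written additively
mul = lambda g, h: (g + h) % 2
inv = lambda g: (-g) % 2

counit = lambda g: 1                # eps(g) = 1 on every basis element
antipode = lambda g: inv(g)         # S(g) = g^{-1}

for g in G:
    # Since Delta(g) = g (x) g, the antipode axiom on basis elements reads
    # S(g) g = e = g S(g), matching eps(g) times the identity e = 0.
    assert mul(antipode(g), g) == 0 and mul(g, antipode(g)) == 0
    assert counit(g) == 1
```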
== Representations on an algebra ==
A Hopf algebra also has representations which carry additional structure, namely they are algebras.
Let H be a Hopf algebra. If A is an algebra with the product operation μ : A ⊗ A → A, and ρ : H ⊗ A → A is a representation of H on A, then ρ is said to be a representation of H on an algebra if μ is H-equivariant. As special cases, Lie algebras, Lie superalgebras and groups can also have representations on an algebra.
== See also ==
Tannaka–Krein reconstruction theorem
== References ==
In the mathematical field of representation theory, a Lie algebra representation or representation of a Lie algebra is a way of writing a Lie algebra as a set of matrices (or endomorphisms of a vector space) in such a way that the Lie bracket is given by the commutator. In the language of physics, one looks for a vector space V together with a collection of operators on V satisfying some fixed set of commutation relations, such as the relations satisfied by the angular momentum operators.
The notion is closely related to that of a representation of a Lie group. Roughly speaking, the representations of Lie algebras are the differentiated form of representations of Lie groups, while the representations of the universal cover of a Lie group are the integrated form of the representations of its Lie algebra.
In the study of representations of a Lie algebra, a particular ring, called the universal enveloping algebra, associated with the Lie algebra plays an important role. The universality of this ring says that the category of representations of a Lie algebra is the same as the category of modules over its enveloping algebra.
== Formal definition ==
Let {\displaystyle {\mathfrak {g}}} be a Lie algebra and let V be a vector space. We let {\displaystyle {\mathfrak {gl}}(V)} denote the space of endomorphisms of V, that is, the space of all linear maps of V to itself. Here, the associative algebra {\displaystyle {\mathfrak {gl}}(V)} is turned into a Lie algebra with bracket given by the commutator: {\displaystyle [s,t]=s\circ t-t\circ s} for all s, t in {\displaystyle {\mathfrak {gl}}(V)}. Then a representation of {\displaystyle {\mathfrak {g}}} on V is a Lie algebra homomorphism {\displaystyle \rho \colon {\mathfrak {g}}\to {\mathfrak {gl}}(V)}.
Explicitly, this means that ρ should be a linear map and it should satisfy {\displaystyle \rho ([X,Y])=\rho (X)\rho (Y)-\rho (Y)\rho (X)} for all X, Y in {\displaystyle {\mathfrak {g}}}. The vector space V, together with the representation ρ, is called a {\displaystyle {\mathfrak {g}}}-module. (Many authors abuse terminology and refer to V itself as the representation.)
The representation ρ is said to be faithful if it is injective.
One can equivalently define a {\displaystyle {\mathfrak {g}}}-module as a vector space V together with a bilinear map {\displaystyle {\mathfrak {g}}\times V\to V} such that {\displaystyle [X,Y]\cdot v=X\cdot (Y\cdot v)-Y\cdot (X\cdot v)} for all X, Y in {\displaystyle {\mathfrak {g}}} and v in V. This is related to the previous definition by setting X ⋅ v = ρ(X)(v).
== Examples ==
=== Adjoint representations ===
The most basic example of a Lie algebra representation is the adjoint representation of a Lie algebra {\displaystyle {\mathfrak {g}}} on itself: {\displaystyle {\textrm {ad}}:{\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}}),\quad X\mapsto \operatorname {ad} _{X},\quad \operatorname {ad} _{X}(Y)=[X,Y].}
Indeed, by virtue of the Jacobi identity, {\displaystyle \operatorname {ad} } is a Lie algebra homomorphism.
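This homomorphism property can be checked numerically in a small case. Below is a minimal sketch (assuming NumPy; the helper names are illustrative) that realizes ad_X for sl(2, C) as a matrix acting on flattened 2×2 matrices and verifies ad([X, Y]) = [ad X, ad Y] on a standard basis.

```python
# ad_X as the matrix of M -> XM - MX on row-major-flattened 2x2 matrices:
# vec(XM - MX) = (X kron I - I kron X^T) vec(M).
import numpy as np

E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def ad(X):
    return np.kron(X, I2) - np.kron(I2, X.T)

def comm(a, b):
    return a @ b - b @ a

# ad([X, Y]) = [ad X, ad Y] for all basis pairs -- the Jacobi identity at work.
for X in (E, F, H):
    for Y in (E, F, H):
        assert np.allclose(ad(comm(X, Y)), comm(ad(X), ad(Y)))
```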
=== Infinitesimal Lie group representations ===
A Lie algebra representation also arises in nature. If ϕ : G → H is a homomorphism of (real or complex) Lie groups, and {\displaystyle {\mathfrak {g}}} and {\displaystyle {\mathfrak {h}}} are the Lie algebras of G and H respectively, then the differential {\displaystyle d_{e}\phi :{\mathfrak {g}}\to {\mathfrak {h}}} on tangent spaces at the identities is a Lie algebra homomorphism. In particular, for a finite-dimensional vector space V, a representation of Lie groups {\displaystyle \phi :G\to \operatorname {GL} (V)} determines a Lie algebra homomorphism {\displaystyle d\phi :{\mathfrak {g}}\to {\mathfrak {gl}}(V)} from {\displaystyle {\mathfrak {g}}} to the Lie algebra of the general linear group GL(V), i.e. the endomorphism algebra of V.
For example, let {\displaystyle c_{g}(x)=gxg^{-1}}. Then the differential of {\displaystyle c_{g}:G\to G} at the identity is an element of {\displaystyle \operatorname {GL} ({\mathfrak {g}})}. Denoting it by {\displaystyle \operatorname {Ad} (g)}, one obtains a representation {\displaystyle \operatorname {Ad} } of G on the vector space {\displaystyle {\mathfrak {g}}}. This is the adjoint representation of G. Applying the preceding, one gets the Lie algebra representation {\displaystyle d\operatorname {Ad} }. It can be shown that {\displaystyle d_{e}\operatorname {Ad} =\operatorname {ad} }, the adjoint representation of {\displaystyle {\mathfrak {g}}}.
A partial converse to this statement says that every representation of a finite-dimensional (real or complex) Lie algebra lifts to a unique representation of the associated simply connected Lie group, so that representations of simply-connected Lie groups are in one-to-one correspondence with representations of their Lie algebras.
=== In quantum physics ===
In quantum theory, one considers "observables" that are self-adjoint operators on a Hilbert space. The commutation relations among these operators are then an important tool. The angular momentum operators, for example, satisfy the commutation relations
{\displaystyle [L_{x},L_{y}]=i\hbar L_{z},\;\;[L_{y},L_{z}]=i\hbar L_{x},\;\;[L_{z},L_{x}]=i\hbar L_{y}.}
Thus, the span of these three operators forms a Lie algebra, which is isomorphic to the Lie algebra so(3) of the rotation group SO(3). Then if V is any subspace of the quantum Hilbert space that is invariant under the angular momentum operators, V will constitute a representation of the Lie algebra so(3). An understanding of the representation theory of so(3) is of great help in, for example, analyzing Hamiltonians with rotational symmetry, such as the hydrogen atom. Many other interesting Lie algebras (and their representations) arise in other parts of quantum physics. Indeed, the history of representation theory is characterized by rich interactions between mathematics and physics.
== Basic concepts ==
=== Invariant subspaces and irreducibility ===
Given a representation {\displaystyle \rho :{\mathfrak {g}}\rightarrow \operatorname {End} (V)} of a Lie algebra {\displaystyle {\mathfrak {g}}}, we say that a subspace W of V is invariant if {\displaystyle \rho (X)w\in W} for all {\displaystyle w\in W} and {\displaystyle X\in {\mathfrak {g}}}. A nonzero representation is said to be irreducible if the only invariant subspaces are V itself and the zero space {0}. The term simple module is also used for an irreducible representation.
=== Homomorphisms ===
Let {\displaystyle {\mathfrak {g}}} be a Lie algebra. Let V, W be {\displaystyle {\mathfrak {g}}}-modules. Then a linear map {\displaystyle f:V\to W} is a homomorphism of {\displaystyle {\mathfrak {g}}}-modules if it is {\displaystyle {\mathfrak {g}}}-equivariant; i.e., {\displaystyle f(X\cdot v)=X\cdot f(v)} for any {\displaystyle X\in {\mathfrak {g}},\,v\in V}. If f is bijective, V and W are said to be equivalent. Such maps are also referred to as intertwining maps or morphisms.
Similarly, many other constructions from module theory in abstract algebra carry over to this setting: submodule, quotient, subquotient, direct sum, Jordan-Hölder series, etc.
=== Schur's lemma ===
A simple but useful tool in studying irreducible representations is Schur's lemma. It has two parts:
If V, W are irreducible {\displaystyle {\mathfrak {g}}}-modules and {\displaystyle f:V\to W} is a homomorphism, then f is either zero or an isomorphism.
If V is an irreducible {\displaystyle {\mathfrak {g}}}-module over an algebraically closed field and {\displaystyle f:V\to V} is a homomorphism, then f is a scalar multiple of the identity.
=== Complete reducibility ===
Let V be a representation of a Lie algebra {\displaystyle {\mathfrak {g}}}. Then V is said to be completely reducible (or semisimple) if it is isomorphic to a direct sum of irreducible representations (cf. semisimple module). If V is finite-dimensional, then V is completely reducible if and only if every invariant subspace of V has an invariant complement. (That is, if W is an invariant subspace, then there is another invariant subspace P such that V is the direct sum of W and P.)
If {\displaystyle {\mathfrak {g}}} is a finite-dimensional semisimple Lie algebra over a field of characteristic zero and V is finite-dimensional, then V is semisimple; this is Weyl's complete reducibility theorem. Thus, for semisimple Lie algebras, a classification of the irreducible (i.e. simple) representations leads immediately to a classification of all representations. For other Lie algebras, which do not have this special property, classifying the irreducible representations may not help much in classifying general representations.
A Lie algebra is said to be reductive if the adjoint representation is semisimple. Certainly, every (finite-dimensional) semisimple Lie algebra {\displaystyle {\mathfrak {g}}} is reductive, since every representation of {\displaystyle {\mathfrak {g}}} is completely reducible, as we have just noted. In the other direction, the definition of a reductive Lie algebra means that it decomposes as a direct sum of ideals (i.e., invariant subspaces for the adjoint representation) that have no nontrivial sub-ideals. Some of these ideals will be one-dimensional and the rest are simple Lie algebras. Thus, a reductive Lie algebra is a direct sum of a commutative algebra and a semisimple algebra.
=== Invariants ===
An element v of V is said to be {\displaystyle {\mathfrak {g}}}-invariant if {\displaystyle x\cdot v=0} for all {\displaystyle x\in {\mathfrak {g}}}. The set of all invariant elements is denoted by {\displaystyle V^{\mathfrak {g}}}.
== Basic constructions ==
=== Tensor products of representations ===
If we have two representations of a Lie algebra {\displaystyle {\mathfrak {g}}}, with V1 and V2 as their underlying vector spaces, then the tensor product of the representations would have V1 ⊗ V2 as the underlying vector space, with the action of {\displaystyle {\mathfrak {g}}} uniquely determined by the assumption that {\displaystyle X\cdot (v_{1}\otimes v_{2})=(X\cdot v_{1})\otimes v_{2}+v_{1}\otimes (X\cdot v_{2})} for all {\displaystyle v_{1}\in V_{1}} and {\displaystyle v_{2}\in V_{2}}.
In the language of homomorphisms, this means that we define {\displaystyle \rho _{1}\otimes \rho _{2}:{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V_{1}\otimes V_{2})} by the formula {\displaystyle (\rho _{1}\otimes \rho _{2})(X)=\rho _{1}(X)\otimes \mathrm {I} +\mathrm {I} \otimes \rho _{2}(X)}. This is called the Kronecker sum of {\displaystyle \rho _{1}} and {\displaystyle \rho _{2}}, defined in Matrix addition#Kronecker sum and Kronecker product#Properties, and more specifically in Tensor product of representations.
In the physics literature, the tensor product with the identity operator is often suppressed in the notation, with the formula written as {\displaystyle (\rho _{1}\otimes \rho _{2})(X)=\rho _{1}(X)+\rho _{2}(X)}, where it is understood that {\displaystyle \rho _{1}(X)} acts on the first factor in the tensor product and {\displaystyle \rho _{2}(X)} acts on the second factor. In the context of representations of the Lie algebra su(2), the tensor product of representations goes under the name "addition of angular momentum." In this context, {\displaystyle \rho _{1}(X)} might, for example, be the orbital angular momentum while {\displaystyle \rho _{2}(X)} is the spin angular momentum.
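A short sketch (assuming NumPy) of the Kronecker-sum formula, using two copies of the spin-1/2 representation of su(2) as a sample; the commutator check confirms the sum is again a representation:

```python
# (rho1 (x) rho2)(X) = rho1(X) kron I + I kron rho2(X) for spin-1/2 x spin-1/2.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
rho = [s / 2 for s in sigma]                  # spin-1/2 generators
I2 = np.eye(2)

def tensor_rep(i):
    return np.kron(rho[i], I2) + np.kron(I2, rho[i])

def comm(a, b):
    return a @ b - b @ a

# rho satisfies [rho_1, rho_2] = i rho_3; the Kronecker sum inherits the bracket.
assert np.allclose(comm(tensor_rep(0), tensor_rep(1)), 1j * tensor_rep(2))
```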
=== Dual representations ===
Let {\displaystyle {\mathfrak {g}}} be a Lie algebra and {\displaystyle \rho :{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V)} be a representation of {\displaystyle {\mathfrak {g}}}. Let {\displaystyle V^{*}} be the dual space, that is, the space of linear functionals on V. Then we can define a representation {\displaystyle \rho ^{*}:{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V^{*})} by the formula {\displaystyle \rho ^{*}(X)=-(\rho (X))^{\operatorname {tr} },}
where for any operator {\displaystyle A:V\rightarrow V}, the transpose operator {\displaystyle A^{\operatorname {tr} }:V^{*}\rightarrow V^{*}} is defined as the "composition with A" operator: {\displaystyle (A^{\operatorname {tr} }\phi )(v)=\phi (Av)}
The minus sign in the definition of {\displaystyle \rho ^{*}} is needed to ensure that {\displaystyle \rho ^{*}} is actually a representation of {\displaystyle {\mathfrak {g}}}, in light of the identity {\displaystyle (AB)^{\operatorname {tr} }=B^{\operatorname {tr} }A^{\operatorname {tr} }.}
If we work in a basis, then the transpose in the above definition can be interpreted as the ordinary matrix transpose.
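A quick check (assuming NumPy) that the minus sign and transpose together preserve the bracket, again using the spin-1/2 representation of su(2) as a sample ρ:

```python
# rho*(X) = -rho(X)^T: the transpose reverses products, the sign restores them.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
rho = [s / 2 for s in sigma]
rho_dual = [-r.T for r in rho]

def comm(a, b):
    return a @ b - b @ a

# rho satisfies [rho_1, rho_2] = i rho_3; the dual satisfies the same bracket.
assert np.allclose(comm(rho[0], rho[1]), 1j * rho[2])
assert np.allclose(comm(rho_dual[0], rho_dual[1]), 1j * rho_dual[2])
```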
=== Representation on linear maps ===
Let V, W be {\displaystyle {\mathfrak {g}}}-modules, where {\displaystyle {\mathfrak {g}}} is a Lie algebra. Then {\displaystyle \operatorname {Hom} (V,W)} becomes a {\displaystyle {\mathfrak {g}}}-module by setting {\displaystyle (X\cdot f)(v)=Xf(v)-f(Xv)}. In particular, {\displaystyle \operatorname {Hom} _{\mathfrak {g}}(V,W)=\operatorname {Hom} (V,W)^{\mathfrak {g}}}; that is to say, the {\displaystyle {\mathfrak {g}}}-module homomorphisms from V to W are simply the elements of {\displaystyle \operatorname {Hom} (V,W)} that are invariant under the just-defined action of {\displaystyle {\mathfrak {g}}} on {\displaystyle \operatorname {Hom} (V,W)}. If we take W to be the base field, we recover the action of {\displaystyle {\mathfrak {g}}} on {\displaystyle V^{*}} given in the previous subsection.
== Representation theory of semisimple Lie algebras ==
See Representation theory of semisimple Lie algebras.
== Enveloping algebras ==
To each Lie algebra
g
{\displaystyle {\mathfrak {g}}}
over a field k, one can associate a certain ring called the universal enveloping algebra of
g
{\displaystyle {\mathfrak {g}}}
and denoted
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
. The universal property of the universal enveloping algebra guarantees that every representation of
g
{\displaystyle {\mathfrak {g}}}
gives rise to a representation of
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
. Conversely, the PBW theorem tells us that
g
{\displaystyle {\mathfrak {g}}}
sits inside
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
, so that every representation of
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
can be restricted to
g
{\displaystyle {\mathfrak {g}}}
. Thus, there is a one-to-one correspondence between representations of
g
{\displaystyle {\mathfrak {g}}}
and those of
U
(
g
)
{\displaystyle U({\mathfrak {g}})}
.
The universal enveloping algebra plays an important role in the representation theory of semisimple Lie algebras, described above. Specifically, the finite-dimensional irreducible representations are constructed as quotients of Verma modules, and Verma modules are constructed as quotients of the universal enveloping algebra.
The construction of {\displaystyle U({\mathfrak {g}})} is as follows. Let T be the tensor algebra of the vector space {\displaystyle {\mathfrak {g}}}. Thus, by definition, {\displaystyle T=\oplus _{n=0}^{\infty }\otimes _{1}^{n}{\mathfrak {g}}} and the multiplication on it is given by {\displaystyle \otimes }. Let {\displaystyle U({\mathfrak {g}})} be the quotient ring of T by the ideal generated by elements of the form {\displaystyle [X,Y]-(X\otimes Y-Y\otimes X)}.
There is a natural linear map from {\displaystyle {\mathfrak {g}}} into {\displaystyle U({\mathfrak {g}})} obtained by restricting the quotient map of {\displaystyle T\to U({\mathfrak {g}})} to the degree one piece. The PBW theorem implies that this canonical map is actually injective. Thus, every Lie algebra {\displaystyle {\mathfrak {g}}} can be embedded into an associative algebra {\displaystyle A=U({\mathfrak {g}})} in such a way that the bracket on {\displaystyle {\mathfrak {g}}} is given by {\displaystyle [X,Y]=XY-YX} in A.
If {\displaystyle {\mathfrak {g}}} is abelian, then {\displaystyle U({\mathfrak {g}})} is the symmetric algebra of the vector space {\displaystyle {\mathfrak {g}}}.
Since {\displaystyle {\mathfrak {g}}} is a module over itself via the adjoint representation, the enveloping algebra {\displaystyle U({\mathfrak {g}})} becomes a {\displaystyle {\mathfrak {g}}}-module by extending the adjoint representation. But one can also use the left and right regular representations to make the enveloping algebra a {\displaystyle {\mathfrak {g}}}-module; namely, with the notation {\displaystyle l_{X}(Y)=XY,X\in {\mathfrak {g}},Y\in U({\mathfrak {g}})}, the mapping {\displaystyle X\mapsto l_{X}} defines a representation of {\displaystyle {\mathfrak {g}}} on {\displaystyle U({\mathfrak {g}})}. The right regular representation is defined similarly.
== Induced representation ==
Let {\displaystyle {\mathfrak {g}}} be a finite-dimensional Lie algebra over a field of characteristic zero and {\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}}} a subalgebra. {\displaystyle U({\mathfrak {h}})} acts on {\displaystyle U({\mathfrak {g}})} from the right and thus, for any {\displaystyle {\mathfrak {h}}}-module W, one can form the left {\displaystyle U({\mathfrak {g}})}-module {\displaystyle U({\mathfrak {g}})\otimes _{U({\mathfrak {h}})}W}. It is a {\displaystyle {\mathfrak {g}}}-module denoted by {\displaystyle \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W} and called the {\displaystyle {\mathfrak {g}}}-module induced by W. It satisfies (and is in fact characterized by) the universal property: for any {\displaystyle {\mathfrak {g}}}-module E, {\displaystyle \operatorname {Hom} _{\mathfrak {g}}(\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W,E)\simeq \operatorname {Hom} _{\mathfrak {h}}(W,\operatorname {Res} _{\mathfrak {h}}^{\mathfrak {g}}E)}.
Furthermore, {\displaystyle \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}} is an exact functor from the category of {\displaystyle {\mathfrak {h}}}-modules to the category of {\displaystyle {\mathfrak {g}}}-modules. This uses the fact that {\displaystyle U({\mathfrak {g}})} is a free right module over {\displaystyle U({\mathfrak {h}})}. In particular, if {\displaystyle \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W} is simple (resp. absolutely simple), then W is simple (resp. absolutely simple). Here, a {\displaystyle {\mathfrak {g}}}-module V is absolutely simple if {\displaystyle V\otimes _{k}F} is simple for any field extension {\displaystyle F/k}.
The induction is transitive: {\displaystyle \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}\simeq \operatorname {Ind} _{\mathfrak {h'}}^{\mathfrak {g}}\circ \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {h'}}} for any Lie subalgebra {\displaystyle {\mathfrak {h'}}\subset {\mathfrak {g}}} and any Lie subalgebra {\displaystyle {\mathfrak {h}}\subset {\mathfrak {h}}'}. The induction commutes with restriction: let {\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}}} be a subalgebra and {\displaystyle {\mathfrak {n}}} an ideal of {\displaystyle {\mathfrak {g}}} that is contained in {\displaystyle {\mathfrak {h}}}. Set {\displaystyle {\mathfrak {g}}_{1}={\mathfrak {g}}/{\mathfrak {n}}} and {\displaystyle {\mathfrak {h}}_{1}={\mathfrak {h}}/{\mathfrak {n}}}. Then {\displaystyle \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}\circ \operatorname {Res} _{\mathfrak {h}}\simeq \operatorname {Res} _{\mathfrak {g}}\circ \operatorname {Ind} _{\mathfrak {h_{1}}}^{\mathfrak {g_{1}}}}.
== Infinite-dimensional representations and "category O" ==
Let {\displaystyle {\mathfrak {g}}} be a finite-dimensional semisimple Lie algebra over a field of characteristic zero. (In the solvable or nilpotent case, one studies primitive ideals of the enveloping algebra; cf. Dixmier for the definitive account.)
The category of (possibly infinite-dimensional) modules over {\displaystyle {\mathfrak {g}}} turns out to be too large, especially for homological algebra methods to be useful: it was realized that a smaller subcategory, category O, is a better setting for representation theory in the semisimple case in characteristic zero. For instance, category O turned out to be of the right size to formulate the celebrated BGG reciprocity.
== (g,K)-module ==
One of the most important applications of Lie algebra representations is to the representation theory of real reductive Lie groups. The application is based on the idea that if π is a Hilbert-space representation of, say, a connected real semisimple linear Lie group G, then it has two natural actions: that of the complexification {\displaystyle {\mathfrak {g}}} and that of the connected maximal compact subgroup K. The {\displaystyle {\mathfrak {g}}}-module structure of π allows algebraic, especially homological, methods to be applied, and the K-module structure allows harmonic analysis to be carried out in a way similar to that on connected compact semisimple Lie groups.
== Representation on an algebra ==
If we have a Lie superalgebra L, then a representation of L on an algebra is a (not necessarily associative) Z2 graded algebra A which is a representation of L as a Z2 graded vector space and, in addition, the elements of L act as derivations/antiderivations on A.
More specifically, if H is a pure element of L and x and y are pure elements of A,
{\displaystyle H[xy]=(H[x])y+(-1)^{xH}x(H[y])}
Also, if A is unital, then
H[1] = 0
Now, for the case of a representation of a Lie algebra, we simply drop all the gradings and the factors of (−1) raised to some power.
A Lie (super)algebra is an algebra and it has an adjoint representation of itself. This is a representation on an algebra: the (anti)derivation property is the super Jacobi identity.
If a vector space is both an associative algebra and a Lie algebra and the adjoint representation of the Lie algebra on itself is a representation on an algebra (i.e., acts by derivations on the associative algebra structure), then it is a Poisson algebra. The analogous observation for Lie superalgebras gives the notion of a Poisson superalgebra.
== See also ==
Representation of a Lie group
Weight (representation theory)
Weyl's theorem on complete reducibility
Root system
Weyl character formula
Representation theory of a connected compact Lie group
Whitehead's lemma (Lie algebras)
Kazhdan–Lusztig conjectures
Quillen's lemma - analog of Schur's lemma
== Notes ==
== References ==
Bernstein I.N., Gelfand I.M., Gelfand S.I., "Structure of Representations that are generated by vectors of highest weight," Functional. Anal. Appl. 5 (1971)
Dixmier, J. (1977), Enveloping Algebras, Amsterdam, New York, Oxford: North-Holland, ISBN 0-444-11077-1.
A. Beilinson and J. Bernstein, "Localisation de g-modules," Comptes Rendus de l'Académie des Sciences, Série I, vol. 292, iss. 1, pp. 15–18, 1981.
Bäuerle, G.G.A; de Kerf, E.A. (1990). A. van Groesen; E.M. de Jager (eds.). Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 1. North-Holland. ISBN 0-444-88776-8.
Bäuerle, G.G.A; de Kerf, E.A.; ten Kroode, A.P.E. (1997). A. van Groesen; E.M. de Jager (eds.). Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 7. North-Holland. ISBN 978-0-444-82836-1 – via ScienceDirect.
Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97495-8. MR 1153249.
D. Gaitsgory, Geometric Representation theory, Math 267y, Fall 2005
Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
Rossmann, Wulf (2002), Lie Groups - An Introduction Through Linear Groups, Oxford Graduate Texts in Mathematics, Oxford Science Publications, ISBN 0-19-859683-9
Ryoshi Hotta, Kiyoshi Takeuchi, Toshiyuki Tanisaki, D-modules, perverse sheaves, and representation theory; translated by Kiyoshi Takeuchi
Humphreys, James (1972), Introduction to Lie Algebras and Representation Theory, Graduate Texts in Mathematics, vol. 9, Springer, ISBN 9781461263982
Jacobson, Nathan (1979) [1962]. Lie algebras. Dover. ISBN 978-0-486-63832-4.
Garrett Birkhoff; Philip M. Whitman (1949). "Representation of Jordan and Lie Algebras" (PDF). Trans. Amer. Math. Soc. 65: 116–136. doi:10.1090/s0002-9947-1949-0029366-6.
Kirillov, A. (2008). An Introduction to Lie Groups and Lie Algebras. Cambridge Studies in Advanced Mathematics. Vol. 113. Cambridge University Press. ISBN 978-0521889698.
Knapp, Anthony W. (2001), Representation theory of semisimple groups. An overview based on examples., Princeton Landmarks in Mathematics, Princeton University Press, ISBN 0-691-09089-0 (elementary treatment for SL(2,C))
Knapp, Anthony W. (2002), Lie Groups Beyond an Introduction (second ed.), Birkhäuser
== Further reading ==
Ben-Zvi, David; Nadler, David (2012). "Beilinson-Bernstein localization over the Harish-Chandra center". arXiv:1209.0188v1 [math.RT].
Modular representation theory is a branch of mathematics, and is the part of representation theory that studies linear representations of finite groups over a field K of positive characteristic p, necessarily a prime number. As well as having applications to group theory, modular representations arise naturally in other branches of mathematics, such as algebraic geometry, coding theory, combinatorics and number theory.
Within finite group theory, character-theoretic results proved by Richard Brauer using modular representation theory played an important role in early progress towards the classification of finite simple groups, especially for simple groups whose characterization was not amenable to purely group-theoretic methods because their Sylow 2-subgroups were too small in an appropriate sense. Also, a general result on embedding of elements of order 2 in finite groups called the Z* theorem, proved by George Glauberman using the theory developed by Brauer, was particularly useful in the classification program.
If the characteristic p of K does not divide the order |G|, then modular representations are completely reducible, as with ordinary (characteristic 0) representations, by virtue of Maschke's theorem. In the other case, when |G| ≡ 0 (mod p), the process of averaging over the group needed to prove Maschke's theorem breaks down, and representations need not be completely reducible. Much of the discussion below implicitly assumes that the field K is sufficiently large (for example, K algebraically closed suffices), otherwise some statements need refinement.
== History ==
The earliest work on representation theory over finite fields is by Dickson (1902) who showed that when p does not divide the order of the group, the representation theory is similar to that in characteristic 0. He also investigated modular invariants of some finite groups. The systematic study of modular representations, when the characteristic p divides the order of the group, was started by Brauer (1935) and was continued by him for the next few decades.
== Example ==
Finding a representation of the cyclic group of two elements over F2 is equivalent to the problem of finding matrices whose square is the identity matrix. Over every field of characteristic other than 2, there is always a basis such that the matrix can be written as a diagonal matrix with only 1 or −1 occurring on the diagonal, such as
{\displaystyle {\begin{bmatrix}1&0\\0&-1\end{bmatrix}}.}
Over F2, there are many other possible matrices, such as
{\displaystyle {\begin{bmatrix}1&1\\0&1\end{bmatrix}}.}
Over an algebraically closed field of positive characteristic, the representation theory of a finite cyclic group is fully explained by the theory of the Jordan normal form. Non-diagonal Jordan forms occur when the characteristic divides the order of the group.
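A small sketch (assuming NumPy) of this example: the matrix squares to the identity over F2, yet it is a non-trivial Jordan block, so the representation is indecomposable without being irreducible.

```python
# Order-2 element of GL(2, F_2) that is not diagonalizable.
import numpy as np

M = np.array([[1, 1], [0, 1]])
assert np.array_equal((M @ M) % 2, np.eye(2, dtype=int))   # M^2 = I over F_2

# Its only eigenvalue over F_2 is 1, with a one-dimensional eigenspace,
# so M - I has rank 1 and M is a single non-trivial Jordan block.
assert np.array_equal((M - np.eye(2, dtype=int)) % 2,
                      np.array([[0, 1], [0, 0]]))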
== Ring theory interpretation ==
Given a field K and a finite group G, the group algebra K[G] (which is the K-vector space with K-basis consisting of the elements of G, endowed with algebra multiplication by extending the multiplication of G by linearity) is an Artinian ring.
When the order of G is divisible by the characteristic of K, the group algebra is not semisimple, hence has non-zero Jacobson radical. In that case, there are finite-dimensional modules for the group algebra that are not projective modules. By contrast, in the characteristic 0 case every irreducible representation is a direct summand of the regular representation, hence is projective.
== Brauer characters ==
Modular representation theory was developed by Richard Brauer from about 1940 onwards to study in greater depth the relationships between the characteristic p representation theory, ordinary character theory and the structure of G, especially as the latter relates to the embedding of, and relationships between, its p-subgroups. Such results can be applied in group theory to problems not directly phrased in terms of representations.
Brauer introduced the notion now known as the Brauer character. When K is algebraically closed of positive characteristic p, there is a bijection between roots of unity in K and complex roots of unity of order coprime to p. Once a choice of such a bijection is fixed, the Brauer character of a representation assigns to each group element of order coprime to p the sum of complex roots of unity corresponding to the eigenvalues (including multiplicities) of that element in the given representation.
The Brauer character of a representation determines its composition factors but not, in general, its equivalence type. The irreducible Brauer characters are those afforded by the simple modules. These are integral (though not necessarily non-negative) combinations of the restrictions to elements of order coprime to p of the ordinary irreducible characters. Conversely, the restriction to the elements of order coprime to p of each ordinary irreducible character is uniquely expressible as a non-negative integer combination of irreducible Brauer characters.
== Reduction (mod p) ==
In the theory initially developed by Brauer, the link between ordinary representation theory and modular representation theory is best exemplified by considering the group algebra of the group G over a complete discrete valuation ring R with residue field K of positive characteristic p and field of fractions F of characteristic 0, such as the p-adic integers. The structure of R[G] is closely related both to the structure of the group algebra K[G] and to the structure of the semisimple group algebra F[G], and there is much interplay between the module theory of the three algebras.
Each R[G]-module naturally gives rise to an F[G]-module, and, by a process often known informally as reduction (mod p), to a K[G]-module. On the other hand, since R is a principal ideal domain, each finite-dimensional F[G]-module arises by extension of scalars from an R[G]-module. In general, however, not all K[G]-modules arise as reductions (mod p) of R[G]-modules. Those that do are liftable.
== Number of simple modules ==
In ordinary representation theory, the number of simple modules k(G) is equal to the number of conjugacy classes of G. In the modular case, the number l(G) of simple modules is equal to the number of conjugacy classes whose elements have order coprime to the relevant prime p, the so-called p-regular classes.
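As an illustration (a standard small example, assuming Python 3.9+ for math.lcm; the helper names are illustrative): for G = S3 and p = 3, there are k(G) = 3 conjugacy classes in all, but only 2 of them consist of elements of order coprime to 3, so l(G) = 2. The sketch below counts classes by cycle type.

```python
# Count conjugacy classes and p-regular classes of S_3 by cycle type.
from itertools import permutations
from math import gcd, lcm

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a tuple of images."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            n += 1
        lengths.append(n)
    return tuple(sorted(lengths))

S3 = list(permutations(range(3)))
classes = {cycle_type(p) for p in S3}                    # {(1,1,1), (1,2), (3,)}
order = lambda ct: lcm(*ct)                              # element order = lcm of cycle lengths
k = len(classes)                                         # k(G) = 3
l3 = sum(1 for ct in classes if gcd(order(ct), 3) == 1)  # 3-regular classes
print(k, l3)                                             # prints: 3 2
```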
== Blocks and the structure of the group algebra ==
In modular representation theory, while Maschke's theorem does not hold when the characteristic divides the group order, the group algebra may be decomposed as the direct sum of a maximal collection of two-sided ideals known as blocks. When the field F has characteristic 0, or characteristic coprime to the group order, there is still such a decomposition of the group algebra F[G] as a sum of blocks (one for each isomorphism type of simple module), but the situation is relatively transparent when F is sufficiently large: each block is a full matrix algebra over F, the endomorphism ring of the vector space underlying the associated simple module.
To obtain the blocks, the identity element of the group G is decomposed as a sum of primitive idempotents in Z(R[G]), the center of the group algebra over the maximal order R of F. The block corresponding to the primitive idempotent e is the two-sided ideal e R[G]. For each indecomposable R[G]-module, there is only one such primitive idempotent that does not annihilate it, and the module is said to belong to (or to be in) the corresponding block (in which case, all its composition factors also belong to that block). In particular, each simple module belongs to a unique block. Each ordinary irreducible character may also be assigned to a unique block according to its decomposition as a sum of irreducible Brauer characters. The block containing the trivial module is known as the principal block.
== Projective modules ==
In ordinary representation theory, every indecomposable module is irreducible, and so every module is projective. However, the simple modules with characteristic dividing the group order are rarely projective. Indeed, if a simple module is projective, then it is the only simple module in its block, which is then isomorphic to the endomorphism algebra of the underlying vector space, a full matrix algebra. In that case, the block is said to have 'defect 0'. Generally, the structure of projective modules is difficult to determine.
For the group algebra of a finite group, the (isomorphism types of) projective indecomposable modules are in a one-to-one correspondence with the (isomorphism types of) simple modules: the socle of each projective indecomposable is simple (and isomorphic to the top), and this affords the bijection, as non-isomorphic projective indecomposables have
non-isomorphic socles. The multiplicity of a projective indecomposable module as a summand of the group algebra (viewed as the regular module) is the dimension of its socle (for large enough fields of characteristic zero, this recovers the fact that each simple module occurs with multiplicity equal to its dimension as a direct summand of the regular module).
Each projective indecomposable module (and hence each projective module) in positive characteristic p may be lifted to a module in characteristic 0. Using the ring R as above, with residue field K, the identity element of G may be decomposed as a sum of mutually orthogonal primitive idempotents (not necessarily
central) of K[G]. Each projective indecomposable K[G]-module is isomorphic to e.K[G] for a primitive idempotent e that occurs in this decomposition. The idempotent e lifts to a primitive idempotent, say E, of R[G], and the left module E.R[G] has reduction (mod p) isomorphic to e.K[G].
== Some orthogonality relations for Brauer characters ==
When a projective module is lifted, the associated character vanishes on all elements of order divisible by p, and (with consistent choice of roots of unity), agrees with the Brauer character of the original characteristic p module on p-regular elements. The (usual character-ring) inner product of the Brauer character of a projective indecomposable with any other Brauer character can thus be defined: this is 0 if the
second Brauer character is that of the socle of a non-isomorphic projective indecomposable, and 1
if the second Brauer character is that of its own socle. The multiplicity of an ordinary irreducible
character in the character of the lift of a projective indecomposable is equal to the number
of occurrences of the Brauer character of the socle of the projective indecomposable when the restriction of the ordinary character to p-regular elements is expressed as a sum of irreducible Brauer characters.
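In symbols, with a normalization that is common in the literature (this display is a gloss on the preceding paragraph, not a formula quoted from it): writing φ_j for the irreducible Brauer characters, Φ_i for the Brauer characters of the projective indecomposables, and G_{p′} for the set of p-regular elements of G,

```latex
\langle \Phi_i, \varphi_j \rangle
  = \frac{1}{|G|} \sum_{g \in G_{p'}} \Phi_i(g)\, \varphi_j(g^{-1})
  = \delta_{ij}.
```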
== Decomposition matrix and Cartan matrix ==
The composition factors of the projective indecomposable modules may be calculated as follows:
Given the ordinary irreducible and irreducible Brauer characters of a particular finite group, the irreducible ordinary characters may be decomposed as non-negative integer combinations of the irreducible Brauer characters. The integers involved can be placed in a matrix, with the ordinary irreducible characters assigned rows and the irreducible Brauer characters assigned columns. This is referred to as the decomposition matrix, and is frequently labelled D. It is customary to place the trivial ordinary and Brauer characters in the first row and column respectively. The product of the transpose of D with D itself
results in the Cartan matrix, usually denoted C; this is a symmetric matrix such that the entries in its j-th row are the multiplicities of the respective simple modules as composition
factors of the j-th projective indecomposable module. The Cartan
matrix is non-singular; in fact, its determinant is a power of the
characteristic of K.
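As a concrete illustration of C = DᵀD, the following NumPy sketch uses the decomposition matrix of S_3 at p = 3 (a standard textbook example, quoted here on trust rather than derived) and recovers the Cartan matrix together with its determinant, a power of 3:

```python
import numpy as np

# Decomposition matrix D of S_3 at p = 3.  Rows: ordinary irreducible
# characters (trivial, sign, 2-dimensional); columns: the two irreducible
# Brauer characters, trivial first.
D = np.array([
    [1, 0],   # trivial character
    [0, 1],   # sign character
    [1, 1],   # 2-dimensional character
])

C = D.T @ D                      # Cartan matrix C = D^T D
print(C)                         # [[2 1]
                                 #  [1 2]]
print(round(np.linalg.det(C)))   # 3, a power of p = 3
```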
Since a projective indecomposable module in a given block has
all its composition factors in that same block, each block has
its own Cartan matrix.
== Defect groups ==
To each block B of the group algebra K[G], Brauer associated a certain p-subgroup, known as its defect group (where p is the characteristic of K). Formally, it is the largest p-subgroup D of G for which there is a Brauer correspondent of B for the subgroup DC_G(D), where C_G(D) is the centralizer of D in G.
The defect group of a block is unique up to conjugacy and has a strong influence on the structure of the block. For example, if the defect group is trivial, then the block contains just one simple module, just one ordinary character, the ordinary and Brauer irreducible characters agree on elements of order prime to the relevant characteristic p, and the simple module is projective. At the other extreme, when K has characteristic p, the Sylow p-subgroup of the finite group G is a defect group for the principal block of K[G].
The order of the defect group of a block has many arithmetical characterizations related to representation theory. It is the largest invariant factor of the Cartan matrix of the block, and occurs with
multiplicity one. Also, the power of p dividing the index of the defect group of a block is the greatest common divisor of the powers of p dividing the dimensions of the simple modules in that block, and this coincides with the greatest common divisor of the powers of p dividing the degrees of the ordinary irreducible characters in that block.
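The characterization by character degrees turns into a short computation. A minimal sketch (the helper names are ours; the degree data is the standard S_3 example at p = 3, where the Sylow 3-subgroup is the defect group of the principal block):

```python
def p_val(n: int, p: int) -> int:
    """Exponent of the largest power of p dividing n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def block_defect(group_order: int, degrees: list[int], p: int) -> int:
    """Defect d of a block: p^(a - d) is the gcd of the p-parts of the
    ordinary character degrees in the block, where p^a is the p-part
    of the group order."""
    a = p_val(group_order, p)
    return a - min(p_val(chi, p) for chi in degrees)

# Principal block of S_3 at p = 3: ordinary degrees 1, 1, 2.
print(block_defect(6, [1, 1, 2], 3))  # 1, i.e. defect group of order 3
```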
Other relationships between the defect group of a block and character theory include Brauer's result that if no conjugate of the p-part of a group element g is in the defect group of a given block, then each irreducible character in that block vanishes at g. This is one of many consequences of Brauer's second main theorem.
The defect group of a block also has several characterizations in the more module-theoretic approach to block theory, building on the work of J. A. Green, which associates a p-subgroup
known as the vertex to an indecomposable module, defined in terms of relative projectivity of the module. For example, the vertex of each indecomposable module in a block is contained (up to conjugacy)
in the defect group of the block, and no proper subgroup of the defect group has that property.
Brauer's first main theorem states that the number of blocks of a finite group that have a given p-subgroup as defect group is the same as the corresponding number for the normalizer in the group of that p-subgroup.
The easiest block structure to analyse with non-trivial defect group is when the latter is cyclic. Then there are only finitely many isomorphism types of indecomposable modules in the block, and the structure of the block is by now well understood, by virtue of work of Brauer, E.C. Dade, J.A. Green and J.G. Thompson, among others. In all other cases, there are infinitely many isomorphism types of indecomposable modules in the block.
Blocks whose defect groups are not cyclic can be divided into two types: tame and wild. The tame blocks (which only occur for the prime 2) have as a defect group a dihedral group, semidihedral group or (generalized) quaternion group, and their structure has been broadly determined in a series of papers by Karin Erdmann. The indecomposable modules in wild blocks are extremely difficult to classify, even in principle.
== References ==
Brauer, R. (1935), Über die Darstellung von Gruppen in Galoisschen Feldern, Actualités Scientifiques et Industrielles, vol. 195, Paris: Hermann et cie, pp. 1–15
Dickson, Leonard Eugene (1902), "On the Group Defined for any Given Field by the Multiplication Table of Any Given Finite Group", Transactions of the American Mathematical Society, 3 (3), Providence, R.I.: American Mathematical Society: 285–301, doi:10.2307/1986379, ISSN 0002-9947, JSTOR 1986379
Jean-Pierre Serre (1977). Linear Representations of Finite Groups. Springer-Verlag. ISBN 0-387-90190-6.
Walter Feit (1982). The representation theory of finite groups. North-Holland Mathematical Library. Vol. 25. Amsterdam-New York: North-Holland Publishing. ISBN 0-444-86155-6.
In abstract algebra, an adelic algebraic group is a semitopological group defined by an algebraic group G over a number field K, and the adele ring A = A(K) of K. It consists of the points of G having values in A; the definition of the appropriate topology is straightforward only in case G is a linear algebraic group. In the case of G being an abelian variety, it presents a technical obstacle, though it is known that the concept is potentially useful in connection with Tamagawa numbers. Adelic algebraic groups are widely used in number theory, particularly for the theory of automorphic representations, and the arithmetic of quadratic forms.
In case G is a linear algebraic group, it is an affine algebraic variety in affine N-space. The topology on the adelic algebraic group G(A) is taken to be the subspace topology in A^N, the Cartesian product of N copies of the adele ring. In this case, G(A) is a topological group.
== History of the terminology ==
Historically, the idèles were introduced by Chevalley (1936) under the name "élément idéal", which is "ideal element" in French, which Chevalley (1940) then abbreviated to "idèle" following a suggestion of Hasse. (In these papers he also gave the ideles a non-Hausdorff topology.) This was to formulate class field theory for infinite extensions in terms of topological groups. Weil (1938) defined (but did not name) the ring of adeles in the function field case and pointed out that Chevalley's group of Idealelemente was the group of invertible elements of this ring. Tate (1950) defined the ring of adeles as a restricted direct product, though he called its elements "valuation vectors" rather than adeles.
Chevalley (1951) defined the ring of adeles in the function field case, under the name "repartitions"; the contemporary term adèle stands for 'additive idèles', and can also be a French woman's name. The term adèle was in use shortly afterwards (Jaffard 1953) and may have been introduced by André Weil. The general construction of adelic algebraic groups by Ono (1957) followed the algebraic group theory founded by Armand Borel and Harish-Chandra.
== Ideles ==
An important example, the idele group (ideal element group) I(K), is the case of G = GL_1. Here the set of ideles consists of the invertible adeles; but the topology on the idele group is not their topology as a subset of the adeles. Instead, considering that GL_1 lies in two-dimensional affine space as the 'hyperbola' defined parametrically by {(t, t^{-1})}, the topology correctly assigned to the idele group is that induced by inclusion in A^2; composing with a projection, it follows that the ideles carry a finer topology than the subspace topology from A.
Inside A^N, the product K^N lies as a discrete subgroup. This means that G(K) is a discrete subgroup of G(A), also. In the case of the idele group, the quotient group I(K)/K^× is the idele class group. It is closely related to (though larger than) the ideal class group. The idele class group is not itself compact; the ideles must first be replaced by the ideles of norm 1, and then the image of those in the idele class group is a compact group; the proof of this is essentially equivalent to the finiteness of the class number.
The study of the Galois cohomology of idele class groups is a central matter in class field theory. Characters of the idele class group, now usually called Hecke characters or Größencharacters, give rise to the most basic class of L-functions.
== Tamagawa numbers ==
For more general G, the Tamagawa number is defined (or indirectly computed) as the measure of
G(A)/G(K).
Tsuneo Tamagawa's observation was that, starting from an invariant differential form ω on G, defined over K, the measure involved was well-defined: while ω could be replaced by cω with c a non-zero element of K, the product formula for valuations in K is reflected by the independence from c of the measure of the quotient, for the product measure constructed from ω on each effective factor. The computation of Tamagawa numbers for semisimple groups contains important parts of classical quadratic form theory.
== References ==
Chevalley, Claude (1936), "Généralisation de la théorie du corps de classes pour les extensions infinies.", Journal de Mathématiques Pures et Appliquées (in French), 15: 359–371, JFM 62.1153.02
Chevalley, Claude (1940), "La théorie du corps de classes", Annals of Mathematics, Second Series, 41 (2): 394–418, doi:10.2307/1969013, ISSN 0003-486X, JSTOR 1969013, MR 0002357
Chevalley, Claude (1951), Introduction to the Theory of Algebraic Functions of One Variable, Mathematical Surveys, No. VI, Providence, R.I.: American Mathematical Society, MR 0042164
Jaffard, Paul (1953), Anneaux d'adèles (d'après Iwasawa), Séminaire Bourbaki, Secrétariat mathématique, Paris, MR 0157859
Ono, Takashi (1957), "Sur une propriété arithmétique des groupes algébriques commutatifs", Bulletin de la Société Mathématique de France, 85: 307–323, doi:10.24033/bsmf.1491, ISSN 0037-9484, MR 0094362
Tate, John T. (1950), "Fourier analysis in number fields, and Hecke's zeta-functions", Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), Thompson, Washington, D.C., pp. 305–347, ISBN 978-0-9502734-2-6, MR 0217026
Weil, André (1938), "Zur algebraischen Theorie der algebraischen Funktionen.", Journal für die Reine und Angewandte Mathematik (in German), 179: 129–133, doi:10.1515/crll.1938.179.129, ISSN 0075-4102, S2CID 116472982
== External links ==
Rapinchuk, A.S. (2001) [1994], "Tamagawa number", Encyclopedia of Mathematics, EMS Press
In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations
of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N, i.e. φ is a self-map; in particular, any element of the center of a group must act as a scalar operator (a scalar multiple of the identity) on M. The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which are due to Jacques Dixmier and Daniel Quillen.
== Representation theory of groups ==
Representation theory is the study of homomorphisms from a group, G, into the general linear group GL(V) of a vector space V; i.e., into the group of automorphisms of V. (Let us here restrict ourselves to the case when the underlying field of V is ℂ, the field of complex numbers.) Such a homomorphism is called a representation of G on V. A representation on V is a special case of a group action on V, but rather than permit any arbitrary bijections (permutations) of the underlying set of V, we restrict ourselves to invertible linear transformations.
Let ρ be a representation of G on V. It may be the case that V has a subspace, W, such that for every element g of G, the invertible linear map ρ(g) preserves or fixes W, so that (ρ(g))(w) is in W for every w in W, and (ρ(g))(v) is not in W for any v not in W. In other words, every linear map ρ(g): V→V is also an automorphism of W, ρ(g): W→W, when its domain is restricted to W. We say W is stable under G, or stable under the action of G. It is clear that if we consider W on its own as a vector space, then there is an obvious representation of G on W—the representation we get by restricting each map ρ(g) to W. When W has this property, we call W with the given representation a subrepresentation of V. Every representation of G has itself and the zero vector space as trivial subrepresentations. A representation of G with no non-trivial subrepresentations is called an irreducible representation. Irreducible representations – like the prime numbers, or like the simple groups in group theory – are the building blocks of representation theory. Many of the initial questions and theorems of representation theory deal with the properties of irreducible representations.
Just as we are interested in homomorphisms between groups, and in continuous maps between topological spaces, we are also interested in certain functions between representations of G. Let V and W be vector spaces, and let ρ_V and ρ_W be representations of G on V and W respectively. Then we define a G-linear map f from V to W to be a linear map from V to W that is equivariant under the action of G; that is, for every g in G, ρ_W(g) ∘ f = f ∘ ρ_V(g). In other words, we require that f commutes with the action of G. G-linear maps are the morphisms in the category of representations of G.
Schur's Lemma is a theorem that describes what G-linear maps can exist between two irreducible representations of G.
=== Statement and Proof of the Lemma ===
Theorem (Schur's Lemma): Let V and W be vector spaces, and let ρ_V and ρ_W be irreducible representations of G on V and W respectively.

If V and W are not isomorphic, then there are no nontrivial G-linear maps between them.

If V = W is finite-dimensional over an algebraically closed field (e.g. ℂ) and ρ_V = ρ_W, then the only nontrivial G-linear maps are the scalar multiples of the identity. (A scalar multiple of the identity is sometimes called a homothety.)
Proof: Suppose f is a nonzero G-linear map from V to W. We will prove that V and W are isomorphic. Let V′ be the kernel, or null space, of f in V, described as V′ = {x ∈ V | f(x) = 0}. V′ is a subspace of V, as it is nonempty (it contains the zero vector) and is closed under addition and scalar multiplication.
By the assumption that f is G-linear, for every g in G and every choice of x in V′, f((ρ_V(g))(x)) = (ρ_W(g))(f(x)) = (ρ_W(g))(0) = 0. But saying that f(ρ_V(g)(x)) = 0 is the same as saying that ρ_V(g)(x) is in the null space of f : V → W. So V′ is stable under the action of G; it is a subrepresentation. Since by assumption V is irreducible, V′ must be zero; so f is injective.
By an identical argument we will show f is also surjective; since f((ρ_V(g))(x)) = (ρ_W(g))(f(x)), we can conclude that for an arbitrary choice of f(x) in the image of f, ρ_W(g) sends f(x) somewhere else in the image of f; in particular it sends it to the image of ρ_V(g)x. So the image of f is a subspace W′ of W stable under the action of G, so it is a subrepresentation and f must be zero or surjective. By assumption it is not zero, so it is surjective, in which case it is an isomorphism.
In the event that V = W is finite-dimensional over an algebraically closed field and the representations are the same, let λ be an eigenvalue of f. (An eigenvalue exists for every linear transformation on a finite-dimensional vector space over an algebraically closed field.) Let f′ = f − λI. Then if x is an eigenvector of f corresponding to λ, f′(x) = 0. It is clear that f′ is a G-linear map, because the sum or difference of G-linear maps is also G-linear. Then we return to the above argument, where we used the fact that a map was G-linear to conclude that the kernel is a subrepresentation, and is thus either zero or equal to all of V; because it is not zero (it contains x) it must be all of V, and so f′ is zero, so f = λI.
=== Corollary of Schur's Lemma ===
An important corollary of Schur's lemma follows from the observation that we can often build explicit G-linear maps between representations by "averaging" over the action of individual group elements on some fixed linear operator. In particular, given any irreducible representation, such objects will satisfy the assumptions of Schur's lemma, hence be scalar multiples of the identity. More precisely:
Corollary: Using the same notation from the previous theorem, let h be a linear mapping of V into W, and set

h_0 = \frac{1}{|G|} \sum_{g \in G} (\rho_W(g))^{-1} \, h \, \rho_V(g).

Then:

If V and W are not isomorphic, then h_0 = 0.

If V = W is finite-dimensional over an algebraically closed field (e.g. ℂ) and ρ_V = ρ_W, then h_0 = (Tr[h]/n) I, where n is the dimension of V. That is, h_0 is a homothety of ratio Tr[h]/n.
Proof: Let us first show that h_0 is a G-linear map, i.e., ρ_W(g) ∘ h_0 = h_0 ∘ ρ_V(g) for all g ∈ G. Indeed, consider that

\begin{aligned}
(\rho_W(g'))^{-1} h_0 \rho_V(g') &= \frac{1}{|G|} \sum_{g \in G} (\rho_W(g'))^{-1} (\rho_W(g))^{-1} \, h \, \rho_V(g) \rho_V(g') \\
&= \frac{1}{|G|} \sum_{g \in G} (\rho_W(g g'))^{-1} \, h \, \rho_V(g g') \\
&= h_0.
\end{aligned}

Now applying the previous theorem, for case 1 it follows that h_0 = 0, and for case 2 it follows that h_0 is a scalar multiple of the identity matrix (i.e., h_0 = μI). To determine the scalar multiple μ, consider that

\mathrm{Tr}[h_0] = \frac{1}{|G|} \sum_{g \in G} \mathrm{Tr}[(\rho_V(g))^{-1} \, h \, \rho_V(g)] = \mathrm{Tr}[h].

It then follows that μ = Tr[h]/n.
This result has numerous applications. For example, in the context of quantum information science, it is used to derive results about complex projective t-designs. In the context of molecular orbital theory, it is used to restrict atomic orbital interactions based on the molecular symmetry.
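The corollary is also easy to probe numerically. A minimal sketch, not taken from the text: it realizes the two-dimensional irreducible representation of S_3 by a rotation of order 3 and a reflection, generates the six matrices by closure, and checks that averaging an arbitrary h yields (Tr h/2)·I:

```python
import numpy as np

# Generators of the 2-dimensional irreducible representation of S_3:
# rotation by 120 degrees and a reflection.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
rot = np.array([[c, -s], [s, c]])
ref = np.array([[1.0, 0.0], [0.0, -1.0]])

# Generate the whole 6-element matrix group by closure.
group = [np.eye(2)]
frontier = [np.eye(2)]
while frontier:
    g = frontier.pop()
    for gen in (rot, ref):
        cand = g @ gen
        if not any(np.allclose(cand, k) for k in group):
            group.append(cand)
            frontier.append(cand)
assert len(group) == 6

# Average an arbitrary linear map h over the group action.
h = np.array([[1.0, 2.0], [3.0, 4.0]])
h0 = sum(np.linalg.inv(g) @ h @ g for g in group) / len(group)

# Schur's lemma predicts h0 = (Tr h / n) I with n = 2.
print(np.allclose(h0, np.trace(h) / 2 * np.eye(2)))  # True
```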
== Formulation in the language of modules ==
Theorem: Let M and N be two simple modules over a ring R. Then any homomorphism f : M → N of R-modules is either zero or an isomorphism. In particular, the endomorphism ring of a simple module is a division ring.
Proof: Consider the kernel and image of f: since ker(f) ⊆ M and im(f) ⊆ N are submodules of simple modules, by definition they are either zero or equal to M and N respectively. In particular, either ker(f) = M, meaning that f is the zero morphism, or ker(f) = 0, meaning that f is injective. In the latter case, the first isomorphism theorem tells us furthermore that im(f) ≅ M/ker(f) ≅ M is not trivial, and thus im(f) = N: this shows that f is in addition surjective, hence bijective and thus an isomorphism of R-modules.
The group version is a special case of the module version, since any representation of a group G can equivalently be viewed as a module over the group ring of G.
Schur's lemma is frequently applied in the following particular case. Suppose that R is an algebra over a field k and the vector space M = N is a simple module of R. Then Schur's lemma says that the endomorphism ring of the module M is a division algebra over k. If M is finite-dimensional, this division algebra is finite-dimensional. If k is the field of complex numbers, the only option is that this division algebra is the complex numbers. Thus the endomorphism ring of the module M is "as small as possible". In other words, the only linear transformations of M that commute with all transformations coming from R are scalar multiples of the identity.
More generally, if R is an algebra over an algebraically closed field k and M is a simple R-module satisfying dim_k(M) < #k (the cardinality of k), then End_R(M) = k. So in particular, if R is an algebra over an uncountable algebraically closed field k and M is a simple module that is at most countably-dimensional, the only linear transformations of M that commute with all transformations coming from R are scalar multiples of the identity.
When the field is not algebraically closed, the case where the endomorphism ring is as small as possible is still of particular interest. A simple module over a k-algebra is said to be absolutely simple if its endomorphism ring is isomorphic to k. This is in general stronger than being irreducible over the field k, and implies the module is irreducible even over the algebraic closure of k.
=== Application to central characters ===
Definition: Let R be a k-algebra. An R-module M is said to have central character χ : Z(R) → k (here, Z(R) is the center of R) if for every m ∈ M and z ∈ Z(R) there is n ∈ ℕ such that (z − χ(z))^n m = 0, i.e. if every m ∈ M is a generalized eigenvector of z with eigenvalue χ(z).
If End_R(M) = k, say in the case sketched above, every element of Z(R) acts on M as an R-endomorphism and hence as a scalar. Thus, there is a ring homomorphism χ : Z(R) → k such that (z − χ(z))m = 0 for all z ∈ Z(R) and m ∈ M. In particular, M has central character χ.
If R = U(𝔤) and k = ℂ, where U(𝔤) is the universal enveloping algebra of a Lie algebra 𝔤, a central character is also referred to as an infinitesimal character, and the previous considerations show that if 𝔤 is finite-dimensional (so that U(𝔤) is countable-dimensional), then every simple 𝔤-module has an infinitesimal character.
In the case where k = ℂ and R = ℂ[G] is the group algebra of a finite group G, the same conclusion follows. Here, the center of R consists of elements of the shape ∑_{g∈G} a(g) g, where a : G → ℂ is a class function, i.e. invariant under conjugation. Since the set of class functions is spanned by the characters χ_π of the irreducible representations π ∈ Ĝ, the central character is determined by what it maps

u_π := \frac{1}{\#G} \sum_{g \in G} \chi_\pi(g) \, g

to (for all π ∈ Ĝ). Since all u_π are idempotent, they are each mapped either to 0 or to 1, and since u_π u_{π′} = 0 for two different irreducible representations, only one u_π can be mapped to 1: the one corresponding to the module M.
== Representations of Lie groups and Lie algebras ==
We now describe Schur's lemma as it is usually stated in the context of representations of Lie groups and Lie algebras. There are three parts to the result.
First, suppose that V_1 and V_2 are irreducible representations of a Lie group or Lie algebra over any field and that φ : V_1 → V_2 is an intertwining map. Then φ is either zero or an isomorphism.

Second, if V is an irreducible representation of a Lie group or Lie algebra over an algebraically closed field and φ : V → V is an intertwining map, then φ is a scalar multiple of the identity map.

Third, suppose V_1 and V_2 are irreducible representations of a Lie group or Lie algebra over an algebraically closed field and φ_1, φ_2 : V_1 → V_2 are nonzero intertwining maps. Then φ_1 = λφ_2 for some scalar λ.
A simple corollary of the second statement is that every complex irreducible representation of an abelian group is one-dimensional.
=== Application to the Casimir element ===
Suppose 𝔤 is a Lie algebra and U(𝔤) is the universal enveloping algebra of 𝔤. Let π : 𝔤 → End(V) be an irreducible representation of 𝔤 over an algebraically closed field. The universal property of the universal enveloping algebra ensures that π extends to a representation of U(𝔤) acting on the same vector space. It follows from the second part of Schur's lemma that if x belongs to the center of U(𝔤), then π(x) must be a multiple of the identity operator. In the case when 𝔤 is a complex semisimple Lie algebra, an important example of the preceding construction is the one in which x is the (quadratic) Casimir element C. In this case, π(C) = λ_π I, where λ_π is a constant that can be computed explicitly in terms of the highest weight of π. The action of the Casimir element plays an important role in the proof of complete reducibility for finite-dimensional representations of semisimple Lie algebras.
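For instance, a standard computation not spelled out here: for 𝔤 = sl_2(ℂ) with conventions [h, e] = 2e, [h, f] = −2f, [e, f] = h, and the Casimir element normalized as C = ef + fe + ½h², applying C to a highest-weight vector v of the irreducible representation with highest weight m (so that ev = 0, hv = mv, and ef = fe + h) gives

```latex
\pi(C)\,v = \bigl(2fe + h + \tfrac{1}{2}h^2\bigr)v
          = \bigl(m + \tfrac{1}{2}m^2\bigr)v,
\qquad\text{so}\qquad
\lambda_\pi = \tfrac{1}{2}\,m(m+2).
```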
== Generalization to non-simple modules ==
The one-module version of Schur's lemma admits generalizations for modules M that are not necessarily simple. They express relations between the module-theoretic properties of M and the properties of the endomorphism ring of M.

Theorem (Lam 2001, §19): A module is said to be strongly indecomposable if its endomorphism ring is a local ring. For a module M of finite length, the following properties are equivalent:

M is indecomposable;
M is strongly indecomposable;
Every endomorphism of M is either nilpotent or invertible.
The converse of Schur's lemma does not hold in general, however, since there exist modules that are not simple but whose endomorphism algebra is a division ring. Such modules are necessarily indecomposable, and so cannot exist over semisimple rings such as the complex group ring of a finite group. However, even over the ring of integers, the module of rational numbers has an endomorphism ring that is a division ring, specifically the field of rational numbers. Even for group rings, there are examples when the characteristic of the field divides the order of the group: the Jacobson radical of the projective cover of the one-dimensional representation of the alternating group A5 over the finite field with three elements F3 has F3 as its endomorphism ring.
== See also ==
Schur complement
Quillen's lemma
== Notes ==
== References ==
Dummit, David S.; Foote, Richard M. (1999). Abstract Algebra (2nd ed.). New York: Wiley. p. 337. ISBN 0-471-36857-1.
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
Lam, Tsit-Yuen (2001). A First Course in Noncommutative Rings. Berlin, New York: Springer-Verlag. ISBN 978-0-387-95325-0.
Sengupta, Ambar (2012). "Induced Representations". Representing Finite Groups. New York. pp. 235–248. doi:10.1007/978-1-4614-1231-1_8. ISBN 9781461412311. OCLC 769756134.
Shtern, A.I.; Lomonosov, V.I. (2001) [1994], "Schur lemma", Encyclopedia of Mathematics, EMS Press
In mathematics, a Kac–Moody algebra (named for Victor Kac and Robert Moody, who independently and simultaneously discovered them in 1968) is a Lie algebra, usually infinite-dimensional, that can be defined by generators and relations through a generalized Cartan matrix. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and many properties related to the structure of a Lie algebra such as its root system, irreducible representations, and connection to flag manifolds have natural analogues in the Kac–Moody setting.
A class of Kac–Moody algebras called affine Lie algebras is of particular importance in mathematics and theoretical physics, especially two-dimensional conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, the Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras. Howard Garland and James Lepowsky demonstrated that Rogers–Ramanujan identities can be derived in a similar fashion.
== History of Kac–Moody algebras ==
The initial construction by Élie Cartan and Wilhelm Killing of finite dimensional simple Lie algebras from the Cartan integers was type dependent. In 1966 Jean-Pierre Serre showed that the relations of Claude Chevalley and Harish-Chandra, with simplifications by Nathan Jacobson, give a defining presentation for the Lie algebra. One could thus describe a simple Lie algebra in terms of generators and relations using data from the matrix of Cartan integers, which is naturally positive definite.
"Almost simultaneously in 1967, Victor Kac in the USSR and Robert Moody in Canada developed what was to become Kac–Moody algebra. Kac and Moody noticed that if Wilhelm Killing's conditions were relaxed, it was still possible to associate to the Cartan matrix a Lie algebra which, necessarily, would be infinite dimensional." – A. J. Coleman
In his 1967 thesis, Robert Moody considered Lie algebras whose Cartan matrix is no longer positive definite. This still gave rise to a Lie algebra, but one which is now infinite dimensional. Simultaneously, Z-graded Lie algebras were being studied in Moscow, where I. L. Kantor introduced and studied a general class of Lie algebras including what eventually became known as Kac–Moody algebras. Victor Kac was also studying simple or nearly simple Lie algebras with polynomial growth. A rich mathematical theory of infinite dimensional Lie algebras evolved. An account of the subject, which also includes works of many others, is given in (Kac 1990). See also (Seligman 1987).
== Introduction ==
Given an n×n generalized Cartan matrix C = (c_ij), one can construct a Lie algebra 𝔤′(C) defined by generators e_i, h_i, and f_i (i ∈ {1, …, n}) and relations given by:
[h_i, h_j] = 0 for all i, j ∈ {1, …, n};
[h_i, e_j] = c_ij e_j;
[h_i, f_j] = −c_ij f_j;
[e_i, f_j] = δ_ij h_i, where δ_ij is the Kronecker delta;
if i ≠ j (so c_ij ≤ 0), then ad(e_i)^{1−c_ij}(e_j) = 0 and ad(f_i)^{1−c_ij}(f_j) = 0, where ad : 𝔤 → End(𝔤), ad(x)(y) = [x, y], is the adjoint representation of 𝔤.
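The smallest case is a useful sanity check (a standard observation): for the 1×1 generalized Cartan matrix C = (2) the last family of relations is vacuous, and the remaining ones are exactly the relations of sl_2:

```latex
C = (2):\qquad [h, e] = 2e, \quad [h, f] = -2f, \quad [e, f] = h,
\qquad\text{so}\quad \mathfrak{g}'(C) \cong \mathfrak{sl}_2(\mathbb{C}).
```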
Under a "symmetrizability" assumption, 𝔤′(C) identifies with the derived subalgebra [𝔤(C), 𝔤(C)] of the Kac–Moody algebra 𝔤(C) defined below.
== Definition ==
Assume we are given an n×n generalized Cartan matrix C = (c_ij) of rank r. For every such C, there exists a unique-up-to-isomorphism realization of C, i.e. a triple (𝔥, {α_i}_{i=1}^n, {α_i^∨}_{i=1}^n), where 𝔥 is a complex vector space, {α_i^∨}_{i=1}^n is a subset of elements of 𝔥, and {α_i}_{i=1}^n is a subset of the dual space 𝔥*, satisfying the following three conditions:
The vector space 𝔥 has dimension 2n − r;
the sets {α_i}_{i=1}^n and {α_i^∨}_{i=1}^n are linearly independent; and
for every 1 ≤ i, j ≤ n, α_i(α_j^∨) = c_ji.
The α_i are analogues of the simple roots of a semi-simple Lie algebra, and the α_i^∨ of the simple coroots.
Then we define the Kac–Moody algebra associated to C as the Lie algebra 𝔤 := 𝔤(C) defined by generators e_i and f_i (i ∈ {1, …, n}) and the elements of 𝔥, with relations:
[h, h′] = 0 for h, h′ ∈ 𝔥;
[h, e_i] = α_i(h) e_i, for h ∈ 𝔥;
[h, f_i] = −α_i(h) f_i, for h ∈ 𝔥;
[e_i, f_j] = δ_ij α_i^∨, where δ_ij is the Kronecker delta;
if i ≠ j (so c_ij ≤ 0), then ad(e_i)^{1−c_ij}(e_j) = 0 and ad(f_i)^{1−c_ij}(f_j) = 0, where ad is the adjoint representation of 𝔤.
A real (possibly infinite-dimensional) Lie algebra is also considered a Kac–Moody algebra if its complexification is a Kac–Moody algebra.
== Root-space decomposition of a Kac–Moody algebra ==
𝔥 is the analogue of a Cartan subalgebra for the Kac–Moody algebra 𝔤.
If x ≠ 0 is an element of 𝔤 such that [h, x] = λ(h)x for all h ∈ 𝔥 and some λ ∈ 𝔥* \ {0}, then x is called a root vector and λ is a root of 𝔤. (The zero functional is not considered a root by convention.) The set of all roots of 𝔤 is often denoted by Δ and sometimes by R. For a given root λ, one denotes by 𝔤_λ the root space of λ; that is, 𝔤_λ = {x ∈ 𝔤 : [h, x] = λ(h)x for all h ∈ 𝔥}.
It follows from the defining relations of 𝔤 that e_i ∈ 𝔤_{α_i} and f_i ∈ 𝔤_{−α_i}. Also, if x_1 ∈ 𝔤_{λ_1} and x_2 ∈ 𝔤_{λ_2}, then [x_1, x_2] ∈ 𝔤_{λ_1+λ_2} by the Jacobi identity.
A fundamental result of the theory is that any Kac–Moody algebra can be decomposed into the direct sum of 𝔥 and its root spaces, that is

\mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\lambda \in \Delta} \mathfrak{g}_\lambda,

and that every root λ can be written as λ = ∑_{i=1}^n z_i α_i, with all the z_i being integers of the same sign.
== Types of Kac–Moody algebras ==
Properties of a Kac–Moody algebra are controlled by the algebraic properties of its generalized Cartan matrix C. In order to classify Kac–Moody algebras, it is enough to consider the case of an indecomposable matrix C, that is, assume that there is no decomposition of the set of indices I into a disjoint union of non-empty subsets I1 and I2 such that Cij = 0 for all i in I1 and j in I2. Any decomposition of the generalized Cartan matrix leads to the direct sum decomposition of the corresponding Kac–Moody algebra:
\mathfrak{g}(C) \simeq \mathfrak{g}(C_1) \oplus \mathfrak{g}(C_2),

where the two Kac–Moody algebras on the right-hand side are associated with the submatrices of C corresponding to the index sets I_1 and I_2.
An important subclass of Kac–Moody algebras corresponds to symmetrizable generalized Cartan matrices C, which can be decomposed as DS, where D is a diagonal matrix with positive integer entries and S is a symmetric matrix. Under the assumptions that C is symmetrizable and indecomposable, the Kac–Moody algebras are divided into three classes:
A positive definite matrix S gives rise to a finite-dimensional simple Lie algebra.
A positive semidefinite matrix S gives rise to an infinite-dimensional Kac–Moody algebra of affine type, or an affine Lie algebra.
An indefinite matrix S gives rise to a Kac–Moody algebra of indefinite type.
Since the diagonal entries of C and S are positive, S cannot be negative definite or negative semidefinite.
Symmetrizable indecomposable generalized Cartan matrices of finite and affine type have been completely classified. They correspond to Dynkin diagrams and affine Dynkin diagrams. Little is known about the Kac–Moody algebras of indefinite type, although the groups corresponding to these Kac–Moody algebras were constructed over arbitrary fields by Jacques Tits.
Among the Kac–Moody algebras of indefinite type, most work has focused on those of hyperbolic type, for which the matrix S is indefinite, but for each proper subset of I, the corresponding submatrix is positive definite or positive semidefinite. Hyperbolic Kac–Moody algebras have rank at most 10, and they have been completely classified. There are infinitely many of rank 2, and 238 of ranks between 3 and 10.
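For a generalized Cartan matrix that is already symmetric (so that it serves as its own symmetrization S), the trichotomy reduces to an eigenvalue check. A minimal sketch, with illustrative matrices of our choosing:

```python
import numpy as np

def gcm_type(S: np.ndarray, tol: float = 1e-9) -> str:
    """Classify a symmetric, indecomposable generalized Cartan matrix:
    positive definite -> finite type; positive semidefinite and singular
    -> affine type; otherwise indefinite type."""
    smallest = np.linalg.eigvalsh(S).min()
    if smallest > tol:
        return "finite"
    if smallest > -tol:
        return "affine"
    return "indefinite"

print(gcm_type(np.array([[2, -1], [-1, 2]])))  # finite (type A2)
print(gcm_type(np.array([[2, -2], [-2, 2]])))  # affine (type A1^(1))
print(gcm_type(np.array([[2, -3], [-3, 2]])))  # indefinite (rank-2 hyperbolic)
```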
== See also ==
Weyl–Kac character formula
Generalized Kac–Moody algebra
Integrable module
Monstrous moonshine
== Citations ==
== References ==
== External links ==
SIGMA: Special Issue on Kac–Moody Algebras and Applications
In graph theory, a branch of mathematics, a skew-symmetric graph is a directed graph that is isomorphic to its own transpose graph, the graph formed by reversing all of its edges, under an isomorphism that is an involution without any fixed points. Skew-symmetric graphs are identical to the double covering graphs of bidirected graphs.
Skew-symmetric graphs were first introduced under the name of antisymmetrical digraphs by Tutte (1967), later as the double covering graphs of polar graphs by Zelinka (1976b), and still later as the double covering graphs of bidirected graphs by Zaslavsky (1991). They arise in modeling the search for alternating paths and alternating cycles in algorithms for finding matchings in graphs, in testing whether a still life pattern in Conway's Game of Life may be partitioned into simpler components, in graph drawing, and in the implication graphs used to efficiently solve the 2-satisfiability problem.
== Definition ==
As defined, e.g., by Goldberg & Karzanov (1996), a skew-symmetric graph G is a directed graph, together with a function σ mapping vertices of G to other vertices of G, satisfying the following properties:
For every vertex v, σ(v) ≠ v,
For every vertex v, σ(σ(v)) = v,
For every edge (u,v), (σ(v),σ(u)) must also be an edge.
One may use the third property to extend σ to an orientation-reversing function on the edges of G.
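The three conditions translate directly into code. A minimal sketch (the encoding — vertex labels, an edge set of ordered pairs, σ as a dictionary — is our own choice):

```python
def is_skew_symmetric(vertices, edges, sigma) -> bool:
    """Check that sigma is a fixed-point-free involution on the vertices
    carrying each edge (u, v) to the reversed edge (sigma(v), sigma(u))."""
    return (
        all(sigma[v] != v for v in vertices)              # no fixed points
        and all(sigma[sigma[v]] == v for v in vertices)   # involution
        and all((sigma[v], sigma[u]) in edges for (u, v) in edges)
    )

# Directed path on 4 vertices; the symmetry swaps the two ends.
vertices = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3)}
sigma = {0: 3, 3: 0, 1: 2, 2: 1}
print(is_skew_symmetric(vertices, edges, sigma))  # True
```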
The transpose graph of G is the graph formed by reversing every edge of G, and σ defines a graph isomorphism from G to its transpose. However, in a skew-symmetric graph, it is additionally required that the isomorphism pair each vertex with a different vertex, rather than allowing a vertex to be mapped to itself by the isomorphism or to group more than two vertices in a cycle of isomorphism.
A path or cycle in a skew-symmetric graph is said to be regular if, for each vertex v of the path or cycle, the corresponding vertex σ(v) is not part of the path or cycle.
== Examples ==
Every directed path graph with an even number of vertices is skew-symmetric, via a symmetry that swaps the two ends of the path. However, path graphs with an odd number of vertices are not skew-symmetric, because the orientation-reversing symmetry of these graphs maps the center vertex of the path to itself, something that is not allowed for skew-symmetric graphs.
Similarly, a directed cycle graph is skew-symmetric if and only if it has an even number of vertices. In this case, the number of different mappings σ that realize the skew symmetry of the graph equals half the length of the cycle.
== Polar/switch graphs, double covering graphs, and bidirected graphs ==
A skew-symmetric graph may equivalently be defined as the double covering graph of a polar graph or switch graph, which is an undirected graph in which the edges incident to each vertex are partitioned into two subsets. Each vertex of the polar graph corresponds to two vertices of the skew-symmetric graph, and each edge of the polar graph corresponds to two edges of the skew-symmetric graph. This equivalence is the one used by Goldberg & Karzanov (1996) to model problems of matching in terms of skew-symmetric graphs; in that application, the two subsets of edges at each vertex are the unmatched edges and the matched edges. Zelinka (following F. Zitek) and Cook visualize the vertices of a polar graph as points where multiple tracks of a train track come together: if a train enters a switch via a track that comes in from one direction, it must exit via a track in the other direction. The problem of finding non-self-intersecting smooth curves between given points in a train track comes up in testing whether certain kinds of graph drawings are valid, and may be modeled as the search for a regular path in a skew-symmetric graph.
A closely related concept is the bidirected graph or polarized graph, a graph in which each of the two ends of each edge may be either a head or a tail, independently of the other end. A bidirected graph may be interpreted as a polar graph by letting the partition of edges at each vertex be determined by the partition of endpoints at that vertex into heads and tails; however, swapping the roles of heads and tails at a single vertex ("switching" the vertex) produces a different bidirected graph but the same polar graph.
To form the double covering graph (i.e., the corresponding skew-symmetric graph) from a polar graph G, create for each vertex v of G two vertices v0 and v1, and let σ(vi) = v1 − i. For each edge e = (u,v) of G, create two directed edges in the covering graph, one oriented from u to v and one oriented from v to u. If e is in the first subset of edges at v, these two edges are from u0 into v0 and from v1 into u1, while if e is in the second subset, the edges are from u0 into v1 and from v0 into u1.
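Written out as code, the construction reads as follows. A minimal sketch transcribing the recipe literally (the polar graph is encoded, as a choice of ours, by a dictionary sending each edge (u, v) to 0 or 1 according to the subset it belongs to at v):

```python
def double_cover(vertices, edges):
    """Skew-symmetric double cover of a polar graph.  `edges` maps each
    undirected edge (u, v) to 0 (first subset at v) or 1 (second subset)."""
    cover_vertices = [(v, i) for v in vertices for i in (0, 1)]
    sigma = {(v, i): (v, 1 - i) for v in vertices for i in (0, 1)}
    cover_edges = set()
    for (u, v), subset in edges.items():
        if subset == 0:   # first subset at v: u0 -> v0 and v1 -> u1
            cover_edges.add(((u, 0), (v, 0)))
            cover_edges.add(((v, 1), (u, 1)))
        else:             # second subset at v: u0 -> v1 and v0 -> u1
            cover_edges.add(((u, 0), (v, 1)))
            cover_edges.add(((v, 0), (u, 1)))
    return cover_vertices, cover_edges, sigma
```

By construction, the triple returned here passes the is_skew_symmetric check from the earlier sketch.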
In the other direction, given a skew-symmetric graph G, one may form a polar graph that has one vertex for every corresponding pair of vertices in G and one undirected edge for every corresponding pair of edges in G. The undirected edges at each vertex of the polar graph may be partitioned into two subsets according to which vertex of the polar graph they go out of and come into.
A regular path or cycle of a skew-symmetric graph corresponds to a path or cycle in the polar graph that uses at most one edge from each subset of edges at each of its vertices.
== Matching ==
In constructing matchings in undirected graphs, it is important to find alternating paths, paths of vertices that start and end at unmatched vertices, in which the edges at odd positions in the path are not part of a given partial matching and in which the edges at even positions in the path are part of the matching. By removing the matched edges of such a path from a matching, and adding the unmatched edges, one can increase the size of the matching. Similarly, cycles that alternate between matched and unmatched edges are of importance in weighted matching problems.
An alternating path or cycle in an undirected graph may be modeled as a regular path or cycle in a skew-symmetric directed graph. To create a skew-symmetric graph from an undirected graph G with a specified matching M, view G as a switch graph in which the edges at each vertex are partitioned into matched and unmatched edges; an alternating path in G is then a regular path in this switch graph and an alternating cycle in G is a regular cycle in the switch graph.
Goldberg & Karzanov (1996) generalized alternating path algorithms to show that the existence of a regular path between any two vertices of a skew-symmetric graph may be tested in linear time. Given additionally a non-negative length function on the edges of the graph that assigns the same length to any edge e and to σ(e), the shortest regular path connecting a given pair of nodes in a skew-symmetric graph with m edges and n vertices may be tested in time O(m log n). If the length function is allowed to have negative lengths, the existence of a negative regular cycle may be tested in polynomial time.
Along with the path problems arising in matchings, skew-symmetric generalizations of the max-flow min-cut theorem have also been studied.
== Still life theory ==
Cook (2003) shows that a still life pattern in Conway's Game of Life may be partitioned into two smaller still lifes if and only if an associated switch graph contains a regular cycle. As he shows, for switch graphs with at most three edges per vertex, this may be tested in polynomial time by repeatedly removing bridges (edges the removal of which disconnects the graph) and vertices at which all edges belong to a single partition until no more such simplifications may be performed. If the result is an empty graph, there is no regular cycle; otherwise, a regular cycle may be found in any remaining bridgeless component. The repeated search for bridges in this algorithm may be performed efficiently using a dynamic graph algorithm of Thorup (2000).
Similar bridge-removal techniques in the context of matching were previously considered by Gabow, Kaplan & Tarjan (1999).
== Satisfiability ==
An instance of the 2-satisfiability problem, that is, a Boolean expression in conjunctive normal form with two variables or negations of variables per clause, may be transformed into an implication graph by replacing each clause u ∨ v by the two implications (¬u) ⇒ v and (¬v) ⇒ u. This graph has a vertex for each variable or negated variable, and a directed edge for each implication; it is, by construction, skew-symmetric, with a correspondence σ that maps each variable to its negation.
As Aspvall, Plass & Tarjan (1979) showed, a satisfying assignment to the 2-satisfiability instance is equivalent to a partition of this implication graph into two subsets of vertices, S and σ(S), such that no edge starts in S and ends in σ(S). If such a partition exists, a satisfying assignment may be formed by assigning a true value to every variable in S and a false value to every variable in σ(S). This may be done if and only if no strongly connected component of the graph contains both some vertex v and its complementary vertex σ(v). If two vertices belong to the same strongly connected component, the corresponding variables or negated variables are constrained to equal each other in any satisfying assignment of the 2-satisfiability instance. The total time for testing strong connectivity and finding a partition of the implication graph is linear in the size of the given 2-CNF expression.
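The whole pipeline — build the skew-symmetric implication graph, compute strongly connected components, and compare each vertex with its complement — fits in a short script. A minimal sketch using Kosaraju's algorithm (literals are encoded as signed integers, so the correspondence σ is simply negation):

```python
from collections import defaultdict

def solve_2sat(n, clauses):
    """Solve a 2-SAT instance with variables 1..n and clauses (a, b) of
    nonzero integers, negative meaning a negated variable.  Returns a
    satisfying assignment {variable: bool} or None."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:             # a or b  ==>  -a -> b  and  -b -> a
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)

    literals = [l for v in range(1, n + 1) for l in (v, -v)]

    order, seen = [], set()
    def dfs1(u):                     # first pass: record finishing order
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for l in literals:
        if l not in seen:
            dfs1(l)

    comp = {}
    def dfs2(u, label):              # second pass on the reversed graph
        comp[u] = label
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, label)
    for l in reversed(order):        # components appear in topological order
        if l not in comp:
            dfs2(l, len(comp))

    if any(comp[v] == comp[-v] for v in range(1, n + 1)):
        return None                  # some v is strongly connected to -v
    # A literal is true when its component comes later in topological order.
    return {v: comp[v] > comp[-v] for v in range(1, n + 1)}

# (x1 or x2) and (not x1 or x2): satisfiable, e.g. with x2 = True.
print(solve_2sat(2, [(1, 2), (-1, 2)]))  # {1: True, 2: True}
```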
== Recognition ==
It is NP-complete to determine whether a given directed graph is skew-symmetric, by a result of Lalonde (1981) that it is NP-complete to find a color-reversing involution in a bipartite graph. Such an involution exists if and only if the directed graph given by orienting each edge from one color class to the other is skew-symmetric, so testing skew-symmetry of this directed graph is hard. This complexity does not affect path-finding algorithms for skew-symmetric graphs, because these algorithms assume that the skew-symmetric structure is given as part of the input to the algorithm rather than requiring it to be inferred from the graph alone.
== References ==
Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979), "A linear-time algorithm for testing the truth of certain quantified boolean formulas", Information Processing Letters, 8 (3): 121–123, doi:10.1016/0020-0190(79)90002-4.
Babenko, Maxim A. (2006), "Acyclic bidirected and skew-symmetric graphs: algorithms and structure", Computer Science – Theory and Applications, Lecture Notes in Computer Science, vol. 3967, Springer-Verlag, pp. 23–34, arXiv:math/0607547, doi:10.1007/11753728_6, ISBN 978-3-540-34166-6.
Biggs, Norman (1974), Algebraic Graph Theory, London: Cambridge University Press.
Cook, Matthew (2003), "Still life theory", New Constructions in Cellular Automata, Santa Fe Institute Studies in the Sciences of Complexity, Oxford University Press, pp. 93–118.
Edmonds, Jack; Johnson, Ellis L. (1970), "Matching: a well-solved class of linear programs", Combinatorial Structures and their Applications: Proceedings of the Calgary Symposium, June 1969, New York: Gordon and Breach. Reprinted in Combinatorial Optimization — Eureka, You Shrink!, Springer-Verlag, Lecture Notes in Computer Science 2570, 2003, pp. 27–30, doi:10.1007/3-540-36478-1_3.
Gabow, Harold N.; Kaplan, Haim; Tarjan, Robert E. (1999), "Unique maximum matching algorithms", Proc. 31st ACM Symp. Theory of Computing (STOC), pp. 70–78, doi:10.1145/301250.301273, ISBN 1-58113-067-8.
Goldberg, Andrew V.; Karzanov, Alexander V. (1996), "Path problems in skew-symmetric graphs", Combinatorica, 16 (3): 353–382, doi:10.1007/BF01261321.
Goldberg, Andrew V.; Karzanov, Alexander V. (2004), "Maximum skew-symmetric flows and matchings", Mathematical Programming, 100 (3): 537–568, arXiv:math/0304290, doi:10.1007/s10107-004-0505-z.
Hui, Peter; Schaefer, Marcus; Štefankovič, Daniel (2004), "Train tracks and confluent drawings", Proc. 12th Int. Symp. Graph Drawing, Lecture Notes in Computer Science, vol. 3383, Springer-Verlag, pp. 318–328.
Lalonde, François (1981), "Le problème d'étoiles pour graphes est NP-complet", Discrete Mathematics, 33 (3): 271–280, doi:10.1016/0012-365X(81)90271-5, MR 0602044.
Thorup, Mikkel (2000), "Near-optimal fully-dynamic graph connectivity", Proc. 32nd ACM Symposium on Theory of Computing, pp. 343–350, doi:10.1145/335305.335345, ISBN 1-58113-184-4.
Tutte, W. T. (1967), "Antisymmetrical digraphs", Canadian Journal of Mathematics, 19: 1101–1117, doi:10.4153/CJM-1967-101-8.
Zaslavsky, Thomas (1982), "Signed graphs", Discrete Applied Mathematics, 4: 47–74, doi:10.1016/0166-218X(82)90033-6, hdl:10338.dmlcz/127957.
Zaslavsky, Thomas (1991), "Orientation of signed graphs", European Journal of Combinatorics, 12 (4): 361–375, doi:10.1016/s0195-6698(13)80118-7.
Zelinka, Bohdan (1974), "Polar graphs and railway traffic", Aplikace Matematiky, 19: 169–176.
Zelinka, Bohdan (1976a), "Isomorphisms of polar and polarized graphs", Czechoslovak Mathematical Journal, 26 (3): 339–351, doi:10.21136/CMJ.1976.101409.
Zelinka, Bohdan (1976b), "Analoga of Menger's theorem for polar and polarized graphs", Czechoslovak Mathematical Journal, 26 (3): 352–360, doi:10.21136/CMJ.1976.101410. | Wikipedia/Skew-symmetric_graph |
In mathematics, a Cartan subalgebra, often abbreviated as CSA, is a nilpotent subalgebra $\mathfrak{h}$ of a Lie algebra $\mathfrak{g}$ that is self-normalising (if $[X,Y]\in\mathfrak{h}$ for all $X\in\mathfrak{h}$, then $Y\in\mathfrak{h}$). They were introduced by Élie Cartan in his doctoral thesis. A Cartan subalgebra controls the representation theory of a semisimple Lie algebra $\mathfrak{g}$ over a field of characteristic $0$.
In a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic zero (e.g., $\mathbb{C}$), a Cartan subalgebra is the same thing as a maximal abelian subalgebra consisting of elements $x$ such that the adjoint endomorphism $\operatorname{ad}(x):\mathfrak{g}\to\mathfrak{g}$ is semisimple (i.e., diagonalizable). Sometimes this characterization is simply taken as the definition of a Cartan subalgebra.
In general, a subalgebra is called toral if it consists of semisimple elements. Over an algebraically closed field, a toral subalgebra is automatically abelian. Thus, over an algebraically closed field of characteristic zero, a Cartan subalgebra can also be defined as a maximal toral subalgebra.
Kac–Moody algebras and generalized Kac–Moody algebras also have subalgebras that play the same role as the Cartan subalgebras of semisimple Lie algebras (over a field of characteristic zero).
== Existence and uniqueness ==
Cartan subalgebras exist for finite-dimensional Lie algebras whenever the base field is infinite. One way to construct a Cartan subalgebra is by means of a regular element. Over a finite field, the question of existence is still open.
For a finite-dimensional semisimple Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero, there is a simpler approach: by definition, a toral subalgebra is a subalgebra of $\mathfrak{g}$ that consists of semisimple elements (an element is semisimple if the adjoint endomorphism induced by it is diagonalizable). A Cartan subalgebra of $\mathfrak{g}$ is then the same thing as a maximal toral subalgebra, and the existence of a maximal toral subalgebra is easy to see.
In a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero, all Cartan subalgebras are conjugate under automorphisms of the algebra, and in particular are all isomorphic. The common dimension of a Cartan subalgebra is then called the rank of the algebra.
For a finite-dimensional complex semisimple Lie algebra, the existence of a Cartan subalgebra is much simpler to establish, assuming the existence of a compact real form. In that case, $\mathfrak{h}$ may be taken as the complexification of the Lie algebra of a maximal torus of the compact group.
If $\mathfrak{g}$ is a linear Lie algebra (a Lie subalgebra of the Lie algebra of endomorphisms of a finite-dimensional vector space $V$) over an algebraically closed field, then any Cartan subalgebra of $\mathfrak{g}$ is the centralizer of a maximal toral subalgebra of $\mathfrak{g}$. If $\mathfrak{g}$ is semisimple and the field has characteristic zero, then a maximal toral subalgebra is self-normalizing, and so is equal to the associated Cartan subalgebra. If in addition $\mathfrak{g}$ is semisimple, then the adjoint representation presents $\mathfrak{g}$ as a linear Lie algebra, so that a subalgebra of $\mathfrak{g}$ is Cartan if and only if it is a maximal toral subalgebra.
== Examples ==
Any nilpotent Lie algebra is its own Cartan subalgebra.
A Cartan subalgebra of $\mathfrak{gl}_n$, the Lie algebra of $n\times n$ matrices over a field, is the algebra of all diagonal matrices.
The special linear Lie algebra $\mathfrak{sl}_n(\mathbb{C})$ of traceless $n\times n$ matrices has the Cartan subalgebra
$${\mathfrak{h}}=\left\{d(a_{1},\ldots ,a_{n})\mid a_{i}\in \mathbb{C}{\text{ and }}\sum _{i=1}^{n}a_{i}=0\right\}$$
where $d(a_{1},\ldots ,a_{n})$ denotes the diagonal matrix
$$d(a_{1},\ldots ,a_{n})={\begin{pmatrix}a_{1}&0&\cdots &0\\0&\ddots &&0\\\vdots &&\ddots &\vdots \\0&\cdots &\cdots &a_{n}\end{pmatrix}}.$$
For example, in $\mathfrak{sl}_2(\mathbb{C})$ the Cartan subalgebra is the subalgebra of matrices
$${\mathfrak{h}}=\left\{{\begin{pmatrix}a&0\\0&-a\end{pmatrix}}:a\in \mathbb{C}\right\}$$
with Lie bracket given by the matrix commutator.
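As a concrete check (a small NumPy sketch, not from the article's sources), the standard basis $e$, $f$, $h$ of $\mathfrak{sl}_2(\mathbb{C})$ satisfies the familiar bracket relations, with $h$ spanning the Cartan subalgebra above:

```python
import numpy as np

h = np.array([[1, 0], [0, -1]], dtype=complex)   # spans the Cartan subalgebra above
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)

bracket = lambda x, y: x @ y - y @ x             # Lie bracket = matrix commutator

assert np.allclose(bracket(h, e), 2 * e)         # [h, e] = 2e
assert np.allclose(bracket(h, f), -2 * f)        # [h, f] = -2f
assert np.allclose(bracket(e, f), h)             # [e, f] = h
print("sl_2 bracket relations verified")
```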
The Lie algebra $\mathfrak{sl}_2(\mathbb{R})$ of $2\times 2$ matrices of trace $0$ has two non-conjugate Cartan subalgebras.
The dimension of a Cartan subalgebra is not in general the maximal dimension of an abelian subalgebra, even for complex simple Lie algebras. For example, the Lie algebra $\mathfrak{sl}_{2n}(\mathbb{C})$ of $2n\times 2n$ matrices of trace $0$ has a Cartan subalgebra of rank $2n-1$ but has a maximal abelian subalgebra of dimension $n^{2}$ consisting of all matrices of the form $\begin{pmatrix}0&A\\0&0\end{pmatrix}$ with $A$ any $n\times n$ matrix. One can directly see that this abelian subalgebra is not a Cartan subalgebra: it is contained in the nilpotent algebra of strictly upper triangular matrices (or, alternatively, it is normalized by the diagonal matrices and hence is not self-normalizing).
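The following NumPy sketch (illustrative; the random entries and the block size are arbitrary choices) verifies both observations for small $n$: the block subalgebra is abelian, and bracketing with a traceless diagonal matrix lands back inside it, so it is normalized by the diagonal matrices and cannot be self-normalizing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def embed(A):
    """Embed an n-by-n block A as [[0, A], [0, 0]] inside sl_{2n}."""
    M = np.zeros((2 * n, 2 * n))
    M[:n, n:] = A
    return M

X = embed(rng.normal(size=(n, n)))
Y = embed(rng.normal(size=(n, n)))
D = np.diag(rng.normal(size=2 * n))
D -= (np.trace(D) / (2 * n)) * np.eye(2 * n)     # make D traceless, so D lies in sl_{2n}

comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(X, Y), 0)                # the n^2-dimensional subalgebra is abelian

Z = comm(D, X)                                   # [D, X] has the same block shape as X
assert np.allclose(Z[:n, :n], 0) and np.allclose(Z[n:, :], 0)
print("abelian, and normalized by the diagonal matrices")
```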
== Cartan subalgebras of semisimple Lie algebras ==
For a finite-dimensional semisimple Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic 0, a Cartan subalgebra $\mathfrak{h}$ has the following properties:
$\mathfrak{h}$ is abelian,
For the adjoint representation $\operatorname{ad}:\mathfrak{g}\to\mathfrak{gl}(\mathfrak{g})$, the image $\operatorname{ad}(\mathfrak{h})$ consists of semisimple operators (i.e., diagonalizable matrices).
(As noted earlier, a Cartan subalgebra can in fact be characterized as a subalgebra that is maximal among those having the above two properties.)
These two properties say that the operators in $\operatorname{ad}(\mathfrak{h})$ are simultaneously diagonalizable and that there is a direct sum decomposition of $\mathfrak{g}$ as
$${\mathfrak{g}}=\bigoplus _{\lambda \in {\mathfrak{h}}^{*}}{\mathfrak{g}}_{\lambda }$$
where
$${\mathfrak{g}}_{\lambda }=\{x\in {\mathfrak{g}}:\operatorname{ad}(h)x=\lambda (h)x,{\text{ for }}h\in {\mathfrak{h}}\}.$$
Let $\Phi =\{\lambda \in {\mathfrak{h}}^{*}\setminus \{0\}\mid {\mathfrak{g}}_{\lambda }\neq \{0\}\}$. Then $\Phi$ is a root system and, moreover, ${\mathfrak{g}}_{0}={\mathfrak{h}}$; i.e., the centralizer of $\mathfrak{h}$ coincides with $\mathfrak{h}$. The above decomposition can then be written as
$${\mathfrak{g}}={\mathfrak{h}}\oplus \left(\bigoplus _{\lambda \in \Phi }{\mathfrak{g}}_{\lambda }\right).$$
As it turns out, for each $\lambda\in\Phi$, ${\mathfrak{g}}_{\lambda}$ has dimension one, and so
$$\dim {\mathfrak{g}}=\dim {\mathfrak{h}}+\#\Phi .$$
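For $\mathfrak{g}=\mathfrak{sl}_2(\mathbb{C})$ the count reads $3=1+2$, which the following NumPy sketch (an illustration, not a general algorithm) confirms by diagonalizing $\operatorname{ad}(h)$ for the Cartan generator $h$:

```python
import numpy as np

# Basis of sl_2(C): h spans the Cartan subalgebra, e and f are root vectors.
h = np.array([[1, 0], [0, -1]], dtype=complex)
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
basis = [h, e, f]

def coords(M):
    # Any traceless 2x2 matrix M = a*h + b*e + c*f with a = M[0,0], b = M[0,1], c = M[1,0].
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

# Matrix of ad(h) acting on g = span{h, e, f} in this basis.
ad_h = np.column_stack([coords(h @ X - X @ h) for X in basis])
roots = sorted(np.linalg.eigvals(ad_h).real.round().astype(int))
print(roots)   # [-2, 0, 2]: Phi = {-2, 2}, so dim g = dim h + #Phi = 1 + 2 = 3
```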
See also Semisimple Lie algebra#Structure for further information.
=== Decomposing representations with dual Cartan subalgebra ===
Given a Lie algebra $\mathfrak{g}$ over a field of characteristic $0$ and a Lie algebra representation $\sigma:\mathfrak{g}\to\mathfrak{gl}(V)$, there is a decomposition related to the decomposition of the Lie algebra from its Cartan subalgebra. If we set
$$V_{\lambda }=\{v\in V:(\sigma (h))(v)=\lambda (h)v{\text{ for }}h\in {\mathfrak{h}}\}$$
with $\lambda\in{\mathfrak{h}}^{*}$, called the weight space for weight $\lambda$, there is a decomposition of the representation in terms of these weight spaces
$$V=\bigoplus _{\lambda \in {\mathfrak{h}}^{*}}V_{\lambda }.$$
In addition, whenever $V_{\lambda}\neq\{0\}$ we call $\lambda$ a weight of the $\mathfrak{g}$-representation $V$.
==== Classification of irreducible representations using weights ====
It turns out that these weights can be used to classify the irreducible representations of the Lie algebra $\mathfrak{g}$. For a finite-dimensional irreducible $\mathfrak{g}$-representation $V$, there exists a unique highest weight $\lambda\in{\mathfrak{h}}^{*}$ with respect to a partial ordering on ${\mathfrak{h}}^{*}$. Moreover, given a $\lambda\in{\mathfrak{h}}^{*}$ such that $\langle\alpha,\lambda\rangle\in\mathbb{N}$ for every positive root $\alpha\in\Phi^{+}$, there exists a unique irreducible representation $L^{+}(\lambda)$. This means the root system $\Phi$ contains all information about the representation theory of $\mathfrak{g}$.
== Splitting Cartan subalgebra ==
Over non-algebraically closed fields, not all Cartan subalgebras are conjugate. An important class are splitting Cartan subalgebras: if a Lie algebra admits a splitting Cartan subalgebra $\mathfrak{h}$, then it is called splittable, and the pair $(\mathfrak{g},\mathfrak{h})$ is called a split Lie algebra; over an algebraically closed field every semisimple Lie algebra is splittable. Any two splitting Cartan subalgebras are conjugate, and they fulfill a similar function to Cartan subalgebras in semisimple Lie algebras over algebraically closed fields, so split semisimple Lie algebras (indeed, split reductive Lie algebras) share many properties with semisimple Lie algebras over algebraically closed fields.
Over a non-algebraically closed field not every semisimple Lie algebra is splittable, however.
== Cartan subgroup ==
A Cartan subgroup of a Lie group is a special type of subgroup: its Lie algebra is a Cartan subalgebra. The identity component of any such subgroup has the same Lie algebra, and there is no universally agreed-upon definition of which subgroup with this property should be called the Cartan subgroup, especially for disconnected groups.
For compact connected Lie groups, a Cartan subgroup is a maximal connected abelian subgroup, often referred to as a maximal torus; the Lie algebra of this subgroup is a Cartan subalgebra.
For disconnected compact Lie groups there are several inequivalent definitions of a Cartan subgroup. One common definition, due to David Vogan, is the group of elements that normalize a fixed maximal torus and preserve the fundamental Weyl chamber; this version is sometimes called the large Cartan subgroup. There is also a small Cartan subgroup, defined as the centralizer of a maximal torus. These Cartan subgroups need not be abelian in general.
=== Examples of Cartan subgroups ===
The subgroup of $\mathrm{GL}_2(\mathbb{R})$ consisting of diagonal matrices.
== References ==
Borel, Armand (1991), Linear algebraic groups, Graduate Texts in Mathematics, vol. 126 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97370-8, MR 1102012
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
Jacobson, Nathan (1979), Lie algebras, New York: Dover Publications, ISBN 978-0-486-63832-4, MR 0559927
Humphreys, James E. (1972), Introduction to Lie Algebras and Representation Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90053-7
Popov, V.L. (2001) [1994], "Cartan subalgebra", Encyclopedia of Mathematics, EMS Press
Knapp, Anthony W.; Vogan, David A. (1995), Cohomological Induction and Unitary Representations, Princeton University Press, ISBN 978-0-691-03756-1. | Wikipedia/Cartan_subalgebra |
In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space.
Three features are often referred to as characterizing integrable systems:
the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
the existence of algebraic invariants, having a basis in algebraic geometry (a property known sometimes as algebraic integrability)
the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)
Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.
Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top).
In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation, and certain integrable many-body systems, such as the Toda lattice. The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967.
In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation), and if the flows are complete and the energy level set is compact, the Liouville–Arnold theorem guarantees the existence of action-angle variables. General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic.
A key ingredient in characterizing integrable systems is the Frobenius theorem, which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems, is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves embedded submanifolds.
Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.
== General dynamical systems ==
In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant, regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context.
An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations.
The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form.
== Hamiltonian systems and Liouville integrability ==
In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem.) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish).
In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension $2n$, and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is $n$. The leaves of the foliation are totally isotropic with respect to the symplectic form, and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems (i.e., those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle" variables. The cycles of the canonical $1$-form are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below).
There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable. If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable.
== Action-angle variables ==
When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables.
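A minimal numerical sketch for the one-dimensional harmonic oscillator $H=\tfrac12(p^2+\omega^2 q^2)$ (all parameter values and the hand-rolled RK4 integrator are arbitrary illustrative choices): the action $I=H/\omega$ stays constant along the flow, while the angle advances linearly at rate $\omega$.

```python
import numpy as np

omega = 2.0                                     # oscillator frequency (assumed)

def rk4_step(s, dt):
    # Hamilton's equations for H = (p^2 + omega^2 q^2)/2: dq/dt = p, dp/dt = -omega^2 q.
    f = lambda s: np.array([s[1], -omega**2 * s[0]])
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s, dt, steps = np.array([1.0, 0.0]), 1e-3, 5000
actions, angles = [], []
for _ in range(steps):
    q, p = s
    actions.append((p**2 + omega**2 * q**2) / (2 * omega))   # action I = H / omega
    angles.append(np.arctan2(-p / omega, q))                 # angle variable theta
    s = rk4_step(s, dt)

print(max(actions) - min(actions))                 # ~0: the action is conserved
theta = np.unwrap(angles)
print((theta[-1] - theta[0]) / ((steps - 1) * dt)) # ~omega: theta is linear in time
```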
== The Hamilton–Jacobi approach ==
In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be "explicitly integrated" involve a complete separation of variables, in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense.
== Solitons and inverse spectral methods ==
A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons, which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations.
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair. This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact.
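A small numerical sketch of the Lax-pair mechanism, using the standard open Toda lattice in Flaschka form (the lattice size, initial data and step size are arbitrary choices): integrating $\dot L=[B,L]$ leaves the eigenvalues of $L$, the "spectrum", unchanged up to integration error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
b = rng.normal(size=n)                     # Flaschka variables: diagonal of L
a = rng.uniform(0.5, 1.0, size=n - 1)      # positive off-diagonal entries

L = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
B = lambda M: np.triu(M, 1) - np.tril(M, -1)    # skew-symmetric half of the Lax pair

def rk4_step(L, dt):
    f = lambda M: B(M) @ M - M @ B(M)           # Lax equation dL/dt = [B, L]
    k1 = f(L); k2 = f(L + dt / 2 * k1); k3 = f(L + dt / 2 * k2); k4 = f(L + dt * k3)
    return L + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

spec0 = np.sort(np.linalg.eigvalsh(L))
for _ in range(2000):
    L = rk4_step(L, 1e-3)
spec1 = np.sort(np.linalg.eigvalsh(L))
print(np.max(np.abs(spec1 - spec0)))            # tiny: the spectrum is an invariant
```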
== Hirota bilinear equations and τ-functions ==
Another viewpoint that arose in the modern theory of integrable systems originated in a calculational approach pioneered by Ryogo Hirota, which involved replacing the original nonlinear dynamical system with a bilinear system of constant coefficient equations for an auxiliary quantity, which later came to be known as the τ-function. These are now referred to as the Hirota equations. Although originally appearing just as a calculational device, without any clear relation to the inverse scattering approach or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived.
Subsequently, this was interpreted by Mikio Sato and his students, at first for the case of integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold. The τ-function was viewed as the determinant of a projection operator from elements of the group orbit to some origin within the Grassmannian, and the Hirota equations as expressing the Plücker relations, characterizing the Plücker embedding of the Grassmannian in the projectivization of a suitably defined (infinite) exterior space, viewed as a fermionic Fock space.
== Quantum integrable systems ==
There is also a notion of quantum integrable systems.
In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates. However, this does not imply any special dynamical structure.
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model.
== Exactly solvable models ==
In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense.
There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than classical ones. Two closely related methods: the Bethe ansatz approach, in its modern sense, based on the Yang–Baxter equations and the quantum inverse scattering method, provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics.
An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available, in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems.
== List of some well-known integrable systems ==
Classical mechanical systems
Calogero–Moser–Sutherland model
Central force motion (exact solutions of classical central-force problems)
Geodesic motion on ellipsoids
Harmonic oscillator
Integrable Clebsch and Steklov systems in fluids
Lagrange, Euler, and Kovalevskaya tops
Neumann oscillator
Two center Newtonian gravitational motion
Integrable lattice models
Ablowitz–Ladik lattice
Toda lattice
Volterra lattice
Integrable systems in 1 + 1 dimensions
AKNS system
Benjamin–Ono equation
Boussinesq equation (water waves)
Camassa–Holm equation
Classical Heisenberg ferromagnet model (spin chain)
Degasperis–Procesi equation
Dym equation
Garnier integrable system
Kaup–Kupershmidt equation
Krichever–Novikov equation
Korteweg–de Vries equation
Landau–Lifshitz equation (continuous spin field)
Nonlinear Schrödinger equation
Nonlinear sigma models
Sine–Gordon equation
Thirring model
Three-wave equation
Integrable PDEs in 2 + 1 dimensions
Davey–Stewartson equation
Ishimori equation
Kadomtsev–Petviashvili equation
Novikov–Veselov equation
Integrable PDEs in 3 + 1 dimensions
The Belinski–Zakharov transform generates a Lax pair for the Einstein field equations; general solutions are termed gravitational solitons, of which the Schwarzschild metric, the Kerr metric and some gravitational wave solutions are examples.
Exactly solvable statistical lattice models
8-vertex model
Gaudin model
Ising model in 1- and 2-dimensions
Ice-type model of Lieb
Quantum Heisenberg model
== See also ==
Hitchin system
Pentagram map
=== Related areas ===
Mathematical physics
Soliton
Painlevé transcendents
Statistical mechanics
Integrable algorithm
== References ==
Arnold, V.I. (1997). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 978-0-387-96890-2.
Audin, M. (1996). Spinning Tops: A Course on Integrable Systems. Cambridge Studies in Advanced Mathematics. Vol. 51. Cambridge University Press. ISBN 978-0521779197.
Babelon, O.; Bernard, D.; Talon, M. (2003). Introduction to classical integrable systems. Cambridge University Press. doi:10.1017/CBO9780511535024. ISBN 0-521-82267-X.
Baxter, R.J. (1982). Exactly solved models in statistical mechanics. Academic Press. ISBN 978-0-12-083180-7.
Dunajski, M. (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9.
Faddeev, L.D.; Takhtajan, L.A. (1987). Hamiltonian Methods in the Theory of Solitons. Addison-Wesley. ISBN 978-0-387-15579-1.
Fomenko, A.T. (1995). Symplectic Geometry. Methods and Applications (2nd ed.). Gordon and Breach. ISBN 978-2-88124-901-3.
Fomenko, A.T.; Bolsinov, A.V. (2003). Integrable Hamiltonian Systems: Geometry, Topology, Classification. Taylor and Francis. ISBN 978-0-415-29805-6.
Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 0-201-02918-9.
Harnad, J.; Winternitz, P.; Sabidussi, G., eds. (2000). Integrable Systems: From Classical to Quantum. American Mathematical Society. ISBN 0-8218-2093-1.
Harnad, J.; Balogh, F. (2021). Tau functions and Their Applications. Cambridge Monographs on Mathematical Physics. Cambridge University Press. doi:10.1017/9781108610902. ISBN 9781108492683. S2CID 222379146.
Hietarinta, J.; Joshi, N.; Nijhoff, F. (2016). Discrete systems and integrability. Cambridge University Press. Bibcode:2016dsi..book.....H. doi:10.1017/CBO9781107337411. ISBN 978-1-107-04272-8.
Korepin, V. E.; Bogoliubov, N.M.; Izergin, A.G. (1997). Quantum Inverse Scattering Method and Correlation Functions. Cambridge University Press. ISBN 978-0-521-58646-7.
Afrajmovich, V.S.; Arnold, V.I.; Il'yashenko, Yu. S.; Shil'nikov, L.P. Dynamical Systems V. Springer. ISBN 3-540-18173-3.
Mussardo, Giuseppe (2010). Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics. Oxford University Press. ISBN 978-0-19-954758-6.
Sardanashvily, G. (2015). Handbook of Integrable Hamiltonian Systems. URSS. ISBN 978-5-396-00687-4.
== Further reading ==
Beilinson, A.; Drinfeld, V. "Quantization of Hitchin's integrable system and Hecke eigensheaves" (PDF).
Donagi, R.; Markman, E. (1996). "Spectral covers, algebraically completely integrable, Hamiltonian systems, and moduli of bundles". Integrable systems and quantum groups. Lecture Notes in Mathematics. Vol. 1620. Springer. pp. 1–119. doi:10.1007/BFb0094792. ISBN 978-3-540-60542-3.
Sonnad, Kiran G.; Cary, John R. (2004). "Finding a nonlinear lattice with improved integrability using Lie transform perturbation theory". Physical Review E. 69 (5): 056501. Bibcode:2004PhRvE..69e6501S. doi:10.1103/PhysRevE.69.056501. PMID 15244955.
== External links ==
"Integrable system", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"SIDE - Symmetries and Integrability of Difference Equations", a conference devoted to the study of integrable difference equations and related topics.
| Wikipedia/Exactly_solvable_model |
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.
The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.
The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.
== Definition ==
The Fourier transform of a complex-valued (Lebesgue) integrable function $f(x)$ on the real line is the complex-valued function ${\hat {f}}(\xi )$, defined by the integral
$${\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\,e^{-i2\pi \xi x}\,dx.\quad {\text{(Eq.1)}}$$
Evaluating the Fourier transform for all values of $\xi$ produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity. If $f(x)$ decays with all derivatives, i.e.,
$$\lim _{|x|\to \infty }f^{(n)}(x)=0,\quad \forall n\in \mathbb {N} ,$$
then ${\widehat {f}}$ converges for all frequencies and, by the Riemann–Lebesgue lemma, ${\widehat {f}}$ also decays with all derivatives.
First introduced in Fourier's Analytical Theory of Heat, the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e.,
$$f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )\,e^{i2\pi \xi x}\,d\xi .\quad {\text{(Eq.2)}}$$
The functions $f$ and ${\widehat {f}}$ are referred to as a Fourier transform pair. A common notation for designating transform pairs is
$$f(x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\widehat {f}}(\xi ),$$
for example
$$\operatorname {rect} (x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \operatorname {sinc} (\xi ).$$
By analogy, the Fourier series can be regarded as an abstract Fourier transform on the group $\mathbb {Z}$ of integers. That is, the synthesis of a sequence of complex numbers $c_{n}$ is defined by the Fourier transform
$$f(x)=\sum _{n=-\infty }^{\infty }c_{n}\,e^{i2\pi {\tfrac {n}{P}}x},$$
such that the $c_{n}$ are given by the inversion formula, i.e., the analysis
$$c_{n}={\frac {1}{P}}\int _{-P/2}^{P/2}f(x)\,e^{-i2\pi {\frac {n}{P}}x}\,dx,$$
for some complex-valued, $P$-periodic function $f(x)$ defined on a bounded interval $[-P/2,P/2]\subset \mathbb {R}$. When $P\to \infty$, the constituent frequencies are a continuum: ${\tfrac {n}{P}}\to \xi \in \mathbb {R}$, and $c_{n}\to {\hat {f}}(\xi )\in \mathbb {C}$.
In other words, on the finite interval $[-P/2,P/2]$ the function $f(x)$ has a discrete decomposition in the periodic functions $e^{i2\pi xn/P}$. On the infinite interval $(-\infty ,\infty )$ the function $f(x)$ has a continuous decomposition in periodic functions $e^{i2\pi x\xi }$.
=== Lebesgue integrable functions ===
A measurable function $f:\mathbb {R} \to \mathbb {C}$ is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite:
$$\|f\|_{1}=\int _{\mathbb {R} }|f(x)|\,dx<\infty .$$
If $f$ is Lebesgue integrable then the Fourier transform, given by Eq.1, is well-defined for all $\xi \in \mathbb {R}$. Furthermore, ${\widehat {f}}\in L^{\infty }\cap C(\mathbb {R} )$ is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity.
The space $L^{1}(\mathbb {R} )$ is the space of measurable functions for which the norm $\|f\|_{1}$ is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform on $L^{1}(\mathbb {R} )$ is one-to-one. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that $f(x)$ decayed with all derivatives.
While Eq.1 defines the Fourier transform for (complex-valued) functions in $L^{1}(\mathbb {R} )$, it is not well-defined for other integrability classes, most importantly the space of square-integrable functions $L^{2}(\mathbb {R} )$. For example, the function $f(x)=(1+x^{2})^{-1/2}$ is in $L^{2}$ but not $L^{1}$, and therefore the Lebesgue integral Eq.1 does not exist. However, the Fourier transform on the dense subspace $L^{1}\cap L^{2}(\mathbb {R} )\subset L^{2}(\mathbb {R} )$ admits a unique continuous extension to a unitary operator on $L^{2}(\mathbb {R} )$. This extension is important in part because, unlike the case of $L^{1}$, the Fourier transform is an automorphism of the space $L^{2}(\mathbb {R} )$.
In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each gives three rigorous ways of extending the Fourier transform to square integrable functions using this procedure. A general principle in working with the $L^{2}$ Fourier transform is that Gaussians are dense in $L^{1}\cap L^{2}$, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians:
that $e^{-\pi x^{2}}$ is its own Fourier transform; and
that the Gaussian integral $\int _{-\infty }^{\infty }e^{-\pi x^{2}}\,dx=1.$
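Both facts are easy to check numerically; this sketch approximates Eq.1 by a Riemann sum on a truncated grid (the window $[-10,10]$ and the step size are ad hoc choices, and the agreement is up to quadrature error):

```python
import numpy as np

x = np.linspace(-10, 10, 4001)     # truncated grid standing in for the real line
dx = x[1] - x[0]
g = np.exp(-np.pi * x**2)

print(np.sum(g) * dx)              # ~1.0: the Gaussian integral

xi = np.linspace(-3, 3, 13)
ghat = np.array([np.sum(g * np.exp(-2j * np.pi * k * x)) * dx for k in xi])
print(np.max(np.abs(ghat - np.exp(-np.pi * xi**2))))   # ~0: g equals its own transform
```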
A feature of the $L^{1}$ Fourier transform is that it is a homomorphism of Banach algebras from $L^{1}$ equipped with the convolution operation to the Banach algebra of continuous functions under the $L^{\infty }$ (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on $L^{2}$ and an algebra homomorphism from $L^{1}$ to $L^{\infty }$, without renormalizing the Lebesgue measure.
=== Angular frequency (ω) ===
When the independent variable ($x$) represents time (often denoted by $t$), the transform variable ($\xi$) represents frequency (often denoted by $f$). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, $\omega =2\pi \xi$, whose units are radians per second.
The substitution $\xi ={\tfrac {\omega }{2\pi }}$ into Eq.1 produces this convention, where the function ${\widehat {f}}$ is relabeled ${\widehat {f_{1}}}$:
$${\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}$$
Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the $2\pi$ factor evenly between the transform and its inverse, which leads to another convention:
$${\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\,{\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}$$
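The relations between the conventions can be verified numerically; in this sketch (the Gaussian test function and the grids are arbitrary choices), ${\widehat {f_{3}}}(\omega )$ agrees with ${\widehat {f_{1}}}(\omega /2\pi )$, and inverting ${\widehat {f_{3}}}$ requires the asymmetric $1/2\pi$ factor:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)                  # test function (arbitrary choice)

f1 = lambda xi: np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx   # Eq.1 convention
f3 = lambda w:  np.sum(f * np.exp(-1j * w * x)) * dx            # angular-frequency convention

print(abs(f3(1.7) - f1(1.7 / (2 * np.pi))))    # ~0: fhat3(w) = fhat1(w / (2 pi))

# Inverting fhat3 needs the asymmetric 1/(2 pi) factor:
w = np.linspace(-30, 30, 2001)
dw = w[1] - w[0]
f3_grid = np.array([f3(wk) for wk in w])
f_rec = np.sum(f3_grid * np.exp(1j * w * 1.0)) * dw / (2 * np.pi)
print(abs(f_rec - np.exp(-1.0)))               # ~0: recovers f(1)
```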
Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites.
== Background ==
=== History ===
In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.
=== Complex sinusoids ===
In general, the coefficients ${\widehat {f}}(\xi )$ are complex numbers, which have two equivalent forms (see Euler's formula):
$${\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.$$
The product with $e^{i2\pi \xi x}$ (Eq.2) has these forms:
$${\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}},\end{aligned}}$$
which conveys both amplitude and phase of frequency $\xi$.
Likewise, the intuitive interpretation of Eq.1 is that multiplying $f(x)$ by $e^{-i2\pi \xi x}$ has the effect of subtracting $\xi$ from every frequency component of the function $f(x)$. Only the component that was at frequency $\xi$ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero (see § Example).
It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula.
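This interpretation can be checked on a pure tone over a finite window (a sketch; with a finite window "zero" means small, and the tone frequency 3 and window length 100 are arbitrary choices, picked so the off-resonance probes span whole numbers of periods):

```python
import numpy as np

x = np.linspace(0, 100, 200001)        # finite window [0, 100]: a stand-in for R
dx = x[1] - x[0]
f = np.exp(2j * np.pi * 3.0 * x)       # pure tone at frequency 3

for xi in [2.0, 2.9, 3.0, 3.1]:
    val = np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx
    print(xi, round(abs(val), 3))      # ~100 (the window length) at xi = 3, ~0 otherwise
```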
=== Negative frequency ===
Euler's formula introduces the possibility of negative $\xi$, and Eq.1 is defined for all $\xi \in \mathbb {R}$. Only certain complex-valued $f(x)$ have transforms with ${\widehat {f}}=0$ for all $\xi <0$ (see Analytic signal; a simple example is $e^{i2\pi \xi _{0}x}$ with $\xi _{0}>0$). But negative frequency is necessary to characterize all other complex-valued $f(x)$, found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others.
For a real-valued $f(x)$, Eq.1 has the symmetry property ${\widehat {f}}(-\xi )={\widehat {f}}^{*}(\xi )$ (see § Conjugation below). This redundancy enables Eq.2 to distinguish $f(x)=\cos(2\pi \xi _{0}x)$ from $e^{i2\pi \xi _{0}x}$. But of course it cannot tell us the actual sign of $\xi _{0}$, because $\cos(2\pi \xi _{0}x)$ and $\cos(2\pi (-\xi _{0})x)$ are equal for all real $x$.
=== Fourier transform for periodic functions ===
The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined, the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions.
This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If $f(x)$ is a periodic function, with period $P$, that has a convergent Fourier series, then:
$${\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),$$
where $c_{n}$ are the Fourier series coefficients of $f$, and $\delta$ is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients.
=== Sampling the Fourier transform ===
The Fourier transform of an integrable function $f$ can be sampled at regular intervals of arbitrary length ${\tfrac {1}{P}}$. These samples can be deduced from one cycle of a periodic function $f_{P}$ which has Fourier series coefficients proportional to those samples, by the Poisson summation formula:
$$f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x}.$$
The integrability of $f$ ensures the periodic summation converges. Therefore, the samples ${\widehat {f}}\left({\tfrac {k}{P}}\right)$ can be determined by Fourier series analysis:
$${\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.$$
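The following sketch checks this for the Gaussian $f(x)=e^{-\pi x^{2}}$, whose transform is known in closed form (the period $P$, the truncation of the periodic summation, and the grid are arbitrary choices):

```python
import numpy as np

P = 4.0
x = np.linspace(-P / 2, P / 2, 4000, endpoint=False)   # one period, left-point rule
dx = x[1] - x[0]

# Periodic summation f_P(x) = sum_n f(x + nP); the Gaussian tails decay fast,
# so a modest number of shifts is an accurate truncation.
fP = sum(np.exp(-np.pi * (x + n * P) ** 2) for n in range(-6, 7))

for k in range(4):
    ck = np.sum(fP * np.exp(-2j * np.pi * k * x / P)) * dx   # integral over one period
    print(k, abs(ck - np.exp(-np.pi * (k / P) ** 2)))        # ~0: matches fhat(k/P)
```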
When $f(x)$ has compact support, $f_{P}(x)$ has a finite number of terms within the interval of integration. When $f(x)$ does not have compact support, numerical evaluation of $f_{P}(x)$ requires an approximation, such as tapering $f(x)$ or truncating the number of terms.
== Units ==
The frequency variable must have inverse units to the units of the original function's domain (typically named $t$ or $x$). For example, if $t$ is measured in seconds, $\xi$ should be in cycles per second or hertz. If the scale of time is in units of $2\pi$ seconds, then another Greek letter $\omega$ is typically used instead to represent angular frequency (where $\omega =2\pi \xi$) in units of radians per second. If using $x$ for units of length, then $\xi$ must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of $t$ and measured in units of $t$, and the other which is the range of $\xi$ and measured in inverse units to the units of $t$.
These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition.
In general, $\xi$ must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series.
That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants.
In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for frequency of a complex wave. It simply means that
{\displaystyle {\hat {f}}(\xi )} is the amplitude of the wave {\displaystyle e^{-i2\pi \xi x}} instead of the wave {\displaystyle e^{i2\pi \xi x}} (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In electrical engineering the letter j is typically used for the imaginary unit instead of i because i is used for current.
When using dimensionless units, the constant factors might not be written in the transform definition. For instance, in probability theory, the characteristic function ϕ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either:
{\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.}
In probability theory and mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because many random variables are not of continuous type and do not possess a density function; one must treat not functions but distributions, i.e., measures which possess "atoms".
From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group.
== Properties ==
Let
f
(
x
)
{\displaystyle f(x)}
and
h
(
x
)
{\displaystyle h(x)}
represent integrable functions Lebesgue-measurable on the real line satisfying:
∫
−
∞
∞
|
f
(
x
)
|
d
x
<
∞
.
{\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .}
We denote the Fourier transforms of these functions as {\displaystyle {\hat {f}}(\xi )} and {\displaystyle {\hat {h}}(\xi )} respectively.
=== Basic properties ===
The Fourier transform has the following basic properties:
==== Linearity ====
{\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} }
==== Time shifting ====
{\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} }
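This rule can be verified directly by quadrature. The sketch below is illustrative only: it assumes a Gaussian test function and a hand-rolled Fourier integral helper ft, both of which are ad hoc names, and checks the time-shifting rule at a few frequencies:

import numpy as np

# Check the time-shifting rule F{f(x - x0)} = e^{-i 2 pi x0 xi} f-hat(xi)
# by direct Riemann-sum quadrature on a Gaussian.
x = np.linspace(-10, 10, 8001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)
x0 = 0.75

def ft(g, xi):
    # brute-force Fourier integral of the samples g at a single frequency xi
    return np.sum(g * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.0, 0.5, 1.5):
    lhs = ft(np.exp(-np.pi * (x - x0)**2), xi)
    rhs = np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)
    print(xi, abs(lhs - rhs))   # differences sit at quadrature-error level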
==== Frequency shifting ====
{\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} }
==== Time scaling ====
{\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0}
The case {\displaystyle a=-1} leads to the time-reversal property:
{\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )}
==== Symmetry ====
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:
{\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}}
From this, various relationships are apparent, for example:
The transform of a real-valued function {\displaystyle (f_{_{RE}}+f_{_{RO}})} is the conjugate symmetric function {\displaystyle {\hat {f}}_{RE}+i\ {\hat {f}}_{IO}.}
Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function {\displaystyle (i\ f_{_{IE}}+i\ f_{_{IO}})} is the conjugate antisymmetric function {\displaystyle {\hat {f}}_{RO}+i\ {\hat {f}}_{IE},} and the converse is true.
The transform of a conjugate symmetric function {\displaystyle (f_{_{RE}}+i\ f_{_{IO}})} is the real-valued function {\displaystyle {\hat {f}}_{RE}+{\hat {f}}_{RO},} and the converse is true.
The transform of a conjugate antisymmetric function {\displaystyle (f_{_{RO}}+i\ f_{_{IE}})} is the imaginary-valued function {\displaystyle i\ {\hat {f}}_{IE}+i{\hat {f}}_{IO},} and the converse is true.
==== Conjugation ====
{\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}}
(Note: the ∗ denotes complex conjugation.)
In particular, if {\displaystyle f} is real, then {\displaystyle {\widehat {f}}} is even symmetric (a Hermitian function):
{\displaystyle {\widehat {f}}(-\xi )={\bigl (}{\widehat {f}}(\xi ){\bigr )}^{*}.}
And if {\displaystyle f} is purely imaginary, then {\displaystyle {\widehat {f}}} is odd symmetric:
{\displaystyle {\widehat {f}}(-\xi )=-({\widehat {f}}(\xi ))^{*}.}
==== Real and imaginary parts ====
{\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)}
{\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)}
==== Zero frequency component ====
Substituting {\displaystyle \xi =0} in the definition, we obtain:
{\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.}
The integral of {\displaystyle f} over its domain is known as the average value or DC bias of the function.
=== Uniform continuity and the Riemann–Lebesgue lemma ===
The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties.
The Fourier transform {\displaystyle {\hat {f}}} of any integrable function {\displaystyle f} is uniformly continuous and
{\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}}
By the Riemann–Lebesgue lemma,
{\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .}
However, {\displaystyle {\hat {f}}} need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent.
It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both {\displaystyle f} and {\displaystyle {\hat {f}}} are integrable, the inverse equality
{\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi }
holds for almost every x. As a result, the Fourier transform is injective on L1(R).
=== Plancherel theorem and Parseval's theorem ===
Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows:
{\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,}
where the bar denotes complex conjugation.
The Plancherel theorem, which follows from the above, states that
{\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .}
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was stated only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem.
See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups.
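The discrete analogue of Plancherel's theorem is easy to check numerically. In the sketch below, which is illustrative rather than canonical, NumPy's FFT is used; since that FFT is unnormalized, the frequency-side energy must be divided by N for the identity to hold:

import numpy as np

# Discrete Plancherel/Parseval: sum |f|^2 == sum |FFT(f)|^2 / N.
rng = np.random.default_rng(0)
f = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
F = np.fft.fft(f)
print(np.sum(np.abs(f)**2), np.sum(np.abs(F)**2) / f.size)  # equal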
=== Convolution theorem ===
The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear).
This means that if:
{\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,}
where ∗ denotes the convolution operation, then:
{\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).}
In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system.
Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ).
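The discrete (circular) form of the convolution theorem can be verified directly. The following sketch is illustrative only; the direct circular convolution via np.roll is an ad hoc construction used purely as an independent reference:

import numpy as np

# Circular convolution theorem: FFT(f circ-conv g) = FFT(f) * FFT(g).
rng = np.random.default_rng(1)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

# circular convolution computed directly from its definition
h = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(256)])

lhs = np.fft.fft(h)
rhs = np.fft.fft(f) * np.fft.fft(g)
print(np.max(np.abs(lhs - rhs)))   # ~1e-12, i.e. equal up to rounding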
=== Cross-correlation theorem ===
In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x):
{\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy}
then the Fourier transform of h(x) is:
{\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).}
As a special case, the autocorrelation of function f(x) is:
{\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy}
for which
{\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.}
=== Differentiation ===
Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by
{\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).}
More generally, the Fourier transformation of the nth derivative f(n) is given by
{\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).}
Analogously,
{\displaystyle {\mathcal {F}}\left\{{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )\right\}=(i2\pi x)^{n}f(x)}
so
{\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).}
By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth."
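The derivative rule lends itself to a direct numerical check. The sketch below is a minimal example assuming the Gaussian e−πx², whose derivative is available in closed form; ft is an ad hoc quadrature helper, not a library routine:

import numpy as np

# Check the differentiation rule F{f'}(xi) = i 2 pi xi f-hat(xi) for the
# Gaussian f(x) = exp(-pi x^2), with f'(x) = -2 pi x exp(-pi x^2).
x = np.linspace(-10, 10, 8001)
dx = x[1] - x[0]
f  = np.exp(-np.pi * x**2)
fp = -2 * np.pi * x * f                      # f'(x) in closed form

def ft(g, xi):
    return np.sum(g * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.25, 1.0, 2.0):
    print(xi, ft(fp, xi), 2j * np.pi * xi * ft(f, xi))   # columns agree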
=== Eigenfunctions ===
The Fourier transform is a linear transform which has eigenfunctions obeying
{\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,}
with {\displaystyle \lambda \in \mathbb {C} .}
A set of eigenfunctions is found by noting that the homogeneous differential equation
{\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0}
leads to eigenfunctions {\displaystyle \psi (x)} of the Fourier transform {\displaystyle {\mathcal {F}}} as long as the form of the equation remains invariant under Fourier transform. In other words, every solution {\displaystyle \psi (x)} and its Fourier transform {\displaystyle {\hat {\psi }}(\xi )} obey the same equation. Assuming uniqueness of the solutions, every solution {\displaystyle \psi (x)} must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if {\displaystyle U(x)} can be expanded in a power series in which for all terms the same factor of either one of {\displaystyle \pm 1,\pm i} arises from the factors {\displaystyle i^{n}} introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable {\displaystyle U(x)=x} leads to the standard normal distribution.
More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation
{\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)}
with {\displaystyle C} constant and {\displaystyle W(x)} being a non-constant even function remains invariant in form when applying the Fourier transform {\displaystyle {\mathcal {F}}} to both sides of the equation. The simplest example is provided by {\displaystyle W(x)=x^{2}} which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions. Equivalently one may use
{\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),}
where Hen(x) are the "probabilist's" Hermite polynomials, defined as
{\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.}
Under this convention for the Fourier transform, we have that
{\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).}
In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. Because of
{\displaystyle {\mathcal {F}}^{4}=\mathrm {id} } there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3 where the Fourier transform acts on Hk simply by multiplication by ik.
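The simplest of these eigenfunctions can be checked by quadrature. The sketch below is illustrative: it assumes the Gaussian e−πx², which under this convention is its own Fourier transform, i.e. an eigenfunction with eigenvalue 1:

import numpy as np

# The Gaussian exp(-pi x^2) is its own Fourier transform under this
# convention: an eigenfunction of F with eigenvalue 1.
x = np.linspace(-10, 10, 8001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)
for xi in (0.0, 0.5, 1.0):
    val = np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx
    print(xi, val.real, np.exp(-np.pi * xi**2))   # the two columns agree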
Since the complete set of Hermite functions ψn provides a resolution of the identity they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed:
{\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.}
This approach to defining the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator
{\displaystyle N} via
{\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .}
The operator {\displaystyle N} is the number operator of the quantum harmonic oscillator written as
{\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).}
It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform {\displaystyle {\mathcal {F}}} for the particular value {\displaystyle t=\pi /2,} with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of {\displaystyle N} are the Hermite functions {\displaystyle \psi _{n}(x)} which are therefore also eigenfunctions of {\displaystyle {\mathcal {F}}.}
Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform.
=== Inversion and periodicity ===
Under suitable conditions on the function {\displaystyle f}, it can be recovered from its Fourier transform {\displaystyle {\hat {f}}}. Indeed, denoting the Fourier transform operator by {\displaystyle {\mathcal {F}}}, so {\displaystyle {\mathcal {F}}f:={\hat {f}}}, then for suitable functions, applying the Fourier transform twice simply flips the function: {\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)}, which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields {\displaystyle {\mathcal {F}}^{4}(f)=f}, so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: {\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f}. In particular the Fourier transform is invertible (under suitable conditions).
More precisely, defining the parity operator {\displaystyle {\mathcal {P}}} such that {\displaystyle ({\mathcal {P}}f)(x)=f(-x)}, we have:
{\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}}
These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem.
This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis.
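The identity F² = P can also be observed numerically. The following sketch is illustrative only; the grid sizes and the off-centre Gaussian are arbitrary choices, and ft is an ad hoc quadrature helper:

import numpy as np

# Applying the transform twice reverses the argument: (F^2 f)(x) = f(-x).
# The off-centre Gaussian makes the flip visible.
x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]
f = np.exp(-np.pi * (x - 1.0)**2)          # Gaussian centred at x = +1

def ft(g):
    # Fourier integral sampled on the same grid
    return np.array([np.sum(g * np.exp(-2j * np.pi * xi * x)) * dx for xi in x])

f2 = ft(ft(f))                              # F^2 f
print(np.max(np.abs(f2.real - np.exp(-np.pi * (-x - 1.0)**2))))  # close to 0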
=== Connection with the Heisenberg group ===
The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). These operators do not commute, as their (group) commutator is
{\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)}
which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, z) ∈ R2 × U(1), with the group law
{\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).}
Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by
{\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}}
so that J2 = −I. This J can be extended to a unique automorphism of H1:
{\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).}
According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that
{\displaystyle \rho \circ j=W\rho W^{*}.}
This operator W is the Fourier transform.
Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so we have (W2f)(x) = f (−x) is the reflection of the original function f.
== Complex domain ==
The integral for the Fourier transform
{\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt}
can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between.
The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0,
{\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }}
for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ).
(If f is not smooth, but only L2, the statement still holds provided n = 0.) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups.
If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity. The converse is false and it is not known how to characterise the Fourier transform of a causal function.
=== Laplace transform ===
The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters.
It may happen that a function f for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane.
For example, if f(t) is of exponential growth, i.e.,
{\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }}
for some constants C, a ≥ 0, then
{\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,}
convergent for all 2πτ < −a, is the two-sided Laplace transform of f.
The more usual version ("one-sided") of the Laplace transform is
{\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.}
If f is also causal and analytic, then:
{\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).}
Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case for causal functions, but with the change of variable s = i2πξ.
From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb.
Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel.
In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis.
=== Inversion ===
Still with {\displaystyle \xi =\sigma +i\tau }, if {\displaystyle {\widehat {f}}} is complex analytic for a ≤ τ ≤ b, then
{\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma }
by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.
Theorem: If f(t) = 0 for t < 0, and |f(t)| < Cea|t| for some constants C, a > 0, then
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,}
for any τ < −a/(2π).
This theorem implies the Mellin inversion formula for the Laplace transformation,
{\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds}
for any b > a, where F(s) is the Laplace transform of f(t).
The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t) e−at being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values.
L2 versions of these inversion formulas are also available.
== Fourier transform on Euclidean space ==
The Fourier transform can be defined in any arbitrary number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition:
{\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} }
where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space {\displaystyle \mathbb {R} ^{n\star }}, in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩.
All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds.
=== Uncertainty principle ===
Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform.
The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form.
Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized:
{\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.}
It follows from the Plancherel theorem that f̂(ξ) is also normalized.
The spread around x = 0 may be measured by the dispersion about zero defined by
{\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.}
In probability terms, this is the second moment of |f(x)|2 about zero.
The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then
{\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.}
The equality is attained only in the case
{\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}}
where σ > 0 is arbitrary and C1 = ⁴√2/√σ so that f is L2-normalized. In other words, where f is a (normalized) Gaussian function with variance σ2/2π, centered at zero, and its Fourier transform is a Gaussian function with variance σ−2/2π. Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below).
In fact, this inequality implies that:
{\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .}
In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle.
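The bound, and the fact that the Gaussian attains it, can be checked numerically. The sketch below is a minimal example for one arbitrary choice of σ; the grid and cut-offs are ad hoc, and the transform pair is taken from the closed forms quoted above:

import numpy as np

# Check the dispersion bound D0(f) D0(f-hat) >= 1/(16 pi^2); the Gaussian
# pair attains it with equality. sigma is an arbitrary choice.
x = np.linspace(-30, 30, 60001)
dx = x[1] - x[0]
sigma = 2.0
C1 = 2**0.25 / np.sqrt(sigma)
f     = C1 * np.exp(-np.pi * x**2 / sigma**2)          # L2-normalised
f_hat = sigma * C1 * np.exp(-np.pi * sigma**2 * x**2)  # its transform

D0_f    = np.sum(x**2 * np.abs(f)**2) * dx
D0_fhat = np.sum(x**2 * np.abs(f_hat)**2) * dx
print(D0_f * D0_fhat, 1 / (16 * np.pi**2))             # equal within rounding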
A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as:
{\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)}
where H(p) is the differential entropy of the probability density function p(x):
{\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx}
where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case.
=== Sine and cosine transforms ===
Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) λ by
{\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .}
This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised):
{\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt}
and
{\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.}
Older literature refers to the two transform functions, the Fourier cosine transform, a, and the Fourier sine transform, b.
The function f can be recovered from the sine and cosine transform using
{\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda }
together with trigonometric identities. This is referred to as Fourier's integral formula.
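As a small numerical illustration, assuming the even Gaussian e−πt², for which b(λ) must vanish and a(λ) equals twice its ordinary Fourier transform, the coefficient formulas above can be evaluated by direct summation:

import numpy as np

# Evaluate the cosine/sine coefficients a(lambda), b(lambda) for an even
# Gaussian: a(lambda) = 2 exp(-pi lambda^2), b(lambda) = 0 by symmetry.
t = np.linspace(-10, 10, 8001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)
for lam in (0.0, 0.5, 1.0):
    a = 2 * np.sum(f * np.cos(2 * np.pi * lam * t)) * dt
    b = 2 * np.sum(f * np.sin(2 * np.pi * lam * t)) * dt
    print(lam, a, b)   # a matches 2 exp(-pi lam^2); b is ~0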
=== Spherical harmonics ===
Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e−π|x|2P(x) for some P(x) in Ak, then f̂(ξ) = i−k f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk and the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk.
Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then
{\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )}
where
{\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.}
Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order (n + 2k − 2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one.
=== Restriction problems ===
In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3).
One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞): such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by:
{\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.}
Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds. Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp.
== Fourier transform on function spaces ==
The definition of the Fourier transform naturally extends from {\displaystyle L^{1}(\mathbb {R} )} to {\displaystyle L^{1}(\mathbb {R} ^{n})}.
That is, if {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then the Fourier transform {\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})} is given by
{\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.}
This operator is bounded as
{\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,}
which shows that its operator norm is bounded by 1. The Riemann–Lebesgue lemma shows that if {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., {\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})}.
Furthermore, the image of {\displaystyle L^{1}} under {\displaystyle {\mathcal {F}}} is a strict subset of {\displaystyle C_{0}(\mathbb {R} ^{n})}.
Similarly to the case of one variable, the Fourier transform can be defined on {\displaystyle L^{2}(\mathbb {R} ^{n})}. The Fourier transform in {\displaystyle L^{2}(\mathbb {R} ^{n})} is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e.,
{\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx}
where the limit is taken in the L2 sense.
Furthermore, {\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have
{\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.}
In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself.
=== On other Lp ===
For {\displaystyle 1<p<2}, the Fourier transform can be defined on {\displaystyle L^{p}(\mathbb {R} )} by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = p/(p − 1) is the Hölder conjugate of p (by the Hausdorff–Young inequality). However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function.
=== Tempered distributions ===
One might consider enlarging the domain of the Fourier transform from {\displaystyle L^{1}+L^{2}} by considering generalized functions, or distributions. A distribution on {\displaystyle \mathbb {R} ^{n}} is a continuous linear functional on the space {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology.
of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since
C
c
∞
(
R
n
)
{\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}
is dense in
L
2
(
R
n
)
{\displaystyle L^{2}(\mathbb {R} ^{n})}
, the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in
L
2
(
R
n
)
{\displaystyle L^{2}(\mathbb {R} ^{n})}
by continuity arguments. The strategy is then to consider the action of the Fourier transform on
C
c
∞
(
R
n
)
{\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}
and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map
C
c
∞
(
R
n
)
{\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}
to
C
c
∞
(
R
n
)
{\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}
. In fact the Fourier transform of an element in
C
c
∞
(
R
n
)
{\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})}
can not vanish on an open set; see the above discussion on the uncertainty principle.
The Fourier transform can also be defined for tempered distributions {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})}, dual to the space of Schwartz functions {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})}. A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, hence {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})} and:
{\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).}
The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above.
For the definition of the Fourier transform of a tempered distribution, let {\displaystyle f} and {\displaystyle g} be integrable functions, and let {\displaystyle {\hat {f}}} and {\displaystyle {\hat {g}}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula,
{\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.}
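The multiplication formula lends itself to a direct numerical check. The sketch below is illustrative only, assuming two Gaussians whose transforms follow from the self-transform property and the scaling rule given earlier:

import numpy as np

# Check the multiplication formula: integral of f-hat * g equals the
# integral of f * g-hat, for two Gaussians with closed-form transforms.
x = np.linspace(-10, 10, 8001)
dx = x[1] - x[0]
f     = np.exp(-np.pi * x**2)
f_hat = np.exp(-np.pi * x**2)               # Gaussian is its own transform
g     = np.exp(-np.pi * (x / 2)**2)         # wider Gaussian, g(x) = f(x/2)
g_hat = 2 * np.exp(-np.pi * (2 * x)**2)     # scaling rule with a = 1/2
print(np.sum(f_hat * g) * dx, np.sum(f * g_hat) * dx)   # both sides agree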
Every integrable function {\displaystyle f} defines (induces) a distribution {\displaystyle T_{f}} by the relation
{\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}
So it makes sense to define the Fourier transform of a tempered distribution {\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} )} by the duality:
{\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).}
Extending this to all tempered distributions {\displaystyle T} gives the general definition of the Fourier transform.
Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions.
== Generalizations ==
=== Fourier–Stieltjes transform on measurable spaces ===
The Fourier transform of a finite Borel measure μ on Rn is given by the continuous function:
{\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,}
and called the Fourier–Stieltjes transform due to its connection with the Riemann–Stieltjes integral representation of (Radon) measures. If {\displaystyle \mu } is the probability distribution of a random variable {\displaystyle X} then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition reduces to the usual Fourier transform. Stated more generally, when {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, i.e., {\displaystyle d\mu =f(x)dx,}
then
{\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),}
and the Fourier–Stieltjes transform reduces to the usual definition of the Fourier transform. That is, the notable difference with the Fourier transform of integrable functions is that the Fourier–Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures.
Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle.
One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used).
=== Locally compact abelian groups ===
The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation invariant measure μ, called Haar measure. For a locally compact abelian group G, the set of irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from {\displaystyle G} to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by
{\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.}
The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ.
The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider the representation of T on the complex plane C that is a 1-dimensional complex vector space. There is a family of representations (which are irreducible since C is 1-dim)
{\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}}
where {\displaystyle e_{k}(x)=e^{i2\pi kx}} for {\displaystyle x\in T}.
The character of such a representation, that is the trace of {\displaystyle e_{k}(x)} for each {\displaystyle x\in T} and {\displaystyle k\in Z}, is {\displaystyle e^{i2\pi kx}} itself. In the case of a representation of a finite group, the character table of the group G consists of rows of vectors such that each row is the character of one irreducible representation of G, and these vectors form an orthonormal basis of the space of class functions that map from G to C by Schur's lemma. Now the group T is no longer finite but still compact, and it preserves the orthonormality of the character table. Each row of the table is the function
{\displaystyle e_{k}(x)} of {\displaystyle x\in T,} and the inner product between two class functions (all functions being class functions since T is abelian) {\displaystyle f,g\in L^{2}(T,d\mu )} is defined as
{\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)}
with the normalizing factor {\displaystyle |T|=1}. The sequence {\displaystyle \{e_{k}\mid k\in Z\}} is an orthonormal basis of the space of class functions {\displaystyle L^{2}(T,d\mu )}.
For any representation V of a finite group G, {\displaystyle \chi _{v}} can be expressed as the span {\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}} ({\displaystyle V_{i}} are the irreps of G), such that {\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)}. Similarly for {\displaystyle G=T} and {\displaystyle f\in L^{2}(T,d\mu )}, {\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}}.
The Pontryagin dual {\displaystyle {\hat {T}}} is {\displaystyle \{e_{k}\}(k\in Z)} and for {\displaystyle f\in L^{2}(T,d\mu )}, {\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy} is its Fourier transform for {\displaystyle e_{k}\in {\hat {T}}}.
=== Gelfand transform ===
The Fourier transform is also a special case of Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above.
Given an abelian locally compact Hausdorff topological group G, as before we consider space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. It also has an involution * given by
{\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.}
Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.)
Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by
{\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}}
It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G) is the Fourier–Pontryagin transform.
=== Compact non-abelian groups ===
The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis.
Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by
{\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)}
where U̅(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as
{\displaystyle d\mu =f\,d\lambda }
for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ.
The mapping {\displaystyle \mu \mapsto {\hat {\mu }}}
defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm
$$\|E\|=\sup_{\sigma\in\Sigma}\left\|E_{\sigma}\right\|$$
is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by
$$f^{*}(g)=\overline{f\left(g^{-1}\right)},$$
and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then
$$f(g)=\sum_{\sigma\in\Sigma}d_{\sigma}\operatorname{tr}\left({\hat{f}}(\sigma)U_{g}^{(\sigma)}\right)$$
where the summation is understood as convergent in the L2 sense.
The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions.
== Alternatives ==
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the fractional Fourier transform, or the synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
== Example ==
The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function
$$f(t)=\cos(2\pi\,3t)\;e^{-\pi t^{2}},$$
which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product
$$f(t)e^{-i2\pi 3t},$$
which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of
$f(t)$ and $\operatorname{Re}(e^{-i2\pi 3t})$ oscillate at the same rate and in phase, whereas $f(t)$ and $\operatorname{Im}(e^{-i2\pi 3t})$ oscillate at the same rate but with orthogonal phase.
oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1.
However, when trying to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function $f(t)$.
To reinforce an earlier point, the reason for the response at $\xi=-3$ Hz is that $\cos(2\pi 3t)$ and $\cos(2\pi(-3)t)$ are indistinguishable. The transform of $e^{i2\pi 3t}\cdot e^{-\pi t^{2}}$ would have just one response, whose amplitude is the integral of the smooth envelope $e^{-\pi t^{2}}$, whereas $\operatorname{Re}\left(f(t)\cdot e^{-i2\pi 3t}\right)$ is $e^{-\pi t^{2}}\bigl(1+\cos(2\pi 6t)\bigr)/2$.
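This heuristic can be checked numerically. The following sketch (assuming NumPy; the grid spacing and integration limits are arbitrary choices, not part of the article) approximates the Fourier integral of the function above at 3 Hz and at 5 Hz:

```python
import numpy as np

dt = 1e-4
t = np.arange(-10, 10, dt)   # the Gaussian envelope is negligible beyond |t| ~ 4
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

def fourier_at(xi):
    # Riemann-sum approximation of the integral of f(t) exp(-i 2 pi xi t) dt
    return np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt

print(abs(fourier_at(3.0)))  # ~0.5   : large response, the 3 Hz component is present
print(abs(fourier_at(5.0)))  # ~2e-6  : nearly zero, no 5 Hz component
```

Together with the identical response at −3 Hz, the amplitude of the 3 Hz component sums to 1, as stated above.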
== Applications ==
Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, transformation of the result can be made back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics.
=== Analysis of differential equations ===
Perhaps the most important use of the Fourier transformation is to solve partial differential equations.
Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is
$$\frac{\partial^{2}y(x,t)}{\partial x^{2}}=\frac{\partial y(x,t)}{\partial t}.$$
The example we will give, a slightly more difficult one, is the wave equation in one dimension,
$$\frac{\partial^{2}y(x,t)}{\partial x^{2}}=\frac{\partial^{2}y(x,t)}{\partial t^{2}}.$$
As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions"
$$y(x,0)=f(x),\qquad \frac{\partial y(x,0)}{\partial t}=g(x).$$
Here, f and g are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution.
It is easier to find the Fourier transform ŷ of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y.
Fourier's method is as follows. First, note that any function of the forms
$$\cos{\bigl(}2\pi\xi(x\pm t){\bigr)}\quad\text{or}\quad\sin{\bigl(}2\pi\xi(x\pm t){\bigr)}$$
satisfies the wave equation. These are called the elementary solutions.
Second, note that therefore any integral
$$y(x,t)=\int_{0}^{\infty}d\xi\,{\Bigl[}a_{+}(\xi)\cos{\bigl(}2\pi\xi(x+t){\bigr)}+a_{-}(\xi)\cos{\bigl(}2\pi\xi(x-t){\bigr)}+b_{+}(\xi)\sin{\bigl(}2\pi\xi(x+t){\bigr)}+b_{-}(\xi)\sin{\bigl(}2\pi\xi(x-t){\bigr)}{\Bigr]}$$
satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation.
Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x.
The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain
$$2\int_{-\infty}^{\infty}y(x,0)\cos(2\pi\xi x)\,dx=a_{+}+a_{-}$$
and
$$2\int_{-\infty}^{\infty}y(x,0)\sin(2\pi\xi x)\,dx=b_{+}+b_{-}.$$
Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields
$$2\int_{-\infty}^{\infty}\frac{\partial y(x,0)}{\partial t}\sin(2\pi\xi x)\,dx=(2\pi\xi)\left(-a_{+}+a_{-}\right)$$
and
$$2\int_{-\infty}^{\infty}\frac{\partial y(x,0)}{\partial t}\cos(2\pi\xi x)\,dx=(2\pi\xi)\left(b_{+}-b_{-}\right).$$
These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found.
In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g.
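For a concrete illustration of the procedure just summarized, here is a minimal sketch (assuming NumPy and a periodic spatial grid, conveniences not taken from the text) that evolves each Fourier mode of the boundary data separately; with $g = 0$ it reproduces d'Alembert's two travelling waves:

```python
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-np.pi * x**2)          # boundary condition y(x, 0) = f(x)
g = np.zeros_like(x)               # boundary condition y_t(x, 0) = g(x)

xi = np.fft.fftfreq(N, d=L / N)    # grid frequencies
fh, gh = np.fft.fft(f), np.fft.fft(g)

def y(t):
    # Each mode solves yh'' = -(2 pi xi)^2 yh, so
    # yh(xi, t) = fh cos(2 pi xi t) + gh sin(2 pi xi t) / (2 pi xi)
    w = 2 * np.pi * xi
    s = np.where(w != 0, np.sin(w * t) / np.where(w != 0, w, 1.0), t)
    return np.fft.ifft(fh * np.cos(w * t) + gh * s).real

# With g = 0 this is d'Alembert's solution: y(x, t) = (f(x + t) + f(x - t)) / 2
expected = 0.5 * (np.exp(-np.pi * (x + 3)**2) + np.exp(-np.pi * (x - 3)**2))
print(np.allclose(y(3.0), expected, atol=1e-8))   # True
```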
From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operate as Fourier did, who only transformed in the spatial variables. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ:
$$\xi^{2}{\hat{y}}(\xi,f)=f^{2}{\hat{y}}(\xi,f).$$
This is equivalent to requiring $\hat{y}(\xi,f)=0$ unless $\xi=\pm f$. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously $\hat{y}=\delta(\xi\pm f)$ will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic $\xi^{2}-f^{2}=0$.
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if Φ is any test function,
$$\iint{\hat{y}}\phi(\xi,f)\,d\xi\,df=\int s_{+}\phi(\xi,\xi)\,d\xi+\int s_{-}\phi(\xi,-\xi)\,d\xi,$$
where s+ and s− are distributions of one variable.
Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put $\Phi(\xi,f)=e^{i2\pi(x\xi+tf)}$, which is clearly of polynomial growth):
$$y(x,0)=\int{\bigl\{}s_{+}(\xi)+s_{-}(\xi){\bigr\}}e^{i2\pi\xi x+0}\,d\xi$$
and
$$\frac{\partial y(x,0)}{\partial t}=\int{\bigl\{}s_{+}(\xi)-s_{-}(\xi){\bigr\}}i2\pi\xi\,e^{i2\pi\xi x+0}\,d\xi.$$
Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2).
From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used.
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well.
=== Fourier-transform spectroscopy ===
The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.
=== Quantum mechanics ===
The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of q or by a function of p but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis called the phase space.
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that
$$\phi(p)=\int dq\,\psi(q)e^{-ipq/h},$$
or, equivalently,
$$\psi(q)=\int dp\,\phi(p)e^{ipq/h}.$$
Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.)
Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg uncertainty principle.
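The following sketch illustrates the change of representation numerically (assuming NumPy and setting $h = 1$; the grids and the Gaussian packet are arbitrary choices). The total probability is preserved up to the convention's normalizing constant, as Plancherel's theorem requires:

```python
import numpy as np

h = 1.0
dq = 0.01
q = np.arange(-20, 20, dq)
psi = np.pi**-0.25 * np.exp(-q**2 / 2)      # normalized Gaussian packet

dp = 0.025
p = np.arange(-10, 10, dp)
# phi(p) = integral of psi(q) exp(-i p q / h) dq, approximated by a Riemann sum
phi = (np.exp(-1j * np.outer(p, q) / h) @ psi) * dq

print(np.sum(np.abs(psi)**2) * dq)                    # 1.0
print(np.sum(np.abs(phi)**2) * dp / (2 * np.pi * h))  # ~1.0 (Plancherel, this convention)
```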
The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. In non-relativistic quantum mechanics, the Schrödinger equation for a time-varying wave function in one-dimension, not subject to external forces, is
$$-\frac{\partial^{2}}{\partial x^{2}}\psi(x,t)=i\frac{h}{2\pi}\frac{\partial}{\partial t}\psi(x,t).$$
This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation.
In the presence of a potential, given by the potential energy function V(x), the equation becomes
$$-\frac{\partial^{2}}{\partial x^{2}}\psi(x,t)+V(x)\psi(x,t)=i\frac{h}{2\pi}\frac{\partial}{\partial t}\psi(x,t).$$
The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function is not of much practical interest: it is the stationary states that are most important.
In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units,
$$\left(\frac{\partial^{2}}{\partial x^{2}}+1\right)\psi(x,t)=\frac{\partial^{2}}{\partial t^{2}}\psi(x,t).$$
This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions.
Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform
$\mathcal{F}$.
=== Signal processing ===
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function.
The autocorrelation function R of a function f is defined by
$$R_{f}(\tau)=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}f(t)f(t+\tau)\,dt.$$
This function is a function of the time-lag τ elapsing between the values of f to be correlated.
For most functions f that occur in practice, R is a bounded even function of the time-lag τ and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0.
The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours.
It possesses a Fourier transform,
$$P_{f}(\xi)=\int_{-\infty}^{\infty}R_{f}(\tau)e^{-i2\pi\xi\tau}\,d\tau.$$
This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.)
The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA).
Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data.
The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out.
Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool.
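A sketch of this procedure (assuming NumPy; the sampling rate, tone frequency, and noise level are invented for illustration): estimate the autocorrelation of a noisy tone, then take the Fourier transform of the estimate to locate the dominant frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 8192
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + rng.normal(scale=1.0, size=n)

# Biased autocorrelation estimate R[tau] = (1/n) * sum over t of x[t] x[t+tau]
R = np.correlate(x, x, mode="full")[n - 1:] / n

# Fourier transform of the (one-sided) autocorrelation estimate
P = np.abs(np.fft.rfft(R))
freqs = np.fft.rfftfreq(n, d=1 / fs)
print(freqs[np.argmax(P)])   # ~50.0: the dominant frequency of the signal
```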
== Other notations ==
Other common notations for
$\hat{f}(\xi)$ include:
$$\tilde{f}(\xi),\ F(\xi),\ \mathcal{F}\left(f\right)(\xi),\ \left(\mathcal{F}f\right)(\xi),\ \mathcal{F}(f),\ \mathcal{F}\{f\},\ \mathcal{F}{\bigl(}f(t){\bigr)},\ \mathcal{F}{\bigl\{}f(t){\bigr\}}.$$
In the sciences and engineering it is also common to make substitutions like these:
$$\xi\rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad \hat{f}\rightarrow X.$$
So the transform pair
$f(x)\ {\stackrel{\mathcal{F}}{\Longleftrightarrow}}\ \hat{f}(\xi)$
can become
$x(t)\ {\stackrel{\mathcal{F}}{\Longleftrightarrow}}\ X(f).$
A disadvantage of the capital letter notation appears when expressing a transform such as $\widehat{f\cdot g}$ or $\widehat{f'}$, which become the more awkward $\mathcal{F}\{f\cdot g\}$ and $\mathcal{F}\{f'\}$.
In some contexts, such as particle physics, the same symbol $f$ may be used both for a function and for its Fourier transform, with the two distinguished only by their argument: $f(k_{1}+k_{2})$ would refer to the Fourier transform because of the momentum argument, while $f(x_{0}+\pi{\vec{r}})$ would refer to the original function because of the positional argument. Although tildes may be used, as in $\tilde{f}$, to indicate Fourier transforms, tildes may also indicate a modification of a quantity with a more Lorentz invariant form, such as $\widetilde{dk}=\frac{dk}{(2\pi)^{3}2\omega}$, so care must be taken. Similarly, $\hat{f}$ often denotes the Hilbert transform of $f$.
The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form
$$\hat{f}(\xi)=A(\xi)e^{i\varphi(\xi)}$$
in terms of the two real functions A(ξ) and φ(ξ) where:
$A(\xi)=\left|\hat{f}(\xi)\right|$ is the amplitude and $\varphi(\xi)=\arg{\bigl(}\hat{f}(\xi){\bigr)}$ is the phase (see arg function).
Then the inverse transform can be written:
$$f(x)=\int_{-\infty}^{\infty}A(\xi)\,e^{i{\bigl(}2\pi\xi x+\varphi(\xi){\bigr)}}\,d\xi,$$
which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e2πixξ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ).
The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around.
In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example,
$$\mathcal{F}{\bigl(}\operatorname{rect}(x){\bigr)}=\operatorname{sinc}(\xi)$$
is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or
$$\mathcal{F}{\bigl(}f(x+x_{0}){\bigr)}=\mathcal{F}{\bigl(}f(x){\bigr)}\,e^{i2\pi x_{0}\xi}$$
is used to express the shift property of the Fourier transform.
Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0.
As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined
$$E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu_{X}(x).$$
As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
== Computation methods ==
The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, $f(x)$, and functions of a discrete variable (i.e. ordered pairs of $x$ and $f$ values). For discrete-valued $x$, the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ($\xi$ or $\omega$). When the sinusoids are harmonically related (i.e. when the $x$-values are spaced at integer multiples of an interval), the transform is called discrete-time Fourier transform (DTFT).
=== Discrete Fourier transforms and fast Fourier transforms ===
Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm.
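A minimal sketch of the relationship just described (assuming NumPy): the FFT returns exactly the samples of the DTFT of a finite sequence at the equally spaced frequencies k/N.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
X_fft = np.fft.fft(x)

# Direct evaluation of the DTFT at the frequencies k/N, for comparison
n = np.arange(N)
X_direct = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
print(np.allclose(X_fft, X_direct))   # True
```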
=== Analytic integration of closed-form functions ===
Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ($\xi$ or $\omega$). When mathematically possible, this provides a transform for a continuum of frequency values.
Many computer algebra systems such as Matlab and Mathematica that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt) e−πt2 one might enter the command integrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha.
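The same computation can be reproduced in a symbolic system. For instance, a sketch using SymPy, whose fourier_transform uses the ordinary-frequency kernel $e^{-2\pi ift}$ matching this article's convention:

```python
import sympy as sp

t, f = sp.symbols("t f", real=True)
expr = sp.cos(6 * sp.pi * t) * sp.exp(-sp.pi * t**2)

F = sp.fourier_transform(expr, t, f)
print(sp.simplify(F))
# Expected: exp(-pi*(f - 3)**2)/2 + exp(-pi*(f + 3)**2)/2,
# a pair of Gaussians centred at +3 and -3, as in the worked example earlier.
```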
=== Numerical integration of closed-form continuous functions ===
Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which transform is desired. The numerical integration approach works on a much broader class of functions than the analytic approach.
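For instance, a sketch using SciPy quadrature (the helper name and the Gaussian test function are illustrative choices, not from the text):

```python
import numpy as np
from scipy.integrate import quad

def fourier_at(xi, func):
    # Approximate F(xi) = integral of func(x) exp(-i 2 pi xi x) dx by splitting
    # into real and imaginary parts, each handled by adaptive quadrature.
    re, _ = quad(lambda x: func(x) * np.cos(2 * np.pi * xi * x), -np.inf, np.inf)
    im, _ = quad(lambda x: -func(x) * np.sin(2 * np.pi * xi * x), -np.inf, np.inf)
    return re + 1j * im

gauss = lambda x: np.exp(-np.pi * x**2)
print(fourier_at(1.0, gauss))   # ~ exp(-pi) = 0.0432..., since this Gaussian is its own transform
```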
=== Numerical integration of a series of ordered pairs ===
If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation.
== Tables of important Fourier transforms ==
The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.
=== Functional relationships, one-dimensional ===
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
=== Square-integrable functions, one-dimensional ===
The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix).
=== Distributions, one-dimensional ===
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
=== Two-dimensional functions ===
=== Formulas for general n-dimensional functions ===
== See also ==
== Notes ==
== Citations ==
== References ==
== External links ==
Media related to Fourier transformation at Wikimedia Commons
Encyclopedia of Mathematics
Weisstein, Eric W. "Fourier Transform". MathWorld.
Fourier Transform in Crystallography
In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras. (A simple Lie algebra is a non-abelian Lie algebra without any non-zero proper ideals.)
Throughout the article, unless otherwise stated, a Lie algebra is a finite-dimensional Lie algebra over a field of characteristic 0. For such a Lie algebra $\mathfrak{g}$, if nonzero, the following conditions are equivalent:
$\mathfrak{g}$ is semisimple;
the Killing form $\kappa(x,y)=\operatorname{tr}(\operatorname{ad}(x)\operatorname{ad}(y))$ is non-degenerate;
$\mathfrak{g}$ has no non-zero abelian ideals;
$\mathfrak{g}$ has no non-zero solvable ideals;
the radical (maximal solvable ideal) of $\mathfrak{g}$ is zero.
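The Killing-form criterion lends itself to a direct numerical check in a small case. The following sketch (assuming NumPy; the coordinate bookkeeping is specific to the basis chosen here, not part of the article) computes the Killing form of $\mathfrak{sl}_2$ in the basis $(e,f,h)$ and confirms that it is non-degenerate:

```python
import numpy as np

e = np.array([[0, 1], [0, 0]], dtype=float)
f = np.array([[0, 0], [1, 0]], dtype=float)
h = np.array([[1, 0], [0, -1]], dtype=float)
basis = [e, f, h]

def ad(x):
    # Matrix of ad(x) = [x, -] in the basis (e, f, h); a general element
    # a*e + b*f + c*h has matrix [[c, a], [b, -c]], so coordinates are read
    # off the (0,1), (1,0), and (0,0) entries.
    cols = []
    for b in basis:
        img = x @ b - b @ x
        cols.append([img[0, 1], img[1, 0], img[0, 0]])
    return np.array(cols).T

K = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(K)                       # [[0, 4, 0], [4, 0, 0], [0, 0, 8]]
print(np.linalg.det(K) != 0)   # True: non-degenerate, so sl_2 is semisimple
```

For an abelian Lie algebra the same computation would return the zero matrix, consistent with the criterion.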
== Significance ==
The significance of semisimplicity comes firstly from the Levi decomposition, which states that every finite-dimensional Lie algebra is the semidirect product of a solvable ideal (its radical) and a semisimple algebra. In particular, there is no nonzero Lie algebra that is both solvable and semisimple.
Semisimple Lie algebras have a very elegant classification, in stark contrast to solvable Lie algebras. Semisimple Lie algebras over an algebraically closed field of characteristic zero are completely classified by their root systems, which are in turn classified by Dynkin diagrams. Semisimple algebras over non-algebraically closed fields can be understood in terms of those over the algebraic closure, though the classification is somewhat more intricate; see real form for the case of real semisimple Lie algebras, which were classified by Élie Cartan.
Further, the representation theory of semisimple Lie algebras is much cleaner than that for general Lie algebras. For example, the Jordan decomposition in a semisimple Lie algebra coincides with the Jordan decomposition in its representation; this is not the case for Lie algebras in general.
If $\mathfrak{g}$ is semisimple, then $\mathfrak{g}=[\mathfrak{g},\mathfrak{g}]$. In particular, every linear semisimple Lie algebra is a subalgebra of $\mathfrak{sl}$, the special linear Lie algebra. The study of the structure of $\mathfrak{sl}$ constitutes an important part of the representation theory for semisimple Lie algebras.
== History ==
The semisimple Lie algebras over the complex numbers were first classified by Wilhelm Killing (1888–90), though his proof lacked rigor. His proof was made rigorous by Élie Cartan (1894) in his Ph.D. thesis, who also classified semisimple real Lie algebras. This was subsequently refined, and the present classification by Dynkin diagrams was given by then 22-year-old Eugene Dynkin in 1947. Some minor modifications have been made (notably by J. P. Serre), but the proof is unchanged in its essentials and can be found in any standard reference, such as (Humphreys 1972).
== Basic properties ==
Every ideal, quotient and product of semisimple Lie algebras is again semisimple.
The center of a semisimple Lie algebra $\mathfrak{g}$ is trivial (since the center is an abelian ideal). In other words, the adjoint representation $\operatorname{ad}$ is injective. Moreover, the image turns out to be $\operatorname{Der}(\mathfrak{g})$, the algebra of derivations on $\mathfrak{g}$. Hence, $\operatorname{ad}:\mathfrak{g}\,{\overset{\sim}{\to}}\,\operatorname{Der}(\mathfrak{g})$ is an isomorphism. (This is a special case of Whitehead's lemma.)
As the adjoint representation is injective, a semisimple Lie algebra is a linear Lie algebra under the adjoint representation. This may lead to some ambiguity, as every Lie algebra is already linear with respect to some other vector space (Ado's theorem), although not necessarily via the adjoint representation. But in practice, such ambiguity rarely occurs.
If $\mathfrak{g}$ is a semisimple Lie algebra, then $\mathfrak{g}=[\mathfrak{g},\mathfrak{g}]$ (because $\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]$ is semisimple and abelian, hence zero).
A finite-dimensional Lie algebra $\mathfrak{g}$ over a field k of characteristic zero is semisimple if and only if the base extension $\mathfrak{g}\otimes_{k}F$ is semisimple for each field extension $F\supset k$. Thus, for example, a finite-dimensional real Lie algebra is semisimple if and only if its complexification is semisimple.
== Jordan decomposition ==
Each endomorphism x of a finite-dimensional vector space over a field of characteristic zero can be decomposed uniquely into a semisimple (i.e., diagonalizable over the algebraic closure) part and a nilpotent part
$$x=s+n$$
such that s and n commute with each other. Moreover, each of s and n is a polynomial in x. This is the Jordan decomposition of x.
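A small SymPy sketch of this decomposition (the example matrix is invented for illustration; jordan_form returns $P$ and $J$ with $x = PJP^{-1}$):

```python
import sympy as sp

x = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

P, J = x.jordan_form()                          # x = P * J * P**-1
D = sp.diag(*[J[i, i] for i in range(J.rows)])  # diagonal part of the Jordan form
s = P * D * P.inv()                             # semisimple part of x
n = x - s                                       # nilpotent part

assert s * n == n * s                           # the two parts commute
assert n**3 == sp.zeros(3, 3)                   # n is nilpotent
```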
The above applies to the adjoint representation $\operatorname{ad}$ of a semisimple Lie algebra $\mathfrak{g}$. An element x of $\mathfrak{g}$ is said to be semisimple (resp. nilpotent) if $\operatorname{ad}(x)$ is a semisimple (resp. nilpotent) operator. If $x\in\mathfrak{g}$, then the abstract Jordan decomposition states that x can be written uniquely as
$$x=s+n$$
where $s$ is semisimple, $n$ is nilpotent and $[s,n]=0$. Moreover, if $y\in\mathfrak{g}$ commutes with x, then it commutes with both $s$ and $n$ as well.
The abstract Jordan decomposition factors through any representation of $\mathfrak{g}$ in the sense that given any representation ρ,
$$\rho(x)=\rho(s)+\rho(n)$$
is the Jordan decomposition of ρ(x) in the endomorphism algebra of the representation space. (This is proved as a consequence of Weyl's complete reducibility theorem; see Weyl's theorem on complete reducibility#Application: preservation of Jordan decomposition.)
== Structure ==
Let $\mathfrak{g}$ be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. The structure of $\mathfrak{g}$ can be described by an adjoint action of a certain distinguished subalgebra on it, a Cartan subalgebra. By definition, a Cartan subalgebra (also called a maximal toral subalgebra) $\mathfrak{h}$ of $\mathfrak{g}$ is a maximal subalgebra such that, for each $h\in\mathfrak{h}$, $\operatorname{ad}(h)$ is diagonalizable. As it turns out, $\mathfrak{h}$ is abelian and so all the operators in $\operatorname{ad}(\mathfrak{h})$ are simultaneously diagonalizable. For each linear functional $\alpha$ of $\mathfrak{h}$, let
$$\mathfrak{g}_{\alpha}=\{x\in\mathfrak{g}\mid\operatorname{ad}(h)x:=[h,x]=\alpha(h)x\ \text{for all }h\in\mathfrak{h}\}.$$
(Note that $\mathfrak{g}_{0}$ is the centralizer of $\mathfrak{h}$.) Then
(The most difficult item to show is $\dim\mathfrak{g}_{\alpha}=1$. The standard proofs all use some facts in the representation theory of $\mathfrak{sl}_{2}$; e.g., Serre uses the fact that an $\mathfrak{sl}_{2}$-module with a primitive element of negative weight is infinite-dimensional, contradicting $\dim\mathfrak{g}<\infty$.)
Let $h_{\alpha}\in\mathfrak{h}$, $e_{\alpha}\in\mathfrak{g}_{\alpha}$, $f_{\alpha}\in\mathfrak{g}_{-\alpha}$ with the commutation relations
$$[e_{\alpha},f_{\alpha}]=h_{\alpha},\quad[h_{\alpha},e_{\alpha}]=2e_{\alpha},\quad[h_{\alpha},f_{\alpha}]=-2f_{\alpha};$$
i.e., $h_{\alpha},e_{\alpha},f_{\alpha}$ correspond to the standard basis of $\mathfrak{sl}_{2}$.
The linear functionals in $\Phi$ are called the roots of $\mathfrak{g}$ relative to $\mathfrak{h}$. The roots span $\mathfrak{h}^{*}$ (since if $\alpha(h)=0$ for all $\alpha\in\Phi$, then $\operatorname{ad}(h)$ is the zero operator, i.e., $h$ is in the center, which is zero). Moreover, from the representation theory of $\mathfrak{sl}_{2}$, one deduces the following symmetry and integral properties of $\Phi$: for each $\alpha,\beta\in\Phi$,
Note that $s_{\alpha}$ has the properties (1) $s_{\alpha}(\alpha)=-\alpha$ and (2) the fixed-point set is $\{\gamma\in\mathfrak{h}^{*}\mid\gamma(h_{\alpha})=0\}$, which means that $s_{\alpha}$ is the reflection with respect to the hyperplane corresponding to $\alpha$. The above then says that $\Phi$ is a root system.
It follows from the general theory of a root system that $\Phi$ contains a basis $\alpha_{1},\dots,\alpha_{l}$ of $\mathfrak{h}^{*}$ such that each root is a linear combination of $\alpha_{1},\dots,\alpha_{l}$ with integer coefficients of the same sign; the roots $\alpha_{i}$ are called simple roots. Let $e_{i}=e_{\alpha_{i}}$, etc. Then the $3l$ elements $e_{i},f_{i},h_{i}$ (called Chevalley generators) generate $\mathfrak{g}$ as a Lie algebra. Moreover, they satisfy the relations (called Serre relations): with $a_{ij}=\alpha_{j}(h_{i})$,
$$[h_{i},h_{j}]=0,$$
$$[e_{i},f_{i}]=h_{i},\quad[e_{i},f_{j}]=0,\ i\neq j,$$
$$[h_{i},e_{j}]=a_{ij}e_{j},\quad[h_{i},f_{j}]=-a_{ij}f_{j},$$
$$\operatorname{ad}(e_{i})^{-a_{ij}+1}(e_{j})=\operatorname{ad}(f_{i})^{-a_{ij}+1}(f_{j})=0,\ i\neq j.$$
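These relations can be verified concretely in a small matrix example. Below is a sketch for $\mathfrak{sl}_{3}(\mathbb{C})$ with the standard Chevalley generators (assuming NumPy; here the Cartan matrix entries are $a_{11}=a_{22}=2$ and $a_{12}=a_{21}=-1$):

```python
import numpy as np

def E(i, j, n=3):
    # Matrix unit with a 1 in row i, column j
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

br = lambda x, y: x @ y - y @ x   # Lie bracket of matrices

e1, e2 = E(0, 1), E(1, 2)
f1, f2 = e1.T, e2.T
h1, h2 = br(e1, f1), br(e2, f2)

print(np.allclose(br(h1, h2), 0))           # [h1, h2] = 0
print(np.allclose(br(h1, e2), -e2))         # [h1, e2] = a_12 e2 with a_12 = -1
print(np.allclose(br(e1, f2), 0))           # [e_i, f_j] = 0 for i != j
print(np.allclose(br(e1, br(e1, e2)), 0))   # ad(e1)^2 (e2) = 0  (Serre relation)
```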
The converse of this is also true: i.e., the Lie algebra generated by the generators and the relations like the above is a (finite-dimensional) semisimple Lie algebra that has the root space decomposition as above (provided $[a_{ij}]_{1\leq i,j\leq l}$ is a Cartan matrix). This is a theorem of Serre. In particular, two semisimple Lie algebras are isomorphic if they have the same root system.
The implication of the axiomatic nature of a root system and Serre's theorem is that one can enumerate all possible root systems; hence, "all possible" semisimple Lie algebras (finite-dimensional over an algebraically closed field of characteristic zero).
The Weyl group is the group of linear transformations of $\mathfrak{h}^{*}\simeq\mathfrak{h}$ generated by the $s_{\alpha}$'s. The Weyl group is an important symmetry of the problem; for example, the weights of any finite-dimensional representation of $\mathfrak{g}$ are invariant under the Weyl group.
== Example root space decomposition in sln(C) ==
For $\mathfrak{g}=\mathfrak{sl}_{n}(\mathbb{C})$ and the Cartan subalgebra $\mathfrak{h}$ of diagonal matrices, define $\lambda_{i}\in\mathfrak{h}^{*}$ by
$$\lambda_{i}(d(a_{1},\ldots,a_{n}))=a_{i},$$
where $d(a_{1},\ldots,a_{n})$ denotes the diagonal matrix with $a_{1},\ldots,a_{n}$ on the diagonal. Then the decomposition is given by
$$\mathfrak{g}=\mathfrak{h}\oplus\left(\bigoplus_{i\neq j}\mathfrak{g}_{\lambda_{i}-\lambda_{j}}\right)$$
where $\mathfrak{g}_{\lambda_{i}-\lambda_{j}}=\operatorname{Span}_{\mathbb{C}}(e_{ij})$ for the vector $e_{ij}$ in $\mathfrak{sl}_{n}(\mathbb{C})$ with the standard (matrix) basis, meaning $e_{ij}$ represents the basis vector in the $i$-th row and $j$-th column. This decomposition of $\mathfrak{g}$ has an associated root system:
$$\Phi=\{\lambda_{i}-\lambda_{j}:i\neq j\}$$
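A quick numerical confirmation of this decomposition (a sketch assuming NumPy; the diagonal entries of the test element are arbitrary apart from the trace-zero constraint):

```python
import numpy as np

h = np.diag([1.0, 2.0, -3.0])   # a traceless diagonal matrix, an element of the Cartan subalgebra
lam = np.diag(h)                # lambda_i(h) is the i-th diagonal entry

for i in range(3):
    for j in range(3):
        if i == j:
            continue
        Eij = np.zeros((3, 3)); Eij[i, j] = 1.0
        # [h, E_ij] = (lambda_i - lambda_j)(h) * E_ij
        assert np.allclose(h @ Eij - Eij @ h, (lam[i] - lam[j]) * Eij)
print("each E_ij spans the root space for the root lambda_i - lambda_j")
```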
=== sl2(C) ===
For example, in $\mathfrak{sl}_{2}(\mathbb{C})$ the decomposition is
$$\mathfrak{sl}_{2}=\mathfrak{h}\oplus\mathfrak{g}_{\lambda_{1}-\lambda_{2}}\oplus\mathfrak{g}_{\lambda_{2}-\lambda_{1}}$$
and the associated root system is
$$\Phi=\{\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{1}\}$$
=== sl3(C) ===
In $\mathfrak{sl}_{3}(\mathbb{C})$ the decomposition is
$$\mathfrak{sl}_{3}=\mathfrak{h}\oplus\mathfrak{g}_{\lambda_{1}-\lambda_{2}}\oplus\mathfrak{g}_{\lambda_{1}-\lambda_{3}}\oplus\mathfrak{g}_{\lambda_{2}-\lambda_{3}}\oplus\mathfrak{g}_{\lambda_{2}-\lambda_{1}}\oplus\mathfrak{g}_{\lambda_{3}-\lambda_{1}}\oplus\mathfrak{g}_{\lambda_{3}-\lambda_{2}}$$
and the associated root system is given by
$$\Phi=\{\pm(\lambda_{1}-\lambda_{2}),\pm(\lambda_{1}-\lambda_{3}),\pm(\lambda_{2}-\lambda_{3})\}$$
== Examples ==
As noted in #Structure, semisimple Lie algebras over $\mathbb{C}$ (or more generally an algebraically closed field of characteristic zero) are classified by the root systems associated to their Cartan subalgebras, and the root systems, in turn, are classified by their Dynkin diagrams.
Examples of semisimple Lie algebras, the classical Lie algebras, with notation coming from their Dynkin diagrams, are:
$A_{n}$: $\mathfrak{sl}_{n+1}$, the special linear Lie algebra.
$B_{n}$: $\mathfrak{so}_{2n+1}$, the odd-dimensional special orthogonal Lie algebra.
$C_{n}$: $\mathfrak{sp}_{2n}$, the symplectic Lie algebra.
$D_{n}$: $\mathfrak{so}_{2n}$, the even-dimensional special orthogonal Lie algebra ($n>1$).
The restriction $n>1$ in the $D_{n}$ family is needed because $\mathfrak{so}_{2}$ is one-dimensional and commutative and therefore not semisimple.
These Lie algebras are numbered so that n is the rank. Almost all of these semisimple Lie algebras are actually simple and the members of these families are almost all distinct, except for some collisions in small rank. For example
$\mathfrak{so}_{4}\cong\mathfrak{so}_{3}\oplus\mathfrak{so}_{3}$ and $\mathfrak{sp}_{2}\cong\mathfrak{so}_{5}$. These four families, together with five exceptions (E6, E7, E8, F4, and G2), are in fact the only simple Lie algebras over the complex numbers.
== Classification ==
Every semisimple Lie algebra over an algebraically closed field of characteristic 0 is a direct sum of simple Lie algebras (by definition), and the finite-dimensional simple Lie algebras fall in four families – An, Bn, Cn, and Dn – with five exceptions
E6, E7, E8, F4, and G2. Simple Lie algebras are classified by the connected Dynkin diagrams, shown on the right, while semisimple Lie algebras correspond to not necessarily connected Dynkin diagrams, where each component of the diagram corresponds to a summand of the decomposition of the semisimple Lie algebra into simple Lie algebras.
The classification proceeds by considering a Cartan subalgebra (see below) and its adjoint action on the Lie algebra. The root system of the action then both determines the original Lie algebra and must have a very constrained form, which can be classified by the Dynkin diagrams. See the section below describing Cartan subalgebras and root systems for more details.
The classification is widely considered one of the most elegant results in mathematics – a brief list of axioms yields, via a relatively short proof, a complete but non-trivial classification with surprising structure. This should be compared to the classification of finite simple groups, which is significantly more complicated.
The enumeration of the four families is non-redundant and consists only of simple algebras if $n\geq 1$ for $A_{n}$, $n\geq 2$ for $B_{n}$, $n\geq 3$ for $C_{n}$, and $n\geq 4$ for $D_{n}$. If one starts numbering lower, the enumeration is redundant, and one has exceptional isomorphisms between simple Lie algebras, which are reflected in isomorphisms of Dynkin diagrams; the $E_{n}$ can also be extended down, but below $E_{6}$ they are isomorphic to other, non-exceptional algebras.
Over a non-algebraically closed field, the classification is more complicated – one classifies simple Lie algebras over the algebraic closure, then for each of these, one classifies simple Lie algebras over the original field which have this form (over the closure). For example, to classify simple real Lie algebras, one classifies real Lie algebras with a given complexification, which are known as real forms of the complex Lie algebra; this can be done by Satake diagrams, which are Dynkin diagrams with additional data ("decorations").
== Representation theory of semisimple Lie algebras ==
Let $\mathfrak{g}$ be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. Then, as in #Structure, $\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi}\mathfrak{g}_{\alpha}$ where $\Phi$ is the root system. Choose the simple roots in $\Phi$; a root $\alpha$ of $\Phi$ is then called positive and is denoted by $\alpha>0$ if it is a linear combination of the simple roots with non-negative integer coefficients. Let $\mathfrak{b}=\mathfrak{h}\oplus\bigoplus_{\alpha>0}\mathfrak{g}_{\alpha}$, which is a maximal solvable subalgebra of $\mathfrak{g}$, the Borel subalgebra.
Let V be a (possibly infinite-dimensional) simple $\mathfrak{g}$-module. If V happens to admit a $\mathfrak{b}$-weight vector $v_{0}$, then it is unique up to scaling and is called the highest weight vector of V. It is also an $\mathfrak{h}$-weight vector, and the $\mathfrak{h}$-weight of $v_{0}$, a linear functional of $\mathfrak{h}$, is called the highest weight of V. The basic yet nontrivial facts then are (1) to each linear functional $\mu\in\mathfrak{h}^{*}$, there exists a simple $\mathfrak{g}$-module $V^{\mu}$ having $\mu$ as its highest weight, and (2) two simple modules having the same highest weight are equivalent. In short, there exists a bijection between $\mathfrak{h}^{*}$ and the set of the equivalence classes of simple $\mathfrak{g}$-modules admitting a Borel-weight vector.
For applications, one is often interested in a finite-dimensional simple $\mathfrak{g}$-module (a finite-dimensional irreducible representation). This is especially the case when $\mathfrak{g}$ is the Lie algebra of a Lie group (or complexification of such), since, via the Lie correspondence, a Lie algebra representation can be integrated to a Lie group representation when the obstructions are overcome. The next criterion then addresses this need: by the positive Weyl chamber $C\subset\mathfrak{h}^{*}$, we mean the convex cone $C=\{\mu\in\mathfrak{h}^{*}\mid\mu(h_{\alpha})\geq 0,\ \alpha\in\Phi_{>0}\}$, where $h_{\alpha}\in[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]$ is the unique vector such that $\alpha(h_{\alpha})=2$. The criterion then reads: $\dim V^{\mu}<\infty$ if and only if, for each positive root $\alpha>0$, (1) $\mu(h_{\alpha})$ is an integer and (2) $\mu$ lies in $C$.
A linear functional $\mu$ satisfying the above equivalent condition is called a dominant integral weight. Hence, in summary, there exists a bijection between the dominant integral weights and the equivalence classes of finite-dimensional simple $\mathfrak{g}$-modules, a result known as the theorem of the highest weight. The character of a finite-dimensional simple module is in turn computed by the Weyl character formula.
The theorem due to Weyl says that, over a field of characteristic zero, every finite-dimensional module of a semisimple Lie algebra $\mathfrak{g}$ is completely reducible; i.e., it is a direct sum of simple $\mathfrak{g}$-modules. Hence, the above results apply to finite-dimensional representations of a semisimple Lie algebra.
== Real semisimple Lie algebra ==
For a semisimple Lie algebra over a field that has characteristic zero but is not algebraically closed, there is no general structure theory like the one for those over an algebraically closed field of characteristic zero. But over the field of real numbers, there are still structure results.
Let $\mathfrak{g}$ be a finite-dimensional real semisimple Lie algebra and $\mathfrak{g}^{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}$ its complexification (which is again semisimple). The real Lie algebra $\mathfrak{g}$ is called a real form of $\mathfrak{g}^{\mathbb{C}}$. A real form is called a compact form if the Killing form on it is negative-definite; it is necessarily the Lie algebra of a compact Lie group (hence the name).
=== Compact case ===
Suppose $\mathfrak{g}$ is a compact form and $\mathfrak{h}\subset\mathfrak{g}$ a maximal abelian subspace. One can show (for example, from the fact that $\mathfrak{g}$ is the Lie algebra of a compact Lie group) that $\operatorname{ad}(\mathfrak{h})$ consists of skew-Hermitian matrices, diagonalizable over $\mathbb{C}$ with imaginary eigenvalues. Hence, $\mathfrak{h}^{\mathbb{C}}$ is a Cartan subalgebra of $\mathfrak{g}^{\mathbb{C}}$ and there results the root space decomposition (cf. #Structure)
$$\mathfrak{g}^{\mathbb{C}}=\mathfrak{h}^{\mathbb{C}}\oplus\bigoplus_{\alpha\in\Phi}\mathfrak{g}_{\alpha}$$
where each $\alpha\in\Phi$ is real-valued on $i\mathfrak{h}$; thus, it can be identified with a real-linear functional on the real vector space $i\mathfrak{h}$.
For example, let ${\mathfrak {g}}={\mathfrak {su}}(n)$ and take ${\mathfrak {h}}\subset {\mathfrak {g}}$ to be the subspace of all diagonal matrices. Note ${\mathfrak {g}}^{\mathbb {C} }={\mathfrak {sl}}_{n}\mathbb {C} $. Let $e_{i}$ be the linear functional on ${\mathfrak {h}}^{\mathbb {C} }$ given by $e_{i}(H)=h_{i}$ for $H=\operatorname {diag} (h_{1},\dots ,h_{n})$. Then for each $H\in {\mathfrak {h}}^{\mathbb {C} }$,

$$[H,E_{ij}]=(e_{i}(H)-e_{j}(H))E_{ij}$$

where $E_{ij}$ is the matrix that has 1 in the $(i,j)$-th spot and zero elsewhere. Hence, each root $\alpha $ is of the form $\alpha =e_{i}-e_{j},\ i\neq j$, and the root space decomposition is the decomposition of matrices:

$${\mathfrak {g}}^{\mathbb {C} }={\mathfrak {h}}^{\mathbb {C} }\oplus \bigoplus _{i\neq j}\mathbb {C} E_{ij}.$$
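The bracket relation above is easy to verify numerically; the following sketch (not from the source; the helper `E` and the sample diagonal matrix are illustrative choices for $n=3$) checks $[H,E_{ij}]=(e_{i}(H)-e_{j}(H))E_{ij}$:

```python
import numpy as np

n = 3
H = np.diag([2.0, -1.0, -1.0])  # a diagonal element with trace zero

def E(i, j, n):
    """Matrix unit with a 1 in the (i, j) spot and zeros elsewhere."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# Check [H, E_ij] = (e_i(H) - e_j(H)) E_ij for all i != j.
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        bracket = H @ E(i, j, n) - E(i, j, n) @ H
        expected = (H[i, i] - H[j, j]) * E(i, j, n)
        assert np.allclose(bracket, expected)
print("root space relation verified")
```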
=== Noncompact case ===
Suppose ${\mathfrak {g}}$ is not necessarily a compact form (i.e., the signature of the Killing form is not all negative). Suppose, moreover, it has a Cartan involution $\theta $ and let ${\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}$ be the eigenspace decomposition of $\theta $, where ${\mathfrak {k}},{\mathfrak {p}}$ are the eigenspaces for 1 and −1, respectively. For example, if ${\mathfrak {g}}={\mathfrak {sl}}_{n}\mathbb {R} $ and $\theta $ the negative transpose, then ${\mathfrak {k}}={\mathfrak {so}}(n)$.
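To make the eigenspace decomposition concrete, here is a small sketch (an illustration, not from the source) that splits an element of ${\mathfrak {sl}}(4,\mathbb {R} )$ into its ${\mathfrak {k}}$ (skew-symmetric) and ${\mathfrak {p}}$ (symmetric) parts under the negative-transpose involution:

```python
import numpy as np

rng = np.random.default_rng(0)

def theta(X):
    return -X.T  # Cartan involution on sl(n, R): negative transpose

def split(X):
    """Eigenspace decomposition X = k + p for theta (eigenvalues +1 and -1)."""
    k = (X + theta(X)) / 2   # skew-symmetric part: theta(k) = k
    p = (X - theta(X)) / 2   # symmetric part:      theta(p) = -p
    return k, p

X = rng.standard_normal((4, 4))
X -= np.trace(X) / 4 * np.eye(4)   # project onto sl(4, R)
k, p = split(X)
assert np.allclose(theta(k), k) and np.allclose(theta(p), -p)
assert np.allclose(k + p, X)
```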
Let ${\mathfrak {a}}\subset {\mathfrak {p}}$ be a maximal abelian subspace. Now, $\operatorname {ad} ({\mathfrak {p}})$ consists of symmetric matrices (with respect to a suitable inner product) and thus the operators in $\operatorname {ad} ({\mathfrak {a}})$ are simultaneously diagonalizable, with real eigenvalues. By repeating the arguments for the algebraically closed base field, one obtains the decomposition (called the restricted root space decomposition):

$${\mathfrak {g}}={\mathfrak {g}}_{0}\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }$$

where

the elements in $\Phi $ are called the restricted roots,
$\theta ({\mathfrak {g}}_{\alpha })={\mathfrak {g}}_{-\alpha }$ for any linear functional $\alpha $; in particular, $-\Phi \subset \Phi $,
${\mathfrak {g}}_{0}={\mathfrak {a}}\oplus Z_{\mathfrak {k}}({\mathfrak {a}})$.
Moreover, $\Phi $ is a root system, but not necessarily a reduced one (i.e., it can happen that $\alpha ,2\alpha $ are both roots).
== The case of sl(n,C) ==
If ${\mathfrak {g}}=\mathrm {sl} (n,\mathbb {C} )$, then ${\mathfrak {h}}$ may be taken to be the diagonal subalgebra of ${\mathfrak {g}}$, consisting of diagonal matrices whose diagonal entries sum to zero. Since ${\mathfrak {h}}$ has dimension $n-1$, we see that $\mathrm {sl} (n;\mathbb {C} )$ has rank $n-1$.
The root vectors $X$ in this case may be taken to be the matrices $E_{i,j}$ with $i\neq j$, where $E_{i,j}$ is the matrix with a 1 in the $(i,j)$ spot and zeros elsewhere. If $H$ is a diagonal matrix with diagonal entries $\lambda _{1},\ldots ,\lambda _{n}$, then we have

$$[H,E_{i,j}]=(\lambda _{i}-\lambda _{j})E_{i,j}.$$
Thus, the roots for $\mathrm {sl} (n,\mathbb {C} )$ are the linear functionals $\alpha _{i,j}$ given by $\alpha _{i,j}(H)=\lambda _{i}-\lambda _{j}$.
After identifying ${\mathfrak {h}}$ with its dual, the roots become the vectors $\alpha _{i,j}:=e_{i}-e_{j}$ in the space of $n$-tuples that sum to zero. This is the root system known as $A_{n-1}$ in the conventional labeling.
The reflection associated to the root $\alpha _{i,j}$ acts on ${\mathfrak {h}}$ by transposing the $i$ and $j$ diagonal entries. The Weyl group is then just the permutation group on $n$ elements, acting by permuting the diagonal entries of matrices in ${\mathfrak {h}}$.
== Generalizations ==
Semisimple Lie algebras admit certain generalizations. Firstly, many statements that are true for semisimple Lie algebras are true more generally for reductive Lie algebras. Abstractly, a reductive Lie algebra is one whose adjoint representation is completely reducible, while concretely, a reductive Lie algebra is a direct sum of a semisimple Lie algebra and an abelian Lie algebra; for example, ${\mathfrak {sl}}_{n}$ is semisimple, and ${\mathfrak {gl}}_{n}$ is reductive. Many properties of semisimple Lie algebras depend only on reducibility.
Many properties of complex semisimple/reductive Lie algebras are true not only for semisimple/reductive Lie algebras over algebraically closed fields, but more generally for split semisimple/reductive Lie algebras over other fields: semisimple/reductive Lie algebras over algebraically closed fields are always split, but over other fields this is not always the case. Split Lie algebras have essentially the same representation theory as semisimple Lie algebras over algebraically closed fields, with the splitting Cartan subalgebra playing the same role as the Cartan subalgebra plays over algebraically closed fields. This is the approach followed in (Bourbaki 2005), for instance, which classifies representations of split semisimple/reductive Lie algebras.
== Semisimple and reductive groups ==
A connected Lie group is called semisimple if its Lie algebra is a semisimple Lie algebra, i.e. a direct sum of simple Lie algebras. It is called reductive if its Lie algebra is a direct sum of simple and trivial (one-dimensional) Lie algebras. Reductive groups occur naturally as symmetries of a number of mathematical objects in algebra, geometry, and physics. For example, the group $GL_{n}(\mathbb {R} )$ of symmetries of an n-dimensional real vector space (equivalently, the group of invertible matrices) is reductive.
== See also ==
Lie algebra
Root system
Lie algebra representation
Compact group
Simple Lie group
Borel subalgebra
Jacobson–Morozov theorem
== References ==
In mathematics, the Lie group–Lie algebra correspondence allows one to correspond a Lie group to a Lie algebra or vice versa, and study the conditions for such a relationship. Lie groups that are isomorphic to each other have Lie algebras that are isomorphic to each other, but the converse is not necessarily true. One obvious counterexample is $\mathbb {R} ^{n}$ and $\mathbb {T} ^{n}$ (see real coordinate space and the circle group respectively), which are non-isomorphic to each other as Lie groups but have isomorphic Lie algebras. However, for simply connected Lie groups, the Lie group–Lie algebra correspondence is one-to-one.
In this article, a Lie group refers to a real Lie group. For the complex and p-adic cases, see complex Lie group and p-adic Lie group. In this article, manifolds (in particular Lie groups) are assumed to be second countable; in particular, they have at most countably many connected components.
== Basics ==
=== The Lie algebra of a Lie group ===
There are various ways one can understand the construction of the Lie algebra of a Lie group G. One approach uses left-invariant vector fields. A vector field X on G is said to be invariant under left translations if, for any g, h in G,

$$(dL_{g})_{h}(X_{h})=X_{gh}$$

where $L_{g}:G\to G$ is defined by $L_{g}(x)=gx$ and $(dL_{g})_{h}:T_{h}G\to T_{gh}G$ is the differential of $L_{g}$ between tangent spaces.
Let $\operatorname {Lie} (G)$ be the set of all left-translation-invariant vector fields on G. It is a real vector space. Moreover, it is closed under the Lie bracket of vector fields; i.e., $[X,Y]$ is a left-translation-invariant vector field if X and Y are. Thus, $\operatorname {Lie} (G)$ is a Lie subalgebra of the Lie algebra of all vector fields on G and is called the Lie algebra of G. One can understand this more concretely by identifying the space of left-invariant vector fields with the tangent space at the identity, as follows: given a left-invariant vector field, one can take its value at the identity, and given a tangent vector at the identity, one can extend it to a left-invariant vector field. This correspondence is one-to-one in both directions, so is bijective. Thus, the Lie algebra can be thought of as the tangent space at the identity, and the bracket of X and Y in $T_{e}G$ can be computed by extending them to left-invariant vector fields, taking the bracket of the vector fields, and then evaluating the result at the identity.
There is also another incarnation of $\operatorname {Lie} (G)$ as the Lie algebra of primitive elements of the Hopf algebra of distributions on G with support at the identity element; for this, see Related constructions below.
=== Matrix Lie groups ===
Suppose G is a closed subgroup of GL(n;C), and thus a Lie group, by the closed subgroups theorem. Then the Lie algebra of G may be computed as

$$\operatorname {Lie} (G)=\left\{X\in M(n;\mathbb {C} )\mid e^{tX}\in G{\text{ for all }}t\in \mathbb {R} \right\}.$$
For example, one can use the criterion to establish the correspondence for classical compact groups (cf. the table in "compact Lie groups" below.)
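For instance, one can check numerically that skew-Hermitian matrices satisfy the defining condition for the Lie algebra of the unitary group; the following sketch (illustrative, assuming NumPy and SciPy are available) does this for U(3):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X = A - A.conj().T          # skew-Hermitian, so X lies in Lie(U(3))

for t in np.linspace(-2, 2, 9):
    U = expm(t * X)
    assert np.allclose(U @ U.conj().T, np.eye(3))   # e^{tX} is unitary for all t
```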
=== Homomorphisms ===
If $f:G\to H$ is a Lie group homomorphism, then its differential at the identity element $df=df_{e}:\operatorname {Lie} (G)\to \operatorname {Lie} (H)$ is a Lie algebra homomorphism (brackets go to brackets), which has the following properties:

$\exp(df(X))=f(\exp(X))$ for all X in Lie(G), where "exp" is the exponential map.
$\operatorname {Lie} (\ker(f))=\ker(df)$.
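A standard illustration of these two properties (not spelled out in the source) is the determinant homomorphism $\det :GL_{n}(\mathbb {R} )\to \mathbb {R} ^{\times }$, whose differential is the trace: $\det(e^{X})=e^{\operatorname {tr} X}$, and $\ker(\det )=SL_{n}(\mathbb {R} )$ has Lie algebra $\ker(\operatorname {tr} )={\mathfrak {sl}}_{n}(\mathbb {R} )$. A quick numerical check:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))

# exp(df(X)) = f(exp(X)) with f = det and df = trace:
assert np.isclose(np.linalg.det(expm(X)), np.exp(np.trace(X)))
```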
If the image of f is closed, then $\operatorname {Lie} (\operatorname {im} (f))=\operatorname {im} (df)$ and the first isomorphism theorem holds: f induces the isomorphism of Lie groups

$$G/\ker(f)\to \operatorname {im} (f).$$
The chain rule holds: if $f:G\to H$ and $g:H\to K$ are Lie group homomorphisms, then $d(g\circ f)=(dg)\circ (df)$.

In particular, if H is a closed subgroup of a Lie group G, then $\operatorname {Lie} (H)$ is a Lie subalgebra of $\operatorname {Lie} (G)$. Also, if f is injective, then f is an immersion and so G is said to be an immersed (Lie) subgroup of H. For example, $G/\ker(f)$ is an immersed subgroup of H. If f is surjective, then f is a submersion and if, in addition, G is compact, then f is a principal bundle with the structure group its kernel (Ehresmann's lemma).
=== Other properties ===
Let $G=G_{1}\times \cdots \times G_{r}$ be a direct product of Lie groups and $p_{i}:G\to G_{i}$ the projections. Then the differentials $dp_{i}:\operatorname {Lie} (G)\to \operatorname {Lie} (G_{i})$ give the canonical identification:

$$\operatorname {Lie} (G_{1}\times \cdots \times G_{r})=\operatorname {Lie} (G_{1})\oplus \cdots \oplus \operatorname {Lie} (G_{r}).$$
If $H,H'$ are Lie subgroups of a Lie group, then

$$\operatorname {Lie} (H\cap H')=\operatorname {Lie} (H)\cap \operatorname {Lie} (H').$$
Let G be a connected Lie group. If H is a Lie group, then any Lie group homomorphism $f:G\to H$ is uniquely determined by its differential $df$. Precisely, there is the exponential map $\exp :\operatorname {Lie} (G)\to G$ (and one for H) such that $f(\exp(X))=\exp(df(X))$, and, since G is connected, this determines f uniquely. In general, if U is a neighborhood of the identity element in a connected topological group G, then $\bigcup _{n>0}U^{n}$ coincides with G, since the former is an open (hence closed) subgroup. Now, $\exp :\operatorname {Lie} (G)\to G$ defines a local homeomorphism from a neighborhood of the zero vector to a neighborhood of the identity element. For example, if G is the Lie group of invertible real square matrices of size n (the general linear group), then $\operatorname {Lie} (G)$ is the Lie algebra of real square matrices of size n and $\exp(X)=e^{X}=\sum _{j=0}^{\infty }X^{j}/j!$.
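The series above can be evaluated directly; this sketch (illustrative) compares a truncated partial sum against SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

def exp_series(X, terms=30):
    """Partial sum of exp(X) = sum_j X^j / j!."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for j in range(1, terms):
        term = term @ X / j
        result = result + term
    return result

X = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator of plane rotations
assert np.allclose(exp_series(X), expm(X))
```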
== The correspondence ==
The correspondence between Lie groups and Lie algebras includes the following three main results.
Lie's third theorem: Every finite-dimensional real Lie algebra is the Lie algebra of some simply connected Lie group.
The homomorphisms theorem: If $\phi :\operatorname {Lie} (G)\to \operatorname {Lie} (H)$ is a Lie algebra homomorphism and if G is simply connected, then there exists a (unique) Lie group homomorphism $f:G\to H$ such that $\phi =df$.
The subgroups–subalgebras theorem: If G is a Lie group and ${\mathfrak {h}}$ is a Lie subalgebra of $\operatorname {Lie} (G)$, then there is a unique connected Lie subgroup (not necessarily closed) H of G with Lie algebra ${\mathfrak {h}}$.
In the second part of the correspondence, the assumption that G is simply connected cannot be omitted. For example, the Lie algebras of SO(3) and SU(2) are isomorphic, but there is no corresponding homomorphism of SO(3) into SU(2). Rather, the homomorphism goes from the simply connected group SU(2) to the non-simply connected group SO(3). If G and H are both simply connected and have isomorphic Lie algebras, the above result allows one to show that G and H are isomorphic. One method to construct f is to use the Baker–Campbell–Hausdorff formula.
For readers familiar with category theory, the correspondence can be summarised as follows: First, the operation of associating to each connected Lie group $G$ its Lie algebra $\operatorname {Lie} (G)$, and to each homomorphism $f$ of Lie groups the corresponding differential $\operatorname {Lie} (f)=df_{e}$ at the neutral element, is a (covariant) functor $\operatorname {Lie} $ from the category of connected (real) Lie groups to the category of finite-dimensional (real) Lie algebras. This functor has a left adjoint functor $\Gamma $ from (finite-dimensional) Lie algebras to Lie groups (which is necessarily unique up to canonical isomorphism). In other words, there is a natural isomorphism of bifunctors

$$\mathrm {Hom} _{CLGrp}(\Gamma ({\mathfrak {g}}),H)\cong \mathrm {Hom} _{LAlg}({\mathfrak {g}},\operatorname {Lie} (H)).$$
$\Gamma ({\mathfrak {g}})$ is the (up to isomorphism unique) simply connected Lie group with Lie algebra ${\mathfrak {g}}$. The associated natural unit morphisms $\epsilon \colon {\mathfrak {g}}\rightarrow \operatorname {Lie} (\Gamma ({\mathfrak {g}}))$ of the adjunction are isomorphisms, which corresponds to $\Gamma $ being fully faithful (part of the second statement above). The corresponding counit $\Gamma (\operatorname {Lie} (H))\rightarrow H$ is the canonical projection ${\widetilde {H}}\rightarrow H$ from the simply connected covering; its surjectivity corresponds to $\operatorname {Lie} $ being a faithful functor.
=== Proof of Lie's third theorem ===
Perhaps the most elegant proof of the first result above uses Ado's theorem, which says that any finite-dimensional Lie algebra (over a field of any characteristic) is a Lie subalgebra of the Lie algebra ${\mathfrak {gl}}_{n}$ of square matrices. The proof goes as follows: by Ado's theorem, we may assume ${\mathfrak {g}}\subset {\mathfrak {gl}}_{n}(\mathbb {R} )=\operatorname {Lie} (GL_{n}(\mathbb {R} ))$ is a Lie subalgebra. Let G be the closed subgroup of $GL_{n}(\mathbb {R} )$ generated by $e^{\mathfrak {g}}$ (without taking the closure, one can get a pathological dense example, as in the case of the irrational winding of the torus), and let ${\widetilde {G}}$ be a simply connected covering of G; it is not hard to show that ${\widetilde {G}}$ is a Lie group and that the covering map is a Lie group homomorphism. Since $T_{e}{\widetilde {G}}=T_{e}G={\mathfrak {g}}$, this completes the proof.
Example: Each element X in the Lie algebra ${\mathfrak {g}}=\operatorname {Lie} (G)$ gives rise to the Lie algebra homomorphism

$$\mathbb {R} \to {\mathfrak {g}},\,t\mapsto tX.$$

By Lie's third theorem, as $\operatorname {Lie} (\mathbb {R} )=T_{0}\mathbb {R} =\mathbb {R} $ and exp for it is the identity, this homomorphism is the differential of a Lie group homomorphism $\mathbb {R} \to H$ for some immersed subgroup H of G. This Lie group homomorphism, called the one-parameter subgroup generated by X, is precisely the exponential map $t\mapsto \exp(tX)$, and H is its image. The preceding can be summarized by saying that there is a canonical bijective correspondence between ${\mathfrak {g}}$ and the set of one-parameter subgroups of G.
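As a concrete illustration (not from the source), the one-parameter subgroup generated by the rotation generator in $\operatorname {Lie} (SO(2))$ is the familiar rotation matrix, and the homomorphism property $\exp((s+t)X)=\exp(sX)\exp(tX)$ can be checked numerically:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0], [1.0, 0.0]])   # in Lie(SO(2))

def gamma(t):
    return expm(t * X)    # one-parameter subgroup generated by X

s, t = 0.7, -1.3
assert np.allclose(gamma(s + t), gamma(s) @ gamma(t))  # homomorphism R -> SO(2)
assert np.allclose(gamma(t), [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
```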
=== Proof of the homomorphisms theorem ===
One approach to proving the second part of the Lie group–Lie algebra correspondence (the homomorphisms theorem) is to use the Baker–Campbell–Hausdorff formula, as in Section 5.7 of Hall's book. Specifically, given the Lie algebra homomorphism $\phi $ from $\operatorname {Lie} (G)$ to $\operatorname {Lie} (H)$, we may define $f:G\to H$ locally (i.e., in a neighborhood of the identity) by the formula

$$f(e^{X})=e^{\phi (X)},$$

where $e^{X}$ is the exponential map for G, which has an inverse defined near the identity. We now argue that f is a local homomorphism. Thus, given two elements near the identity $e^{X}$ and $e^{Y}$ (with X and Y small), we consider their product $e^{X}e^{Y}$. According to the Baker–Campbell–Hausdorff formula, we have $e^{X}e^{Y}=e^{Z}$, where

$$Z=X+Y+{\frac {1}{2}}[X,Y]+{\frac {1}{12}}[X,[X,Y]]+\cdots ,$$
with $\cdots $ indicating other terms expressed as repeated commutators involving X and Y. Thus,

$$f\left(e^{X}e^{Y}\right)=f\left(e^{Z}\right)=e^{\phi (Z)}=e^{\phi (X)+\phi (Y)+{\frac {1}{2}}[\phi (X),\phi (Y)]+{\frac {1}{12}}[\phi (X),[\phi (X),\phi (Y)]]+\cdots },$$
because $\phi $ is a Lie algebra homomorphism. Using the Baker–Campbell–Hausdorff formula again, this time for the group H, we see that this last expression becomes $e^{\phi (X)}e^{\phi (Y)}$, and therefore we have

$$f\left(e^{X}e^{Y}\right)=e^{\phi (X)}e^{\phi (Y)}=f\left(e^{X}\right)f\left(e^{Y}\right).$$
Thus, f has the homomorphism property, at least when X and Y are sufficiently small. This argument is only local, since the exponential map is only invertible in a small neighborhood of the identity in G and since the Baker–Campbell–Hausdorff formula only holds if X and Y are small. The assumption that G is simply connected has not yet been used.
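The truncated Baker–Campbell–Hausdorff series can be tested numerically; the sketch below (illustrative; it truncates at third order and includes the $[Y,[Y,X]]$ term hidden in the "$\cdots $" above) compares it with $\log(e^{X}e^{Y})$ for small random matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(3)
X = 0.05 * rng.standard_normal((3, 3))
Y = 0.05 * rng.standard_normal((3, 3))
br = lambda A, B: A @ B - B @ A   # matrix commutator [A, B]

Z_exact = logm(expm(X) @ expm(Y))
Z_bch3 = X + Y + br(X, Y) / 2 + (br(X, br(X, Y)) + br(Y, br(Y, X))) / 12
print(np.linalg.norm(Z_exact - Z_bch3))   # fourth-order error: tiny for small X, Y
```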
The next stage in the argument is to extend f from a local homomorphism to a global one. The extension is done by defining f along a path and then using the simple connectedness of G to show that the definition is independent of the choice of path.
== Lie group representations ==
A special case of Lie correspondence is a correspondence between finite-dimensional representations of a Lie group and representations of the associated Lie algebra.
The general linear group $GL_{n}(\mathbb {C} )$ is a (real) Lie group and any Lie group homomorphism $\pi :G\to GL_{n}(\mathbb {C} )$ is called a representation of the Lie group G. The differential $d\pi :{\mathfrak {g}}\to {\mathfrak {gl}}_{n}(\mathbb {C} )$ is then a Lie algebra homomorphism called a Lie algebra representation. (The differential $d\pi $ is often simply denoted by $\pi '$.)
The homomorphisms theorem (mentioned above as part of the Lie group–Lie algebra correspondence) then says that if $G$ is the simply connected Lie group whose Lie algebra is ${\mathfrak {g}}$, every representation of ${\mathfrak {g}}$ comes from a representation of G. The assumption that G be simply connected is essential. Consider, for example, the rotation group SO(3), which is not simply connected. There is one irreducible representation of the Lie algebra in each dimension, but only the odd-dimensional representations of the Lie algebra come from representations of the group. (This observation is related to the distinction between integer spin and half-integer spin in quantum mechanics.) On the other hand, the group SU(2) is simply connected with Lie algebra isomorphic to that of SO(3), so every representation of the Lie algebra of SO(3) does give rise to a representation of SU(2).
=== The adjoint representation ===
An example of a Lie group representation is the adjoint representation of a Lie group G; each element g in a Lie group G defines an automorphism of G by conjugation: $c_{g}(h)=ghg^{-1}$; the differential $dc_{g}$ is then an automorphism of the Lie algebra ${\mathfrak {g}}$. This way, we get a representation $\operatorname {Ad} :G\to GL({\mathfrak {g}}),\,g\mapsto dc_{g}$, called the adjoint representation. The corresponding Lie algebra homomorphism ${\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})$ is called the adjoint representation of ${\mathfrak {g}}$ and is denoted by $\operatorname {ad} $. One can show $\operatorname {ad} (X)(Y)=[X,Y]$, which in particular implies that the Lie bracket of ${\mathfrak {g}}$ is determined by the group law on G.
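The identity $\operatorname {ad} (X)(Y)=[X,Y]$ can be illustrated numerically for matrix groups, where $\operatorname {Ad} (g)Y=gYg^{-1}$ and $\operatorname {ad} (X)$ is the derivative of $t\mapsto \operatorname {Ad} (e^{tX})$ at $t=0$ (a sketch, not from the source):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

def Ad(g, Z):
    return g @ Z @ np.linalg.inv(g)   # adjoint action of the group on its Lie algebra

h = 1e-6
numerical = (Ad(expm(h * X), Y) - Ad(expm(-h * X), Y)) / (2 * h)  # d/dt Ad(e^{tX})Y at t = 0
assert np.allclose(numerical, X @ Y - Y @ X, atol=1e-6)           # equals ad(X)Y = [X, Y]
```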
By Lie's third theorem, there exists a subgroup $\operatorname {Int} ({\mathfrak {g}})$ of $GL({\mathfrak {g}})$ whose Lie algebra is $\operatorname {ad} ({\mathfrak {g}})$. ($\operatorname {Int} ({\mathfrak {g}})$ is in general not a closed subgroup; only an immersed subgroup.) It is called the adjoint group of ${\mathfrak {g}}$. If G is connected, it fits into the exact sequence

$$0\to Z(G)\to G\xrightarrow {\operatorname {Ad} } \operatorname {Int} ({\mathfrak {g}})\to 0$$

where $Z(G)$ is the center of G. If the center of G is discrete, then Ad here is a covering map.
Let G be a connected Lie group. Then G is unimodular if and only if $\det(\operatorname {Ad} (g))=1$ for all g in G.
Let G be a Lie group acting on a manifold X and Gx the stabilizer of a point x in X. Let $\rho (x):G\to X,\,g\mapsto g\cdot x$. Then

$$\operatorname {Lie} (G_{x})=\ker(d\rho (x):T_{e}G\to T_{x}X).$$

If the orbit $G\cdot x$ is locally closed, then the orbit is a submanifold of X and $T_{x}(G\cdot x)=\operatorname {im} (d\rho (x):T_{e}G\to T_{x}X)$.
For a subset A of ${\mathfrak {g}}$ or G, let

$${\mathfrak {z}}_{\mathfrak {g}}(A)=\{X\in {\mathfrak {g}}\mid \operatorname {ad} (a)X=0{\text{ or }}\operatorname {Ad} (a)X=X{\text{ for all }}a{\text{ in }}A\}$$

$$Z_{G}(A)=\{g\in G\mid \operatorname {Ad} (g)a=a{\text{ or }}ga=ag{\text{ for all }}a{\text{ in }}A\}$$

be the Lie algebra centralizer and the Lie group centralizer of A (the first condition in each case applies when A is a subset of ${\mathfrak {g}}$ and the second when A is a subset of G). Then

$$\operatorname {Lie} (Z_{G}(A))={\mathfrak {z}}_{\mathfrak {g}}(A).$$
If H is a closed connected subgroup of G, then H is normal if and only if $\operatorname {Lie} (H)$ is an ideal, and in such a case $\operatorname {Lie} (G/H)=\operatorname {Lie} (G)/\operatorname {Lie} (H)$.
== Abelian Lie groups ==
Let G be a connected Lie group. Since the Lie algebra of the center of G is the center of the Lie algebra of G (cf. the previous §), G is abelian if and only if its Lie algebra is abelian.
If G is abelian, then the exponential map $\exp :{\mathfrak {g}}\to G$ is a surjective group homomorphism. Its kernel is a discrete group (since the dimension is zero), called the integer lattice of G, and is denoted by $\Gamma $. By the first isomorphism theorem, $\exp $ induces the isomorphism ${\mathfrak {g}}/\Gamma \to G$.
By the rigidity argument, the fundamental group $\pi _{1}(G)$ of a connected Lie group G is a central subgroup of a simply connected covering ${\widetilde {G}}$ of G; in other words, G fits into the central extension

$$1\to \pi _{1}(G)\to {\widetilde {G}}{\overset {p}{\to }}G\to 1.$$

Equivalently, given a Lie algebra ${\mathfrak {g}}$ and a simply connected Lie group ${\widetilde {G}}$ whose Lie algebra is ${\mathfrak {g}}$, there is a one-to-one correspondence between quotients of ${\widetilde {G}}$ by discrete central subgroups and connected Lie groups having Lie algebra ${\mathfrak {g}}$.
For the complex case, complex tori are important; see complex Lie group for this topic.
== Compact Lie groups ==
Let G be a connected Lie group with finite center. Then the following are equivalent.
G is compact.
(Weyl) The simply connected covering ${\widetilde {G}}$ of G is compact.
The adjoint group $\operatorname {Int} {\mathfrak {g}}$ is compact.
There exists an embedding $G\hookrightarrow O(n,\mathbb {R} )$ as a closed subgroup.
The Killing form on ${\mathfrak {g}}$ is negative definite.
For each X in ${\mathfrak {g}}$, $\operatorname {ad} (X)$ is diagonalizable and has zero or purely imaginary eigenvalues.
There exists an invariant inner product on ${\mathfrak {g}}$.
It is important to emphasize that the equivalence of the preceding conditions holds only under the assumption that G has finite center. Thus, for example, if G is compact with finite center, the universal cover ${\widetilde {G}}$ is also compact. Clearly, this conclusion does not hold if G has infinite center, e.g., if $G=S^{1}$. The last three conditions above are purely Lie algebraic in nature.
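The Killing-form criterion can be checked numerically in small cases; the following sketch (illustrative; the basis of ${\mathfrak {su}}(2)$ and the coordinate map are choices made here) computes $B(X,Y)=\operatorname {tr} (\operatorname {ad} X\,\operatorname {ad} Y)$ on ${\mathfrak {su}}(2)$ and confirms it is negative definite:

```python
import numpy as np

# Basis of su(2): traceless skew-Hermitian 2x2 matrices (i times Pauli matrices).
basis = [np.array([[1j, 0], [0, -1j]]),
         np.array([[0, 1], [-1, 0]], dtype=complex),
         np.array([[0, 1j], [1j, 0]])]

def coords(Z):
    """Real coordinates of Z = [[ia, b+ic], [-b+ic, -ia]] in the basis above."""
    return np.array([Z[0, 0].imag, Z[0, 1].real, Z[0, 1].imag])

def ad_matrix(X):
    return np.column_stack([coords(X @ B - B @ X) for B in basis])

# Killing form B(X, Y) = tr(ad X ad Y) as a Gram matrix in this basis:
K = np.array([[np.trace(ad_matrix(X) @ ad_matrix(Y)) for Y in basis] for X in basis])
print(np.linalg.eigvalsh(K))   # all eigenvalues negative: negative definite
```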
If G is a compact Lie group, then

$$H^{k}({\mathfrak {g}};\mathbb {R} )=H_{\text{dR}}^{k}(G)$$

where the left-hand side is the Lie algebra cohomology of ${\mathfrak {g}}$ and the right-hand side is the de Rham cohomology of G. (Roughly, this is a consequence of the fact that any differential form on G can be made left invariant by the averaging argument.)
== Related constructions ==
Let G be a Lie group. The associated Lie algebra $\operatorname {Lie} (G)$ of G may be alternatively defined as follows. Let $A(G)$ be the algebra of distributions on G with support at the identity element, with the multiplication given by convolution. $A(G)$ is in fact a Hopf algebra. The Lie algebra of G is then ${\mathfrak {g}}=\operatorname {Lie} (G)=P(A(G))$, the Lie algebra of primitive elements in $A(G)$. By the Milnor–Moore theorem, there is the canonical isomorphism $U({\mathfrak {g}})=A(G)$ between the universal enveloping algebra of ${\mathfrak {g}}$ and $A(G)$.
== See also ==
Compact Lie algebra
Milnor–Moore theorem
Formal group
Malcev Lie algebra
Distribution on a linear algebraic group
== Citations ==
== References ==
== External links ==
Notes for Math 261A Lie groups and Lie algebras
Popov, V.L. (2001) [1994], "Lie algebra of an analytic group", Encyclopedia of Mathematics, EMS Press
Formal Lie theory in characteristic zero, a blog post by Akhil Mathew
In mathematics, the representation theory of semisimple Lie algebras is one of the crowning achievements of the theory of Lie groups and Lie algebras. The theory was worked out mainly by E. Cartan and H. Weyl and because of that, the theory is also known as the Cartan–Weyl theory. The theory gives the structural description and classification of finite-dimensional representations of a semisimple Lie algebra (over $\mathbb {C} $); in particular, it gives a way to parametrize (or classify) irreducible finite-dimensional representations of a semisimple Lie algebra, the result known as the theorem of the highest weight.
There is a natural one-to-one correspondence between the finite-dimensional representations of a simply connected compact Lie group K and the finite-dimensional representations of the complex semisimple Lie algebra ${\mathfrak {g}}$ that is the complexification of the Lie algebra of K (this fact is essentially a special case of the Lie group–Lie algebra correspondence). Also, finite-dimensional representations of a connected compact Lie group can be studied through finite-dimensional representations of the universal cover of such a group. Hence, the representation theory of semisimple Lie algebras marks the starting point for the general theory of representations of connected compact Lie groups.
The theory is a basis for the later works of Harish-Chandra that concern (infinite-dimensional) representation theory of real reductive groups.
== Classifying finite-dimensional representations of semisimple Lie algebras ==
There is a beautiful theory classifying the finite-dimensional representations of a semisimple Lie algebra over $\mathbb {C} $. The finite-dimensional irreducible representations are described by a theorem of the highest weight. The theory is described in various textbooks, including Fulton & Harris (1991), Hall (2015), and Humphreys (1972).
Following an overview, the theory is described in increasing generality, starting with two simple cases that can be done "by hand" and then proceeding to the general result. The emphasis here is on the representation theory; for the geometric structures involving root systems needed to define the term "dominant integral element," follow the above link on weights in representation theory.
=== Overview ===
Classification of the finite-dimensional irreducible representations of a semisimple Lie algebra ${\mathfrak {g}}$ over $\mathbb {R} $ or $\mathbb {C} $ generally consists of two steps. The first step amounts to analysis of hypothesized representations, resulting in a tentative classification. The second step is the actual realization of these representations.
A real Lie algebra is usually complexified, enabling analysis in an algebraically closed field. In addition, working over the complex numbers admits nicer bases. The following theorem applies: A real-linear finite-dimensional representation of a real Lie algebra extends to a complex-linear representation of its complexification. The real-linear representation is irreducible if and only if the corresponding complex-linear representation is irreducible. Moreover, a complex semisimple Lie algebra has the complete reducibility property. This means that every finite-dimensional representation decomposes as a direct sum of irreducible representations.
Conclusion: Classification amounts to studying irreducible complex linear representations of the (complexified) Lie algebra.
==== Classification: Step One ====
The first step is to hypothesize the existence of irreducible representations. That is to say, one hypothesizes that one has an irreducible representation $\pi $ of a complex semisimple Lie algebra ${\mathfrak {g}}$, without worrying about how the representation is constructed. The properties of these hypothetical representations are investigated, and conditions necessary for the existence of an irreducible representation are then established.
The properties involve the weights of the representation. Here is the simplest description. Let ${\mathfrak {h}}$ be a Cartan subalgebra of ${\mathfrak {g}}$, that is, a maximal commutative subalgebra with the property that $\operatorname {ad} _{H}$ is diagonalizable for each $H\in {\mathfrak {h}}$, and let $H_{1},\ldots ,H_{n}$ be a basis for ${\mathfrak {h}}$. A weight $\lambda $ for a representation $(\pi ,V)$ of ${\mathfrak {g}}$ is a collection of simultaneous eigenvalues $(\lambda _{1},\ldots ,\lambda _{n})$ for the commuting operators $\pi (H_{1}),\ldots ,\pi (H_{n})$. In basis-independent language, $\lambda $ is a linear functional on ${\mathfrak {h}}$ such that there exists a nonzero vector $v\in V$ with $\pi (H)v=\lambda (H)v$ for every $H\in {\mathfrak {h}}$.
A partial ordering on the set of weights is defined, and the notion of highest weight in terms of this partial ordering is established for any set of weights. Using the structure on the Lie algebra, the notions of dominant element and integral element are defined. Every finite-dimensional representation must have a maximal weight $\lambda $, i.e., one for which no strictly higher weight occurs. If $V$ is irreducible and $v$ is a weight vector with weight $\lambda $, then the entire space $V$ must be generated by the action of ${\mathfrak {g}}$ on $v$. Thus, $(\pi ,V)$ is a "highest weight cyclic" representation. One then shows that the weight $\lambda $ is actually the highest weight (not just maximal) and that every highest weight cyclic representation is irreducible. One then shows that two irreducible representations with the same highest weight are isomorphic. Finally, one shows that the highest weight $\lambda $ must be dominant and integral.
Conclusion: Irreducible representations are classified by their highest weights, and the highest weight is always a dominant integral element.
Step One has the side benefit that the structure of the irreducible representations is better understood. Representations decompose as direct sums of weight spaces, with the weight space corresponding to the highest weight one-dimensional. Repeated application of the representatives of certain elements of the Lie algebra called lowering operators yields a set of generators for the representation as a vector space. The application of one such operator on a vector with definite weight results either in zero or in a vector with strictly lower weight. Raising operators work similarly, but result in a vector with strictly higher weight or zero. The representatives of the Cartan subalgebra act diagonally in a basis of weight vectors.
==== Classification: Step Two ====
Step Two is concerned with constructing the representations that Step One allows for. That is to say, we now fix a dominant integral element $\lambda $ and try to construct an irreducible representation with highest weight $\lambda $.
There are several standard ways of constructing irreducible representations:
Construction using Verma modules. This approach is purely Lie algebraic. (Generally applicable to complex semisimple Lie algebras.)
The compact group approach using the Peter–Weyl theorem. If, for example, ${\mathfrak {g}}=\operatorname {sl} (n,\mathbb {C} )$, one would work with the simply connected compact group $\operatorname {SU} (n)$. (Generally applicable to complex semisimple Lie algebras.)
Construction using the Borel–Weil theorem, in which holomorphic representations of the group G corresponding to ${\mathfrak {g}}$ are constructed. (Generally applicable to complex semisimple Lie algebras.)
Performing standard operations on known representations, in particular applying Clebsch–Gordan decomposition to tensor products of representations. (Not generally applicable.) In the case ${\mathfrak {g}}=\operatorname {sl} (3,\mathbb {C} )$, this construction is described below.
In the simplest cases, construction from scratch.
Conclusion: Every dominant integral element of a complex semisimple Lie algebra gives rise to an irreducible, finite-dimensional representation. These are the only irreducible representations.
=== The case of sl(2,C) ===
The Lie algebra sl(2,C) of the special linear group SL(2,C) is the space of 2x2 trace-zero matrices with complex entries. The following elements form a basis:

$$X={\begin{pmatrix}0&1\\0&0\end{pmatrix}}\qquad Y={\begin{pmatrix}0&0\\1&0\end{pmatrix}}\qquad H={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.$$
These satisfy the commutation relations

$$[H,X]=2X,\quad [H,Y]=-2Y,\quad [X,Y]=H.$$
Every finite-dimensional representation of sl(2,C) decomposes as a direct sum of irreducible representations. This claim follows from the general result on complete reducibility of semisimple Lie algebras, or from the fact that sl(2,C) is the complexification of the Lie algebra of the simply connected compact group SU(2). The irreducible representations $\pi $, in turn, can be classified by the largest eigenvalue of $\pi (H)$, which must be a non-negative integer m. That is to say, in this case, a "dominant integral element" is simply a non-negative integer.
The irreducible representation with largest eigenvalue m has dimension $m+1$ and is spanned by eigenvectors for $\pi (H)$ with eigenvalues $m,m-2,\ldots ,-m+2,-m$. The operators $\pi (X)$ and $\pi (Y)$ move up and down the chain of eigenvectors, respectively. This analysis is described in detail in the representation theory of SU(2) (from the point of view of the complexified Lie algebra).
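The weight-basis description above translates directly into matrices; this sketch (an illustration under one common normalization, with $Yv_{k}=v_{k+1}$ and $Xv_{k}=k(m-k+1)v_{k-1}$, chosen here) builds the $(m+1)$-dimensional irreducible representation and verifies the commutation relations:

```python
import numpy as np

def sl2_irrep(m):
    """Matrices of the (m+1)-dimensional irreducible representation of sl(2, C).

    Basis v_0, ..., v_m of weight vectors with H v_k = (m - 2k) v_k,
    Y v_k = v_{k+1} and X v_k = k(m - k + 1) v_{k-1}.
    """
    d = m + 1
    H = np.diag([float(m - 2 * k) for k in range(d)])
    X = np.zeros((d, d)); Y = np.zeros((d, d))
    for k in range(d - 1):
        Y[k + 1, k] = 1.0                 # lowering operator
        X[k, k + 1] = (k + 1) * (m - k)   # raising operator
    return H, X, Y

H, X, Y = sl2_irrep(4)
br = lambda A, B: A @ B - B @ A
assert np.allclose(br(H, X), 2 * X)
assert np.allclose(br(H, Y), -2 * Y)
assert np.allclose(br(X, Y), H)
```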
One can give a concrete realization of the representations (Step Two in the overview above) in either of two ways. First, in this simple example, it is not hard to write down an explicit basis for the representation and an explicit formula for how the generators $X,Y,H$ of the Lie algebra act on this basis. Alternatively, one can realize the representation with highest weight $m$ by letting $V_{m}$ denote the space of homogeneous polynomials of degree $m$ in two complex variables, and then defining the action of $X$, $Y$, and $H$ by

$$\pi _{m}(X)=-z_{2}{\frac {\partial }{\partial z_{1}}};\quad \pi _{m}(Y)=-z_{1}{\frac {\partial }{\partial z_{2}}};\quad \pi _{m}(H)=-z_{1}{\frac {\partial }{\partial z_{1}}}+z_{2}{\frac {\partial }{\partial z_{2}}}.$$
Note that the formulas for the action of $X$, $Y$, and $H$ do not depend on $m$; the subscript in the formulas merely indicates that we are restricting the action of the indicated operators to the space of homogeneous polynomials of degree $m$ in $z_{1}$ and $z_{2}$.
=== The case of sl(3,C) ===
There is a similar theory classifying the irreducible representations of sl(3,C), which is the complexified Lie algebra of the group SU(3). The Lie algebra sl(3,C) is eight-dimensional. We may work with a basis consisting of the following two diagonal elements

$$H_{1}={\begin{pmatrix}1&0&0\\0&-1&0\\0&0&0\end{pmatrix}},\quad H_{2}={\begin{pmatrix}0&0&0\\0&1&0\\0&0&-1\end{pmatrix}},$$
together with six other matrices $X_{1},\,X_{2},\,X_{3}$ and $Y_{1},\,Y_{2},\,Y_{3}$, each of which has a 1 in an off-diagonal entry and zeros elsewhere. (The $X_{i}$'s have a 1 above the diagonal and the $Y_{i}$'s have a 1 below the diagonal.)
The strategy is then to simultaneously diagonalize $\pi (H_{1})$ and $\pi (H_{2})$ in each irreducible representation $\pi $. Recall that in the sl(2,C) case, the actions of $\pi (X)$ and $\pi (Y)$ raise and lower the eigenvalues of $\pi (H)$. Similarly, in the sl(3,C) case, the actions of $\pi (X_{i})$ and $\pi (Y_{i})$ "raise" and "lower" the eigenvalues of $\pi (H_{1})$ and $\pi (H_{2})$. The irreducible representations are then classified by the largest eigenvalues $m_{1}$ and $m_{2}$ of $\pi (H_{1})$ and $\pi (H_{2})$, respectively, where $m_{1}$ and $m_{2}$ are non-negative integers. That is to say, in this setting, a "dominant integral element" is precisely a pair of non-negative integers.
Unlike the representations of sl(2,C), the representations of sl(3,C) cannot be described explicitly in general. Thus, it requires an argument to show that every pair $(m_{1},m_{2})$ actually arises as the highest weight of some irreducible representation (Step Two in the overview above). This can be done as follows. First, we construct the "fundamental representations", with highest weights (1,0) and (0,1). These are the three-dimensional standard representation (in which $\pi (X)=X$) and the dual of the standard representation. Then one takes a tensor product of $m_{1}$ copies of the standard representation and $m_{2}$ copies of the dual of the standard representation, and extracts an irreducible invariant subspace.
Although the representations cannot be described explicitly, there is a lot of useful information describing their structure. For example, the dimension of the irreducible representation with highest weight $(m_{1},m_{2})$ is given by

$$\dim(m_{1},m_{2})={\frac {1}{2}}(m_{1}+1)(m_{2}+1)(m_{1}+m_{2}+2).$$

There is also a simple pattern to the multiplicities of the various weight spaces. Finally, the irreducible representations with highest weight $(0,m)$ can be realized concretely on the space of homogeneous polynomials of degree $m$ in three complex variables.
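The dimension formula is easy to evaluate; a small sketch (illustrative) checks it against familiar representations:

```python
def dim_sl3(m1, m2):
    """Weyl dimension formula for the sl(3, C) irrep with highest weight (m1, m2)."""
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

assert dim_sl3(1, 0) == 3   # standard representation
assert dim_sl3(0, 1) == 3   # its dual
assert dim_sl3(1, 1) == 8   # adjoint representation
assert dim_sl3(2, 0) == 6   # symmetric square of the standard representation
```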
=== The case of a general semisimple Lie algebra ===
Let ${\mathfrak {g}}$ be a semisimple Lie algebra and let ${\mathfrak {h}}$ be a Cartan subalgebra of ${\mathfrak {g}}$, that is, a maximal commutative subalgebra with the property that $\operatorname {ad} _{H}$ is diagonalizable for all H in ${\mathfrak {h}}$. As an example, we may consider the case where ${\mathfrak {g}}$ is sl(n,C), the algebra of n by n traceless matrices, and ${\mathfrak {h}}$ is the subalgebra of traceless diagonal matrices. We then let R denote the associated root system. We then choose a base (or system of positive simple roots) $\Delta $ for R.
We now briefly summarize the structures needed to state the theorem of the highest weight; more details can be found in the article on weights in representation theory.
We choose an inner product on ${\mathfrak {h}}$ that is invariant under the action of the Weyl group of R, which we use to identify ${\mathfrak {h}}$ with its dual space. If $(\pi ,V)$ is a representation of ${\mathfrak {g}}$, we define a weight of V to be an element $\lambda $ in ${\mathfrak {h}}$ with the property that for some nonzero v in V, we have $\pi (H)v=\langle \lambda ,H\rangle v$ for all H in ${\mathfrak {h}}$. We then define one weight $\lambda $ to be higher than another weight $\mu $ if $\lambda -\mu $ is expressible as a linear combination of elements of $\Delta $ with non-negative real coefficients. A weight $\mu $ is called a highest weight if $\mu $ is higher than every other weight of $\pi $. Finally, if $\lambda $ is a weight, we say that $\lambda $ is dominant if it has non-negative inner product with each element of $\Delta $, and we say that $\lambda $ is integral if $2\langle \lambda ,\alpha \rangle /\langle \alpha ,\alpha \rangle $ is an integer for each $\alpha $ in R.
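For ${\mathfrak {g}}=\mathrm {sl} (3,\mathbb {C} )$ these conditions can be tested directly in the realization of $A_{2}$ inside $\mathbb {R} ^{3}$; in the sketch below (illustrative; the fundamental weights `fw1`, `fw2` are standard but the helper is written here), a weight is dominant integral exactly when it is a non-negative integer combination of the fundamental weights:

```python
import numpy as np

# Roots of A_2 = sl(3, C) realized in the plane {x in R^3 : sum(x) = 0}.
e = np.eye(3)
simple = [e[0] - e[1], e[1] - e[2]]              # base Delta
roots = [e[i] - e[j] for i in range(3) for j in range(3) if i != j]

def is_dominant_integral(lam):
    lam = np.asarray(lam, dtype=float)
    vals = [2 * (lam @ a) / (a @ a) for a in roots]
    integral = all(abs(v - round(v)) < 1e-9 for v in vals)
    dominant = all(lam @ a >= -1e-9 for a in simple)
    return integral and dominant

fw1 = np.array([2, -1, -1]) / 3    # fundamental weight paired with alpha_1
fw2 = np.array([1, 1, -2]) / 3     # fundamental weight paired with alpha_2
assert is_dominant_integral(3 * fw1 + 2 * fw2)   # non-negative integer combinations work
assert not is_dominant_integral(0.5 * fw1)       # fails integrality
```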
Finite-dimensional representations of a semisimple Lie algebra are completely reducible, so it suffices to classify irreducible (simple) representations. The irreducible representations, in turn, may be classified by the "theorem of the highest weight" as follows:
Every irreducible, finite-dimensional representation of ${\mathfrak {g}}$ has a highest weight, and this highest weight is dominant and integral.
Two irreducible, finite-dimensional representations with the same highest weight are isomorphic.
Every dominant integral element arises as the highest weight of some irreducible, finite-dimensional representation of ${\mathfrak {g}}$.
The last point of the theorem (Step Two in the overview above) is the most difficult one. In the case of the Lie algebra sl(3,C), the construction can be done in an elementary way, as described above. In general, the construction of the representations may be given by using Verma modules.
=== Construction using Verma modules ===
If $\lambda $ is any weight, not necessarily dominant or integral, one can construct an infinite-dimensional representation $W_{\lambda }$ of ${\mathfrak {g}}$ with highest weight $\lambda $ known as a Verma module. The Verma module then has a maximal proper invariant subspace $U_{\lambda }$, so that the quotient representation $V_{\lambda }:=W_{\lambda }/U_{\lambda }$ is irreducible, and still has highest weight $\lambda $. In the case that $\lambda $ is dominant and integral, we wish to show that $V_{\lambda }$ is finite-dimensional.
The strategy for proving finite-dimensionality of $V_{\lambda }$ is to show that the set of weights of $V_{\lambda }$ is invariant under the action of the Weyl group $W$ of ${\mathfrak {g}}$ relative to the given Cartan subalgebra ${\mathfrak {h}}$. (Note that the weights of the Verma module $W_{\lambda }$ itself are definitely not invariant under $W$.) Once this invariance result is established, it follows that $V_{\lambda }$ has only finitely many weights. After all, if $\mu $ is a weight of $V_{\lambda }$, then $\mu $ must be integral (indeed, $\mu $ must differ from $\lambda $ by an integer combination of roots) and, by the invariance result, $w\cdot \mu $ must be lower than $\lambda $ for every $w$ in $W$. But there are only finitely many integral elements $\mu $ with this property. Thus, $V_{\lambda }$ has only finitely many weights, each of which has finite multiplicity (even in the Verma module, so certainly also in $V_{\lambda }$). From this, it follows that $V_{\lambda }$ must be finite-dimensional.
=== Additional properties of the representations ===
Much is known about the representations of a complex semisimple Lie algebra ${\mathfrak {g}}$, besides the classification in terms of highest weights. We mention a few of these briefly. We have already alluded to Weyl's theorem, which states that every finite-dimensional representation of ${\mathfrak {g}}$ decomposes as a direct sum of irreducible representations. There is also the Weyl character formula, which leads to the Weyl dimension formula (a formula for the dimension of the representation in terms of its highest weight) and the Kostant multiplicity formula (a formula for the multiplicities of the various weights occurring in a representation). Finally, there is also a formula for the eigenvalue of the Casimir element, which acts as a scalar in each irreducible representation.
== Lie group representations and Weyl's unitarian trick ==
Although it is possible to develop the representation theory of complex semisimple Lie algebras in a self-contained way, it can be illuminating to bring in a perspective using Lie groups. This approach is particularly helpful in understanding Weyl's theorem on complete reducibility. It is known that every complex semisimple Lie algebra ${\mathfrak {g}}$ has a compact real form ${\mathfrak {k}}$. This means first that ${\mathfrak {g}}$ is the complexification of ${\mathfrak {k}}$:

$${\mathfrak {g}}={\mathfrak {k}}+i{\mathfrak {k}}$$

and second that there exists a simply connected compact group $K$ whose Lie algebra is ${\mathfrak {k}}$. As an example, we may consider ${\mathfrak {g}}=\operatorname {sl} (n;\mathbb {C} )$, in which case $K$ may be taken to be the special unitary group SU(n).
Given a finite-dimensional representation $V$ of ${\mathfrak {g}}$, we can restrict it to ${\mathfrak {k}}$. Then since $K$ is simply connected, we can integrate the representation to the group $K$. The method of averaging over the group shows that there is an inner product on $V$ that is invariant under the action of $K$; that is, the action of $K$ on $V$ is unitary. At this point, we may use unitarity to see that $V$ decomposes as a direct sum of irreducible representations. This line of reasoning is called the unitarian trick and was Weyl's original argument for what is now called Weyl's theorem. There is also a purely algebraic argument for the complete reducibility of representations of semisimple Lie algebras.
If ${\mathfrak {g}}$ is a complex semisimple Lie algebra, there is a unique complex semisimple Lie group $G$ with Lie algebra ${\mathfrak {g}}$, in addition to the simply connected compact group $K$. (If ${\mathfrak {g}}=\operatorname {sl} (n;\mathbb {C} )$ then $G=\operatorname {SL} (n;\mathbb {C} )$.) Then we have the following result about finite-dimensional representations.
Statement: The objects in the following list are in one-to-one correspondence:
Smooth representations of K
Holomorphic representations of G
Real linear representations of ${\mathfrak {k}}$
Complex linear representations of ${\mathfrak {g}}$
Conclusion: The representation theory of compact Lie groups can shed light on the representation theory of complex semisimple Lie algebras.
== Notes ==
== References ==
| Wikipedia/Representation_theory_of_semisimple_Lie_algebras
In mathematics, geometric invariant theory (or GIT) is a method for constructing quotients by group actions in algebraic geometry, used to construct moduli spaces. It was developed by David Mumford in 1965, using ideas from the paper (Hilbert 1893) in classical invariant theory.
Geometric invariant theory studies an action of a group G on an algebraic variety (or scheme) X and provides techniques for forming the 'quotient' of X by G as a scheme with reasonable properties. One motivation was to construct moduli spaces in algebraic geometry as quotients of schemes parametrizing marked objects. In the 1970s and 1980s the theory developed interactions with symplectic geometry and equivariant topology, and was used to construct moduli spaces of objects in differential geometry, such as instantons and monopoles.
== Background ==
Invariant theory is concerned with a group action of a group G on an algebraic variety (or a scheme) X. Classical invariant theory addresses the situation when X = V is a vector space and G is either a finite group, or one of the classical Lie groups that acts linearly on V. This action induces a linear action of G on the space of polynomial functions R(V) on V by the formula
$$g \cdot f(v) = f(g^{-1}v), \qquad g \in G,\ v \in V.$$
The polynomial invariants of the G-action on V are those polynomial functions f on V which are fixed under the 'change of variables' due to the action of the group, so that g · f = f for all g in G. They form a commutative algebra A = R(V)^G, and this algebra is interpreted as the algebra of functions on the 'invariant theory quotient' V // G, because any one of these functions gives the same value for all points that are equivalent (that is, f(v) = f(gv) for all g). In the language of modern algebraic geometry,
$$V/\!\!/G = \operatorname{Spec} A = \operatorname{Spec} R(V)^{G}.$$
Several difficulties emerge from this description. The first one, successfully tackled by Hilbert in the case of a general linear group, is to prove that the algebra A is finitely generated. This is necessary if one wants the quotient to be an affine algebraic variety. Whether a similar fact holds for arbitrary groups G was the subject of Hilbert's fourteenth problem, and Nagata demonstrated that the answer was negative in general. On the other hand, in the course of development of representation theory in the first half of the twentieth century, a large class of groups for which the answer is positive was identified; these are called reductive groups and include all finite groups and all classical groups.
The finite generation of the algebra A is but the first step towards the complete description of A, and progress in resolving this more delicate question was rather modest. The invariants had classically been described only in a restricted range of situations, and the complexity of this description beyond the first few cases held out little hope for full understanding of the algebras of invariants in general. Furthermore, it may happen that every polynomial invariant f takes the same value on a given pair of points u and v in V, yet these points are in different orbits of the G-action. A simple example is provided by the multiplicative group C* of non-zero complex numbers that acts on an n-dimensional complex vector space C^n by scalar multiplication. In this case, every polynomial invariant is a constant, but there are many different orbits of the action. The zero vector forms an orbit by itself, and the non-zero multiples of any non-zero vector form an orbit, so that non-zero orbits are parametrized by the points of the complex projective space CP^{n−1}. If this happens (different orbits having the same function values), one says that "invariants do not separate the orbits", and the algebra A reflects the topological quotient space X / G rather imperfectly. Indeed, the latter space, with the quotient topology, is frequently non-separated (non-Hausdorff). (This is the case in our example – the null orbit is not open because any neighborhood of the null vector contains points in all other orbits, so in the quotient topology any neighborhood of the null orbit contains all other orbits.) In 1893 Hilbert formulated and proved a criterion for determining those orbits which are not separated from the zero orbit by invariant polynomials. Rather remarkably, unlike his earlier work in invariant theory, which led to the rapid development of abstract algebra, this result of Hilbert remained little known and little used for the next 70 years. Much of the development of invariant theory in the first half of the twentieth century concerned explicit computations with invariants, and at any rate, followed the logic of algebra rather than geometry.
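The scaling example can be checked symbolically. In the sketch below (my illustration, with sample polynomials of my choosing), a polynomial on C^2 is invariant under the C* action precisely when the substitution x ↦ tx, y ↦ ty leaves it unchanged, which forces it to be constant:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

def is_invariant(f):
    # f is C*-invariant iff f(t*x, t*y) - f(x, y) vanishes identically in t.
    g = f.subs({x: t * x, y: t * y}, simultaneous=True)
    return sp.simplify(sp.expand(g - f)) == 0

print(is_invariant(3 * x**2 * y - 5 * y**4 + 1))  # False: degrees scale
print(is_invariant(sp.Integer(7)))                # True: only constants
```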
== Mumford's book ==
Geometric invariant theory was founded and developed by Mumford in a monograph, first published in 1965, that applied ideas of nineteenth century invariant theory, including some results of Hilbert, to modern algebraic geometry questions. (The book was greatly expanded in two later editions, with extra appendices by Fogarty and Mumford, and a chapter on symplectic quotients by Kirwan.) The book uses both scheme theory and computational techniques available in examples.
The abstract setting used is that of a group action on a scheme X.
The simple-minded idea of an orbit space $G \backslash X$, i.e. the quotient space of X by the group action, runs into difficulties in algebraic geometry, for reasons that are explicable in abstract terms. There is in fact no general reason why equivalence relations should interact well with the (rather rigid) regular functions (polynomial functions), which are at the heart of algebraic geometry. The functions on the orbit space G \ X that should be considered are those on X that are invariant under the action of G. The direct approach can be made, by means of the function field of a variety (i.e. rational functions): take the G-invariant rational functions on it, as the function field of the quotient variety. Unfortunately this — the point of view of birational geometry — can only give a first approximation to the answer. As Mumford put it in the Preface to the book: "The problem is, within the set of all models of the resulting birational class, there is one model whose geometric points classify the set of orbits in some action, or the set of algebraic objects in some moduli problem."
In Chapter 5 he isolates further the specific technical problem addressed, in a moduli problem of quite classical type — classify the big 'set' of all algebraic varieties subject only to being non-singular (and a requisite condition on polarization). The moduli are supposed to describe the parameter space. For example, for algebraic curves it has been known from the time of Riemann that there should be connected components of dimensions $0, 1, 3, 6, 9, \ldots$ according to the genus g = 0, 1, 2, 3, 4, …, and the moduli are functions on each component. In the coarse moduli problem Mumford considers the obstructions to be:
non-separated topology on the moduli space (i.e. not enough parameters in good standing)
infinitely many irreducible components (which isn't avoidable, but local finiteness may hold)
failure of components to be representable as schemes, although representable topologically.
It is the third point that motivated the whole theory. As Mumford puts it, if the first two difficulties are resolved, "[the third question] becomes essentially equivalent to the question of whether an orbit space of some locally closed subset of the Hilbert or Chow schemes by the projective group exists."
To deal with this he introduced a notion (in fact three) of stability. This enabled him to open up the previously treacherous area — much had been written, in particular by Francesco Severi, but the methods of the literature had limitations. The birational point of view can afford to be careless about subsets of codimension 1. To have a moduli space as a scheme is on one side a question about characterising schemes as representable functors (as the Grothendieck school would see it); but geometrically it is more like a compactification question, as the stability criteria revealed. The restriction to non-singular varieties will not lead to a compact space in any sense as moduli space: varieties can degenerate to having singularities. On the other hand, the points that would correspond to highly singular varieties are definitely too 'bad' to include in the answer. The correct middle ground, of points stable enough to be admitted, was isolated by Mumford's work. The concept was not entirely new, since certain aspects of it were to be found in David Hilbert's final ideas on invariant theory, before he moved on to other fields.
The book's Preface also enunciated the Mumford conjecture, later proved by William Haboush.
== Stability ==
If a reductive group G acts linearly on a vector space V, then a non-zero point of V is called
unstable if 0 is in the closure of its orbit,
semi-stable if 0 is not in the closure of its orbit,
stable if its orbit is closed, and its stabilizer is finite.
There are equivalent ways to state these (this criterion is known as the Hilbert–Mumford criterion):
A non-zero point x is unstable if and only if there is a 1-parameter subgroup of G all of whose weights with respect to x are positive.
A non-zero point x is unstable if and only if every invariant polynomial has the same value on 0 and x.
A non-zero point x is semistable if and only if there is no 1-parameter subgroup of G all of whose weights with respect to x are positive.
A non-zero point x is semistable if and only if some invariant polynomial has different values on 0 and x.
A non-zero point x is stable if and only if every 1-parameter subgroup of G has positive (and negative) weights with respect to x.
A non-zero point x is stable if and only if for every y not in the orbit of x there is some invariant polynomial that has different values on y and x, and the ring of invariant polynomials has transcendence degree dim(V) – dim(G).
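For a single one-parameter subgroup acting diagonally, the criterion above is easy to compute. The following toy sketch (my own example, not from the text) records the weights of a diagonal 1-parameter subgroup at a point and tests whether that subgroup, or its inverse, destabilizes the point:

```python
# For the 1-parameter subgroup lambda(t) = diag(t^w_1, ..., t^w_n) acting
# on C^n, the weights of lambda at x are the w_i with x_i != 0. The point
# x is driven to 0 by lambda (all weights positive) or by its inverse
# (all weights negative), so either case witnesses instability.
def weights_at(x, w):
    return [wi for xi, wi in zip(x, w) if xi != 0]

def destabilized_by(x, w):
    ws = weights_at(x, w)
    return bool(ws) and (all(wi > 0 for wi in ws) or all(wi < 0 for wi in ws))

w = [1, -1]                        # t . (x, y) = (t*x, y/t)
print(destabilized_by([1, 0], w))  # True:  (t, 0) -> 0 as t -> 0
print(destabilized_by([1, 1], w))  # False: mixed weights
print(destabilized_by([0, 1], w))  # True:  use the inverse subgroup
```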
A point of the corresponding projective space of V is called unstable, semistable, or stable if it is the image of a point in V with the same property. "Unstable" is the opposite of "semistable" (not "stable"). The unstable points form a Zariski closed set of projective space, while the semistable and stable points both form Zariski open sets (possibly empty). These definitions are from (Mumford 1977) and are not equivalent to the ones in the first edition of Mumford's book.
Many moduli spaces can be constructed as the quotients of the space of stable points of some subset of projective space by some group action. These spaces can often be compactified by adding certain equivalence classes of semistable points. Different stable orbits correspond to different points in the quotient, but two different semistable orbits may correspond to the same point in the quotient if their closures intersect.
Example: (Deligne & Mumford 1969)
A stable curve is a reduced connected curve of genus ≥ 2 such that its only singularities are ordinary double points and every non-singular rational component meets the other components in at least 3 points. The moduli space of stable curves of genus g is the quotient of a subset of the Hilbert scheme of curves in P^{5g−6} with Hilbert polynomial (6n − 1)(g − 1) by the group PGL_{5g−5}.
Example:
A vector bundle W over an algebraic curve (or over a Riemann surface) is a stable vector bundle if and only if
$$\frac{\deg(V)}{\operatorname{rank}(V)} < \frac{\deg(W)}{\operatorname{rank}(W)}$$
for all proper non-zero subbundles V of W, and is semistable if this condition holds with < replaced by ≤.
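Numerically, slope stability is just a comparison of fractions. A small sketch (with hypothetical degree and rank data of my choosing):

```python
from fractions import Fraction

def slope(deg: int, rank: int) -> Fraction:
    """The slope mu = deg / rank of a bundle, as an exact fraction."""
    return Fraction(deg, rank)

def is_stable(W, proper_subbundles):
    """W and each subbundle are given as (degree, rank) pairs."""
    return all(slope(*V) < slope(*W) for V in proper_subbundles)

# Hypothetical data: W of degree 1 and rank 2 whose line subbundles all
# have degree <= 0; then mu(V) <= 0 < 1/2 = mu(W), so W is stable.
W = (1, 2)
subbundles = [(0, 1), (-1, 1)]
print(is_stable(W, subbundles))  # True
```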
== See also ==
GIT quotient
Geometric complexity theory
Geometric quotient
Categorical quotient
Quantization commutes with reduction
K-stability
K-stability of Fano varieties
Bridgeland stability condition
Stability (algebraic geometry)
== References ==
Deligne, Pierre; Mumford, David (1969), "The irreducibility of the space of curves of given genus", Publications Mathématiques de l'IHÉS, 36 (1): 75–109, doi:10.1007/BF02684599, MR 0262240, S2CID 16482150
Hilbert, D. (1893), "Über die vollen Invariantensysteme", Math. Annalen, 42 (3): 313, doi:10.1007/BF01444162
Kirwan, Frances, Cohomology of quotients in symplectic and algebraic geometry. Mathematical Notes, 31. Princeton University Press, Princeton, NJ, 1984. i+211 pp. MR0766741 ISBN 0-691-08370-3
Kraft, Hanspeter, Geometrische Methoden in der Invariantentheorie. (German) (Geometrical methods in invariant theory) Aspects of Mathematics, D1. Friedr. Vieweg & Sohn, Braunschweig, 1984. x+308 pp. MR0768181 ISBN 3-528-08525-8
Mumford, David (1977), "Stability of projective varieties", L'Enseignement Mathématique, 2e Série, 23 (1): 39–110, ISSN 0013-8584, MR 0450272, archived from the original on 2011-07-07
Mumford, David; Fogarty, J.; Kirwan, F. (1994), Geometric invariant theory, Ergebnisse der Mathematik und ihrer Grenzgebiete (2) [Results in Mathematics and Related Areas (2)], vol. 34 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-56963-3, MR 1304906; MR0214602 (1st ed 1965); MR0719371 (2nd ed)
V. L. Popov, E. B. Vinberg, Invariant theory, in Algebraic geometry. IV. Encyclopaedia of Mathematical Sciences, 55 (translated from 1989 Russian edition) Springer-Verlag, Berlin, 1994. vi+284 pp. ISBN 3-540-54682-0 | Wikipedia/Geometric_invariant_theory |
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
== Overview ==
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
$$\mathrm{Ric} = kg$$
The equations for an Einstein manifold, used in general relativity to describe the curvature of spacetime
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable.
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (basing them on those features rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 14th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories.
== History ==
Theoretical physics began at least 2,300 years ago, under Pre-socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts of the Trivium (grammar, logic, and rhetoric) and of the Quadrivium (arithmetic, geometry, music and astronomy). During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution.
The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, writing Principia Mathematica. It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras.
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids — and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, further progress brought renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with applications of relativity to problems in astronomy and cosmology.
All of these achievements depended on theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite orthogonal series.
Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.
== Mainstream theories ==
Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation. There do exist mainstream theories that are generally accepted based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate.
== Proposed theories ==
The proposed theories of physics are usually relatively new theories which deal with the study of physics and which include scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, magnetic monopoles, string theory, and the theory of everything.
== Fringe theories ==
Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. They can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, for which a body of associated predictions has been made according to that theory.
Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory.
== Thought experiments vs real experiments ==
"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
== See also ==
List of theoretical physicists
Philosophy of physics
Symmetry in quantum mechanics
Timeline of developments in theoretical physics
Double field theory
== Notes ==
== References ==
== Further reading ==
Physical Sciences. Encyclopædia Britannica (Macropaedia). Vol. 25 (15th ed.). 1994.
Duhem, Pierre. La théorie physique - Son objet, sa structure, (in French). 2nd edition - 1914. English translation: The physical theory - its purpose, its structure. Republished by Joseph Vrin philosophical bookstore (1981), ISBN 2711602214.
Feynman, et al. The Feynman Lectures on Physics (3 vol.). First edition: Addison–Wesley, (1964, 1966).
Bestselling three-volume textbook covering the span of physics. Reference for both (under)graduate student and professional researcher alike.
Landau et al. Course of Theoretical Physics.
Famous series of books dealing with theoretical concepts in physics covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifschits" or "Landau-Lifschits" in the literature.
Longair, MS. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2nd edition (4 Dec 2003). ISBN 052152878X. ISBN 978-0521528788
Planck, Max (1909). Eight Lectures on theoretical physics. Library of Alexandria. ISBN 1465521887, ISBN 9781465521880.
A set of lectures given in 1909 at Columbia University.
Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes.
A series of lessons from a master educator of theoretical physicists.
== External links ==
MIT Center for Theoretical Physics
How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft | Wikipedia/Theoretical_physics |
In the mathematical field of Lie theory, a split Lie algebra is a pair $(\mathfrak{g}, \mathfrak{h})$ where $\mathfrak{g}$ is a Lie algebra and $\mathfrak{h} < \mathfrak{g}$ is a splitting Cartan subalgebra, where "splitting" means that for all $x \in \mathfrak{h}$, $\operatorname{ad}_{\mathfrak{g}} x$ is triangularizable. If a Lie algebra admits a splitting, it is called a splittable Lie algebra. Note that for reductive Lie algebras, the Cartan subalgebra is required to contain the center.
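As a concrete check (my own illustration, not part of the article), in $\mathfrak{sl}(2,\mathbb{R})$ the line spanned by $h = \operatorname{diag}(1,-1)$ is a splitting Cartan subalgebra: in the basis $h, e, f$ the operator $\operatorname{ad} h$ is the diagonal matrix $\operatorname{diag}(0, 2, -2)$, which is certainly triangularizable over $\mathbb{R}$.

```python
import numpy as np

h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])

def bracket(a, b):
    return a @ b - b @ a

def coords(x):
    # coordinates of x = a*h + b*e + c*f, read off from the matrix entries
    return np.array([x[0, 0], x[0, 1], x[1, 0]])

# Matrix of ad(h) in the basis (h, e, f): column j is [h, j-th basis vector].
ad_h = np.column_stack([coords(bracket(h, b)) for b in (h, e, f)])
print(ad_h)  # diag(0, 2, -2): real eigenvalues, already triangular
```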
Over an algebraically closed field such as the complex numbers, all semisimple Lie algebras are splittable (indeed, not only does the Cartan subalgebra act by triangularizable matrices, but even stronger, it acts by diagonalizable ones) and all splittings are conjugate; thus split Lie algebras are of most interest for non-algebraically closed fields.
Split Lie algebras are of interest both because they formalize the split real form of a complex Lie algebra, and because split semisimple Lie algebras (more generally, split reductive Lie algebras) over any field share many properties with semisimple Lie algebras over algebraically closed fields – having essentially the same representation theory, for instance – the splitting Cartan subalgebra playing the same role as the Cartan subalgebra plays over algebraically closed fields. This is the approach followed in (Bourbaki 2005), for instance.
== Properties ==
Over an algebraically closed field, all Cartan subalgebras are conjugate. Over a non-algebraically closed field, not all Cartan subalgebras are conjugate in general; however, in a splittable semisimple Lie algebra all splitting Cartan algebras are conjugate.
Over an algebraically closed field, all semisimple Lie algebras are splittable.
Over a non-algebraically closed field, there exist non-splittable semisimple Lie algebras.
In a splittable Lie algebra, there may exist Cartan subalgebras that are not splitting.
Direct sums of splittable Lie algebras and ideals in splittable Lie algebras are splittable.
=== Split real Lie algebras ===
For a real Lie algebra, splittable is equivalent to either of these conditions:
The real rank equals the complex rank.
The Satake diagram has neither black vertices nor arrows.
Every complex semisimple Lie algebra has a unique (up to isomorphism) split real Lie algebra, which is also semisimple, and is simple if and only if the complex Lie algebra is.
For real semisimple Lie algebras, split Lie algebras are opposite to compact Lie algebras – the corresponding Lie group is "as far as possible" from being compact.
== Examples ==
The split real forms for the complex semisimple Lie algebras are:
$A_n$, $\mathfrak{sl}_{n+1}(\mathbf{C})$: $\mathfrak{sl}_{n+1}(\mathbf{R})$
$B_n$, $\mathfrak{so}_{2n+1}(\mathbf{C})$: $\mathfrak{so}_{n,n+1}(\mathbf{R})$
$C_n$, $\mathfrak{sp}_{n}(\mathbf{C})$: $\mathfrak{sp}_{n}(\mathbf{R})$
$D_n$, $\mathfrak{so}_{2n}(\mathbf{C})$: $\mathfrak{so}_{n,n}(\mathbf{R})$
Exceptional Lie algebras:
$E_6$, $E_7$, $E_8$, $F_4$, $G_2$ have split real forms EI, EV, EVIII, FI, G.
These are the Lie algebras of the split real groups of the complex Lie groups.
Note that for $\mathfrak{sl}$ and $\mathfrak{sp}$, the real form is the real points of (the Lie algebra of) the same algebraic group, while for $\mathfrak{so}$ one must use the split forms (of maximally indefinite index), as the group SO is compact.
== See also ==
Compact Lie algebra
Real form
Split-complex number
Split orthogonal group
== References == | Wikipedia/Split_Lie_algebra |
In mathematics, the notion of a real form relates objects defined over the field of real and complex numbers. A real Lie algebra g0 is called a real form of a complex Lie algebra g if g is the complexification of g0:
$$\mathfrak{g} \simeq \mathfrak{g}_0 \otimes_{\mathbb{R}} \mathbb{C}.$$
The notion of a real form can also be defined for complex Lie groups. Real forms of complex semisimple Lie groups and Lie algebras have been completely classified by Élie Cartan.
== Real forms for Lie groups and algebraic groups ==
Using the Lie correspondence between Lie groups and Lie algebras, the notion of a real form can be defined for Lie groups. In the case of linear algebraic groups, the notions of complexification and real form have a natural description in the language of algebraic geometry.
== Classification ==
Just as complex semisimple Lie algebras are classified by Dynkin diagrams, the real forms of a semisimple Lie algebra are classified by Satake diagrams, which are obtained from the Dynkin diagram of the complex form by labeling some vertices black (filled), and connecting some other vertices in pairs by arrows, according to certain rules.
It is a basic fact in the structure theory of complex semisimple Lie algebras that every such algebra has two special real forms: one is the compact real form and corresponds to a compact Lie group under the Lie correspondence (its Satake diagram has all vertices blackened), and the other is the split real form and corresponds to a Lie group that is as far as possible from being compact (its Satake diagram has no vertices blackened and no arrows). In the case of the complex special linear group SL(n,C), the compact real form is the special unitary group SU(n) and the split real form is the real special linear group SL(n,R). The classification of real forms of semisimple Lie algebras was accomplished by Élie Cartan in the context of Riemannian symmetric spaces. In general, there may be more than two real forms.
Suppose that g0 is a semisimple Lie algebra over the field of real numbers. By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries +1 or −1. By Sylvester's law of inertia, the number of positive entries, or the positive index of inertia, is an invariant of the bilinear form, i.e. it does not depend on the choice of the diagonalizing basis. This is a number between 0 and the dimension of g which is an important invariant of the real Lie algebra, called its index.
=== Split real form ===
A real form g0 of a finite-dimensional complex semisimple Lie algebra g is said to be split, or normal, if in each Cartan decomposition g0 = k0 ⊕ p0, the space p0 contains a maximal abelian subalgebra of g0, i.e. its Cartan subalgebra. Élie Cartan proved that every complex semisimple Lie algebra g has a split real form, which is unique up to isomorphism. It has maximal index among all real forms.
The split form corresponds to the Satake diagram with no vertices blackened and no arrows.
=== Compact real form ===
A real Lie algebra g0 is called compact if the Killing form is negative definite, i.e. the index of g0 is zero. In this case g0 = k0 is a compact Lie algebra. It is known that under the Lie correspondence, compact Lie algebras correspond to compact Lie groups.
The compact form corresponds to the Satake diagram with all vertices blackened.
== Construction of the compact real form ==
In general, the construction of the compact real form uses structure theory of semisimple Lie algebras. For classical Lie algebras there is a more explicit construction.
Let g0 be a real Lie algebra of matrices over R that is closed under the transpose map,
$$X \mapsto X^{t}.$$
Then g0 decomposes into the direct sum of its skew-symmetric part k0 and its symmetric part p0. This is the Cartan decomposition:
$$\mathfrak{g}_0 = \mathfrak{k}_0 \oplus \mathfrak{p}_0.$$
The complexification g of g0 decomposes into the direct sum of g0 and ig0. The real vector space of matrices
$$\mathfrak{u}_0 = \mathfrak{k}_0 \oplus i\,\mathfrak{p}_0$$
is a subspace of the complex Lie algebra g that is closed under the commutators and consists of skew-hermitian matrices. It follows that u0 is a real Lie subalgebra of g, that its Killing form is negative definite (making it a compact Lie algebra), and that the complexification of u0 is g. Therefore, u0 is a compact form of g.
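A quick numerical rendition of this construction for $\mathfrak{g}_0 = \mathfrak{sl}(2,\mathbb{R})$ (a sketch of the recipe above in code, with randomly chosen data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2))
X -= (np.trace(X) / 2) * np.eye(2)  # project onto sl(2, R)

k = (X - X.T) / 2                   # skew-symmetric part, in k0
p = (X + X.T) / 2                   # symmetric part, in p0
u = k + 1j * p                      # an element of u0 = k0 + i*p0

assert np.allclose(u.conj().T, -u)  # skew-hermitian
assert np.isclose(np.trace(u), 0)   # traceless: u lies in su(2)
```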
== See also ==
Complexification (Lie group)
== Notes ==
== References == | Wikipedia/Real_form_(Lie_theory) |
In mathematics, a Lie superalgebra is a generalisation of a Lie algebra to include a $\mathbb{Z}/2\mathbb{Z}$‑grading. Lie superalgebras are important in theoretical physics where they are used to describe the mathematics of supersymmetry.
The notion of $\mathbb{Z}/2\mathbb{Z}$ grading used here is distinct from a second $\mathbb{Z}/2\mathbb{Z}$ grading having cohomological origins. A graded Lie algebra (say, graded by $\mathbb{Z}$ or $\mathbb{N}$) that is anticommutative and has a graded Jacobi identity also has a $\mathbb{Z}/2\mathbb{Z}$ grading; this is the "rolling up" of the algebra into odd and even parts. This rolling-up is not normally referred to as "super". Thus, supergraded Lie superalgebras carry a pair of $\mathbb{Z}/2\mathbb{Z}$‑gradations: one of which is supersymmetric, and the other is classical. Pierre Deligne calls the supersymmetric one the super gradation, and the classical one the cohomological gradation. These two gradations must be compatible, and there is often disagreement as to how they should be regarded.
== Definition ==
Formally, a Lie superalgebra is a nonassociative Z2-graded algebra, or superalgebra, over a commutative ring (typically R or C) whose product [·, ·], called the Lie superbracket or supercommutator, satisfies the two conditions (analogs of the usual Lie algebra axioms, with grading):
Super skew-symmetry:
$$[x,y] = -(-1)^{|x||y|}[y,x].$$
The super Jacobi identity:
$$(-1)^{|x||z|}[x,[y,z]] + (-1)^{|y||x|}[y,[z,x]] + (-1)^{|z||y|}[z,[x,y]] = 0,$$
where x, y, and z are pure in the Z2-grading. Here, |x| denotes the degree of x (either 0 or 1). The degree of [x,y] is the sum of degree of x and y modulo 2.
One also sometimes adds the axioms $[x,x] = 0$ for |x| = 0 (if 2 is invertible this follows automatically) and $[[x,x],x] = 0$ for |x| = 1 (if 3 is invertible this follows automatically). When the ground ring is the integers or the Lie superalgebra is a free module, these conditions are equivalent to the condition that the Poincaré–Birkhoff–Witt theorem holds (and, in general, they are necessary conditions for the theorem to hold).
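Both defining identities can be verified numerically in the smallest interesting case. The sketch below (my illustration) samples homogeneous elements of $\mathfrak{gl}(1|1)$, where even elements are diagonal 2×2 matrices and odd elements are off-diagonal, and checks super skew-symmetry and the super Jacobi identity for the supercommutator:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def homogeneous(parity):
    """A random homogeneous element of gl(1|1) with the given parity."""
    a, b = rng.standard_normal(2)
    mat = np.diag([a, b]) if parity == 0 else np.array([[0, a], [b, 0]])
    return mat, parity

def sbr(x, y):
    """Supercommutator [x, y] = xy - (-1)^{|x||y|} yx, with its parity."""
    (X, px), (Y, py) = x, y
    return X @ Y - (-1) ** (px * py) * Y @ X, (px + py) % 2

for px, py, pz in product([0, 1], repeat=3):
    x, y, z = homogeneous(px), homogeneous(py), homogeneous(pz)
    # super skew-symmetry
    assert np.allclose(sbr(x, y)[0], -(-1) ** (px * py) * sbr(y, x)[0])
    # super Jacobi identity
    J = ((-1) ** (px * pz) * sbr(x, sbr(y, z))[0]
         + (-1) ** (py * px) * sbr(y, sbr(z, x))[0]
         + (-1) ** (pz * py) * sbr(z, sbr(x, y))[0])
    assert np.allclose(J, np.zeros((2, 2)))
```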
Just as for Lie algebras, the universal enveloping algebra of the Lie superalgebra can be given a Hopf algebra structure.
== Comments ==
Lie superalgebras show up in physics in several different ways. In conventional supersymmetry, the even elements of the superalgebra correspond to bosons and odd elements to fermions. This corresponds to a bracket that has a grading of zero:
$$|[a,b]| = |a| + |b|$$
This is not always the case; for example, in BRST supersymmetry and in the Batalin–Vilkovisky formalism, it is the other way around, which corresponds to the bracket having a grading of −1:
$$|[a,b]| = |a| + |b| - 1$$
This distinction becomes particularly relevant when an algebra has not one, but two graded associative products. In addition to the Lie bracket, there may also be an "ordinary" product, thus giving rise to the Poisson superalgebra and the Gerstenhaber algebra. Such gradings are also observed in deformation theory.
== Properties ==
Let $\mathfrak{g} = \mathfrak{g}_0 \oplus \mathfrak{g}_1$ be a Lie superalgebra. By inspecting the Jacobi identity, one sees that there are eight cases depending on whether arguments are even or odd. These fall into four classes, indexed by the number of odd elements:
No odd elements. The statement is just that $\mathfrak{g}_0$ is an ordinary Lie algebra.
One odd element. Then $\mathfrak{g}_1$ is a $\mathfrak{g}_0$-module for the action $\operatorname{ad}_a : b \mapsto [a,b]$, where $a \in \mathfrak{g}_0$ and $b, [a,b] \in \mathfrak{g}_1$.
Two odd elements. The Jacobi identity says that the bracket $\mathfrak{g}_1 \otimes \mathfrak{g}_1 \to \mathfrak{g}_0$ is a symmetric $\mathfrak{g}_0$-equivariant map.
Three odd elements. For all $b \in \mathfrak{g}_1$, $[b,[b,b]] = 0$.
Thus the even subalgebra $\mathfrak{g}_0$ of a Lie superalgebra forms a (normal) Lie algebra, as all the signs disappear and the superbracket becomes a normal Lie bracket, while $\mathfrak{g}_1$ is a linear representation of $\mathfrak{g}_0$, and there exists a symmetric $\mathfrak{g}_0$-equivariant linear map $\{\cdot,\cdot\} : \mathfrak{g}_1 \otimes \mathfrak{g}_1 \to \mathfrak{g}_0$ such that
$$[\{x,y\},z] + [\{y,z\},x] + [\{z,x\},y] = 0, \qquad x, y, z \in \mathfrak{g}_1.$$
Conditions (1)–(3) are linear and can all be understood in terms of ordinary Lie algebras. Condition (4) is nonlinear, and is the most difficult one to verify when constructing a Lie superalgebra starting from an ordinary Lie algebra ($\mathfrak{g}_0$) and a representation ($\mathfrak{g}_1$).
== Involution ==
A ∗ Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map from itself to itself which respects the Z2 grading and satisfies [x,y]* = [y*,x*] for all x and y in the Lie superalgebra. (Some authors prefer the convention [x,y]* = (−1)^{|x||y|}[y*,x*]; changing * to −* switches between the two conventions.) Its universal enveloping algebra would be an ordinary *-algebra.
== Examples ==
Given any associative superalgebra $A$ one can define the supercommutator on homogeneous elements by
$$[x,y] = xy - (-1)^{|x||y|}yx$$
and then extending by linearity to all elements. The algebra $A$ together with the supercommutator then becomes a Lie superalgebra. The simplest example of this procedure is perhaps when $A$ is the space $\operatorname{End}(V)$ of all linear maps of a super vector space $V$ to itself. When $V = \mathbb{K}^{p|q}$, this space is denoted by $M^{p|q}$ or $M(p|q)$. With the Lie bracket per above, the space is denoted $\mathfrak{gl}(p|q)$.
A Poisson algebra is an associative algebra together with a Lie bracket. If the algebra is given a Z2-grading, such that the Lie bracket becomes a Lie superbracket, then one obtains the Poisson superalgebra. If, in addition, the associative product is made supercommutative, one obtains a supercommutative Poisson superalgebra.
The Whitehead product on homotopy groups gives many examples of Lie superalgebras over the integers.
The super-Poincaré algebra generates the isometries of flat superspace.
== Classification ==
The simple complex finite-dimensional Lie superalgebras were classified by Victor Kac.
They are (excluding the Lie algebras):
The special linear Lie superalgebra $\mathfrak{sl}(m|n)$.
The Lie superalgebra $\mathfrak{sl}(m|n)$ is the subalgebra of $\mathfrak{gl}(m|n)$ consisting of matrices with supertrace zero. It is simple when $m \neq n$. If $m = n$, then the identity matrix $I_{2m}$ generates an ideal. Quotienting out this ideal leads to $\mathfrak{sl}(m|m)/\langle I_{2m} \rangle$, which is simple for $m \geq 2$.
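Writing $X \in \mathfrak{gl}(m|n)$ in blocks, with an $m \times m$ block $A$ and an $n \times n$ block $D$ on the diagonal, the supertrace is $\operatorname{str}(X) = \operatorname{tr}(A) - \operatorname{tr}(D)$. A small numerical sketch (in my own notation) checking that the supertrace vanishes on a supercommutator of odd elements, so that $\mathfrak{sl}(m|n)$ is closed under the bracket:

```python
import numpy as np

m, n = 2, 3
rng = np.random.default_rng(3)

def supertrace(X):
    return np.trace(X[:m, :m]) - np.trace(X[m:, m:])

def odd(B, C):
    """An odd element of gl(m|n): off-diagonal blocks only."""
    X = np.zeros((m + n, m + n))
    X[:m, m:], X[m:, :m] = B, C
    return X

x = odd(rng.standard_normal((m, n)), rng.standard_normal((n, m)))
y = odd(rng.standard_normal((m, n)), rng.standard_normal((n, m)))

# For odd x, y the supercommutator is the anticommutator xy + yx,
# and its supertrace vanishes.
print(np.isclose(supertrace(x @ y + y @ x), 0))  # True
```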
The orthosymplectic Lie superalgebra $\mathfrak{osp}(m|2n)$.
Consider an even, non-degenerate, supersymmetric bilinear form $\langle \cdot , \cdot \rangle$ on $\mathbb{C}^{m|2n}$. Then the orthosymplectic Lie superalgebra is the subalgebra of $\mathfrak{gl}(m|2n)$ consisting of matrices that leave this form invariant:
$$\mathfrak{osp}(m|2n) = \{ X \in \mathfrak{gl}(m|2n) \mid \langle Xu, v \rangle + (-1)^{|X||u|} \langle u, Xv \rangle = 0 \text{ for all } u, v \in \mathbb{C}^{m|2n} \}.$$
Its even part is given by $\mathfrak{so}(m) \oplus \mathfrak{sp}(2n)$.
The exceptional Lie superalgebra $D(2,1;\alpha)$.
There is a family of (9|8)-dimensional Lie superalgebras depending on a parameter $\alpha$. These are deformations of $D(2,1) = \mathfrak{osp}(4|2)$. If $\alpha \neq 0$ and $\alpha \neq -1$, then $D(2,1;\alpha)$ is simple. Moreover, $D(2,1;\alpha) \cong D(2,1;\beta)$ if $\alpha$ and $\beta$ are in the same orbit under the maps $\alpha \mapsto \alpha^{-1}$ and $\alpha \mapsto -1-\alpha$.
The exceptional Lie superalgebra $F(4)$.
It has dimension (24|16). Its even part is given by $\mathfrak{sl}(2) \oplus \mathfrak{so}(7)$.
The exceptional Lie superalgebra $G(3)$.
It has dimension (17|14). Its even part is given by $\mathfrak{sl}(2) \oplus G_2$.
There are also two so-called strange series, called $\mathfrak{pe}(n)$ and $\mathfrak{q}(n)$.
The Cartan types. They can be divided in four families: $W(n)$, $S(n)$, $\widetilde{S}(2n)$ and $H(n)$. For the Cartan type of simple Lie superalgebras, the odd part is no longer completely reducible under the action of the even part.
== Classification of infinite-dimensional simple linearly compact Lie superalgebras ==
The classification consists of the 10 series $W(m, n)$, $S(m, n)$ ($(m, n) \neq (1, 1)$), $H(2m, n)$, $K(2m + 1, n)$, $HO(m, m)$ ($m \geq 2$), $SHO(m, m)$ ($m \geq 3$), $KO(m, m + 1)$, $SKO(m, m + 1; \beta)$ ($m \geq 2$), $\widetilde{SHO}(2m, 2m)$, $\widetilde{SKO}(2m + 1, 2m + 3)$ and the five exceptional algebras:
E(1, 6), E(5, 10), E(4, 4), E(3, 6), E(3, 8)
The last two are particularly interesting (according to Kac) because they have the standard model gauge group SU(3)×SU(2)×U(1) as their zero level algebra. Infinite-dimensional (affine) Lie superalgebras are important symmetries in superstring theory. Specifically, the Virasoro algebras with
$\mathcal{N}$ supersymmetries are $K(1, \mathcal{N})$, which only have central extensions up to $\mathcal{N} = 4$.
== Category-theoretic definition ==
In category theory, a Lie superalgebra can be defined as a nonassociative superalgebra whose product satisfies
$$[\cdot,\cdot] \circ (\operatorname{id} + \tau_{A,A}) = 0$$
$$[\cdot,\cdot] \circ ([\cdot,\cdot] \otimes \operatorname{id}) \circ (\operatorname{id} + \sigma + \sigma^{2}) = 0$$
where σ is the cyclic permutation braiding $(\operatorname{id} \otimes \tau_{A,A}) \circ (\tau_{A,A} \otimes \operatorname{id})$.
== See also ==
Gerstenhaber algebra
Anyonic Lie algebra
Grassmann algebra
Representation of a Lie superalgebra
Superspace
Supergroup
Universal enveloping algebra
== Notes ==
== References ==
Cheng, S.-J.; Wang, W. (2012). Dualities and Representations of Lie Superalgebras. Graduate Studies in Mathematics. Vol. 144. pp. 302pp. ISBN 978-0-8218-9118-6.
Freund, P. G. O. (1983). Introduction to supersymmetry. Cambridge Monographs on Mathematical Physics. Cambridge University Press. doi:10.1017/CBO9780511564017. ISBN 978-0521-356-756.
Grozman, P.; Leites, D.; Shchepochkina, I. (2005). "Lie Superalgebras of String Theories". Acta Mathematica Vietnamica. 26 (2005): 27–63. arXiv:hep-th/9702120. Bibcode:1997hep.th....2120G.
Kac, V. G. (1977). "Lie superalgebras". Advances in Mathematics. 26 (1): 8–96. doi:10.1016/0001-8708(77)90017-2.
Kac, V. G. (2010). "Classification of Infinite-Dimensional Simple Groups of Supersymmetries and Quantum Field Theory". Visions in Mathematics. pp. 162–183. arXiv:math/9912235. doi:10.1007/978-3-0346-0422-2_6. ISBN 978-3-0346-0421-5. S2CID 15597378.
Manin, Y. I. (1997). Gauge Field Theory and Complex Geometry (2nd ed.). Berlin: Springer. ISBN 978-3-540-61378-7.
Musson, I. M. (2012). Lie Superalgebras and Enveloping Algebras. Graduate Studies in Mathematics. Vol. 131. pp. 488 pp. ISBN 978-0-8218-6867-6.
Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
=== Historical ===
Frölicher, A.; Nijenhuis, A. (1956). "Theory of vector valued differential forms. Part I". Indagationes Mathematicae. 59: 338–350. doi:10.1016/S1385-7258(56)50046-7.
Gerstenhaber, M. (1963). "The cohomology structure of an associative ring". Annals of Mathematics. 78 (2): 267–288. doi:10.2307/1970343. JSTOR 1970343.
Gerstenhaber, M. (1964). "On the Deformation of Rings and Algebras". Annals of Mathematics. 79 (1): 59–103. doi:10.2307/1970484. JSTOR 1970484.
Milnor, J. W.; Moore, J. C. (1965). "On the structure of Hopf algebras". Annals of Mathematics. 81 (2): 211–264. doi:10.2307/1970615. JSTOR 1970615.
== External links ==
Irving Kaplansky + Lie Superalgebras | Wikipedia/Lie_superalgebra |
In mathematics, the tensor product of representations is a tensor product of vector spaces underlying representations together with the factor-wise group action on the product. This construction, together with the Clebsch–Gordan procedure, can be used to generate additional irreducible representations if one already knows a few.
== Definition ==
=== Group representations ===
If $V_1, V_2$ are linear representations of a group $G$, then their tensor product is the tensor product of vector spaces $V_1 \otimes V_2$ with the linear action of $G$ uniquely determined by the condition that
$$g \cdot (v_1 \otimes v_2) = (g \cdot v_1) \otimes (g \cdot v_2)$$
for all $v_1 \in V_1$ and $v_2 \in V_2$. Although not every element of $V_1 \otimes V_2$ is expressible in the form $v_1 \otimes v_2$, the universal property of the tensor product guarantees that this action is well-defined.
In the language of homomorphisms, if the actions of $G$ on $V_1$ and $V_2$ are given by homomorphisms $\Pi_1 : G \to \operatorname{GL}(V_1)$ and $\Pi_2 : G \to \operatorname{GL}(V_2)$, then the tensor product representation is given by the homomorphism $\Pi_1 \otimes \Pi_2 : G \to \operatorname{GL}(V_1 \otimes V_2)$ given by
$$\Pi_1 \otimes \Pi_2 (g) = \Pi_1(g) \otimes \Pi_2(g),$$
where $\Pi_1(g) \otimes \Pi_2(g)$ is the tensor product of linear maps.
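In coordinates the tensor product of linear maps is the Kronecker product of matrices, and the homomorphism property can be checked numerically. A small sketch (my illustration) with two representations of the circle group:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two representations of the circle group: rotation by theta and by 2*theta.
Pi1 = lambda t: rot(t)
Pi2 = lambda t: rot(2 * t)
tensor = lambda t: np.kron(Pi1(t), Pi2(t))  # (Pi1 ⊗ Pi2)(t)

# The tensor product is again a homomorphism:
s, t = 0.7, -1.3
assert np.allclose(tensor(s + t), tensor(s) @ tensor(t))
```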
One can extend the notion of tensor products to any finite number of representations. If V is a linear representation of a group G, then with the above linear action, the tensor algebra $T(V)$ is an algebraic representation of G; i.e., each element of G acts as an algebra automorphism.
=== Lie algebra representations ===
If $(V_1, \pi_1)$ and $(V_2, \pi_2)$ are representations of a Lie algebra $\mathfrak{g}$, then the tensor product of these representations is the map $\pi_1 \otimes \pi_2 : \mathfrak{g} \to \operatorname{End}(V_1 \otimes V_2)$ given by
$$\pi_1 \otimes \pi_2 (X) = \pi_1(X) \otimes I + I \otimes \pi_2(X),$$
where $I$ is the identity endomorphism. This is called the Kronecker sum, defined in Matrix addition#Kronecker sum and Kronecker product#Properties.
The motivation for the use of the Kronecker sum in this definition comes from the case in which $\pi_1$ and $\pi_2$ come from representations $\Pi_1$ and $\Pi_2$ of a Lie group $G$. In that case, a simple computation shows that the Lie algebra representation associated to $\Pi_1 \otimes \Pi_2$ is given by the preceding formula.
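That computation can be reproduced numerically: differentiating $e^{tX_1} \otimes e^{tX_2}$ at $t = 0$ yields the Kronecker sum $X_1 \otimes I + I \otimes X_2$. A finite-difference sketch (my illustration, with arbitrary matrices standing in for $\pi_1(X)$ and $\pi_2(X)$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
X1 = rng.standard_normal((2, 2))  # stands in for pi_1(X)
X2 = rng.standard_normal((3, 3))  # stands in for pi_2(X)

kron_sum = np.kron(X1, np.eye(3)) + np.kron(np.eye(2), X2)

t = 1e-6  # finite-difference derivative of kron(exp(t X1), exp(t X2)) at 0
deriv = (np.kron(expm(t * X1), expm(t * X2)) - np.eye(6)) / t
assert np.allclose(deriv, kron_sum, atol=1e-4)
```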
=== Quantum groups ===
For quantum groups, the coproduct is no longer co-commutative. As a result, the natural permutation map $V \otimes W \to W \otimes V$ is no longer an isomorphism of modules. However, the permutation map remains an isomorphism of vector spaces.
== Action on linear maps ==
If $(V_1, \Pi_1)$ and $(V_2, \Pi_2)$ are representations of a group $G$, let $\operatorname{Hom}(V_1, V_2)$ denote the space of all linear maps from $V_1$ to $V_2$. Then $\operatorname{Hom}(V_1, V_2)$ can be given the structure of a representation by defining
$$g \cdot A = \Pi_2(g) A \Pi_1(g)^{-1}$$
for all $A \in \operatorname{Hom}(V_1, V_2)$. Now, for representations V and W, there is a natural isomorphism
$$\operatorname{Hom}(V, W) \cong V^{*} \otimes W$$
as vector spaces; this vector space isomorphism is in fact an isomorphism of representations.
The trivial subrepresentation
Hom
(
V
,
W
)
G
{\displaystyle \operatorname {Hom} (V,W)^{G}}
consists of G-linear maps; i.e.,
Hom
G
(
V
,
W
)
=
Hom
(
V
,
W
)
G
.
{\displaystyle \operatorname {Hom} _{G}(V,W)=\operatorname {Hom} (V,W)^{G}.}
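A quick numerical check of this construction: a map A is fixed by the action exactly when it intertwines the two representations. A sketch with illustrative choices of Π1(g) and Π2(g) for the nontrivial element of Z/2:

```python
import numpy as np

# Z/2 acts on V1 = V2 = R^2 by swapping coordinates.
Pi1_g = np.array([[0.0, 1.0], [1.0, 0.0]])
Pi2_g = np.array([[0.0, 1.0], [1.0, 0.0]])

def act(A):
    """The action g . A = Pi2(g) A Pi1(g)^{-1} on Hom(V1, V2)."""
    return Pi2_g @ A @ np.linalg.inv(Pi1_g)

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # commutes with the swap
B = np.array([[1.0, 0.0], [0.0, 2.0]])   # does not

assert np.allclose(act(A), A)       # A lies in Hom(V1, V2)^G: it is G-linear
assert not np.allclose(act(B), B)   # B is not fixed, hence not G-linear
```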
Let E = End(V) denote the endomorphism algebra of V and let A denote the subalgebra of E⊗m consisting of symmetric tensors. The main theorem of invariant theory states that A is semisimple when the characteristic of the base field is zero.
== Clebsch–Gordan theory ==
=== The general problem ===
The tensor product of two irreducible representations V1, V2 of a group or Lie algebra is usually not irreducible. It is therefore of interest to attempt to decompose V1 ⊗ V2 into irreducible pieces. This decomposition problem is known as the Clebsch–Gordan problem.
=== The SU(2) case ===
The prototypical example of this problem is the case of the rotation group SO(3)—or its double cover, the special unitary group SU(2). The irreducible representations of SU(2) are described by a parameter ℓ, whose possible values are ℓ = 0, 1/2, 1, 3/2, …. (The dimension of the representation is then 2ℓ + 1.) Let us take two parameters ℓ and m with ℓ ≥ m. Then the tensor product representation Vℓ ⊗ Vm decomposes as follows:
{\displaystyle V_{\ell }\otimes V_{m}\cong V_{\ell +m}\oplus V_{\ell +m-1}\oplus \cdots \oplus V_{\ell -m+1}\oplus V_{\ell -m}.}
Consider, as an example, the tensor product of the four-dimensional representation V3/2 and the three-dimensional representation V1. The tensor product representation V3/2 ⊗ V1 has dimension 12 and decomposes as
{\displaystyle V_{3/2}\otimes V_{1}\cong V_{5/2}\oplus V_{3/2}\oplus V_{1/2}}
where the representations on the right-hand side have dimension 6, 4, and 2, respectively. We may summarize this result arithmetically as 4 × 3 = 6 + 4 + 2.
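The decomposition rule and its dimension bookkeeping are easy to script; a minimal sketch (su2_tensor_decomposition is our own helper name, with labels handled as exact fractions):

```python
from fractions import Fraction

def su2_tensor_decomposition(l, m):
    """Labels of the irreducible summands of V_l (x) V_m for SU(2), l >= m."""
    l, m = Fraction(l), Fraction(m)
    labels = []
    j = l + m
    while j >= l - m:        # j runs from l+m down to l-m in integer steps
        labels.append(j)
        j -= 1
    return labels

labels = su2_tensor_decomposition(Fraction(3, 2), 1)
dims = [int(2 * j + 1) for j in labels]
assert dims == [6, 4, 2]     # V_{5/2}, V_{3/2}, V_{1/2}
assert sum(dims) == 4 * 3    # the arithmetic 4 x 3 = 6 + 4 + 2
```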
=== The SU(3) case ===
In the case of the group SU(3), all the irreducible representations can be generated from the standard 3-dimensional representation and its dual, as follows. To generate the representation with label (m1, m2), one takes the tensor product of m1 copies of the standard representation and m2 copies of the dual of the standard representation, and then takes the invariant subspace generated by the tensor product of the highest weight vectors.
In contrast to the situation for SU(2), in the Clebsch–Gordan decomposition for SU(3), a given irreducible representation W may occur more than once in the decomposition of U ⊗ V.
== Tensor power ==
As with vector spaces, one can define the kth tensor power of a representation V to be the vector space V⊗k with the action given above.
=== The symmetric and alternating square ===
Over a field of characteristic zero, the symmetric and alternating squares are subrepresentations of the second tensor power. They can be used to define the Frobenius–Schur indicator, which indicates whether a given irreducible character is real, complex, or quaternionic. They are examples of Schur functors.
They are defined as follows.
Let V be a vector space. Define an endomorphism T of V ⊗ V as follows:
{\displaystyle {\begin{aligned}T:V\otimes V&\longrightarrow V\otimes V\\v\otimes w&\longmapsto w\otimes v.\end{aligned}}}
It is an involution (its own inverse), and so is an automorphism of V ⊗ V.
Define two subsets of the second tensor power of V,
{\displaystyle {\begin{aligned}\operatorname {Sym} ^{2}(V)&:=\{v\in V\otimes V\mid T(v)=v\}\\\operatorname {Alt} ^{2}(V)&:=\{v\in V\otimes V\mid T(v)=-v\}\end{aligned}}}
These are the symmetric square of V, written V ⊙ V, and the alternating square of V, written V ∧ V, respectively. The symmetric and alternating squares are also known as the symmetric part and antisymmetric part of the tensor product.
==== Properties ====
The second tensor power of a linear representation V of a group G decomposes as the direct sum of the symmetric and alternating squares:
{\displaystyle V^{\otimes 2}=V\otimes V\cong \operatorname {Sym} ^{2}(V)\oplus \operatorname {Alt} ^{2}(V)}
as representations. In particular, both are subrepresentations of the second tensor power. In the language of modules over the group ring, the symmetric and alternating squares are C[G]-submodules of V ⊗ V.
If V has a basis {v1, v2, …, vn}, then the symmetric square has a basis {vi ⊗ vj + vj ⊗ vi ∣ 1 ≤ i ≤ j ≤ n} and the alternating square has a basis {vi ⊗ vj − vj ⊗ vi ∣ 1 ≤ i < j ≤ n}. Accordingly,
{\displaystyle {\begin{aligned}\dim \operatorname {Sym} ^{2}(V)&={\frac {\dim V(\dim V+1)}{2}},\\\dim \operatorname {Alt} ^{2}(V)&={\frac {\dim V(\dim V-1)}{2}}.\end{aligned}}}
Let χ : G → C be the character of V. Then we can calculate the characters of the symmetric and alternating squares as follows: for all g in G,
{\displaystyle {\begin{aligned}\chi _{\operatorname {Sym} ^{2}(V)}(g)&={\frac {1}{2}}(\chi (g)^{2}+\chi (g^{2})),\\\chi _{\operatorname {Alt} ^{2}(V)}(g)&={\frac {1}{2}}(\chi (g)^{2}-\chi (g^{2})).\end{aligned}}}
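These formulas are straightforward to evaluate; as an illustration (our own example, not from the article), take the 3-dimensional permutation representation of S3, whose character counts fixed points:

```python
from itertools import permutations

def fixed_points(p):
    """Character of the permutation representation: number of fixed points."""
    return sum(1 for i, pi in enumerate(p) if pi == i)

def compose(p, q):
    """(p o q)(i) = p[q[i]], used to evaluate the character at g^2."""
    return tuple(p[q[i]] for i in range(len(q)))

for g in permutations(range(3)):
    chi_g, chi_g2 = fixed_points(g), fixed_points(compose(g, g))
    chi_sym = (chi_g**2 + chi_g2) / 2
    chi_alt = (chi_g**2 - chi_g2) / 2
    print(g, chi_sym, chi_alt)
# At the identity, chi_sym = 6 = 3*4/2 and chi_alt = 3 = 3*2/2, matching
# the dimension formulas above.
```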
=== The symmetric and exterior powers ===
As in multilinear algebra, over a field of characteristic zero, one can more generally define the kth symmetric power Sym^k(V) and the kth exterior power Λ^k(V), which are subspaces of the kth tensor power (see those pages for more detail on this construction). They are also subrepresentations, but higher tensor powers no longer decompose as their direct sum.
The Schur–Weyl duality computes the irreducible representations occurring in tensor powers of representations of the general linear group G = GL(V). Precisely, as an Sn × G-module,
{\displaystyle V^{\otimes n}\simeq \bigoplus _{\lambda }M_{\lambda }\otimes S^{\lambda }(V)}
where Mλ is an irreducible representation of the symmetric group Sn corresponding to a partition λ of n (in decreasing order), and Sλ(V) is the image of the Young symmetrizer cλ : V⊗n → V⊗n.
The mapping V ↦ Sλ(V) is a functor called the Schur functor. It generalizes the constructions of symmetric and exterior powers:
{\displaystyle S^{(n)}(V)=\operatorname {Sym} ^{n}V,\,\,S^{(1,1,\dots ,1)}(V)=\wedge ^{n}V.}
In particular, as a G-module, the above simplifies to
{\displaystyle V^{\otimes n}\simeq \bigoplus _{\lambda }S^{\lambda }(V)^{\oplus m_{\lambda }}}
where mλ = dim Mλ. Moreover, the multiplicity mλ may be computed by the Frobenius formula (or the hook length formula). For example, take n = 3. Then there are exactly three partitions, 3 = 2 + 1 = 1 + 1 + 1, and, as it turns out, m(3) = m(1,1,1) = 1 and m(2,1) = 2. Hence,
{\displaystyle V^{\otimes 3}\simeq \operatorname {Sym} ^{3}V\bigoplus \wedge ^{3}V\bigoplus S^{(2,1)}(V)^{\oplus 2}.}
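The quoted multiplicities can be checked with the hook length formula, dim Mλ = n!/∏(hook lengths); a short sketch (hook_length_dim is our own helper):

```python
from math import factorial

def hook_length_dim(partition):
    """dim of the S_n irrep for a partition, via the hook length formula."""
    n = sum(partition)
    prod = 1
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1                                  # cells to the right
            leg = sum(1 for r in partition[i + 1:] if r > j)   # cells below
            prod *= arm + leg + 1                              # hook length
    return factorial(n) // prod

assert hook_length_dim((3,)) == 1        # m_(3)
assert hook_length_dim((1, 1, 1)) == 1   # m_(1,1,1)
assert hook_length_dim((2, 1)) == 2      # m_(2,1)
```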
== Tensor products involving Schur functors ==
Let Sλ denote the Schur functor defined according to a partition λ. Then there is the following decomposition:
{\displaystyle S^{\lambda }V\otimes S^{\mu }V\simeq \bigoplus _{\nu }(S^{\nu }V)^{\oplus N_{\lambda \mu \nu }}}
where the multiplicities Nλμν are given by the Littlewood–Richardson rule.
Given finite-dimensional vector spaces V, W, the Schur functors Sλ give the decomposition
{\displaystyle \operatorname {Sym} (W^{*}\otimes V)\simeq \bigoplus _{\lambda }S^{\lambda }(W^{*})\otimes S^{\lambda }(V)}
The left-hand side can be identified with the ring of polynomial functions on Hom(V, W), k[Hom(V, W)] = k[V* ⊗ W], and so the above also gives the decomposition of k[Hom(V, W)].
== Tensor products of representations as representations of product groups ==
Let G, H be two groups and let (π, V) and (ρ, W) be representations of G and H, respectively. Then we can let the direct product group G × H act on the tensor product space V ⊗ W by the formula
{\displaystyle (g,h)\cdot (v\otimes w)=\pi (g)v\otimes \rho (h)w.}
Even if G = H, we can still perform this construction, so that the tensor product of two representations of G could, alternatively, be viewed as a representation of G × G rather than a representation of G. It is therefore important to clarify whether the tensor product of two representations of G is being viewed as a representation of G or as a representation of G × G.
In contrast to the Clebsch–Gordan problem discussed above, the tensor product of two irreducible representations of G is irreducible when viewed as a representation of the product group G × G.
== See also ==
Dual representation
Hermite reciprocity
Clebsch–Gordan coefficients
Lie group representation
Lie algebra representation
Kronecker product
Hopf algebra
== Notes ==
== References ==
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666.
James, Gordon Douglas (2001). Representations and characters of groups. Liebeck, Martin W. (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0521003926. OCLC 52220683.
Procesi, Claudio (2007). Lie Groups: An Approach through Invariants and Representations. Springer. ISBN 9780387260402.
Serre, Jean-Pierre (1977). Linear Representations of Finite Groups. Springer-Verlag. ISBN 978-0-387-90190-9. OCLC 2202385. | Wikipedia/Tensor_product_of_representations |
In the mathematical field of representation theory, a weight of an algebra A over a field F is an algebra homomorphism from A to F, or equivalently, a one-dimensional representation of A over F. It is the algebra analogue of a multiplicative character of a group. The importance of the concept, however, stems from its application to representations of Lie algebras and hence also to representations of algebraic and Lie groups. In this context, a weight of a representation is a generalization of the notion of an eigenvalue, and the corresponding eigenspace is called a weight space.
== Motivation and general concept ==
Given a set S of n × n
matrices over the same field, each of which is diagonalizable, and any two of which commute, it is always possible to simultaneously diagonalize all of the elements of S. Equivalently, for any set S of mutually commuting semisimple linear transformations of a finite-dimensional vector space V there exists a basis of V consisting of simultaneous eigenvectors of all elements of S. Each of these common eigenvectors v ∈ V defines a linear functional on the subalgebra U of End(V ) generated by the set of endomorphisms S; this functional is defined as the map which associates to each element of U its eigenvalue on the eigenvector v. This map is also multiplicative, and sends the identity to 1; thus it is an algebra homomorphism from U to the base field. This "generalized eigenvalue" is a prototype for the notion of a weight.
The notion is closely related to the idea of a multiplicative character in group theory, which is a homomorphism χ from a group G to the multiplicative group of a field F. Thus χ : G → F× satisfies χ(e) = 1 (where e is the identity element of G) and
χ(gh) = χ(g)χ(h)
for all g, h in G.
Indeed, if G acts on a vector space V over F, each simultaneous eigenspace for every element of G, if such exists, determines a multiplicative character on G: the eigenvalue on this common eigenspace of each element of the group.
The notion of multiplicative character can be extended to any algebra A over F, by replacing χ : G → F× by a linear map χ : A → F with
χ(ab) = χ(a)χ(b)
for all a, b in A. If an algebra A acts on a vector space V over F, then to any simultaneous eigenspace there corresponds an algebra homomorphism from A to F assigning to each element of A its eigenvalue.
If A is a Lie algebra (which is generally not an associative algebra), then instead of requiring multiplicativity of a character, one requires that it maps any Lie bracket to the corresponding commutator; but since F is commutative this simply means that this map must vanish on Lie brackets: χ([a,b]) = 0. A weight on a Lie algebra g over a field F is a linear map λ: g → F with λ([x, y]) = 0 for all x, y in g. Any weight on a Lie algebra g vanishes on the derived algebra [g,g] and hence descends to a weight on the abelian Lie algebra g/[g,g]. Thus weights are primarily of interest for abelian Lie algebras, where they reduce to the simple notion of a generalized eigenvalue for a space of commuting linear transformations.
If G is a Lie group or an algebraic group, then a multiplicative character θ: G → F× induces a weight χ = dθ: g → F on its Lie algebra by differentiation. (For Lie groups, this is differentiation at the identity element of G, and the algebraic group case is an abstraction using the notion of a derivation.)
== Weights in the representation theory of semisimple Lie algebras ==
Let 𝔤 be a complex semisimple Lie algebra and 𝔥 a Cartan subalgebra of 𝔤. In this section, we describe the concepts needed to formulate the "theorem of the highest weight" classifying the finite-dimensional representations of 𝔤. Notably, we will explain the notion of a "dominant integral element". The representations themselves are described in the article linked to above.
=== Weight of a representation ===
Let σ : 𝔤 → End(V) be a representation of a Lie algebra 𝔤 on a vector space V over a field of characteristic 0, say C, and let λ : 𝔥 → C be a linear functional on 𝔥, where 𝔥 is a Cartan subalgebra of 𝔤. Then the weight space of V with weight λ is the subspace Vλ given by
{\displaystyle V_{\lambda }:=\{v\in V:\forall H\in {\mathfrak {h}},\,(\sigma (H))(v)=\lambda (H)v\}.}
A weight of the representation V (the representation is often referred to in short by the vector space V over which elements of the Lie algebra act rather than the map σ) is a linear functional λ such that the corresponding weight space is nonzero. Nonzero elements of the weight space are called weight vectors. That is to say, a weight vector is a simultaneous eigenvector for the action of the elements of 𝔥, with the corresponding eigenvalues given by λ.
If V is the direct sum of its weight spaces
{\displaystyle V=\bigoplus _{\lambda \in {\mathfrak {h}}^{*}}V_{\lambda }}
then V is called a weight module; this corresponds to there being a common eigenbasis (a basis of simultaneous eigenvectors) for all the represented elements of the algebra, i.e., to there being simultaneously diagonalizable matrices (see diagonalizable matrix).
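Concretely, decomposing a weight module amounts to simultaneously diagonalizing the commuting operators σ(H). A minimal numpy sketch for a single Cartan generator (the matrix below is an illustrative choice):

```python
import numpy as np

# sigma(H) for one Cartan generator H, acting diagonalizably on V = R^3.
sigma_H = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, -2.0]])

eigvals, eigvecs = np.linalg.eigh(sigma_H)

# Group eigenvectors by eigenvalue: each group spans a weight space V_lambda.
weight_spaces = {}
for lam, v in zip(np.round(eigvals, 9), eigvecs.T):
    weight_spaces.setdefault(lam, []).append(v)

for lam, basis in weight_spaces.items():
    print(f"weight {lam}: dim V_lambda = {len(basis)}")
# weight -2.0 has dimension 1 and weight 1.0 has dimension 2;
# V is the direct sum of these weight spaces.
```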
If G is a group with Lie algebra 𝔤, every finite-dimensional representation of G induces a representation of 𝔤. A weight of the representation of G is then simply a weight of the associated representation of 𝔤. There is a subtle distinction between weights of group representations and Lie algebra representations, which is that there is a different notion of integrality condition in the two cases; see below. (The integrality condition is more restrictive in the group case, reflecting that not every representation of the Lie algebra comes from a representation of the group.)
=== Action of the root vectors ===
For the adjoint representation ad : 𝔤 → End(𝔤) of 𝔤, the space over which the representation acts is the Lie algebra itself. Then the nonzero weights are called roots, the weight spaces are called root spaces, and the weight vectors, which are thus elements of 𝔤, are called root vectors. Explicitly, a linear functional α on the Cartan subalgebra 𝔥 is called a root if α ≠ 0 and there exists a nonzero X in 𝔤 such that
{\displaystyle [H,X]=\alpha (H)X}
for all H in 𝔥. The collection of roots forms a root system.
From the perspective of representation theory, the significance of the roots and root vectors is the following elementary but important result: If σ : 𝔤 → End(V) is a representation of 𝔤, v is a weight vector with weight λ and X is a root vector with root α, then
{\displaystyle \sigma (H)(\sigma (X)(v))=[(\lambda +\alpha )(H)](\sigma (X)(v))}
for all H in 𝔥. That is, σ(X)(v) is either the zero vector or a weight vector with weight λ + α. Thus, the action of X maps the weight space with weight λ into the weight space with weight λ + α.
For example, if 𝔤 = 𝔰𝔲C(2), that is, 𝔰𝔲(2) complexified, the root vectors H, X, Y span the algebra and have weights 0, 1, and −1 respectively. The Cartan subalgebra is spanned by H, and the action of H classifies the weight spaces. The action of X maps a weight space of weight λ to the weight space of weight λ + 1, the action of Y maps a weight space of weight λ to the weight space of weight λ − 1, and the action of H maps the weight spaces to themselves. In the fundamental representation, with weights ±1/2 and weight spaces V±1/2, X maps V+1/2 to zero and V−1/2 to V+1/2, while Y maps V−1/2 to zero and V+1/2 to V−1/2, and H maps each weight space to itself.
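This raising and lowering is visible directly in matrices. A sketch in the 2-dimensional representation of sl(2, C) with the standard basis (note that in this normalization the eigenvalues of H are ±1; the weights ±1/2 quoted above correspond to the spin convention H/2):

```python
import numpy as np

# Standard sl(2, C) basis in the fundamental (2-dimensional) representation.
H = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [0.0, 0.0]])   # raising operator
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # lowering operator

v_plus = np.array([1.0, 0.0])    # weight vector: H v = +1 v
v_minus = np.array([0.0, 1.0])   # weight vector: H v = -1 v

assert np.allclose(H @ v_plus, v_plus)     # H preserves each weight space
assert np.allclose(X @ v_plus, 0)          # X kills the top weight vector
assert np.allclose(X @ v_minus, v_plus)    # X raises the weight
assert np.allclose(Y @ v_plus, v_minus)    # Y lowers the weight
assert np.allclose(X @ Y - Y @ X, H)       # [X, Y] = H
```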
=== Integral element ===
Let 𝔥0* be the real subspace of 𝔥* generated by the roots of 𝔤, where 𝔥* is the space of linear functionals λ : 𝔥 → C, the dual space to 𝔥. For computations, it is convenient to choose an inner product that is invariant under the Weyl group, that is, under reflections about the hyperplanes orthogonal to the roots. We may then use this inner product to identify 𝔥0* with a subspace 𝔥0 of 𝔥. With this identification, the coroot associated to a root α is given as
{\displaystyle H_{\alpha }=2{\frac {\alpha }{(\alpha ,\alpha )}}}
where (α, β) denotes the inner product of vectors α, β.
In addition to this inner product, it is common for an angle bracket notation ⟨·,·⟩ to be used in discussions of root systems, with the angle bracket defined as
{\displaystyle \langle \lambda ,\alpha \rangle \equiv (\lambda ,H_{\alpha }).}
The angle bracket here is not an inner product, as it is not symmetric, and is linear only in the first argument. The angle bracket notation should not be confused with the inner product (·,·).
We now define two different notions of integrality for elements of 𝔥0. The motivation for these definitions is simple: the weights of finite-dimensional representations of 𝔤 satisfy the first integrality condition, while if G is a group with Lie algebra 𝔤, the weights of finite-dimensional representations of G satisfy the second integrality condition.
An element λ ∈ 𝔥0 is algebraically integral if
{\displaystyle (\lambda ,H_{\alpha })=2{\frac {(\lambda ,\alpha )}{(\alpha ,\alpha )}}\in \mathbb {Z} }
for all roots α. The motivation for this condition is that the coroot Hα can be identified with the H element in a standard {X, Y, H} basis for an sl(2, C)-subalgebra of 𝔤. By elementary results for sl(2, C), the eigenvalues of Hα in any finite-dimensional representation must be an integer. We conclude that, as stated above, the weight of any finite-dimensional representation of 𝔤 is algebraically integral.
The fundamental weights ω1, …, ωn are defined by the property that they form a basis of 𝔥0 dual to the set of coroots associated to the simple roots. That is, the fundamental weights are defined by the condition
{\displaystyle 2{\frac {(\omega _{i},\alpha _{j})}{(\alpha _{j},\alpha _{j})}}=\delta _{i,j}}
where α1, …, αn are the simple roots. An element λ is then algebraically integral if and only if it is an integral combination of the fundamental weights. The set of all 𝔤-integral weights is a lattice in 𝔥0 called the weight lattice for 𝔤, denoted by P(𝔤).
The figure shows the example of the Lie algebra sl(3, C), whose root system is the A2 root system. There are two simple roots, γ1 and γ2. The first fundamental weight, ω1, should be orthogonal to γ2 and should project orthogonally to half of γ1, and similarly for ω2. The weight lattice is then the triangular lattice.
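The defining condition for the fundamental weights is a small linear system; a numpy sketch for the A2 example just described (the coordinates chosen for the simple roots are one standard choice):

```python
import numpy as np

# Simple roots of A2 in the plane, normalized to unit length.
alpha1 = np.array([1.0, 0.0])
alpha2 = np.array([-0.5, np.sqrt(3) / 2])
simple_roots = [alpha1, alpha2]

# Fundamental weights solve 2 (omega_i, alpha_j) / (alpha_j, alpha_j) = delta_ij.
M = np.array([2 * a / (a @ a) for a in simple_roots])  # rows: coroots H_alpha_j
omegas = np.linalg.solve(M, np.eye(2)).T               # row i solves M x = e_i

omega1, omega2 = omegas
assert np.isclose(2 * (omega1 @ alpha1) / (alpha1 @ alpha1), 1.0)
assert np.isclose(2 * (omega1 @ alpha2) / (alpha2 @ alpha2), 0.0)
print(omega1, omega2)  # each omega_i is orthogonal to the other simple root
```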
Suppose now that the Lie algebra 𝔤 is the Lie algebra of a Lie group G. Then we say that λ ∈ 𝔥0 is analytically integral (G-integral) if for each t in 𝔥 such that exp(t) = 1 ∈ G we have (λ, t) ∈ 2πiZ. The reason for making this definition is that if a representation of 𝔤 arises from a representation of G, then the weights of the representation will be G-integral. For G semisimple, the set of all G-integral weights is a sublattice P(G) ⊂ P(𝔤). If G is simply connected, then P(G) = P(𝔤). If G is not simply connected, then the lattice P(G) is smaller than P(𝔤) and their quotient is isomorphic to the fundamental group of G.
=== Partial ordering on the space of weights ===
We now introduce a partial ordering on the set of weights, which will be used to formulate the theorem of the highest weight describing the representations of 𝔤. Recall that R is the set of roots; we now fix a set R+ of positive roots.
Consider two elements μ and λ of 𝔥0. We are mainly interested in the case where μ and λ are integral, but this assumption is not necessary to the definition we are about to introduce. We then say that μ is higher than λ, which we write as μ ⪰ λ, if μ − λ is expressible as a linear combination of positive roots with non-negative real coefficients. This means, roughly, that "higher" means in the directions of the positive roots. We equivalently say that λ is "lower" than μ, which we write as λ ⪯ μ.
This is only a partial ordering; it can easily happen that μ is neither higher nor lower than λ.
=== Dominant weight ===
An integral element λ is dominant if (λ, γ) ≥ 0 for each positive root γ. Equivalently, λ is dominant if it is a non-negative integer combination of the fundamental weights. In the A2 case, the dominant integral elements live in a 60-degree sector. The notion of being dominant is not the same as being higher than zero: the grey area in the picture on the right is a 120-degree sector, strictly containing the 60-degree sector corresponding to the dominant integral elements.
The set of all λ (not necessarily integral) such that (λ, γ) ≥ 0 for all positive roots γ is known as the fundamental Weyl chamber associated to the given set of positive roots.
=== Theorem of the highest weight ===
A weight λ of a representation V of 𝔤 is called a highest weight if every other weight of V is lower than λ.
The theory classifying the finite-dimensional irreducible representations of 𝔤 is by means of a "theorem of the highest weight". The theorem says that
(1) every irreducible (finite-dimensional) representation has a highest weight,
(2) the highest weight is always a dominant, algebraically integral element,
(3) two irreducible representations with the same highest weight are isomorphic, and
(4) every dominant, algebraically integral element is the highest weight of an irreducible representation.
The last point is the most difficult one; the representations may be constructed using Verma modules.
=== Highest-weight module ===
A representation (not necessarily finite-dimensional) V of 𝔤 is called a highest-weight module if it is generated by a weight vector v ∈ V that is annihilated by the action of all positive root spaces in 𝔤. Every irreducible 𝔤-module with a highest weight is necessarily a highest-weight module, but in the infinite-dimensional case, a highest-weight module need not be irreducible.
For each λ ∈ 𝔥*—not necessarily dominant or integral—there exists a unique (up to isomorphism) simple highest-weight 𝔤-module with highest weight λ, which is denoted L(λ), but this module is infinite-dimensional unless λ is dominant integral. It can be shown that each highest-weight module with highest weight λ is a quotient of the Verma module M(λ). This is just a restatement of the universality property in the definition of a Verma module.
Every finite-dimensional highest weight module is irreducible.
== See also ==
Classifying finite-dimensional representations of Lie algebras
Representation theory of a connected compact Lie group
Highest-weight category
Root system
== Notes ==
== References == | Wikipedia/Weight_(representation_theory) |
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V → W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.
If a linear map is a bijection then it is called a linear isomorphism. In the case where V = W, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis. Sometimes the term linear function has the same meaning as linear map, while in analysis it does not.
A linear map from V to W always maps the origin of V to the origin of W. Moreover, it maps linear subspaces in V onto linear subspaces in W (possibly of a lower dimension); for example, it maps a plane through the origin in V to either a plane through the origin in W, a line through the origin in W, or just the origin in W. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
In the language of category theory, linear maps are the morphisms of vector spaces, and they form a category equivalent to the one of matrices.
== Definition and first consequences ==
Let V and W be vector spaces over the same field K.
A function f : V → W is said to be a linear map if for any two vectors u, v ∈ V and any scalar c ∈ K the following two conditions are satisfied:
Additivity / operation of addition
{\displaystyle f(\mathbf {u} +\mathbf {v} )=f(\mathbf {u} )+f(\mathbf {v} )}
Homogeneity of degree 1 / operation of scalar multiplication
{\displaystyle f(c\mathbf {u} )=cf(\mathbf {u} )}
Thus, a linear map is said to be operation preserving. In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication.
By the associativity of the addition operation denoted as +, for any vectors u1, …, un ∈ V and scalars c1, …, cn ∈ K, the following equality holds:
{\displaystyle f(c_{1}\mathbf {u} _{1}+\cdots +c_{n}\mathbf {u} _{n})=c_{1}f(\mathbf {u} _{1})+\cdots +c_{n}f(\mathbf {u} _{n}).}
Thus a linear map is one which preserves linear combinations.
Denoting the zero elements of the vector spaces V and W by 0V and 0W respectively, it follows that f(0V) = 0W: let c = 0 and v ∈ V in the equation for homogeneity of degree 1, so that
{\displaystyle f(\mathbf {0} _{V})=f(0\mathbf {v} )=0f(\mathbf {v} )=\mathbf {0} _{W}.}
A linear map V → K with K viewed as a one-dimensional vector space over itself is called a linear functional.
These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication.
== Examples ==
A prototypical example that gives linear maps their name is a function f : R → R : x ↦ cx, of which the graph is a line through the origin.
More generally, any homothety v ↦ cv centered in the origin of a vector space is a linear map (here c is a scalar).
The zero map x ↦ 0 between two vector spaces (over the same field) is linear.
The identity map on any module is a linear operator.
For real numbers, the map x ↦ x² is not linear.
For real numbers, the map x ↦ x + 1 is not linear (but is an affine transformation).
If A is an m × n real matrix, then A defines a linear map from Rn to Rm by sending a column vector x ∈ Rn to the column vector Ax ∈ Rm. Conversely, any linear map between finite-dimensional vector spaces can be represented in this manner; see § Matrices, below.
If f : V → W is an isometry between real normed spaces such that f(0) = 0, then f is a linear map. This result is not necessarily true for complex normed spaces.
Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions (a linear operator is a linear endomorphism, that is, a linear map with the same domain and codomain). Indeed,
{\displaystyle {\frac {d}{dx}}\left(af(x)+bg(x)\right)=a{\frac {df(x)}{dx}}+b{\frac {dg(x)}{dx}}.}
A definite integral over some interval I is a linear map from the space of all real-valued integrable functions on I to R. Indeed,
{\displaystyle \int _{u}^{v}\left(af(x)+bg(x)\right)dx=a\int _{u}^{v}f(x)dx+b\int _{u}^{v}g(x)dx.}
An indefinite integral (or antiderivative) with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on R to the space of all real-valued, differentiable functions on R. Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions.
If V and W are finite-dimensional vector spaces over a field F, of respective dimensions m and n, then the function that maps linear maps f : V → W to n × m matrices in the way described in § Matrices (below) is a linear map, and even a linear isomorphism.
The expected value of a random variable (which is in fact a function, and as such an element of a vector space) is linear, as for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X], but the variance of a random variable is not linear.
=== Linear extensions ===
Often, a linear map is constructed by defining it on a subset of a vector space and then extending by linearity to the linear span of the domain.
Suppose X and Y are vector spaces and f : S → Y is a function defined on some subset S ⊆ X. Then a linear extension of f to X, if it exists, is a linear map F : X → Y defined on X that extends f (meaning that F(s) = f(s) for all s ∈ S) and takes its values from the codomain of f.
When the subset S is a vector subspace of X, then a (Y-valued) linear extension of f to all of X is guaranteed to exist if (and only if) f : S → Y is a linear map. In particular, if f has a linear extension to span S, then it has a linear extension to all of X.
The map f : S → Y can be extended to a linear map F : span S → Y if and only if whenever n > 0 is an integer, c1, …, cn are scalars, and s1, …, sn ∈ S are vectors such that 0 = c1s1 + ⋯ + cnsn, then necessarily 0 = c1f(s1) + ⋯ + cnf(sn).
If a linear extension of f : S → Y exists, then the linear extension F : span S → Y is unique and
{\displaystyle F\left(c_{1}s_{1}+\cdots +c_{n}s_{n}\right)=c_{1}f\left(s_{1}\right)+\cdots +c_{n}f\left(s_{n}\right)}
holds for all n, c1, …, cn, and s1, …, sn as above.
If S is linearly independent, then every function f : S → Y into any vector space has a linear extension to a (linear) map span S → Y (the converse is also true).
For example, if X = R2 and Y = R, then the assignment (1, 0) → −1 and (0, 1) → 2 can be linearly extended from the linearly independent set of vectors S := {(1, 0), (0, 1)} to a linear map on span{(1, 0), (0, 1)} = R2. The unique linear extension F : R2 → R is the map that sends (x, y) = x(1, 0) + y(0, 1) ∈ R2 to
{\displaystyle F(x,y)=x(-1)+y(2)=-x+2y.}
Every (scalar-valued) linear functional f defined on a vector subspace of a real or complex vector space X has a linear extension to all of X. Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional f is dominated by some given seminorm p : X → R (meaning that |f(m)| ≤ p(m) holds for all m in the domain of f), then there exists a linear extension to X that is also dominated by p.
== Matrices ==
If V and W are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from V to W can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if A is a real m × n matrix, then f(x) = Ax describes a linear map Rn → Rm (see Euclidean space).
Let {v1, …, vn} be a basis for V. Then every vector v ∈ V is uniquely determined by the coefficients c1, …, cn in the field R:
{\displaystyle \mathbf {v} =c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n}.}
If f : V → W is a linear map,
{\displaystyle f(\mathbf {v} )=f(c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n})=c_{1}f(\mathbf {v} _{1})+\cdots +c_{n}f\left(\mathbf {v} _{n}\right),}
which implies that the function f is entirely determined by the vectors f(v1), …, f(vn).
Now let {w1, …, wm} be a basis for W. Then we can represent each vector f(vj) as
{\displaystyle f\left(\mathbf {v} _{j}\right)=a_{1j}\mathbf {w} _{1}+\cdots +a_{mj}\mathbf {w} _{m}.}
Thus, the function f is entirely determined by the values of aij. If we put these values into an m × n matrix M, then we can conveniently use it to compute the vector output of f for any vector in V. To get M, every column j of M is the vector
{\displaystyle {\begin{pmatrix}a_{1j}\\\vdots \\a_{mj}\end{pmatrix}}}
corresponding to f(vj) as defined above. To define it more clearly, for some column j that corresponds to the mapping f(vj),
{\displaystyle \mathbf {M} ={\begin{pmatrix}\ \cdots &a_{1j}&\cdots \ \\&\vdots &\\&a_{mj}&\end{pmatrix}}}
where M is the matrix of f. In other words, every column j = 1, …, n has a corresponding vector f(vj) whose coordinates a1j, …, amj are the elements of column j. A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen.
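A minimal numpy sketch of this recipe, using the standard bases of R^2 for both spaces (an illustrative simplification; matrix_of is our own helper):

```python
import numpy as np

def matrix_of(f, n):
    """Build the matrix of a linear map f: R^n -> R^m from the images of
    the standard basis vectors: column j is f(e_j)."""
    return np.column_stack([f(e) for e in np.eye(n)])

# Example linear map f(x, y) = (x + 2y, 3y).
f = lambda v: np.array([v[0] + 2 * v[1], 3 * v[1]])

M = matrix_of(f, 2)
v = np.array([5.0, 7.0])
assert np.allclose(M @ v, f(v))   # applying M reproduces f
print(M)                          # [[1. 2.] [0. 3.]]
```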
The matrices of a linear transformation can be represented visually:
Matrix for T relative to B: A
Matrix for T relative to B′: A′
Transition matrix from B′ to B: P
Transition matrix from B to B′: P⁻¹
Such that starting in the bottom left corner [v]B′ and looking for the bottom right corner [T(v)]B′, one would left-multiply—that is, A′[v]B′ = [T(v)]B′. The equivalent method would be the "longer" method going clockwise from the same point such that [v]B′ is left-multiplied with P⁻¹AP, or P⁻¹AP[v]B′ = [T(v)]B′.
=== Examples in two dimensions ===
In two-dimensional space R2 linear maps are described by 2 × 2 matrices. These are some examples:
rotation
by 90 degrees counterclockwise:
{\displaystyle \mathbf {A} ={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}}
by an angle θ counterclockwise:
{\displaystyle \mathbf {A} ={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}}
reflection
through the x axis:
{\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}}
through the y axis:
{\displaystyle \mathbf {A} ={\begin{pmatrix}-1&0\\0&1\end{pmatrix}}}
through a line making an angle θ with the x axis:
{\displaystyle \mathbf {A} ={\begin{pmatrix}\cos 2\theta &\sin 2\theta \\\sin 2\theta &-\cos 2\theta \end{pmatrix}}}
scaling by 2 in all directions:
{\displaystyle \mathbf {A} ={\begin{pmatrix}2&0\\0&2\end{pmatrix}}=2\mathbf {I} }
horizontal shear mapping:
{\displaystyle \mathbf {A} ={\begin{pmatrix}1&m\\0&1\end{pmatrix}}}
skew of the y axis by an angle θ:
{\displaystyle \mathbf {A} ={\begin{pmatrix}1&-\sin \theta \\0&\cos \theta \end{pmatrix}}}
squeeze mapping:
{\displaystyle \mathbf {A} ={\begin{pmatrix}k&0\\0&{\frac {1}{k}}\end{pmatrix}}}
projection onto the y axis:
{\displaystyle \mathbf {A} ={\begin{pmatrix}0&0\\0&1\end{pmatrix}}.}
If a linear map is only composed of rotation, reflection, and/or uniform scaling, then the linear map is a conformal linear transformation.
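For instance, the rotation and reflection matrices above can be applied and composed directly; a small numpy demonstration:

```python
import numpy as np

theta = np.pi / 6
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflect_x = np.array([[1.0, 0.0], [0.0, -1.0]])

v = np.array([1.0, 0.0])
print(rotation @ v)   # v rotated by 30 degrees counterclockwise

# Composition of linear maps is matrix multiplication; a rotation followed
# by a reflection is again linear, and orientation-reversing.
combined = reflect_x @ rotation
assert np.isclose(np.linalg.det(combined), -1.0)
```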
== Vector space of linear maps ==
The composition of linear maps is linear: if f : V → W and g : W → Z are linear, then so is their composition g ∘ f : V → Z. It follows from this that the class of all vector spaces over a given field K, together with K-linear maps as morphisms, forms a category.
The inverse of a linear map, when defined, is again a linear map.
If f1 : V → W and f2 : V → W are linear, then so is their pointwise sum f1 + f2, which is defined by (f1 + f2)(x) = f1(x) + f2(x).
If f : V → W is linear and α is an element of the ground field K, then the map αf, defined by (αf)(x) = α(f(x)), is also linear.
Thus the set L(V, W) of linear maps from V to W itself forms a vector space over K, sometimes denoted Hom(V, W). Furthermore, in the case that V = W, this vector space, denoted End(V), is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.
In the finite-dimensional case again, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars.
=== Endomorphisms and automorphisms ===
A linear transformation f : V → V is an endomorphism of V; the set of all such endomorphisms End(V) together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field K (and in particular a ring). The multiplicative identity element of this algebra is the identity map id : V → V.
An endomorphism of V that is also an isomorphism is called an automorphism of V. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of V forms a group, the automorphism group of V, which is denoted by Aut(V) or GL(V). Since the automorphisms are precisely those endomorphisms which possess inverses under composition, Aut(V) is the group of units in the ring End(V).
If V has finite dimension n, then End(V) is isomorphic to the associative algebra of all n × n matrices with entries in K. The automorphism group of V is isomorphic to the general linear group GL(n, K) of all n × n invertible matrices with entries in K.
== Kernel, image and the rank–nullity theorem ==
If f : V → W is linear, we define the kernel and the image or range of f by
{\displaystyle {\begin{aligned}\ker(f)&=\{\,\mathbf {x} \in V:f(\mathbf {x} )=\mathbf {0} \,\}\\\operatorname {im} (f)&=\{\,\mathbf {w} \in W:\mathbf {w} =f(\mathbf {x} ),\mathbf {x} \in V\,\}\end{aligned}}}
ker(f) is a subspace of V and im(f) is a subspace of W. The following dimension formula is known as the rank–nullity theorem:
{\displaystyle \dim(\ker(f))+\dim(\operatorname {im} (f))=\dim(V).}
The number dim(im(f)) is also called the rank of f and written as rank(f), or sometimes, ρ(f); the number dim(ker(f)) is called the nullity of f and written as null(f) or ν(f). If V and W are finite-dimensional, bases have been chosen and f is represented by the matrix A, then the rank and nullity of f are equal to the rank and nullity of the matrix A, respectively.
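In the matrix picture this is easy to compute; a short numpy sketch (the nullity is obtained from the rank via the theorem, rather than by finding the kernel explicitly):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # represents f: R^3 -> R^2

rank = np.linalg.matrix_rank(A)   # dim im(f)
nullity = A.shape[1] - rank       # dim ker(f), by rank-nullity
assert rank == 1 and nullity == 2
assert rank + nullity == A.shape[1]   # dim V = 3
```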
== Cokernel ==
A subtler invariant of a linear transformation f : V → W is the cokernel, which is defined as
{\displaystyle \operatorname {coker} (f):=W/f(V)=W/\operatorname {im} (f).}
This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the co-kernel is a quotient space of the target. Formally, one has the exact sequence
{\displaystyle 0\to \ker(f)\to V\to W\to \operatorname {coker} (f)\to 0.}
These can be interpreted thus: given a linear equation f(v) = w to solve,
the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty;
the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints.
The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W/f(V) is the dimension of the target space minus the dimension of the image.
As a simple example, consider the map f: R2 → R2, given by f(x, y) = (0, y). Then for an equation f(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b) or equivalently stated, (0, b) + (x, 0) (one degree of freedom). The kernel may be expressed as the subspace (x, 0) < V: the value of x is the freedom in a solution – while the cokernel may be expressed via the map W → R, (a, b) ↦ (a): given a vector (a, b), the value of a is the obstruction to there being a solution.
An example illustrating the infinite-dimensional case is afforded by the map f: R∞ → R∞, {an} ↦ {bn} with b1 = 0 and bn+1 = an for n > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel (ℵ0 + 0 = ℵ0 + 1), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the map h: R∞ → R∞, {an} ↦ {cn} with cn = an+1. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
=== Index ===
For a linear operator with finite-dimensional kernel and cokernel, one may define the index as
{\displaystyle \operatorname {ind} (f):=\dim(\ker(f))-\dim(\operatorname {coker} (f)),}
namely the degrees of freedom minus the number of constraints.
For a transformation between finite-dimensional vector spaces, this is just the difference dim(V) − dim(W), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem.
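For finite-dimensional spaces the index depends only on the dimensions involved, which a short sketch makes visible (assuming NumPy; the function name is our own):

```python
import numpy as np

def index(A: np.ndarray) -> int:
    """Index of the linear map given by A: dim ker - dim coker."""
    m, n = A.shape                      # A maps R^n -> R^m
    rank = np.linalg.matrix_rank(A)
    nullity = n - rank                  # dim ker
    coker = m - rank                    # dim coker
    return nullity - coker              # equals n - m, independent of A

A = np.random.rand(3, 5)
print(index(A))   # always 2 for a 3x5 matrix, by rank-nullity
```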
== Algebraic classifications of linear transformations ==
No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space.
Let V and W denote vector spaces over a field F and let T: V → W be a linear map.
=== Monomorphism ===
T is said to be injective or a monomorphism if any of the following equivalent conditions are true:
T is one-to-one as a map of sets.
ker T = {0V}
dim(ker T) = 0
T is monic or left-cancellable, which is to say, for any vector space U and any pair of linear maps R: U → V and S: U → V, the equation TR = TS implies R = S.
T is left-invertible, which is to say there exists a linear map S: W → V such that ST is the identity map on V.
=== Epimorphism ===
T is said to be surjective or an epimorphism if any of the following equivalent conditions are true:
T is onto as a map of sets.
coker T = {0W}
T is epic or right-cancellable, which is to say, for any vector space U and any pair of linear maps R: W → U and S: W → U, the equation RT = ST implies R = S.
T is right-invertible, which is to say there exists a linear map S: W → V such that TS is the identity map on W.
=== Isomorphism ===
T is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to T being both one-to-one and onto (a bijection of sets) or also to T being both epic and monic, and so being a bimorphism.
If T: V → V is an endomorphism, then:
If, for some positive integer n, the n-th iterate of T, Tn, is identically zero, then T is said to be nilpotent.
If T2 = T, then T is said to be idempotent.
If T = kI, where k is some scalar, then T is said to be a scaling transformation or scalar multiplication map; see scalar matrix. These properties are easy to test for matrices, as in the sketch below.
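A minimal NumPy sketch of these three classes of endomorphisms (the example matrices are our own choices):

```python
import numpy as np

N = np.array([[0., 1.],
              [0., 0.]])     # nilpotent: N @ N == 0
P = np.array([[0., 0.],
              [0., 1.]])     # idempotent: P @ P == P
S = 3.0 * np.eye(2)          # scaling: S == k * I with k = 3

assert np.allclose(N @ N, np.zeros((2, 2)))
assert np.allclose(P @ P, P)
assert np.allclose(S, 3.0 * np.identity(2))
```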
== Change of basis ==
Consider an endomorphism whose matrix, in a given basis of the space, is A, so that it transforms vector coordinates as [v] = A[u]. Let B be the matrix whose columns express a new basis in terms of the given one. Since vector coordinates change with the inverse of the basis change (vector coordinates are contravariant), the coordinates in the old basis are recovered from those in the new one by [u] = B[u'] and [v] = B[v'].
Substituting this in the first expression gives
{\displaystyle B\left[v'\right]=AB\left[u'\right]}
hence
{\displaystyle \left[v'\right]=B^{-1}AB\left[u'\right]=A'\left[u'\right].}
Therefore, the matrix in the new basis is A′ = B−1AB, where B is the change-of-basis matrix.
Because linear maps transform with one factor of B and one factor of B−1, they are said to be 1-co- 1-contra-variant objects, or type (1, 1) tensors.
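A quick NumPy check of the transformation law A′ = B−1AB (a sketch; the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])              # matrix of f in the old basis
B = np.array([[1., 1.],
              [0., 1.]])              # columns: new basis in old coordinates

A_prime = np.linalg.inv(B) @ A @ B    # matrix of f in the new basis

u_new = np.array([1., 2.])            # coordinates of u in the new basis
u_old = B @ u_new                     # the same vector in old coordinates

# Applying f in either basis describes the same vector:
assert np.allclose(B @ (A_prime @ u_new), A @ u_old)
```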
== Continuity ==
A linear transformation between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it is then called a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded; this is automatic, for example, when the domain is finite-dimensional. An infinite-dimensional domain may admit discontinuous linear operators.
An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, sin(nx)/n converges to 0, but its derivative cos(nx) does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere).
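This failure of continuity can be illustrated numerically (a sketch assuming NumPy; the grid resolution and the values of n are arbitrary):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10_001)

for n in (1, 10, 100, 1000):
    f = np.sin(n * x) / n     # sup norm of f tends to 0 as n grows
    df = np.cos(n * x)        # sup norm of the derivative stays 1
    print(n, np.max(np.abs(f)), np.max(np.abs(df)))
```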
== Applications ==
A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings are also used as a mechanism for describing change: for example, in calculus they correspond to derivatives, and in relativity they serve as a device to keep track of the local transformations of reference frames.
Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.
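Note that translation by a nonzero vector is not itself linear (it moves the origin), which is why graphics pipelines typically work with affine maps in homogeneous coordinates. A minimal sketch (the helper names are our own):

```python
import numpy as np

def translate(tx, ty):
    """3x3 homogeneous matrix translating 2D points by (tx, ty)."""
    return np.array([[1., 0., tx],
                     [0., 1., ty],
                     [0., 0., 1.]])

def rotate(theta):
    """3x3 homogeneous matrix rotating 2D points by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.],
                     [s,  c, 0.],
                     [0., 0., 1.]])

p = np.array([1., 0., 1.])                    # the point (1, 0)
q = translate(2., 1.) @ rotate(np.pi / 2) @ p # rotate first, then translate
print(q[:2])                                  # approximately (2, 2)
```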
== See also ==
Additive map – Z-module homomorphism
Antilinear map – Conjugate homogeneous additive map
Bent function – Special type of Boolean function
Bounded operator – Linear transformation between topological vector spaces
Cauchy's functional equation – Functional equation
Continuous linear operator
Linear functional – Linear map from a vector space to its field of scalars
Linear isometry – Distance-preserving mathematical transformation
Category of matrices
Quasilinearization
== Notes ==
== Bibliography ==
Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
Bronshtein, I. N.; Semendyayev, K. A. (2004). Handbook of Mathematics (4th ed.). New York: Springer-Verlag. ISBN 3-540-43491-7.
Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4.
Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (Second ed.). Cambridge University Press. ISBN 978-0-521-83940-2.
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
Kubrusly, Carlos (2001). Elements of operator theory. Boston: Birkhäuser. ISBN 978-1-4757-3328-0. OCLC 754555941.
Lang, Serge (1987). Linear Algebra (Third ed.). New York: Springer-Verlag. ISBN 0-387-96412-6.
Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259.
Rudin, Walter (1976). Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). New York: McGraw–Hill. ISBN 978-0-07-054235-8.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. ISBN 978-0-8218-4419-9.
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. | Wikipedia/Linear_transformations |
In algebra, a simple Lie algebra is a Lie algebra that is non-abelian and contains no nonzero proper ideals. The classification of real simple Lie algebras is one of the major achievements of Wilhelm Killing and Élie Cartan.
A direct sum of simple Lie algebras is called a semisimple Lie algebra.
A simple Lie group is a connected Lie group whose Lie algebra is simple.
== Complex simple Lie algebras ==
A finite-dimensional simple complex Lie algebra is isomorphic to one of the following: 𝔰𝔩ₙℂ, 𝔰𝔬ₙℂ, 𝔰𝔭₂ₙℂ (the classical Lie algebras) or one of the five exceptional Lie algebras.
To each finite-dimensional complex semisimple Lie algebra 𝔤, there corresponds a diagram (called the Dynkin diagram) in which the nodes denote the simple roots, the nodes are joined (or not joined) by a number of lines depending on the angles between the simple roots, and arrows are put to indicate whether the roots are longer or shorter. The Dynkin diagram of 𝔤 is connected if and only if 𝔤 is simple. The possible connected Dynkin diagrams are An, Bn, Cn, Dn, E6, E7, E8, F4 and G2, where n is the number of nodes (the simple roots). The correspondence of the diagrams and complex simple Lie algebras is as follows:
(An) 𝔰𝔩ₙ₊₁ℂ
(Bn) 𝔰𝔬₂ₙ₊₁ℂ
(Cn) 𝔰𝔭₂ₙℂ
(Dn) 𝔰𝔬₂ₙℂ
The remaining diagrams (E6, E7, E8, F4, G2) correspond to the five exceptional Lie algebras.
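The dimensions of the classical algebras follow from the rank n by standard closed forms, which a short sketch can tabulate (the function name is our own):

```python
def classical_dim(family: str, n: int) -> int:
    """Dimension of the complex simple Lie algebra with Dynkin diagram family_n."""
    if family == "A":    # sl_{n+1}: (n+1)^2 - 1
        return (n + 1) ** 2 - 1
    if family == "B":    # so_{2n+1}: (2n+1)(2n)/2
        return n * (2 * n + 1)
    if family == "C":    # sp_{2n}: n(2n+1)
        return n * (2 * n + 1)
    if family == "D":    # so_{2n}: 2n(2n-1)/2
        return n * (2 * n - 1)
    raise ValueError("the exceptional families have fixed dimensions")

print(classical_dim("A", 2))                          # sl_3(C) has dimension 8
print(classical_dim("B", 2), classical_dim("C", 2))   # so_5 and sp_4: both 10
```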
== Real simple Lie algebras ==
If 𝔤₀ is a finite-dimensional real simple Lie algebra, its complexification is either (1) simple or (2) a product of a simple complex Lie algebra and its conjugate. For example, the complexification of 𝔰𝔩ₙℂ thought of as a real Lie algebra is the product of 𝔰𝔩ₙℂ and its conjugate algebra. Thus, a real simple Lie algebra can be classified by the classification of complex simple Lie algebras together with some additional information. This can be done with Satake diagrams, which generalize Dynkin diagrams. See also Table of Lie groups#Real Lie algebras for a partial list of real simple Lie algebras.
== Notes ==
== See also ==
Simple Lie group
Vogel plane
== References ==
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979. ISBN 0-486-63832-4; Chapter X considers a classification of simple Lie algebras over a field of characteristic zero.
"Lie algebra, semi-simple", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Simple Lie algebra at the nLab | Wikipedia/Simple_Lie_algebra |
In mathematics, the representation theory of the Poincaré group is an example of the representation theory of a Lie group that is neither a compact group nor a semisimple group. It is fundamental in theoretical physics.
In a physical theory having Minkowski space as the underlying spacetime, the space of physical states is typically a representation of the Poincaré group. (More generally, it may be a projective representation, which amounts to a representation of the double cover of the group.)
In a classical field theory, the physical states are sections of a Poincaré-equivariant vector bundle over Minkowski space. The equivariance condition means that the group acts on the total space of the vector bundle, and the projection to Minkowski space is an equivariant map. Therefore, the Poincaré group also acts on the space of sections. Representations arising in this way (and their subquotients) are called covariant field representations, and are not usually unitary.
For a discussion of such unitary representations, see Wigner's classification.
In quantum mechanics, the state of the system is determined by the Schrödinger equation, which is invariant under Galilean transformations. Quantum field theory is the relativistic extension of quantum mechanics, where relativistic (Lorentz/Poincaré invariant) wave equations are solved, "quantized", and act on a Hilbert space composed of Fock states.
There are no finite-dimensional unitary representations of the full Lorentz (and thus Poincaré) transformations due to the non-compact nature of Lorentz boosts (rotations in Minkowski space along a space and time axis). However, there are finite-dimensional non-unitary indecomposable representations of the Poincaré algebra, which may be used for modelling of unstable particles.
In the case of spin-1/2 particles, it is possible to find a construction that includes both a finite-dimensional representation and a scalar product preserved by this representation, by associating a 4-component Dirac spinor ψ with each particle. These spinors transform under Lorentz transformations generated by the gamma matrices γμ. It can be shown that the scalar product
{\displaystyle \langle \psi |\phi \rangle ={\bar {\psi }}\phi =\psi ^{\dagger }\gamma _{0}\phi }
is preserved. It is not, however, positive definite, so the representation is not unitary.
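This can be checked numerically (a sketch assuming NumPy and SciPy, with an arbitrary rapidity): the spinor boost S = exp(η γ0γ1/2) in the Dirac basis preserves ψ†γ0φ but is not unitary.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac-basis gamma matrices gamma^0 and gamma^1.
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g1 = np.block([[Z2, sx], [-sx, Z2]])

eta = 0.7                          # rapidity of a boost along x (arbitrary)
S = expm(0.5 * eta * g0 @ g1)      # spinor representation of the boost

assert np.allclose(S.conj().T @ g0 @ S, g0)    # psi^dagger gamma^0 phi preserved
print(np.allclose(S.conj().T @ S, np.eye(4)))  # False: S is not unitary
```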
== References ==
Greiner, W.; Müller, B. (1994). Quantum Mechanics: Symmetries (2nd ed.). Springer. ISBN 978-3540580805.
Greiner, W.; Reinhardt, J. (1996), Field Quantization, Springer, ISBN 978-3-540-59179-5
Harish-Chandra (1947), "Infinite irreducible representations of the Lorentz group", Proc. R. Soc. A, 189 (1018): 372–401, Bibcode:1947RSPSA.189..372H, doi:10.1098/rspa.1947.0047
Hall, Brian C. (2015), Lie groups, Lie algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, doi:10.1007/978-3-319-13467-3, ISBN 978-3319134666, ISSN 0072-5285
Wigner, E. P. (1939), "On unitary representations of the inhomogeneous Lorentz group", Annals of Mathematics, 40 (1): 149–204, Bibcode:1939AnMat..40..149W, doi:10.2307/1968551, JSTOR 1968551, MR 1503456, S2CID 121773411.
== Notes ==
== See also ==
Wigner's classification
Representation theory of the Lorentz group
Representation theory of the Galilean group
Representation theory of diffeomorphism groups
Particle physics and representation theory
Symmetry in quantum mechanics | Wikipedia/Representation_theory_of_the_Poincaré_group |
In mathematics, a linear algebraic group is a subgroup of the group of invertible n × n matrices (under matrix multiplication) that is defined by polynomial equations. An example is the orthogonal group, defined by the relation MᵀM = Iₙ, where Mᵀ is the transpose of M.
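A small numerical illustration (a sketch assuming NumPy; the rotation angle is an arbitrary choice): the defining relation MᵀM = Iₙ is a system of polynomial equations in the matrix entries.

```python
import numpy as np

theta = 0.3
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a point of O(2)

# The defining polynomial equations M^T M - I = 0, entrywise:
print(np.allclose(M.T @ M, np.eye(2)))            # True

# A generic invertible matrix lies in GL(2) but not in O(2):
N = np.array([[1., 2.],
              [0., 1.]])
print(np.allclose(N.T @ N, np.eye(2)))            # False
```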
Many Lie groups can be viewed as linear algebraic groups over the field of real or complex numbers. (For example, every compact Lie group can be regarded as a linear algebraic group over R (necessarily R-anisotropic and reductive), as can many noncompact groups such as the simple Lie group SL(n,R).) The simple Lie groups were classified by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. At that time, no special use was made of the fact that the group structure can be defined by polynomials, that is, that these are algebraic groups. The founders of the theory of algebraic groups include Maurer, Chevalley, and Kolchin (1948). In the 1950s, Armand Borel constructed much of the theory of algebraic groups as it exists today.
One of the first uses for the theory was to define the Chevalley groups.
== Examples ==
For a positive integer n, the general linear group GL(n) over a field k, consisting of all invertible n × n matrices, is a linear algebraic group over k. It contains the subgroups U ⊂ B ⊂ GL(n) consisting of matrices of the form, respectively,
{\displaystyle \left({\begin{array}{cccc}1&*&\dots &*\\0&1&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&1\end{array}}\right)}
and
{\displaystyle \left({\begin{array}{cccc}*&*&\dots &*\\0&*&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&*\end{array}}\right)}.
The group U is an example of a unipotent linear algebraic group, and the group B is an example of a solvable algebraic group, called the Borel subgroup of GL(n). It is a consequence of the Lie–Kolchin theorem that any connected solvable subgroup of GL(n) is conjugate into B. Any unipotent subgroup can be conjugated into U.
Another algebraic subgroup of GL(n) is the special linear group SL(n) of matrices with determinant 1.
The group GL(1) is called the multiplicative group, usually denoted by Gm. The group of k-points Gm(k) is the multiplicative group k* of nonzero elements of the field k. The additive group Ga, whose k-points are isomorphic to the additive group of k, can also be expressed as a matrix group, for example as the subgroup U in GL(2):
{\displaystyle {\begin{pmatrix}1&*\\0&1\end{pmatrix}}.}
These two basic examples of commutative linear algebraic groups, the multiplicative and additive groups, behave very differently in terms of their linear representations (as algebraic groups). Every representation of the multiplicative group Gm is a direct sum of irreducible representations. (Its irreducible representations all have dimension 1, of the form x ↦ x^n for an integer n.) By contrast, the only irreducible representation of the additive group Ga is the trivial representation. So every representation of Ga (such as the 2-dimensional representation above) is an iterated extension of trivial representations, not a direct sum (unless the representation is trivial). The structure theory of linear algebraic groups analyzes any linear algebraic group in terms of these two basic groups and their generalizations, tori and unipotent groups, as discussed below.
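A numerical sketch of the 2-dimensional representation of Ga mentioned above (assuming NumPy; the sample values are arbitrary): x ↦ [[1, x], [0, 1]] turns addition into matrix multiplication, fixes a line, but does not split as a direct sum.

```python
import numpy as np

def rho(x: float) -> np.ndarray:
    """The 2-dimensional representation of the additive group G_a."""
    return np.array([[1., x],
                     [0., 1.]])

x, y = 0.8, -2.5
assert np.allclose(rho(x) @ rho(y), rho(x + y))   # homomorphism G_a -> GL(2)

# e1 spans an invariant line (a trivial subrepresentation)...
e1 = np.array([1., 0.])
assert np.allclose(rho(x) @ e1, e1)
# ...but rho(x) is not diagonalizable for x != 0, so there is no
# complementary invariant line: the extension does not split.
print(np.linalg.eigvals(rho(1.0)))   # eigenvalue 1 with multiplicity 2
```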
== Definitions ==
For an algebraically closed field k, much of the structure of an algebraic variety X over k is encoded in its set X(k) of k-rational points, which allows an elementary definition of a linear algebraic group. First, define a function from the abstract group GL(n,k) to k to be regular if it can be written as a polynomial in the entries of an n×n matrix A and in 1/det(A), where det is the determinant. Then a linear algebraic group G over an algebraically closed field k is a subgroup G(k) of the abstract group GL(n,k) for some natural number n such that G(k) is defined by the vanishing of some set of regular functions.
For an arbitrary field k, algebraic varieties over k are defined as a special case of schemes over k. In that language, a linear algebraic group G over a field k is a smooth closed subgroup scheme of GL(n) over k for some natural number n. In particular, G is defined by the vanishing of some set of regular functions on GL(n) over k, and these functions must have the property that for every commutative k-algebra R, G(R) is a subgroup of the abstract group GL(n,R). (Thus an algebraic group G over k is not just the abstract group G(k), but rather the whole family of groups G(R) for commutative k-algebras R; this is the philosophy of describing a scheme by its functor of points.)
In either language, one has the notion of a homomorphism of linear algebraic groups. For example, when k is algebraically closed, a homomorphism from G ⊂ GL(m) to H ⊂ GL(n) is a homomorphism of abstract groups G(k) → H(k) which is defined by regular functions on G. This makes the linear algebraic groups over k into a category. In particular, this defines what it means for two linear algebraic groups to be isomorphic.
In the language of schemes, a linear algebraic group G over a field k is in particular a group scheme over k, meaning a scheme over k together with a k-point 1 ∈ G(k) and morphisms
{\displaystyle m\colon G\times _{k}G\to G,\;i\colon G\to G}
over k which satisfy the usual axioms for the multiplication and inverse maps in a group (associativity, identity, inverses). A linear algebraic group is also smooth and of finite type over k, and it is affine (as a scheme). Conversely, every affine group scheme G of finite type over a field k has a faithful representation into GL(n) over k for some n. An example is the embedding of the additive group Ga into GL(2), as mentioned above. As a result, one can think of linear algebraic groups either as matrix groups or, more abstractly, as smooth affine group schemes over a field. (Some authors use "linear algebraic group" to mean any affine group scheme of finite type over a field.)
For a full understanding of linear algebraic groups, one has to consider more general (non-smooth) group schemes. For example, let k be an algebraically closed field of characteristic p > 0. Then the homomorphism f: Gm → Gm defined by x ↦ xp induces an isomorphism of abstract groups k* → k*, but f is not an isomorphism of algebraic groups (because x1/p is not a regular function). In the language of group schemes, there is a clearer reason why f is not an isomorphism: f is surjective, but it has nontrivial kernel, namely the group scheme μp of pth roots of unity. This issue does not arise in characteristic zero. Indeed, every group scheme of finite type over a field k of characteristic zero is smooth over k. A group scheme of finite type over any field k is smooth over k if and only if it is geometrically reduced, meaning that the base change
Gk̄ is reduced, where k̄ is an algebraic closure of k.
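The following sketch models the Frobenius phenomenon at the level of points, hand-rolling the four-element field F4 as pairs (a, b) standing for a + bt with t² = t + 1 over F2 (this representation is just this example's choice). The map x ↦ x² permutes the points yet is not the identity; the failure of f to be an isomorphism of algebraic groups is invisible on this finite set of points, which is one motivation for working with group schemes:

```python
# F4 = {a + b*t : a, b in {0, 1}}, with t^2 = t + 1 (mod 2).
def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b t)(c + d t) = ac + (ad + bc) t + bd t^2, with t^2 = t + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def frob(x):
    return mul(x, x)  # x -> x^2 is the Frobenius in characteristic 2

elements = [(a, b) for a in (0, 1) for b in (0, 1)]
images = [frob(x) for x in elements]
print(sorted(images) == sorted(elements))  # True: Frobenius permutes F4
print([(x, frob(x)) for x in elements])    # (0,1) -> (1,1): not the identity
```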
Since an affine scheme X is determined by its ring O(X) of regular functions, an affine group scheme G over a field k is determined by the ring O(G) with its structure of a Hopf algebra (coming from the multiplication and inverse maps on G). This gives an equivalence of categories (reversing arrows) between affine group schemes over k and commutative Hopf algebras over k. For example, the Hopf algebra corresponding to the multiplicative group Gm = GL(1) is the Laurent polynomial ring k[x, x−1], with comultiplication given by
x ↦ x ⊗ x.
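A toy encoding of this comultiplication (assuming, for the sketch, that a Laurent monomial xn is represented simply by its integer exponent n, and a tensor of monomials by a pair of exponents). Δ(xn) = xn ⊗ xn is dual to the multiplication map Gm × Gm → Gm and is multiplicative on monomials:

```python
def delta(n):
    # Delta(x^n) = x^n (x) x^n, encoded as a pair of exponents.
    return (n, n)

def tensor_mul(u, v):
    # Monomials multiply by adding exponents in each tensor factor.
    return (u[0] + v[0], u[1] + v[1])

# Delta is an algebra map on monomials: Delta(x^m x^n) = Delta(x^m) Delta(x^n).
for m in range(-3, 4):
    for n in range(-3, 4):
        assert delta(m + n) == tensor_mul(delta(m), delta(n))
```

Evaluating the two tensor factors at scalars s and t sends x to st, which is how Δ encodes the multiplication of Gm.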
=== Basic notions ===
For a linear algebraic group G over a field k, the identity component Go (the connected component containing the point 1) is a normal subgroup of finite index. So there is a group extension
1 → G° → G → F → 1,
where F is a finite algebraic group. (For k algebraically closed, F can be identified with an abstract finite group.) Because of this, the study of algebraic groups mostly focuses on connected groups.
Various notions from abstract group theory can be extended to linear algebraic groups. It is straightforward to define what it means for a linear algebraic group to be commutative, nilpotent, or solvable, by analogy with the definitions in abstract group theory. For example, a linear algebraic group is solvable if it has a composition series of linear algebraic subgroups such that the quotient groups are commutative. Also, the normalizer, the center, and the centralizer of a closed subgroup H of a linear algebraic group G are naturally viewed as closed subgroup schemes of G. If they are smooth over k, then they are linear algebraic groups as defined above.
One may ask to what extent the properties of a connected linear algebraic group G over a field k are determined by the abstract group G(k). A useful result in this direction is that if the field k is perfect (for example, of characteristic zero), or if G is reductive (as defined below), then G is unirational over k. Therefore, if in addition k is infinite, the group G(k) is Zariski dense in G. For example, under the assumptions mentioned, G is commutative, nilpotent, or solvable if and only if G(k) has the corresponding property.
The assumption of connectedness cannot be omitted in these results. For example, let G be the group μ3 ⊂ GL(1) of cube roots of unity over the rational numbers Q. Then G is a linear algebraic group over Q for which G(Q) = 1 is not Zariski dense in G, because G(Q̄) is a group of order 3.
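One can see this directly with sympy: over Q the polynomial x³ − 1 has the single root 1, while over the algebraic closure it has three roots:

```python
import sympy as sp

x = sp.symbols('x')
roots = sp.solve(x**3 - 1, x)          # all roots over C
rational = [r for r in roots if r.is_rational]
print(roots)     # [1, -1/2 - sqrt(3)*I/2, -1/2 + sqrt(3)*I/2]
print(rational)  # [1]: mu_3(Q) is trivial, while mu_3(Qbar) has order 3
```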
Over an algebraically closed field, there is a stronger result about algebraic groups as algebraic varieties: every connected linear algebraic group over an algebraically closed field is a rational variety.
== The Lie algebra of an algebraic group ==
The Lie algebra 𝔤 of an algebraic group G can be defined in several equivalent ways: as the tangent space T1(G) at the identity element 1 ∈ G(k), or as the space of left-invariant derivations. If k is algebraically closed, a derivation D: O(G) → O(G) over k of the coordinate ring of G is left-invariant if
Dλx = λxD for every x in G(k), where λx: O(G) → O(G) is induced by left multiplication by x. For an arbitrary field k, left invariance of a derivation is defined as an analogous equality of two linear maps O(G) → O(G) ⊗ O(G). The Lie bracket of two derivations is defined by [D1, D2] = D1D2 − D2D1.
The passage from G to 𝔤 is thus a process of differentiation. For an element x ∈ G(k), the derivative at 1 ∈ G(k) of the conjugation map G → G, g ↦ xgx−1, is an automorphism of 𝔤, giving the adjoint representation: Ad: G → Aut(𝔤).
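A small sympy sketch of the adjoint representation for G = SL(2), with a torus element g = diag(t, 1/t) (the choice of g and of the basis E, H, F is this example's): Ad(g) acts by conjugation and preserves the Lie bracket:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
g = sp.Matrix([[t, 0], [0, 1/t]])      # a point of the torus in SL(2)

# sl(2) basis: traceless 2x2 matrices.
E = sp.Matrix([[0, 1], [0, 0]])
F = sp.Matrix([[0, 0], [1, 0]])

Ad = lambda g, Y: g * Y * g.inv()
bracket = lambda X, Y: X * Y - Y * X

# Ad(g) is a Lie algebra automorphism: it respects the bracket.
lhs = Ad(g, bracket(E, F))
rhs = bracket(Ad(g, E), Ad(g, F))
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
print(Ad(g, E))  # E is scaled by t**2: the weight of E under the torus
```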
Over a field of characteristic zero, a connected subgroup H of a linear algebraic group G is uniquely determined by its Lie algebra 𝔥 ⊂ 𝔤. But not every Lie subalgebra of 𝔤 corresponds to an algebraic subgroup of G, as one sees in the example of the torus G = (Gm)2 over C. In positive characteristic, there can be many different connected subgroups of a group G with the same Lie algebra (again, the torus G = (Gm)2 provides examples). For these reasons, although the Lie algebra of an algebraic group is important, the structure theory of algebraic groups requires more global tools.
== Semisimple and unipotent elements ==
For an algebraically closed field k, a matrix g in GL(n,k) is called semisimple if it is diagonalizable, and unipotent if the matrix g − 1 is nilpotent. Equivalently, g is unipotent if all eigenvalues of g are equal to 1. The Jordan canonical form for matrices implies that every element g of GL(n,k) can be written uniquely as a product g = gssgu such that gss is semisimple, gu is unipotent, and gss and gu commute with each other.
For any field k, an element g of GL(n,k) is said to be semisimple if it becomes diagonalizable over the algebraic closure of k. If the field k is perfect, then the semisimple and unipotent parts of g also lie in GL(n,k). Finally, for any linear algebraic group G ⊂ GL(n) over a field k, define a k-point of G to be semisimple or unipotent if it is semisimple or unipotent in GL(n,k). (These properties are in fact independent of the choice of a faithful representation of G.) If the field k is perfect, then the semisimple and unipotent parts of a k-point of G are automatically in G. That is (the Jordan decomposition): every element g of G(k) can be written uniquely as a product g = gssgu in G(k) such that gss is semisimple, gu is unipotent, and gss and gu commute with each other. This reduces the problem of describing the conjugacy classes in G(k) to the semisimple and unipotent cases.
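A sketch of the multiplicative Jordan decomposition using sympy's jordan_form (the sample matrix is arbitrary). The semisimple part keeps the diagonal of the Jordan form, which is valid because the diagonal of a Jordan block is scalar and so commutes with the rest:

```python
import sympy as sp

g = sp.Matrix([[2, 1], [0, 2]])        # invertible, not diagonalizable

# Jordan form: g = P * J * P^{-1}.
P, J = g.jordan_form()
gss = P * sp.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()
gu = gss.inv() * g                     # whatever is left over

assert sp.simplify(gss * gu - g) == sp.zeros(2, 2)
assert sp.simplify(gss * gu - gu * gss) == sp.zeros(2, 2)   # parts commute
assert sp.simplify((gu - sp.eye(2))**2) == sp.zeros(2, 2)   # gu is unipotent
```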
== Tori ==
A torus over an algebraically closed field k means a group isomorphic to (Gm)n, the product of n copies of the multiplicative group over k, for some natural number n. For a linear algebraic group G, a maximal torus in G means a torus in G that is not contained in any bigger torus. For example, the group of diagonal matrices in GL(n) over k is a maximal torus in GL(n), isomorphic to (Gm)n. A basic result of the theory is that any two maximal tori in a group G over an algebraically closed field k are conjugate by some element of G(k). The rank of G means the dimension of any maximal torus.
For an arbitrary field k, a torus T over k means a linear algebraic group over k whose base change
Tk̄ to the algebraic closure of k is isomorphic to (Gm)n over k̄, for some natural number n. A split torus over k means a group isomorphic to (Gm)n over k for some n. An example of a non-split torus over the real numbers R is
T = {(x, y) ∈ A²R : x² + y² = 1},
with group structure given by the formula for multiplying complex numbers x+iy. Here T is a torus of dimension 1 over R. It is not split, because its group of real points T(R) is the circle group, which is not isomorphic even as an abstract group to Gm(R) = R*.
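A quick symbolic check that the stated group law is well defined on T, i.e. that multiplying two solutions of x² + y² = 1 yields another solution:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')

# Multiplication on T, transcribed from (x1 + i y1)(x2 + i y2).
prod = (x1*x2 - y1*y2, x1*y2 + y1*x2)

# The norm is multiplicative, so the constraint x^2 + y^2 = 1 is preserved:
check = sp.expand(prod[0]**2 + prod[1]**2
                  - (x1**2 + y1**2) * (x2**2 + y2**2))
print(check)  # 0, so points of T are closed under the group law
```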
Every point of a torus over a field k is semisimple. Conversely, if G is a connected linear algebraic group such that every element of G(k̄) is semisimple, then G is a torus.
For a linear algebraic group G over a general field k, one cannot expect all maximal tori in G over k to be conjugate by elements of G(k). For example, both the multiplicative group Gm and the circle group T above occur as maximal tori in SL(2) over R. However, it is always true that any two maximal split tori in G over k (meaning split tori in G that are not contained in a bigger split torus) are conjugate by some element of G(k). As a result, it makes sense to define the k-rank or split rank of a group G over k as the dimension of any maximal split torus in G over k.
For any maximal torus T in a linear algebraic group G over a field k, Grothendieck showed that
Tk̄ is a maximal torus in Gk̄. It follows that any two maximal tori in G over a field k have the same dimension, although they need not be isomorphic.
== Unipotent groups ==
Let Un be the group of upper-triangular matrices in GL(n) with diagonal entries equal to 1, over a field k. A group scheme over a field k (for example, a linear algebraic group) is called unipotent if it is isomorphic to a closed subgroup scheme of Un for some n. It is straightforward to check that the group Un is nilpotent. As a result, every unipotent group scheme is nilpotent.
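For U3 this can be verified symbolically (a sympy sketch with generic entries): commutators of generic elements land in the center, so iterated commutators vanish and the group is nilpotent of class 2:

```python
import sympy as sp

a, b, c, d, e, f, g, h, k = sp.symbols('a b c d e f g h k')

def u3(p, q, r):
    # A generic element of U3: upper triangular, 1s on the diagonal.
    return sp.Matrix([[1, p, q], [0, 1, r], [0, 0, 1]])

comm = lambda X, Y: sp.simplify(X * Y * X.inv() * Y.inv())

inner = comm(u3(a, b, c), u3(d, e, f))
print(inner)  # off-diagonal entries vanish except (1,3): [U3, U3] is central
assert comm(inner, u3(g, h, k)) == sp.eye(3)  # so U3 is nilpotent of class 2
```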
A linear algebraic group G over a field k is unipotent if and only if every element of G(k̄) is unipotent.
The group Bn of upper-triangular matrices in GL(n) is a semidirect product
Bn = Tn ⋉ Un,
where Tn is the diagonal torus (Gm)n. More generally, every connected solvable linear algebraic group is a semidirect product of a torus with a unipotent group, T ⋉ U.
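The semidirect product decomposition can be computed explicitly for B3 (a sympy sketch with generic symbols, the diagonal ones assumed nonzero): factor off the diagonal part and what remains is unipotent:

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f', nonzero=True)
B = sp.Matrix([[a, b, c], [0, d, e], [0, 0, f]])   # element of B3

T = sp.diag(a, d, f)              # the torus part: diagonal entries of B
U = T.inv() * B                   # the unipotent part

assert sp.simplify(T * U - B) == sp.zeros(3, 3)
print(U)  # unit diagonal, so U lies in U3: concretely, B3 = T3 x| U3
```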
A smooth connected unipotent group over a perfect field k (for example, an algebraically closed field) has a composition series with all quotient groups isomorphic to the additive group Ga.
== Borel subgroups ==
The Borel subgroups are important for the structure theory of linear algebraic groups. For a linear algebraic group G over an algebraically closed field k, a Borel subgroup of G means a maximal smooth connected solvable subgroup. For example, one Borel subgroup of GL(n) is the subgroup B of upper-triangular matrices (all entries below the diagonal are zero).
A basic result of the theory is that any two Borel subgroups of a connected group G over an algebraically closed field k are conjugate by some element of G(k). (A standard proof uses the Borel fixed-point theorem: for a connected solvable group G acting on a proper variety X over an algebraically closed field k, there is a k-point in X which is fixed by the action of G.) The conjugacy of Borel subgroups in GL(n) amounts to the Lie–Kolchin theorem: every smooth connected solvable subgroup of GL(n) is conjugate to a subgroup of the upper-triangular subgroup in GL(n).
For an arbitrary field k, a Borel subgroup B of G is defined to be a subgroup over k such that, over an algebraic closure k̄ of k, Bk̄ is a Borel subgroup of Gk̄. Thus G may or may not have a Borel subgroup over k.
For a closed subgroup scheme H of G, the quotient space G/H is a smooth quasi-projective scheme over k. A smooth subgroup P of a connected group G is called parabolic if G/P is projective over k (or equivalently, proper over k). An important property of Borel subgroups B is that G/B is a projective variety, called the flag variety of G. That is, Borel subgroups are parabolic subgroups. More precisely, for k algebraically closed, the Borel subgroups are exactly the minimal parabolic subgroups of G; conversely, every subgroup containing a Borel subgroup is parabolic. So one can list all parabolic subgroups of G (up to conjugation by G(k)) by listing all the linear algebraic subgroups of G that contain a fixed Borel subgroup. For example, the subgroups P ⊂ GL(3) over k that contain the Borel subgroup B of upper-triangular matrices are B itself, the whole group GL(3), and the intermediate subgroups
{ [ ∗ ∗ ∗ ; 0 ∗ ∗ ; 0 ∗ ∗ ] } and { [ ∗ ∗ ∗ ; ∗ ∗ ∗ ; 0 0 ∗ ] }
(each set consisting of the invertible matrices with the indicated pattern, semicolons separating rows).
The corresponding projective homogeneous varieties GL(3)/P are (respectively): the flag manifold of all chains of linear subspaces
0 ⊂ V1 ⊂ V2 ⊂ A³ with Vi of dimension i; a point; the projective space P² of lines (1-dimensional linear subspaces) in A³; and the dual projective space P² of planes in A³.
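A small check (sympy, with generic symbolic entries) that the second pattern above really is a subgroup, namely the stabilizer of the plane spanned by e1 and e2:

```python
import sympy as sp

u = sp.symbols('u0:7')
v = sp.symbols('v0:7')

def P(t):
    # Generic element of the parabolic with vanishing (3,1) and (3,2) entries.
    return sp.Matrix([[t[0], t[1], t[2]], [t[3], t[4], t[5]], [0, 0, t[6]]])

prod = P(u) * P(v)
print(prod[2, 0], prod[2, 1])  # both 0: the pattern is closed under products

# The pattern is exactly the condition for stabilizing the plane span(e1, e2):
e1, e2 = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0])
print((P(u) * e1)[2], (P(u) * e2)[2])  # third coordinates vanish
```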
== Semisimple and reductive groups ==
A connected linear algebraic group G over an algebraically closed field is called semisimple if every smooth connected solvable normal subgroup of G is trivial. More generally, a connected linear algebraic group G over an algebraically closed field is called reductive if every smooth connected unipotent normal subgroup of G is trivial. (Some authors do not require reductive groups to be connected.) A semisimple group is reductive. A group G over an arbitrary field k is called semisimple or reductive if
Gk̄ is semisimple or reductive. For example, the group SL(n) of n × n matrices with determinant 1 over any field k is semisimple, whereas a nontrivial torus is reductive but not semisimple. Likewise, GL(n) is reductive but not semisimple (because its center Gm is a nontrivial smooth connected solvable normal subgroup).
Every compact connected Lie group has a complexification, which is a complex reductive algebraic group. In fact, this construction gives a one-to-one correspondence between compact connected Lie groups and complex reductive groups, up to isomorphism.
A linear algebraic group G over a field k is called simple (or k-simple) if it is semisimple, nontrivial, and every smooth connected normal subgroup of G over k is trivial or equal to G. (Some authors call this property "almost simple".) This differs slightly from the terminology for abstract groups, in that a simple algebraic group may have nontrivial center (although the center must be finite). For example, for any integer n at least 2 and any field k, the group SL(n) over k is simple, and its center is the group scheme μn of nth roots of unity.
Every connected linear algebraic group G over a perfect field k is (in a unique way) an extension of a reductive group R by a smooth connected unipotent group U, called the unipotent radical of G:
1 → U → G → R → 1.
If k has characteristic zero, then one has the more precise Levi decomposition: every connected linear algebraic group G over k is a semidirect product
R ⋉ U
of a reductive group by a unipotent group.
== Classification of reductive groups ==
Reductive groups include the most important linear algebraic groups in practice, such as the classical groups: GL(n), SL(n), the orthogonal groups SO(n) and the symplectic groups Sp(2n). On the other hand, the definition of reductive groups is quite "negative", and it is not clear that one can expect to say much about them. Remarkably, Claude Chevalley gave a complete classification of the reductive groups over an algebraically closed field: they are determined by root data. In particular, simple groups over an algebraically closed field k are classified (up to quotients by finite central subgroup schemes) by their Dynkin diagrams. It is striking that this classification is independent of the characteristic of k. For example, the exceptional Lie groups G2, F4, E6, E7, and E8 can be defined in any characteristic (and even as group schemes over Z). The classification of finite simple groups says that most finite simple groups arise as the group of k-points of a simple algebraic group over a finite field k, or as minor variants of that construction.
Every reductive group over a field is the quotient by a finite central subgroup scheme of the product of a torus and some simple groups. For example,
GL(n) ≅ (Gm × SL(n))/μn.
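A sketch of this isomorphism for n = 2 over C (the sample matrix is arbitrary): any g in GL(2) is t·A with t a square root of det(g) and A in SL(2), and the two choices of square root account for the quotient by μ2:

```python
import sympy as sp

g = sp.Matrix([[3, 1], [1, 2]])          # an element of GL(2, C), det = 5
t = sp.sqrt(g.det())                      # a square root of det(g) in C
A = g / t                                 # then A has determinant 1
assert sp.simplify(A.det()) == 1
assert sp.simplify(t * A - g) == sp.zeros(2, 2)

# (t, A) and (-t, -A) map to the same g: the kernel of the map
# Gm x SL(2) -> GL(2), (t, A) |-> t*A, is mu_2 = {(z, z^-1 * I): z^2 = 1}.
assert (-t) * (-A) == t * A
```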
For an arbitrary field k, a reductive group G is called split if it contains a split maximal torus over k (that is, a split torus in G which remains maximal over an algebraic closure of k). For example, GL(n) is a split reductive group over any field k. Chevalley showed that the classification of split reductive groups is the same over any field. By contrast, the classification of arbitrary reductive groups can be hard, depending on the base field. For example, every nondegenerate quadratic form q over a field k determines a reductive group SO(q), and every central simple algebra A over k determines a reductive group SL1(A). As a result, the problem of classifying reductive groups over k essentially includes the problem of classifying all quadratic forms over k or all central simple algebras over k. These problems are easy for k algebraically closed, and they are understood for some other fields such as number fields, but for arbitrary fields there are many open questions.
== Applications ==
=== Representation theory ===
One reason for the importance of reductive groups comes from representation theory. Every irreducible representation of a unipotent group is trivial. More generally, for any linear algebraic group G written as an extension
1 → U → G → R → 1
with U unipotent and R reductive, every irreducible representation of G factors through R. This focuses attention on the representation theory of reductive groups. (To be clear, the representations considered here are representations of G as an algebraic group. Thus, for a group G over a field k, the representations are on k-vector spaces, and the action of G is given by regular functions. It is an important but different problem to classify continuous representations of the group G(R) for a real reductive group G, or similar problems over other fields.)
Chevalley showed that the irreducible representations of a split reductive group over a field k are finite-dimensional, and they are indexed by dominant weights. This is the same as what happens in the representation theory of compact connected Lie groups, or the finite-dimensional representation theory of complex semisimple Lie algebras. For k of characteristic zero, all these theories are essentially equivalent. In particular, every representation of a reductive group G over a field of characteristic zero is a direct sum of irreducible representations, and if G is split, the characters of the irreducible representations are given by the Weyl character formula. The Borel–Weil theorem gives a geometric construction of the irreducible representations of a reductive group G in characteristic zero, as spaces of sections of line bundles over the flag manifold G/B.
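For a split group as simple as SL(2), the Weyl character formula can be checked directly with sympy (the dominant weight m = 4 is an arbitrary sample): the character of the irreducible representation of highest weight m, evaluated on the torus element diag(x, 1/x), equals the sum over the weights m, m − 2, ..., −m:

```python
import sympy as sp

x = sp.symbols('x')
m = 4  # a dominant weight for SL(2)

# Weyl character formula for SL(2): chi_m = (x^(m+1) - x^-(m+1)) / (x - 1/x).
weyl = (x**(m + 1) - x**(-(m + 1))) / (x - 1/x)
# Direct weight-space sum: weights m, m-2, ..., -m, each with multiplicity 1.
weights = sum(x**(m - 2*j) for j in range(m + 1))
assert sp.simplify(weyl - weights) == 0
# The m + 1 weights each occur once, so the dimension is m + 1.
```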
The representation theory of reductive groups (other than tori) over a field of positive characteristic p is less well understood. In this situation, a representation need not be a direct sum of irreducible representations. And although irreducible representations are indexed by dominant weights, the dimensions and characters of the irreducible representations are known only in some cases. Andersen, Jantzen and Soergel (1994) determined these characters (proving Lusztig's conjecture) when the characteristic p is sufficiently large compared to the Coxeter number of the group. For small primes p, there is not even a precise conjecture.
=== Group actions and geometric invariant theory ===
An action of a linear algebraic group G on a variety (or scheme) X over a field k is a morphism
G ×k X → X
that satisfies the axioms of a group action. As in other types of group theory, it is important to study group actions, since groups arise naturally as symmetries of geometric objects.
Part of the theory of group actions is geometric invariant theory, which aims to construct a quotient variety X/G, describing the set of orbits of a linear algebraic group G on X as an algebraic variety. Various complications arise. For example, if X is an affine variety, then one can try to construct X/G as Spec of the ring of invariants O(X)G. However, Masayoshi Nagata showed that the ring of invariants need not be finitely generated as a k-algebra (and so Spec of the ring is a scheme but not a variety), a negative answer to Hilbert's 14th problem. In the positive direction, the ring of invariants is finitely generated if G is reductive, by Haboush's theorem, proved in characteristic zero by Hilbert and Nagata.
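A small example of such an invariant ring (the action and function names are this sketch's choices): Gm acting on the affine plane by t · (x, y) = (tx, t⁻¹y) has invariant ring k[xy], finitely generated in line with the finite-generation result for reductive groups quoted above:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', nonzero=True)

def act(f):
    # Gm acts on A^2 by t.(x, y) = (t*x, y/t); a monomial x^a y^b is
    # invariant iff a == b, so the invariant ring is k[xy].
    return f.subs({x: t*x, y: y/t}, simultaneous=True)

for f in (x*y, (x*y)**3, x**2*y**2 + 5*x*y):
    assert sp.simplify(act(f) - f) == 0
print(sp.simplify(act(x**2*y) - x**2*y))  # nonzero: x^2 y is not invariant
```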
Geometric invariant theory involves further subtleties when a reductive group G acts on a projective variety X. In particular, the theory defines open subsets of "stable" and "semistable" points in X, with the quotient morphism only defined on the set of semistable points.
== Related notions ==
Linear algebraic groups admit variants in several directions. Dropping the existence of the inverse map i: G → G, one obtains the notion of a linear algebraic monoid.
=== Lie groups ===
For a linear algebraic group G over the real numbers R, the group of real points G(R) is a Lie group, essentially because real polynomials, which describe the multiplication on G, are smooth functions. Likewise, for a linear algebraic group G over C, G(C) is a complex Lie group. Much of the theory of algebraic groups was developed by analogy with Lie groups.
There are several reasons why a Lie group may not have the structure of a linear algebraic group over R.
A Lie group with an infinite group of components G/Go cannot be realized as a linear algebraic group.
An algebraic group G over R may be connected as an algebraic group while the Lie group G(R) is not connected, and likewise for simply connected groups. For example, the algebraic group SL(2) is simply connected over any field, whereas the Lie group SL(2,R) has fundamental group isomorphic to the integers Z. The double cover H of SL(2,R), known as the metaplectic group, is a Lie group that cannot be viewed as a linear algebraic group over R. More strongly, H has no faithful finite-dimensional representation.
Anatoly Maltsev showed that every simply connected nilpotent Lie group can be viewed as a unipotent algebraic group G over R in a unique way. (As a variety, G is isomorphic to affine space of some dimension over R.) By contrast, there are simply connected solvable Lie groups that cannot be viewed as real algebraic groups. For example, the universal cover H of the semidirect product S1 ⋉ R2 has center isomorphic to Z, which is not a linear algebraic group, and so H cannot be viewed as a linear algebraic group over R.
=== Abelian varieties ===
Algebraic groups which are not affine behave very differently. In particular, a smooth connected group scheme which is a projective variety over a field is called an abelian variety. In contrast to linear algebraic groups, every abelian variety is commutative. Nonetheless, abelian varieties have a rich theory. Even the case of elliptic curves (abelian varieties of dimension 1) is central to number theory, with applications including the proof of Fermat's Last Theorem.
=== Tannakian categories ===
The finite-dimensional representations of an algebraic group G, together with the tensor product of representations, form a tannakian category RepG. In fact, tannakian categories with a "fiber functor" over a field are equivalent to affine group schemes. (Every affine group scheme over a field k is pro-algebraic in the sense that it is an inverse limit of affine group schemes of finite type over k.) For example, the Mumford–Tate group and the motivic Galois group are constructed using this formalism. Certain properties of a (pro-)algebraic group G can be read from its category of representations. For example, over a field of characteristic zero, RepG is a semisimple category if and only if the identity component of G is pro-reductive.
== See also ==
The groups of Lie type are the finite simple groups constructed from simple algebraic groups over finite fields.
Lang's theorem
Generalized flag variety, Bruhat decomposition, BN pair, Weyl group, Cartan subgroup, group of adjoint type, parabolic induction
Real form (Lie theory), Satake diagram
Adelic algebraic group, Weil's conjecture on Tamagawa numbers
Langlands classification, Langlands program, geometric Langlands program
Torsor, nonabelian cohomology, special group, cohomological invariant, essential dimension, Kneser–Tits conjecture, Serre's conjecture II
Pseudo-reductive group
Differential Galois theory
Distribution on a linear algebraic group
== Notes ==
== References ==
Andersen, H. H.; Jantzen, J. C.; Soergel, W. (1994), Representations of Quantum Groups at a pth Root of Unity and of Semisimple Groups in Characteristic p: Independence of p, Astérisque, vol. 220, Société Mathématique de France, ISSN 0303-1179, MR 1272539
Borel, Armand (1991) [1969], Linear Algebraic Groups (2nd ed.), New York: Springer-Verlag, ISBN 0-387-97370-2, MR 1102012
Bröcker, Theodor; tom Dieck, Tammo (1985), Representations of Compact Lie Groups, Springer Nature, ISBN 0-387-13678-9, MR 0781344
Conrad, Brian (2014), "Reductive group schemes" (PDF), Autour des schémas en groupes, vol. 1, Paris: Société Mathématique de France, pp. 93–444, ISBN 978-2-85629-794-0, MR 3309122
Deligne, Pierre; Milne, J. S. (1982), "Tannakian categories", Hodge Cycles, Motives, and Shimura Varieties, Lecture Notes in Mathematics, vol. 900, Springer Nature, pp. 101–228, ISBN 3-540-11174-3, MR 0654325
De Medts, Tom (2019), Linear Algebraic Groups (course notes) (PDF), Ghent University
Humphreys, James E. (1975), Linear Algebraic Groups, Springer, ISBN 0-387-90108-6, MR 0396773
Kolchin, E. R. (1948), "Algebraic matric groups and the Picard–Vessiot theory of homogeneous linear ordinary differential equations", Annals of Mathematics, Second Series, 49 (1): 1–42, doi:10.2307/1969111, ISSN 0003-486X, JSTOR 1969111, MR 0024884
Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, ISBN 978-1107167483, MR 3729270
Springer, Tonny A. (1998) [1981], Linear Algebraic Groups (2nd ed.), New York: Birkhäuser, ISBN 0-8176-4021-5, MR 1642713
== External links ==
"Linear algebraic group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Linear_algebraic_group |
In the mathematical field of Lie theory, there are two definitions of a compact Lie algebra. Extrinsically and topologically, a compact Lie algebra is the Lie algebra of a compact Lie group; this definition includes tori. Intrinsically and algebraically, a compact Lie algebra is a real Lie algebra whose Killing form is negative definite; this definition is more restrictive and excludes tori. A compact Lie algebra can be seen as the smallest real form of a corresponding complex Lie algebra, namely the complexification.
== Definition ==
Formally, one may define a compact Lie algebra either as the Lie algebra of a compact Lie group, or as a real Lie algebra whose Killing form is negative definite. These definitions do not quite agree:
The Killing form on the Lie algebra of a compact Lie group is negative semidefinite, not negative definite in general.
If the Killing form of a Lie algebra is negative definite, then the Lie algebra is the Lie algebra of a compact semisimple Lie group.
In general, the Lie algebra of a compact Lie group decomposes as the Lie algebra direct sum of a commutative summand (for which the corresponding subgroup is a torus) and a summand on which the Killing form is negative definite.
It is important to note that the converse of the first result above is false: Even if the Killing form of a Lie algebra is negative semidefinite, this does not mean that the Lie algebra is the Lie algebra of some compact group. For example, the Killing form on the Lie algebra of the Heisenberg group is identically zero, hence negative semidefinite, but this Lie algebra is not the Lie algebra of any compact group.
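Both halves of this comparison can be computed numerically from structure constants (a sketch; the basis conventions are this example's): the Killing form B(X, Y) = tr(ad X ad Y) is −2 times the identity for su(2), hence negative definite, and identically zero for the Heisenberg algebra:

```python
import numpy as np

def killing(structure):
    """Killing form B(e_i, e_j) = tr(ad e_i @ ad e_j) from structure
    constants c[i][j][k], where [e_i, e_j] = sum_k c[i][j][k] e_k."""
    c = np.array(structure, dtype=float)
    n = c.shape[0]
    ad = [c[i].T for i in range(n)]          # (ad e_i)[k, j] = c[i][j][k]
    return np.array([[np.trace(ad[i] @ ad[j]) for j in range(n)]
                     for i in range(n)])

# su(2): [e1, e2] = e3, [e2, e3] = e1, [e3, e1] = e2.
eps = np.zeros((3, 3, 3))
for (i, j, k), s in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                     (1, 0, 2): -1, (2, 1, 0): -1, (0, 2, 1): -1}.items():
    eps[i][j][k] = s
print(killing(eps))         # -2 * identity: negative definite

# Heisenberg algebra: [e1, e2] = e3, all other brackets zero.
heis = np.zeros((3, 3, 3))
heis[0][1][2], heis[1][0][2] = 1, -1
print(killing(heis))        # the zero matrix: only negative semidefinite
```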
== Properties ==
Compact Lie algebras are reductive; note that the analogous result is true for compact groups in general.
The Lie algebra 𝔤 of a compact Lie group G admits an Ad(G)-invariant inner product. Conversely, if 𝔤 admits an Ad-invariant inner product, then 𝔤 is the Lie algebra of some compact group. If 𝔤 is semisimple, this inner product can be taken to be the negative of the Killing form. Thus relative to this inner product, Ad(G) acts by orthogonal transformations (SO(𝔤)) and ad 𝔤 acts by skew-symmetric matrices (𝔰𝔬(𝔤)). It is possible to develop the theory of complex semisimple Lie algebras by viewing them as the complexifications of Lie algebras of compact groups; the existence of an Ad-invariant inner product on the compact real form greatly simplifies the development.
This can be seen as a compact analog of Ado's theorem on the representability of Lie algebras: just as every finite-dimensional Lie algebra in characteristic 0 embeds in 𝔤𝔩, every compact Lie algebra embeds in 𝔰𝔬.
The Satake diagram of a compact Lie algebra is the Dynkin diagram of the complex Lie algebra with all vertices blackened.
Compact Lie algebras are opposite to split real Lie algebras among real forms, split Lie algebras being "as far as possible" from being compact.
== Classification ==
The compact Lie algebras are classified and named according to the compact real forms of the complex semisimple Lie algebras. These are:
An: 𝔰𝔲n+1, corresponding to the special unitary group (properly, the compact form is PSU, the projective special unitary group);
Bn: 𝔰𝔬2n+1, corresponding to the special orthogonal group (or 𝔬2n+1, corresponding to the orthogonal group);
Cn: 𝔰𝔭n, corresponding to the compact symplectic group, sometimes written 𝔲𝔰𝔭n;
Dn: 𝔰𝔬2n, corresponding to the special orthogonal group (or 𝔬2n, corresponding to the orthogonal group) (properly, the compact form is PSO, the projective special orthogonal group);
compact real forms of the exceptional Lie algebras E6, E7, E8, F4, G2.
=== Isomorphisms ===
The classification is non-redundant if one takes n ≥ 1 for An, n ≥ 2 for Bn, n ≥ 3 for Cn, and n ≥ 4 for Dn. If one instead takes n ≥ 0 or n ≥ 1, one obtains certain exceptional isomorphisms.
For n = 0, A0 ≅ B0 ≅ C0 ≅ D0 is the trivial diagram, corresponding to the trivial group SU(1) ≅ SO(1) ≅ Sp(0) ≅ SO(0).
For n = 1, the isomorphism 𝔰𝔲2 ≅ 𝔰𝔬3 ≅ 𝔰𝔭1 corresponds to the isomorphisms of diagrams A1 ≅ B1 ≅ C1 and the corresponding isomorphisms of Lie groups SU(2) ≅ Spin(3) ≅ Sp(1) (the 3-sphere or unit quaternions).
For n = 2, the isomorphism 𝔰𝔬5 ≅ 𝔰𝔭2 corresponds to the isomorphism of diagrams B2 ≅ C2 and the corresponding isomorphism of Lie groups Sp(2) ≅ Spin(5).
For n = 3, the isomorphism 𝔰𝔲4 ≅ 𝔰𝔬6 corresponds to the isomorphism of diagrams A3 ≅ D3 and the corresponding isomorphism of Lie groups SU(4) ≅ Spin(6).
If one considers E4 and E5 as diagrams, these are isomorphic to A4 and D5, respectively, with corresponding isomorphisms of Lie algebras.
== See also ==
Real form
Split Lie algebra
== Notes ==
== References ==
== External links ==
Lie group, compact, in Encyclopaedia of Mathematics | Wikipedia/Compact_Lie_algebra |
In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a (not necessarily commutative) ring. The concept of a module also generalizes the notion of an abelian group, since the abelian groups are exactly the modules over the ring of integers.
Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operations of addition between elements of the ring or module and is compatible with the ring multiplication.
Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology.
== Introduction and definition ==
=== Motivation ===
In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules.
Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. However, modules can be quite a bit more complicated than vector spaces; for instance, not all modules have a basis, and, even for those that do (free modules), the number of elements in a basis need not be the same for all bases (that is to say that they may not have a unique rank) if the underlying ring does not satisfy the invariant basis number condition, unlike vector spaces, which always have a (possibly infinite) basis whose cardinality is then unique. (These last two assertions require the axiom of choice in general, but not in the case of finite-dimensional vector spaces, or certain well-behaved infinite-dimensional vector spaces such as Lp spaces.)
=== Formal definition ===
Suppose that R is a ring, and 1 is its multiplicative identity.
A left R-module M consists of an abelian group (M, +) and an operation · : R × M → M such that for all r, s in R and x, y in M, we have
1. r ⋅ (x + y) = r ⋅ x + r ⋅ y,
2. (r + s) ⋅ x = r ⋅ x + s ⋅ x,
3. (rs) ⋅ x = r ⋅ (s ⋅ x),
4. 1 ⋅ x = x.
The operation · is called scalar multiplication. Often the symbol · is omitted, but in this article we use it and reserve juxtaposition for multiplication in R. One may write RM to emphasize that M is a left R-module. A right R-module MR is defined similarly in terms of an operation · : M × R → M.
Whether a module is a left module or a right module does not depend on whether the scalars are written on the left or on the right, but on property 3: if, in the above definition, property 3 is replaced by
(rs) ⋅ x = s ⋅ (r ⋅ x),
one gets a right-module, even if the scalars are written on the left. However, writing the scalars on the left for left-modules and on the right for right modules makes the manipulation of property 3 much easier.
Authors who do not require rings to be unital omit condition 4 in the definition above; they would call the structures defined above "unital left R-modules". In this article, consistent with the glossary of ring theory, all rings and modules are assumed to be unital.
An (R,S)-bimodule is an abelian group together with both a left scalar multiplication · by elements of R and a right scalar multiplication ∗ by elements of S, making it simultaneously a left R-module and a right S-module, satisfying the additional condition (r · x) ∗ s = r ⋅ (x ∗ s) for all r in R, x in M, and s in S.
If R is commutative, then left R-modules are the same as right R-modules and are simply called R-modules. Most often the scalars are written on the left in this case.
== Examples ==
If K is a field, then K-modules are called K-vector spaces (vector spaces over K).
If K is a field, and K[x] a univariate polynomial ring, then a K[x]-module M is a K-module with an additional action of x on M by a group homomorphism that commutes with the action of K on M. In other words, a K[x]-module is a K-vector space M combined with a linear map from M to M. Applying the structure theorem for finitely generated modules over a principal ideal domain to this example shows the existence of the rational and Jordan canonical forms.
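A numerical sketch of this correspondence (numpy; the matrix A is an arbitrary choice): letting x act through a linear map A makes K² a K[x]-module, with a polynomial p acting as p(A):

```python
import numpy as np

# A K[x]-module structure on K^2: x acts through the linear map A.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by 90 degrees
v = np.array([1.0, 0.0])

def act(poly_coeffs, v):
    """Act by p(x) = c0 + c1 x + ... + ck x^k via p(A) v."""
    out = np.zeros_like(v)
    power = v.copy()
    for c in poly_coeffs:
        out = out + c * power
        power = A @ power
    return out

# (x^2 + 1) acts as zero on every v: A satisfies its characteristic polynomial.
print(act([1.0, 0.0, 1.0], v))   # [0. 0.]
```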
The concept of a Z-module agrees with the notion of an abelian group. That is, every abelian group is a module over the ring of integers Z in a unique way. For n > 0, let n ⋅ x = x + x + ... + x (n summands), 0 ⋅ x = 0, and (−n) ⋅ x = −(n ⋅ x). Such a module need not have a basis—groups containing torsion elements do not. (For example, in the group of integers modulo 3, one cannot find even one element that satisfies the definition of a linearly independent set, since when an integer such as 3 or 6 multiplies an element, the result is 0. However, if a finite field is considered as a module over the same finite field taken as a ring, it is a vector space and does have a basis.)
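A brute-force check of the four axioms for the Z-module Z/6Z (plain Python; the modulus 6 is an arbitrary sample), together with the torsion phenomenon just described:

```python
n = 6  # Z/6Z as a Z-module: scalars are integers, elements are residues

def smul(r, x):
    return (r * x) % n

# Spot-check the four module axioms over a range of scalars and elements.
for r in range(-5, 6):
    for s in range(-5, 6):
        for x in range(n):
            for y in range(n):
                assert smul(r, (x + y) % n) == (smul(r, x) + smul(r, y)) % n
            assert smul(r + s, x) == (smul(r, x) + smul(s, x)) % n
            assert smul(r * s, x) == smul(r, smul(s, x))
            assert smul(1, x) == x

# Torsion: 3 is annihilated by the nonzero scalar 2, so no singleton is
# linearly independent over Z and the module has no basis.
print(smul(2, 3))  # 0
```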
The decimal fractions (including negative ones) form a module over the integers. Only singletons are linearly independent sets, but there is no singleton that can serve as a basis, so the module has no basis and no rank, in the usual sense of linear algebra. However this module has a torsion-free rank equal to 1.
If R is any ring and n a natural number, then the cartesian product Rn is both a left and right R-module over R if we use the component-wise operations. Hence when n = 1, R is an R-module, where the scalar multiplication is just ring multiplication. The case n = 0 yields the trivial R-module {0} consisting only of its identity element. Modules of this type are called free and if R has invariant basis number (e.g. any commutative ring or field) the number n is then the rank of the free module.
If Mn(R) is the ring of n × n matrices over a ring R, M is an Mn(R)-module, and ei is the n × n matrix with 1 in the (i, i)-entry (and zeros elsewhere), then eiM is an R-module, since reim = eirm ∈ eiM. So M breaks up as the direct sum of R-modules, M = e1M ⊕ ... ⊕ enM. Conversely, given an R-module M0, then M0⊕n is an Mn(R)-module. In fact, the category of R-modules and the category of Mn(R)-modules are equivalent. In the special case where the module M is just R as a module over itself, Rn is an Mn(R)-module.
If S is a nonempty set, M is a left R-module, and MS is the collection of all functions f : S → M, then with addition and scalar multiplication in MS defined pointwise by (f + g)(s) = f(s) + g(s) and (rf)(s) = rf(s), MS is a left R-module. The right R-module case is analogous. In particular, if R is commutative then the collection of R-module homomorphisms h : M → N (see below) is an R-module (and in fact a submodule of NM).
If X is a smooth manifold, then the smooth functions from X to the real numbers form a ring C∞(X). The set of all smooth vector fields defined on X forms a module over C∞(X), and so do the tensor fields and the differential forms on X. More generally, the sections of any vector bundle form a projective module over C∞(X), and by Swan's theorem, every projective module is isomorphic to the module of sections of some vector bundle; the category of C∞(X)-modules and the category of vector bundles over X are equivalent.
If R is any ring and I is any left ideal in R, then I is a left R-module, and analogously right ideals in R are right R-modules.
If R is a ring, we can define the opposite ring Rop, which has the same underlying set and the same addition operation, but the opposite multiplication: if ab = c in R, then ba = c in Rop. Any left R-module M can then be seen to be a right module over Rop, and any right module over R can be considered a left module over Rop.
Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra.
If R and S are rings with a ring homomorphism φ : R → S, then every S-module M is an R-module by defining rm = φ(r)m. In particular, S itself is such an R-module.
== Submodules and homomorphisms ==
Suppose M is a left R-module and N is a subgroup of M. Then N is a submodule (or more explicitly an R-submodule) if for any n in N and any r in R, the product r ⋅ n (or n ⋅ r for a right R-module) is in N.
If X is any subset of an R-module M, then the submodule spanned by X is defined to be
⟨X⟩ = ⋂N⊇X N, where N runs over the submodules of M that contain X; explicitly, ⟨X⟩ = {r1x1 + ⋯ + rkxk : ri ∈ R, xi ∈ X}, which is important in the definition of tensor products of modules.
The set of submodules of a given module M, together with the two binary operations + (the module spanned by the union of the arguments) and ∩, forms a lattice that satisfies the modular law:
Given submodules U, N1, N2 of M such that N1 ⊆ N2, then the following two submodules are equal: (N1 + U) ∩ N2 = N1 + (U ∩ N2).
If M and N are left R-modules, then a map f : M → N is a homomorphism of R-modules if for any m, n in M and r, s in R,
f(r ⋅ m + s ⋅ n) = r ⋅ f(m) + s ⋅ f(n).
This, like any homomorphism of mathematical objects, is just a mapping that preserves the structure of the objects. Another name for a homomorphism of R-modules is an R-linear map.
A bijective module homomorphism f : M → N is called a module isomorphism, and the two modules M and N are called isomorphic. Two isomorphic modules are identical for all practical purposes, differing solely in the notation for their elements.
The kernel of a module homomorphism f : M → N is the submodule of M consisting of all elements that are sent to zero by f, and the image of f is the submodule of N consisting of values f(m) for all elements m of M. The isomorphism theorems familiar from groups and vector spaces are also valid for R-modules.
Given a ring R, the set of all left R-modules together with their module homomorphisms forms an abelian category, denoted by R-Mod (see category of modules).
== Types of modules ==
Finitely generated
An R-module M is finitely generated if there exist finitely many elements x1, ..., xn in M such that every element of M is a linear combination of those elements with coefficients from the ring R.
Cyclic
A module is called a cyclic module if it is generated by one element.
Free
A free R-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring R. These are the modules that behave very much like vector spaces.
Projective
Projective modules are direct summands of free modules and share many of their desirable properties.
Injective
Injective modules are defined dually to projective modules.
Flat
A module is called flat if taking the tensor product of it with any exact sequence of R-modules preserves exactness.
Torsionless
A module is called torsionless if it embeds into its algebraic dual.
Simple
A simple module S is a module that is not {0} and whose only submodules are {0} and S. Simple modules are sometimes called irreducible.
Semisimple
A semisimple module is a direct sum (finite or not) of simple modules. Historically these modules are also called completely reducible.
Indecomposable
An indecomposable module is a non-zero module that cannot be written as a direct sum of two non-zero submodules. Every simple module is indecomposable, but there are indecomposable modules that are not simple (e.g. uniform modules).
Faithful
A faithful module M is one where the action of each r ≠ 0 in R on M is nontrivial (i.e. r ⋅ x ≠ 0 for some x in M). Equivalently, the annihilator of M is the zero ideal.
Torsion-free
A torsion-free module is a module over a ring such that 0 is the only element annihilated by a regular element (non zero-divisor) of the ring, equivalently rm = 0 implies r = 0 or m = 0.
Noetherian
A Noetherian module is a module that satisfies the ascending chain condition on submodules, that is, every increasing chain of submodules becomes stationary after finitely many steps. Equivalently, every submodule is finitely generated.
Artinian
An Artinian module is a module that satisfies the descending chain condition on submodules, that is, every decreasing chain of submodules becomes stationary after finitely many steps.
Graded
A graded module is a module with a decomposition as a direct sum M = ⨁x Mx over a graded ring R = ⨁x Rx such that RxMy ⊆ Mx+y for all x and y.
Uniform
A uniform module is a module in which all pairs of nonzero submodules have nonzero intersection.
== Further notions ==
=== Relation to representation theory ===
A representation of a group G over a field k is a module over the group ring k[G].
If M is a left R-module, then the action of an element r in R is defined to be the map M → M that sends each x to rx (or xr in the case of a right module), and is necessarily a group endomorphism of the abelian group (M, +). The set of all group endomorphisms of M is denoted EndZ(M) and forms a ring under addition and composition, and sending a ring element r of R to its action actually defines a ring homomorphism from R to EndZ(M).
Such a ring homomorphism R → EndZ(M) is called a representation of the abelian group M over the ring R; an alternative and equivalent way of defining left R-modules is to say that a left R-module is an abelian group M together with a representation of M over R. Such a representation R → EndZ(M) may also be called a ring action of R on M.
A representation is called faithful if the map R → EndZ(M) is injective. In terms of modules, this means that if r is an element of R such that rx = 0 for all x in M, then r = 0. Every abelian group is a faithful module over the integers or over the ring of integers modulo n, Z/nZ, for some n.
=== Generalizations ===
A ring R corresponds to a preadditive category R with a single object. With this understanding, a left R-module is just a covariant additive functor from R to the category Ab of abelian groups, and right R-modules are contravariant additive functors. This suggests that, if C is any preadditive category, a covariant additive functor from C to Ab should be considered a generalized left module over C. These functors form a functor category C-Mod, which is the natural generalization of the module category R-Mod.
Modules over commutative rings can be generalized in a different direction: take a ringed space (X, OX) and consider the sheaves of OX-modules (see sheaf of modules). These form a category OX-Mod, and play an important role in modern algebraic geometry. If X has only a single point, then this is a module category in the old sense over the commutative ring OX(X).
One can also consider modules over a semiring. Modules over rings are abelian groups, but modules over semirings are only commutative monoids. Most applications of modules are still possible. In particular, for any semiring S, the matrices over S form a semiring over which the tuples of elements from S are a module (in this generalized sense only). This allows a further generalization of the concept of vector space incorporating the semirings from theoretical computer science.
Over near-rings, one can consider near-ring modules, a nonabelian generalization of modules.
== See also ==
Group ring
Algebra (ring theory)
Module (model theory)
Module spectrum
Annihilator
== Notes ==
== References ==
== External links ==
"Module", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
module at the nLab | Wikipedia/Module_theory |
In mathematics, physics and chemistry, a space group is the symmetry group of a repeating pattern in space, usually in three dimensions. The elements of a space group (its symmetry operations) are the rigid transformations of the pattern that leave it unchanged. In three dimensions, space groups are classified into 219 distinct types, or 230 types if chiral copies are considered distinct. Space groups are discrete cocompact groups of isometries of an oriented Euclidean space in any number of dimensions. In dimensions other than 3, they are sometimes called Bieberbach groups.
In crystallography, space groups are also called the crystallographic or Fedorov groups, and represent a description of the symmetry of the crystal. A definitive source regarding 3-dimensional space groups is the International Tables for Crystallography Hahn (2002).
== History ==
Space groups in 2 dimensions are the 17 wallpaper groups which have been known for several centuries, though the proof that the list was complete was only given in 1891, after the much more difficult classification of space groups had largely been completed.
In 1879 the German mathematician Leonhard Sohncke listed the 65 space groups (called Sohncke groups) whose elements preserve the chirality. More accurately, he listed 66 groups, but both the Russian mathematician and crystallographer Evgraf Fedorov and the German mathematician Arthur Moritz Schoenflies noticed that two of them were really the same. The space groups in three dimensions were first enumerated in 1891 by Fedorov (whose list had two omissions (I43d and Fdd2) and one duplication (Fmm2)), and shortly afterwards in 1891 were independently enumerated by Schönflies (whose list had four omissions (I43d, Pc, Cc, ?) and one duplication (P421m)). The correct list of 230 space groups was found by 1892 during correspondence between Fedorov and Schönflies. William Barlow (1894) later enumerated the groups with a different method, but omitted four groups (Fdd2, I42d, P421d, and P421c) even though he already had the correct list of 230 groups from Fedorov and Schönflies; the common claim that Barlow was unaware of their work is incorrect. Burckhardt (1967) describes the history of the discovery of the space groups in detail.
== Elements ==
The space groups in three dimensions are made from combinations of the 32 crystallographic point groups with the 14 Bravais lattices, each of the latter belonging to one of 7 lattice systems. What this means is that the action of any element of a given space group can be expressed as the action of an element of the appropriate point group followed optionally by a translation. A space group is thus some combination of the translational symmetry of a unit cell (including lattice centering), the point group symmetry operations of reflection, rotation and improper rotation (also called rotoinversion), and the screw axis and glide plane symmetry operations. The combination of all these symmetry operations results in a total of 230 different space groups describing all possible crystal symmetries.
The number of replicates of the asymmetric unit in a unit cell is thus the number of lattice points in the cell times the order of the point group. This ranges from 1 in the case of space group P1 to 192 for a space group like Fm3m, the NaCl structure.
=== Elements fixing a point ===
The elements of the space group fixing a point of space are the identity element, reflections, rotations and improper rotations, including inversion points.
=== Translations ===
The translations form a normal abelian subgroup of rank 3, called the Bravais lattice (so named after French physicist Auguste Bravais). There are 14 possible types of Bravais lattice. The quotient of the space group by the Bravais lattice is a finite group which is one of the 32 possible point groups.
=== Glide planes ===
A glide plane is a reflection in a plane, followed by a translation parallel with that plane. This is noted by a, b, or c, depending on which axis the glide is along. There is also the n glide, which is a glide along half of a diagonal of a face, and the d glide, which is a glide a fourth of the way along either a face or space diagonal of the unit cell. The latter is called the diamond glide plane as it features in the diamond structure. In 17 space groups, due to the centering of the cell, the glides occur in two perpendicular directions simultaneously, i.e. the same glide plane can be called b or c, a or b, or a or c. For example, group Abm2 could also be called Acm2, and group Ccca could be called Cccb. In 1992, it was suggested to use the symbol e for such planes. The symbols for five space groups have been modified accordingly.
=== Screw axes ===
A screw axis is a rotation about an axis, followed by a translation along the direction of the axis. These are noted by a number, n, to describe the degree of rotation, where the number is how many operations must be applied to complete a full rotation (e.g., 3 would mean a rotation one third of the way around the axis each time). The degree of translation is then added as a subscript showing how far along the axis the translation is, as a portion of the parallel lattice vector. So, 21 is a twofold rotation followed by a translation of 1/2 of the lattice vector.
=== General formula ===
The general formula for the action of an element of a space group is
y = M·x + D
where M is its matrix, D is its vector, and where the element transforms point x into point y. In general, D = D (lattice) + D(M), where D(M) is a unique function of M that is zero for M being the identity. The matrices M form a point group that is a basis of the space group; the lattice must be symmetric under that point group, but the crystal structure itself may not be symmetric under that point group as applied to any particular point (that is, without a translation). For example, the diamond cubic structure does not have any point where the cubic point group applies.
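A numerical sketch of this formula (numpy; the chosen operation is a 21 screw axis along z, with lattice vector c = (0, 0, 1)): applying the operation twice gives a pure lattice translation, as the subscript 1 of 21 encodes:

```python
import numpy as np

# A 2_1 screw axis along z: rotate 180 degrees about z, then translate
# by half the lattice vector c = (0, 0, 1).
M = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 1]], dtype=float)
D = np.array([0.0, 0.0, 0.5])

def op(x):
    return M @ x + D

x = np.array([0.3, 0.1, 0.0])
once = op(x)
twice = op(once)
print(once)       # [-0.3 -0.1  0.5]
print(twice - x)  # [0. 0. 1.]: two applications give a lattice translation
```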
The lattice dimension can be less than the overall dimension, resulting in a "subperiodic" space group. For (overall dimension, lattice dimension):
(1,1): One-dimensional line groups
(2,1): Two-dimensional line groups: frieze groups
(2,2): Wallpaper groups
(3,1): Three-dimensional line groups; with the 3D crystallographic point groups, the rod groups
(3,2): Layer groups
(3,3): The space groups discussed in this article
=== Chirality ===
The 65 "Sohncke" space groups, not containing any mirrors, inversion points, improper rotations or glide planes, yield chiral crystals, not identical to their mirror image; whereas space groups that do include at least one of those give achiral crystals. Achiral molecules sometimes form chiral crystals, but chiral molecules always form chiral crystals, in one of the space groups that permit this.
Among the 65 Sohncke groups are 22 that come in 11 enantiomorphic pairs.
=== Combinations ===
Only certain combinations of symmetry elements are possible in a space group. Translations are always present, and the space group P1 has only translations and the identity element. The presence of mirrors implies glide planes as well, and the presence of rotation axes implies screw axes as well, but the converses are not true. An inversion and a mirror implies two-fold screw axes, and so on.
== Notation ==
There are at least ten methods of naming space groups. Some of these methods can assign several different names to the same space group, so altogether there are many thousands of different names.
Number
The International Union of Crystallography publishes tables of all space group types, and assigns each a unique number from 1 to 230. The numbering is arbitrary, except that groups with the same crystal system or point group are given consecutive numbers.
International symbol notation
Hermann–Mauguin notation
The Hermann–Mauguin (or international) notation describes the lattice and some generators for the group. It has a shortened form called the international short symbol, which is the one most commonly used in crystallography, and usually consists of a set of four symbols. The first describes the centering of the Bravais lattice (P, A, C, I, R or F). The next three describe the most prominent symmetry operation visible when projected along one of the high symmetry directions of the crystal. These symbols are the same as used in point groups, with the addition of the glide planes and screw axes described above. By way of example, the space group of quartz is P3121, showing that it exhibits primitive centering of the motif (i.e., once per unit cell), with a threefold screw axis and a twofold rotation axis. Note that it does not explicitly contain the crystal system, although this is unique to each space group (in the case of P3121, it is trigonal).
In the international short symbol the first symbol (31 in this example) denotes the symmetry along the major axis (c-axis in trigonal cases), the second (2 in this case) along axes of secondary importance (a and b) and the third symbol the symmetry in another direction. In the trigonal case there also exists a space group P3112. In this space group the twofold axes are not along the a and b-axes but in a direction rotated by 30°.
The international symbols and international short symbols for some of the space groups were changed slightly between 1935 and 2002, so several space groups have 4 different international symbols in use.
The viewing directions of the 7 crystal systems are shown as follows.
Hall notation
Space group notation with an explicit origin. Rotation, translation and axis-direction symbols are clearly separated and inversion centers are explicitly defined. The construction and format of the notation make it particularly suited to computer generation of symmetry information. For example, group number 3 has three Hall symbols: P 2y (P 1 2 1), P 2 (P 1 1 2), P 2x (P 2 1 1).
Schönflies notation
The space groups with a given point group are numbered 1, 2, 3, ... (in the same order as their international number), and this number is added as a superscript to the Schönflies symbol for the point group. For example, groups number 3 to 5, whose point group is C2, have Schönflies symbols C2^1, C2^2, C2^3.
Fedorov notation
Shubnikov symbol
Strukturbericht designation
A related notation for crystal structures, giving a letter and index: A for elements (monatomic), B for AB compounds, C for AB2 compounds, D for AmBn compounds, (E, F, ..., K) for more complex compounds, L for alloys, O for organic compounds, S for silicates. Some structure designations share the same space group. For example, space group 225 is A1, B1, and C1, and space group 221 is Ah and B2. However, crystallographers would not use Strukturbericht notation to describe the space group; rather, it would be used to describe a specific crystal structure (e.g. space group + atomic arrangement (motif)).
Orbifold notation (2D)
Fibrifold notation (3D)
As the name suggests, the orbifold notation describes the orbifold, given by the quotient of Euclidean space by the space group, rather than generators of the space group. It was introduced by Conway and Thurston, and is not used much outside mathematics. Some of the space groups have several different fibrifolds associated to them, so have several different fibrifold symbols.
Coxeter notation
Spatial and point symmetry groups, represented as modifications of the pure reflectional Coxeter groups.
Geometric notation
A geometric algebra notation.
== Classification systems ==
There are (at least) 10 different ways to classify space groups into classes. The relations between some of these are described in the following table. Each classification system is a refinement of the ones below it. To understand an explanation given here it may be necessary to understand the next one down.
Conway, Delgado Friedrichs, and Huson et al. (2001) gave another classification of the space groups, called a fibrifold notation, according to the fibrifold structures on the corresponding orbifold. They divided the 219 affine space groups into reducible and irreducible groups. The reducible groups fall into 17 classes corresponding to the 17 wallpaper groups, and the remaining 35 irreducible groups are the same as the cubic groups and are classified separately.
== In other dimensions ==
=== Bieberbach's theorems ===
In n dimensions, an affine space group, or Bieberbach group, is a discrete subgroup of isometries of n-dimensional Euclidean space with a compact fundamental domain. Bieberbach (1911, 1912) proved that the subgroup of translations of any such group contains n linearly independent translations, and is a free abelian subgroup of finite index, and is also the unique maximal normal abelian subgroup. He also showed that in any dimension n there are only a finite number of possibilities for the isomorphism class of the underlying group of a space group, and moreover the action of the group on Euclidean space is unique up to conjugation by affine transformations. This answers part of Hilbert's eighteenth problem. Zassenhaus (1948) showed that conversely any group that is the extension of Zn by a finite group acting faithfully is an affine space group. Combining these results shows that classifying space groups in n dimensions up to conjugation by affine transformations is essentially the same as classifying isomorphism classes for groups that are extensions of Zn by a finite group acting faithfully.
It is essential in Bieberbach's theorems to assume that the group acts as isometries; the theorems do not generalize to discrete cocompact groups of affine transformations of Euclidean space. A counter-example is given by the 3-dimensional Heisenberg group of the integers acting by translations on the Heisenberg group of the reals, identified with 3-dimensional Euclidean space. This is a discrete cocompact group of affine transformations of space, but does not contain a subgroup Z3.
=== Classification in small dimensions ===
This table gives the number of space group types in small dimensions, including the numbers of various classes of space group. The numbers of enantiomorphic pairs are given in parentheses.
=== Magnetic groups and time reversal ===
In addition to crystallographic space groups there are also magnetic space groups (also called two-color (black and white) crystallographic groups or Shubnikov groups). These symmetries contain an element known as time reversal. They treat time as an additional dimension, and the group elements can include time reversal as reflection in it. They are of importance in magnetic structures that contain ordered unpaired spins, i.e. ferro-, ferri- or antiferromagnetic structures as studied by neutron diffraction. The time reversal element flips a magnetic spin while leaving all other structure the same, and it can be combined with a number of other symmetry elements. Including time reversal, there are 1651 magnetic space groups in 3D (Kim 1999, p. 428). It has also been possible to construct magnetic versions for other overall and lattice dimensions (Daniel Litvin's papers: (Litvin 2008), (Litvin 2005)). Frieze groups are magnetic 1D line groups, layer groups are magnetic wallpaper groups, and the axial 3D point groups are magnetic 2D point groups. Number of original and magnetic groups by (overall, lattice) dimension: (Palistrant 2012), (Souvignier 2006).
== Table of space groups in 2 dimensions (wallpaper groups) ==
Table of the wallpaper groups using the classification of the 2-dimensional space groups:
For each geometric class, the possible arithmetic classes are
None: no reflection lines
Along: reflection lines along lattice directions
Between: reflection lines halfway in between lattice directions
Both: reflection lines both along and between lattice directions
== Table of space groups in 3 dimensions ==
Note: An e plane is a double glide plane, one having glides in two different directions. They are found in seven orthorhombic, five tetragonal and five cubic space groups, all with centered lattice. The use of the symbol e became official with Hahn (2002).
The lattice system can be found as follows. If the crystal system is not trigonal then the lattice system is of the same type. If the crystal system is trigonal, then the lattice system is hexagonal unless the space group is one of the seven in the rhombohedral lattice system consisting of the 7 trigonal space groups in the table above whose name begins with R. (The term rhombohedral system is also sometimes used as an alternative name for the whole trigonal system.) The hexagonal lattice system is larger than the hexagonal crystal system, and consists of the hexagonal crystal system together with the 18 groups of the trigonal crystal system other than the seven whose names begin with R.
The Bravais lattice of the space group is determined by the lattice system together with the initial letter of its name, which for the non-rhombohedral groups is P, I, F, A or C, standing for the primitive, body-centered, face-centered, A-face-centered or C-face-centered lattices. There are seven rhombohedral space groups, with initial letter R.
== Derivation of the crystal class from the space group ==
Leave out the Bravais type
Convert all symmetry elements with translational components into their respective symmetry elements without translation symmetry (Glide planes are converted into simple mirror planes; Screw axes are converted into simple axes of rotation)
Axes of rotation, rotoinversion axes and mirror planes remain unchanged.
== References ==
== External links ==
International Union of Crystallography
Point Groups and Bravais Lattices Archived 2012-07-16 at the Wayback Machine
Bilbao Crystallographic Server
Space Group Info (old)
Space Group Info (new)
Crystal Lattice Structures: Index by Space Group
Full list of 230 crystallographic space groups
Interactive 3D visualization of all 230 crystallographic space groups Archived 2021-04-18 at the Wayback Machine
Huson, Daniel H. (1999), The Fibrifold Notation and Classification for 3D Space Groups (PDF)
The Geometry Center: 2.1 Formulas for Symmetries in Cartesian Coordinates (two dimensions)
The Geometry Center: 10.1 Formulas for Symmetries in Cartesian Coordinates (three dimensions)
In nonrelativistic quantum mechanics, an account can be given of the existence of mass and spin (normally explained in Wigner's classification of relativistic mechanics) in terms of the representation theory of the Galilean group, which is the spacetime symmetry group of nonrelativistic quantum mechanics.
== Background ==
In 3 + 1 dimensions, this is the subgroup of the affine group on (t, x, y, z), whose linear part leaves invariant both the metric (gμν = diag(1, 0, 0, 0)) and the (independent) dual metric (gμν = diag(0, 1, 1, 1)). A similar definition applies for n + 1 dimensions.
We are interested in projective representations of this group, which are equivalent to unitary representations of the nontrivial central extension of the universal covering group of the Galilean group by the one-dimensional Lie group R, cf. the article Galilean group for the central extension of its Lie algebra. The method of induced representations will be used to survey these.
== Lie algebra ==
We focus on the (centrally extended, Bargmann) Lie algebra here, because it is simpler to analyze and we can always extend the results to the full Lie group through the Frobenius theorem.
{\displaystyle [E,P_{i}]=0}
{\displaystyle [P_{i},P_{j}]=0}
{\displaystyle [L_{ij},E]=0}
{\displaystyle [C_{i},C_{j}]=0}
{\displaystyle [L_{ij},L_{kl}]=i\hbar [\delta _{ik}L_{jl}-\delta _{il}L_{jk}-\delta _{jk}L_{il}+\delta _{jl}L_{ik}]}
{\displaystyle [L_{ij},P_{k}]=i\hbar [\delta _{ik}P_{j}-\delta _{jk}P_{i}]}
{\displaystyle [L_{ij},C_{k}]=i\hbar [\delta _{ik}C_{j}-\delta _{jk}C_{i}]}
{\displaystyle [C_{i},E]=i\hbar P_{i}}
{\displaystyle [C_{i},P_{j}]=i\hbar M\delta _{ij}~.}
E is the generator of time translations (Hamiltonian), Pi is the generator of translations (momentum operator), Ci is the generator of Galilean boosts, and Lij stands for a generator of rotations (angular momentum operator).
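These relations can be checked in a concrete realization. The following sympy sketch (an illustration added here; the one-particle realization P1 = −iħ∂x, C1 = Mx1 at t = 0 and E = P²/2M is an assumed model, not part of the original text) verifies [C1, P1] = iħM and [C1, E] = iħP1 on a generic wavefunction.

```python
import sympy as sp

x, y, z, hbar, M = sp.symbols('x y z hbar M', positive=True)
f = sp.Function('f')(x, y, z)          # generic test wavefunction

# Assumed one-particle realization at t = 0:
P = lambda g: -sp.I*hbar*sp.diff(g, x)             # P_1 = -i hbar d/dx
C = lambda g: M*x*g                                # C_1 = M x_1
E = lambda g: -hbar**2/(2*M)*(sp.diff(g, x, 2)
                              + sp.diff(g, y, 2)
                              + sp.diff(g, z, 2))  # E = P^2 / 2M

comm = lambda A, B, g: sp.simplify(A(B(g)) - B(A(g)))

print(comm(C, P, f))   # I*M*hbar*f(x, y, z)          ->  [C_1, P_1] = i hbar M
print(comm(C, E, f))   # hbar**2*Derivative(f, x)      ->  equals i hbar P_1 f
```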
=== Casimir invariants ===
The central charge M is a Casimir invariant.
The mass-shell invariant
{\displaystyle ME-{P^{2} \over 2}}
is an additional Casimir invariant.
In 3 + 1 dimensions, a third Casimir invariant is W2, where
{\displaystyle {\vec {W}}\equiv M{\vec {L}}+{\vec {P}}\times {\vec {C}}~,}
somewhat analogous to the Pauli–Lubanski pseudovector of relativistic mechanics.
More generally, in n + 1 dimensions, invariants will be a function of
{\displaystyle W_{ij}=ML_{ij}+P_{i}C_{j}-P_{j}C_{i}}
and
{\displaystyle W_{ijk}=P_{i}L_{jk}+P_{j}L_{ki}+P_{k}L_{ij}~,}
as well as of the above mass-shell invariant and central charge.
=== Schur's lemma ===
Using Schur's lemma, in an irreducible unitary representation, all these Casimir invariants are multiples of the identity. Call these coefficients m and mE0 and (in the case of 3 + 1 dimensions) w, respectively. Recalling that we are considering unitary representations here, we see that these eigenvalues have to be real numbers.
Thus, there are three cases: m > 0, m = 0 and m < 0. (The last case is similar to the first.) In 3 + 1 dimensions, when m > 0, we can write w = ms for the third invariant, where s represents the spin, or intrinsic angular momentum. More generally, in n + 1 dimensions, the generators L and C will be related, respectively, to the total angular momentum and center-of-mass moment by
{\displaystyle W_{ij}=MS_{ij}}
{\displaystyle L_{ij}=S_{ij}+X_{i}P_{j}-X_{j}P_{i}}
{\displaystyle C_{i}=MX_{i}-P_{i}t~.}
From a purely representation-theoretic point of view, one would have to study all of the representations; but, here, we are only interested in applications to quantum mechanics. There, E represents the energy, which has to be bounded below, if thermodynamic stability is required. Consider first the case where m is nonzero.
Considering the (E, P→) space with the constraint
{\displaystyle mE=mE_{0}+{P^{2} \over 2}~,}
we see that the Galilean boosts act transitively on this hypersurface. In fact, treating the energy E as the Hamiltonian, differentiating with respect to P, and applying Hamilton's equations, we obtain the mass-velocity relation m v→ = P→.
The hypersurface is parametrized by this velocity v→. Consider the stabilizer of a point on the orbit, (E0, 0), where the velocity is 0. Because of transitivity, we know the unitary irrep contains a nontrivial linear subspace with these energy-momentum eigenvalues. (This subspace only exists in a rigged Hilbert space, because the momentum spectrum is continuous.)
=== The little group ===
The subspace is spanned by E, P→, M and Lij. We already know how the subspace of the irrep transforms under all operators but the angular momentum. Note that the rotation subgroup is SO(3), and that we have to look at its double cover Spin(3), because we are considering projective representations. This is called the little group, a name given by Eugene Wigner. His method of induced representations specifies that the irrep is given by the direct sum of all the fibers in a vector bundle over the mE = mE0 + P2/2 hypersurface, whose fibers are a unitary irrep of Spin(3).
Spin(3) is none other than SU(2). (See representation theory of SU(2), where it is shown that the unitary irreps of SU(2) are labeled by s, a non-negative integer multiple of one half. This is called spin, for historical reasons.)
Consequently, for m ≠ 0, the unitary irreps are classified by m, E0 and a spin s.
Looking at the spectrum of E, it is evident that if m is negative, the spectrum of E is not bounded below. Hence, only the case with a positive mass is physical.
Now, consider the case m = 0. By unitarity,
{\displaystyle mE-{P^{2} \over 2}={-P^{2} \over 2}}
is nonpositive. Suppose it is zero. Here, the boosts as well as the rotations constitute the little group. Any unitary irrep of this little group also gives rise to a projective irrep of the Galilean group. As far as we can tell, only the case which transforms trivially under the little group has any physical interpretation, and it corresponds to the no-particle state, the vacuum.
The case where the invariant is negative requires additional comment. This corresponds to the representation class for m = 0 and non-zero P→. Extending the bradyon, luxon, tachyon classification from the representation theory of the Poincaré group to an analogous classification here, one may term these states synchrons. They represent an instantaneous transfer of non-zero momentum across a (possibly large) distance. Associated with them, by the above, is a "time" operator
{\displaystyle t=-{{\vec {P}}\cdot {\vec {C}} \over P^{2}}~,}
which may be identified with the time of transfer. These states are naturally interpreted as the carriers of instantaneous action-at-a-distance forces.
N.B. In the 3 + 1-dimensional Galilei group, the boost generator may be decomposed into
{\displaystyle {\vec {C}}={{\vec {W}}\times {\vec {P}} \over P^{2}}-{\vec {P}}t~,}
with W→ playing a role analogous to helicity.
== See also ==
Galilei-covariant tensor formulation
Representation theory of the Poincaré group
Wigner's classification
Pauli–Lubanski pseudovector
Representation theory of the diffeomorphism group
Rotation operator
== References ==
Bargmann, V. (1954). "On Unitary Ray Representations of Continuous Groups", Annals of Mathematics, Second Series, 59, No. 1 (Jan., 1954), pp. 1–46
Lévy-Leblond, Jean-Marc (1967), "Nonrelativistic Particles and Wave Equations" (PDF), Communications in Mathematical Physics, 6 (4), Springer: 286–311, Bibcode:1967CMaPh...6..286L, doi:10.1007/bf01646020, S2CID 121990089.
Ballentine, Leslie E. (1998). Quantum Mechanics, A Modern Development. World Scientific Publishing Co Pte Ltd. ISBN 981-02-4105-4.
Gilmore, Robert (2006). Lie Groups, Lie Algebras, and Some of Their Applications (Dover Books on Mathematics). ISBN 0486445291.
The concept of a system of imprimitivity is used in mathematics, particularly in algebra and analysis, in both cases within the context of the theory of group representations. It was used by George Mackey as the basis for his theory of induced unitary representations of locally compact groups.
The simplest case, and the context in which the idea was first noticed, is that of finite groups (see primitive permutation group). Consider a group G and subgroups H and K, with K contained in H. Then the left cosets of H in G are each the union of left cosets of K. Not only that, but translation (on one side) by any element g of G respects this decomposition. The connection with induced representations is that the permutation representation on cosets is the special case of induced representation, in which a representation is induced from a trivial representation. The structure, combinatorial in this case, respected by translation shows that either K is a maximal subgroup of G, or there is a system of imprimitivity (roughly, a lack of full "mixing"). In order to generalise this to other cases, the concept is re-expressed: first in terms of functions on G constant on K-cosets, and then in terms of projection operators (for example the averaging over K-cosets of elements of the group algebra).
Mackey also used the idea for his explication of quantization theory based on preservation of relativity groups acting on configuration space. This generalized work of Eugene Wigner and others and is often considered to be one of the pioneering ideas in canonical quantization.
== Example ==
To motivate the general definitions, a definition is first formulated, in the case of finite groups and their representations on finite-dimensional vector spaces.
Let G be a finite group and U a representation of G on a finite-dimensional complex vector space H. The action of G on elements of H induces an action of G on the vector subspaces W of H in this way:
{\displaystyle U_{g}W=\{U_{g}w:w\in W\}.}
Let X be a set of subspaces of H such that
the elements of X are permuted by the action of G on subspaces and
H is the (internal) algebraic direct sum of the elements of X, i.e.,
{\displaystyle H=\bigoplus _{W\in X}W.}
Then (U,X) is a system of imprimitivity for G.
Two assertions must hold in the definition above:
the spaces W for W ∈ X must span H, and
the spaces W ∈ X must be linearly independent, that is,
{\displaystyle \sum _{W\in X}c_{W}v_{W}=0,\quad v_{W}\in W\setminus \{0\}}
holds only when all the coefficients cW are zero.
If the action of G on the elements of X is transitive, then we say this is a transitive system of imprimitivity.
Let G be a finite group and G0 a subgroup of G. A representation U of G is induced from a representation V of G0 if and only if there exist the following:
a transitive system of imprimitivity (U, X) and
a subspace W0 ∈ X
such that G0 is the stabilizer subgroup of W0 under the action of G, i.e.
{\displaystyle G_{0}=\{g\in G:U_{g}W_{0}\subseteq W_{0}\}.}
and V is equivalent to the representation of G0 on W0 given by Uh | W0 for h ∈ G0. Note that, by this definition, "induced by" is a relation between representations. We would like to show that there is actually a mapping on representations which corresponds to this relation.
For finite groups one can show that a well-defined inducing construction exists on equivalence classes of representations by considering the character of a representation U defined by
{\displaystyle \chi _{U}(g)=\operatorname {tr} (U_{g}).}
If a representation U of G is induced from a representation V of G0, then
{\displaystyle \chi _{U}(g)={\frac {1}{|G_{0}|}}\sum _{\{x\in G:{x}^{-1}\,g\,x\in G_{0}\}}\chi _{V}({x}^{-1}\ g\ x),\quad \forall g\in G.}
Thus the character function χU (and therefore U itself) is completely determined by χV.
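For a concrete instance (an added illustration), the formula can be evaluated for G = S3 and G0 = A3 with V the trivial representation; the induced character should be 2 on even permutations and 0 on transpositions, i.e. the sum of the trivial and sign characters. The sketch below assumes nothing beyond the formula itself.

```python
from itertools import permutations

# Elements of S3 as tuples giving the images of (0, 1, 2).
G = list(permutations(range(3)))

def compose(p, q):                   # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0]*3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                         # parity via inversion count
    s = 1
    for i in range(3):
        for j in range(i+1, 3):
            if p[i] > p[j]:
                s = -s
    return s

G0 = [p for p in G if sign(p) == 1]  # A3, the even permutations
chi_V = lambda h: 1                  # trivial character of G0

def induced_character(g):
    conj = lambda x: compose(inverse(x), compose(g, x))
    return sum(chi_V(conj(x)) for x in G if conj(x) in G0) / len(G0)

for g in G:
    print(g, induced_character(g))   # 2.0 on A3, 0.0 on transpositions
```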
=== Example ===
Let G be a finite group and consider the space H of complex-valued functions on G. The left regular representation of G on H is defined by
{\displaystyle [\operatorname {L} _{g}\psi ](h)=\psi (g^{-1}h).}
Now H can be considered as the algebraic direct sum of the one-dimensional spaces Wx, for x ∈ G, where
{\displaystyle W_{x}=\{\psi \in H:\psi (g)=0,\quad \forall g\neq x\}.}
The spaces Wx are permuted by Lg.
== Infinite dimensional systems of imprimitivity ==
To generalize the finite dimensional definition given in the preceding section, a suitable replacement for the set X of vector subspaces of H which is permuted by the representation U is needed. As it turns out, a naïve approach based on subspaces of H will not work; for example the translation representation of R on L2(R) has no system of imprimitivity in this sense. The right formulation of direct sum decomposition is formulated in terms of projection-valued measures.
Mackey's original formulation was expressed in terms of a locally compact second countable (lcsc) group G, a standard Borel space X and a Borel group action
{\displaystyle G\times X\rightarrow X,\quad (g,x)\mapsto g\cdot x.}
We will refer to this as a standard Borel G-space.
The definitions can be given in a much more general context, but the original setup used by Mackey is still quite general and requires fewer technicalities.
Definition. Let G be a lcsc group acting on a standard Borel space X. A system of imprimitivity based on (G, X) consists of a separable Hilbert space H and a pair consisting of
A strongly-continuous unitary representation U: g → Ug of G on H.
A projection-valued measure π on the Borel sets of X with values in the projections of H;
which satisfy
{\displaystyle U_{g}\pi (A)U_{g^{-1}}=\pi (g\cdot A).}
=== Example ===
Let X be a standard G space and μ a σ-finite countably additive invariant measure on X. This means
{\displaystyle \mu (g^{-1}A)=\mu (A)}
for all g ∈ G and Borel subsets A of X.
Let π(A) be multiplication by the indicator function of A and Ug be the operator
{\displaystyle [U_{g}\psi ](x)=\psi (g^{-1}x).}
Then (U, π) is a system of imprimitivity of (G, X) on L2μ(X).
This system of imprimitivity is sometimes called the Koopman system of imprimitivity.
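A finite toy version of the Koopman system (an illustration added here, with G = X = Z/5 acting on itself by translation and counting measure as the invariant measure) lets one verify the covariance relation Ug π(A) Ug−1 = π(g·A) exactly:

```python
import numpy as np

n = 5
shift = lambda g: np.roll(np.eye(n), g, axis=0)   # U_g: (U_g psi)(x) = psi(x - g)
proj  = lambda A: np.diag([1.0 if x in A else 0.0 for x in range(n)])

g, A = 2, {0, 1, 3}
gA = {(g + a) % n for a in A}                     # the translated set g.A

U = shift(g)
lhs = U @ proj(A) @ U.T                           # U is real orthogonal, so U.T = U^{-1}
rhs = proj(gA)
print(np.allclose(lhs, rhs))                      # True: the imprimitivity relation
```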
== Homogeneous systems of imprimitivity ==
A system of imprimitivity is homogeneous of multiplicity n, where 1 ≤ n ≤ ω if and only if the corresponding projection-valued measure π on X is homogeneous of multiplicity n. In fact, X breaks up into a countable disjoint family {Xn} 1 ≤ n ≤ ω of Borel sets such that π is homogeneous of multiplicity n on Xn. It is also easy to show Xn is G invariant.
Lemma. Any system of imprimitivity is an orthogonal direct sum of homogeneous ones.
It can be shown that if the action of G on X is transitive, then any system of imprimitivity on X is homogeneous. More generally, if the action of G on X is ergodic (meaning that X cannot be reduced by invariant proper Borel sets of X) then any system of imprimitivity on X is homogeneous.
We now discuss how the structure of homogeneous systems of imprimitivity can be expressed in a form which generalizes the Koopman representation given in the example above.
In the following, we assume that μ is a σ-finite measure on a standard Borel G-space X such that the action of G respects the measure class of μ. This condition is weaker than invariance, but it suffices to construct a unitary translation operator similar to the Koopman operator in the example above. That G respects the measure class of μ means that the Radon–Nikodym derivative
{\displaystyle s(g,x)={\bigg [}{\frac {d\mu }{dg^{-1}\mu }}{\bigg ]}(x)\in [0,\infty )}
is well-defined for every g ∈ G, where
{\displaystyle g^{-1}\mu (A)=\mu (gA).}
It can be shown that there is a version of s which is jointly Borel measurable, that is
{\displaystyle s:G\times X\rightarrow [0,\infty )}
is Borel measurable and satisfies
{\displaystyle s(g,x)={\bigg [}{\frac {d\mu }{dg^{-1}\mu }}{\bigg ]}(x)\in [0,\infty )}
for almost all values of (g, x) ∈ G × X.
Suppose H is a separable Hilbert space, U(H) the unitary operators on H. A unitary cocycle is a Borel mapping
{\displaystyle \Phi :G\times X\rightarrow \operatorname {U} (H)}
such that
{\displaystyle \Phi (e,x)=I}
for almost all x ∈ X
{\displaystyle \Phi (gh,x)=\Phi (g,h\cdot x)\Phi (h,x)}
for almost all (g, h, x). A unitary cocycle is strict if and only if the above relations hold for all (g, h, x). It can be shown that for any unitary cocycle there is a strict unitary cocycle which is equal almost everywhere to it (Varadarajan, 1985).
Theorem. Define
{\displaystyle [U_{g}\psi ](x)={\sqrt {s(g,g^{-1}x)}}\ \Phi (g,g^{-1}x)\ \psi (g^{-1}x).}
Then U is a unitary representation of G on the Hilbert space
{\displaystyle \int _{X}^{\oplus }Hd\mu (x).}
Moreover, if for any Borel set A, π(A) is the projection operator
{\displaystyle \pi (A)\psi =1_{A}\psi ,\quad \int _{X}^{\oplus }Hd\mu (x)\rightarrow \int _{X}^{\oplus }Hd\mu (x),}
then (U, π) is a system of imprimitivity of (G,X).
Conversely, any homogeneous system of imprimitivity is of this form, for some σ-finite measure μ. This measure is unique up to measure equivalence, that is to say, two such measures have the same sets of measure 0.
Much more can be said about the correspondence between homogeneous systems of imprimitivity and cocycles.
When the action of G on X is transitive however, the correspondence takes a particularly explicit form based on the representation obtained by restricting the cocycle Φ to a fixed point subgroup of the action. We consider this case in the next section.
=== Example ===
A system of imprimitivity (U, π) of (G, X) on a separable Hilbert space H is irreducible if and only if the only closed subspaces invariant under all the operators Ug and π(A), for g an element of G and A a Borel subset of X, are H or {0}.
If (U, π) is irreducible, then π is homogeneous. Moreover, the corresponding measure on X as per the previous theorem is ergodic.
== Induced representations ==
If X is a Borel G space and x ∈ X, then the fixed point subgroup
{\displaystyle G_{x}=\{g\in G:g\cdot x=x\}}
is a closed subgroup of G. Since we are only assuming the action of G on X is Borel, this fact is non-trivial. To prove it, one can use the fact that a standard Borel G-space can be imbedded into a compact G-space in which the action is continuous.
Theorem. Suppose G acts on X transitively. Then there is a σ-finite quasi-invariant measure μ on X which is unique up to measure equivalence (that is any two such measures have the same sets of measure zero).
If Φ is a strict unitary cocycle
{\displaystyle \Phi :G\times X\rightarrow \operatorname {U} (H)}
then the restriction of Φ to the fixed point subgroup Gx is a Borel measurable unitary representation U of Gx on H (Here U(H) has the strong operator topology). However, it is known that a Borel measurable unitary representation is equal almost everywhere (with respect to Haar measure) to a strongly continuous unitary representation. This restriction mapping sets up a fundamental correspondence:
Theorem. Suppose G acts on X transitively with quasi-invariant measure μ. There is a bijection between unitary equivalence classes of systems of imprimitivity of (G, X) and unitary equivalence classes of representations of Gx.
Moreover, this bijection preserves irreducibility, that is a system of imprimitivity of (G, X) is irreducible if and only if the corresponding representation of Gx is irreducible.
Given a representation V of Gx the corresponding representation of G is called the representation induced by V.
See theorem 6.2 of (Varadarajan, 1985).
== Applications to the theory of group representations ==
Systems of imprimitivity arise naturally in the determination of the representations of a group G which is the semi-direct product of an abelian group N by a group H that acts by automorphisms of N. This means N is a normal subgroup of G and H a subgroup of G such that G = N H and N ∩ H = {e} (with e being the identity element of G).
An important example of this is the inhomogeneous Lorentz group.
Fix G, H and N as above and let X be the character space of N. In particular, H acts on X by
{\displaystyle [h\cdot \chi ](n)=\chi (h^{-1}nh).}
Theorem. There is a bijection between unitary equivalence classes of representations of G and unitary equivalence classes of systems of imprimitivity based on (H, X). This correspondence preserves intertwining operators. In particular, a representation of G is irreducible if and only if the corresponding system of imprimitivity is irreducible.
This result is of particular interest when the action of H on X is such that every ergodic quasi-invariant measure on X is transitive. In that case, each such measure is the image of (a totally finite version of) Haar measure by the map
{\displaystyle g\mapsto g\cdot x_{0}.}
A necessary condition for this to be the case is that there is a countable set of H invariant Borel sets which separate the orbits of H. This is the case for instance for the action of the Lorentz group on the character space of R4.
=== Example: the Heisenberg group ===
The Heisenberg group is the group of 3 × 3 real matrices of the form:
{\displaystyle {\begin{bmatrix}1&x&z\\0&1&y\\0&0&1\end{bmatrix}}.}
This group is the semi-direct product of
{\displaystyle H={\bigg \{}{\begin{bmatrix}1&w&0\\0&1&0\\0&0&1\end{bmatrix}}:w\in \mathbb {R} {\bigg \}}}
and the abelian normal subgroup
{\displaystyle N={\bigg \{}{\begin{bmatrix}1&0&t\\0&1&s\\0&0&1\end{bmatrix}}:s,t\in \mathbb {R} {\bigg \}}.}
Denote the typical matrix in H by [w] and the typical one in N by [s,t]. Then
{\displaystyle [w]^{-1}{\begin{bmatrix}s\\t\end{bmatrix}}[w]={\begin{bmatrix}s\\-ws+t\end{bmatrix}}={\begin{bmatrix}1&0\\-w&1\end{bmatrix}}{\begin{bmatrix}s\\t\end{bmatrix}}}
w acts on the dual of R2 by multiplication by the transpose matrix
{\displaystyle {\begin{bmatrix}1&-w\\0&1\end{bmatrix}}.}
This allows us to completely determine the orbits and the representation theory.
Orbit structure: The orbits fall into two classes:
A horizontal line which intersects the y-axis at a non-zero value y0. In this case, we can take the quasi-invariant measure on this line to be Lebesgue measure.
A single point (x0,0) on the x-axis
Fixed point subgroups: These also fall into two classes depending on the orbit:
The trivial subgroup {0}
The group H itself
Classification: This allows us to completely classify all irreducible representations of the Heisenberg group. These are parametrized by the set consisting of
R − {0}. These are infinite-dimensional.
Pairs (x0, λ) ∈ R × R. x0 is the abscissa of the single point orbit on the x-axis and λ is an element of the dual of H. These are one-dimensional.
We can write down explicit formulas for these representations by describing the restrictions to N and H.
Case 1. The corresponding representation π acts on L2(R) with respect to Lebesgue measure and is given by
{\displaystyle (\pi [s,t]\psi )(x)=e^{ity_{0}}e^{isx}\psi (x),}
{\displaystyle (\pi [w]\psi )(x)=\psi (x+wy_{0}).}
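As a consistency check (an added illustration), one can verify symbolically that these formulas are compatible with the conjugation relation [w]−1[s,t][w] = [s, −ws + t] computed above, i.e. that π[w]−1 π[s,t] π[w] = π[s, t − ws]:

```python
import sympy as sp

x, s, t, w, y0 = sp.symbols('x s t w y0', real=True)
psi = sp.Function('psi')

def pi_st(s_, t_, f):        # (pi[s,t] f)(x) = e^{i t y0} e^{i s x} f(x)
    return sp.exp(sp.I*t_*y0) * sp.exp(sp.I*s_*x) * f

def pi_w(w_, f):             # (pi[w] f)(x) = f(x + w y0)
    return f.subs(x, x + w_*y0)

f = psi(x)
lhs = pi_w(-w, pi_st(s, t, pi_w(w, f)))   # pi[w]^{-1} pi[s,t] pi[w] f
rhs = pi_st(s, t - w*s, f)                # pi[s, t - ws] f
print(sp.simplify(lhs - rhs))             # 0
```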
Case 2. The corresponding representation is given by the 1-dimensional character
{\displaystyle \pi [s,t]=e^{isx_{0}},}
{\displaystyle \pi [w]=e^{i\lambda w}.}
== References ==
G. W. Mackey, The Theory of Unitary Group Representations, University of Chicago Press, 1976.
V. S. Varadarajan, Geometry of Quantum Theory, Springer-Verlag, 1985.
David Edwards, The Mathematical Foundations of Quantum Mechanics, Synthese, Volume 42, Number 1/September, 1979, pp. 1–70.
In the theory of Lie groups, the exponential map is a map from the Lie algebra 𝔤 of a Lie group G to the group, which allows one to recapture the local group structure from the Lie algebra. The existence of the exponential map is one of the primary reasons that Lie algebras are a useful tool for studying Lie groups.
The ordinary exponential function of mathematical analysis is a special case of the exponential map when G is the multiplicative group of positive real numbers (whose Lie algebra is the additive group of all real numbers). The exponential map of a Lie group satisfies many properties analogous to those of the ordinary exponential function; however, it also differs in many important respects.
== Definitions ==
Let G be a Lie group and 𝔤 be its Lie algebra (thought of as the tangent space to the identity element of G). The exponential map is a map
{\displaystyle \exp \colon {\mathfrak {g}}\to G}
which can be defined in several different ways. The typical modern definition is this:
Definition: The exponential of X ∈ 𝔤 is given by exp(X) = γ(1), where γ : R → G is the unique one-parameter subgroup of G whose tangent vector at the identity is equal to X.
It follows easily from the chain rule that exp(tX) = γ(t). The map γ, a group homomorphism from (R, +) to G, may be constructed as the integral curve of either the right- or left-invariant vector field associated with X. That the integral curve exists for all real parameters follows by right- or left-translating the solution near zero.
We have a more concrete definition in the case of a matrix Lie group. The exponential map coincides with the matrix exponential and is given by the ordinary series expansion:
{\displaystyle \exp(X)=\sum _{k=0}^{\infty }{\frac {X^{k}}{k!}}=I+X+{\frac {1}{2}}X^{2}+{\frac {1}{6}}X^{3}+\cdots }
where I is the identity matrix. Thus, in the setting of matrix Lie groups, the exponential map is the restriction of the matrix exponential to the Lie algebra 𝔤 of G.
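As a numerical sketch (an addition for illustration), exponentiating the standard generator of 𝔰𝔬(2) yields a rotation matrix, and the truncated power series converges to the same result; the angle 0.7 is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm

theta = 0.7
X = theta * np.array([[0.0, -1.0],
                      [1.0,  0.0]])          # element of so(2)

R = expm(X)                                   # exp via scipy
print(np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]]))   # True

# Truncated series I + X + X^2/2! + ... approximates the same matrix.
S, term = np.eye(2), np.eye(2)
for k in range(1, 20):
    term = term @ X / k
    S += term
print(np.allclose(S, R))                      # True
```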
=== Comparison with Riemannian exponential map ===
If G is compact, it has a Riemannian metric invariant under left and right translations, and the Lie-theoretic exponential map for G coincides with the exponential map of this Riemannian metric.
For a general G, there will not exist a Riemannian metric invariant under both left and right translations. Although there is always a Riemannian metric invariant under, say, left translations, the exponential map in the sense of Riemannian geometry for a left-invariant metric will not in general agree with the exponential map in the Lie group sense. That is to say, if G is a Lie group equipped with a left- but not right-invariant metric, the geodesics through the identity will not be one-parameter subgroups of G.
=== Other definitions ===
Other equivalent definitions of the Lie-group exponential are as follows:
It is the exponential map of a canonical left-invariant affine connection on G, such that parallel transport is given by left translation. That is,
exp(X) = γ(1), where γ is the unique geodesic with the initial point at the identity element and the initial velocity X (thought of as a tangent vector).
It is the exponential map of a canonical right-invariant affine connection on G. This is usually different from the canonical left-invariant connection, but both connections have the same geodesics (orbits of 1-parameter subgroups acting by left or right multiplication) so give the same exponential map.
The Lie group–Lie algebra correspondence also gives the definition: for X ∈ 𝔤, the mapping t ↦ exp(tX) is the unique Lie group homomorphism (R, +) → G corresponding to the Lie algebra homomorphism R → 𝔤, t ↦ tX.
The exponential map is characterized by the differential equation
{\textstyle {\frac {d}{dt}}\exp(tX)=\exp(tX)\cdot X}
where the right side uses the translation mapping 𝔤 = TeG → TgG, X ↦ g·X, for g = exp(tX). In the one-dimensional case, this is equivalent to exp′(x) = exp(x).
== Examples ==
The unit circle centered at 0 in the complex plane is a Lie group (called the circle group) whose tangent space at 1 can be identified with the imaginary line in the complex plane, {it : t ∈ R}. The exponential map for this Lie group is given by
{\displaystyle it\mapsto \exp(it)=e^{it}=\cos(t)+i\sin(t),}
that is, the same formula as the ordinary complex exponential.
More generally, for a complex torus X = Cn/Λ, for some integral lattice Λ of rank n (so isomorphic to Zn), the torus comes equipped with a universal covering map π : Cn → X from the quotient by the lattice. Since X is locally isomorphic to Cn as a complex manifold, we can identify it with the tangent space T0X, and the map π : T0X → X then corresponds to the exponential map for the complex Lie group X.
In the quaternions H, the set of quaternions of unit length forms a Lie group (isomorphic to the special unitary group SU(2)) whose tangent space at 1 can be identified with the space of purely imaginary quaternions, {it + ju + kv : t, u, v ∈ R}. The exponential map for this Lie group is given by
{\displaystyle \mathbf {w} :=(it+ju+kv)\mapsto \exp(it+ju+kv)=\cos(|\mathbf {w} |)1+\sin(|\mathbf {w} |){\frac {\mathbf {w} }{|\mathbf {w} |}}.}
This map takes the 2-sphere of radius R inside the purely imaginary quaternions to {s ∈ S3 ⊂ H : Re(s) = cos(R)}, a 2-sphere of radius sin(R) (cf. Exponential of a Pauli vector). Compare this to the first example above.
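The closed form can be cross-checked against the matrix exponential under one standard identification of the imaginary quaternions with 2 × 2 anti-Hermitian matrices (a sketch added for illustration; the particular matrices for i, j, k below are one common convention, chosen as an assumption):

```python
import numpy as np
from scipy.linalg import expm

# One standard matrix model of the quaternion units (i*j = k, i^2 = -1, ...).
I2 = np.eye(2)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

t, u, v = 0.3, -0.8, 0.5
w = t*qi + u*qj + v*qk                  # w = it + ju + kv as a matrix
r = np.sqrt(t*t + u*u + v*v)            # |w|

closed_form = np.cos(r)*I2 + np.sin(r)*w/r
print(np.allclose(expm(w), closed_form))   # True
```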
Let V be a finite dimensional real vector space and view it as a Lie group under the operation of vector addition. Then Lie(V) = V via the identification of V with its tangent space at 0, and the exponential map exp : Lie(V) = V → V is the identity map, that is, exp(v) = v.
In the split-complex number plane z = x + yȷ, ȷ2 = +1, the imaginary line {ȷt : t ∈ R} forms the Lie algebra of the unit hyperbola group {cosh t + ȷ sinh t : t ∈ R}, since the exponential map is given by
{\displaystyle \jmath t\mapsto \exp(\jmath t)=\cosh t+\jmath \ \sinh t.}
== Properties ==
=== Elementary properties of the exponential ===
For all X ∈ 𝔤, the map γ(t) = exp(tX) is the unique one-parameter subgroup of G whose tangent vector at the identity is X. It follows that:
{\displaystyle \exp((t+s)X)=\exp(tX)\exp(sX)}
{\displaystyle \exp(-X)=\exp(X)^{-1}.}
More generally:
{\displaystyle \exp(X+Y)=\exp(X)\exp(Y),\quad {\text{if }}[X,Y]=0.}
The preceding identity does not hold in general; the assumption that X and Y commute is important.
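A quick numerical illustration of the failure (added here, not in the original): for the two standard nilpotent 2 × 2 matrices, which do not commute, exp(X + Y) differs from exp(X)exp(Y).

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 0.0],
              [1.0, 0.0]])

print(np.allclose(X @ Y, Y @ X))                    # False: [X, Y] != 0
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))  # False: the identity fails
```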
The image of the exponential map always lies in the identity component of G.
=== The exponential near the identity ===
The exponential map exp : 𝔤 → G is a smooth map. Its differential at zero, exp* : 𝔤 → 𝔤, is the identity map (with the usual identifications). It follows from the inverse function theorem that the exponential map restricts to a diffeomorphism from some neighborhood of 0 in 𝔤 to a neighborhood of 1 in G.
It is then not difficult to show that if G is connected, every element g of G is a product of exponentials of elements of 𝔤:
{\displaystyle g=\exp(X_{1})\exp(X_{2})\cdots \exp(X_{n}),\quad X_{j}\in {\mathfrak {g}}.}
Globally, the exponential map is not necessarily surjective. Furthermore, the exponential map may not be a local diffeomorphism at all points. For example, the exponential map from 𝔰𝔬(3) to SO(3) is not a local diffeomorphism; see also cut locus on this failure. See derivative of the exponential map for more information.
=== Surjectivity of the exponential ===
In these important special cases, the exponential map is known to always be surjective:
G is connected and compact,
G is connected and nilpotent (for example, G connected and abelian), or
G = GLn(C).
For groups not satisfying any of the above conditions, the exponential map may or may not be surjective.
The image of the exponential map of the connected but non-compact group SL2(R) is not the whole group. Its image consists of the matrices diagonalizable over C with eigenvalues either positive or of modulus 1, of the non-diagonalizable matrices with repeated eigenvalue 1, and of the matrix −I. (Thus, the image excludes matrices with real negative eigenvalues, other than −I.)
=== Exponential map and homomorphisms ===
Let φ : G → H be a Lie group homomorphism and let φ* be its derivative at the identity. Then the following diagram commutes:
In particular, when applied to the adjoint action of a Lie group G, since Ad* = ad, we have the useful identity:
{\displaystyle \mathrm {Ad} _{\exp X}(Y)=\exp(\mathrm {ad} _{X})(Y)=Y+[X,Y]+{\frac {1}{2!}}[X,[X,Y]]+{\frac {1}{3!}}[X,[X,[X,Y]]]+\cdots .}
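This identity is straightforward to test numerically (an illustrative sketch added here; random 3 × 3 matrices stand in for X and Y): compare conjugation by exp(X) with the series for exp(adX) applied to Y.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

bracket = lambda A, B: A @ B - B @ A

# Left side: Ad_{exp X}(Y) = exp(X) Y exp(-X)
lhs = expm(X) @ Y @ expm(-X)

# Right side: exp(ad_X)(Y) = sum_k ad_X^k(Y) / k!
rhs, term, fact = np.zeros_like(Y), Y.copy(), 1.0
for k in range(40):
    rhs += term / fact
    term = bracket(X, term)
    fact *= (k + 1)

print(np.allclose(lhs, rhs))    # True
```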
== Logarithmic coordinates ==
Given a Lie group G with Lie algebra 𝔤, each choice of a basis X1, …, Xn of 𝔤 determines a coordinate system near the identity element e for G, as follows. By the inverse function theorem, the exponential map
{\displaystyle \operatorname {exp} :N{\overset {\sim }{\to }}U}
is a diffeomorphism from some neighborhood N ⊂ 𝔤 ≃ Rn of the origin to a neighborhood U of e ∈ G. Its inverse
{\displaystyle \log :U{\overset {\sim }{\to }}N\subset \mathbb {R} ^{n}}
is then a coordinate system on U. It is called by various names such as logarithmic coordinates, exponential coordinates or normal coordinates. See the closed-subgroup theorem for an example of how they are used in applications.
Remark: The open cover {Ug : g ∈ G} gives a structure of a real-analytic manifold to G such that the group operation (g, h) ↦ gh−1 is real-analytic.
== See also ==
List of exponential topics
Derivative of the exponential map
Matrix exponential
== Citations ==
== Works cited ==
In the mathematical field of representation theory, a Lie algebra representation or representation of a Lie algebra is a way of writing a Lie algebra as a set of matrices (or endomorphisms of a vector space) in such a way that the Lie bracket is given by the commutator. In the language of physics, one looks for a vector space V together with a collection of operators on V satisfying some fixed set of commutation relations, such as the relations satisfied by the angular momentum operators.
The notion is closely related to that of a representation of a Lie group. Roughly speaking, the representations of Lie algebras are the differentiated form of representations of Lie groups, while the representations of the universal cover of a Lie group are the integrated form of the representations of its Lie algebra.
In the study of representations of a Lie algebra, a particular ring, called the universal enveloping algebra, associated with the Lie algebra plays an important role. The universality of this ring says that the category of representations of a Lie algebra is the same as the category of modules over its enveloping algebra.
== Formal definition ==
Let 𝔤 be a Lie algebra and let V be a vector space. We let 𝔤𝔩(V) denote the space of endomorphisms of V, that is, the space of all linear maps of V to itself. Here, the associative algebra 𝔤𝔩(V) is turned into a Lie algebra with bracket given by the commutator: [s, t] = s ∘ t − t ∘ s for all s, t in 𝔤𝔩(V). Then a representation of 𝔤 on V is a Lie algebra homomorphism
{\displaystyle \rho \colon {\mathfrak {g}}\to {\mathfrak {gl}}(V).}
Explicitly, this means that ρ should be a linear map and it should satisfy
{\displaystyle \rho ([X,Y])=\rho (X)\rho (Y)-\rho (Y)\rho (X)}
for all X, Y in 𝔤. The vector space V, together with the representation ρ, is called a 𝔤-module. (Many authors abuse terminology and refer to V itself as the representation.)
The representation ρ is said to be faithful if it is injective.
One can equivalently define a 𝔤-module as a vector space V together with a bilinear map 𝔤 × V → V such that
{\displaystyle [X,Y]\cdot v=X\cdot (Y\cdot v)-Y\cdot (X\cdot v)}
for all X, Y in 𝔤 and v in V. This is related to the previous definition by setting X ⋅ v = ρ(X)(v).
== Examples ==
=== Adjoint representations ===
The most basic example of a Lie algebra representation is the adjoint representation of a Lie algebra 𝔤 on itself:
{\displaystyle {\textrm {ad}}:{\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}}),\quad X\mapsto \operatorname {ad} _{X},\quad \operatorname {ad} _{X}(Y)=[X,Y].}
Indeed, by virtue of the Jacobi identity, ad is a Lie algebra homomorphism.
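A small numerical check of this fact (an illustration added here, not from the original article): for 𝔰𝔩(2) with basis (e, h, f), the matrix of adX satisfies ad[X,Y] = [adX, adY].

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

bracket = lambda A, B: A @ B - B @ A

def coords(M):
    # Expand a traceless 2x2 matrix in the basis (e, h, f).
    return np.array([M[0, 1], M[0, 0], M[1, 0]])

def ad(X):
    # Matrix of ad_X = [X, .] in the basis (e, h, f).
    return np.column_stack([coords(bracket(X, b)) for b in basis])

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)
X = a[0]*e + a[1]*h + a[2]*f
Y = b[0]*e + b[1]*h + b[2]*f

# Homomorphism property: ad_{[X,Y]} = [ad_X, ad_Y]
print(np.allclose(ad(bracket(X, Y)), bracket(ad(X), ad(Y))))   # True
```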
=== Infinitesimal Lie group representations ===
A Lie algebra representation also arises in nature. If φ : G → H is a homomorphism of (real or complex) Lie groups, and 𝔤 and 𝔥 are the Lie algebras of G and H respectively, then the differential deφ : 𝔤 → 𝔥 on tangent spaces at the identities is a Lie algebra homomorphism. In particular, for a finite-dimensional vector space V, a representation of Lie groups φ : G → GL(V) determines a Lie algebra homomorphism dφ : 𝔤 → 𝔤𝔩(V) from 𝔤 to the Lie algebra of the general linear group GL(V), i.e. the endomorphism algebra of V.
For example, let cg(x) = gxg−1. Then the differential of cg : G → G at the identity is an element of GL(𝔤). Denoting it by Ad(g), one obtains a representation Ad of G on the vector space 𝔤. This is the adjoint representation of G. Applying the preceding, one gets the Lie algebra representation d Ad. It can be shown that de Ad = ad, the adjoint representation of 𝔤.
A partial converse to this statement says that every representation of a finite-dimensional (real or complex) Lie algebra lifts to a unique representation of the associated simply connected Lie group, so that representations of simply-connected Lie groups are in one-to-one correspondence with representations of their Lie algebras.
=== In quantum physics ===
In quantum theory, one considers "observables" that are self-adjoint operators on a Hilbert space. The commutation relations among these operators are then an important tool. The angular momentum operators, for example, satisfy the commutation relations
{\displaystyle [L_{x},L_{y}]=i\hbar L_{z},\;\;[L_{y},L_{z}]=i\hbar L_{x},\;\;[L_{z},L_{x}]=i\hbar L_{y}.}
Thus, the span of these three operators forms a Lie algebra, which is isomorphic to the Lie algebra so(3) of the rotation group SO(3). Then if V is any subspace of the quantum Hilbert space that is invariant under the angular momentum operators, V will constitute a representation of the Lie algebra so(3). An understanding of the representation theory of so(3) is of great help in, for example, analyzing Hamiltonians with rotational symmetry, such as the hydrogen atom. Many other interesting Lie algebras (and their representations) arise in other parts of quantum physics. Indeed, the history of representation theory is characterized by rich interactions between mathematics and physics.
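For instance (a sketch added here for illustration), the spin-1/2 operators Li = ħσi/2 built from the Pauli matrices realize exactly these commutation relations on C2, with ħ set to 1 in the code below.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hbar = 1.0
Lx, Ly, Lz = hbar*sx/2, hbar*sy/2, hbar*sz/2

comm = lambda A, B: A @ B - B @ A

print(np.allclose(comm(Lx, Ly), 1j*hbar*Lz))   # True
print(np.allclose(comm(Ly, Lz), 1j*hbar*Lx))   # True
print(np.allclose(comm(Lz, Lx), 1j*hbar*Ly))   # True
```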
== Basic concepts ==
=== Invariant subspaces and irreducibility ===
Given a representation ρ : 𝔤 → End(V) of a Lie algebra 𝔤, we say that a subspace W of V is invariant if ρ(X)w ∈ W for all w ∈ W and X ∈ 𝔤. A nonzero representation is said to be irreducible if the only invariant subspaces are V itself and the zero space {0}. The term simple module is also used for an irreducible representation.
=== Homomorphisms ===
Let 𝔤 be a Lie algebra. Let V, W be 𝔤-modules. Then a linear map f : V → W is a homomorphism of 𝔤-modules if it is 𝔤-equivariant; i.e., f(X ⋅ v) = X ⋅ f(v) for any X ∈ 𝔤, v ∈ V. If f is bijective, V, W are said to be equivalent. Such maps are also referred to as intertwining maps or morphisms.
Similarly, many other constructions from module theory in abstract algebra carry over to this setting: submodule, quotient, subquotient, direct sum, Jordan-Hölder series, etc.
=== Schur's lemma ===
A simple but useful tool in studying irreducible representations is Schur's lemma. It has two parts:
If V, W are irreducible $\mathfrak{g}$-modules and $f:V\to W$ is a homomorphism, then $f$ is either zero or an isomorphism.
If V is an irreducible $\mathfrak{g}$-module over an algebraically closed field and $f:V\to V$ is a homomorphism, then $f$ is a scalar multiple of the identity.
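The second part can be illustrated numerically: in an irreducible representation, the only matrices commuting with the image of $\mathfrak{g}$ are the scalars. A sketch assuming NumPy, using the irreducible spin-1 representation of so(3):

```python
import numpy as np

# Spin-1 (3-dimensional, irreducible) representation of so(3):
# standard angular momentum matrices with hbar = 1.
s = 1.0
m = np.arange(s, -s - 1, -1)                     # m = 1, 0, -1
Lz = np.diag(m).astype(complex)
lp = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))  # raising coefficients
Lp = np.diag(lp, k=1).astype(complex)            # L_+
Lx = (Lp + Lp.conj().T) / 2
Ly = (Lp - Lp.conj().T) / (2j)

# Solve A Li = Li A for all i as a linear system in the entries of A,
# using the column-major identity vec(AL - LA) = (L^T kron I - I kron L) vec(A).
n = 3
rows = [np.kron(L.T, np.eye(n)) - np.kron(np.eye(n), L) for L in (Lx, Ly, Lz)]
M = np.vstack(rows)

# The nullspace of M is the space of intertwiners V -> V.
svals = np.linalg.svd(M, compute_uv=False)
nullity = int(np.sum(svals < 1e-10))
print(nullity)  # 1: the commutant is just the scalars, as Schur's lemma predicts
```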
=== Complete reducibility ===
Let V be a representation of a Lie algebra $\mathfrak{g}$. Then V is said to be completely reducible (or semisimple) if it is isomorphic to a direct sum of irreducible representations (cf. semisimple module). If V is finite-dimensional, then V is completely reducible if and only if every invariant subspace of V has an invariant complement. (That is, if W is an invariant subspace, then there is another invariant subspace P such that V is the direct sum of W and P.)
If $\mathfrak{g}$ is a finite-dimensional semisimple Lie algebra over a field of characteristic zero and V is finite-dimensional, then V is semisimple; this is Weyl's complete reducibility theorem. Thus, for semisimple Lie algebras, a classification of the irreducible (i.e. simple) representations leads immediately to a classification of all representations. For other Lie algebras, which do not have this special property, classifying the irreducible representations may not help much in classifying general representations.
A Lie algebra is said to be reductive if the adjoint representation is semisimple. Certainly, every (finite-dimensional) semisimple Lie algebra $\mathfrak{g}$ is reductive, since every representation of $\mathfrak{g}$ is completely reducible, as we have just noted. In the other direction, the definition of a reductive Lie algebra means that it decomposes as a direct sum of ideals (i.e., invariant subspaces for the adjoint representation) that have no nontrivial sub-ideals. Some of these ideals will be one-dimensional and the rest are simple Lie algebras. Thus, a reductive Lie algebra is a direct sum of a commutative algebra and a semisimple algebra.
=== Invariants ===
An element v of V is said to be $\mathfrak{g}$-invariant if $x\cdot v=0$ for all $x\in {\mathfrak {g}}$. The set of all invariant elements is denoted by $V^{\mathfrak {g}}$.
== Basic constructions ==
=== Tensor products of representations ===
If we have two representations of a Lie algebra $\mathfrak{g}$, with V1 and V2 as their underlying vector spaces, then the tensor product of the representations would have V1 ⊗ V2 as the underlying vector space, with the action of $\mathfrak{g}$ uniquely determined by the assumption that

$X\cdot (v_{1}\otimes v_{2})=(X\cdot v_{1})\otimes v_{2}+v_{1}\otimes (X\cdot v_{2})$

for all $v_{1}\in V_{1}$ and $v_{2}\in V_{2}$.
In the language of homomorphisms, this means that we define $\rho _{1}\otimes \rho _{2}:{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V_{1}\otimes V_{2})$ by the formula

$(\rho _{1}\otimes \rho _{2})(X)=\rho _{1}(X)\otimes I+I\otimes \rho _{2}(X).$

This is called the Kronecker sum of $\rho _{1}$ and $\rho _{2}$, defined in Matrix addition#Kronecker sum and Kronecker product#Properties, and more specifically in Tensor product of representations.
In the physics literature, the tensor product with the identity operator is often suppressed in the notation, with the formula written as

$(\rho _{1}\otimes \rho _{2})(X)=\rho _{1}(X)+\rho _{2}(X),$

where it is understood that $\rho _{1}(X)$ acts on the first factor in the tensor product and $\rho _{2}(X)$ acts on the second factor in the tensor product. In the context of representations of the Lie algebra su(2), the tensor product of representations goes under the name "addition of angular momentum." In this context, $\rho _{1}(X)$ might, for example, be the orbital angular momentum while $\rho _{2}(X)$ is the spin angular momentum.
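The Kronecker-sum formula is easy to exercise numerically: np.kron realizes the tensor product of matrices, and the map $X \mapsto \rho_1(X)\otimes I + I\otimes \rho_2(X)$ preserves brackets. A sketch assuming NumPy, combining the spin-1/2 and spin-1 representations of su(2):

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def kron_sum(A, B):
    """Action of X on V1 (x) V2: rho1(X) (x) I + I (x) rho2(X)."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# rho1: spin-1/2 representation (halved Pauli matrices).
rho1 = {
    "x": np.array([[0, 1], [1, 0]], dtype=complex) / 2,
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex) / 2,
    "z": np.array([[1, 0], [0, -1]], dtype=complex) / 2,
}
# rho2: spin-1 representation.
c = np.sqrt(2) / 2
rho2 = {
    "x": c * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex),
    "y": c * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex),
    "z": np.diag([1.0, 0.0, -1.0]).astype(complex),
}

# The combined operators on the 6-dimensional space V1 (x) V2.
rho = {k: kron_sum(rho1[k], rho2[k]) for k in "xyz"}

# Bracket check: [Jx, Jy] = i Jz must hold in the tensor product too.
print(np.allclose(comm(rho["x"], rho["y"]), 1j * rho["z"]))  # True
```

Decomposing this 6-dimensional representation into irreducibles recovers the familiar rule of angular momentum addition: spin-1/2 ⊗ spin-1 = spin-1/2 ⊕ spin-3/2.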
=== Dual representations ===
Let $\mathfrak{g}$ be a Lie algebra and $\rho :{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V)$ be a representation of $\mathfrak{g}$. Let $V^{*}$ be the dual space, that is, the space of linear functionals on $V$. Then we can define a representation $\rho ^{*}:{\mathfrak {g}}\rightarrow {\mathfrak {gl}}(V^{*})$ by the formula

$\rho ^{*}(X)=-(\rho (X))^{\operatorname {tr} },$

where for any operator $A:V\rightarrow V$, the transpose operator $A^{\operatorname {tr} }:V^{*}\rightarrow V^{*}$ is defined as the "composition with $A$" operator:

$(A^{\operatorname {tr} }\phi )(v)=\phi (Av).$

The minus sign in the definition of $\rho ^{*}$ is needed to ensure that $\rho ^{*}$ is actually a representation of $\mathfrak{g}$, in light of the identity $(AB)^{\operatorname {tr} }=B^{\operatorname {tr} }A^{\operatorname {tr} }.$
If we work in a basis, then the transpose in the above definition can be interpreted as the ordinary matrix transpose.
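The necessity of the minus sign can be checked directly with matrices: $X\mapsto -\rho(X)^{T}$ preserves brackets, while $X\mapsto \rho(X)^{T}$ reverses them. A sketch assuming NumPy, using the defining representation of sl(2):

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

# Defining representation of sl(2): rho is the identity map on these matrices.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])

def dual(A):
    return -A.T  # candidate dual representation

# A representation must satisfy rho*([X, Y]) = [rho*(X), rho*(Y)].
lhs = dual(comm(E, F))        # rho*([E, F]) = rho*(H)
rhs = comm(dual(E), dual(F))
print(np.allclose(lhs, rhs))  # True: with the minus sign it works

# Without the minus sign the bracket is reversed, so it fails in general.
print(np.allclose(comm(E, F).T, comm(E.T, F.T)))  # False
```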
=== Representation on linear maps ===
Let $V,W$ be $\mathfrak{g}$-modules, $\mathfrak{g}$ a Lie algebra. Then $\operatorname {Hom}(V,W)$ becomes a $\mathfrak{g}$-module by setting $(X\cdot f)(v)=Xf(v)-f(Xv)$. In particular, $\operatorname {Hom} _{\mathfrak {g}}(V,W)=\operatorname {Hom}(V,W)^{\mathfrak {g}}$; that is to say, the $\mathfrak{g}$-module homomorphisms from $V$ to $W$ are simply the elements of $\operatorname {Hom}(V,W)$ that are invariant under the just-defined action of $\mathfrak{g}$ on $\operatorname {Hom}(V,W)$. If we take $W$ to be the base field, we recover the action of $\mathfrak{g}$ on $V^{*}$ given in the previous subsection.
== Representation theory of semisimple Lie algebras ==
See Representation theory of semisimple Lie algebras.
== Enveloping algebras ==
To each Lie algebra $\mathfrak{g}$ over a field k, one can associate a certain ring called the universal enveloping algebra of $\mathfrak{g}$, denoted $U({\mathfrak {g}})$. The universal property of the universal enveloping algebra guarantees that every representation of $\mathfrak{g}$ gives rise to a representation of $U({\mathfrak {g}})$. Conversely, the PBW theorem tells us that $\mathfrak{g}$ sits inside $U({\mathfrak {g}})$, so that every representation of $U({\mathfrak {g}})$ can be restricted to $\mathfrak{g}$. Thus, there is a one-to-one correspondence between representations of $\mathfrak{g}$ and those of $U({\mathfrak {g}})$.
The universal enveloping algebra plays an important role in the representation theory of semisimple Lie algebras, described above. Specifically, the finite-dimensional irreducible representations are constructed as quotients of Verma modules, and Verma modules are constructed as quotients of the universal enveloping algebra.
The construction of $U({\mathfrak {g}})$ is as follows. Let T be the tensor algebra of the vector space $\mathfrak{g}$. Thus, by definition, $T=\bigoplus _{n=0}^{\infty }{\mathfrak {g}}^{\otimes n}$ and the multiplication on it is given by $\otimes$. Let $U({\mathfrak {g}})$ be the quotient ring of T by the ideal generated by elements of the form

$[X,Y]-(X\otimes Y-Y\otimes X).$
There is a natural linear map from $\mathfrak{g}$ into $U({\mathfrak {g}})$ obtained by restricting the quotient map $T\to U({\mathfrak {g}})$ to the degree-one piece. The PBW theorem implies that this canonical map is actually injective. Thus, every Lie algebra $\mathfrak{g}$ can be embedded into an associative algebra $A=U({\mathfrak {g}})$ in such a way that the bracket on $\mathfrak{g}$ is given by $[X,Y]=XY-YX$ in $A$.
If $\mathfrak{g}$ is abelian, then $U({\mathfrak {g}})$ is the symmetric algebra of the vector space $\mathfrak{g}$.
Since $\mathfrak{g}$ is a module over itself via the adjoint representation, the enveloping algebra $U({\mathfrak {g}})$ becomes a $\mathfrak{g}$-module by extending the adjoint representation. But one can also use the left and right regular representations to make the enveloping algebra a $\mathfrak{g}$-module; namely, with the notation $l_{X}(Y)=XY$, $X\in {\mathfrak {g}}$, $Y\in U({\mathfrak {g}})$, the mapping $X\mapsto l_{X}$ defines a representation of $\mathfrak{g}$ on $U({\mathfrak {g}})$. The right regular representation is defined similarly.
== Induced representation ==
Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over a field of characteristic zero and $\mathfrak{h}\subset \mathfrak{g}$ a subalgebra. $U({\mathfrak {h}})$ acts on $U({\mathfrak {g}})$ from the right and thus, for any $\mathfrak{h}$-module W, one can form the left $U({\mathfrak {g}})$-module $U({\mathfrak {g}})\otimes _{U({\mathfrak {h}})}W$. It is a $\mathfrak{g}$-module denoted by $\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W$ and called the $\mathfrak{g}$-module induced by W. It satisfies (and is in fact characterized by) the universal property: for any $\mathfrak{g}$-module E,

$\operatorname {Hom} _{\mathfrak {g}}(\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W,E)\simeq \operatorname {Hom} _{\mathfrak {h}}(W,\operatorname {Res} _{\mathfrak {h}}^{\mathfrak {g}}E).$
Furthermore, $\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}$ is an exact functor from the category of $\mathfrak{h}$-modules to the category of $\mathfrak{g}$-modules. This uses the fact that $U({\mathfrak {g}})$ is a free right module over $U({\mathfrak {h}})$. In particular, if $\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}W$ is simple (resp. absolutely simple), then W is simple (resp. absolutely simple). Here, a $\mathfrak{g}$-module V is absolutely simple if $V\otimes _{k}F$ is simple for any field extension $F/k$.
The induction is transitive: $\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}\simeq \operatorname {Ind} _{\mathfrak {h'}}^{\mathfrak {g}}\circ \operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {h'}}$ for any Lie subalgebra $\mathfrak{h'}\subset \mathfrak{g}$ and any Lie subalgebra $\mathfrak{h}\subset \mathfrak{h'}$. The induction commutes with restriction: let $\mathfrak{h}\subset \mathfrak{g}$ be a subalgebra and $\mathfrak{n}$ an ideal of $\mathfrak{g}$ that is contained in $\mathfrak{h}$. Set $\mathfrak{g}_{1}=\mathfrak{g}/\mathfrak{n}$ and $\mathfrak{h}_{1}=\mathfrak{h}/\mathfrak{n}$. Then

$\operatorname {Ind} _{\mathfrak {h}}^{\mathfrak {g}}\circ \operatorname {Res} _{\mathfrak {h}}\simeq \operatorname {Res} _{\mathfrak {g}}\circ \operatorname {Ind} _{\mathfrak {h}_{1}}^{\mathfrak {g}_{1}}.$
== Infinite-dimensional representations and "category O" ==
Let $\mathfrak{g}$ be a finite-dimensional semisimple Lie algebra over a field of characteristic zero. (In the solvable or nilpotent case, one studies primitive ideals of the enveloping algebra; cf. Dixmier for the definitive account.)
The category of (possibly infinite-dimensional) modules over $\mathfrak{g}$ turns out to be too large, especially for homological algebra methods to be useful: it was realized that the smaller subcategory category O is a better place for representation theory in the semisimple case in characteristic zero. For instance, category O turned out to be of the right size to formulate the celebrated BGG reciprocity.
== (g,K)-module ==
One of the most important applications of Lie algebra representations is to the representation theory of real reductive Lie groups. The application is based on the idea that if $\pi$ is a Hilbert-space representation of, say, a connected real semisimple linear Lie group G, then it carries two natural actions: one of the complexification $\mathfrak{g}$ of the Lie algebra of G, and one of the connected maximal compact subgroup K. The $\mathfrak{g}$-module structure of $\pi$ allows algebraic, especially homological, methods to be applied, and the $K$-module structure allows harmonic analysis to be carried out in a way similar to that on connected compact semisimple Lie groups.
== Representation on an algebra ==
If we have a Lie superalgebra L, then a representation of L on an algebra is a (not necessarily associative) Z2 graded algebra A which is a representation of L as a Z2 graded vector space and in which, in addition, the elements of L act as derivations/antiderivations on A.
More specifically, if H is a pure element of L and x and y are pure elements of A,

$H[xy]=(H[x])y+(-1)^{|x||H|}\,x(H[y]).$
Also, if A is unital, then
H[1] = 0
Now, for the case of a representation of a Lie algebra, we simply drop all the gradings and the (−1)-to-some-power sign factors.
A Lie (super)algebra is an algebra and it has an adjoint representation of itself. This is a representation on an algebra: the (anti)derivation property is the super Jacobi identity.
If a vector space is both an associative algebra and a Lie algebra and the adjoint representation of the Lie algebra on itself is a representation on an algebra (i.e., acts by derivations on the associative algebra structure), then it is a Poisson algebra. The analogous observation for Lie superalgebras gives the notion of a Poisson superalgebra.
== See also ==
Representation of a Lie group
Weight (representation theory)
Weyl's theorem on complete reducibility
Root system
Weyl character formula
Representation theory of a connected compact Lie group
Whitehead's lemma (Lie algebras)
Kazhdan–Lusztig conjectures
Quillen's lemma - analog of Schur's lemma
== Notes ==
== References ==
Bernstein I.N., Gelfand I.M., Gelfand S.I., "Structure of Representations that are generated by vectors of highest weight," Functional Anal. Appl. 5 (1971)
Dixmier, J. (1977), Enveloping Algebras, Amsterdam, New York, Oxford: North-Holland, ISBN 0-444-11077-1.
A. Beilinson and J. Bernstein, "Localisation de g-modules," Comptes Rendus de l'Académie des Sciences, Série I, vol. 292, iss. 1, pp. 15–18, 1981.
Bäuerle, G.G.A; de Kerf, E.A. (1990). A. van Groesen; E.M. de Jager (eds.). Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 1. North-Holland. ISBN 0-444-88776-8.
Bäuerle, G.G.A; de Kerf, E.A.; ten Kroode, A.P.E. (1997). A. van Groesen; E.M. de Jager (eds.). Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 7. North-Holland. ISBN 978-0-444-82836-1 – via ScienceDirect.
Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97495-8. MR 1153249.
D. Gaitsgory, Geometric Representation theory, Math 267y, Fall 2005
Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158
Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
Rossmann, Wulf (2002), Lie Groups - An Introduction Through Linear Groups, Oxford Graduate Texts in Mathematics, Oxford Science Publications, ISBN 0-19-859683-9
Ryoshi Hotta, Kiyoshi Takeuchi, Toshiyuki Tanisaki, D-modules, perverse sheaves, and representation theory; translated by Kiyoshi Takeuchi
Humphreys, James (1972), Introduction to Lie Algebras and Representation Theory, Graduate Texts in Mathematics, vol. 9, Springer, ISBN 9781461263982
Jacobson, Nathan (1979) [1962]. Lie algebras. Dover. ISBN 978-0-486-63832-4.
Garrett Birkhoff; Philip M. Whitman (1949). "Representation of Jordan and Lie Algebras" (PDF). Trans. Amer. Math. Soc. 65: 116–136. doi:10.1090/s0002-9947-1949-0029366-6.
Kirillov, A. (2008). An Introduction to Lie Groups and Lie Algebras. Cambridge Studies in Advanced Mathematics. Vol. 113. Cambridge University Press. ISBN 978-0521889698.
Knapp, Anthony W. (2001), Representation theory of semisimple groups. An overview based on examples., Princeton Landmarks in Mathematics, Princeton University Press, ISBN 0-691-09089-0 (elementary treatment for SL(2,C))
Knapp, Anthony W. (2002), Lie Groups Beyond an Introduction (second ed.), Birkhäuser
== Further reading ==
Ben-Zvi, David; Nadler, David (2012). "Beilinson-Bernstein localization over the Harish-Chandra center". arXiv:1209.0188v1 [math.RT]. | Wikipedia/Representation_of_a_Lie_algebra |
In the field of mathematics called abstract algebra, a division algebra is, roughly speaking, an algebra over a field in which division, except by zero, is always possible.
== Definitions ==
Formally, we start with a non-zero algebra D over a field. We call D a division algebra if for any element a in D and any non-zero element b in D there exists precisely one element x in D with a = bx and precisely one element y in D such that a = yb.
For associative algebras, the definition can be simplified as follows: a non-zero associative algebra over a field is a division algebra if and only if it has a multiplicative identity element 1 and every non-zero element a has a multiplicative inverse (i.e. an element x with ax = xa = 1).
== Associative division algebras ==
The best-known examples of associative division algebras are the finite-dimensional real ones (that is, algebras over the field R of real numbers, which are finite-dimensional as a vector space over the reals). The Frobenius theorem states that up to isomorphism there are three such algebras: the reals themselves (dimension 1), the field of complex numbers (dimension 2), and the quaternions (dimension 4).
Wedderburn's little theorem states that if D is a finite division algebra, then D is a finite field.
Over an algebraically closed field K (for example the complex numbers C), there are no finite-dimensional associative division algebras, except K itself.
Associative division algebras have no nonzero zero divisors. A finite-dimensional unital associative algebra (over any field) is a division algebra if and only if it has no nonzero zero divisors.
Whenever A is an associative unital algebra over the field F and S is a simple module over A, then the endomorphism ring of S is a division algebra over F; every associative division algebra over F arises in this fashion.
The center of an associative division algebra D over the field K is a field containing K. The dimension of such an algebra over its center, if finite, is a perfect square: it is equal to the square of the dimension of a maximal subfield of D over the center. Given a field F, the Brauer equivalence classes of simple (contains only trivial two-sided ideals) associative division algebras whose center is F and which are finite-dimensional over F can be turned into a group, the Brauer group of the field F.
One way to construct finite-dimensional associative division algebras over arbitrary fields is given by the quaternion algebras (see also quaternions).
For infinite-dimensional associative division algebras, the most important cases are those where the space has some reasonable topology. See for example normed division algebras and Banach algebras.
== Not necessarily associative division algebras ==
If the division algebra is not assumed to be associative, usually some weaker condition (such as alternativity or power associativity) is imposed instead. See algebra over a field for a list of such conditions.
Over the reals there are (up to isomorphism) only two unitary commutative finite-dimensional division algebras: the reals themselves, and the complex numbers. These are of course both associative. For a non-associative example, consider the complex numbers with multiplication defined by taking the complex conjugate of the usual multiplication:
$a*b={\overline {ab}}.$

This is a commutative, non-associative division algebra of dimension 2 over the reals, and has no unit element. There are infinitely many other non-isomorphic commutative, non-associative, finite-dimensional real division algebras, but they all have dimension 2.
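This example can be exercised directly: multiplication is commutative but not associative, yet b * x = a is always uniquely solvable for b ≠ 0, since conj(bx) = a is equivalent to x = conj(a)/b. A minimal sketch in Python using the built-in complex type:

```python
# The non-associative division algebra (C, *) with a * b = conj(ab).
def star(a: complex, b: complex) -> complex:
    return (a * b).conjugate()

a, b, c = 1 + 2j, 3 - 1j, -2 + 5j

# Multiplication is commutative but not associative ...
print(star(a, b) == star(b, a))                    # True
print(star(star(a, b), c) == star(a, star(b, c)))  # False in general

# ... yet division is always possible: b * x = a has the unique
# solution x = conj(a) / b.
x = a.conjugate() / b
print(star(b, x) == a)  # True
```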
In fact, every finite-dimensional real commutative division algebra is either 1- or 2-dimensional. This is known as Hopf's theorem, and was proved in 1940. The proof uses methods from topology. Although a later proof was found using algebraic geometry, no direct algebraic proof is known. The fundamental theorem of algebra is a corollary of Hopf's theorem.
Dropping the requirement of commutativity, Hopf generalized his result: Any finite-dimensional real division algebra must have dimension a power of 2.
Later work showed that in fact, any finite-dimensional real division algebra must be of dimension 1, 2, 4, or 8. This was independently proved by Michel Kervaire and John Milnor in 1958, again using techniques of algebraic topology, in particular K-theory. Adolf Hurwitz had shown in 1898 that the identity
$q{\overline {q}}={\text{sum of squares}}$
held only for dimensions 1, 2, 4 and 8. (See Hurwitz's theorem.) The challenge of constructing a division algebra of three dimensions was tackled by several early mathematicians. Kenneth O. May surveyed these attempts in 1966.
Any real finite-dimensional division algebra
over the reals must be
isomorphic to R or C if unitary and commutative (equivalently: associative and commutative)
isomorphic to the quaternions if noncommutative but associative
isomorphic to the octonions if non-associative but alternative.
The following is known about the dimension of a finite-dimensional division algebra A over a field K:
dim A = 1 if K is algebraically closed,
dim A = 1, 2, 4 or 8 if K is real closed, and
If K is neither algebraically nor real closed, then there are infinitely many dimensions in which there exist division algebras over K.
We may say an algebra A has multiplicative inverses if for any nonzero $a\in A$ there is an element $a^{-1}\in A$ with $aa^{-1}=a^{-1}a=1$. An associative algebra has multiplicative inverses if and only if it is a division algebra. However, this fails for nonassociative algebras. The sedenions are a nonassociative algebra over the real numbers that has multiplicative inverses, but is not a division algebra. On the other hand, we can construct a division algebra without multiplicative inverses by taking the quaternions and modifying the product, setting $i^{2}=-1+\epsilon j$ for some small nonzero real number $\epsilon$ while leaving the rest of the multiplication table unchanged. The element $i$ then has both right and left inverses, but they are not equal.
== See also ==
Normed division algebra
Division (mathematics)
Division ring
Semifield
Cayley–Dickson construction
== Notes ==
== References ==
Cohn, Paul Moritz (2003). Basic algebra: groups, rings, and fields. London: Springer-Verlag. doi:10.1007/978-0-85729-428-9. ISBN 978-1-85233-587-8. MR 1935285.
Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate Texts in Mathematics. Vol. 131 (2 ed.). Springer. ISBN 0-387-95183-0.
== External links ==
"Division algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Division_algebra |
In mathematics, loop algebras are certain types of Lie algebras, of particular interest in theoretical physics.
== Definition ==
For a Lie algebra $\mathfrak{g}$ over a field $K$, if $K[t,t^{-1}]$ is the space of Laurent polynomials, then

$L{\mathfrak {g}}:={\mathfrak {g}}\otimes K[t,t^{-1}],$

with the inherited bracket

$[X\otimes t^{m},Y\otimes t^{n}]=[X,Y]\otimes t^{m+n}.$
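The bracket can be prototyped directly by storing a loop-algebra element as a dictionary mapping powers of t to matrices in a chosen base Lie algebra. A sketch assuming NumPy; here the base algebra is sl(2) realized as 2×2 matrices:

```python
import numpy as np

# A loop algebra element X (x) t^m is stored as {m: X} with X a matrix;
# general elements are finite sums, i.e. dicts {power: matrix}.
def bracket(u, v):
    """[X (x) t^m, Y (x) t^n] = [X, Y] (x) t^(m+n), extended bilinearly."""
    out = {}
    for m, X in u.items():
        for n, Y in v.items():
            Z = X @ Y - Y @ X
            if not np.allclose(Z, 0):
                out[m + n] = out.get(m + n, np.zeros_like(Z)) + Z
    return out

# sl(2) basis.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])

u = {2: E}           # E (x) t^2
v = {-1: F}          # F (x) t^(-1)
w = bracket(u, v)    # should be [E, F] (x) t^1 = H (x) t
print(list(w.keys()), np.allclose(w[1], H))  # [1] True
```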
=== Geometric definition ===
If $\mathfrak{g}$ is a Lie algebra, the tensor product of $\mathfrak{g}$ with C∞(S1), the algebra of (complex) smooth functions over the circle manifold S1 (equivalently, smooth complex-valued periodic functions of a given period),

${\mathfrak {g}}\otimes C^{\infty }(S^{1}),$

is an infinite-dimensional Lie algebra with the Lie bracket given by

$[g_{1}\otimes f_{1},g_{2}\otimes f_{2}]=[g_{1},g_{2}]\otimes f_{1}f_{2}.$

Here g1 and g2 are elements of $\mathfrak{g}$ and f1 and f2 are elements of C∞(S1).
This isn't precisely what would correspond to the direct product of infinitely many copies of $\mathfrak{g}$, one for each point in S1, because of the smoothness restriction. Instead, it can be thought of in terms of smooth maps from S1 to $\mathfrak{g}$; a smooth parametrized loop in $\mathfrak{g}$, in other words. This is why it is called the loop algebra.
== Gradation ==
Defining $\mathfrak{g}_{i}$ to be the linear subspace $\mathfrak{g}_{i}=\mathfrak{g}\otimes t^{i}<L{\mathfrak {g}}$, the bracket restricts to a product

$[\cdot \,,\,\cdot ]:{\mathfrak {g}}_{i}\times {\mathfrak {g}}_{j}\rightarrow {\mathfrak {g}}_{i+j},$

hence giving the loop algebra a $\mathbb{Z}$-graded Lie algebra structure.
In particular, the bracket restricts to the 'zero-mode' subalgebra ${\mathfrak {g}}_{0}\cong {\mathfrak {g}}$.
== Derivation ==
There is a natural derivation on the loop algebra, conventionally denoted $d$, acting as

$d:L{\mathfrak {g}}\rightarrow L{\mathfrak {g}},$
$d(X\otimes t^{n})=nX\otimes t^{n},$

and so can be thought of formally as $d=t{\frac {d}{dt}}$.
It is required to define affine Lie algebras, which are used in physics, particularly conformal field theory.
== Loop group ==
Similarly, the set of all smooth maps from S1 to a Lie group G forms an infinite-dimensional Lie group (a Lie group in the sense that we can define functional derivatives over it) called the loop group. The Lie algebra of a loop group is the corresponding loop algebra.
== Affine Lie algebras as central extension of loop algebras ==
If $\mathfrak{g}$ is a semisimple Lie algebra, then a nontrivial central extension of its loop algebra $L{\mathfrak {g}}$ gives rise to an affine Lie algebra. Furthermore, this central extension is unique.
The central extension is given by adjoining a central element $\hat{k}$, that is, for all $X\otimes t^{n}\in L{\mathfrak {g}}$,

$[{\hat {k}},X\otimes t^{n}]=0,$

and modifying the bracket on the loop algebra to

$[X\otimes t^{m},Y\otimes t^{n}]=[X,Y]\otimes t^{m+n}+mB(X,Y)\delta _{m+n,0}{\hat {k}},$

where $B(\cdot ,\cdot )$ is the Killing form.
The central extension is, as a vector space, $L{\mathfrak {g}}\oplus \mathbb {C} {\hat {k}}$ (in its usual definition; more generally, $\mathbb {C}$ can be taken to be an arbitrary field).
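Extending the dictionary-based sketch given for the loop algebra, the centrally extended bracket only adds a scalar coefficient of $\hat{k}$, computed from a bilinear form on the base algebra. A sketch assuming NumPy, with the Killing form of sl(2) replaced for simplicity by the trace form $B(X,Y)=\operatorname{tr}(XY)$ (proportional to the Killing form on a simple algebra):

```python
import numpy as np

def B(X, Y):
    # Trace form; on sl(2) the Killing form is 4 * tr(XY).
    return np.trace(X @ Y)

def affine_bracket(u, v):
    """Centrally extended bracket: returns (loop part, coefficient of k-hat).

    [X (x) t^m, Y (x) t^n] = [X,Y] (x) t^(m+n) + m B(X,Y) delta_{m+n,0} k-hat
    """
    loop, central = {}, 0.0
    for m, X in u.items():
        for n, Y in v.items():
            Z = X @ Y - Y @ X
            loop[m + n] = loop.get(m + n, np.zeros_like(Z)) + Z
            if m + n == 0:
                central += m * B(X, Y)
    return loop, central

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])

# [E (x) t^3, F (x) t^-3] = H (x) t^0 + 3 B(E, F) k-hat, with B(E, F) = 1.
loop, central = affine_bracket({3: E}, {-3: F})
print(np.allclose(loop[0], np.diag([1., -1.])), central)  # True 3.0
```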
=== Cocycle ===
Using the language of Lie algebra cohomology, the central extension can be described using a 2-cocycle on the loop algebra. This is the map $\varphi :L{\mathfrak {g}}\times L{\mathfrak {g}}\rightarrow \mathbb {C}$ satisfying

$\varphi (X\otimes t^{m},Y\otimes t^{n})=mB(X,Y)\delta _{m+n,0}.$
Then the extra term added to the bracket is $\varphi (X\otimes t^{m},Y\otimes t^{n}){\hat {k}}.$
=== Affine Lie algebra ===
In physics, the central extension $L{\mathfrak {g}}\oplus \mathbb {C} {\hat {k}}$ is sometimes referred to as the affine Lie algebra. In mathematics, this is insufficient, and the full affine Lie algebra is the vector space

${\hat {\mathfrak {g}}}=L{\mathfrak {g}}\oplus \mathbb {C} {\hat {k}}\oplus \mathbb {C} d,$

where $d$ is the derivation defined above.
On this space, the Killing form can be extended to a non-degenerate form, and so allows a root system analysis of the affine Lie algebra.
== References == | Wikipedia/Loop_algebra |
In mathematics, the main results concerning irreducible unitary representations of the Lie group SL(2, R) are due to Gelfand and Naimark (1946), V. Bargmann (1947), and Harish-Chandra (1952).
== Structure of the complexified Lie algebra ==
We choose a basis H, X, Y for the complexification of the Lie algebra of SL(2, R) so that iH generates the Lie algebra of a compact Cartan subgroup K (so in particular unitary representations split as a sum of eigenspaces of H), and {H, X, Y} is an sl2-triple, which means that they satisfy the relations
$[H,X]=2X,\quad [H,Y]=-2Y,\quad [X,Y]=H.$
One way of doing this is as follows:
$H={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},$

corresponding to the subgroup K of matrices

${\begin{pmatrix}\cos(\theta )&-\sin(\theta )\\\sin(\theta )&\cos(\theta )\end{pmatrix}},$

together with

$X={\frac {1}{2}}{\begin{pmatrix}1&i\\i&-1\end{pmatrix}},\qquad Y={\frac {1}{2}}{\begin{pmatrix}1&-i\\-i&-1\end{pmatrix}}.$
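These matrices can be checked mechanically. A sketch assuming NumPy and SciPy, verifying the sl2-triple relations and that iH generates the rotation subgroup K:

```python
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    return A @ B - B @ A

H = np.array([[0, -1j], [1j, 0]])
X = 0.5 * np.array([[1, 1j], [1j, -1]])
Y = 0.5 * np.array([[1, -1j], [-1j, -1]])

# sl2-triple relations.
print(np.allclose(comm(H, X), 2 * X))   # True
print(np.allclose(comm(H, Y), -2 * Y))  # True
print(np.allclose(comm(X, Y), H))       # True

# iH generates K: exp(-theta * iH) is the rotation matrix by angle theta.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
print(np.allclose(expm(-theta * 1j * H), R))  # True
```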
The Casimir operator Ω is defined to be

$\Omega =H^{2}+1+2XY+2YX.$
It generates the center of the universal enveloping algebra of the complexified Lie algebra of SL(2, R). The Casimir element acts on any irreducible representation as multiplication by some complex scalar μ2. Thus in the case of the Lie algebra sl2, the infinitesimal character of an irreducible representation is specified by one complex number.
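In any irreducible representation the Casimir element must act as a scalar; this is quick to confirm in the two-dimensional defining representation, where Ω evaluates to 4 times the identity. A sketch assuming NumPy:

```python
import numpy as np

H = np.array([[0, -1j], [1j, 0]])
X = 0.5 * np.array([[1, 1j], [1j, -1]])
Y = 0.5 * np.array([[1, -1j], [-1j, -1]])
I = np.eye(2)

# Casimir Omega = H^2 + 1 + 2XY + 2YX in the 2-dimensional representation.
Omega = H @ H + I + 2 * (X @ Y) + 2 * (Y @ X)
print(np.allclose(Omega, 4 * I))  # True: Omega acts as the scalar mu^2 = 4
```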
The center Z of the group SL(2, R) is a cyclic group {I, −I} of order 2, consisting of the identity matrix and its negative. On any irreducible representation, the center either acts trivially, or by the nontrivial character of Z, which represents the matrix -I by multiplication by -1 in the representation space. Correspondingly, one speaks of the trivial or nontrivial central character.
The central character and the infinitesimal character of an irreducible representation of any reductive Lie group are important invariants of the representation. In the case of irreducible admissible representations of SL(2, R), it turns out that, generically, there is exactly one representation, up to an isomorphism, with the specified central and infinitesimal characters. In the exceptional cases there are two or three representations with the prescribed parameters, all of which have been determined.
== Finite-dimensional representations ==
For each nonnegative integer n, the group SL(2, R) has an irreducible representation of dimension n + 1, which is unique up to an isomorphism. This representation can be constructed in the space of homogeneous polynomials of degree n in two variables. The case n = 0 corresponds to the trivial representation. An irreducible finite-dimensional representation of a noncompact simple Lie group of dimension greater than 1 is never unitary. Thus this construction produces only one unitary representation of SL(2, R), the trivial representation.
The finite-dimensional representation theory of the noncompact group SL(2, R) is equivalent to the representation theory of SU(2), its compact form, essentially because their Lie algebras have the same complexification and they are "algebraically simply connected". (More precisely, the group SU(2) is simply connected and, although SL(2, R) is not, it has no non-trivial algebraic central extensions.) However, in the general infinite-dimensional case, there is no close correspondence between representations of a group and the representations of its Lie algebra. In fact, it follows from the Peter–Weyl theorem that all irreducible representations of the compact Lie group SU(2) are finite-dimensional and unitary. The situation with SL(2, R) is completely different: it possesses infinite-dimensional irreducible representations, some of which are unitary, and some are not.
== Principal series representations ==
A major technique of constructing representations of a reductive Lie group is the method of parabolic induction. In the case of the group SL(2, R), there is up to conjugacy only one proper parabolic subgroup, the Borel subgroup of the upper-triangular matrices of determinant 1. The inducing parameter of an induced principal series representation is a (possibly non-unitary) character of the multiplicative group of real numbers, which is specified by choosing ε = ± 1 and a complex number μ. The corresponding principal series representation is denoted Iε,μ. It turns out that ε is the central character of the induced representation and the complex number μ may be identified with the infinitesimal character via the Harish-Chandra isomorphism.
The principal series representation Iε,μ (or more precisely its Harish-Chandra module of K-finite elements) admits a basis consisting of elements wj, where the index j runs through the even integers if ε=1 and the odd integers if ε=-1. The action of X, Y, and H is given by the formulas
$H(w_{j})=jw_{j},$
$X(w_{j})={\frac {\mu +j+1}{2}}w_{j+2},$
$Y(w_{j})={\frac {\mu -j+1}{2}}w_{j-2}.$
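These formulas do define a Lie algebra representation: applying X then Y (and vice versa) to a basis vector and subtracting reproduces the action of H. A symbolic sketch assuming SymPy:

```python
import sympy as sp

mu, j = sp.symbols('mu j')

# Coefficients of the actions on the basis vector w_j:
#   X(w_j) = x_coeff(j) * w_{j+2},  Y(w_j) = y_coeff(j) * w_{j-2}.
x_coeff = lambda jj: (mu + jj + 1) / 2
y_coeff = lambda jj: (mu - jj + 1) / 2

# [X, Y] w_j = X(Y(w_j)) - Y(X(w_j)), a multiple of w_j itself.
xy = x_coeff(j - 2) * y_coeff(j)   # Y first: w_j -> w_{j-2} -> w_j
yx = y_coeff(j + 2) * x_coeff(j)   # X first: w_j -> w_{j+2} -> w_j
print(sp.simplify(xy - yx))        # j, i.e. [X, Y] w_j = j w_j = H(w_j)
```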
== Admissible representations ==
Using the fact that the Casimir operator acts as a scalar on an irreducible admissible representation and that such a representation contains an eigenvector for H, it follows easily that any irreducible admissible representation is a subrepresentation of a parabolically induced representation. (This also is true for more general reductive Lie groups and is known as Casselman's subrepresentation theorem.) Thus the irreducible admissible representations of SL(2, R) can be found by decomposing the principal series representations Iε,μ into irreducible components and determining the isomorphisms. We summarize the decompositions as follows:
Iε,μ is reducible if and only if μ is an integer and ε=−(−1)μ. If Iε,μ is irreducible then it is isomorphic to Iε,−μ.
I−1,0 splits as the direct sum I−1,0 = D+0 + D−0 of two irreducible representations, called limit of discrete series representations. D+0 has a basis wj for j≥1, and D−0 has a basis wj for j≤−1.
If Iε,μ is reducible with μ>0 (so ε=−(−1)μ) then it has a unique irreducible quotient which has finite dimension μ, and the kernel is the sum of two discrete series representations D+μ + D−μ. The representation D+μ has a basis wμ+j for j≥1, and D−μ has a basis w−μ−j for j≤−1.
If Iε,μ is reducible with μ<0 (so ε=−(−1)μ) then it has a unique irreducible subrepresentation, which has finite dimension -μ, and the quotient is the sum of two discrete series representations D+μ + D−μ.
This gives the following list of irreducible admissible representations:
A finite-dimensional representation of dimension μ for each positive integer μ, with central character −(−1)μ.
Two limit of discrete series representations D+0, D−0, with μ=0 and non-trivial central character.
Discrete series representations Dμ for μ a non-zero integer, with central character −(−1)μ.
Two families of irreducible principal series representations Iε,μ for ε≠−(−1)μ (where Iε,μ is isomorphic to Iε,−μ).
=== Relation with the Langlands classification ===
According to the Langlands classification, the irreducible admissible representations are parametrized by certain tempered representations of Levi subgroups M of parabolic subgroups P=MAN. This works as follows:
The discrete series, limit of discrete series, and unitary principal series representations Iε,μ with μ imaginary are already tempered, so in these cases the parabolic subgroup P is SL(2, R) itself.
The finite-dimensional representations and the representations Iε,μ for ℜμ>0, μ not an integer or ε≠−(−1)μ are the irreducible quotients of the principal series representations Iε,μ for ℜμ>0, which are induced from tempered representations of the parabolic subgroup P = MAN of upper triangular matrices, with A the positive diagonal matrices and M the center of order 2. For μ a positive integer and ε=−(−1)μ the principal series representation has a finite-dimensional representation as its irreducible quotient, and otherwise it is already irreducible.
== Unitary representations ==
The irreducible unitary representations can be found by checking which of the irreducible admissible representations admit an invariant positive definite Hermitian form. This results in the following list of unitary representations of SL(2, R):
The trivial representation (the only finite-dimensional representation in this list).
The two limit of discrete series representations D+0, D−0.
The discrete series representations Dk, indexed by non-zero integers k. They are all distinct.
The two families of irreducible principal series representation, consisting of the spherical principal series I+,iμ indexed by the real numbers μ, and the non-spherical unitary principal series I−,iμ indexed by the non-zero real numbers μ. The representation with parameter μ is isomorphic to the one with parameter −μ, and there are no further isomorphisms between them.
The complementary series representations I+,μ for 0<|μ|<1. The representation with parameter μ is isomorphic to the one with parameter −μ, and there are no further isomorphisms between them.
Of these, the two limit of discrete series representations, the discrete series representations, and the two families of principal series representations are tempered, while the trivial and complementary series representations are not tempered.
== References ==
Bargmann, V. (1947), "Irreducible unitary representations of the Lorentz group", Annals of Mathematics, Second Series, 48 (3): 568–640, doi:10.2307/1969129, JSTOR 1969129, MR 0021942
Gelfand, I.; Neumark, M. (1946), "Unitary representations of the Lorentz group", Acad. Sci. USSR. J. Phys., 10: 93–94, MR 0017282.
Harish-Chandra (1952), "Plancherel formula for the 2 × 2 real unimodular group", Proceedings of the National Academy of Sciences of the United States of America, 38 (4): 337–342, doi:10.1073/pnas.38.4.337, JSTOR 88737, MR 0047055, PMC 1063558, PMID 16589101.
Howe, Roger; Tan, Eng-Chye (1992), Nonabelian harmonic analysis: Applications of SL(2, R), Universitext, New York: Springer-Verlag, doi:10.1007/978-1-4613-9200-2, ISBN 0-387-97768-6, MR 1151617.
Knapp, Anthony W. (2001), Representation theory of semisimple groups: An overview based on examples (Reprint of the 1986 original), Princeton Landmarks in Mathematics, Princeton, NJ: Princeton University Press, ISBN 0-691-09089-0, MR 1880691.
Kunze, R. A.; Stein, E. M. (1960), "Uniformly bounded representations and harmonic analysis of the 2 × 2 real unimodular group", American Journal of Mathematics, 82: 1–62, doi:10.2307/2372876, JSTOR 2372876, MR 0163988.
Vogan, David A. Jr. (1981), Representations of real reductive Lie groups, Progress in Mathematics, vol. 15, Boston, Mass.: Birkhäuser, ISBN 3-7643-3037-6, MR 0632407.
Wallach, Nolan R. (1988), Real reductive groups. I, Pure and Applied Mathematics, vol. 132, Boston, MA: Academic Press, Inc., p. xx+412, ISBN 0-12-732960-9, MR 0929683.
== See also ==
Spin (physics)
Representation theory of SU(2)
Rotation group SO(3)#A note on Lie algebra | Wikipedia/Representation_theory_of_SL2(R) |
In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
== Examples ==
=== Linear complex structure ===
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as
$\mathbb {C} =\mathbb {R} [x]/(x^{2}+1),$
which corresponds to i² = −1. Then a representation of C is a real vector space V, together with an action of C on V (a map $\mathbb {C} \to \mathrm {End} (V)$). Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
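The simplest concrete instance is V = R² with J the rotation by 90 degrees; a quick check that J² = −I, assuming NumPy:

```python
import numpy as np

# Linear complex structure on R^2: J represents multiplication by i.
J = np.array([[0., -1.],
              [1., 0.]])
print(np.allclose(J @ J, -np.eye(2)))  # True: J^2 = -I, mirroring i^2 = -1
```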
=== Polynomial algebras ===
Another important basic class of examples are representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted
$K[T_{1},\dots ,T_{k}],$

meaning the representation of the abstract algebra $K[x_{1},\dots ,x_{k}]$ where $x_{i}\mapsto T_{i}$.
A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by $K[T]$ and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
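SymPy exposes this directly; a sketch computing the Jordan canonical form of a single operator, i.e. the structure of the corresponding K[T]-module (the matrix below is an arbitrary example with a defective eigenvalue):

```python
import sympy as sp

# A single linear operator T on a 3-dimensional space, viewed as making
# the space a module over the polynomial algebra K[T].
T = sp.Matrix([[3, 1, 0],
               [-1, 1, 0],
               [0, 0, 2]])

P, Jf = T.jordan_form()   # T = P * Jf * P^{-1}
sp.pprint(Jf)             # Jordan blocks = cyclic K[T]-module summands
print(sp.simplify(P * Jf * P.inv() - T) == sp.zeros(3, 3))  # True
```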
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
== Weights ==
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation $\lambda \colon A\to R$ (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analog of an eigenvector and eigenspace are called weight vector and weight space.
The case of the eigenvalue of a single operator corresponds to the algebra $R[T]$, and a map of algebras $R[T]\to R$ is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing $A\times M\to M$ is bilinear, "which multiple" is an A-linear functional of A (an algebra map A → R), namely the weight. In symbols, a weight vector is a vector $m\in M$ such that $am=\lambda (a)m$ for all elements $a\in A$, for some linear functional $\lambda$ – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra $\mathcal {A}$ – equivalently, it vanishes on the derived algebra – in terms of matrices, if $v$ is a common eigenvector of operators $T$ and $U$, then $TUv=UTv$ (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra $\mathbf {F} [T_{1},\dots ,T_{k}]$ in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a $k$-tuple of scalars $\lambda =(\lambda _{1},\dots ,\lambda _{k})$ corresponding to the eigenvalue of each matrix, and hence geometrically to a point in $k$-space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
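Commuting matrices share an eigenbasis, and each shared eigenvector carries a weight: the tuple of eigenvalues. A sketch assuming NumPy, with two commuting diagonalizable matrices built to share an eigenbasis:

```python
import numpy as np

# Build two commuting matrices by conjugating diagonal matrices by the
# same (arbitrary, invertible) P, so they share an eigenbasis.
P = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
T1 = P @ np.diag([1., 2., 3.]) @ np.linalg.inv(P)
T2 = P @ np.diag([5., 5., -1.]) @ np.linalg.inv(P)
print(np.allclose(T1 @ T2, T2 @ T1))  # True: they commute

# Each column of P is a simultaneous eigenvector; its weight is the
# pair of corresponding eigenvalues, a point in 2-space.
v = P[:, 0]
weight = (1.0, 5.0)
print(np.allclose(T1 @ v, weight[0] * v),
      np.allclose(T2 @ v, weight[1] * v))  # True True
```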
As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on $k$ generators, it corresponds geometrically to an algebraic variety in $k$-dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
== See also ==
Representation theory
Intertwiner
Representation theory of Hopf algebras
Lie algebra representation
Schur's lemma
Jacobson density theorem
Double commutant theorem
== Notes ==
== References == | Wikipedia/Algebra_representation |
In mathematics, an affine Lie algebra is an infinite-dimensional Lie algebra that is constructed in a canonical fashion out of a finite-dimensional simple Lie algebra. Given an affine Lie algebra, one can also form the associated affine Kac-Moody algebra, as described below. From a purely mathematical point of view, affine Lie algebras are interesting because their representation theory, like representation theory of finite-dimensional semisimple Lie algebras, is much better understood than that of general Kac–Moody algebras. As observed by Victor Kac, the character formula for representations of affine Lie algebras implies certain combinatorial identities, the Macdonald identities.
Affine Lie algebras play an important role in string theory and two-dimensional conformal field theory due to the way they are constructed: starting from a simple Lie algebra
$\mathfrak{g}$, one considers the loop algebra, $L{\mathfrak {g}}$, formed by the $\mathfrak{g}$-valued functions on a circle (interpreted as the closed string) with pointwise commutator. The affine Lie algebra ${\hat {\mathfrak {g}}}$ is obtained by adding one extra dimension to the loop algebra and modifying the commutator in a non-trivial way, which physicists call a quantum anomaly (in this case, the anomaly of the WZW model) and mathematicians a central extension. More generally, if σ is an automorphism of the simple Lie algebra $\mathfrak{g}$ associated to an automorphism of its Dynkin diagram, the twisted loop algebra $L_{\sigma }{\mathfrak {g}}$ consists of $\mathfrak{g}$-valued functions f on the real line which satisfy the twisted periodicity condition f(x + 2π) = σ f(x). Their central extensions are precisely the twisted affine Lie algebras. The point of view of string theory helps to understand many deep properties of affine Lie algebras, such as the fact that the characters of their representations transform amongst themselves under the modular group.
== Affine Lie algebras from simple Lie algebras ==
=== Definition ===
If $\mathfrak{g}$ is a finite-dimensional simple Lie algebra, the corresponding affine Lie algebra ${\hat {\mathfrak {g}}}$ is constructed as a central extension of the loop algebra ${\mathfrak {g}}\otimes \mathbb {C} [t,t^{-1}]$, with one-dimensional center $\mathbb {C} c$. As a vector space,

${\widehat {\mathfrak {g}}}={\mathfrak {g}}\otimes \mathbb {C} [t,t^{-1}]\oplus \mathbb {C} c,$

where $\mathbb {C} [t,t^{-1}]$ is the complex vector space of Laurent polynomials in the indeterminate t. The Lie bracket is defined by the formula

$[a\otimes t^{n}+\alpha c,b\otimes t^{m}+\beta c]=[a,b]\otimes t^{n+m}+\langle a|b\rangle n\delta _{m+n,0}c$

for all $a,b\in {\mathfrak {g}}$, $\alpha ,\beta \in \mathbb {C}$ and $n,m\in \mathbb {Z}$, where $[a,b]$ is the Lie bracket in the Lie algebra $\mathfrak{g}$ and $\langle \cdot |\cdot \rangle$ is the Cartan–Killing form on $\mathfrak{g}$.
The affine Lie algebra corresponding to a finite-dimensional semisimple Lie algebra is the direct sum of the affine Lie algebras corresponding to its simple summands. There is a distinguished derivation of the affine Lie algebra defined by
$\delta (a\otimes t^{m}+\alpha c)=t{\frac {d}{dt}}(a\otimes t^{m}).$
The corresponding affine Kac–Moody algebra is defined as a semidirect product by adding an extra generator d that satisfies [d, A] = δ(A).
=== Constructing the Dynkin diagrams ===
The Dynkin diagram of each affine Lie algebra consists of that of the corresponding simple Lie algebra plus an additional node, which corresponds to the addition of an imaginary root. Of course, such a node cannot be attached to the Dynkin diagram in just any location, but for each simple Lie algebra there exists a number of possible attachments equal to the cardinality of the group of outer automorphisms of the Lie algebra. In particular, this group always contains the identity element, and the corresponding affine Lie algebra is called an untwisted affine Lie algebra. When the simple algebra admits automorphisms that are not inner automorphisms, one may obtain other Dynkin diagrams and these correspond to twisted affine Lie algebras.
=== Classifying the central extensions ===
The attachment of an extra node to the Dynkin diagram of the corresponding simple Lie algebra corresponds to the following construction. An affine Lie algebra can always be constructed as a central extension of the loop algebra of the corresponding simple Lie algebra. If one wishes to begin instead with a semisimple Lie algebra, then one needs to centrally extend by a number of elements equal to the number of simple components of the semisimple algebra. In physics, one often considers instead the direct sum of a semisimple algebra and an abelian algebra
$\mathbb {C} ^{n}$. In this case one also needs to add n further central elements for the n abelian generators.
The second integral cohomology of the loop group of the corresponding simple compact Lie group is isomorphic to the integers. Central extensions of the affine Lie group by a single generator are topologically circle bundles over this free loop group, which are classified by a two-class known as the first Chern class of the fibration. Therefore, the central extensions of an affine Lie group are classified by a single parameter k which is called the level in the physics literature, where it first appeared. Unitary highest weight representations of the affine compact groups only exist when k is a natural number. More generally, if one considers a semi-simple algebra, there is a central charge for each simple component.
== Structure ==
=== Cartan–Weyl basis ===
As in the finite case, determining the Cartan–Weyl basis is an important step in determining the structure of affine Lie algebras.
Fix a finite-dimensional, simple, complex Lie algebra $\mathfrak{g}$ with Cartan subalgebra $\mathfrak{h}$ and a particular root system $\Delta$. Introducing the notation $X_{n}=X\otimes t^{n}$, one can attempt to extend a Cartan–Weyl basis $\{H^{i}\}\cup \{E^{\alpha }\,|\,\alpha \in \Delta \}$ for $\mathfrak{g}$ to one for the affine Lie algebra, given by $\{H_{n}^{i}\}\cup \{c\}\cup \{E_{n}^{\alpha }\}$, with $\{H_{0}^{i}\}\cup \{c\}$ forming an abelian subalgebra.
The eigenvalues of $\operatorname{ad}(H_{0}^{i})$ and $\operatorname{ad}(c)$ on $E_{n}^{\alpha }$ are $\alpha ^{i}$ and $0$ respectively, independently of $n$. Therefore the root $\alpha$ is infinitely degenerate with respect to this abelian subalgebra. Appending the derivation described above to the abelian subalgebra turns the abelian subalgebra into a Cartan subalgebra for the affine Lie algebra, with eigenvalues $(\alpha ^{1},\cdots ,\alpha ^{\dim {\mathfrak {h}}},0,n)$ for $E_{n}^{\alpha }$.
=== Killing form ===
The Killing form can almost be completely determined using its invariance property. Using the notation $B$ for the Killing form on $\mathfrak{g}$ and ${\hat {B}}$ for the Killing form on the affine Kac–Moody algebra,

${\hat {B}}(X_{n},Y_{m})=B(X,Y)\delta _{n+m,0},$
${\hat {B}}(X_{n},c)=0,\qquad {\hat {B}}(X_{n},d)=0,$
${\hat {B}}(c,c)=0,\qquad {\hat {B}}(c,d)=1,\qquad {\hat {B}}(d,d)=0,$

where only the last equation is not fixed by invariance and instead chosen by convention. Notably, the restriction of ${\hat {B}}$ to the $c,d$ subspace gives a bilinear form with signature $(+,-)$.
Write the affine root associated with $E_n^{\alpha}$ as $\hat{\alpha} = (\alpha; 0; n)$. Defining $\delta = (0, 0, 1)$, this can be rewritten
$$\hat{\alpha} = \alpha + n\delta.$$
The full set of roots is
$$\hat{\Delta} = \{\alpha + n\delta \mid n\in\mathbb{Z}, \alpha\in\Delta\}\cup\{n\delta \mid n\in\mathbb{Z}, n\neq 0\}.$$
Then $\delta$ is unusual as it has zero length: $(\delta, \delta) = 0$, where $(\cdot,\cdot)$ is the bilinear form on the roots induced by the Killing form.
=== Affine simple root ===
In order to obtain a basis of simple roots for the affine algebra, an extra simple root must be appended, and is given by $\alpha_0 = -\theta + \delta$, where $\theta$ is the highest root of $\mathfrak{g}$, using the usual notion of height of a root. This allows definition of the extended Cartan matrix and extended Dynkin diagrams.
== Representation theory ==
The representation theory for affine Lie algebras is usually developed using Verma modules. Just as in the case of semi-simple Lie algebras, these are highest weight modules. There are no finite-dimensional representations; this follows from the fact that the null vectors of a finite-dimensional Verma module are necessarily zero, whereas those for the affine Lie algebras are not. Roughly speaking, this follows because the Killing form is Lorentzian in the $c, \delta$ directions, which is why $(z, \bar{z})$ are sometimes called "lightcone coordinates" on the string. The "radially ordered" current operator products can be understood to be time-like normal ordered by taking $z = \exp(\tau + i\sigma)$, with $\tau$ the time-like direction along the string world sheet and $\sigma$ the spatial direction.
=== Vacuum representation of rank k ===
The representations are constructed in more detail as follows.
Fix a Lie algebra $\mathfrak{g}$ and basis $\{J^{\rho}\}$. Then $\{J_n^{\rho}\} = \{J^{\rho}\otimes t^n\}$ is a basis for the corresponding loop algebra, and $\{J_n^{\rho}\}\cup\{c\}$ is a basis for the affine Lie algebra $\hat{\mathfrak{g}}$.
The vacuum representation of rank $k$, denoted by $V_k(\mathfrak{g})$ where $k\in\mathbb{C}$, is the complex representation with basis
$$\{v_{n_1\cdots n_m}^{\rho_1\cdots\rho_m} : n_1\geq\cdots\geq n_m\geq 1,\ \rho_1\leq\cdots\leq\rho_m\}\cup\{\Omega\},$$
and where the action of $\hat{\mathfrak{g}}$ on $V = V_k(\mathfrak{g})$ is given by:
$$c = k\,\mathrm{id}_V,\qquad J_n^{\rho}\,\Omega = 0\ \text{for}\ n\geq 0,\qquad J_{-n}^{\rho}\,\Omega = v_n^{\rho}\ \text{for}\ n > 0,$$
$$\text{and}\qquad J_{-n}^{\rho}\, v_{n_1\cdots n_m}^{\rho_1\cdots\rho_m} = v_{n\,n_1\cdots n_m}^{\rho\,\rho_1\cdots\rho_m}.$$
=== Affine vertex algebra ===
The vacuum representation can in fact be equipped with a vertex algebra structure, in which case it is called the affine vertex algebra of rank $k$. The affine Lie algebra naturally extends to the Kac–Moody algebra, with the differential $d$ represented by the translation operator $T$ in the vertex algebra.
== Weyl group and characters ==
The Weyl group of an affine Lie algebra can be written as a semi-direct product of the Weyl group of the zero-mode algebra (the Lie algebra used to define the loop algebra) and the coroot lattice.
The Weyl character formula of the algebraic characters of the affine Lie algebras generalizes to the Weyl–Kac character formula. A number of interesting constructions follow from these. One may construct generalizations of the Jacobi theta function. These theta functions transform under the modular group. The usual denominator identities of semi-simple Lie algebras generalize as well; because the characters can be written as "deformations" or q-analogs of the highest weights, this led to many new combinatoric identities, including many previously unknown identities for the Dedekind eta function. These generalizations can be viewed as a practical example of the Langlands program.
== Applications ==
Due to the Sugawara construction, the universal enveloping algebra of any affine Lie algebra has the Virasoro algebra as a subalgebra. This allows affine Lie algebras to serve as symmetry algebras of conformal field theories such as WZW models or coset models. As a consequence, affine Lie algebras also appear in the worldsheet description of string theory.
== Example ==
The Heisenberg algebra defined by generators $a_n,\ n\in\mathbb{Z}$ satisfying commutation relations $[a_m, a_n] = m\,\delta_{m+n,0}\,c$ can be realized as the affine Lie algebra $\hat{\mathfrak{u}}(1)$.
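These relations can be checked numerically on a truncated Fock space. The sketch below is illustrative only: it realizes one pair of modes $a_m, a_{-m}$ as scaled ladder operators, with the central element $c$ acting as 1 (both choices are assumptions of this illustration, not part of the construction above).

```python
import numpy as np

# Truncated Fock-space sketch: for a single m > 0, realize a_{-m} as
# sqrt(m) times a creation operator and a_m as sqrt(m) times an
# annihilation operator, so [a_m, a_{-m}] = m holds with c = 1.
N = 12                                            # truncation dimension
m = 3                                             # mode number
lower = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator
a_pos = np.sqrt(m) * lower                        # a_m  (m > 0)
a_neg = a_pos.T                                   # a_{-m}

comm = a_pos @ a_neg - a_neg @ a_pos
# Away from the truncation boundary, the commutator is m times the
# identity, matching [a_m, a_n] = m * delta_{m+n,0} * c with c = 1:
print(np.allclose(comm[:-1, :-1], m * np.eye(N - 1)))   # True
```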
== References ==
Fuchs, Jurgen (1992), Affine Lie Algebras and Quantum Groups, Cambridge University Press, ISBN 0-521-48412-X
Goddard, Peter; Olive, David (1988), Kac-Moody and Virasoro algebras: A Reprint Volume for Physicists, Advanced Series in Mathematical Physics, vol. 3, World Scientific, ISBN 9971-5-0419-7
Kac, Victor (1990), Infinite dimensional Lie algebras (3 ed.), Cambridge University Press, ISBN 0-521-46693-8
Kohno, Toshitake (1998), Conformal Field Theory and Topology, American Mathematical Society, ISBN 0-8218-2130-X
Pressley, Andrew; Segal, Graeme (1986), Loop groups, Oxford University Press, ISBN 0-19-853535-X
In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
== Examples ==
=== Linear complex structure ===
One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as $\mathbb{C} = \mathbb{R}[x]/(x^2+1)$, which corresponds to $i^2 = -1$. Then a representation of C is a real vector space V, together with an action of C on V (a map $\mathbb{C}\to\mathrm{End}(V)$). Concretely, this is just an action of i, as this generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
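As a minimal numerical sketch, the standard complex structure on $\mathbb{R}^2$ can be verified directly; the particular matrix J below is one choice among many (any J with $J^2 = -I$ works).

```python
import numpy as np

# The standard linear complex structure on R^2: J plays the role of i.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
I = np.eye(2)
print(np.allclose(J @ J, -I))     # True, mirroring i^2 = -1

# A complex number a + b*i then acts on R^2 as a*I + b*J:
def act(a: float, b: float, v: np.ndarray) -> np.ndarray:
    return (a * I + b * J) @ v

print(act(0.0, 1.0, np.array([1.0, 0.0])))   # i . (1, 0) = (0, 1)
```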
=== Polynomial algebras ===
Another important basic class of examples are representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted $K[T_1,\dots,T_k]$, meaning the representation of the abstract algebra $K[x_1,\dots,x_k]$ where $x_i\mapsto T_i$.
A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
Even the case of representations of the polynomial algebra in a single variable is of interest – this is denoted by $K[T]$ and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
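For instance, the Jordan canonical form of a concrete operator can be computed with a computer algebra system. The matrix in the sketch below is an arbitrary example built for this illustration, conjugated from a known block structure.

```python
import sympy as sp

# Build T with one 2x2 Jordan block (eigenvalue 2) and a 1x1 block
# (eigenvalue 3), hidden by a change of basis P0.
P0 = sp.Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
J0 = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])
T = P0 * J0 * P0.inv()

P, J = T.jordan_form()                      # T = P * J * P^{-1}
sp.pprint(J)                                # recovers the blocks of J0 (possibly reordered)
print(sp.simplify(P * J * P.inv() - T) == sp.zeros(3, 3))   # True
```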
In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.
== Weights ==
Eigenvalues and eigenvectors can be generalized to algebra representations.
The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation $\lambda\colon A\to R$ (i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative). This is known as a weight, and the analog of an eigenvector and eigenspace are called weight vector and weight space.
The case of the eigenvalue of a single operator corresponds to the algebra $R[T]$, and a map of algebras $R[T]\to R$ is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing $A\times M\to M$ is bilinear, "which multiple" is an A-linear functional of A (an algebra map A → R), namely the weight. In symbols, a weight vector is a vector $m\in M$ such that $am = \lambda(a)m$ for all elements $a\in A$, for some linear functional $\lambda$ – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra $\mathcal{A}$ – equivalently, it vanishes on the derived algebra – in terms of matrices, if $v$ is a common eigenvector of operators $T$ and $U$, then $TUv = UTv$ (because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra $\mathbf{F}[T_1,\dots,T_k]$ in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a $k$-tuple of scalars $\lambda = (\lambda_1,\dots,\lambda_k)$ corresponding to the eigenvalue of each matrix, and hence geometrically to a point in $k$-space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.
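A small numerical sketch (with matrices invented for this illustration) exhibits the weights of a pair of commuting operators as tuples of simultaneous eigenvalues:

```python
import numpy as np

# Two commuting operators built from a shared eigenbasis Q (an arbitrary
# choice); each column of Q is a weight vector with weight (l1, l2).
Q = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
Qi = np.linalg.inv(Q)
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Qi
B = Q @ np.diag([5.0, 5.0, 7.0]) @ Qi

print(np.allclose(A @ B, B @ A))            # True: A and B commute

for v in Q.T:                               # columns of Q
    l1 = v @ (A @ v) / (v @ v)              # eigenvalue of A on v
    l2 = v @ (B @ v) / (v @ v)              # eigenvalue of B on v
    print(f"weight ({l1:.0f}, {l2:.0f})")   # a point in 2-space
```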
As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on $k$ generators, it corresponds geometrically to an algebraic variety in $k$-dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
== See also ==
Representation theory
Intertwiner
Representation theory of Hopf algebras
Lie algebra representation
Schur’s lemma
Jacobson density theorem
Double commutant theorem
== Notes ==
== References ==
In mathematics, an algebraic group is an algebraic variety endowed with a group structure that is compatible with its structure as an algebraic variety. Thus the study of algebraic groups belongs both to algebraic geometry and group theory.
Many groups of geometric transformations are algebraic groups, including orthogonal groups, general linear groups, projective groups, Euclidean groups, etc. Many matrix groups are also algebraic. Other algebraic groups occur naturally in algebraic geometry, such as elliptic curves and Jacobian varieties.
An important class of algebraic groups is given by the affine algebraic groups, those whose underlying algebraic variety is an affine variety; they are exactly the algebraic subgroups of the general linear group, and are therefore also called linear algebraic groups. Another class is formed by the abelian varieties, which are the algebraic groups whose underlying variety is a projective variety. Chevalley's structure theorem states that every algebraic group can be constructed from groups in those two families.
== Definitions ==
Formally, an algebraic group over a field $k$ is an algebraic variety $\mathrm{G}$ over $k$, together with a distinguished element $e\in\mathrm{G}(k)$ (the neutral element), and regular maps $\mathrm{G}\times\mathrm{G}\to\mathrm{G}$ (the multiplication operation) and $\mathrm{G}\to\mathrm{G}$ (the inversion operation) that satisfy the group axioms.
=== Examples ===
The additive group: the affine line $\mathbb{A}^1$ endowed with addition and opposite as group operations is an algebraic group. It is called the additive group (because its $k$-points are isomorphic as a group to the additive group of $k$), and usually denoted by $\mathrm{G}_a$.
The multiplicative group: Let $\mathrm{G}_m$ be the affine variety defined by the equation $xy = 1$ in the affine plane $\mathbb{A}^2$. The functions $((x,y),(x',y'))\mapsto(xx',yy')$ and $(x,y)\mapsto(x^{-1},y^{-1})$ are regular on $\mathrm{G}_m$, and they satisfy the group axioms (with neutral element $(1,1)$). The algebraic group $\mathrm{G}_m$ is called the multiplicative group, because its $k$-points are isomorphic to the multiplicative group of the field $k$ (an isomorphism is given by $x\mapsto(x,x^{-1})$; note that the subset of invertible elements does not define an algebraic subvariety in $\mathbb{A}^1$).
The special linear group $\mathrm{SL}_n$ is an algebraic group: it is given by the algebraic equation $\det(g) = 1$ in the affine space $\mathbb{A}^{n^2}$ (identified with the space of $n$-by-$n$ matrices), multiplication of matrices is regular, and the formula for the inverse in terms of the adjugate matrix shows that inversion is regular as well on matrices with determinant 1 (see the sketch following these examples).
The general linear group $\mathrm{GL}_n$ of invertible matrices over a field $k$ is an algebraic group. It can be realized as a subvariety in $\mathbb{A}^{n^2+1}$ in much the same way as the multiplicative group in the previous example.
A non-singular cubic curve in the projective plane $\mathbb{P}^2$ with a specified point can be endowed with a geometrically defined group law that makes it into an algebraic group (see elliptic curve).
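The claim in the special linear group example above, that inversion is regular, can be made concrete with a computer algebra sketch; the 2-by-2 case is shown for brevity.

```python
import sympy as sp

# On SL_2, where det(g) = 1, the inverse of g equals its adjugate, whose
# entries are polynomials in the entries of g; hence inversion is regular.
a, b, c, d = sp.symbols('a b c d')
g = sp.Matrix([[a, b], [c, d]])
adj = g.adjugate()                 # Matrix([[d, -b], [-c, a]])
sp.pprint((g * adj).expand())      # (a*d - b*c) times the identity
# Imposing det(g) = a*d - b*c = 1 gives g * adj = identity, so g**-1 = adj.
```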
=== Related definitions ===
An algebraic subgroup of an algebraic group $\mathrm{G}$ is a subvariety $\mathrm{H}$ of $\mathrm{G}$ that is also a subgroup of $\mathrm{G}$ (that is, the maps $\mathrm{G}\times\mathrm{G}\to\mathrm{G}$ and $\mathrm{G}\to\mathrm{G}$ defining the group structure map $\mathrm{H}\times\mathrm{H}$ and $\mathrm{H}$, respectively, into $\mathrm{H}$).
A morphism between two algebraic groups $\mathrm{G}, \mathrm{G}'$ is a regular map $\mathrm{G}\to\mathrm{G}'$ that is also a group homomorphism. Its kernel is an algebraic subgroup of $\mathrm{G}$, and its image is an algebraic subgroup of $\mathrm{G}'$.
Quotients in the category of algebraic groups are more delicate to deal with. An algebraic subgroup is said to be normal if it is stable under every inner automorphism (which are regular maps). If $\mathrm{H}$ is a normal algebraic subgroup of $\mathrm{G}$, then there exists an algebraic group $\mathrm{G}/\mathrm{H}$ and a surjective morphism $\pi\colon\mathrm{G}\to\mathrm{G}/\mathrm{H}$ such that $\mathrm{H}$ is the kernel of $\pi$. Note that if the field $k$ is not algebraically closed, then the morphism of groups $\mathrm{G}(k)\to(\mathrm{G}/\mathrm{H})(k)$ may not be surjective (the defect of surjectivity is measured by Galois cohomology).
=== Lie algebra of an algebraic group ===
Similarly to the Lie group–Lie algebra correspondence, to an algebraic group over a field $k$ is associated a Lie algebra over $k$. As a vector space, the Lie algebra is isomorphic to the tangent space at the identity element. The Lie bracket can be constructed from its interpretation as a space of derivations.
=== Alternative definitions ===
A more sophisticated definition of an algebraic group over a field $k$ is that it is a group scheme over $k$ (group schemes can more generally be defined over commutative rings).
Yet another definition of the concept is to say that an algebraic group over $k$ is a group object in the category of algebraic varieties over $k$.
== Affine algebraic groups ==
An algebraic group is said to be affine if its underlying algebraic variety is an affine variety. Among the examples above, the additive, multiplicative, general linear, and special linear groups are affine. Using the action of an affine algebraic group on its coordinate ring, it can be shown that every affine algebraic group is a linear (or matrix) group, meaning that it is isomorphic to an algebraic subgroup of the general linear group.
For example, the additive group can be embedded in $\mathrm{GL}_2$ by the morphism $x\mapsto\begin{pmatrix}1&x\\0&1\end{pmatrix}$.
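A quick numerical check (a sketch with arbitrary sample values) confirms that this map turns addition of parameters into matrix multiplication:

```python
import numpy as np

# The unipotent embedding of the additive group into GL_2.
def embed(x: float) -> np.ndarray:
    return np.array([[1.0, x],
                     [0.0, 1.0]])

x, y = 2.5, -4.0
print(np.allclose(embed(x) @ embed(y), embed(x + y)))    # homomorphism
print(np.allclose(np.linalg.inv(embed(x)), embed(-x)))   # inverse = opposite
```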
There are many examples of such groups beyond those given previously, including orthogonal groups, symplectic groups, unipotent groups, algebraic tori, and certain semidirect products, such as jet groups, or some solvable groups such as that of invertible triangular matrices.
Linear algebraic groups can be classified to a certain extent. Levi's theorem states that every linear algebraic group is (essentially) a semidirect product of a unipotent group (its unipotent radical) with a reductive group. In turn, a reductive group is decomposed as (again essentially) a product of its center (an algebraic torus) with a semisimple group. The latter are classified over algebraically closed fields via their Lie algebras. The classification over arbitrary fields is more involved, but still well understood. It can be made very explicit in some cases, such as over the real or p-adic fields, and thereby over number fields via local-global principles.
== Abelian varieties ==
Abelian varieties are connected projective algebraic groups, such as elliptic curves. They are always commutative. They arise naturally in various situations in algebraic geometry and number theory, such as the Jacobian varieties of curves.
== Structure theorem for general algebraic groups ==
Not all algebraic groups are linear groups or abelian varieties; for instance, some group schemes occurring naturally in arithmetic geometry are neither. Chevalley's structure theorem asserts that every connected algebraic group is an extension of an abelian variety by a linear algebraic group. More precisely, if K is a perfect field, and G a connected algebraic group over K, then there exists a unique normal closed subgroup H in G, such that H is a connected linear algebraic group and G/H an abelian variety.
== Connectedness ==
As an algebraic variety, $\mathrm{G}$ carries a Zariski topology. It is not in general a group topology; that is, the group operations may not be continuous for this topology (because the Zariski topology on the product is not the product of Zariski topologies on the factors).
An algebraic group is said to be connected if the underlying algebraic variety is connected for the Zariski topology. For an algebraic group, this means that it is not the union of two proper algebraic subsets.
Examples of groups that are not connected are given by the algebraic subgroup of $n$th roots of unity in the multiplicative group $\mathrm{G}_m$ (each point is a Zariski-closed subset, so it is not connected for $n\geq 2$). This group is generally denoted by $\mu_n$. Other non-connected groups are the orthogonal groups in even dimension (the determinant gives a surjective morphism to $\mu_2$).
More generally, every finite group is an algebraic group (it can be realised as a finite, hence Zariski-closed, subgroup of some $\mathrm{GL}_n$ by Cayley's theorem). In addition it is both affine and projective. Thus, in particular for classification purposes, it is natural to restrict statements to connected algebraic groups.
== Algebraic groups over local fields and Lie groups ==
If the field $k$ is a local field (for instance the real or complex numbers, or a p-adic field) and $\mathrm{G}$ is a $k$-group, then the group $\mathrm{G}(k)$ is endowed with the analytic topology coming from any embedding into a projective space $\mathbb{P}^n(k)$ as a quasi-projective variety. This is a group topology, and it makes $\mathrm{G}(k)$ into a topological group. Such groups are important examples in the general theory of topological groups.
If $k = \mathbb{R}$ or $\mathbb{C}$, then this makes $\mathrm{G}(k)$ into a Lie group. Not all Lie groups can be obtained via this procedure; for example, the universal cover of SL2(R), or the quotient of the Heisenberg group by an infinite normal discrete subgroup. An algebraic group over the real or complex numbers may have closed subgroups (in the analytic topology) that do not have the same connected component of the identity as any algebraic subgroup.
== Coxeter groups and algebraic groups ==
There are a number of analogous results between algebraic groups and Coxeter groups – for instance, the number of elements of the symmetric group is $n!$, and the number of elements of the general linear group over a finite field is (up to some factor) the $q$-factorial $[n]_q!$; thus, the symmetric group behaves as though it were a linear group over "the field with one element". This is formalized by the field with one element, which considers Coxeter groups to be simple algebraic groups over the field with one element.
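The counting claim can be spelled out and checked directly. In the sketch below, the precise identity assumed is $|\mathrm{GL}_n(\mathbb{F}_q)| = q^{n(n-1)/2}(q-1)^n\,[n]_q!$, so "up to some factor" means up to $q^{n(n-1)/2}(q-1)^n$; the values of n and q are chosen for illustration.

```python
from math import factorial, prod

def gl_order(n: int, q: int) -> int:
    # |GL_n(F_q)|: choose each row outside the span of the previous ones.
    return prod(q**n - q**k for k in range(n))

def q_factorial(n: int, q: int) -> int:
    # [n]_q! = prod_k (1 + q + ... + q^{k-1})
    return prod(sum(q**i for i in range(k)) for k in range(1, n + 1))

n, q = 4, 3
assert gl_order(n, q) == q**(n*(n-1)//2) * (q-1)**n * q_factorial(n, q)
print(q_factorial(n, 1) == factorial(n))   # True: [n]_1! = n!, as q -> 1
```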
== See also ==
Character variety
Borel subgroup
Tame group
Morley rank
Cherlin–Zilber conjecture
Adelic algebraic group
Pseudo-reductive group
== References ==
Chevalley, Claude, ed. (1958), Séminaire C. Chevalley, 1956--1958. Classification des groupes de Lie algébriques, 2 vols, Paris: Secrétariat Mathématique, MR 0106966, Reprinted as volume 3 of Chevalley's collected works., archived from the original on 2014-11-04, retrieved 2012-06-25
Borel, Armand (1991). Linear algebraic groups. 2nd enlarged ed. Graduate Texts in Mathematics. Springer-Verlag. pp. x+288. Zbl 0726.20030.
Humphreys, James E. (1972), Linear Algebraic Groups, Graduate Texts in Mathematics, vol. 21, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90108-4, MR 0396773
Lang, Serge (1983), Abelian varieties, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90875-5
Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, doi:10.1017/9781316711736, ISBN 978-1107167483, MR 3729270
Milne, J. S., Affine Group Schemes; Lie Algebras; Lie Groups; Reductive Groups; Arithmetic Subgroups
Mumford, David (1970), Abelian varieties, Oxford University Press, ISBN 978-0-19-560528-0, OCLC 138290
Springer, Tonny A. (1998), Linear algebraic groups, Progress in Mathematics, vol. 9 (2nd ed.), Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4021-7, MR 1642713
Waterhouse, William C. (1979), Introduction to affine group schemes, Graduate Texts in Mathematics, vol. 66, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90421-4
Weil, André (1971), Courbes algébriques et variétés abéliennes, Paris: Hermann, OCLC 322901
== Further reading ==
Algebraic groups and their Lie algebras by Daniel Miller
A software system is a system of intercommunicating components based on software forming part of a computer system (a combination of hardware and software). It "consists of a number of separate programs, configuration files, which are used to set up these programs, system documentation, which describes the structure of the system, and user documentation, which explains how to use the system".
A software system differs from a computer program or software. While a computer program is generally a set of instructions (source, or object code) that perform a specific task, a software system is more of an encompassing concept, with many more components such as specification, test results, end-user documentation, maintenance records, etc.
The use of the term software system is at times related to the application of systems theory approaches in the context of software engineering. A software system consists of several separate computer programs and associated configuration files, documentation, etc., that operate together. The concept is used in the study of large and complex software, because it focuses on the major components of software and their interactions. It is also related to the field of software architecture.
Software systems are an active area of research for groups interested in software engineering in particular and systems engineering in general. Academic journals like the Journal of Systems and Software (published by Elsevier) are dedicated to the subject.
The ACM Software System Award is an annual award that honors people or an organization "for developing a system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". It has been awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM.
== Categories ==
Major categories of software systems include those based on application software development, programming software, and system software although the distinction can sometimes be difficult. Examples of software systems include operating systems, computer reservations systems, air traffic control systems, military command and control systems, telecommunication networks, content management systems, database management systems, expert systems, embedded systems, etc.
== See also ==
ACM Software System Award
Common layers in an information system logical architecture
Computer program
Computer program installation
Experimental software engineering
Software bug
Software architecture
System software
Systems theory
Systems Science
Systems Engineering
Software Engineering
== References ==
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process.
For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint.
For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
== Open-loop and closed-loop control ==
== Feedback control systems ==
== Logic control ==
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs.
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists.
== On–off control ==
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on-off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor. When the pressure (PV) drops below the setpoint (SP) the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
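A minimal sketch of such a controller follows; the setpoint, deadband, and toy room model are illustrative assumptions rather than details of any particular device.

```python
# On-off (bang-bang) control with hysteresis, as in a bi-metallic thermostat.
def thermostat_step(pv: float, heating: bool,
                    sp: float = 20.0, deadband: float = 0.5) -> bool:
    """Return the new heater state given the process variable (pv)."""
    if pv < sp - deadband:
        return True            # too cold: switch the heater on
    if pv > sp + deadband:
        return False           # too warm: switch the heater off
    return heating             # inside the deadband: keep the current state

# Toy simulation: the room leaks heat toward 10 degrees and warms when on.
temp, heater = 15.0, False
for _ in range(60):
    heater = thermostat_step(temp, heater)
    temp += (1.5 if heater else 0.0) - 0.1 * (temp - 10.0)
print(round(temp, 1))          # oscillates near the 20 degree setpoint
```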
== Linear control ==
== Fuzzy logic ==
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are fuzzified, the logic is calculated arithmetically (as opposed to with Boolean logic), and the outputs are de-fuzzified to control equipment.
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
== Physical implementation ==
The range of control system implementation is from compact controllers often with dedicated software for a particular machine or device, to distributed control systems for industrial process control for a large physical plant.
Logic systems and feedback controllers are usually implemented with programmable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems.
== See also ==
== References ==
== External links ==
SystemControl Create, simulate or HWIL control loops with Python. Includes Kalman filter, LQG control among others.
Semiautonomous Flight Direction - Reference unmannedaircraft.org
Control System Toolbox for design and analysis of control systems.
Control Systems Manufacturer Design and Manufacture of control systems.
Mathematica functions for the analysis, design, and simulation of control systems
Python Control System (PyConSys) Create and simulate control loops with Python. AI for setting PID parameters.
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t) where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves.
== Overview ==
The defining properties of any LTI system are linearity and time invariance.
Linearity means that the relationship between the input $x(t)$ and the output $y(t)$, both being regarded as functions, is a linear mapping: If $a$ is a constant then the system output to $ax(t)$ is $ay(t)$; if $x'(t)$ is a further input with system output $y'(t)$ then the output of the system to $x(t) + x'(t)$ is $y(t) + y'(t)$, this applying for all choices of $a$, $x(t)$, $x'(t)$. The latter condition is often referred to as the superposition principle.
Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input $x(t)$ is $y(t)$, then the output due to input $x(t-T)$ is $y(t-T)$. Hence, the system is time invariant because the output does not depend on the particular time the input is applied.
Through these properties, it is reasoned that LTI systems can be characterized entirely by a single function called the system's impulse response, as, by superposition, any arbitrary signal can be expressed as a superposition of time-shifted impulses. The output of the system $y(t)$ is simply the convolution of the input to the system $x(t)$ with the system's impulse response $h(t)$. This is called a continuous-time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: $y_i = x_i * h_i$, where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral.
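For instance, a short discrete-time sketch (with an illustrative moving-average impulse response) computes the output directly as a convolution:

```python
import numpy as np

# Discrete LTI system: the impulse response h fully determines the output.
h = np.array([1/3, 1/3, 1/3])                 # 3-point moving average
x = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 0.0])  # an arbitrary input sequence

y = np.convolve(x, h)                         # y[n] = sum_k x[k] h[n-k]
print(np.round(y, 3))
```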
LTI systems can also be characterized in the frequency domain by the system's transfer function, which for a continuous-time or discrete-time system is the Laplace transform or Z-transform of the system's impulse response, respectively. As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the corresponding frequency-domain representation of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. As a result, if the input to a system is the complex waveform $A_s e^{st}$ for some complex amplitude $A_s$ and complex frequency $s$, the output will be some complex constant times the input, say $B_s e^{st}$ for some new complex amplitude $B_s$. The ratio $B_s/A_s$ is the transfer function at frequency $s$. The output signal will be shifted in phase and amplitude, but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input.
LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.
Most LTI system concepts are similar between the continuous-time and discrete-time cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals. A linear system that is not time-invariant can be solved using other approaches such as the Green function method.
== Continuous-time systems ==
=== Impulse response and convolution ===
The behavior of a linear, continuous-time, time-invariant system with input signal x(t) and output signal y(t) is described by the convolution integral:
$$y(t) = (x*h)(t) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\,\mathrm{d}\tau, \qquad \text{(Eq.1)}$$
where $h(t)$ is the system's response to an impulse: $x(\tau) = \delta(\tau)$.
$y(t)$ is therefore proportional to a weighted average of the input function $x(\tau)$. The weighting function is $h(-\tau)$, simply shifted by amount $t$. As $t$ changes, the weighting function emphasizes different parts of the input function. When $h(\tau)$ is zero for all negative $\tau$, $y(t)$ depends only on values of $x$ prior to time $t$, and the system is said to be causal.
To understand why the convolution produces the output of an LTI system, let the notation $\{x(u-\tau);\ u\}$ represent the function $x(u-\tau)$ with variable $u$ and constant $\tau$. And let the shorter notation $\{x\}$ represent $\{x(u);\ u\}$. Then a continuous-time system transforms an input function, $\{x\}$, into an output function, $\{y\}$. And in general, every value of the output can depend on every value of the input. This concept is represented by:
$$y(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{x\},$$
where $O_t$ is the transformation operator for time $t$. In a typical system, $y(t)$ depends most heavily on the values of $x$ that occurred near time $t$. Unless the transform itself changes with $t$, the output function is just constant, and the system is uninteresting.
For a linear system, $O$ must satisfy Eq.2:
$$O_t\left\{\int_{-\infty}^{\infty} c_{\tau}\, x_{\tau}(u)\,\mathrm{d}\tau;\ u\right\} = \int_{-\infty}^{\infty} c_{\tau}\, y_{\tau}(t)\,\mathrm{d}\tau, \qquad \text{(Eq.2)}$$
where $y_{\tau}(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{x_{\tau}\}$. And the time-invariance requirement is:
$$O_t\{x(u-\tau);\ u\} = y(t-\tau) \mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{x\}. \qquad \text{(Eq.3)}$$
In this notation, we can write the impulse response as $h(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{\delta(u);\ u\}$.
Similarly:
$$h(t-\tau) \mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{\delta(u);\ u\} = O_t\{\delta(u-\tau);\ u\}. \qquad \text{(using Eq.3)}$$
Substituting this result into the convolution integral:
$$(x*h)(t) = \int_{-\infty}^{\infty} x(\tau)\cdot h(t-\tau)\,\mathrm{d}\tau = \int_{-\infty}^{\infty} x(\tau)\cdot O_t\{\delta(u-\tau);\ u\}\,\mathrm{d}\tau,$$
which has the form of the right side of Eq.2 for the case $c_{\tau} = x(\tau)$ and $x_{\tau}(u) = \delta(u-\tau)$.
Eq.2 then allows this continuation:
$$(x*h)(t) = O_t\left\{\int_{-\infty}^{\infty} x(\tau)\cdot\delta(u-\tau)\,\mathrm{d}\tau;\ u\right\} = O_t\{x(u);\ u\} \mathrel{\stackrel{\text{def}}{=}} y(t).$$
In summary, the input function, $\{x\}$, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown at Eq.1. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral.
The mathematical operations above have a simple graphical simulation.
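These operations can also be checked numerically by approximating the convolution integral with a Riemann sum; the causal exponential impulse response and pulse input below are illustrative choices, not systems discussed in the text.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                                   # h(t) = e^{-t} for t >= 0
x = ((t >= 1.0) & (t < 3.0)).astype(float)       # rectangular pulse on [1, 3)

y = np.convolve(x, h)[: len(t)] * dt             # Riemann sum of the integral
# Closed form on [1, 3) for this pair: y(t) = 1 - e^{-(t - 1)}.
k = (t >= 1.0) & (t < 3.0)
print(np.max(np.abs(y[k] - (1.0 - np.exp(-(t[k] - 1.0))))) < 1e-2)   # True
```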
=== Exponentials as eigenfunctions ===
An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is, $\mathcal{H}f = \lambda f$, where $f$ is the eigenfunction and $\lambda$ is the eigenvalue, a constant.
The exponential functions $Ae^{st}$, where $A, s\in\mathbb{C}$, are eigenfunctions of a linear, time-invariant operator. A simple proof illustrates this concept. Suppose the input is $x(t) = Ae^{st}$. The output of the system with impulse response $h(t)$ is then
$$\int_{-\infty}^{\infty} h(t-\tau)\, Ae^{s\tau}\,\mathrm{d}\tau,$$
which, by the commutative property of convolution, is equivalent to
$$\overbrace{\int_{-\infty}^{\infty} h(\tau)\, Ae^{s(t-\tau)}\,\mathrm{d}\tau}^{\mathcal{H}f} = \int_{-\infty}^{\infty} h(\tau)\, Ae^{st} e^{-s\tau}\,\mathrm{d}\tau = Ae^{st}\int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\,\mathrm{d}\tau = \overbrace{\underbrace{Ae^{st}}_{\text{Input}}}^{f}\ \overbrace{\underbrace{H(s)}_{\text{Scalar}}}^{\lambda},$$
where the scalar
$$H(s) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^{\infty} h(t) e^{-st}\,\mathrm{d}t$$
is dependent only on the parameter s.
So the system's response is a scaled version of the input. In particular, for any $A, s\in\mathbb{C}$, the system output is the product of the input $Ae^{st}$ and the constant $H(s)$. Hence, $Ae^{st}$ is an eigenfunction of an LTI system, and the corresponding eigenvalue is $H(s)$.
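The eigenfunction property can be illustrated numerically. The sketch below assumes the impulse response $h(t) = e^{-t}$ for $t\geq 0$ (an arbitrary stable choice), whose transform is $H(s) = 1/(s+1)$, and compares the convolution output against $H(j\omega)e^{j\omega t}$ once start-up transients have decayed.

```python
import numpy as np

dt, w = 1e-3, 2.0
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t)                            # impulse response, t >= 0
x = np.exp(1j * w * t)                    # complex exponential input

y = np.convolve(x, h)[: len(t)] * dt      # numerical convolution
H = 1.0 / (1j * w + 1.0)                  # eigenvalue H(s) at s = j*w
tail = t > 5.0                            # ignore the start-up transient
print(np.max(np.abs(y[tail] - H * x[tail])) < 2e-2)   # True
```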
==== Direct proof ====
It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems.
Let us set $v(t) = e^{i\omega t}$, some complex exponential, and $v_a(t) = e^{i\omega(t+a)}$ a time-shifted version of it.

$H[v_a](t) = e^{i\omega a}H[v](t)$ by linearity with respect to the constant $e^{i\omega a}$.

$H[v_a](t) = H[v](t+a)$ by time invariance of $H$.

So $H[v](t+a) = e^{i\omega a}H[v](t)$. Setting $t = 0$ and renaming, we get:
$$H[v](\tau) = e^{i\omega\tau}\, H[v](0),$$
i.e. a complex exponential $e^{i\omega\tau}$ as input will give a complex exponential of the same frequency as output.
=== Fourier and Laplace transforms ===
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform
$$H(s) \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}\{h(t)\} \mathrel{\stackrel{\text{def}}{=}} \int_0^{\infty} h(t) e^{-st}\,\mathrm{d}t$$
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form $e^{j\omega t}$ where $\omega\in\mathbb{R}$ and $j \mathrel{\stackrel{\text{def}}{=}} \sqrt{-1}$). The Fourier transform $H(j\omega) = \mathcal{F}\{h(t)\}$ gives the eigenvalues for pure complex sinusoids. Both of $H(s)$ and $H(j\omega)$ are called the system function, system response, or transfer function.
The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of t less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the bilateral Laplace transform).
The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist:
$$y(t) = (h*x)(t) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^{\infty} h(t-\tau) x(\tau)\,\mathrm{d}\tau \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}^{-1}\{H(s)X(s)\}.$$
One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency s = jω, where ω = 2πf, we obtain |H(s)| which is the system gain for frequency f. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)).
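As a sketch, for the illustrative first-order system $H(s) = 1/(s+1)$ (an assumed example, not one derived above), the gain and phase at a few frequencies can be read off directly:

```python
import numpy as np

def H(s: complex) -> complex:
    return 1.0 / (s + 1.0)       # assumed first-order low-pass system

for f in (0.01, 0.1, 1.0, 10.0):
    s = 2j * np.pi * f           # evaluate on the imaginary axis, s = j*2*pi*f
    print(f"f = {f:5.2f} Hz   gain = {abs(H(s)):.4f}   "
          f"phase = {np.angle(H(s)):+.3f} rad")
```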
=== Examples ===
=== Important system properties ===
Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time; however, this restriction is not present in other cases such as image processing.
==== Causality ====
A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is
$$h(t) = 0 \quad \forall t < 0,$$
where $h(t)$ is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform. However, when working in the time domain, one normally uses the one-sided Laplace transform which requires causality.
==== Stability ====
A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying $\|x(t)\|_{\infty} < \infty$ leads to an output satisfying $\|y(t)\|_{\infty} < \infty$ (that is, a finite maximum absolute value of $x(t)$ implies a finite maximum absolute value of $y(t)$), then the system is stable. A necessary and sufficient condition is that $h(t)$, the impulse response, is in $L^1$ (has a finite $L^1$ norm):
$$\|h(t)\|_1 = \int_{-\infty}^{\infty} |h(t)|\,\mathrm{d}t < \infty.$$
In the frequency domain, the region of convergence must contain the imaginary axis $s = j\omega$.
As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite $L^1$ norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for $t < 0$ and equal to a sinusoid at the cut-off frequency for $t > 0$, then the output will be unbounded for all times other than the zero crossings.
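A rough numerical comparison (truncated integrals with an illustrative step size) shows the contrast between an absolutely integrable impulse response and the sinc:

```python
import numpy as np

# ||e^{-t}||_1 converges (to 1), while the truncated L1 norm of sinc keeps
# growing roughly like log(T), so the ideal low-pass is not BIBO stable.
dt = 1e-2
for T in (1e2, 1e3, 1e4):
    t = np.arange(dt, T, dt)
    exp_l1 = np.sum(np.exp(-t)) * dt
    sinc_l1 = np.sum(np.abs(np.sinc(t))) * dt   # np.sinc(t) = sin(pi t)/(pi t)
    print(f"T = {T:7.0f}   exp: {exp_l1:6.3f}   sinc: {sinc_l1:6.2f}")
```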
== Discrete-time systems ==
Almost everything in continuous-time systems has a counterpart in discrete-time systems.
=== Discrete-time systems from continuous-time systems ===
In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.
In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If $x(t)$ is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal:
$$x_n \mathrel{\stackrel{\text{def}}{=}} x(nT) \qquad \forall\, n\in\mathbb{Z},$$
where T is the sampling period. Before sampling, the input signal is normally run through a so-called Nyquist filter which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component above the folding frequency (or Nyquist frequency) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
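Aliasing is easy to demonstrate numerically. With the illustrative sampling period T = 0.1 s below (folding frequency 5 Hz), a 7 Hz sinusoid produces exactly the same samples as a 3 Hz one, since 7 = 2·5 − 3.

```python
import numpy as np

T = 0.1                                   # sampling period: 10 Hz rate
n = np.arange(50)
x_7hz = np.cos(2 * np.pi * 7 * n * T)     # above the 5 Hz folding frequency
x_3hz = np.cos(2 * np.pi * 3 * n * T)     # its alias below it

print(np.allclose(x_7hz, x_3hz))          # True: the sample sequences match
```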
=== Impulse response and convolution ===
Let $\{x[m-k];\ m\}$ represent the sequence $\{x[m-k];\ \text{for all integer values of } m\}$. And let the shorter notation $\{x\}$ represent $\{x[m];\ m\}$. A discrete system transforms an input sequence, $\{x\}$, into an output sequence, $\{y\}$. In general, every element of the output can depend on every element of the input. Representing the transformation operator by $O$, we can write:
$$y[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{x\}.$$
Note that unless the transform itself changes with n, the output sequence is just constant, and the system is uninteresting. (Thus the subscript, n.) In a typical system, y[n] depends most heavily on the elements of x whose indices are near n.
For the special case of the Kronecker delta function, $x[m] = \delta[m]$, the output sequence is the impulse response:
$$h[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{\delta[m];\ m\}.$$
For a linear system, $O$ must satisfy:
$$O_n\left\{\sum_{k=-\infty}^{\infty} c_k\, x_k[m];\ m\right\} = \sum_{k=-\infty}^{\infty} c_k\, y_k[n], \qquad \text{(Eq.4)}$$
where $y_k[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{x_k\}$. And the time-invariance requirement is:
$$O_n\{x[m-k];\ m\} = y[n-k] \mathrel{\stackrel{\text{def}}{=}} O_{n-k}\{x\}. \qquad \text{(Eq.5)}$$
In such a system, the impulse response, $\{h\}$, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:
$$x[m] \equiv \sum_{k=-\infty}^{\infty} x[k]\cdot\delta[m-k],$$
which expresses $\{x\}$ in terms of a sum of weighted delta functions.
Therefore:
$$y[n] = O_n\{x\} = O_n\left\{\sum_{k=-\infty}^{\infty} x[k]\cdot\delta[m-k];\ m\right\} = \sum_{k=-\infty}^{\infty} x[k]\cdot O_n\{\delta[m-k];\ m\},$$
where we have invoked Eq.4 for the case $c_k = x[k]$ and $x_k[m] = \delta[m-k]$.
And because of Eq.5, we may write:
$$O_n\{\delta[m-k];\ m\} = O_{n-k}\{\delta[m];\ m\} \mathrel{\stackrel{\text{def}}{=}} h[n-k].$$
Therefore:
$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\cdot h[n-k] \mathrel{\stackrel{\text{def}}{=}} (x*h)[n],$$
which is the familiar discrete convolution formula. The operator $O_n$ can therefore be interpreted as proportional to a weighted average of the function x[k].
The weighting function is h[−k], simply shifted by amount n. As n changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at n=0 is a "time" reversed copy of the unshifted weighting function. When h[k] is zero for all negative k, the system is said to be causal.
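As a concrete check of the derivation above (Python with NumPy assumed; the three-tap impulse response and the input are arbitrary example values), the output can be computed directly from the sum over k and compared with a library convolution:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])        # causal FIR impulse response (example values)
x = np.array([1.0, 2.0, 0.0, -1.0])  # finite-length input

# Direct evaluation of y[n] = sum_k x[k] * h[n - k]
N = len(x) + len(h) - 1
y_direct = np.zeros(N)
for n in range(N):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y_direct[n] += x[k] * h[n - k]

# np.convolve implements the same discrete convolution formula.
assert np.allclose(y_direct, np.convolve(x, h))
```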
=== Exponentials as eigenfunctions ===
An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols,
{\displaystyle {\mathcal {H}}f=\lambda f,}
where f is the eigenfunction and {\displaystyle \lambda } is the eigenvalue, a constant.
The exponential functions {\displaystyle z^{n}=e^{sTn}}, where {\displaystyle n\in \mathbb {Z} }, are eigenfunctions of a linear, time-invariant operator. {\displaystyle T\in \mathbb {R} } is the sampling interval, and {\displaystyle z=e^{sT},\ z,s\in \mathbb {C} }. A simple proof illustrates this concept.
Suppose the input is {\displaystyle x[n]=z^{n}}. The output of the system with impulse response {\displaystyle h[n]} is then
{\displaystyle \sum _{m=-\infty }^{\infty }h[n-m]\,z^{m},}
which is equivalent to the following by the commutative property of convolution:
{\displaystyle \sum _{m=-\infty }^{\infty }h[m]\,z^{(n-m)}=z^{n}\sum _{m=-\infty }^{\infty }h[m]\,z^{-m}=z^{n}H(z),}
where
{\displaystyle H(z)\mathrel {\stackrel {\text{def}}{=}} \sum _{m=-\infty }^{\infty }h[m]z^{-m}}
is dependent only on the parameter z.
So {\displaystyle z^{n}} is an eigenfunction of an LTI system because the system response is the same as the input times the constant {\displaystyle H(z)}.
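The eigenfunction property can also be verified numerically. In the sketch below (Python with NumPy assumed; the impulse response and the value of z are arbitrary example choices), a finite exponential input is filtered and, away from the edges, the output matches H(z)·z^n:

```python
import numpy as np

h = np.array([0.4, 0.25, 0.15])            # FIR impulse response (example values)
z = 0.9 * np.exp(1j * 0.3)                 # an arbitrary complex z

n = np.arange(50)
x = z ** n                                 # exponential input x[n] = z^n

H = np.sum(h * z ** (-np.arange(len(h))))  # H(z) = sum_m h[m] z^{-m}

y = np.convolve(x, h)
steady = np.arange(len(h) - 1, len(n))     # indices with full overlap (no edge effects)

# In the fully overlapped region the output equals H(z) * z^n.
assert np.allclose(y[steady], H * z ** steady)
```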
=== Z and discrete-time Fourier transforms ===
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform
{\displaystyle H(z)={\mathcal {Z}}\{h[n]\}=\sum _{n=-\infty }^{\infty }h[n]z^{-n}}
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the form {\displaystyle e^{j\omega n}}, where {\displaystyle \omega \in \mathbb {R} }. These can also be written as {\displaystyle z^{n}} with {\displaystyle z=e^{j\omega }}. The discrete-time Fourier transform (DTFT)
{\displaystyle H(e^{j\omega })={\mathcal {F}}\{h[n]\}}
gives the eigenvalues of pure sinusoids. Both {\displaystyle H(z)} and {\displaystyle H(e^{j\omega })} are called the system function, system response, or transfer function.
Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for n<0. The discrete-time Fourier series may be used for analyzing periodic signals.
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,
{\displaystyle y[n]=(h*x)[n]=\sum _{m=-\infty }^{\infty }h[n-m]x[m]={\mathcal {Z}}^{-1}\{H(z)X(z)\}.}
Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.
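A small sketch of this transform-domain view (Python with NumPy assumed; the filter and input are arbitrary example values): the DTFT of the impulse response is evaluated on a frequency grid, and the convolution–multiplication property is checked with zero-padded FFTs:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])      # impulse response (example values)
x = np.array([1.0, -1.0, 2.0, 0.5])

# DTFT H(e^{jw}) = sum_n h[n] e^{-jwn}, evaluated on a grid of frequencies
w = np.linspace(-np.pi, np.pi, 512)
H = np.array([np.sum(h * np.exp(-1j * wi * np.arange(len(h)))) for wi in w])

# Convolution in time equals multiplication in the transform domain;
# zero-padding to the full output length makes circular convolution linear.
N = len(x) + len(h) - 1
y_time = np.convolve(x, h)
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real
assert np.allclose(y_time, y_freq)
```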
=== Important system properties ===
The input-output characteristics of a discrete-time LTI system are completely described by its impulse response {\displaystyle h[n]}.
Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function is stable.
==== Causality ====
A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input. A necessary and sufficient condition for causality is
{\displaystyle h[n]=0\ \forall n<0,}
where {\displaystyle h[n]}
is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined.
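For a finitely supported impulse response stored as an array, the condition can be checked directly (Python with NumPy assumed; the convention that `origin` marks the array position of n = 0 is an assumption of this example):

```python
import numpy as np

def is_causal(h, origin):
    """True if h[n] = 0 for all n < 0, where h[origin] is the sample at n = 0."""
    return bool(np.all(h[:origin] == 0))

h = np.array([0.0, 0.0, 1.0, 0.5, 0.25])  # samples at n = -2 .. 2
print(is_causal(h, origin=2))             # True: no response before the impulse
```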
==== Stability ====
A system is bounded input, bounded output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if
{\displaystyle \|x[n]\|_{\infty }<\infty } implies that {\displaystyle \|y[n]\|_{\infty }<\infty } (that is, if bounded input implies bounded output, in the sense that the maximum absolute values of {\displaystyle x[n]} and {\displaystyle y[n]} are finite), then the system is stable. A necessary and sufficient condition is that the impulse response {\displaystyle h[n]} satisfies
{\displaystyle \|h[n]\|_{1}\mathrel {\stackrel {\text{def}}{=}} \sum _{n=-\infty }^{\infty }|h[n]|<\infty .}
In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying {\displaystyle |z|=1} for complex z).
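A sketch of the absolute-summability test on truncated impulse responses (Python with NumPy assumed; both responses and the truncation length are arbitrary example choices). For h[n] = a^n u[n] the sum is a geometric series that converges exactly when |a| < 1, matching the requirement that the pole lie inside the unit circle:

```python
import numpy as np

n = np.arange(1000)                # truncated support for a numerical check
h_stable = 0.9 ** n                # pole at z = 0.9, inside the unit circle
h_unstable = 1.1 ** n              # pole at z = 1.1, outside the unit circle

print(np.sum(np.abs(h_stable)))    # close to the exact value 1 / (1 - 0.9) = 10
print(np.sum(np.abs(h_unstable)))  # partial sums grow without bound
```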
== Notes ==
== See also ==
Circulant matrix
Frequency response
Impulse response
System analysis
Green function
Signal-flow graph
== References ==
Phillips, C.L.; Parr, J.M.; Riskin, E.A. (2007). Signals, Systems and Transforms. Prentice Hall. ISBN 978-0-13-041207-2.
Hespanha, J.P. (2009). Linear System Theory. Princeton university press. ISBN 978-0-691-14021-6.
Crutchfield, Steve (October 12, 2010), "The Joy of Convolution", Johns Hopkins University, retrieved November 21, 2010
Vaidyanathan, P. P.; Chen, T. (May 1995). "Role of anticausal inverses in multirate filter banks — Part I: system theoretic fundamentals" (PDF). IEEE Trans. Signal Process. 43 (6): 1090. Bibcode:1995ITSP...43.1090V. doi:10.1109/78.382395.
== Further reading ==
== External links ==
ECE 209: Review of Circuits as LTI Systems – Short primer on the mathematical analysis of (electrical) LTI systems.
ECE 209: Sources of Phase Shift – Gives an intuitive explanation of the source of phase shift in two common electrical LTI systems.
JHU 520.214 Signals and Systems course notes. An encapsulated course on LTI system theory. Adequate for self teaching.
LTI system example: RC low-pass filter. Amplitude and phase response. | Wikipedia/LTI_system_theory |
Management science (or managerial science) is a wide and interdisciplinary study of solving complex problems and making strategic decisions as it pertains to institutions, corporations, governments and other types of organizational entities. It is closely related to management, economics, business, engineering, management consulting, and other fields. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms and aims to improve an organization's ability to enact rational and accurate management decisions by arriving at optimal or near optimal solutions to complex decision problems.
Management science looks to help businesses achieve goals using a number of scientific methods. The field was initially an outgrowth of applied mathematics, where early challenges were problems relating to the optimization of systems which could be modeled linearly, i.e., determining the optima (maximum value of profit, assembly line performance, crop yield, bandwidth, etc. or minimum of loss, risk, costs, etc.) of some objective function. Today, the discipline of management science may encompass a diverse range of managerial and organizational activity as it regards to a problem which is structured in mathematical or other quantitative form in order to derive managerially relevant insights and solutions.
== Overview ==
Management science is concerned with a number of areas of study:
Developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems. The models used can often be represented mathematically, but sometimes computer-based, visual or verbal representations are used as well or instead.
Designing and developing new and better models of organizational excellence.
Helping to improve, stabilize or otherwise manage profit margins in enterprises.
Management science research can be done on three levels:
The fundamental level lies in three mathematical disciplines: probability, optimization, and dynamical systems theory.
The modeling level is about building models, analyzing them mathematically, gathering and analyzing data, implementing models on computers, solving them, experimenting with them—all this is part of management science research on the modeling level. This level is mainly instrumental, and driven mainly by statistics and econometrics.
The application level, just as in any other engineering and economics disciplines, strives to make a practical impact and be a driver for change in the real world.
The management scientist's mandate is to use rational, systematic and science-based techniques to inform and improve decisions of all kinds. The techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. The norm for scholars in management science is to focus their work in a certain area or subfield of management like public administration, finance, calculus, information and so forth.
== History ==
Although management science as it exists now covers a myriad of topics having to do with coming up with solutions that increase the efficiency of a business, it was not even a field of study in the not too distant past. There are a number of businessmen and management specialists who can receive credit for the creation of the idea of management science. Most commonly, however, the founder of the field is considered to be Frederick Winslow Taylor in the early 20th century. Likewise, administration expert Luther Gulick and management expert Peter Drucker both had an impact on the development of management science in the 1930s and 1940s. Drucker is quoted as having said that, "the purpose of the corporation is to be economically efficient." This thought process is foundational to management science. Even before the influence of these men, there was Louis Brandeis who became known as "the people's lawyer". In 1910, Brandeis was the creator of a new business approach which he coined as "scientific management", a term that is often falsely attributed to the aforementioned Frederick Winslow Taylor.
These men represent some of the earliest ideas of management science at its conception. After the idea was born, it was further explored around the time of World War II. It was at this time that management science became more than an idea and was put into practice. This sort of experimentation was essential to the development of the field as it is known today.
The origins of management science can be traced to operations research, which became influential during World War II when the Allied forces recruited scientists of various disciplines to assist with military operations. In these early applications, the scientists used simple mathematical models to make efficient use of limited technologies and resources. The application of these models to the corporate sector became known as management science.
In 1967 Stafford Beer characterized the field of management science as "the business use of operations research".
== Theory ==
Some of the fields that management science involves include:
== Applications ==
Management science's applications are diverse, allowing its use in many fields. Below are some examples of the applications of management science.
In finance, management science is instrumental in portfolio optimization, risk management, and investment strategies. By employing mathematical models, analysts can assess market trends, optimize asset allocation, and mitigate financial risks, contributing to more informed and strategic decision-making.
In healthcare, management science plays a crucial role in optimizing resource allocation, patient scheduling, and facility management. Mathematical models aid healthcare professionals in streamlining operations, reducing waiting times, and improving overall efficiency in the delivery of care.
Logistics and supply chain management benefit significantly from management science applications. Optimization algorithms assist in route planning, inventory management, and demand forecasting, enhancing the efficiency of the entire supply chain.
In manufacturing, management science supports process optimization, production planning, and quality control. Mathematical models help identify bottlenecks, reduce production costs, and enhance overall productivity.
Furthermore, management science contributes to strategic decision-making in project management, marketing, and human resources. By leveraging quantitative techniques, organizations can make data-driven decisions, allocate resources effectively, and enhance overall performance across diverse functional areas.
== See also ==
== References ==
== Further reading ==
Kenneth R. Baker, Dean H. Kropp (1985). Management Science: An Introduction to the Use of Decision Models
David Charles Heinze (1982). Management Science: Introductory Concepts and Applications
Lee J. Krajewski, Howard E. Thompson (1981). "Management Science: Quantitative Methods in Context"
Thomas W. Knowles (1989). Management science: Building and Using Models
Kamlesh Mathur, Daniel Solow (1994). Management Science: The Art of Decision Making
Laurence J. Moore, Sang M. Lee, Bernard W. Taylor (1993). Management Science
William Thomas Morris (1968). Management Science: A Bayesian Introduction.
William E. Pinney, Donald B. McWilliams (1987). Management Science: An Introduction to Quantitative Analysis for Management
Gerald E. Thompson (1982). Management Science: An Introduction to Modern Quantitative Analysis and Decision Making. New York : McGraw-Hill Publishing Co. | Wikipedia/Management_science |
Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations.
This theory deals with the long-term qualitative behavior of dynamical systems, and studies the nature of, and when possible the solutions of, the equations of motion of systems that are often primarily mechanical or otherwise physical in nature, such as planetary orbits and the behaviour of electronic circuits, as well as systems that arise in biology, economics, and elsewhere. Much of modern research is focused on the study of chaotic systems and bizarre systems.
This field of study is also called just dynamical systems, mathematical dynamical systems theory or the mathematical theory of dynamical systems.
== Overview ==
Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible steady states?", or "Does the long-term behavior of the system depend on its initial condition?"
An important goal is to describe the fixed points, or steady states of a given dynamical system; these are values of the variable that do not change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point.
Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system.
Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos. The branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory.
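A minimal sketch of these ideas (plain Python; the logistic map is a standard textbook example, and the parameter values are illustrative): iterating x ↦ r·x·(1 − x) converges to an attracting fixed point for r = 2.5 but behaves chaotically for r = 4, where nearby initial conditions separate rapidly.

```python
def iterate(r, x0, steps=100):
    """Iterate the logistic map x -> r * x * (1 - x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# r = 2.5: the trajectory settles to the attracting fixed point x* = 1 - 1/r = 0.6
print(iterate(2.5, 0.2))                       # ~0.6 regardless of x0 in (0, 1)

# r = 4.0: chaotic regime; two nearby initial conditions end up far apart
print(abs(iterate(4.0, 0.200000) - iterate(4.0, 0.200001)))
```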
== History ==
The concept of dynamical systems theory has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future.
Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems.
Some excellent presentations of mathematical dynamic system theory include Beltrami (1998), Luenberger (1979), Padulo & Arbib (1974), and Strogatz (1994).
== Concepts ==
=== Dynamical systems ===
The dynamical system concept is a mathematical formalization for any fixed "rule" that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each spring in a lake.
A dynamical system has a state determined by a collection of real numbers, or more generally by a set of points in an appropriate state space. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule may be deterministic (for a given time interval one future state can be precisely predicted given the current state) or stochastic (the evolution of the state can only be predicted with a certain probability).
=== Dynamicism ===
Dynamicism, also termed the dynamic hypothesis, the dynamic hypothesis in cognitive science, or dynamic cognition, is a new approach in cognitive science exemplified by the work of philosopher Tim van Gelder. It argues that differential equations are more suited to modelling cognition than more traditional computer models.
=== Nonlinear system ===
In mathematics, a nonlinear system is a system that is not linear—i.e., a system that does not satisfy the superposition principle. Less technically, a nonlinear system is any problem where the variable(s) to solve for cannot be written as a linear sum of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system as long as a particular solution is known.
== Related fields ==
=== Arithmetic dynamics ===
Arithmetic dynamics is a field that emerged in the 1990s that amalgamates two areas of mathematics, dynamical systems and number theory. Classically, discrete dynamics refers to the study of the iteration of self-maps of the complex plane or real line. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, and/or algebraic points under repeated application of a polynomial or rational function.
=== Chaos theory ===
Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions (popularly referred to as the butterfly effect). As a result of this sensitivity, which manifests itself as an exponential growth of perturbations in the initial conditions, the behavior of chaotic systems appears random. This happens even though these systems are deterministic, meaning that their future dynamics are fully defined by their initial conditions, with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.
=== Complex systems ===
Complex systems is a scientific field that studies the common properties of systems considered complex in nature, society, and science. It is also called complex systems theory, complexity science, study of complex systems and/or sciences of complexity. The key problems of such systems are difficulties with their formal modeling and simulation. From such perspective, in different research contexts complex systems are defined on the base of their different attributes.
The study of complex systems is bringing new vitality to many areas of science where a more typical reductionist strategy has fallen short. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including neurosciences, social sciences, meteorology, chemistry, physics, computer science, psychology, artificial life, evolutionary computation, economics, earthquake prediction, molecular biology and inquiries into the nature of living cells themselves.
=== Control theory ===
Control theory is an interdisciplinary branch of engineering and mathematics, in part it deals with influencing the behavior of dynamical systems.
=== Ergodic theory ===
Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics.
=== Functional analysis ===
Functional analysis is the branch of mathematics, and specifically of analysis, concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. This usage of the word functional goes back to the calculus of variations, implying a function whose argument is a function. Its use in general has been attributed to mathematician and physicist Vito Volterra and its founding is largely attributed to mathematician Stefan Banach.
=== Graph dynamical systems ===
The concept of graph dynamical systems (GDS) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of graph dynamical systems is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
=== Projected dynamical systems ===
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation.
=== Symbolic dynamics ===
Symbolic dynamics is the practice of modelling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator.
=== System dynamics ===
System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour and state of the entire system. What makes using system dynamics different from other approaches to studying systems is the language used to describe feedback loops with stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity.
=== Topological dynamics ===
Topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology.
== Applications ==
=== In biomechanics ===
In sports biomechanics, dynamical systems theory has emerged in the movement sciences as a viable framework for modeling athletic performance and efficiency. It comes as no surprise, since dynamical systems theory has its roots in analytical mechanics. From a psychophysiological perspective, the human movement system is a highly intricate network of co-dependent sub-systems (e.g. respiratory, circulatory, nervous, skeletomuscular, perceptual) that are composed of a large number of interacting components (e.g. blood cells, oxygen molecules, muscle tissue, metabolic enzymes, connective tissue and bone). In dynamical systems theory, movement patterns emerge through generic processes of self-organization found in physical and biological systems. There is no research validation of any of the claims associated with the conceptual application of this framework.
=== In cognitive science ===
Dynamical system theory has been applied in the field of neuroscience and cognitive development, especially in the neo-Piagetian theories of cognitive development. It reflects the belief that cognitive development is best represented by physical theories rather than theories based on syntax and AI. It is also believed that differential equations are the most appropriate tool for modeling human behavior. These equations are interpreted to represent an agent's cognitive trajectory through state space. In other words, dynamicists argue that psychology should be (or is) the description (via differential equations) of the cognitions and behaviors of an agent under certain environmental and internal pressures. The language of chaos theory is also frequently adopted.
In it, the learner's mind reaches a state of disequilibrium where old patterns have broken down. This is the phase transition of cognitive development. Self-organization (the spontaneous creation of coherent forms) sets in as activity levels link to each other. Newly formed macroscopic and microscopic structures support each other, speeding up the process. These links form the structure of a new state of order in the mind through a process called scalloping (the repeated building up and collapsing of complex performance.) This new, novel state is progressive, discrete, idiosyncratic and unpredictable.
Dynamic systems theory has recently been used to explain a long-unanswered problem in child development referred to as the A-not-B error.
Further, since the middle of the 1990s cognitive science, oriented towards a system theoretical connectionism, has increasingly adopted the methods from (nonlinear) “Dynamic Systems Theory (DST)“. A variety of neurosymbolic cognitive neuroarchitectures in modern connectionism, considering their mathematical structural core, can be categorized as (nonlinear) dynamical systems. These attempts in neurocognition to merge connectionist cognitive neuroarchitectures with DST come from not only neuroinformatics and connectionism, but also recently from developmental psychology (“Dynamic Field Theory (DFT)”) and from “evolutionary robotics” and “developmental robotics” in connection with the mathematical method of “evolutionary computation (EC)”. For an overview see Maurer.
=== In second language development ===
The application of Dynamic Systems Theory to study second language acquisition is attributed to Diane Larsen-Freeman who published an article in 1997 in which she claimed that second language acquisition should be viewed as a developmental process which includes language attrition as well as language acquisition. In her article she claimed that language should be viewed as a dynamic system which is dynamic, complex, nonlinear, chaotic, unpredictable, sensitive to initial conditions, open, self-organizing, feedback sensitive, and adaptive.
== See also ==
Related subjects
Related scientists
== Notes ==
== Further reading ==
Abraham, Frederick D.; Abraham, Ralph; Shaw, Christopher D. (1990). A Visual Introduction to Dynamical Systems Theory for Psychology. Aerial Press. ISBN 978-0-942344-09-7. OCLC 24345312.
Beltrami, Edward J. (1998). Mathematics for Dynamic Modeling (2nd ed.). Academic Press. ISBN 978-0-12-085566-7. OCLC 36713294.
Hájek, Otomar (1968). Dynamical systems in the plane. Academic Press. ISBN 9780123172402. OCLC 343328.
Luenberger, David G. (1979). Introduction to dynamic systems: theory, models, and applications. Wiley. ISBN 978-0-471-02594-8. OCLC 4195122.
Michel, Anthony; Kaining Wang; Bo Hu (2001). Qualitative Theory of Dynamical Systems. Taylor & Francis. ISBN 978-0-8247-0526-8. OCLC 45873628.
Padulo, Louis; Arbib, Michael A. (1974). System theory: a unified state-space approach to continuous and discrete systems. Saunders. ISBN 9780721670355. OCLC 947600.
Strogatz, Steven H. (1994). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Addison Wesley. ISBN 978-0-7382-0453-6. OCLC 49839504.
== External links ==
Dynamic Systems Encyclopedia of Cognitive Science entry.
Definition of dynamical system in MathWorld.
DSWeb Dynamical Systems Magazine | Wikipedia/Dynamical_systems_theory |
Systems theory in archaeology is the application of systems theory and systems thinking in archaeology. It originated with the work of Ludwig von Bertalanffy in the 1950s, and was introduced into archaeology in the 1960s with the work of Sally R. Binford and Lewis Binford's "New Perspectives in Archaeology" and Kent V. Flannery's "Archaeological Systems Theory and Early Mesoamerica".
== Overview ==
Bertalanffy attempted to construct a general systems theory that would explain the interactions of different variables in a variety of systems, no matter what those variables actually represented. A system was defined as a group of interacting parts and the relative influence of these parts followed rules which, once formulated could be used to describe the system no matter what the actual components were.
Binford stated the problem in New Perspectives in Archaeology, identifying the low range theory, the middle range theory, and the upper range theory.
The low range theory could be used to explain a specific aspect of a specific culture, such as the archaeology of Mesoamerican agriculture.
A middle range theory could describe any cultural system outside of its specific cultural context, for example, the archaeology of agriculture.
An upper range theory can explain any cultural system, independent of any specifics and regardless of the nature of the variables.
At the time Binford thought the middle range theory may be as far as archaeologists could ever go, but in the mid-1970s some believed that systems theory offered the definitive upper range theory.
Archaeologist Kent Flannery described the application of systems theory to archaeology in his paper Archaeological Systems Theory and Early Mesoamerica. Systems theory allowed archaeologists to treat the archaeological record in a completely new way. No longer did it matter what was being looked at, because it was being broken down to its elemental system components. Culture may be subjective, but unless the model of systems theory is attacked in general, and as long as it is treated mathematically the same way a retreating glacier is treated, the results would be objective. In other words, the problem of cultural bias no longer had any meaning, unless it was a problem with systems theory itself. Culture was now just another natural system that could be explained in mathematical terms.
== Criticism ==
Archaeologists found it was rarely possible to use systems theory in a rigorously mathematical way. While it provided a framework for describing interactions in terms of types of feedback within the system, it was rarely possible to put the quantitative values that systems theory requires for full use, as Flannery himself admits. The result was that in the long run systems theory turned out to be more useful in describing change than in explaining it.
Systems theory also eventually went on to show that predictions that a high amount of cultural regularities would be found were certainly overly optimistic during the early stages of processual archaeology, the opposite of what processual archaeologists were hoping it would be able to do with systems theory. However, systems theory is still used to describe how variables inside a cultural system can interact.
Systems theory, at least, was important in the rise of processual archaeology and was a call against culture-historical methods of past generations. It advanced the argument that one could contemplate the past impartially and sidestep pitfalls through rigour.
== See also ==
World-systems theory
== References ==
== Further reading ==
Sally R. Binford & Lewis Binford (1968). New Perspectives in Archaeology. Chicago, Aldine Press.
K.V. Flannery (1968). "Archaeological Systems Theory and Early Mesoamerica". In Anthropological Archaeology in the Americas, ed. by B. J. Meggers, pp. 67–87. Washington, Anthropological Society of Washington.
Bruce Trigger (1989). A History of Archaeological Thought. Cambridge University Press: New York | Wikipedia/Systems_theory_in_archaeology |
An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous agents (individual or collective entities such as organizations or groups) in order to understand the behavior of a system and what governs its outcomes. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to understand the stochasticity of these models. Particularly within ecology, ABMs are also called individual-based models (IBMs). A review of recent literature on individual-based models, agent-based models, and multiagent systems shows that ABMs are used in many scientific domains including biology, ecology and social science. Agent-based modeling is related to, but distinct from, the concept of multi-agent systems or multi-agent simulation in that the goal of ABM is to search for explanatory insight into the collective behavior of agents obeying simple rules, typically in natural systems, rather than in designing agents or solving specific practical or engineering problems.
Agent-based models are a kind of microscale model that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The process is one of emergence, which some express as "the whole is greater than the sum of its parts". In other words, higher-level system properties emerge from the interactions of lower-level subsystems. Or, macro-scale state changes emerge from micro-scale agent behaviors. Or, simple behaviors (meaning rules followed by agents) generate complex behaviors (meaning state changes at the whole system level).
Individual agents are typically characterized as boundedly rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules. ABM agents may experience "learning", adaptation, and reproduction.
Most agent-based models are composed of: (1) numerous agents specified at various scales (typically referred to as agent-granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. ABMs are typically implemented as computer simulations, either as custom software, or via ABM toolkits, and this software can be then used to test how changes in individual behaviors will affect the system's emerging overall behavior.
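As an illustrative sketch only (plain Python; the agent rule, ring topology, and all names here are hypothetical, not drawn from any particular ABM toolkit), the ingredients listed above map onto a small simulation loop:

```python
import random

class Agent:
    """An agent with a binary state and a simple decision heuristic."""
    def __init__(self, state):
        self.state = state

    def decide(self, neighbors):
        # Heuristic (hypothetical): adopt the strict majority state of neighbors.
        ones = sum(a.state for a in neighbors)
        if 2 * ones > len(neighbors):
            return 1
        if 2 * ones < len(neighbors):
            return 0
        return self.state                    # tie: keep the current state

class Model:
    """Agents on a ring (the interaction topology), updated synchronously."""
    def __init__(self, n=50, seed=0):
        random.seed(seed)
        self.agents = [Agent(random.randint(0, 1)) for _ in range(n)]

    def neighbors(self, i):
        n = len(self.agents)
        return [self.agents[(i - 1) % n], self.agents[(i + 1) % n]]

    def step(self):
        decisions = [a.decide(self.neighbors(i)) for i, a in enumerate(self.agents)]
        for agent, new_state in zip(self.agents, decisions):
            agent.state = new_state

model = Model()
for _ in range(20):
    model.step()
print(sum(a.state for a in model.agents))    # aggregate, emergent outcome
```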
== History ==
The idea of agent-based modeling was developed as a relatively simple concept in the late 1940s. Since it requires computation-intensive procedures, it did not become widespread until the 1990s.
=== Early developments ===
The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproduction. The device von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then built upon by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata.
Another advance was introduced by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by simple rules in a virtual world in the form of a 2-dimensional checkerboard.
The Simula programming language, developed in the mid 1960s and widely implemented by the early 1970s, was the first framework for automating step-by-step agent simulations.
=== 1970s and 1980s: the first models ===
One of the earliest agent-based models in concept was Thomas Schelling's segregation model, which was discussed in his paper "Dynamic Models of Segregation" in 1971. Though Schelling originally used coins and graph paper rather than computers, his models embodied the basic concept of agent-based models as autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome.
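A minimal sketch of Schelling's mechanism (plain Python; the grid size, tolerance threshold, and random-relocation rule are illustrative choices rather than Schelling's exact protocol):

```python
import random

random.seed(1)
SIZE, TOLERANCE = 20, 0.3      # 20x20 torus; agents want >= 30% similar neighbors
grid = [[random.choice([0, 1, None]) for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its occupied neighbors share its type."""
    me = grid[r][c]
    nbrs = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [x for x in nbrs if x is not None]
    return bool(occupied) and sum(x == me for x in occupied) / len(occupied) < TOLERANCE

for _ in range(10_000):        # repeatedly relocate one unhappy agent to a free cell
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] is not None and unhappy(r, c):
        i, j = random.choice([(a, b) for a in range(SIZE) for b in range(SIZE)
                              if grid[a][b] is None])
        grid[i][j], grid[r][c] = grid[r][c], None
# Even with this mild preference, like-typed clusters emerge on the grid.
```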
In the late 1970s, Paulien Hogeweg and Bruce Hesper began experimenting with individual models of ecology. One of their first results was to show that the social structure of bumble-bee colonies emerged as a result of simple rules that govern the behaviour of individual bees.
They introduced the ToDo principle, referring to the way agents "do what there is to do" at any given time.
In the early 1980s, Robert Axelrod hosted a tournament of Prisoner's Dilemma strategies and had them interact in an agent-based manner to determine a winner. Axelrod would go on to develop many other agent-based models in the field of political science that examine phenomena from ethnocentrism to the dissemination of culture.
By the late 1980s, Craig Reynolds' work on flocking models contributed to the development of some of the first biological agent-based models that contained social characteristics. He tried to model the reality of lively biological agents, known as artificial life, a term coined by Christopher Langton.
The first use of the word "agent" and a definition as it is currently used today is hard to track down. One candidate appears to be John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs. A stronger and earlier candidate is Allan Newell, who in the first Presidential Address of AAAI (published as The Knowledge Level) discussed intelligent agents as a concept.
At the same time, during the 1980s, social scientists, mathematicians, operations researchers, and a scattering of people from other disciplines developed Computational and Mathematical Organization Theory (CMOT). This field grew as a special interest group of The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).
=== 1990s: expansion ===
The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was the large-scale ABM, Sugarscape, developed by
Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, and transmission of disease and even culture. Other notable 1990s developments included Carnegie Mellon University's Kathleen Carley ABM, to explore the co-evolution of social networks and culture. The Santa Fe Institute (SFI) was important in encouraging the development of the ABM modeling platform Swarm under the leadership of Christopher Langton. Research conducted through SFI allowed the expansion of ABM techniques to a number of fields including study of the social and spatial dynamics of small-scale human societies and primates. During this 1990s timeframe Nigel Gilbert published the first textbook on Social Simulation: Simulation for the social scientist (1999) and established a journal from the perspective of social sciences: the Journal of Artificial Societies and Social Simulation (JASSS). Other than JASSS, agent-based models of any discipline are within scope of SpringerOpen journal Complex Adaptive Systems Modeling (CASM).
Through the mid-1990s, the social sciences thread of ABM began to focus on such issues as designing effective teams, understanding the communication required for organizational effectiveness, and the behavior of social networks. CMOT—later renamed Computational Analysis of Social and Organizational Systems (CASOS)—incorporated more and more agent-based modeling. Samuelson (2000) is a good brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace the more recent developments.
In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and the move by INFORMS from two meetings each year to one, helped to spur the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks, obtaining National Science Foundation funding for the annual conference and serving as the first President of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. At about the same time NAACSOS began, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), counterparts of NAACSOS, were organized. As of 2013, these three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in the northern Virginia suburbs of Washington, D.C., in July 2008, with George Mason University taking the lead role in local arrangements.
=== 2000s ===
More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation. Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read and others at UCLA have also made significant contributions in organizational behavior and decision-making. Since 1991, UCLA has arranged a conference at Lake Arrowhead, California, that has become another major gathering point for practitioners in this field.
=== 2020 and later ===
After the advent of large language models, researchers began applying interacting language models to agent based modeling. In one widely cited paper, agentic language models interacted in a sandbox environment to perform activities like planning birthday parties and holding elections.
== Theory ==
Most computational modeling research describes systems in equilibrium or as moving between equilibria. Agent-based modeling, however, using simple rules, can result in different sorts of complex and interesting behavior. The three ideas central to agent-based models are agents as objects, emergence, and complexity.
Agent-based models consist of dynamically interacting rule-based agents. The systems within which they interact can create real-world-like complexity. Typically agents are
situated in space and time and reside in networks or in lattice-like neighborhoods. The location of the agents and their responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, the agents may be considered as intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest, and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource (such as water).
The modeling process is best described as inductive. The modeler makes those assumptions thought most relevant to the situation at hand and then watches phenomena emerge from the agents' interactions. Sometimes that result is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mangle.
In some ways, agent-based models complement traditional analytic methods. Where analytic methods enable humans to characterize the equilibria of a system, agent-based models allow the possibility of generating those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns—network structures of terrorist organizations and the Internet, power-law distributions in the sizes of traffic jams, wars, and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models also can be used to identify lever points, defined as moments in time in which interventions have extreme consequences, and to distinguish among types of path dependency.
Rather than focusing on stable states, many models consider a system's robustness—the ways that complex systems adapt to internal and external pressures so as to maintain their functionalities. The task of harnessing that complexity requires consideration of the agents themselves—their diversity, connectedness, and level of interactions.
=== Framework ===
Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need for combining agent-based and complex network-based models. One proposed framework consists of four levels for developing models of complex adaptive systems, described using several example multidisciplinary case studies:
Complex Network Modeling Level for developing models using interaction data of various system components.
Exploratory Agent-based Modeling Level for developing agent-based models for assessing the feasibility of further research. This can e.g. be useful for developing proof-of-concept models such as for funding applications without requiring an extensive learning curve for the researchers.
Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent-based models by means of using templates and complex network-based models. Building DREAM models allows model comparison across scientific disciplines.
Validated agent-based modeling using Virtual Overlay Multiagent system (VOMAS) for the development of verified and validated models in a formal manner.
Other methods of describing agent-based models include code templates and text-based methods such as the ODD (Overview, Design concepts, and Design Details) protocol.
The role of the environment where agents live, both macro and micro, is also becoming an important factor in agent-based modelling and simulation work. A simple environment affords simple agents, but complex environments generate diversity of behavior.
=== Multi-scale modelling ===
One strength of agent-based modelling is its ability to mediate information flow between scales. When additional details about an agent are needed, a researcher can integrate it with models describing the extra details. When one is interested in the emergent behaviours demonstrated by the agent population, they can combine the agent-based model with a continuum model describing population dynamics. For example, in a study about CD4+ T cells (a key cell type in the adaptive immune system), the researchers modelled biological phenomena occurring at different spatial (intracellular, cellular, and systemic), temporal, and organizational scales (signal transduction, gene regulation, metabolism, cellular behaviors, and cytokine transport). In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by constraint-based models, cell population dynamics are described by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multi-scale model, the agent-based model occupies the central place and orchestrates every stream of information flow between scales.
== Applications ==
=== In biology ===
Agent-based modeling has been used extensively in biology, including the analysis of the spread of epidemics, and the threat of biowarfare, biological applications including population dynamics, stochastic gene expression, plant-animal interactions, vegetation ecology, migratory ecology, landscape diversity, sociobiology, the growth and decline of ancient civilizations, evolution of ethnocentric behavior, forced displacement/migration, language choice dynamics, cognitive modeling, and biomedical applications including modeling 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on mammary stem cell subpopulation dynamics, inflammation,
and the human immune system, and the evolution of foraging behaviors. Agent-based models have also been used for developing decision support systems such as for breast cancer. Agent-based models are increasingly being used to model pharmacological systems in early stage and pre-clinical research to aid in drug development and gain insights into biological systems that would not be possible a priori. Military applications have also been evaluated. Moreover, agent-based models have been recently employed to study molecular-level biological systems. Agent-based models have also been written to describe ecological processes at work in ancient systems, such as those in dinosaur environments and more recent ancient systems as well.
=== In epidemiology ===
Agent-based models now complement traditional compartmental models, the usual type of epidemiological models. ABMs have been shown to be superior to compartmental models in regard to the accuracy of predictions. Recently, ABMs such as CovidSim by epidemiologist Neil Ferguson, have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated. The ABMs for such simulations are mostly based on synthetic populations, since the data of the actual population is not always available.
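For illustration only (plain Python; the population size, contact rate, and probabilities are hypothetical values, not calibrated to any disease), a toy agent-based epidemic can be written in a few lines, with each agent in one of the S/I/R states:

```python
import random

random.seed(0)
N, P_TRANSMIT, P_RECOVER, CONTACTS = 1000, 0.05, 0.1, 10
state = ["S"] * N              # susceptible / infected / recovered, one per agent
state[0] = "I"                 # seed a single infection

for day in range(100):
    for i in [k for k in range(N) if state[k] == "I"]:
        # Each infected agent meets a few random others (homogeneous mixing).
        for j in random.sample(range(N), CONTACTS):
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"

print({s: state.count(s) for s in "SIR"})   # final compartment sizes
```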
=== In business, technology and network theory ===
Agent-based models have been used since the mid-1990s to solve a variety of business and technology problems. Examples of applications include marketing, organizational behaviour and cognition, team working, supply chain optimization and logistics, modeling of consumer behavior, including word of mouth, social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.
Recently, agent based modelling and simulation has been applied to various domains such as studying the impact of publication venues by researchers in the computer science domain (journals versus conferences). In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 article in arXiv analyzed an agent based simulation of posts spread in Facebook. In the domain of peer-to-peer, ad hoc and other self-organizing and complex networks, the usefulness of agent based modeling and simulation has been shown. The use of a computer science-based formal specification framework coupled with wireless sensor networks and an agent-based simulation has recently been demonstrated.
Agent based evolutionary search or algorithm is a new research topic for solving complex optimization problems.
=== In team science ===
In the realm of team science, agent-based modeling has been utilized to assess the effects of team members' characteristics and biases on team performance across various settings. By simulating interactions between agents—each representing individual team members with distinct traits and biases—this modeling approach enables researchers to explore how these factors collectively influence the dynamics and outcomes of team performance. Consequently, agent-based modeling provides a nuanced understanding of team science, facilitating a deeper exploration of the subtleties and variabilities inherent in team-based collaborations.
=== In economics and social sciences ===
Prior to, and during the 2008 financial crisis, interest has grown in ABMs as possible tools for economic analysis. ABMs do not assume the economy can achieve equilibrium and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behavior including herding. ABMs take a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems with crashes and booms that develop out of non-linear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist looked at ABMs as alternatives to DSGE models. The journal Nature also encouraged agent-based modeling with an editorial that suggested ABMs can do a better job of representing financial markets and other economic complexities than standard models along with an essay by J. Doyne Farmer and Duncan Foley that argued ABMs could fulfill both the desires of Keynes to represent a complex economy and of Robert Lucas to construct models based on microfoundations. Farmer and Foley pointed to progress that has been made using ABMs to model parts of an economy, but argued for the creation of a very large model that incorporates low level models. By modeling a complex system of analysts based on three distinct behavioral profiles – imitating, anti-imitating, and indifferent – financial markets were simulated to high accuracy. Results showed a correlation between network morphology and the stock market index. However, the ABM approach has been criticized for its lack of robustness between models, where similar models can yield very different results.
ABMs have been deployed in architecture and urban planning to evaluate design and to simulate pedestrian flow in the urban environment and the examination of public policy applications to land-use. There is also a growing field of socio-economic analysis of infrastructure investment impact using ABM's ability to discern systemic impacts upon a socio-economic network. Heterogeneity and dynamics can be easily built in ABM models to address wealth inequality and social mobility.
ABMs have also been proposed as applied educational tools for diplomats in the field of international relations and for domestic and international policymakers to enhance their evaluation of public policy.
=== In water management ===
ABMs have also been applied in water resources planning and management, particularly for exploring, simulating, and predicting the performance of infrastructure design and policy decisions, and in assessing the value of cooperation and information exchange in large water resources systems.
=== Organizational ABM: agent-directed simulation ===
The agent-directed simulation (ADS) metaphor distinguishes between two categories, namely "Systems for Agents" and "Agents for Systems." Systems for Agents (sometimes referred to as agents systems) are systems implementing agents for the use in engineering, human and social dynamics, military applications, and others. Agents for Systems are divided in two subcategories. Agent-supported systems deal with the use of agents as a support facility to enable computer assistance in problem solving or enhancing cognitive capabilities. Agent-based systems focus on the use of agents for the generation of model behavior in a system evaluation (system studies and analyses).
=== Self-driving cars ===
Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment Carcraft to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.
== Implementation ==
Many ABM frameworks are designed for serial von Neumann computer architectures, limiting the speed and scalability of implemented models. Since emergent behavior in large-scale ABMs is dependent on population size, scalability restrictions may hinder model validation. Such limitations have mainly been addressed using distributed computing, with frameworks such as Repast HPC specifically dedicated to this type of implementation. While such approaches map well to cluster and supercomputer architectures, issues related to communication and synchronization, as well as deployment complexity, remain potential obstacles for their widespread adoption.
A recent development is the use of data-parallel algorithms on graphics processing units (GPUs) for ABM simulation. The extreme memory bandwidth combined with the sheer number-crunching power of multi-processor GPUs has enabled simulation of millions of agents at tens of frames per second.
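The data-parallel style can be sketched with NumPy on an ordinary CPU (real GPU ABM frameworks expose different APIs, and the agent rule here is a deliberately trivial assumption): agent state lives in arrays, and a single vectorized expression updates every agent simultaneously.

```python
import numpy as np

# Data-parallel agent update: one array row per agent, one vectorized
# operation per time step -- the same pattern a GPU kernel would apply
# to each agent in parallel.
rng = np.random.default_rng(42)
n_agents = 1_000_000
pos = rng.random((n_agents, 2))             # positions in the unit square
vel = rng.normal(0.0, 0.01, (n_agents, 2))  # per-agent velocities

for _ in range(10):                         # 10 synchronous time steps
    pos = (pos + vel) % 1.0                 # move every agent; wrap at edges
print("mean position:", pos.mean(axis=0))
```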
=== Integration with other modeling forms ===
Since Agent-Based Modeling is more of a modeling framework than a particular piece of software or platform, it has often been used in conjunction with other modeling forms. For instance, agent-based models have also been combined with Geographic Information Systems (GIS). This provides a useful combination where the ABM serves as a process model and the GIS system can provide a model of pattern. Similarly, Social Network Analysis (SNA) tools and agent-based models are sometimes integrated, where the ABM is used to simulate the dynamics on the network while the SNA tool models and analyzes the network of interactions. Tools like GAMA provide a natural way to integrate system dynamics and GIS with ABM.
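A hedged sketch of such a coupling follows: NetworkX stores the interaction network and supplies the SNA-side metrics, while a simple agent rule simulates dynamics on it. The contagion rule and every parameter are illustrative assumptions, not a reference to any particular published model.

```python
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)  # network held by the SNA tool
adopted = {node: False for node in G}
adopted[0] = True                               # seed agent

for _ in range(30):                             # ABM side: dynamics on the network
    for node in list(G):
        if not adopted[node]:
            neighbors = list(G.neighbors(node))
            frac = sum(adopted[v] for v in neighbors) / len(neighbors)
            if random.random() < 0.3 * frac:    # peer-influence rule (assumed)
                adopted[node] = True

# SNA side: structural analysis of the same graph
print(sum(adopted.values()), "adopters;",
      "average clustering =", round(nx.average_clustering(G), 3))
```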
== Verification and validation ==
Verification and validation (V&V) of simulation models is extremely important. Verification involves making sure the implemented model matches the conceptual model, whereas validation ensures that the implemented model has some relationship to the real world. Face validation, sensitivity analysis, calibration, and statistical validation are different aspects of validation. A discrete-event simulation framework approach for the validation of agent-based systems has been proposed, and comprehensive resources on empirical validation of agent-based models are available in the literature.
As an example of a V&V technique, consider VOMAS (virtual overlay multi-agent system), a software-engineering-based approach in which a virtual overlay multi-agent system is developed alongside the agent-based model. Niazi et al. also provide an example of using VOMAS for verification and validation of a forest fire simulation model. Another software engineering method, test-driven development, has been adapted for agent-based model validation. This approach has the added advantage of allowing automatic validation using unit-testing tools.
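In that test-driven spirit, invariants of the conceptual model can be encoded as unit tests that the implementation must pass. The sketch below assumes a toy adoption model and two invariants (conservation of the population and monotone adoption); both the model and the chosen invariants are illustrative.

```python
import unittest

def run_adoption_model(n_agents=100.0, p=0.03, q=0.4, steps=15):
    """Toy diffusion model used only to illustrate test-driven validation."""
    potential, adopters = n_agents, 0.0
    for _ in range(steps):
        prob = potential / (potential + adopters)
        new = p * potential + q * adopters * prob
        potential, adopters = potential - new, adopters + new
    return potential, adopters

class ModelInvariantTests(unittest.TestCase):
    def test_population_is_conserved(self):
        potential, adopters = run_adoption_model()
        self.assertAlmostEqual(potential + adopters, 100.0)

    def test_adoption_is_monotone(self):
        _, few = run_adoption_model(steps=5)
        _, many = run_adoption_model(steps=10)
        self.assertGreaterEqual(many, few)

if __name__ == "__main__":
    unittest.main()
```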
== See also ==
== References ==
=== General ===
== External links ==
=== Articles/general information ===
Agent-based models of social networks, Java applets.
On-Line Guide for Newcomers to Agent-Based Modeling in the Social Sciences
Introduction to Agent-based Modeling and Simulation. Argonne National Laboratory, November 29, 2006.
Agent-based models in Ecology – Using computer models as theoretical tools to analyze complex ecological systems
Network for Computational Modeling in the Social and Ecological Sciences' Agent Based Modeling FAQ
Multiagent Information Systems – Article on the convergence of SOA, BPM and Multi-Agent Technology in the domain of the Enterprise Information Systems. Jose Manuel Gomez Alvarez, Artificial Intelligence, Technical University of Madrid – 2006
Artificial Life Framework
Article providing methodology for moving real world human behaviors into a simulation model where agent behaviors are represented
Agent-based Modeling Resources, an information hub for modelers, methods, and philosophy for agent-based modeling
An Agent-Based Model of the Flash Crash of May 6, 2010, with Policy Implications, Tommi A. Vuorenmaa (Valo Research and Trading), Liang Wang (University of Helsinki – Department of Computer Science), October 2013
=== Simulation models ===
Multi-agent Meeting Scheduling System Model by Qasim Siddique
Multi-firm market simulation by Valentino Piana
List of COVID-19 simulation models | Wikipedia/Agent-based_model |
Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family of curves, such as the integral curves of a family of vector fields, and the solutions of a family of differential equations. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden 'qualitative' or topological change in its behavior. Bifurcations occur in both continuous systems (described by ordinary, delay or partial differential equations) and discrete systems (described by maps).
The name "bifurcation" was first introduced by Henri Poincaré in 1885 in the first paper in mathematics showing such a behavior.
== Bifurcation types ==
It is useful to divide bifurcations into two principal classes:
Local bifurcations, which can be analysed entirely through changes in the local stability properties of equilibria, periodic orbits or other invariant sets as parameters cross through critical thresholds; and
Global bifurcations, which often occur when larger invariant sets of the system 'collide' with each other, or with equilibria of the system. They cannot be detected purely by a stability analysis of the equilibria (fixed points).
=== Local bifurcations ===
A local bifurcation occurs when a parameter change causes the stability of an equilibrium (or fixed point) to change. In continuous systems, this corresponds to the real part of an eigenvalue of an equilibrium passing through zero. In discrete systems (described by maps), this corresponds to a fixed point having a Floquet multiplier with modulus equal to one. In both cases, the equilibrium is non-hyperbolic at the bifurcation point.
The topological changes in the phase portrait of the system can be confined to arbitrarily small neighbourhoods of the bifurcating fixed points by moving the bifurcation parameter close to the bifurcation point (hence 'local').
More technically, consider the continuous dynamical system described by the ordinary differential equation (ODE)
{\displaystyle {\dot {x}}=f(x,\lambda )\quad f\colon \mathbb {R} ^{n}\times \mathbb {R} \to \mathbb {R} ^{n}.}
A local bifurcation occurs at {\displaystyle (x_{0},\lambda _{0})} if the Jacobian matrix {\displaystyle {\textrm {d}}f_{x_{0},\lambda _{0}}}
has an eigenvalue with zero real part. If the eigenvalue is equal to zero, the bifurcation is a steady state bifurcation, but if the eigenvalue is non-zero but purely imaginary, this is a Hopf bifurcation.
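Numerically, this condition can be checked by evaluating the Jacobian at an equilibrium as the parameter is swept. The sketch below uses the pitchfork normal form ẋ = λx − x³ as an assumed example; at x = 0 the 1×1 Jacobian is [λ], so an eigenvalue crosses zero exactly at λ = 0.

```python
import numpy as np

def jacobian_at(x, lam, eps=1e-6):
    # central finite difference of f(x) = lam*x - x**3 (example system)
    f = lambda y: lam * y - y**3
    return np.array([[(f(x + eps) - f(x - eps)) / (2 * eps)]])

for lam in (-0.5, 0.0, 0.5):
    eigs = np.linalg.eigvals(jacobian_at(0.0, lam))
    print(f"lambda = {lam:+.1f}: eigenvalue real part {eigs.real[0]:+.4f}")
# The real part passes through zero at lambda = 0: a steady-state bifurcation.
```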
For discrete dynamical systems, consider the system
{\displaystyle x_{n+1}=f(x_{n},\lambda )\,.}
Then a local bifurcation occurs at
(
x
0
,
λ
0
)
{\displaystyle (x_{0},\lambda _{0})}
if the matrix
d
f
x
0
,
λ
0
{\displaystyle {\textrm {d}}f_{x_{0},\lambda _{0}}}
has an eigenvalue with modulus equal to one. If the eigenvalue is equal to one, the bifurcation is either a saddle-node (often called fold bifurcation in maps), transcritical or pitchfork bifurcation. If the eigenvalue is equal to −1, it is a period-doubling (or flip) bifurcation, and otherwise, it is a Hopf bifurcation.
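For a concrete map, consider the logistic map x_{n+1} = rx(1 − x), used here as an assumed illustration. Its nontrivial fixed point is x* = 1 − 1/r with multiplier f′(x*) = 2 − r, which reaches −1 at r = 3, the first period-doubling (flip) bifurcation; the sketch below simply evaluates that multiplier.

```python
def multiplier(r):
    x_star = 1.0 - 1.0 / r           # nontrivial fixed point of r*x*(1-x)
    return r * (1.0 - 2.0 * x_star)  # derivative of the map at x_star

for r in (2.5, 3.0, 3.2):
    m = multiplier(r)
    status = "stable" if abs(m) < 1 else "not stable"
    print(f"r = {r}: multiplier = {m:+.2f} ({status})")
# At r = 3 the multiplier is exactly -1: the flip bifurcation point.
```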
Examples of local bifurcations include:
Saddle-node (fold) bifurcation
Transcritical bifurcation
Pitchfork bifurcation
Period-doubling (flip) bifurcation
Hopf bifurcation
Neimark–Sacker (secondary Hopf) bifurcation
=== Global bifurcations ===
Global bifurcations occur when 'larger' invariant sets, such as periodic orbits, collide with equilibria. This causes changes in the topology of the trajectories in the phase space which cannot be confined to a small neighbourhood, as is the case with local bifurcations. In fact, the changes in topology extend out to an arbitrarily large distance (hence 'global').
Examples of global bifurcations include:
Homoclinic bifurcation in which a limit cycle collides with a saddle point. Homoclinic bifurcations can occur supercritically or subcritically. The variant above is the "small" or "type I" homoclinic bifurcation. In 2D there is also the "big" or "type II" homoclinic bifurcation in which the homoclinic orbit "traps" the other ends of the unstable and stable manifolds of the saddle. In three or more dimensions, higher codimension bifurcations can occur, producing complicated, possibly chaotic dynamics.
Heteroclinic bifurcation in which a limit cycle collides with two or more saddle points; they involve a heteroclinic cycle. Heteroclinic bifurcations are of two types: resonance bifurcations and transverse bifurcations. Both types of bifurcation will result in the change of stability of the heteroclinic cycle. At a resonance bifurcation, the stability of the cycle changes when an algebraic condition on the eigenvalues of the equilibria in the cycle is satisfied. This is usually accompanied by the birth or death of a periodic orbit. A transverse bifurcation of a heteroclinic cycle is caused when the real part of a transverse eigenvalue of one of the equilibria in the cycle passes through zero. This will also cause a change in stability of the heteroclinic cycle.
Infinite-period bifurcation in which a stable node and saddle point simultaneously occur on a limit cycle. As a parameter approaches a certain critical value, the speed of the oscillation slows down and the period approaches infinity. The infinite-period bifurcation occurs at this critical value. Beyond the critical value, the two fixed points emerge continuously from each other on the limit cycle to disrupt the oscillation and form two saddle points.
Blue sky catastrophe in which a limit cycle collides with a nonhyperbolic cycle.
Global bifurcations can also involve more complicated sets such as chaotic attractors (e.g. crises).
(Image gallery: examples of bifurcations.)
== Codimension of a bifurcation ==
The codimension of a bifurcation is the number of parameters which must be varied for the bifurcation to occur. This corresponds to the codimension of the parameter set for which the bifurcation occurs within the full space of parameters. Saddle-node bifurcations and Hopf bifurcations are the only generic local bifurcations which are really codimension-one (the others all having higher codimension). However, transcritical and pitchfork bifurcations are also often thought of as codimension-one, because the normal forms can be written with only one parameter.
An example of a well-studied codimension-two bifurcation is the Bogdanov–Takens bifurcation.
== Applications in semiclassical and quantum physics ==
Bifurcation theory has been applied to connect quantum systems to the dynamics of their classical analogues in atomic systems, molecular systems, and resonant tunneling diodes. Bifurcation theory has also been applied to the study of laser dynamics and a number of theoretical examples which are difficult to access experimentally such as the kicked top and coupled quantum wells. The dominant reason for the link between quantum systems and bifurcations in the classical equations of motion is that at bifurcations, the signature of classical orbits becomes large, as Martin Gutzwiller points out in his classic work on quantum chaos. Many kinds of bifurcations have been studied with regard to links between classical and quantum dynamics including saddle node bifurcations, Hopf bifurcations, umbilic bifurcations, period doubling bifurcations, reconnection bifurcations, tangent bifurcations, and cusp bifurcations.
== See also ==
Logistic map, for an example-driven introduction to some simple bifurcations
Bifurcation diagram
Bifurcation memory
Catastrophe theory
Feigenbaum constants
Geomagnetic reversal
Phase portrait
Tennis racket theorem
== Notes ==
== References ==
Afrajmovich, V. S.; Arnold, V. I.; et al. (1994). Bifurcation Theory and Catastrophe Theory. Springer. ISBN 978-3-540-65379-0.
Guardia, M.; Martinez-Seara, M.; Teixeira, M. A. (2011). "Generic bifurcations of low codimension of planar Filippov systems". Journal of Differential Equations, February 2011, vol. 250, no. 4, pp. 1967–2023. doi:10.1016/j.jde.2010.11.016
Wiggins, Stephen (1988). Global bifurcations and Chaos: Analytical Methods. New York: Springer. ISBN 978-0-387-96775-2.
== External links ==
Nonlinear dynamics
Bifurcations and Two Dimensional Flows by Elmer G. Wiens
Introduction to Bifurcation theory by John David Crawford | Wikipedia/Bifurcation_theory |
A small-world network is a graph characterized by a high clustering coefficient and low distances. In the example of a social network, high clustering implies a high probability that two friends of one person are friends themselves. The low distances, on the other hand, mean that there is a short chain of social connections between any two people (this effect is known as six degrees of separation). Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network, that is:
{\displaystyle L\propto \log N}
while the global clustering coefficient is not small.
In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, wikis such as Wikipedia, gene networks, and even the underlying architecture of the Internet. It is the inspiration for many network-on-chip architectures in contemporary computer hardware.
A certain category of small-world networks was identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998. They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance (also known as average shortest path length). Purely random graphs, built according to the Erdős–Rényi (ER) model, exhibit a small average shortest path length (varying typically as the logarithm of the number of nodes) along with a small clustering coefficient. Watts and Strogatz found that, in fact, many real-world networks have a small average shortest path length but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, now named the Watts–Strogatz model, with (i) a small average shortest path length and (ii) a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" (such as a lattice) and a small world was first described by Barthelemy and Amaral in 1999. This work was followed by many studies, including exact results (Barrat and Weigt, 1999; Dorogovtsev and Mendes; Barmpoutis and Murray, 2010).
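The observation is easy to reproduce with standard tooling; the following sketch (sizes and rewiring probability are arbitrary choices) compares a ring lattice, a Watts–Strogatz graph, and an edge-matched random graph on clustering C and average path length L.

```python
import networkx as nx

def avg_path(G):
    # restrict to the largest component so L is always defined
    H = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(H)

n, k = 1000, 10
graphs = {
    "ring lattice": nx.watts_strogatz_graph(n, k, 0.0, seed=1),
    "small world ": nx.watts_strogatz_graph(n, k, 0.1, seed=1),
    "random (ER) ": nx.gnm_random_graph(n, n * k // 2, seed=1),
}
for name, G in graphs.items():
    print(f"{name}  C = {nx.average_clustering(G):.3f}  L = {avg_path(G):.2f}")
# Expected pattern: the small world keeps C close to the lattice's
# while L drops to near the random graph's.
```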
== Properties of small-world networks ==
Small-world networks tend to contain cliques and near-cliques, meaning sub-networks which have connections between almost any two nodes within them. This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path. This follows from the defining property that the mean shortest path length be small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of hubs – nodes in the network with a high number of connections (known as high-degree nodes). These hubs serve as the common connections mediating the short path lengths between other nodes. By analogy, the small-world network of airline flights has a small mean path length (i.e. between any two cities you are likely to have to take three or fewer flights) because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them (the degree distribution of the network). Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above.
Network small-worldness has been quantified by a small-world coefficient σ, calculated by comparing the clustering and path length of a given network to an equivalent Erdős–Rényi random graph with the same average degree:
{\displaystyle \sigma ={\frac {\frac {C}{C_{r}}}{\frac {L}{L_{r}}}}}
If σ > 1 (that is, C ≫ Cr and L ≈ Lr), the network is small-world. However, this metric is known to perform poorly because it is heavily influenced by the network's size.
Another method for quantifying network small-worldness utilizes the original definition of the small-world network, comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure ω is defined as
{\displaystyle \omega ={\frac {L_{r}}{L}}-{\frac {C}{C_{\ell }}}}
where the characteristic path length L and clustering coefficient C are calculated from the network under test, Cℓ is the clustering coefficient of an equivalent lattice network, and Lr is the characteristic path length of an equivalent random network.
Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks. The Small World Index (SWI) is defined as
{\displaystyle {\text{SWI}}={\frac {L-L_{\ell }}{L_{r}-L_{\ell }}}\times {\frac {C-C_{r}}{C_{\ell }-C_{r}}}}
Both ω′ and SWI range between 0 and 1, and both have been shown to capture aspects of small-worldness. However, they adopt slightly different conceptions of ideal small-worldness. For a given set of constraints (e.g. size, density, degree distribution), there exists a network for which ω′ = 1, so ω′ aims to capture the extent to which a network with given constraints is as small-worldly as possible. In contrast, there may not exist a network for which SWI = 1; SWI instead aims to capture the extent to which a network with given constraints approaches the theoretical small-world ideal of a network where C ≈ Cℓ and L ≈ Lr.
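Recent NetworkX releases ship reference implementations of the two coefficients discussed above (nx.sigma and nx.omega); if the installed version provides them, a quick check looks like the sketch below. The niter/nrand values are kept deliberately tiny because the underlying rewiring procedure is expensive, so the estimates will be noisy.

```python
import networkx as nx

G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=7)
# sigma > 1 and omega near 0 both indicate small-worldness
print("sigma =", round(nx.sigma(G, niter=2, nrand=3, seed=7), 2))
print("omega =", round(nx.omega(G, niter=2, nrand=3, seed=7), 2))
```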
== Examples of small-world networks ==
Small-world properties are found in many real-world phenomena, including websites with navigation menus, food webs, electric power grids, metabolite processing networks, networks of brain neurons, voter networks, telephone call graphs, and airport networks. Cultural networks and word co-occurrence networks have also been shown to be small-world networks.
Networks of connected proteins have small-world properties, such as power-law-obeying degree distributions. Similarly, transcriptional networks, in which the nodes are genes that are linked if one gene has an up- or down-regulatory genetic influence on the other, have small-world network properties.
== Examples of non-small-world networks ==
For example, the famous theory of "six degrees of separation" between people tacitly presumes that the domain of discourse is the set of people alive at any one time. The number of degrees of separation between Albert Einstein and Alexander the Great is almost certainly greater than 30, and this network does not have small-world properties. A similarly constrained network would be the "went to school with" network: if two people went to the same college ten years apart from one another, it is unlikely that they have acquaintances in common amongst the student body.
Similarly, the number of relay stations through which a message must pass was not always small. In the days when the post was carried by hand or on horseback, the number of times a letter changed hands between its source and destination would have been much greater than it is today. The number of times a message changed hands in the days of the visual telegraph (circa 1800–1850) was determined by the requirement that two stations be connected by line-of-sight.
Tacit assumptions, if not examined, can cause a bias in the literature on graphs in favor of finding small-world networks (an example of the file drawer effect resulting from the publication bias).
== Network robustness ==
It is hypothesized by some researchers, such as Albert-László Barabási, that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture. One possibility is that small-world networks are more robust to perturbations than other network architectures. If this were the case, it would provide an advantage to biological systems that are subject to damage by mutation or viral infection.
In a small world network with a degree distribution following a power-law, deletion of a random node rarely causes a dramatic increase in mean-shortest path length (or a dramatic decrease in the clustering coefficient). This follows from the fact that most shortest paths between nodes flow through hubs, and if a peripheral node is deleted it is unlikely to interfere with passage between other peripheral nodes. As the fraction of peripheral nodes in a small world network is much higher than the fraction of hubs, the probability of deleting an important node is very low. For example, if the small airport in Sun Valley, Idaho was shut down, it would not increase the average number of flights that other passengers traveling in the United States would have to take to arrive at their respective destinations. However, if random deletion of a node hits a hub by chance, the average path length can increase dramatically. This can be observed annually when northern hub airports, such as Chicago's O'Hare airport, are shut down because of snow; many people have to take additional flights.
By contrast, in a random network, in which all nodes have roughly the same number of connections, deleting a random node is likely to increase the mean-shortest path length slightly but significantly for almost any node deleted. In this sense, random networks are vulnerable to random perturbations, whereas small-world networks are robust. However, small-world networks are vulnerable to targeted attack of hubs, whereas random networks cannot be targeted for catastrophic failure.
== Construction of small-world networks ==
The main mechanism to construct small-world networks is the Watts–Strogatz mechanism.
Small-world networks can also be introduced with time-delay, which will not only produce fractals but also chaos under the right conditions, or transition to chaos in dynamics networks.
Soon after the publication of the Watts–Strogatz mechanism, approaches were developed by Mashaghi and co-workers to generate network models that exhibit high degree correlations while preserving the desired degree distribution and small-world properties. These approaches are based on an edge-dual transformation and can be used to generate analytically solvable small-world network models for research into these systems.
Degree–diameter graphs are constructed such that the number of neighbors each vertex in the network has is bounded, while the distance from any given vertex in the network to any other vertex (the diameter of the network) is minimized. Constructing such small-world networks is done as part of the effort to find graphs of order close to the Moore bound.
Another way to construct a small world network from scratch is given in Barmpoutis et al., where a network with very small average distance and very large average clustering is constructed. A fast algorithm of constant complexity is given, along with measurements of the robustness of the resulting graphs. Depending on the application of each network, one can start with one such "ultra small-world" network, and then rewire some edges, or use several small such networks as subgraphs to a larger graph.
Small-world properties can arise naturally in social networks and other real-world systems via the process of dual-phase evolution. This is particularly common where time or spatial constraints limit the addition of connections between vertices. The mechanism generally involves periodic shifts between phases, with connections being added during a "global" phase and being reinforced or removed during a "local" phase.
Small-world networks can change from the scale-free class to the broad-scale class, whose connectivity distribution has a sharp cutoff following a power-law regime, due to constraints limiting the addition of new links. For strong enough constraints, scale-free networks can even become single-scale networks, whose connectivity distribution is characterized as fast decaying. It was also shown analytically that scale-free networks are ultra-small, meaning that the distance scales according to {\displaystyle L\propto \log \log N}.
== Applications ==
=== Applications to sociology ===
The advantages of small-world networking for social movement groups are resistance to change, owing to the filtering apparatus of highly connected nodes, and greater effectiveness in relaying information while keeping the number of links required to connect the network to a minimum.
The small world network model is directly applicable to affinity group theory represented in sociological arguments by William Finnegan. Affinity groups are social movement groups that are small and semi-independent pledged to a larger goal or function. Though largely unaffiliated at the node level, a few members of high connectivity function as connectivity nodes, linking the different groups through networking. This small world model has proven an extremely effective protest organization tactic against police action. Clay Shirky argues that the larger the social network created through small world networking, the more valuable the nodes of high connectivity within the network. The same can be said for the affinity group model, where the few people within each group connected to outside groups allowed for a large amount of mobilization and adaptation. A practical example of this is small world networking through affinity groups that William Finnegan outlines in reference to the 1999 Seattle WTO protests.
=== Applications to earth sciences ===
Many networks studied in geology and geophysics have been shown to have characteristics of small-world networks. Networks defined in fracture systems and porous substances have demonstrated these characteristics. The seismic network in the Southern California region may be a small-world network. The examples above occur on very different spatial scales, demonstrating the scale invariance of the phenomenon in the earth sciences.
=== Applications to computing ===
Small-world networks have been used to estimate the usability of information stored in large databases. The measure is termed the Small World Data Transformation Measure. The more closely the database links align to a small-world network, the more likely a user is to be able to extract information in the future. This usability typically comes at the cost of the amount of information that can be stored in the same repository.
The Freenet peer-to-peer network has been shown to form a small-world network in simulation, allowing information to be stored and retrieved in a manner that scales efficiently as the network grows.
Nearest neighbor search solutions like HNSW use small-world networks to efficiently find information in large item corpora.
=== Small-world neural networks in the brain ===
Both anatomical connections in the brain and the synchronization networks of cortical neurons exhibit small-world topology.
Structural and functional connectivity in the brain has also been found to reflect the small-world topology of short path length and high clustering. The network structure has been found in the mammalian cortex across species as well as in large-scale imaging studies in humans. Advances in connectomics and network neuroscience have found the small-worldness of neural networks to be associated with efficient communication.
In neural networks, short path length between nodes and high clustering at network hubs support efficient communication between brain regions at the lowest energetic cost. The brain is constantly processing and adapting to new information, and the small-world network model supports the intense communication demands of neural networks. High clustering of nodes forms local networks which are often functionally related. Short path length between these hubs supports efficient global communication. This balance enables the efficiency of the global network while simultaneously equipping the brain to handle disruptions and maintain homeostasis, due to local subsystems being isolated from the global network. Loss of small-world network structure has been found to indicate changes in cognition and increased risk of psychological disorders.
In addition to characterizing whole-brain functional and structural connectivity, specific neural systems, such as the visual system, exhibit small-world network properties.
A small-world network of neurons can exhibit short-term memory. A computer model developed by Sara Solla had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a "memory"), and stasis (holding it). Small world neuronal networks have also been used as models to understand seizures.
== See also ==
Barabási–Albert model – Scale-free network generation algorithm
Climate as complex networks – Conceptual model to generate insight into climate science
Dual-phase evolution – Process that drives self-organization within complex adaptive systems
Dunbar's number – Suggested cognitive limit important in sociology and anthropology
Erdős number – Closeness of someone's association with mathematician Paul Erdős
Erdős–Rényi (ER) model – Two closely related models for generating random graphs
Local World Evolving Network Models
Percolation theory – Mathematical theory on behavior of connected clusters in a random graph
Network science – Academic field on the mathematical theory of networks
Scale-free network – Network whose degree distribution follows a power law
Six degrees of Kevin Bacon – Parlor game on degrees of separation
Small-world experiment – Experiments examining the average path length for social networks
Social network – Social structure made up of a set of social actors
Watts–Strogatz model – Method of generating random small-world graphs
Network on a chip – Electronic communication subsystem on an integrated circuit – systems on chip modeled on small-world networks
Zachary's karate club
== References ==
== Further reading ==
=== Books ===
=== Journal articles ===
== External links ==
Dynamic Proximity Networks by Seth J. Chandler, The Wolfram Demonstrations Project.
Small-World Networks entry on Scholarpedia (by Mason A. Porter) | Wikipedia/Small-world_network |
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.
== Overview ==
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design.
Convenient graphical user interface (GUI) system dynamics software developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best known SD model is probably the 1972 The Limits to Growth. This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.
System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
== History ==
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.
During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language, called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations), in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field, titled Industrial Dynamics, in 1961.
From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life teaching what he called "Economic Dynamics", which could be considered a precursor of modern Non-equilibrium economics.
The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.
== Topics in systems dynamics ==
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays.
As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.
=== Causal loop diagrams ===
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram. A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.
The causal loop diagram of the new product introduction may look as follows:
There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow.
The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters.
Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.
=== Stock and flow diagrams ===
Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software.
A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock.
In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.
=== Equations ===
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this.
The steps involved in a simulation are:
Define the problem boundary.
Identify the most important stocks and flows that change these stock levels.
Identify sources of information that impact the flows.
Identify the main feedback loops.
Draw a causal loop diagram that links the stocks, flows and sources of information.
Write the equations that determine the flows.
Estimate the parameters and initial conditions. These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information.
Simulate the model and analyse results.
In this example, the equations that change the two stocks via the flow are:
{\displaystyle {\mbox{Potential adopters}}=-\int _{0}^{t}{\mbox{New adopters}}\,dt}
{\displaystyle {\mbox{Adopters}}=\int _{0}^{t}{\mbox{New adopters}}\,dt}
=== Equations in discrete time ===
List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15:
{\displaystyle 1)\ {\mbox{Probability that contact has not yet adopted}}={\mbox{Potential adopters}}/({\mbox{Potential adopters}}+{\mbox{Adopters}})}
{\displaystyle 2)\ {\mbox{Imitators}}=q\cdot {\mbox{Adopters}}\cdot {\mbox{Probability that contact has not yet adopted}}}
{\displaystyle 3)\ {\mbox{Innovators}}=p\cdot {\mbox{Potential adopters}}}
{\displaystyle 4)\ {\mbox{New adopters}}={\mbox{Innovators}}+{\mbox{Imitators}}}
{\displaystyle 4.1)\ {\mbox{Potential adopters}}\ -={\mbox{New adopters}}}
{\displaystyle 4.2)\ {\mbox{Adopters}}\ +={\mbox{New adopters}}}
{\displaystyle p=0.03}
{\displaystyle q=0.4}
==== Dynamic simulation results ====
The dynamic simulation results show that the behaviour of the system would be to have growth in adopters that follows a classic s-curve shape.
The increase in adopters is very slow initially, then exponential growth for a period, followed ultimately by saturation.
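A direct transcription of the discrete-time equations above into Python reproduces this s-curve; the initial stock of 1,000 potential adopters is an assumption, since the text does not fix the population size.

```python
p, q = 0.03, 0.4
potential, adopters = 1000.0, 0.0
history = []
for year in range(1, 16):
    prob_not_adopted = potential / (potential + adopters)   # eq. 1
    imitators = q * adopters * prob_not_adopted             # eq. 2
    innovators = p * potential                              # eq. 3
    new_adopters = innovators + imitators                   # eq. 4
    potential -= new_adopters                               # eq. 4.1
    adopters += new_adopters                                # eq. 4.2
    history.append(round(adopters))
print(history)  # slow start, rapid middle, saturation near 1000
```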
=== Equations in continuous time ===
To get intermediate values and better accuracy, the model can run in continuous time: we multiply the number of time units and proportionally divide the values that change stock levels. In this example, the 15 years are multiplied by 4 to obtain 60 quarters, and the value of the flow is divided by 4.
Dividing the value in this way amounts to the simple Euler method; other methods, such as Runge–Kutta methods, could be employed instead.
List of the equations in continuous time, for quarters 1 to 60:
They are the same equations as in the section Equations in discrete time above, except that equations 4.1 and 4.2 are replaced by the following:
{\displaystyle 10)\ {\mbox{Valve New adopters}}={\mbox{New adopters}}\cdot TimeStep}
{\displaystyle 10.1)\ {\mbox{Potential adopters}}\ -={\mbox{Valve New adopters}}}
{\displaystyle 10.2)\ {\mbox{Adopters}}\ +={\mbox{Valve New adopters}}}
{\displaystyle TimeStep=1/4}
In the stock and flow diagram below, the intermediate flow 'Valve New adopters' calculates the equation {\displaystyle {\mbox{Valve New adopters}}={\mbox{New adopters}}\cdot TimeStep}.
== Application ==
System dynamics has found application in a wide range of areas, for example population, agricultural, epidemiological, ecological and economic systems, which usually interact strongly with each other.
System dynamics has various "back of the envelope" management applications. It is a potent tool to:
Teach systems thinking reflexes to persons being coached
Analyze and compare assumptions and mental models about the way things work
Gain qualitative insight into the workings of a system or the consequences of a decision
Recognize archetypes of dysfunctional systems in everyday practice
Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies.
System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.
A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the 2008 financial crisis.
=== Example: Growth and decline of companies ===
The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for Counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third, thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.
=== Example: Piston motion ===
Objective: study of a crank-connecting rod system. We want to model a crank-connecting rod system through a system dynamics model. Two different full descriptions of the physical system, with the related systems of equations, are available (one in English and one in French); they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length.
System dynamics modeling: the system is now modeled according to a stock-and-flow system dynamics logic. The figure below shows the stock and flow diagram.
Simulation: the behavior of the crank-connecting rod dynamic system can then be simulated. The next figure is a 3D simulation created using procedural animation. Variables of the model animate all parts of this animation: crank, radius, angular frequency, rod length, and piston position.
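The piston position itself follows from standard slider-crank kinematics, sketched below as a stand-in for the stock-and-flow model in the text; the crank radius, rod length, and angular frequency are illustrative values.

```python
import math

def piston_position(t, r=1.0, l=3.0, omega=2.0):
    # x = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2), with theta = omega*t
    theta = omega * t
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

for t in (0.0, 0.5, 1.0, 1.5):
    print(f"t = {t:.1f} s  piston position = {piston_position(t):.3f}")
```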
== See also ==
== References ==
== Further reading ==
Kypuros, Javier (2013). System dynamics and control with bond graph modeling. Boca Raton: Taylor & Francis. ISBN 978-1466560758.
Forrester, Jay W. (1961). Industrial Dynamics. M.I.T. Press.
Forrester, Jay W. (1969). Urban Dynamics. Pegasus Communications. ISBN 978-1-883823-39-9.
Meadows, Donella H. (1972). Limits to Growth. New York: University books. ISBN 978-0-87663-165-2.
Morecroft, John (2007). Strategic Modelling and Business Dynamics: A Feedback Systems Approach. John Wiley & Sons. ISBN 978-0-470-01286-4.
Roberts, Edward B. (1978). Managerial Applications of System Dynamics. Cambridge: MIT Press. ISBN 978-0-262-18088-7.
Randers, Jorgen (1980). Elements of the System Dynamics Method. Cambridge: MIT Press. ISBN 978-0-915299-39-3.
Senge, Peter (1990). The Fifth Discipline. Currency. ISBN 978-0-385-26095-4.
Sterman, John D. (2000). Business Dynamics: Systems thinking and modeling for a complex world. McGraw Hill. ISBN 978-0-07-231135-8.
== External links ==
System Dynamics Society
Introducing System Dynamics – study prepared for the U.S. Department of Energy
Desert Island Dynamics "An Annotated Survey of the Essential System Dynamics Literature"
True World: Temporal Reasoning Universal Elaboration – system dynamics software used for diagrams in this article (free) | Wikipedia/System_dynamics
Michael Christopher Jackson OBE (born 1951) is a British systems scientist, consultant and Emeritus Professor of Management Systems and former Dean of Hull University Business School, known for his work in the field of systems thinking and management.
== Biography ==
Jackson studied Politics, Philosophy and Economics at Oxford University from 1970 to 1973, where he received his PPE degree. After spending four years in the civil service, he received his MA in Systems in Management from Lancaster University in 1978.
Jackson spent his academic life teaching at Lancaster University, the University of Warwick, the University of Lincoln and the University of Hull, where he was Professor of Management Systems from 1989 to May 2012. He was Visiting Professor at the Indian Institute of Technology (New Delhi) and Honorary Professor at the Universidad Ricardo Palma, Lima, Peru. In 1997 he was an Erskine Scholar at the University of Canterbury, New Zealand.
Jackson is a past President of the UK Systems Society, of the International Federation for Systems Research from 1996 to 2000, and of the International Society for the Systems Sciences in 2001. He has also served on the Council of the Operational Research Society. He is a Fellow of the British Computer Society, the Chartered Management Institute, the Cybernetics Society and the Operational Research Society.
Jackson is Editor-in-Chief of the journal Systems Research and Behavioral Science, published by John Wiley, and is on the editorial boards of five other journals. He has delivered plenary addresses at numerous international conferences and has undertaken many consultancy engagements with outside organisations, both commercial and non-profit.
In 2009, his work was honored by the Hellenic Society for Systemic Studies with its most prestigious medal. He was appointed Officer of the Order of the British Empire (OBE) in the 2011 New Year Honours for services to higher education and business. In 2017 he received the Beale Medal of the UK Operational Research Society for 'an outstanding lifetime achievement in the philosophy, methodology or practice of OR'.
== Work ==
Jackson's teaching and research interests are Systems thinking, Organizational cybernetics, Creative problem solving, Critical systems thinking, Management science and Systems science.
== Publications ==
Jackson has written 5 books, edited 6 others, and has published over 70 articles in refereed journals. Books:
1991, Systems Methodology for the Management Sciences.
1991, Creative Problem Solving: Total Systems Intervention, with Robert L. Flood, Wiley. 268 p.
2000, Systems Approaches to Management, London: Springer. 465 p.
2003, Systems Thinking: Creative Holism for Managers, Wiley.
2019, Critical Systems Thinking and the Management of Complexity, Wiley.
Articles, a selection:
1993, with Robert L. Flood, "Critical Systems Thinking", in: Organization Studies Vol 14, p. 613.
== Notes and references ==
== External links ==
Emeritus Professor Mike C Jackson at the Hull University Business School.
Michael C. Jackson, IFSR Newsletter 34/35 - December 1994. | Wikipedia/Mike_Jackson_(systems_scientist) |
Dynamic network analysis (DNA) is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory. Dynamic networks are a function of time (modeled as a subset of the real numbers) to a set of graphs; for each time point there is a graph. This is akin to the definition of dynamical systems, in which the function is from time to an ambient space, where instead of ambient space time is translated to relationships between pairs of vertices.
== Overview ==
There are two aspects of this field. The first is the statistical analysis of DNA data. The second is the utilization of simulation to address issues of network dynamics. DNA networks vary from traditional social networks in that they are larger, dynamic, multi-mode, multi-plex networks, and may contain varying levels of uncertainty. The main difference between DNA and SNA is that DNA takes into account the interactions of social features that condition the structure and behavior of networks. DNA is tied to temporal analysis, but temporal analysis is not necessarily tied to DNA, as changes in networks sometimes result from external factors which are independent of the social features found in networks. One of the earliest and most notable cases of the use of DNA is Sampson's monastery study, where he took snapshots of the same network at different intervals and observed and analyzed the evolution of the network.
DNA statistical tools are generally optimized for large-scale networks and admit the analysis of multiple networks simultaneously, in which there are multiple types of nodes (multi-node) and multiple types of links (multi-plex). Multi-node, multi-plex networks are generally referred to as meta-networks or high-dimensional networks. In contrast, SNA statistical tools focus on single-mode or at most two-mode data and facilitate the analysis of only one type of link at a time.
DNA statistical tools tend to provide more measures to the user, because they have measures that use data drawn from multiple networks simultaneously. Latent space models (Sarkar and Moore, 2005) and agent-based simulation are often used to examine dynamic social networks (Carley et al., 2009). From a computer simulation perspective, nodes in DNA are like atoms in quantum theory: nodes can be, though need not be, treated as probabilistic. Whereas nodes in a traditional SNA model are static, nodes in a DNA model have the ability to learn. Properties change over time; nodes can adapt: a company's employees can learn new skills and increase their value to the network; or, capture one terrorist and three more are forced to improvise. Change propagates from one node to the next and so on. DNA adds the element of a network's evolution and considers the circumstances under which change is likely to occur.
There are three main features to dynamic network analysis that distinguish it from standard social network analysis. First, rather than just using social networks, DNA looks at meta-networks. Second, agent-based modeling and other forms of simulations are often used to explore how networks evolve and adapt as well as the impact of interventions on those networks. Third, the links in the network are not binary; in fact, in many cases they represent the probability that there is a link.
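A minimal way to represent these ideas in code is a mapping from time points to graphs whose edge weights are read as tie probabilities; the sketch below (all data invented) computes a per-snapshot density and the expected number of ties.

```python
import networkx as nx

snapshots = {  # time point -> weighted edge list, weight = P(tie exists)
    0: [("ann", "bob", 0.9), ("bob", "cat", 0.4)],
    1: [("ann", "bob", 0.7), ("ann", "cat", 0.5), ("bob", "cat", 0.8)],
}
for t, edges in snapshots.items():
    G = nx.Graph()
    G.add_weighted_edges_from(edges, weight="p")
    expected_ties = sum(p for _, _, p in edges)
    print(f"t={t}: density={nx.density(G):.2f}, expected ties={expected_ties:.1f}")
```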
== Dynamic Representation Learning ==
Complex information about object relationships can be effectively condensed into low-dimensional embeddings in a latent space. Dynamic systems, unlike static ones, involve temporal changes. Differences in learned representations over time in a dynamic system can arise from actual changes or from arbitrary alterations that do not affect the metrics in the latent space; the former reflects the system's stability, while the latter is linked to the alignment of embeddings.
In essence, the stability of the system defines its dynamics, while misalignment signifies irrelevant changes in the latent space. Dynamic embeddings are considered aligned when variations between embeddings at different times accurately represent the system's actual changes, not meaningless alterations in the latent space. The matter of stability and alignment of dynamic embeddings holds significant importance in various tasks reliant on temporal changes within the latent space. These tasks encompass future metadata prediction, temporal evolution, dynamic visualization, and obtaining average embeddings, among others.
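One common way to separate real change from meaningless latent-space motion is to align consecutive embeddings with an orthogonal Procrustes rotation before measuring drift. The sketch below uses random matrices as stand-ins for learned embeddings and assumes SciPy is available.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
X_t = rng.normal(size=(50, 8))                 # embeddings at time t
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # an arbitrary rotation
X_next = X_t @ Q + rng.normal(scale=0.01, size=(50, 8))  # tiny real change

R, _ = orthogonal_procrustes(X_next, X_t)      # best rotation onto X_t
print(f"raw drift:     {np.linalg.norm(X_next - X_t):.2f}")
print(f"aligned drift: {np.linalg.norm(X_next @ R - X_t):.2f}")
# After alignment, only the genuine change remains.
```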
== Meta-network ==
A meta-network is a multi-mode, multi-link, multi-level network. Multi-mode means that there are many types of nodes; e.g., people and locations. Multi-link means that there are many types of links; e.g., friendship and advice. Multi-level means that some nodes may be members of other nodes, as in a network composed of people and organizations in which one type of link records who is a member of which organization.
While different researchers use different modes, common modes reflect who, what, when, where, why and how. A simple example of a meta-network is the PCANS formulation with people, tasks, and resources. A more detailed formulation considers people, tasks, resources, knowledge, and organizations. The ORA tool was developed to support meta-network analysis.
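A PCANS-style meta-network can be sketched as a graph with typed nodes and typed links; the people, tasks, and resources below are invented for illustration, and networkx is used only as a convenient graph container:

```python
import networkx as nx

# Toy PCANS-style meta-network: three node modes (person, task, resource)
# and typed links between them. All names are invented for illustration.
G = nx.Graph()
G.add_nodes_from(["ann", "ben"], mode="person")
G.add_nodes_from(["design", "testing"], mode="task")
G.add_nodes_from(["lab", "simulator"], mode="resource")

G.add_edge("ann", "design", link="assignment")
G.add_edge("ben", "testing", link="assignment")
G.add_edge("design", "simulator", link="requirement")
G.add_edge("ann", "ben", link="friendship")

# Slicing out one link type recovers an ordinary one-mode or two-mode
# network of the kind classical SNA tools analyze.
person_task = [(u, v) for u, v, d in G.edges(data=True) if d["link"] == "assignment"]
print(person_task)
```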
== Illustrative problems that people in the DNA area work on ==
Developing metrics and statistics to assess and identify change within and across networks.
Developing and validating simulations to study network change, evolution, adaptation, decay. See Computer simulation and organizational studies
Developing and testing theory of network change, evolution, adaptation, decay
Developing and validating formal models of network generation and evolution
Developing techniques to visualize network change overall or at the node or group level
Developing statistical techniques to see whether differences observed over time in networks are due to simply different samples from a distribution of links and nodes or changes over time in the underlying distribution of links and nodes
Developing control processes for networks over time
Developing algorithms to change distributions of links in networks over time
Developing algorithms to track groups in networks over time
Developing tools to extract or locate networks from various data sources such as texts
Developing statistically valid measurements on networks over time
Examining the robustness of network metrics under various types of missing data
Empirical studies of multi-mode multi-link multi-time period networks
Examining networks as probabilistic time-variant phenomena
Forecasting change in existing networks
Identifying trails through time given a sequence of networks
Identifying changes in node criticality given a sequence of networks, and anything else related to multi-mode multi-link multi-time period networks
Studying random walks on temporal networks
Quantifying structural properties of contact sequences in dynamic networks, which influence dynamical processes
Assessment of covert activity and dark networks
Citational analysis
Social media analysis
Assessment of public health systems
Analysis of hospital safety outcomes
Assessment of the structure of ethnic violence from news data
Assessment of terror groups
Online social decay of social interactions
Modelling of classroom interactions in schools
== See also ==
Graph dynamical system
International Network for Social Network Analysis
Kathleen M. Carley
Network dynamics
Network science
Sequential dynamical system
== References ==
== Further reading ==
Kathleen M. Carley, 2003, "Dynamic Network Analysis" in Dynamic Social Network Modeling and Analysis: Workshop Summary and Papers, Ronald Breiger, Kathleen Carley, and Philippa Pattison, (Eds.) Committee on Human Factors, National Research Council, National Research Council. Pp. 133–145, Washington, DC.
Kathleen M. Carley, 2002, "Smart Agents and Organizations of the Future" The Handbook of New Media. Edited by Leah Lievrouw and Sonia Livingstone, Ch. 12, pp. 206–220, Thousand Oaks, CA, Sage.
Kathleen M. Carley, Jana Diesner, Jeffrey Reminga, Maksim Tsvetovat, 2008, Toward an Interoperable Dynamic Network Analysis Toolkit, DSS Special Issue on Cyberinfrastructure for Homeland Security: Advances in Information Sharing, Data Mining, and Collaboration Systems. Decision Support Systems 43(4):1324-1347 (article 20)
Terrill L. Frantz, Kathleen M. Carley. 2009, Toward A Confidence Estimate For The Most-Central-Actor Finding. Academy of Management Annual Conference, Chicago, IL, USA, 7–11 August. (Awarded the Sage Publications/RM Division Best Student Paper Award)
Petter Holme, Jari Saramäki, 2011, "Temporal networks". https://arxiv.org/abs/1108.1780
C. Aggarwal, K. Subbian, 2014, "Evolutionary Network Analysis: A Survey". ACM Computing Surveys, 47(1).
== External links ==
Radcliffe Exploratory Seminar on Dynamic Networks
Center for Computational Analysis of Social and Organizational Systems (CASOS)
Systems geology emphasizes the nature of geology as a system – that is, as a set of interacting parts that function as a whole. The systems approach involves study of the linkages or interfaces between the component objects and processes at all levels of detail in order to gain a more comprehensive understanding of the solid Earth. A long-term objective is to provide computational support throughout the cycles of investigation, integrating observation and experiment with modeling and theory, each reinforcing the other. The overall complexity suggests that systems geology must be based on the wider emerging cyberinfrastructure, and should aim to harmonize geological information with Earth system science within the context of the e-science vision of a comprehensive global knowledge system (see Linked Data, Semantic Web).
== Background ==
Systems geology can be seen as an integral part of the science of earth systems, "encompassing all components of the Earth system – air, life, rock and water – to gain a new and more comprehensive understanding of the world as we know it". Much of the background was set out in Solid-Earth Science and Society in 1993. Since then, considerable progress has resulted from large investments in geoinformatics by the US National Science Foundation and the European Commission, much of it implemented on their high-level computing networks. The concepts of Earth Systems are reflected in the teaching of geology. Nevertheless, geology has unique aspects that justify consideration of systems geology as a distinct subsystem. These include the availability of detailed world-wide geological mapping and stratigraphical classification, and the rapidly growing understanding of Earth history in terms of past configurations of geological objects and processes.
== Related initiatives ==
Cornell University's Geoscience Information System Project started in 1995. 'Building the Digital Earth' aims to develop a comprehensive geoscience information system, which they see as one of the most important steps that geoscientists could undertake in response to new technological advancements. Their ambition is to place all information and knowledge, along with access, modeling, and visualization tools, 'under the finger tips of a user'. This objective is echoed in Keller and Baru (2011) where the Earth is considered as a single system (pages 3, 12, 15, 37), and progress is recorded in moving towards the geoinformatics vision set out in 2007: to facilitate 'a future in which someone can sit at a terminal and have easy access to vast stores of data of almost any kind, with the easy ability to visualize, analyze and model those data.' (p15). Because the treatment of earth systems and geology has repercussions in other fields, there is a need for them to share a wider-ranging cyberinfrastructure (p3, chapters 3, 4).
== Wider context ==
The systems approach is being actively developed in many other areas, such as biology and medicine (EuroPhysiome) opening the prospect of widely shared concepts, structures and implementations. Geospatial cyberinfrastructure applications, which seem particularly relevant to communicating information from geologists to end-users, are discussed by Yang et al., 2010.
== Conclusions ==
The systems approach may be particularly relevant to geological surveys, which are typically state, national or federal institutions that maintain and advance knowledge of geosciences. Traditionally, they have focused on the systematic production of geological maps, reports and archives of records and specimens. In the long run, geoinformatics could support integration at a systems level of geological surveys activities world-wide, all contributing to, using, testing and extending a shared cloud-based model. The British Geological Survey website tentatively suggests some possible developments in systems geology and the consequences for future geological mapping. It makes available A Scenario for Systems Geology which brings together relevant material from many sources to suggest how a comprehensive approach to systems geology might evolve. The scenario is not a statement of intent or a proposal for implementation, but an account of some possibilities that can be considered, discussed, criticized and improved. The ideas of systems geology will contribute to the future framework for studying geology in its wider context, but exploration of its full potential is still at an early stage.
== See also ==
Cyberinfrastructure
Earth System Science Partnership
International Geosphere-Biosphere Programme
GeoSciML
== References ==
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.
== Overview ==
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design.
Convenient graphical user interface (GUI) system dynamics software was developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best known SD model is probably the 1972 The Limits to Growth. This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.
System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
== History ==
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.
During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations) in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of
DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field titled Industrial Dynamics in 1961.
From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life teaching what he called "Economic Dynamics", which could be considered a precursor of modern Non-equilibrium economics.
The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.
== Topics in systems dynamics ==
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays.
As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.
=== Causal loop diagrams ===
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram. A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.
The causal loop diagram of the new product introduction may look as follows:
There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow.
The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters.
Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.
=== Stock and flow diagrams ===
Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software.
A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock.
In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.
=== Equations ===
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this.
The steps involved in a simulation are:
Define the problem boundary.
Identify the most important stocks and flows that change these stock levels.
Identify sources of information that impact the flows.
Identify the main feedback loops.
Draw a causal loop diagram that links the stocks, flows and sources of information.
Write the equations that determine the flows.
Estimate the parameters and initial conditions. These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information.
Simulate the model and analyse results.
In this example, the equations that change the two stocks via the flow are:
$$\text{Potential adopters} = -\int_{0}^{t}\text{New adopters}\;dt$$

$$\text{Adopters} = \int_{0}^{t}\text{New adopters}\;dt$$
=== Equations in discrete time ===
List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15:

$$1)\ \text{Probability that contact has not yet adopted} = \frac{\text{Potential adopters}}{\text{Potential adopters} + \text{Adopters}}$$

$$2)\ \text{Imitators} = q \cdot \text{Adopters} \cdot \text{Probability that contact has not yet adopted}$$

$$3)\ \text{Innovators} = p \cdot \text{Potential adopters}$$

$$4)\ \text{New adopters} = \text{Innovators} + \text{Imitators}$$

$$4.1)\ \text{Potential adopters} \mathrel{-}= \text{New adopters}$$

$$4.2)\ \text{Adopters} \mathrel{+}= \text{New adopters}$$

with parameters $p = 0.03$ and $q = 0.4$.
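These equations translate directly into a short simulation loop. The initial pool of 100 potential adopters is an assumed value for illustration, while p = 0.03 and q = 0.4 are the parameters given above:

```python
# Discrete-time run of the adoption model above, one step per year.
# The initial pool of 100 potential adopters is an assumed value; p and q
# are the parameters given in the text.
p, q = 0.03, 0.4
potential_adopters, adopters = 100.0, 0.0

for year in range(1, 16):
    prob_not_adopted = potential_adopters / (potential_adopters + adopters)
    imitators = q * adopters * prob_not_adopted
    innovators = p * potential_adopters
    new_adopters = innovators + imitators
    potential_adopters -= new_adopters  # equation 4.1
    adopters += new_adopters            # equation 4.2
    print(f"year {year:2d}: adopters = {adopters:6.2f}")
```

Running it produces the s-curve described in the next subsection: slow initial growth, a steep middle phase, then saturation near the pool size.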
==== Dynamic simulation results ====
The dynamic simulation results show that the behaviour of the system would be to have growth in adopters that follows a classic s-curve shape.
The increase in adopters is very slow initially, then exponential growth for a period, followed ultimately by saturation.
=== Equations in continuous time ===
To get intermediate values and better accuracy, the model can be run in continuous time: the number of time units is multiplied, and the values that change the stock levels are divided proportionally. In this example the 15 years are multiplied by 4 to obtain 60 quarters, and the value of the flow is divided by 4.
Dividing the value in this way corresponds to the simplest approach, the Euler method, but other methods could be employed instead, such as Runge–Kutta methods.
List of the equations in continuous time, for quarters 1 to 60:
They are the same equations as in the section Equations in discrete time above, except that equations 4.1 and 4.2 are replaced by the following:
$$10)\ \text{Valve New adopters} = \text{New adopters} \cdot \mathit{TimeStep}$$

$$10.1)\ \text{Potential adopters} \mathrel{-}= \text{Valve New adopters}$$

$$10.2)\ \text{Adopters} \mathrel{+}= \text{Valve New adopters}$$

$$\mathit{TimeStep} = 1/4$$
In the stock and flow diagram below, the intermediate flow 'Valve New adopters' calculates the equation:

$$\text{Valve New adopters} = \text{New adopters} \cdot \mathit{TimeStep}$$
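In code, the refinement amounts to scaling each flow by the time step, which is the Euler method mentioned above; the sketch below reuses the assumed initial pool of 100 potential adopters:

```python
# Euler integration of the same model with TimeStep = 1/4 (60 quarters).
# Initial values are the same assumed pool as in the discrete-time sketch.
p, q, time_step = 0.03, 0.4, 0.25
potential_adopters, adopters = 100.0, 0.0

for quarter in range(1, 61):
    prob_not_adopted = potential_adopters / (potential_adopters + adopters)
    new_adopters = p * potential_adopters + q * adopters * prob_not_adopted
    valve = new_adopters * time_step  # flow scaled by the time step (eq. 10)
    potential_adopters -= valve       # equation 10.1
    adopters += valve                 # equation 10.2

print(adopters)  # approaches 100 along a smoother s-curve
```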
== Application ==
System dynamics has found application in a wide range of areas, for example population, agriculture, epidemiological, ecological and economic systems, which usually interact strongly with each other.
System dynamics has various "back of the envelope" management applications. It is a potent tool to:
Teach system thinking reflexes to persons being coached
Analyze and compare assumptions and mental models about the way things work
Gain qualitative insight into the workings of a system or the consequences of a decision
Recognize archetypes of dysfunctional systems in everyday practice
Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies.
System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.
A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the 2008 financial crisis.
=== Example: Growth and decline of companies ===
The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third, thicker lines are used to identify the feedback loops and links that the author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.
=== Example: Piston motion ===
Objective: to model a crank-connecting rod system with a system dynamics model. Two different full descriptions of the physical system with related systems of equations can be found here (in English) and here (in French); they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length.
System dynamic modeling: the system is now modeled, according to a stock and flow system dynamic logic. The figure below shows the stock and flow diagram
Simulation: the behavior of the crank-connecting rod dynamic system can then be simulated. The next figure is a 3D simulation created using procedural animation. Variables of the model animate all parts of this animation: crank, radius, angular frequency, rod length, and piston position.
== See also ==
== References ==
== Further reading ==
Kypuros, Javier (2013). System dynamics and control with bond graph modeling. Boca Raton: Taylor & Francis. ISBN 978-1466560758.
Forrester, Jay W. (1961). Industrial Dynamics. M.I.T. Press.
Forrester, Jay W. (1969). Urban Dynamics. Pegasus Communications. ISBN 978-1-883823-39-9.
Meadows, Donella H. (1972). Limits to Growth. New York: University books. ISBN 978-0-87663-165-2.
Morecroft, John (2007). Strategic Modelling and Business Dynamics: A Feedback Systems Approach. John Wiley & Sons. ISBN 978-0-470-01286-4.
Roberts, Edward B. (1978). Managerial Applications of System Dynamics. Cambridge: MIT Press. ISBN 978-0-262-18088-7.
Randers, Jorgen (1980). Elements of the System Dynamics Method. Cambridge: MIT Press. ISBN 978-0-915299-39-3.
Senge, Peter (1990). The Fifth Discipline. Currency. ISBN 978-0-385-26095-4.
Sterman, John D. (2000). Business Dynamics: Systems thinking and modeling for a complex world. McGraw Hill. ISBN 978-0-07-231135-8.
== External links ==
System Dynamics Society
Introducing System Dynamics – a study prepared for the U.S. Department of Energy
Desert Island Dynamics "An Annotated Survey of the Essential System Dynamics Literature"
True World : Temporal Reasoning Universal Elaboration : System Dynamics software used for diagrams in this article (free)
Systems theory is the transdisciplinary study of systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or artificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expresses synergy or emergent behavior.
Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimized equifinality.
General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering.
== Overview ==
Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physician Alexander Bogdanov, biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy, and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum, Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori.
As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics.
Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences."
Others remain closer to the direct systems concepts developed by the original systems theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, has studied emergent properties, suggesting that they offer analogues for living systems. The distinction of autopoiesis as made by Humberto Maturana and Francisco Varela represent further developments in this field. Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin among others.
With the modern foundations for a general theory of systems following World War I, Ervin László, in the preface for Bertalanffy's book, Perspectives on General System Theory, points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc":
It (General System Theory) was criticized as pseudoscience and said to be nothing more than an admonishment to attend to things in a holistic way. Such criticisms would have lost their point had it been recognized that von Bertalanffy's general system theory is a perspective or paradigm, and that such basic conceptual frameworks play a key role in the development of exact scientific theory. .. Allgemeine Systemtheorie is not directly consistent with an interpretation often put on 'general system theory,' to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Theorie (or Lehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just as Wissenschaft (or 'Science'). These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whether empirically, axiomatically, or philosophically" represented, while many associate Lehre with theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark." An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations.
A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations. This differs from conventional models that center on individuals, structures, departments and units, which separate the part from the whole instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function.
László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity" which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation.
Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view:
The systems view is a world-view that is based on the discipline of SYSTEM INQUIRY. Central to systems inquiry is the concept of SYSTEM. In the most general sense, system means a configuration of parts connected and joined together by a web of relationships. The Primer Group defines system as a family of relationships among the members acting as a whole. Von Bertalanffy defined system as "elements in standing relationship."
== Applications ==
=== Art ===
=== Biology ===
Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction).
Particularly from the year 2000 onwards, the biosciences use the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties, i.e., properties of a system whose theoretical description is possible only with techniques that fall under the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928.
Subdisciplines of systems biology include:
Systems neuroscience
Systems pharmacology
==== Ecology ====
Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems; it can be seen as an application of general systems theory to ecology.
Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
=== Chemistry ===
Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties. Systems chemistry is also related to the origin of life (abiogenesis).
=== Engineering ===
Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs.
==== User-centered design process ====
Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human computer interaction (HCI) information system. Overlooking this and developing software without input from the future users (mediated by user experience designers) is a serious design flaw that can lead to complete failure of information systems, as well as increased stress and mental illness for users of information systems, leading in turn to increased costs and a huge waste of resources. It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions that lead to serious design flaws and lack of usability.
The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted and the produced systems are discarded before implementation by entirely preventable mistakes. According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey:
Pure success is the combination of high customer satisfaction with high return on value to the organization. Related figures for the year 2017 are: successful: 14%, challenged: 67%, failed: 19%.
=== Mathematics ===
System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays.
=== Social sciences and humanities ===
Systems theory in anthropology
Systems theory in archaeology
Systems theory in political science
==== Psychology ====
Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems.
It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It is an approach in psychology in which groups and individuals are considered as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology."
In systems psychology, characteristics of organizational behaviour (such as individual needs, rewards, expectations, and attributes of the people interacting with the systems) are considered in this process in order to create an effective system.
=== Informatics ===
System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory and dynamical systems theory.
== History ==
=== Precursors ===
Systems thinking can date back to antiquity, whether considering the first systems of written communication with Sumerian cuneiform to Maya numerals, or the feats of engineering with the Egyptian pyramids. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified with the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus.: 12–13 Ludwig von Bertalanffy traced systems concepts to the philosophy of Gottfried Leibniz and Nicholas of Cusa's coincidentia oppositorum. While modern systems can seem considerably more complicated, they may embed themselves in history.
Figures like James Joule and Sadi Carnot represent an important step to introduce the systems approach into the (rationalist) hard sciences of the 19th century, also known as the energy transformation. Then, the thermodynamics of this century, by Rudolf Clausius, Josiah Gibbs and others, established the system reference model as a formal scientific object.
Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial age models and thinking, wherein history represents history and math represents math, while the arts and sciences specialization remain separate and many treat teaching as behaviorist conditioning.
The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. The theorists sought holistic methods by developing systems concepts that could integrate with different areas.
Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century.
=== Founding and early development ===
Where assumptions in Western science from Plato and Aristotle to Isaac Newton's Principia (1687) have historically influenced all areas from the hard to social sciences (see, David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems.
Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the university's interdisciplinary Division of the Social Sciences established in 1931.: 5–9
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science.
"General systems theory" (GST; German: allgemeine Systemlehre) was coined in the 1940s by Ludwig von Bertalanffy, who sought a new approach to the study of living systems. Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946. According to Mike C. Jackson (2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles.
Jackson also claimed that Bertalanffy's work was informed by Alexander Bogdanov's three-volume Tectology (1912–1917), providing the conceptual base for GST. A similar position is held by Richard Mattessich (1978) and Fritjof Capra (1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works.
The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science.
Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science by 1950.
In 1954, von Bertalanffy, along with Anatol Rapoport, Ralph W. Gerard, and Kenneth Boulding, came together at the Center for Advanced Study in the Behavioral Sciences in Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." In December that year, a meeting of around 70 people was held in Berkeley to form a society for the exploration and development of GST. The Society for General Systems Research (renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of the American Association for the Advancement of Science (AAAS), specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s like William Ross Ashby, Margaret Mead, Gregory Bateson, and C. West Churchman, among others.
Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology, game theory, and social network analysis. Subjects that were studied included those of complexity, self-organization, connectionism and adaptive systems. In fields like cybernetics, researchers such as Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster examined complex systems mathematically; Von Neumann discovered cellular automata and self-reproducing systems, again with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time, Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. To fulfill this role, Odum developed a general system, or universal language, based on the circuit language of electronics, known as the Energy Systems Language.
The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view. Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues.: 229–233 Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen an ethical view on the subject.
In sociology, systems thinking also began in the 20th century, including Talcott Parsons' action theory and Niklas Luhmann's social systems theory. According to Rudolf Stichweh (2011, p. 2):

Since its beginnings the social sciences were an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s.

Elements of systems thinking can also be seen in the work of James Clerk Maxwell, particularly control theory.
== General systems research and systems inquiry ==
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946. The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications.
There are many definitions of a general system; properties that definitions commonly include are an overall goal of the system, parts of the system and relationships between these parts, and emergent properties of the interaction between the parts of the system that are not performed by any part on its own.: 58  Derek Hitchins defines a system in terms of entropy, as a collection of parts and relationships between the parts in which the parts or their interrelationships decrease entropy.: 58
Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote:: 32
[T]here exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general.
In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated:
Thus when von Bertalanffy spoke of Allgemeine Systemtheorie it was consistent with his view that he was proposing a new perspective, a new way of doing science. It was not directly consistent with an interpretation often put on "general system theory", to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Bertalanffy outlines systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry:
philosophy: the ontology, epistemology, and axiology of systems
theory: a set of interrelated concepts and principles applying to all systems
methodology: the set of models, strategies, methods and tools that instrumentalize systems theory and philosophy
application: the application and interaction of the domains
These operate in a recursive relationship, he explained; integrating 'philosophy' and 'theory' as knowledge, and 'method' and 'application' as action; systems inquiry is thus knowledgeable action.
=== Properties of general systems ===
General systems may be split into a hierarchy of systems, in which there are fewer interactions between the different systems than between the components within each system. The alternative is heterarchy, where all components within the system interact with one another.: 65  Sometimes an entire system will be represented inside another system as a part, sometimes referred to as a holon. These hierarchies of systems are studied in hierarchy theory. The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (interact with one another a lot), then the system cannot be decomposed into different systems. The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or for different processes in a system.: 293  Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems.: 72
Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems.: 73
== System types and fields ==
=== Theoretical fields ===
Chaos theory
Complex system
Control theory
Dynamical systems theory
Earth system science
Ecological systems theory
Industrial ecology
Living systems theory
Sociotechnical system
Systemics
Telecoupling
Urban metabolism
World-systems theory
==== Cybernetics ====
Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks.
The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) makes general systems a proper subset of cybernetics. In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster.
Threads of cybernetics began in the late 1800s that led toward the publishing of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics:

Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made.: 17–23

Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal to explain complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but do not try to describe general (universal) complex (singular) systems. The best context to compare the different "C"-Theories about complex systems is historical, which emphasizes different tools and methodologies, from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today.
=== System types ===
Biological
Anatomical systems
Nervous
Sensory
Ecological systems
Living systems
Complex
Complex adaptive system
Conceptual
Coordinate
Deterministic (philosophy)
Digital ecosystem
Experimental
Writing
Coupled human–environment
Database
Deterministic (science)
Mathematical
Dynamical system
Formal system
Energy
Holarchical
Information
Measurement
Imperial
Metric
Multi-agent
Nonlinear
Operating
Planetary
Social
Cultural
Economic
Legal
Political
Star
==== Complex adaptive systems ====
Complex adaptive systems (CAS), coined by John H. Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience.
In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features.
== See also ==
=== Organizations ===
List of systems sciences organizations
== References ==
== Further reading ==
Ashby, W. Ross. 1956. An Introduction to Cybernetics. Chapman & Hall.
—— 1960. Design for a Brain: The Origin of Adaptive Behavior (2nd ed.). Chapman & Hall.
Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.
von Bertalanffy, Ludwig. 1968. General System Theory: Foundations, Development, Applications. New York: George Braziller.
Burks, Arthur. 1970. Essays on Cellular Automata. University of Illinois Press.
Cherry, Colin. 1957. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: The MIT Press.
Churchman, C. West. 1971. The Design of Inquiring Systems: Basic Concepts of Systems and Organizations. New York: Basic Books.
Checkland, Peter. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Wiley.
Gleick, James. 1997. Chaos: Making a New Science, Random House.
Haken, Hermann. 1983. Synergetics: An Introduction – 3rd Edition, Springer.
Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge: The MIT Press.
Luhmann, Niklas. 2013. Introduction to Systems Theory, Polity.
Macy, Joanna. 1991. Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. SUNY Press.
Maturana, Humberto, and Francisco Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Springer Science & Business Media.
Miller, James Grier. 1978. Living Systems. Mcgraw-Hill.
von Neumann, John. 1951 "The General and Logical Theory of Automata." pp. 1–41 in Cerebral Mechanisms in Behavior.
—— 1956. "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Automata Studies 34: 43–98.
von Neumann, John, and Arthur Burks, eds. 1966. Theory of Self-Reproducing Automata. Illinois University Press.
Parsons, Talcott. 1951. The Social System. The Free Press.
Prigogine, Ilya. 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. W H Freeman & Co.
Simon, Herbert A. 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106.
—— 1996. The Sciences of the Artificial (3rd ed.), vol. 136. The MIT Press.
Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. ISBN 0-252-72546-8.
Adapted from Shannon, Claude. 1948. "A Mathematical Theory of Communication." Bell System Technical Journal 27(3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.
Thom, René. 1972. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Massachusetts
Volk, Tyler. 1995. Metapatterns: Across Space, Time, and Mind. New York: Columbia University Press.
Weaver, Warren. 1948. "Science and Complexity." The American Scientist, pp. 536–544.
Wiener, Norbert. 1965. Cybernetics: Or the Control and Communication in the Animal and the Machine (2nd ed.). Cambridge: The MIT Press.
Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media.
Zadeh, Lofti. 1962. "From Circuit Theory to System Theory." Proceedings of the IRE 50(5): 856–865.
== External links ==
Systems Thinking at Wikiversity
Systems theory at Principia Cybernetica Web
Introduction to systems thinking – 55 slides
Organizations
International Society for the System Sciences
New England Complex Systems Institute
System Dynamics Society
A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grid, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe.
The behavior of a complex system is intrinsically difficult to model due to the dependencies, competitions, relationships, and other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links represent their interactions.
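This network representation can be made concrete in a few lines of code. The sketch below is a minimal illustration in Python; the component names and links are invented for the example, not taken from any particular study.

```python
# Minimal network representation of a complex system: nodes are
# components, edges are interactions. Names here are hypothetical.
from collections import defaultdict

edges = [
    ("power_grid", "communication"),
    ("communication", "transport"),
    ("transport", "economy"),
    ("economy", "power_grid"),
    ("economy", "communication"),
]

adjacency = defaultdict(set)
for a, b in edges:          # undirected interactions
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree = number of direct interaction partners per component.
for node, neighbours in sorted(adjacency.items()):
    print(f"{node}: degree {len(neighbours)}")
```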
The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.
As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.
== Types of systems ==
Complex systems can be:
Complex adaptive systems which have the capacity to change.
Polycentric systems: "where many elements are capable of making mutual adjustments for ordering their relationships with one another within a general system of rules where each element acts with independence of other elements".
Disorganised systems involving localized interactions of multiple entities that do not form a coherent whole. Disorganised systems are linked to self-organisation processes.
Hierarchic systems which are analyzable into successive sets of subsystems. They can also be called nested or embedded systems.
Cybernetic systems involve information feedback loops.
== Key concepts ==
=== Adaptation ===
Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the international trade markets, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.
=== Decomposability ===
A system is decomposable if the parts of the system (subsystems) are independent from each other; for example, the model of a perfect gas considers the relations among molecules negligible.
In a nearly decomposable system, the interactions between subsystems are weak but not negligible; this is often the case in social systems. Conceptually, a system is nearly decomposable if the variables composing it can be separated into classes and subclasses, if these variables are independent for many functions but affect each other, and if the whole system is greater than the parts.
== Features ==
Complex systems may have the following features:
Complex systems may be open
Complex systems are usually open systems – that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium: but despite this flux, there may be pattern stability, see synergetics.
Complex systems may exhibit critical transitions
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial and economic systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions; a minimal generator for such a topology is sketched after this list.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity; a short illustration follows this list.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.
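The small-world topology mentioned under "Dynamic network of multiplicity" above can be generated with standard tools. A minimal sketch using the networkx library (assuming it is available; the parameter values are illustrative):

```python
# Watts-Strogatz small-world graph: a ring lattice with k neighbours
# per node, each edge rewired with probability p_rewire, yielding many
# local links plus a few long-range shortcuts.
import networkx as nx

n, k, p_rewire = 100, 4, 0.1          # illustrative parameters
G = nx.connected_watts_strogatz_graph(n, k, p_rewire, tries=100, seed=42)

print("average clustering:", nx.average_clustering(G))              # high
print("average shortest path:", nx.average_shortest_path_length(G))  # low
```

The nonlinearity feature can likewise be illustrated with the logistic map, a standard one-line nonlinear system. In this sketch r = 4 (an illustrative choice in the chaotic regime), and two trajectories starting a billionth apart diverge to order-one differences:

```python
# Logistic map x -> r*x*(1-x): two trajectories starting 1e-9 apart
# diverge to order-one differences - the butterfly effect.
r = 4.0
x, y = 0.3, 0.3 + 1e-9

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```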
== History ==
In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described these as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."
While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.
Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies, mostly based on complex systems theory and chaos theory, to economic analysis.
The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.
== Applications ==
=== Complexity in practice ===
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.
=== Complexity of cities ===
Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.
=== Complexity economics ===
Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann.
Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The index has been shown to detect hidden changes in time series. Further, Orlando et al., over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases, such as in US GDP in 1949, 1953, etc. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
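The recurrence matrix underlying such analyses is simple to construct. The sketch below is a minimal illustration: the time series is synthetic, and only the recurrence rate is computed; the RQCI of Orlando et al. is a more elaborate statistic built on the same matrix.

```python
# Recurrence matrix underlying RQA: R[i, j] = 1 when states x_i and
# x_j are closer than a threshold eps. RQA measures (recurrence rate,
# determinism, etc.) are statistics of this matrix; only the
# recurrence rate is computed here.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)

eps = 0.2
R = (np.abs(series[:, None] - series[None, :]) < eps).astype(int)

recurrence_rate = R.sum() / R.size
print(f"recurrence rate: {recurrence_rate:.3f}")
```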
=== Complexity and education ===
Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".
=== Complexity in healthcare research and practice ===
Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice.
=== Complexity and biology ===
Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.
=== Complexity and chaos theory ===
Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and the relevant equations describing the chaotic system's behavior, one can theoretically make perfectly accurate predictions of the system, though in practice this is impossible to do with arbitrary accuracy.
The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".
When one analyzes complex systems, sensitivity to initial conditions, for example, is not an issue as important as it is within chaos theory, in which it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business see Stoop et al. who discussed Android's market position, Orlando who explained the corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells and Orlando et al. who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model.
Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.
=== Complexity and network science ===
A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks.
== Notable scholars ==
== See also ==
== References ==
== Further reading ==
Complexity Explained.
L.A.N. Amaral and J.M. Ottino, Complex networks – augmenting the framework for the study of complex system, 2004.
Chu, D.; Strand, R.; Fjelland, R. (2003). "Theories of complexity". Complexity. 8 (3): 19–30. Bibcode:2003Cmplx...8c..19C. doi:10.1002/cplx.10059.
Walter Clemens, Jr., Complexity Science and World Affairs, SUNY Press, 2013.
Gell-Mann, Murray (1995). "Let's Call It Plectics". Complexity. 1 (5): 3–5. Bibcode:1996Cmplx...1e...3G. doi:10.1002/cplx.6130010502.
A. Gogolin, A. Nersesyan and A. Tsvelik, Theory of strongly correlated systems , Cambridge University Press, 1999.
Nigel Goldenfeld and Leo P. Kadanoff, Simple Lessons from Complexity Archived 2017-09-28 at the Wayback Machine, 1999
Kelly, K. (1995). Out of Control, Perseus Books Group.
Orlando, Giuseppe; Pisarchick, Alexander; Stoop, Ruedi (2021). Nonlinearities in Economics. Dynamic Modeling and Econometrics in Economics and Finance. Vol. 29. doi:10.1007/978-3-030-70982-2. ISBN 978-3-030-70981-5. S2CID 239756912.
Syed M. Mehmud (2011), A Healthcare Exchange Complexity Model
Preiser-Kapeller, Johannes, "Calculating Byzantium. Social Network Analysis and Complexity Sciences as tools for the exploration of medieval social dynamics". August 2010
Donald Snooks, Graeme (2008). "A general theory of complex living systems: Exploring the demand side of dynamics". Complexity. 13 (6): 12–20. Bibcode:2008Cmplx..13f..12S. doi:10.1002/cplx.20225.
Stefan Thurner, Peter Klimek, Rudolf Hanel: Introduction to the Theory of Complex Systems, Oxford University Press, 2018, ISBN 978-0198821939
SFI @30, Foundations & Frontiers (2014).
== External links ==
"The Open Agent-Based Modeling Consortium".
"Complexity Science Focus". Archived from the original on 2017-12-05. Retrieved 2017-09-22.
"Santa Fe Institute".
"The Center for the Study of Complex Systems, Univ. of Michigan Ann Arbor". Archived from the original on 2017-12-13. Retrieved 2017-09-22.
"INDECS". (Interdisciplinary Description of Complex Systems)
"Introduction to Complexity – Free online course by Melanie Mitchell". Archived from the original on 2018-08-30. Retrieved 2018-08-29.
Jessie Henshaw (October 24, 2013). "Complex Systems". Encyclopedia of Earth.
Complex systems in scholarpedia.
Complex Systems Society
(Australian) Complex systems research network.
Complex Systems Modeling based on Luis M. Rocha, 1999.
CRM Complex systems research group
The Center for Complex Systems Research, Univ. of Illinois at Urbana-Champaign
Institute for Cross-Disciplinary Physics and Complex Systems (IFISC)
Evolutionary game theory (EGT) is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies.
Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change. This is influenced by the frequency of the competing strategies in the population.
Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution. It has in turn become of interest to economists, sociologists, anthropologists, and philosophers.
== History ==
=== Classical game theory ===
Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players, all of whom have a choice of moves. Games can be a single round or repetitive. The approach a player takes in making their moves constitutes their strategy. Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players; rules and resulting payoffs can be expressed as decision trees or in a payoff matrix. Classical theory requires the players to make rational choices. Each player must consider the strategic analysis that their opponents are making to make their own choice of moves.
=== The problem of ritualized behaviour ===
Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in a conflict situation; "why are animals so 'gentlemanly or ladylike' in contests for resources?" The leading ethologists Niko Tinbergen and Konrad Lorenz proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at an individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed.
=== Adapting game theory to evolutionary games ===
Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally—only that they have a strategy. The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs. The success of a strategy is determined by how good the strategy is in the presence of competing strategies (including itself), and of the frequency with which those strategies are used. Maynard Smith described his work in his book Evolution and the Theory of Games.
Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness (relative worth in being able to reproduce). It is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, in a replicator equation. The replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. Games are run repetitively with no terminating conditions. Results include the dynamics of changes in the population, the success of strategies, and any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy.
== Evolutionary games ==
=== Models ===
Evolutionary game theory encompasses Darwinian evolution, including competition (the game), natural selection (replicator dynamics), and heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, altruism, parental care, co-evolution, and ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models.
The common way to study the evolutionary dynamics in games is through replicator equations. These show the growth rate of the proportion of organisms using a certain strategy and that rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing and that strategies breed true. Some attractors (all global asymptotically stable fixed points) of the equations are evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this usually means such strategies are programmed and heavily influenced by genetics, thus making any player or organism's strategy determined by these biological factors.
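A replicator equation of this kind can be integrated numerically. The sketch below is a minimal Euler-step integrator; the payoff matrix is an illustrative prisoner's-dilemma example rather than one taken from this article, and under these payoffs defection takes over the population:

```python
# Replicator dynamics dx_i/dt = x_i * (f_i - f_mean), with strategy
# payoffs f = A @ x and population-average payoff f_mean = x . f.
import numpy as np

def replicator_step(x, A, dt=0.01):
    f = A @ x                    # expected payoff of each strategy
    return x + dt * x * (f - x @ f)

# Illustrative prisoner's-dilemma payoffs (rows: cooperate, defect).
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.9, 0.1])         # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x, A)
print("long-run frequencies (cooperate, defect):", np.round(x, 4))
```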
Evolutionary games are mathematical objects with different rules, payoffs, and mathematical behaviours. Each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are often given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove, war of attrition, stag hunt, producer-scrounger, tragedy of the commons, and prisoner's dilemma. Strategies for these games include hawk, dove, bourgeois, prober, defector, assessor, and retaliator. The various strategies compete under the particular game's rules, and the mathematics are used to determine the results and behaviours.
=== Hawk dove ===
The first game that Maynard Smith analysed is the classic hawk dove game. It was conceived to analyse Lorenz and Tinbergen's problem, a contest over a shareable resource. The contestants can be either a hawk or a dove. These are two subtypes or morphs of one species with different strategies. The hawk first displays aggression, then escalates into a fight until it either wins or is injured (loses). The dove first displays aggression, but if faced with major escalation runs for safety. If not faced with such escalation, the dove attempts to share the resource.
Given that the resource is assigned the value V, and the damage from losing a fight is assigned the cost C:
If a hawk meets a dove, the hawk gets the full resource V
If a hawk meets a hawk, half the time they win, half the time they lose, so the average outcome is then V/2 minus C/2
If a dove meets a hawk, the dove will back off and get nothing – 0
If a dove meets a dove, both share the resource and get V/2
The actual payoff, however, depends on the probability of meeting a hawk or dove, which in turn is a representation of the percentage of hawks and doves in the population when a particular contest takes place. That, in turn, is determined by the results of all of the previous contests. If the cost of losing C is greater than the value of winning V (the normal situation in the natural world) the mathematics ends in an evolutionarily stable strategy (ESS), a mix of the two strategies where the population of hawks is V/C. The population regresses to this equilibrium point if any new hawks or doves make a temporary perturbation in the population.
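The convergence of the hawk proportion to V/C can be checked numerically with replicator dynamics. A minimal sketch (the values of V and C are illustrative, with C > V as in the situation described above):

```python
# Hawk-dove replicator dynamics: with C > V, the hawk frequency p
# converges to the mixed ESS p* = V/C from any interior starting point.
V, C = 2.0, 6.0                # illustrative values, C > V
p, dt = 0.9, 0.01              # initial hawk frequency, time step

for _ in range(50_000):
    w_hawk = (V - C) / 2 * p + V * (1 - p)   # expected hawk payoff
    w_dove = V / 2 * (1 - p)                 # expected dove payoff
    w_mean = p * w_hawk + (1 - p) * w_dove
    p += dt * p * (w_hawk - w_mean)

print(f"simulated hawk frequency: {p:.3f}  (V/C = {V / C:.3f})")
```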
The solution of the hawk dove game explains why most animal contests involve only ritual fighting behaviours in contests rather than outright battles. The result does not at all depend on "good of the species" behaviours as suggested by Lorenz, but solely on the implications of the actions of so-called selfish genes.
=== War of attrition ===
In the hawk dove game the resource is shareable, which gives payoffs to both doves meeting in a pairwise contest. Where the resource is not shareable, but an alternative resource might be available by backing off and trying elsewhere, pure hawk or dove strategies are less effective. If an unshareable resource is combined with a high cost of losing a contest (injury or possible death) both hawk and dove payoffs are further diminished. A safer strategy of lower cost display, bluffing and waiting to win, is then viable – a bluffer strategy. The game then becomes one of accumulating costs, either the costs of displaying or the costs of prolonged unresolved engagement. It is effectively an auction; the winner is the contestant who will swallow the greater cost while the loser gets the same cost as the winner but no resource. The resulting evolutionary game theory mathematics lead to an optimal strategy of timed bluffing.
This is because in the war of attrition any strategy that is unwavering and predictable is unstable, because it will ultimately be displaced by a mutant strategy which relies on the fact that it can best the existing predictable strategy by investing an extra small delta of waiting resource to ensure that it wins. Therefore, only a random unpredictable strategy can maintain itself in a population of bluffers. The contestants in effect choose an acceptable cost to be incurred related to the value of the resource being sought, effectively making a random bid as part of a mixed strategy (a strategy where a contestant has several, or even many, possible actions in their strategy). This implements a distribution of bids for a resource of specific value V, where the bid for any specific contest is chosen at random from that distribution. The distribution (an ESS) can be computed using the Bishop-Cannings theorem, which holds true for any mixed-strategy ESS. The distribution function in these contests was determined by Parker and Thompson to be:
{\displaystyle p(x)={\frac {e^{-x/V}}{V}}.}
The result is that the cumulative population of quitters for any particular cost m in this "mixed strategy" solution is:
{\displaystyle p(m)=1-e^{-m/V},}
as shown in the adjacent graph. The intuitive sense that greater values of resource sought leads to greater waiting times is borne out. This is observed in nature, as in male dung flies contesting for mating sites, where the timing of disengagement in contests is as predicted by evolutionary theory mathematics.
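The stated distribution can be sampled by inverse-transform sampling: for u drawn uniformly from (0, 1), the bid m = −V ln(1 − u) has exactly the cumulative distribution p(m) above, and the mean cost equals V. A minimal numerical check (V is an illustrative value):

```python
# War-of-attrition mixed strategy: bids drawn from p(m) = 1 - exp(-m/V)
# via inverse-transform sampling, m = -V * ln(1 - u) for uniform u.
import numpy as np

V = 3.0                                  # illustrative resource value
rng = np.random.default_rng(1)
bids = -V * np.log(1 - rng.random(100_000))

print(f"sample mean bid: {bids.mean():.3f}  (theory: {V})")
print(f"fraction quitting below cost V: {(bids < V).mean():.3f} "
      f"(theory: {1 - np.exp(-1):.3f})")
```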
=== Asymmetries that allow new strategies ===
In the war of attrition there must be nothing that signals the size of a bid to an opponent, otherwise the opponent can use the cue in an effective counter-strategy. There is however a mutant strategy which can better a bluffer in the war of attrition game if a suitable asymmetry exists, the bourgeois strategy. Bourgeois uses an asymmetry of some sort to break the deadlock. In nature one such asymmetry is possession of a resource. The strategy is to play a hawk if in possession of the resource, but to display then retreat if not in possession. This requires greater cognitive capability than hawk, but bourgeois is common in many animal contests, such as in contests among mantis shrimps and among speckled wood butterflies.
=== Social behaviour ===
Games like hawk dove and war of attrition represent pure competition between individuals and have no attendant social elements. Where social influences apply, competitors have four possible alternatives for strategic interaction. This is shown on the adjacent figure, where a plus sign represents a benefit and a minus sign represents a cost.
In a cooperative or mutualistic relationship both "donor" and "recipient" are almost indistinguishable as both gain a benefit in the game by co-operating, i.e. the pair are in a game-wise situation where both can gain by executing a certain strategy, or alternatively both must act in concert because of some encompassing constraints that effectively puts them "in the same boat".
In an altruistic relationship the donor, at a cost to themself provides a benefit to the recipient. In the general case the recipient will have a kin relationship to the donor and the donation is one-way. Behaviours where benefits are donated alternatively (in both directions) at a cost, are often called "altruistic", but on analysis such "altruism" can be seen to arise from optimised "selfish" strategies.
Spite is essentially a "reversed" form of altruism, in which an ally is helped by damaging the ally's competitors at a cost to the actor. The general case is that the ally is kin related and the benefit is an easier competitive environment for the ally. Note: George Price, one of the early mathematical modellers of both altruism and spite, found this equivalence particularly disturbing at an emotional level.
Selfishness is the base criteria of all strategic choice from a game theory perspective – strategies not aimed at self-survival and self-replication are not long for any game. Critically however, this situation is impacted by the fact that competition is taking place on multiple levels – i.e. at a genetic, an individual and a group level.
== Contests of selfish genes ==
At first glance it may appear that the contestants of evolutionary games are the individuals present in each generation who directly participate in the game. But individuals live only through one game cycle, and instead it is the strategies that really contest with one another over the duration of these many-generation games. So it is ultimately genes that play out a full contest – selfish genes of strategy. The contesting genes are present in an individual and to a degree in all of the individual's kin. This can sometimes profoundly affect which strategies survive, especially with issues of cooperation and defection. William Hamilton, known for his theory of kin selection, explored many of these cases using game-theoretic models. Kin-related treatment of game contests helps to explain many aspects of the behaviour of social insects, the altruistic behaviour in parent-offspring interactions, mutual protection behaviours, and co-operative care of offspring. For such games, Hamilton defined an extended form of fitness – inclusive fitness, which includes an individual's offspring as well as any offspring equivalents found in kin.
Hamilton went beyond kin relatedness to work with Robert Axelrod, analysing games of co-operation under conditions not involving kin where reciprocal altruism came into play.
=== Eusociality and kin selection ===
Eusocial insect workers forfeit reproductive rights to their queen. It has been suggested that kin selection, based on the genetic makeup of these workers, may predispose them to altruistic behaviours. Most eusocial insect societies have haplodiploid sexual determination, which means that workers are unusually closely related.
This explanation of insect eusociality has, however, been challenged by a few highly-noted evolutionary game theorists (Nowak and Wilson) who have published a controversial alternative game theoretic explanation based on a sequential development and group selection effects proposed for these insect species.
=== Prisoner's dilemma ===
A difficulty of the theory of evolution, recognised by Darwin himself, was the problem of altruism. If the basis for selection is at an individual level, altruism makes no sense at all. But universal selection at the group level (for the good of the species, not the individual) fails to pass the test of the mathematics of game theory and is certainly not the general case in nature. Yet in many social animals, altruistic behaviour exists. The solution to this problem can be found in the application of evolutionary game theory to the prisoner's dilemma game – a game which tests the payoffs of cooperating or in defecting from cooperation. It is the most studied game in all of game theory.
The analysis of the prisoner's dilemma is as a repetitive game. This affords competitors the possibility of retaliating for defection in previous rounds of the game. Many strategies have been tested; the best competitive strategies are general cooperation, with a reserved retaliatory response if necessary. The most famous and one of the most successful of these is tit-for-tat with a simple algorithm.
The pay-off for any single round of the game is defined by the pay-off matrix for a single round game (shown in bar chart 1 below). In multi-round games the different choices – co-operate or defect – can be made in any particular round, resulting in a certain round payoff. It is, however, the possible accumulated pay-offs over the multiple rounds that count in shaping the overall pay-offs for differing multi-round strategies such as tit-for-tat.
Example 1: The straightforward single round prisoner's dilemma game. The classic prisoner's dilemma game payoffs give a player a maximum payoff if they defect and their partner co-operates (this choice is known as temptation). If, however, the player co-operates and their partner defects, they get the worst possible result (the sucker's payoff). In these payoff conditions the best choice (a Nash equilibrium) is to defect.
Example 2: Prisoner's dilemma played repeatedly. The strategy employed is tit-for-tat, which alters behaviour based on the action taken by a partner in the previous round – i.e. reward co-operation and punish defection. The effect of this strategy in accumulated payoff over many rounds is to produce a higher payoff for both players' co-operation and a lower payoff for defection. This removes the temptation to defect. The sucker's payoff also becomes smaller, although "invasion" by a pure defection strategy is not entirely eliminated.
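The effect described in Example 2 can be reproduced with a short simulation. The sketch below is illustrative only: it uses the conventional payoff values T = 5, R = 3, P = 1, S = 0 (not taken from this article) and pits tit-for-tat against itself and against unconditional defection:

```python
# Iterated prisoner's dilemma: accumulated payoffs of tit-for-tat
# against itself and against always-defect over many rounds.
# Conventional payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):     # copy opponent's previous move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []            # each side's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))
print("TFT vs defect:", play(tit_for_tat, always_defect))
```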
=== Routes to altruism ===
Altruism takes place when one individual, at a cost (C) to itself, exercises a strategy that provides a benefit (B) to another individual. The cost may consist of a loss of capability or resource which helps in the battle for survival and reproduction, or an added risk to its own survival. Altruism strategies can arise through:
=== The evolutionarily stable strategy ===
The evolutionarily stable strategy (ESS) is akin to the Nash equilibrium in classical game theory, but with mathematically extended criteria. Nash equilibrium is a game equilibrium where it is not rational for any player to deviate from their present strategy, provided that the others adhere to their strategies. An ESS is a state of game dynamics where, in a very large population of competitors, another mutant strategy cannot successfully enter the population to disturb the existing dynamic (which itself depends on the population mix). Therefore, a successful strategy (with an ESS) must be both effective against competitors when it is rare – to enter the previous competing population, and successful when later in high proportion in the population – to defend itself. This in turn means that the strategy must be successful when it contends with others exactly like itself.
An ESS is not:
An optimal strategy: that would maximize fitness, and many ESS states are far below the maximum fitness achievable in a fitness landscape. (See hawk dove graph above as an example of this.)
A singular solution: often several ESS conditions can exist in a competitive situation. A particular contest might stabilize into any one of these possibilities, but later a major perturbation in conditions can move the solution into one of the alternative ESS states.
Always present: it is possible for there to be no ESS. An evolutionary game with no ESS is "rock-scissors-paper", as found in species such as the side-blotched lizard (Uta stansburiana).
An unbeatable strategy: the ESS is only an uninvadable strategy.
The ESS state can be solved for by exploring either the dynamics of population change to determine an ESS, or by solving equations for the stable stationary point conditions which define an ESS. For example, in the hawk dove game we can look for whether there is a static population mix condition where the fitness of doves will be exactly the same as fitness of hawks (therefore both having equivalent growth rates – a static point).
Let the chance of meeting a hawk be p, so the chance of meeting a dove is (1 − p).
Let W(hawk) equal the payoff for a hawk:
W(hawk) = payoff in the chance of meeting a dove + payoff in the chance of meeting a hawk
Substituting the payoff matrix results into the above equation:
W(hawk) = V·(1 − p) + (V/2 − C/2)·p
Similarly for a dove:
W(dove) = V/2·(1 − p) + 0·p
so
W(dove) = V/2·(1 − p)
Equating the two fitnesses, hawk and dove:
V·(1 − p) + (V/2 − C/2)·p = V/2·(1 − p)
and solving for p:
p = V/C
So for this "static point", where the population percentage is an ESS, the solution is ESS(percent hawk) = V/C.
Similarly, using inequalities, it can be shown that an additional hawk or dove mutant entering this ESS state eventually results in less fitness for their kind – both a true Nash and an ESS equilibrium. This example shows that when the risk of contest injury or death (the cost C) is significantly greater than the potential reward (the benefit value V), the stable population will be mixed between aggressors and doves, and the proportion of doves will exceed that of the aggressors. This explains behaviours observed in nature.
== Unstable games, cyclic patterns ==
=== Rock paper scissors ===
Rock paper scissors incorporated into an evolutionary game has been used for modelling natural processes in the study of ecology.
Using experimental economics methods, scientists have used RPS games to test human social evolutionary dynamical behaviours in laboratories. The social cyclic behaviours, predicted by evolutionary game theory, have been observed in various laboratory experiments.
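The cyclic dynamics can be seen directly by putting a zero-sum RPS payoff matrix into the replicator equation. In the minimal sketch below (simple Euler integration; the exact orbits are closed curves, so any slow drift is a numerical artifact), the strategy frequencies orbit the mixed equilibrium (1/3, 1/3, 1/3) rather than converging to it:

```python
# Rock-paper-scissors replicator dynamics: strategy frequencies orbit
# the mixed equilibrium (1/3, 1/3, 1/3) instead of converging to it.
import numpy as np

A = np.array([[ 0, -1,  1],   # rock     vs (rock, paper, scissors)
              [ 1,  0, -1],   # paper
              [-1,  1,  0]])  # scissors

x = np.array([0.6, 0.3, 0.1])
dt = 0.01
for step in range(1, 30001):
    f = A @ x
    x = x + dt * x * (f - x @ f)
    x = x / x.sum()           # guard against numerical drift
    if step % 7500 == 0:
        print(f"t = {step * dt:5.0f}: x = {np.round(x, 3)}")
```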
=== Side-blotched lizard plays the RPS, and other cyclical games ===
The first example of RPS in nature was seen in the behaviours and throat colours of a small lizard of western North America. The side-blotched lizard (Uta stansburiana) is polymorphic with three throat-colour morphs that each pursue a different mating strategy:
The orange throat is very aggressive and operates over a large territory – attempting to mate with numerous females
The unaggressive yellow throat mimics the markings and behavior of female lizards, and "sneakily" slips into the orange throat's territory to mate with the females there (thereby taking over the population)
The blue throat mates with, and carefully guards, one female – making it impossible for the sneakers to succeed and therefore taking over their place in the population
However the blue throats cannot overcome the more aggressive orange throats. Later work showed that the blue males are altruistic to other blue males, with three key traits: they signal with blue color, they recognize and settle next to other (unrelated) blue males, and they will even defend their partner against orange, to the death. This is the hallmark of another game of cooperation that involves a green-beard effect.
The females in the same population have the same throat colours, and this affects how many offspring they produce and the size of the progeny, which generates cycles in density, yet another game – the r-K game. Here, r is the Malthusian parameter governing exponential growth, and K is the carrying capacity of the environment. Orange females have larger clutches and smaller offspring which do well at low density. Yellow & blue females have smaller clutches and larger offspring which do well at high density. This generates perpetual cycles tightly tied to population density. The idea of cycles due to density regulation of two strategies originated with rodent researcher Dennis Chitty, ergo these kinds of games lead to "Chitty cycles". There are games within games within games embedded in natural populations. These drive RPS cycles in the males with a periodicity of four years and r-K cycles in females with a two year period.
The overall situation corresponds to the rock, scissors, paper game, creating a four-year population cycle. The RPS game in male side-blotched lizards does not have an ESS, but it has a Nash equilibrium (NE) with endless orbits around the NE attractor. Following this Side-blotched lizard research, many other three-strategy polymorphisms have been discovered in lizards and some of these have RPS dynamics merging the male game and density regulation game in a single sex (males). More recently, mammals have been shown to harbour the same RPS game in males and r-K game in females, with coat-colour polymorphisms and behaviours that drive cycles. This game is also linked to the evolution of male care in rodents, and monogamy, and drives speciation rates. There are r-K strategy games linked to rodent population cycles (and lizard cycles).
When he read that these lizards were essentially engaged in a game with a rock-paper-scissors structure, John Maynard Smith is said to have exclaimed "They have read my book!".
== Signalling, sexual selection and the handicap principle ==
Aside from the difficulty of explaining how altruism exists in many evolved organisms, Darwin was also bothered by a second conundrum – why a significant number of species have phenotypical attributes that are patently disadvantageous to them with respect to their survival – and should by the process of natural selection be selected against – e.g. the massive inconvenient feather structure found in a peacock's tail. Regarding this issue Darwin wrote to a colleague "The sight of a feather in a peacock's tail, whenever I gaze at it, makes me sick." It is the mathematics of evolutionary game theory that has not only explained the existence of altruism, but also explains the totally counterintuitive existence of the peacock's tail and other such biological encumbrances.
On analysis, problems of biological life are not at all unlike the problems that define economics – eating (akin to resource acquisition and management), survival (competitive strategy) and reproduction (investment, risk and return). Game theory was originally conceived as a mathematical analysis of economic processes and indeed this is why it has proven so useful in explaining so many biological behaviours. One important further refinement of the evolutionary game theory model that has economic overtones rests on the analysis of costs. A simple model of cost assumes that all competitors suffer the same penalty imposed by the game costs, but this is not the case. More successful players will be endowed with or will have accumulated a higher "wealth reserve" or "affordability" than less-successful players. This wealth effect in evolutionary game theory is represented mathematically by "resource holding potential (RHP)" and shows that the effective cost to a competitor with a higher RHP is not as great as for a competitor with a lower RHP. As a higher RHP individual is a more desirable mate in producing potentially successful offspring, it is only logical that with sexual selection RHP should have evolved to be signalled in some way by the competing rivals, and for this to work this signalling must be done honestly. Amotz Zahavi has developed this thinking in what is known as the "handicap principle", where superior competitors signal their superiority by a costly display. As higher RHP individuals can properly afford such a costly display this signalling is inherently honest, and can be taken as such by the signal receiver. In nature this is best illustrated by the costly plumage of the peacock. The mathematical proof of the handicap principle was developed by Alan Grafen using evolutionary game-theoretic modelling.
== Coevolution ==
Two types of dynamics:
Evolutionary games which lead to a stable situation or point of stasis for contending strategies which result in an evolutionarily stable strategy
Evolutionary games which exhibit a cyclic behaviour (as with RPS game) where the proportions of contending strategies continuously cycle over time within the overall population
A third, coevolutionary dynamic combines intra-specific and inter-specific competition. Examples include predator-prey competition and host-parasite co-evolution, as well as mutualism. Evolutionary game models have been created for pairwise and multi-species coevolutionary systems. The general dynamic differs between competitive systems and mutualistic systems.
In competitive (non-mutualistic) inter-species coevolutionary system the species are involved in an arms race – where adaptations that are better at competing against the other species tend to be preserved. Both game payoffs and replicator dynamics reflect this. This leads to a Red Queen dynamic where the protagonists must "run as fast as they can to just stay in one place".
A number of evolutionary game theory models have been produced to encompass coevolutionary situations. A key factor applicable in these coevolutionary systems is the continuous adaptation of strategy in such arms races. Coevolutionary modelling therefore often includes genetic algorithms to reflect mutational effects, while computers simulate the dynamics of the overall coevolutionary game. The resulting dynamics are studied as various parameters are modified. Because several variables are simultaneously at play, solutions become the province of multi-variable optimisation. The mathematical criteria of determining stable points are Pareto efficiency and Pareto dominance, a measure of solution optimality peaks in multivariable systems.
Carl Bergstrom and Michael Lachmann apply evolutionary game theory to the division of benefits in mutualistic interactions between organisms. Darwinian assumptions about fitness are modeled using replicator dynamics to show that the organism evolving at a slower rate in a mutualistic relationship gains a disproportionately high share of the benefits or payoffs.
== Extending the model ==
A mathematical model analysing the behaviour of a system needs initially to be as simple as possible to aid in developing a base understanding of the fundamentals, or "first order effects", pertaining to what is being studied. With this understanding in place it is then appropriate to see if other, more subtle, parameters (second order effects) further impact the primary behaviours or shape additional behaviours in the system. Following Maynard Smith's seminal work in evolutionary game theory, the subject has had a number of very significant extensions which have shed more light on understanding evolutionary dynamics, particularly in the area of altruistic behaviours. Some of these key extensions to evolutionary game theory are:
=== Spatial games ===
Geographic factors in evolution include gene flow and horizontal gene transfer. Spatial game models represent geometry by putting contestants in a lattice of cells: contests take place only with immediate neighbours. Winning strategies take over these immediate neighbourhoods and then interact with adjacent neighbourhoods. This model is useful in showing how pockets of co-operators can invade and introduce altruism in the Prisoners Dilemma game, where Tit for Tat (TFT) is a Nash Equilibrium but NOT also an ESS. Spatial structure is sometimes abstracted into a general network of interactions. This is the foundation of evolutionary graph theory.
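A lattice model of this kind can be sketched briefly. The following is a minimal variant in the spirit of Nowak and May's spatial Prisoner's Dilemma, not a reconstruction of any specific study: each cell cooperates or defects, scores against its four neighbours, then imitates the best-scoring cell in its neighbourhood. The temptation payoff b and the other parameters are illustrative.

```python
# Minimal spatial prisoner's dilemma on a torus: each cell cooperates
# (1) or defects (0), scores against its 4 neighbours, then copies the
# strategy of the highest-scoring cell in its neighbourhood (itself
# included). Payoffs follow the Nowak-May convention R=1, T=b, S=P=0.
import numpy as np

rng = np.random.default_rng(2)
N, b = 30, 1.4                                   # lattice size, temptation
grid = (rng.random((N, N)) < 0.7).astype(int)    # 1 = cooperate

def neighbours(i, j):
    return [((i - 1) % N, j), ((i + 1) % N, j),
            (i, (j - 1) % N), (i, (j + 1) % N)]

def payoff(me, other):
    if me == 1:
        return 1.0 if other == 1 else 0.0
    return b if other == 1 else 0.0

for generation in range(30):
    score = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            score[i, j] = sum(payoff(grid[i, j], grid[x, y])
                              for x, y in neighbours(i, j))
    new = grid.copy()
    for i in range(N):
        for j in range(N):
            best = max(neighbours(i, j) + [(i, j)],
                       key=lambda c: score[c])
            new[i, j] = grid[best]
    grid = new

print("cooperator fraction after 30 generations:", grid.mean())
```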
=== Effects of having information ===
In evolutionary game theory, as in conventional game theory, the effect of signalling (the acquisition of information) is of critical importance, as in indirect reciprocity in the Prisoner's Dilemma (where contests between the SAME paired individuals are NOT repetitive). This models the reality of most normal social interactions, which are non-kin related. Unless a probability measure of reputation is available in the Prisoner's Dilemma, only direct reciprocity can be achieved. With this information, indirect reciprocity is also supported.
Alternatively, agents might have access to an arbitrary signal initially uncorrelated to strategy, but which becomes correlated due to evolutionary dynamics. This is the green-beard effect (see side-blotched lizards, above) or evolution of ethnocentrism in humans. Depending on the game, it can allow the evolution of either cooperation or irrational hostility.
From molecular to multicellular level, a signaling game model with information asymmetry between sender and receiver might be appropriate, such as in mate attraction or evolution of translation machinery from RNA strings.
=== Finite populations ===
Many evolutionary games have been modelled in finite populations to see the effect this may have, for example in the success of mixed strategies.
== See also ==
== Notes ==
== References ==
== Further reading ==
Davis, Morton; "Game Theory – A Nontechnical Introduction", Dover Books, ISBN 0-486-29672-5
Dawkins, Richard (2006). The Selfish Gene (30th anniversary ed.). Oxford: Oxford University Press. ISBN 978-0-19-929115-1.
Dugatkin and Reeve; "Game Theory and Animal Behavior", Oxford University Press, ISBN 0-19-513790-6
Hofbauer and Sigmund; "Evolutionary Games and Population Dynamics", Cambridge University Press, ISBN 0-521-62570-X
Kohn, Marek; "A Reason for Everything", Faber and Faber, ISBN 0-571-22393-1
Li Richter and Lehtonen (Eds.) "Half a century of evolutionary games: a synthesis of theory, application and future directions", Philosophical Transactions of the Royal Society B, Volume 378, Issue 1876
Sandholm, William H.; "Population Games and Evolutionary Dynamics", The MIT Press, ISBN 0262195879
Segerstrale, Ullica; "Nature's Oracle – The life and work of W.D. Hamilton", Oxford University Press, 2013, ISBN 978-0-19-860727-4
Sigmund, Karl; "Games of Life", Penguin Books, also Oxford University Press, 1993, ISBN 0198547838
Vincent and Brown; "Evolutionary Game Theory, Natural Selection and Darwinian Dynamics", Cambridge University Press, ISBN 0-521-84170-4
== External links ==
Theme issue 'Half a century of evolutionary games: a synthesis of theory, application and future directions' (2023)
Evolutionary game theory at the Stanford Encyclopedia of Philosophy
Evolving Artificial Moral Ecologies at The Centre for Applied Ethics, University of British Columbia
"Life and work of John Maynard Smith, interviewed by Richard Dawkins". Web of Stories. 1997 – via YouTube. (via Web of Stories) | Wikipedia/Evolutionary_game_theory |
Systems neuroscience is a subdiscipline of neuroscience and systems biology that studies the structure and function of various neural circuits and systems that make up the central nervous system of an organism. Systems neuroscience encompasses a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks. At this level of analysis, neuroscientists study how different neural circuits work together to analyze sensory information, form perceptions of the external world, form emotions, make decisions, and execute movements. Researchers in systems neuroscience are concerned with the relation between molecular and cellular approaches to understanding brain structure and function, as well as with the study of high-level mental functions such as language, memory, and self-awareness (which are the purview of behavioral and cognitive neuroscience). To study these relations, systems neuroscientists typically employ techniques for observing networks of neurons as they function, such as electrophysiology (using single-unit or multi-electrode recording), functional magnetic resonance imaging (fMRI), and PET scans. The term is commonly used in an educational framework: a common sequence of graduate school neuroscience courses consists of cellular/molecular neuroscience for the first semester, then systems neuroscience for the second semester. It is also sometimes used to distinguish a subdivision within a neuroscience department in a university.
== Major branches ==
Systems neuroscience has three major branches in relation to measuring the brain: behavioral neuroscience, computational modeling, and brain activity. Through these three branches, it breaks down the core concepts of systems neuroscience and provides valuable information about how the functional systems of an organism operate both independently and intertwined with one another.
=== Behavioral neuroscience ===
Behavioral neuroscience in relation to systems neuroscience focuses on representational dissimilarity matrices (RDMs), which categorize brain-activity patterns and compare them across different conditions, such as the difference in brain activity when observing an animal as opposed to an inanimate object. These models give a quantitative representation of behavior while providing comparable models of the patterns observed.[5] Correlations or anticorrelations between brain-activity patterns are used during experimental conditions to distinguish the processing of each brain region when stimuli are presented.
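A minimal sketch of how an RDM can be computed, using randomly generated stand-in data in place of measured activity patterns, is:
    import numpy as np
    # Representational dissimilarity matrix: entry (i, j) is the correlation
    # distance between the activity patterns evoked by conditions i and j.
    # The data here are random stand-ins, not measurements.
    rng = np.random.default_rng(0)
    patterns = rng.standard_normal((4, 100))    # 4 conditions x 100 "voxels"
    z = patterns - patterns.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    rdm = 1.0 - z @ z.T                         # 1 - Pearson correlation
    print(np.round(rdm, 2))   # ~0 on the diagonal; larger = more dissimilar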
=== Computational modeling ===
Computational models provide a baseline account of brain activity, typically represented by the firing of a single neuron. This is essential for understanding systems neuroscience, as it shows the physical changes that occur during functional changes in an organism. While these models are important for understanding brain activity, a one-to-one correspondence of neuron firing has not yet been completely uncovered. Different measurements of the same activity lead to different patterns when, in theory, the patterns should be the same, or at least similar to one another. Studies show fundamental differences when it comes to measuring the brain, and science strives to investigate this dissimilarity.
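One of the simplest such single-neuron models is the leaky integrate-and-fire neuron; the sketch below uses illustrative parameter values rather than values from any particular study:
    # Leaky integrate-and-fire neuron: the membrane potential v decays
    # toward rest, integrates input current, and emits a spike (then
    # resets) whenever it crosses threshold.  Parameters are illustrative.
    dt, T = 0.1, 200.0                           # time step and duration (ms)
    tau, v_rest, v_th, v_reset = 10.0, -65.0, -50.0, -65.0
    R, I = 10.0, 1.8                             # membrane resistance, input current
    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        v += dt / tau * (-(v - v_rest) + R * I)  # dv/dt = (-(v - v_rest) + R*I) / tau
        if v >= v_th:                            # threshold crossed: spike
            spikes.append(step * dt)
            v = v_reset                          # reset after the spike
    print(len(spikes), "spikes in", T, "ms")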
=== Brain activity ===
Brain activity and brain imaging help scientists understand the differences between the functional systems of an organism, in combination with computational models and the understanding of behavioral neuroscience. The three major branches of systems neuroscience work together to provide the most accurate information about brain activity that neuroimaging currently allows. While there can always be improvements to brain-activity measurements, typical imaging studies through electrophysiology can already provide massive amounts of information about the systems of an organism and how they may work intertwined with one another. For example, using the core branches of systems neuroscience, scientists have been able to dissect a migraine's effects on the nervous system by observing brain-activity dissimilarities and using computational modeling to compare a functioning brain with a brain affected by a migraine.[6]
== Observations ==
Systems neuroscience is observed through electrophysiology, which focuses on the electrical activity of biological systems in an organism. Through electrophysiology studies, the activity levels of different systems in the body help explain abnormalities of systemic functioning, such as an abnormal heartbeat rhythm or a stroke. While a major clinical focus of electrophysiology is the heart, it also provides informational scanning of brain activity in relation to other bodily functions, which can be useful for connecting neurological activity between systems.
Although systems neuroscience is generally studied in relation to human functioning, many studies have been conducted on Drosophila, the small fruit fly, which is considered easier to work with experimentally because of its simpler brain structure and more controllable genetic and environmental factors. While there are strong dissimilarities between the functioning capabilities of a fruit fly and those of a human, these studies still provide valuable insight into how a human brain might work.
Neural circuits and neuron firing are more easily observable in fruit flies through functional brain imaging, as neuronal pathways are simplified and therefore easier to follow. These pathways may be simple, but understanding the basis of neuron firing in them can lead to important studies of human neuronal pathways and, eventually, to a one-to-one neuron correspondence when a system is functioning.[7]
== See also ==
== References ==
3. Bear, M. F. et al. Eds. (1995). Neuroscience: Exploring The Brain. Baltimore, Maryland, Williams and Wilkins. ISBN 0-7817-3944-6
4. Hemmen J. L., Sejnowski T. J. (2006). 23 Problems in Systems Neuroscience. Oxford University Press. ISBN 0-19-514822-3
5. Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4–4. https://doi.org/10.3389/neuro.06.004.2008
6. Brennan, K. C., & Pietrobon, D. (2018). A Systems Neuroscience Approach to Migraine. Neuron (Cambridge, Mass.), 97(5), 1004–1021. https://doi.org/10.1016/j.neuron.2018.01.029
7. Kazama, H. (2015). Systems neuroscience in Drosophila: Conceptual and technical advantages. Neuroscience, 296, 3–14. https://doi.org/10.1016/j.neuroscience.2014.06.035 | Wikipedia/Systems_neuroscience |
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world.
The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems.
== Overview ==
Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.
Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are, and hence control engineering is often viewed as a subfield of electrical engineering.
Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.
In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a proportional–integral–derivative controller (PID controller) system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved.
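A minimal sketch of such a loop, pairing a discrete PID law with a crude first-order car model (all gains and constants here are illustrative assumptions, not values from any real vehicle), is:
    # Discrete PID controller driving a toy cruise-control plant.
    kp, ki, kd = 0.8, 0.3, 0.05                # illustrative PID gains
    dt, setpoint = 0.1, 25.0                   # time step (s), target speed (m/s)
    speed, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(600):                       # simulate 60 s
        err = setpoint - speed
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative   # PID control law
        prev_err = err
        speed += dt * (u - 0.1 * speed)        # crude plant: torque minus drag
    print(round(speed, 2))                     # settles near the 25 m/s setpoint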
Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
== History ==
Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel.
This certainly was a successful device, as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply to entertain. The latter include the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop", automatic control devices include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.
In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.
Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.
Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering, and control theory was studied as a part of electrical engineering, since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow-responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, prior to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.
=== Mathematical modelling ===
David Quinn Mayne (1930–2024) was among the early developers of a rigorous mathematical method for analysing model predictive control (MPC) algorithms. MPC is currently used in tens of thousands of applications and is a core part of the advanced control technology of hundreds of process control producers. Its major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. Mayne's work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives.
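Production MPC solves a constrained optimisation with a dedicated solver at every sampling instant; the brute-force sketch below, with an invented scalar plant, only illustrates the receding-horizon idea and a hard input bound:
    import itertools
    # Receding-horizon control of x[k+1] = a*x[k] + b*u[k]: search a coarse
    # grid of input sequences over a short horizon, apply only the first
    # input, then re-plan from the new state.
    a, b = 1.1, 0.5                            # open-loop unstable toy plant
    horizon = 4
    levels = [-1.0, -0.5, 0.0, 0.5, 1.0]       # hard constraint |u| <= 1
    def plan(x):
        best_cost, best_u = float("inf"), 0.0
        for seq in itertools.product(levels, repeat=horizon):
            xk, cost = x, 0.0
            for u in seq:
                xk = a * xk + b * u
                cost += xk * xk + 0.1 * u * u  # quadratic stage cost
            if cost < best_cost:
                best_cost, best_u = cost, seq[0]
        return best_u                          # apply only the first move
    x = 2.0
    for _ in range(15):
        x = a * x + b * plan(x)
    print(round(x, 3))                         # regulated toward zero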
== Control systems ==
== Control theory ==
== Education ==
At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses can be instructed in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist; for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in control engineering, and there are the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University.
Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domains, which requires a thorough background in elementary mathematics and the Laplace transform, called classical control theory. In linear control, the student does frequency- and time-domain analysis. Digital control and nonlinear control courses require the Z-transform and algebra respectively, and could be said to complete a basic control education.
== Careers ==
A control engineer's career starts with a bachelor's degree and can continue through graduate study. Control engineering degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most respondents worked as control engineers in a variety of roles.
Not many careers are classified simply as "control engineer"; most are specific roles that bear some resemblance to the overarching discipline of control engineering. A majority of the control engineers who took the survey in 2019 are system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance, and are some variation of control engineering.
Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Employers of control engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control engineers can earn around $66k annually at Lockheed Martin Corp. They can also earn up to $96k annually at General Motors Corporation. Process control engineers, typically found in refineries and specialty chemical plants, can earn upwards of $90k annually.
In India, control systems engineering is offered at different levels, with diploma, undergraduate and postgraduate programmes. These programmes require the candidate to have chosen physics, chemistry and mathematics in their secondary schooling, or a relevant bachelor's degree for postgraduate studies.
== Recent advancement ==
Originally, control engineering was all about continuous systems. The development of computer control tools created a requirement for discrete control system engineering, because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled and consist of both digital and analog components.
Therefore, at the design stage either:
Digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or
Analog components are mapped into the discrete domain and the design is carried out there.
The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers.
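In either case the digital implementation ultimately requires mapping a continuous-domain transfer function onto a discrete one; a common mapping is the bilinear (Tustin) transform, s = (2/T)(z − 1)/(z + 1). The sketch below applies it to a first-order low-pass H(s) = 1/(τs + 1), an illustrative stand-in for a real design:
    # Bilinear (Tustin) discretization of H(s) = 1 / (tau*s + 1).
    # Substituting s = (2/T)(1 - z^-1)/(1 + z^-1) gives
    # H(z) = (1 + z^-1) / ((1 + k) + (1 - k) z^-1), with k = 2*tau/T.
    tau, T = 1.0, 0.1                 # time constant (s), sample period (s)
    k = 2 * tau / T
    b0 = b1 = 1 / (1 + k)
    a1 = (1 - k) / (1 + k)
    y, u_prev = 0.0, 0.0
    for n in range(100):              # unit-step response, 10 s of samples
        u = 1.0
        y = b0 * u + b1 * u_prev - a1 * y   # difference equation
        u_prev = u
    print(round(y, 3))                # approaches the continuous DC gain of 1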
Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design (CAD), and now to computer-automated design (CAutoD), which has been made possible by evolutionary computation. CAutoD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and the invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.
Resilient control systems extend the traditional focus of addressing only planned disturbances to frameworks and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.
== See also ==
== References ==
== Further reading ==
D. Q. Mayne (1965). P. H. Hammond (ed.). A Gradient Method for Determining Optimal Control of Nonlinear Stochastic Systems in Proceedings of IFAC Symposium, Theory of Self-Adaptive Control Systems. Plenum Press. pp. 19–27.
Bennett, Stuart (June 1986). A history of control engineering, 1800-1930. IET. ISBN 978-0-86341-047-5.
Bennett, Stuart (1993). A history of control engineering, 1930-1955. IET. ISBN 978-0-86341-299-8.
Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
Arnold Zankl (2006). Milestones in Automation: From the Transistor to the Digital Factory. Wiley-VCH. ISBN 978-3-89578-259-6.
Franklin, Gene F.; Powell, J. David; Emami-Naeini, Abbas (2014). Feedback control of dynamic systems (7th ed.). Stanford Cali. U.S.: Pearson. p. 880. ISBN 9780133496598.
== External links ==
Control Labs Worldwide
The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook
Control System Integrators Association
List of control systems integrators
Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG)
Systems Science & Control Engineering: An Open Access Journal | Wikipedia/Control_engineering |
An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system.
The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences.
In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter.
The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes.
== Social sciences ==
In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event.
David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has overaccumulated in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the 1997 Asian financial crisis, involving "hedge fund raiding" of national currencies, as examples of this.
Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated system theory to describe society and its components.
The sociology of religion finds both open and closed systems within the field of religion.
== Thermodynamics ==
== Systems engineering ==
== See also ==
Business process
Complex system
Dynamical system
Glossary of systems theory
Ludwig von Bertalanffy
Maximum power principle
Non-equilibrium thermodynamics
Open system (computing)
Open System Environment Reference Model
Openness
Phantom loop
Thermodynamic system
== References ==
== Further reading ==
Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438.
Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronics Engineers.
== External links ==
OPEN SYSTEM, Principia Cybernetica Web, 2007. | Wikipedia/Environment_(systems) |
Biological systems engineering or biosystems engineering is a broad-based engineering discipline with particular emphasis on non-medical biology. It can be thought of as a subset of the broader notion of biological engineering or biotechnology, though not in the respects that pertain to biomedical engineering, as biosystems engineering tends to focus less on medical applications than on agriculture, ecosystems, and food science. The discipline focuses broadly on environmentally sound and sustainable engineering solutions to meet societies' ecologically related needs. Biosystems engineering integrates the expertise of fundamental engineering fields with expertise from non-engineering disciplines.
== Background and organization ==
Many college and university biological engineering departments have a history of being grounded in agricultural engineering and have only in the past two decades or so changed their names to reflect the movement towards more diverse, biologically based engineering programs. This major is sometimes called agricultural and biological engineering, biological and environmental engineering, etc., at different universities, generally reflecting interests of local employment opportunities.
Since biological engineering covers a wide spectrum, many departments now offer specialization options. Depending on the department and the specialization options offered within each program, curricula may overlap with other related fields. There are a number of different titles for BSE-related departments at various universities. The professional societies commonly associated with many Biological Engineering programs include the American Society of Agricultural and Biological Engineers (ASABE) and the Institute of Biological Engineering (IBE), which generally encompasses BSE. Some programs also participate in the Biomedical Engineering Society (BMES) and the American Institute of Chemical Engineers (AIChE).
A biological systems engineer has a background in what both environmental engineers and biologists do, thus bridging the gap between engineering and the (non-medical) biological sciences – although this is variable across academic institutions. For this reason, biological systems engineers are becoming integral parts of many environmental engineering firms, federal agencies, and biotechnology industries. A biological systems engineer will often address the solution to a problem from the perspective of employing living systems to enact change. For example, biological treatment methodologies can be applied to provide access to clean drinking water or for sequestration of carbon dioxide.
== Specializations ==
Land and water resources engineering
Food engineering and bioprocess engineering
Machinery systems engineering
Natural resources and environmental engineering
Biomedical engineering
== Academic programs in agricultural and biological systems engineering ==
Below is a listing of known academic programs that offer bachelor's degrees (B.S. or B.S.E.) in what ABET and/or ASABE terms "agricultural engineering", "biological systems engineering", "biological engineering", or similarly named programs. ABET accredits college and university programs in the disciplines of applied science, computing, engineering, and engineering technology. ASABE defines accredited programs within the scope of Ag/Bio Engineering.
=== North America ===
=== Central and South America ===
=== Europe ===
=== Asia ===
=== Africa ===
== See also ==
== References ==
== Further reading ==
2003, Dennis R. Heldman (ed), Encyclopedia of agricultural, food, and biological engineering.
2002, Teruyuki Nagamune, Tai Hyun Park & Mark R. Marten (ed), Biological Systems Engineering, Washington, D.C. : American Chemical Society, 320 pages.
2012, Paige Brown Jarreau, What is Biological Engineering, http://www.scilogs.com/from_the_lab_bench/what-is-biological-engineering-ibe-2012/ Archived 2016-08-10 at the Wayback Machine
== External links ==
UC San Diego, Department of Bioengineering, UCSD BE part of University of California, San Diego | Wikipedia/Biological_systems_engineering |
Systems pharmacology is the application of systems biology principles to the field of pharmacology. It seeks to understand how drugs affect the human body as a single complex biological system.
Instead of considering the effect of a drug to be the result of one specific drug-protein interaction, systems pharmacology considers the effect of a drug to be the outcome of the network of interactions a drug may have. In 1992, an article on systems medicine and pharmacology was published in China. Networks of interaction may include chemical-protein, protein–protein, genetic, signalling and physiological (at cellular, tissue, organ and whole body levels). Systems pharmacology uses bioinformatics and statistics techniques to integrate and interpret these networks.
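As an illustration of the network view, the sketch below builds a tiny chemical-protein interaction graph with the networkx library; the edge list is a simplified toy example, not a curated dataset:
    import networkx as nx
    # Toy drug-protein network: systems pharmacology treats a drug's effect
    # as a property of such a network rather than of a single interaction.
    G = nx.Graph()
    G.add_edges_from([("aspirin", "PTGS1"), ("aspirin", "PTGS2"),
                      ("ibuprofen", "PTGS1"), ("ibuprofen", "PTGS2"),
                      ("celecoxib", "PTGS2")])   # simplified toy edges
    # drugs sharing a target are candidate sources of drug interactions
    for d1, d2 in [("aspirin", "ibuprofen"), ("aspirin", "celecoxib")]:
        shared = set(G[d1]) & set(G[d2])
        print(d1, d2, "share targets:", sorted(shared))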
Systems pharmacology can be applied to drug safety studies as a complement to pharmacoepidemiology.
== See also ==
Quantitative Systems Pharmacology
Drug interaction
== PhD programs ==
PharMetrX: Pharmacometrics & Computational Disease Modelling (annual call for applications, July - Sept 15th)
== References ==
== External links ==
Quantitative Systems Pharmacology white paper
Systems Pharmacology at Harvard
What is (Quantitative) Systems Pharmacology? by John Russell | Wikipedia/Systems_pharmacology |
Systems philosophy is a discipline aimed at constructing a new philosophy (in the sense of worldview) by using systems concepts. The discipline was first described by Ervin Laszlo in his 1972 book Introduction to Systems Philosophy: Toward a New Paradigm of Contemporary Thought. It has been described as the "reorientation of thought and world view ensuing from the introduction of "systems" as a new scientific paradigm".
== Overview ==
Soon after Laszlo founded systems philosophy, it was placed in context by Ludwig von Bertalanffy, one of the founders of general system theory, when he categorized three domains within systemics, namely:
"Systems science", which is concerned with "scientific exploration and theory of "systems" in the various sciences...and general system theory as doctrine of principles applying to all systems";
"Systems technology", which is concerned with "the problems arising in modern technology and society, comprising both the "hardware" of computers, automation self-regulating machinery etc., and the "software" of new theoretical developments and disciplines"; and
"Systems philosophy", which is concerned with "the new philosophy of nature" which regards the world as a great organization" that is "organismic" rather than "mechanistic" in nature.
Systems philosophy consists of four main areas:
"Systems ontology", which is concerned "with what is meant by "system" and how systems are realized at various levels of the world of observation";
"Systems paradigms", which is concerned with developing worldviews which "takes [humankind] as one species of concrete and actual system, embedded in encompassing natural hierarchies of likewise concrete and actual physical, biological, and social systems";
"Systems axiology", which is concerned with developing models of systems that involve "humanistic concerns", and views "symbols, values, social entities and cultures" as "something very "real"" and having an "embeddedness in a cosmic order of hierarchies"; and
"Applied systems philosophy", which is concerned with using the insights from the other branches of systems philosophy to solve practical problems, especially social and philosophical ones.
The term "systems philosophy" is often used as a convenient shorthand to refer to "the philosophy of systems", but this usage can be misleading. The philosophy of systems is in fact merely the element of systems philosophy called "systems ontology" by von Bertalanffy and "systems metaphysics" by Laszlo. Systems ontology provides important grounding for systems thinking but does not encompass the essential focus of systems philosophy, which is about articulating a worldview grounded in systems perspectives and humanistic concerns.
== Origin and development of systems philosophy ==
=== The founding of systems philosophy ===
Systems philosophy was founded by Ervin Laszlo in 1972 with his book Introduction to Systems Philosophy: Toward a New Paradigm of Contemporary Thought. The Foreword was written by Ludwig von Bertalanffy.
"Systems philosophy", in Ervin Laszlo's sense of the term, means using the systems perspective to model the nature of reality, and to use this to solve important human problems (Laszlo, 1972). Laszlo developed the idea behind systems philosophy independently of von Bertalanffy's work on General System Theory (published in 1968), but they met before Introduction to Systems Philosophy was published and the decision to call the new discipline "systems philosophy" was their joint one. Writing Introduction to Systems Philosophy took five years, and in his autobiography Laszlo calls it "my major work".
Laszlo's "great idea", that made systems philosophy possible, was that the existence of a general system theory that captures the "patterns" that recur across the Systemics, who themselves capture "patterns" that recur across the specialized disciplines, entails that the world is organised as a whole, and thus has an underlying unity. In this light, nature's special domains (as characterized by the specialized sciences) are contingent expressions or arrangements or projections of an underlying intelligibly ordered reality. If the nature of this underlying unity and the way it conditions phenomenal reality could be understood, it would provide a powerful aid to solving pressing sociological problems and answering deep philosophical questions.
In the subsequent years, systems philosophy has been developed in four important ways, discussed below.
=== Laszlo and evolutionary futures ===
The first development was due to Ervin Laszlo himself, and is grounded in the concern that the way in which global resources are exploited does not take global systemic effects into account, and appears likely to have catastrophic global consequences. Work in this area is focused on developing models and interventions that can bring about human thriving in a sustainable way on a global scale. Laszlo promotes work in this area through the Club of Budapest International Foundation, of which he is the founder and President, and the journal World Futures: The Journal of General Evolution, of which he is the editor.
=== Ozbekhan and the global problematique ===
A contemporary of Laszlo, Hasan Ozbekhan, in the original proposal to the Club of Rome, identified 49 Continuous Critical Problems (CCPs) that intertwine to generate the Global Problematique. This work was set aside by the Club as too humanistic, and it adopted the system dynamics approach of Jay Forrester instead. This decision resulted in the volume The Limits to Growth.
Ozbekhan sat down with Alexander Christakis and revisited the 49 CCPs in 1995 using the methodology of Structured Dialogic Design (SDD), which was not available in 1970. They generated an influence map that identified leverage points for alleviating the global problematique. Subsequently, an online class at Flinders University generated an influence map that bore remarkable similarities to the one produced by Ozbekhan and Christakis. In 2013, Reynaldo Trevino and Bethania Arango aligned the 15 Global Challenges of the Millennium Project with the 49 CCPs, showing the influence among the challenges and identifying actions for addressing the leverage points.
=== Apostel and worldview integration ===
The second strand was inspired by Leo Apostel, and is grounded in the concern that disciplinary worldviews are becoming increasingly fragmented, thus undermining the potential for the inter-disciplinary and trans-disciplinary work required to address the world's pressing social, cultural and economic problems. This effort was initiated via the publication in 1994 by Apostel et al. of the book Worldviews: from fragmentation to integration. Apostel promoted this agenda by forming the Worldviews Group and founding what is now the Leo Apostel Center for Interdisciplinary Studies in the Free University of Brussels. The work of these units is focused on developing systematic models of the structure and nature of worldviews and using this to promote work towards a unified perspective on the world.
=== Midgley and systemic intervention ===
The third initiative was led by Gerald Midgley, and reflects concerns that developments in the philosophy of language, philosophy of science and philosophy of sociology suggested that objectivity in modelling reality is an unattainable ideal, because human values condition what is included or excluded in any investigation ("content selection"), and condition how subjects of interest are delineated ("boundary critique"). The implication that it may be impossible in practice to obtain objective agreement about the nature of reality and about the "rightness" of theories inspired Midgley to develop practices for systemic interventions that could bypass these debates by focusing on the processes involved in making boundary judgements in practical situations. This supports systemic intervention practices that exploit, rather than trying to unify, the plurality of theories and methods that reflect different value-conditioned perspectives. This perspective is grounded in the recognition that values have to be overtly taken into account in a realistic systems paradigm, contrary to the mechanism that is still widely used in modelling the behavior of natural systems. The central text of this approach is Midgley's 2000 book Systemic Intervention: Philosophy, Methodology, Practice. This approach is now called critical systems thinking ("critical" in the sense of "reflective"), and is a major focus of the University of Hull's Centre for Systems Studies, of which Midgley is the Director.
=== Rousseau and value realism ===
The fourth development was initiated by David Rousseau, and is grounded in the concern that the value relativism dominating academic discourse is problematic for social and individual welfare, is contrary to the holistic implications of systems philosophy, and is inconsistent with universalist aspects of moral intuitions and spiritual experiences. He is promoting research towards elucidating the ontological foundations of values and normative intuitions, so as to incorporate values into Laszlo's model of the natural systems in a way that is holistic (as Apostel advocated), non-reductive (as Midgley advocates), and empirically supported (as William James advocated). Rousseau promotes this work through the Center for Systems Philosophy, of which he is the founder and Director, and collaborative projects with the University of Hull, where he is a visiting fellow in the Centre for Systems Studies and a full member of the Centre for Spirituality Studies.
== Controversies in systems philosophy ==
=== The relationship of systems philosophy to general system theory ===
The relationship of general system theory (GST) to systems philosophy (SP) has been the subject of a technical debate within the field of systems studies.
GST was presented in 1969 by Von Bertalanffy as a theory that encapsulates "models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. ... It [is] a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general", so that the subject matter of GST is "the derivation of those principles which are valid for "systems" in general". However, by the early 1970s he was seeking to broaden the term to stand for the general subject of systems inquiry, arguing that systems science (which includes the Systemics and the 'classical' version of GST), systems technology and systems philosophy are "aspects" of GST that "are not separable in content but distinguishable in intention". This perspective is supported by modern von Bertalanffy scholars such as David Pouvreau.
An alternative perspective defends the original intent behind GST, and considers systems philosophy to be an endeavor with a distinct objective from that of GST. This perspective follows the implications Ervin Laszlo laid out in his Introduction to Systems Philosophy, and regards systems philosophy as following up on an implication of GST, namely that there is an organized reality underlying the phenomenal world, and that GST can guide us towards an understanding of it which systems philosophy seeks to elucidate. From this perspective GST "is the foundation upon which we can build ... systems philosophy". This view was taken up by other systems scientists such as Béla H. Bánáthy, who regarded systems philosophy as one of four distinct "conceptual domains" of systems inquiry alongside theory, methodology and application, and the systems philosopher David Rousseau, who following Laszlo reiterated that GST provides a formal model of the nature of Nature, but that an understanding of the nature of Nature requires an interpretation of GST involving concrete commitments that systems philosophy aims to provide.
David Pouvreau has suggested that this quandary can be resolved by the coinage of the new term "general systemology", to replace the usage of GST in the sense of the encompassing conception that the later Von Bertalanffy envisaged.
=== Perspectivism vs. realism in systems philosophy ===
An important debate in systems philosophy reflects on the nature of natural systems, and asks whether reality is really composed of objectively real systems, or whether the concept of "natural systems" merely reflects a way in which humans might regard the world in terms relative to their own concerns.
Ervin Laszlo's original conception of systems philosophy was as "a philosophy of natural systems", and as such to use the systems paradigm to show how nature is organized, and how that organization gives rise to the functional properties that we find exercised in the processes in Nature. However, this was immediately problematic, because it clearly is the case that natural systems are open systems, and continuously exchange matter and energy with their environment. This might make it look as if the boundary between a system and its environment is a function of the interests of the observer, and not something inherent in an actually existing system. This was taken by some to mean that system boundaries are subjective constructions, e.g., C. West Churchman argued that "boundaries are social or personal constructs that define the limits of the knowledge that is taken as pertinent in an analysis".
Ervin Laszlo acknowledged the problem without conceding to an ultimate relativism, saying "we can conceive of no radical separation between forming and being formed, and between substance and space and time…the universe is conceived as a continuum [in which] spatio-temporal events disclose themselves as "stresses" or "tensions" within the constitutive matrix…the cosmic matrix evolves in patterned flows…some flows hit upon configurations of intrinsic stability and thus survive, despite changes in their evolving environment…these we call systems." In this way Ervin Laszlo accommodated the intrinsic continuity of the cosmos understood as a plenum while insisting that it contained real systems whose properties emerge from the inherent dynamics of the universe.
Although solving social problems means taking social norms and perspectives into account, systems philosophy proposes that these problems have a "proper" solution because they are about real systems: as Alexander Laszlo pointed out, natural systems are "a complex of interacting parts that are interrelated in such a way that the interactions between them sustain a boundary-maintaining entity". In this way, the identity of a system is maintained over time despite continuing interactions with a changing environment. Systems can be destroyed or transformed, but absent radical interactions (e.g. the fission of an atom or the death of an organism) their identity is dynamically maintained by internal (autopoietic) processes. Although we can draw the boundaries around conceptual systems in ways that serve our needs or purposes, nature has (according to systems philosophy) intrinsic ways of drawing boundaries, and if we mismatch these in our models our 'solutions' might not work very well in practice.
In this way the answer to the ontological question about natural systems (do they exist?) is made conditional on epistemological virtue considerations: systems can be argued to exist if systems practice produces positive results in the real world. This debate in systems philosophy thus parallels the wider discussion in academia about the existence of a real world and the possibility of having objective knowledge about it (see e.g. the "science wars"), in which the technological success of science is often used as an argument favoring realism over relativism or constructivism. The systemic debate is far from resolved, as indeed is the case with the wider debate about constructivism, because natural systems include ones that exhibit values, purposes, and intentionality, and it is unclear how to explain such properties given what is known about the foundational nature of natural systems. This debate is therefore connected with the ones in philosophy of mind about the grounding of consciousnesses, and in axiology about the grounding of values.
== Research centers ==
Centre for Systems Philosophy, UK
Centre for Systems Studies, University of Hull, UK
Club of Budapest, Hungary
Leo Apostel Centre for Interdisciplinary Studies (CLEA), Free University of Brussels, Belgium
Worldviews, Belgium
== References ==
== Further reading ==
Diederik Aerts, B. D'Hooghe, R. Pinxten, and I. Wallerstein (Eds.). (2011). Worldviews, Science And Us: Interdisciplinary Perspectives On Worlds, Cultures And Society – Proceedings Of The Workshop On Worlds, Cultures And Society. World Scientific Publishing Company.
Diederik Aerts, Leo Apostel, B. De Moor, S. Hellemans, E. Maex, H. Van Belle, and J. Van der Veken (1994). Worldviews: from fragmentation to integration. Brussels: VUB Press.
Archie Bahm (1981). Five Types of Systems Philosophy. International Journal of General Systems, 6(4), 233–237.
Archie Bahm (1983). Five systems concepts of society. Behavioral Science, 28(3), 204–218.
Gregory Bateson (1979). Mind and nature : a necessary unity. New York: Dutton.
Gregory Bateson (2000). Steps to an ecology of mind. Chicago IL: University of Chicago Press.
Kenneth Boulding (1985). The World as a Total System. Beverly Hills, CA.: Sage Publications.
Mario Bunge (1977). Ontology I: The furniture of the world. Reidel.
Mario Bunge (1979). Ontology II: A World of Systems. Dordrecht: Reidel.
Mario Bunge (2010). Matter and Mind: A Philosophical Inquiry. New York, NY: Springer.
Francis Heylighen (2000). What is a world view? In F. Heylighen, C. Joslyn, & V. Turchin (Eds.), Principia Cybernetica Web (Principia Cybernetica, Brussels), http://cleamc11.vub.ac.be/WORLVIEW.html.
Arthur Koestler (1967). The Ghost in the Machine. Henry Regnery Co.
Alexander Laszlo & S. Krippner S. (1998) Systems theories: Their origins, foundations, and development. In J.S. Jordan (Ed.), Systems theories and a priori aspects of perception. Amsterdam: Elsevier Science, 1998. Ch. 3, pp. 47–74.
Laszlo, A. (1998) Humanistic and systems sciences: The birth of a third culture. Pluriverso, 3(1), April 1998. pp. 108–121.
Laszlo, A. & Laszlo, E. (1997) The contribution of the systems sciences to the humanities. Systems Research and Behavioral Science, 14(1), April 1997. pp. 5–19.
Ervin Laszlo (1972a). Introduction to Systems Philosophy: Toward a New Paradigm of Contemporary Thought. New York N.Y.: Gordon & Breach.
Laszlo, E. (1972b). The Systems View of the World: The Natural Philosophy of the New Developments in the Sciences. George Braziller.
Laszlo, E. (1973). A Systems Philosophy of Human Values. Systems Research and Behavioral Science, 18(4), 250–259.
Laszlo, E. (1996). The Systems View of the World: a Holistic Vision for our Time. Cresskill NJ: Hampton Press.
Laszlo, E. (2005). Religion versus Science: The Conflict in Reference to Truth Value, not Cash Value. Zygon, 40(1), 57–61.
Laszlo, E. (2006a). Science and the Reenchantment of the Cosmos: The Rise of the Integral Vision of Reality. Inner Traditions.
Laszlo, E. (2006b). New Grounds for a Re-Union Between Science and Spirituality. World Futures: Journal of General Evolution, 62(1), 3.
Gerald Midgley (2000) Systemic Intervention: Philosophy, Methodology, and Practice. Springer.
Rousseau, D. (2013) Systems Philosophy and the Unity of Knowledge, forthcoming in Systems Research and Behavioral Science.
Rousseau, D. (2011) Minds, Souls and Nature: A Systems-Philosophical Analysis of the Mind-Body Relationship. (PhD Thesis, University of Wales, Trinity Saint David, School of Theology, Religious Studies and Islamic Studies).
Jan Smuts (1926). Holism and Evolution. New York: Macmillan Co.
Vidal, C. (2008). Wat is een wereldbeeld? [What is a worldview?]. In H. Van Belle & J. Van der Veken (Eds.), Nieuwheid denken. De wetenschappen en het creatieve aspect van de werkelijkheid [Novel thoughts: Science and the Creative Aspect of Reality]. Acco Uitgeverij.
Jennifer Wilby (2005). Applying a Critical Systematic Review Process to Hierarchy Theory. Presented at the 2005 Conference of the Japan Advanced Institute of Science and Technology. Retrieved from https://dspace.jaist.ac.jp/dspace/handle/10119/3846
Wilby, J. (2011). A New Framework for Viewing the Philosophy, Principles and Practice of Systems Science. Systems Research and Behavioral Science, 28(5), 437–442.
== External links ==
Systems Philosophy and Applications: A Bibliography by W. Huitt, Valdosta, Georgia, USA, last revised December 2007.
Organization and Process: Systems Philosophy and Whiteheadian Metaphysics by James E. Huchingson. | Wikipedia/Systems_philosophy |
Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties.
Systems chemistry is also related to the origin of life (abiogenesis).
== Relations to systems biology ==
Systems chemistry is a relatively young sub-discipline of chemistry, where the focus lies not on the individual chemical components but rather on the overall network of interacting molecules and on their emergent properties. Hence, it combines the classical knowledge of chemistry (structure, reactions and interactions of molecules) with a systems approach inspired by systems biology and systems science.
== Examples ==
Dynamic combinatorial chemistry has been used as a method to develop ligands for biomolecules and receptors for small molecules.
Ligands that can recognize biomolecules are identified by preparing libraries of potential ligands in the presence of a target biomacromolecule. This is relevant for applications such as biosensors for fast monitoring of imbalances and illnesses, and for therapeutic agents.
Individual components of a given chemical system will self-assemble to form receptors that are complementary to a target molecule. In principle, the preferred library members will be selected and amplified based on the strongest interactions between the template and the products.
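A toy kinetic sketch of this selection-and-amplification effect (all rate constants are invented for illustration) models two interconverting library members A and B, where a template T binds B strongly and thereby shifts the exchange equilibrium toward B:
    # Dynamic combinatorial library in miniature: A <-> B exchange plus
    # strong template binding B + T <-> BT pulls material from A into B.
    k_ab = k_ba = 1.0                 # exchange rate constants (invented)
    k_on, k_off = 10.0, 0.1           # template binding, K = k_on/k_off = 100
    A, B, T, BT = 1.0, 1.0, 0.5, 0.0  # initial concentrations
    dt = 0.001
    for _ in range(100000):           # simple Euler integration to equilibrium
        exch = k_ab * A - k_ba * B
        bind = k_on * B * T - k_off * BT
        A += dt * (-exch)
        B += dt * (exch - bind)
        T += dt * (-bind)
        BT += dt * bind
    print(round(A, 3), round(B + BT, 3))   # total B is amplified over A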
== Molecular networks and equilibrium ==
A fundamental difference exists between chemistry as it is performed in most laboratories and chemistry as it occurs in life. Laboratory processes are mostly designed such that the (closed) system goes thermodynamically downhill; i.e. the product state is of lower Gibbs free energy, yielding stable molecules that can be isolated and stored. Yet the chemistry of life operates in a very different way: most molecules from which living systems are constituted are turned over continuously and are not necessarily thermodynamically stable. Nevertheless, living systems can be stable, but in a homeostatic sense. Such homeostatic (open) systems are far from equilibrium and are dissipative: they need energy to maintain themselves. In dissipative controlled systems the continuous supply of energy allows a continuous transition between different supramolecular states, where systems with unexpected properties may be discovered. One of the grand challenges of systems chemistry is to unveil complex reaction networks, where molecules continuously consume energy to perform specific functions.
== History ==
While multicomponent reactions have been studied for centuries, the idea of deliberately analyzing mixtures and reaction networks is more recent. The first mentions of systems chemistry as a field date from 2005. Early adopters focused on prebiotic chemistry combined with supramolecular chemistry, before it was generalized to the study of emergent properties and functions of any complex molecular systems. A 2017 review in the field of systems chemistry described the state of the art as out-of-equilibrium self-assembly, fuelled molecular motion, chemical networks in compartments and oscillating reactions.
== References == | Wikipedia/Systems_chemistry |
Systems analysis is "the process of studying a procedure or business to identify its goal and purposes and create systems and procedures that will efficiently achieve them". Another view sees systems analysis as a problem-solving technique that breaks a system down into its component pieces and analyses how well those parts work and interact to accomplish their purpose.
The field of system analysis relates closely to requirements analysis or to operations research. It is also "an explicit formal inquiry carried out to help a decision maker identify a better course of action and make a better decision than they might otherwise have made."
The terms analysis and synthesis stem from Greek, meaning "to take apart" and "to put together", respectively. These terms are used in many scientific disciplines, from mathematics and logic to economics and psychology, to denote similar investigative procedures. Analysis is defined as "the procedure by which we break down an intellectual or substantial whole into parts," while synthesis means "the procedure by which we combine separate elements or components to form a coherent whole." System analysis researchers apply methodology to the systems involved, forming an overall picture.
System analysis is used in every field where something is developed. The system under analysis can also be a series of components that perform organic functions together, as in systems engineering. Systems engineering is an interdisciplinary field of engineering that focuses on how complex engineering projects should be designed and managed.
== Information technology ==
The development of a computer-based information system includes a system analysis phase. This helps produce the data model, a precursor to creating or enhancing a database. There are several different approaches to system analysis. When a computer-based information system is developed, system analysis (according to the Waterfall model) would constitute the following steps:
The development of a feasibility study: determining whether a project is economically, socially, technologically, and organizationally feasible
Fact-finding measures, designed to ascertain the requirements of the system's end-users (typically involving interviews, questionnaires, or visual observations of work on the existing system)
Gauging how the end-users would operate the system (in terms of general experience in using computer hardware or software), what the system would be used for, and so on
Another view outlines a phased approach to the process. This approach breaks system analysis into 5 phases:
Scope definition: clearly defining the objectives and requirements necessary to meet the project's goals as defined by its stakeholders
Problem analysis: the process of understanding problems and needs and arriving at solutions that meet them
Requirements analysis: determining the conditions that need to be met
Logical design: looking at the logical relationship among the objects
Decision analysis: making a final decision
Use cases are widely used system analysis modeling tools for identifying and expressing the functional requirements of a system. Each use case is a business scenario or event for which the system must provide a defined response. Use cases evolved from object-oriented analysis.
== Policy analysis ==
The discipline of what is today known as policy analysis originated from the application of system analysis when it was first instituted by United States Secretary of Defense Robert McNamara.
== Practitioners ==
Practitioners of system analysis are often called upon to dissect systems that have grown haphazardly in order to determine the current components of the system. This was shown during the year 2000 re-engineering effort, when business and manufacturing processes were examined as part of the Y2K automation upgrades. Roles that employ system analysis include system analyst, business analyst, manufacturing engineer, systems architect, enterprise architect, software architect, etc.
While practitioners of system analysis can be called upon to create new systems, they often modify, expand, or document existing systems (processes, procedures, and methods). Researchers and practitioners rely on system analysis. Activity system analysis has already been applied to various research and practice studies, including business management, educational reform, educational technology, etc.
== See also ==
== References ==
== Selected publications ==
Bentley, Lonnie D., Kevin C. Dittman, and Jeffrey L. Whitten. System analysis and design methods. (1986, 1997, 2004).
Hawryszkiewycz, Igor T. Introduction to system analysis and design. Prentice-Hall PTR, 1994.
Whitten, Jeffery L., Lonnie D. Bentley, and Kevin C. Dittman. Fundamentals of system analysis and design methods. (2004).
== External links ==
A useful set of guides and a case study about the practical application of business and system analysis methods
A comprehensive description of the discipline of system analysis from Simmons College, Boston, MA, USA (Archive of original from www.simmons.edu)
System Analysis and Design introductory level lessons | Wikipedia/Systems_analysis |
Systems immunology is a research field under systems biology that uses mathematical approaches and computational methods to examine the interactions within cellular and molecular networks of the immune system. The immune system has been thoroughly analyzed with regard to its components and function by using a "reductionist" approach, but its overall function cannot be easily predicted by studying the characteristics of its isolated components, because these depend strongly on the interactions among the numerous constituents. The field focuses on in silico experiments rather than in vivo ones.
Recent studies in experimental and clinical immunology have led to the development of mathematical models that describe the dynamics of both the innate and the adaptive immune system. Most of these mathematical models were used to examine in silico processes that cannot be carried out in vivo. These processes include: the activation of T cells, cancer-immune interactions, migration and death of various immune cells (e.g. T cells, B cells and neutrophils), and how the immune system will respond to a certain vaccine or drug without carrying out a clinical trial.
== Techniques of modelling immune cells ==
The techniques used for modelling in immunology follow quantitative and qualitative approaches, each with its own advantages and disadvantages. Quantitative models predict certain kinetic parameters and the behavior of the system at a certain time point or concentration point. The disadvantage is that they can only be applied to a small number of reactions, and prior knowledge of some kinetic parameters is needed. On the other hand, qualitative models can take into account more reactions, but in return they provide fewer details about the kinetics of the system. What both approaches have in common is that they lose simplicity and become impractical when the number of components drastically increases.
=== Ordinary Differential Equation model ===
Ordinary differential equations (ODEs) are used to describe the dynamics of biological systems. ODEs are used on a microscopic, mesoscopic and macroscopic scale to examine continuous variables. The equations represent the time evolution of observed variables such as concentrations of protein, transcription factors or number of cell types. They are usually used for modelling immunological synapses, microbial recognition and cell migration. Over the last 10 years, these models have been used to study the sensitivity of TCR to agonist ligands and the roles of CD4 and CD8 co-receptors.
Kinetic rates of these equations are represented by binding and dissociation rates of the interacting species. These models are able to present the concentration and steady state of each interacting molecule in the network.
ODE models are defined by linear and non-linear equations, where the non-linear ones are used more often because they are easier to simulate on a computer (in silico) and to analyse. The limitation of this model is that, for every network, the kinetics of each molecule must be known before the model can be applied.
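As a rough illustration of the approach described above, the sketch below integrates a minimal mass-action ODE model of reversible receptor-ligand binding (L + R <-> C), the elementary motif from which larger signalling ODE models are assembled. The species names, rate constants, and initial concentrations are invented for the example and are not taken from any of the studies cited here.

```python
# Minimal (hypothetical) mass-action ODE model of reversible binding, L + R <-> C.
# Rate constants and initial concentrations are illustrative, not measured values.
import numpy as np
from scipy.integrate import solve_ivp

K_ON, K_OFF = 1.0, 0.1  # binding and dissociation rate constants (arbitrary units)

def rhs(t, y):
    ligand, receptor, complex_ = y
    bind = K_ON * ligand * receptor   # mass-action binding flux
    unbind = K_OFF * complex_         # dissociation flux
    return [-bind + unbind, -bind + unbind, bind - unbind]

sol = solve_ivp(rhs, (0.0, 50.0), y0=[1.0, 0.5, 0.0],
                t_eval=np.linspace(0.0, 50.0, 200))
print("complex concentration near steady state:", sol.y[2, -1])
```

The value printed at the end corresponds to the balance point where binding and dissociation fluxes cancel, which is exactly the kind of steady-state quantity these models are used to extract.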
The ODE model was used to examine how antigens bind to the B cell receptor. This model was very complex, as it was represented by 1122 equations and six signalling proteins. The software tool used for the research was BioNetGen. The outcome of the model agreed with the in vivo experiment.
The Epstein-Barr virus (EBV) was mathematically modeled with 12 equations to investigate three hypotheses that explain the higher occurrence of mononucleosis in younger people. After running numerical simulations, only the first two hypotheses were supported by the model.
=== Partial Differential Equation model ===
Partial differential equation (PDE) models are an extended version of the ODE models, describing the time evolution of each variable in both time and space. PDEs are used on a microscopic level for modeling continuous variables in the pathogen sensing and recognition pathway. They are also applied in physiological modeling to describe how proteins interact and where their movement is directed in an immunological synapse. These derivatives are partial because they are calculated with respect to time and also with respect to space. Sometimes a physiological variable, such as age in cell division, can be used instead of the spatial variables. Compared with ODE models, PDE models, which take into account the spatial distribution of cells, are computationally more demanding. Spatial dynamics are an important aspect of cell signalling, as they describe the motion of cells within a three-dimensional compartment. T cells move around in a three-dimensional lymph node, while TCRs are located on the surface of cell membranes and therefore move within a two-dimensional compartment.
The spatial distribution of proteins is especially important upon T cell stimulation, when an immunological synapse is formed; this model was therefore used in a study in which the T cell was activated by a weak agonist peptide.
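To make the space-and-time character of PDE models concrete, the sketch below solves the simplest such equation, one-dimensional diffusion (du/dt = D d2u/dx2), with an explicit finite-difference scheme. The diffusion coefficient, grid, and initial pulse are invented for the example; think of it as a concentration profile spreading along a one-dimensional section of membrane.

```python
# Minimal sketch: 1-D diffusion PDE solved by forward-Euler finite differences.
# All parameters are illustrative; periodic boundaries via np.roll.
import numpy as np

D, DX, DT = 1.0, 0.1, 0.004      # diffusion coefficient, grid step, time step
assert D * DT / DX**2 <= 0.5     # stability condition of the explicit scheme

u = np.zeros(100)
u[50] = 1.0                      # initial concentration pulse in the middle

for _ in range(500):             # 500 steps of size DT = 2 time units
    laplacian = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / DX**2
    u = u + DT * D * laplacian   # forward-Euler update

print("peak concentration after spreading:", u.max())
```

Real immunological PDE models add reaction terms and two or three spatial dimensions to this template, which is why they are noticeably more expensive to simulate than ODE systems.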
=== Particle-based Stochastic model ===
Particle-based stochastic models are obtained based on the dynamics of an ODE model. What distinguishes this model from the others is that it considers the components of the model as discrete variables, not continuous ones like the previous models. The particles are examined on a microscopic and a mesoscopic level, in immune-specific transduction pathways and in immune cell-cancer interactions, respectively. The dynamics of the model are determined by a Markov process, which in this case expresses the probability of each possible state of the system over time in the form of differential equations. The equations are difficult to solve analytically, so computer simulations are performed as kinetic Monte Carlo schemes. The simulation is commonly carried out with the Gillespie algorithm, which uses reaction constants derived from chemical kinetic rate constants to predict whether a reaction is going to occur. Stochastic simulations are more computationally demanding, and therefore the size and scope of the model are limited.
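The sketch below shows the core loop of the Gillespie algorithm for a hypothetical two-state molecular switch (inactive <-> active), loosely inspired by the Ras example discussed next; the molecule counts and rate constants are invented for the illustration.

```python
# Minimal sketch of the Gillespie stochastic simulation algorithm (SSA) for a
# hypothetical inactive <-> active switch. Rates and counts are illustrative.
import random

K_ACT, K_DEACT = 0.5, 0.3  # activation / deactivation rate constants

def gillespie(n_inactive, n_active, t_end):
    t, trajectory = 0.0, [(0.0, n_active)]
    while t < t_end:
        propensities = [K_ACT * n_inactive, K_DEACT * n_active]
        total = sum(propensities)
        if total == 0:                      # no reaction can fire
            break
        t += random.expovariate(total)      # exponentially distributed waiting time
        if random.uniform(0, total) < propensities[0]:
            n_inactive, n_active = n_inactive - 1, n_active + 1
        else:
            n_inactive, n_active = n_inactive + 1, n_active - 1
        trajectory.append((t, n_active))
    return trajectory

print(gillespie(100, 0, 10.0)[-1])  # final (time, active molecule count)
```

Because every reaction event is simulated individually, runtime grows with the number of molecules and reactions, which is the computational limitation mentioned above.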
A stochastic simulation was used to show that the Ras protein, which is a crucial signalling molecule in T cells, can have an active and an inactive form. It provided insight into a population of lymphocytes that, upon stimulation, had active and inactive subpopulations.
Co-receptors have an important role in the earliest stages of T cell activation and a stochastic simulation was used to explain the interactions as well as to model the migrating cells in a lymph node.
This model was used to examine T cell proliferation in the lymphoid system.
=== Agent-based models ===
Agent-based modeling (ABM) is a type of modelling in which the components of the system being observed are treated as discrete agents, each representing an individual molecule or cell. The agents, as the components are called in this kind of model, can interact with other agents and with the environment.
ABM has the potential to observe events on a multiscale level and is becoming more popular in other disciplines. It has been used for modelling the interactions between CD8+ T cells and beta cells in type 1 diabetes and for modelling the rolling and activation of leukocytes.
=== Boolean model ===
Logic models are used to model the life cycles of cells, the immune synapse, pathogen recognition and viral entry on a microscopic and mesoscopic level. Unlike in ODE models, details about the kinetics and concentrations of interacting species are not required in logic models. Each biochemical species is represented as a node in the network and can have a finite number of discrete states, usually two, for example: ON/OFF, high/low, active/inactive. Logic models with only two states are usually considered Boolean models. When a molecule is in the OFF state, it means that the molecule is not present at a high enough level to make a change in the system, not that it has zero concentration. Accordingly, when it is in the ON state it has reached a high enough amount to initiate a reaction. This method was first introduced by Kauffman. The limitation of this model is that it can only provide qualitative approximations of the system, and it cannot perfectly model concurrent events.
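The sketch below illustrates the synchronous update rule of a Boolean network on a toy three-node signalling motif. The node names and rules are invented for the example and do not correspond to any published model.

```python
# Minimal sketch of a synchronous Boolean network: every node is ON (True) or
# OFF (False), and all nodes update simultaneously from the previous state.
RULES = {
    "input":     lambda s: s["input"],                        # held constant
    "kinase":    lambda s: s["input"],                        # follows the input
    "response":  lambda s: s["kinase"] and not s["inhibitor"],
    "inhibitor": lambda s: s["inhibitor"],                    # held constant
}

def step(state):
    return {node: rule(state) for node, rule in RULES.items()}

state = {"input": True, "kinase": False, "response": False, "inhibitor": False}
for _ in range(3):
    state = step(state)
    print(state)   # reaches a fixed point (an attractor) after two steps
```

The fixed points of such update maps are the attractors that Boolean analyses of signalling networks look for; no rate constants or concentrations enter the model, which is exactly the trade-off described above.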
This method has been used to explore specific pathways in the immune system, such as affinity maturation and hypermutation in the humoral immune system and tolerance to pathologic rheumatoid factors. Simulation tools that support this model are DDlab, Cell-Devs and IMMSIM-C. IMMSIM-C is used more often than the others, as it does not require knowledge of computer programming. The platform is available as a public web application and finds usage in undergraduate immunology courses at various universities (Princeton, Genoa, etc.).
For modelling with statecharts, only Rhapsody has been used so far in systems immunology. It can translate the statechart into executable Java and C++ codes.
This method was also used to build a model of influenza virus infection. Some of the results were not in accordance with earlier research papers; the Boolean network showed that the number of activated macrophages increased for both young and old mice, while other studies suggest a decrease.
SBML (Systems Biology Markup Language) was originally intended to cover only models based on ordinary differential equations, but it has since been extended so that Boolean models can also be expressed. Almost all modeling tools are compatible with SBML. There are a few more software packages for modeling with Boolean models: BoolNet, GINsim and Cell Collective.
== Computer tools ==
To model a system by using differential equations, a computer tool has to perform various tasks, such as model construction, calibration, verification, analysis, simulation and visualization. No single software tool satisfies all of these criteria, so multiple tools need to be used.
=== GINsim ===
GINsim is a computer tool that generates and simulates genetic networks based on discrete variables. Based on the regulatory graphs and logical parameters, GINsim calculates the temporal evolution of the system which is returned as a State Transition Graph (STG) where the states are represented by nodes and transitions by arrows.
It was used to examine how T cells respond upon activation of the TCR and TLR5 pathways. These processes were observed both separately and in combination. First, the molecular maps and logic models for both the TCR and the TLR5 pathway were built and then merged. Molecular maps were produced in CellDesigner based on data from the literature and various databases, such as KEGG and Reactome. The logical models were generated in GINsim, where each component has the value 0 or 1 (or additional values when modified). Logical rules are then applied to each component, which are called logical nodes in this network. After merging, the final model consists of 128 nodes. The results of the modelling were in accordance with the experimental ones, which demonstrated that TLR5 is a costimulatory receptor for CD4+ T cells.
=== BoolNet ===
BoolNet is an R package which contains tools for the reconstruction, analysis and visualization of Boolean networks.
=== Cell Collective ===
The Cell Collective is a scientific platform which enables scientists to build, analyse and simulate biological models without formulating mathematical equations or writing code. It has a built-in Knowledge Base component which extends the knowledge about individual entities (proteins, genes, cells, etc.) into dynamical models. The data are qualitative, but they take into account the dynamical relationships between the interacting species. The models are simulated in real time, and everything is done on the web.
=== BioNetGen ===
BioNetGen (BNG) is an open-source software package that is used in rule-based modeling of complex systems such as gene regulation, cell signaling and metabolism. The software uses graphs to represent different molecules and their functional domains and rules to explain the interactions between them. In terms of immunology, it was used to model intracellular signalling pathways of the TLR-4 cascade.
=== DSAIRM ===
DSAIRM (Dynamical Systems Approach to Immune Response Modeling) is an R package designed for studying infection and immune response dynamics without prior knowledge of coding.
Other useful applications and learning environments are: Gepasi, Copasi, BioUML, Simbiology (MATLAB) and Bio-SPICE.
== Conferences ==
The first conference on synthetic and systems immunology was hosted in Ascona by CSF and ETH Zurich. It took place in the first days of May 2019, and over fifty researchers from different scientific fields were involved. Among all the presentations held, the award for the best one went to Dr. Govinda Sharma, who invented a platform for screening TCR epitopes.
In March 2019, Cold Spring Harbor Laboratory (CSHL) in New York hosted a meeting whose focus was to exchange ideas between experimental, computational and mathematical biologists who study the immune system in depth. The topics of the meeting were: modelling and regulatory networks, the future of synthetic and systems biology, and immunoreceptors.
== Further reading ==
A Plaidoyer for ‘Systems Immunology’
Systems and Synthetic Immunology
Systems Biology
Current Topics in Microbiology and Immunology
The FRiND model
The Multiscale Systems Immunology project
Modelling with BioNetGen
== References == | Wikipedia/Systems_immunology |
Living systems are life forms (or, more colloquially, living things) treated as a system. They are said to be open, self-organizing systems that interact with their environment. These systems are maintained by flows of information, energy and matter. Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work.
== Context ==
Some scientists have proposed in the last few decades that a general theory of living systems is required to explain the nature of life. Such a general theory would arise out of the ecological and biological sciences and attempt to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment.
== Theories ==
=== Miller's open systems ===
James Grier Miller's living systems theory is a general theory about the existence of all living systems, their structure, interaction, behavior and development, intended to formalize the concept of life. According to Miller's 1978 book Living Systems, such a system must contain each of twenty "critical subsystems" defined by their functions. Miller considers living systems as a type of system. Below the level of living systems, he defines space and time, matter and energy, information and entropy, levels of organization, and physical and conceptual factors, and above living systems ecological, planetary and solar systems, galaxies, etc. Miller's central thesis is that the multiple levels of living systems (cells, organs, organisms, groups, organizations, societies, supranational systems) are open systems composed of critical and mutually-dependent subsystems that process inputs, throughputs, and outputs of energy and information. Seppänen (1998) says that Miller applied general systems theory on a broad scale to describe all aspects of living systems. Bailey states that Miller's theory is perhaps the "most integrative" social systems theory, clearly distinguishing between matter–energy-processing and information-processing, showing how social systems are linked to biological systems. LST analyzes the irregularities or "organizational pathologies" of systems functioning (e.g., system stress and strain, feedback irregularities, information–input overload). It explicates the role of entropy in social research while it equates negentropy with information and order. It emphasizes both structure and process, as well as their interrelations.
=== Lovelock's Gaia hypothesis ===
The idea that Earth is alive is found in philosophy and religion, but the first scientific discussion of it was by the Scottish geologist James Hutton. In 1785, he stated that Earth was a superorganism and that its proper study should be physiology. The Gaia hypothesis, proposed in the 1960s by James Lovelock, suggests that life on Earth functions as a single organism that defines and maintains environmental conditions necessary for its survival.
=== Morowitz's property of ecosystems ===
A systems view of life treats environmental fluxes and biological fluxes together as a "reciprocity of influence," and a reciprocal relation with environment is arguably as important for understanding life as it is for understanding ecosystems. As Harold J. Morowitz (1992) explains it, life is a property of an ecological system rather than a single organism or species. He argues that an ecosystemic definition of life is preferable to a strictly biochemical or physical one. Robert Ulanowicz (2009) highlights mutualism as the key to understand the systemic, order-generating behaviour of life and ecosystems.
=== Rosen's complex systems biology ===
Robert Rosen devoted a large part of his career, from 1958 onwards, to developing a comprehensive theory of life as a self-organizing complex system, "closed to efficient causation". He defined a system component as "a unit of organization; a part with a function, i.e., a definite relation between part and whole." He identified the "nonfractionability of components in an organism" as the fundamental difference between living systems and "biological machines." He summarised his views in his book Life Itself.
Complex systems biology is a field of science that studies the emergence of complexity in functional organisms from the viewpoint of dynamic systems theory. The latter is also often called systems biology and aims to understand the most fundamental aspects of life. A closely related approach, relational biology, is concerned mainly with understanding life processes in terms of the most important relations, and categories of such relations among the essential functional components of organisms; for multicellular organisms, this has been defined as "categorical biology", or a model representation of organisms as a category theory of biological relations, as well as an algebraic topology of the functional organisation of living organisms in terms of their dynamic, complex networks of metabolic, genetic, and epigenetic processes and signalling pathways. Related approaches focus on the interdependence of constraints, where constraints can be either molecular, such as enzymes, or macroscopic, such as the geometry of a bone or of the vascular system.
=== Bernstein, Byerly and Hopf's Darwinian dynamic ===
Harris Bernstein and colleagues argued in 1983 that the evolution of order in living systems and certain physical systems obeys a common fundamental principle termed the Darwinian dynamic. This was formulated by first considering how macroscopic order is generated in a simple non-biological system far from thermodynamic equilibrium, and then extending consideration to short, replicating RNA molecules. The underlying order-generating process was concluded to be basically similar for both types of systems.
=== Gerard Jagers' operator theory ===
Gerard Jagers' operator theory proposes that life is a general term for the presence of the typical closures found in organisms; the typical closures are a membrane and an autocatalytic set in the cell and that an organism is any system with an organisation that complies with an operator type that is at least as complex as the cell. Life can be modelled as a network of inferior negative feedbacks of regulatory mechanisms subordinated to a superior positive feedback formed by the potential of expansion and reproduction.
=== Kauffman's multi-agent system ===
Stuart Kauffman defines a living system as an autonomous agent or a multi-agent system capable of reproducing itself or themselves, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
=== Budisa, Kubyshkin and Schmidt's four pillars ===
Budisa, Kubyshkin and Schmidt defined cellular life as an organizational unit resting on four pillars/cornerstones: (i) energy, (ii) metabolism, (iii) information and (iv) form. This system is able to regulate and control metabolism and energy supply and contains at least one subsystem that functions as an information carrier (genetic information). Cells as self-sustaining units are parts of different populations that are involved in the unidirectional and irreversible open-ended process known as evolution.
== See also ==
Artificial life – Field of study
Autonomous Agency Theory – viable system theory
Autopoiesis – System capable of producing itself
Biological organization – Hierarchy of complex structures and systems within biological sciences
Biological systems – Complex network which connects several biologically relevant entities
Complex systems – System composed of many interacting components
Earth system science – Scientific study of the Earth's spheres and their natural integrated systems
Extraterrestrial life – Life that does not originate on Earth
Information metabolism – Psychological theory of interaction between biological organisms and their environment
Organism – Individual living life form
Spome – Hypothetical matter-closed, energy-open life support system
Systems biology – Computational and mathematical modeling of complex biological systems
Systems theory – Interdisciplinary study of systems
Viable System Theory – concerns cybernetic processes in relation to the development/evolution of dynamical systems
== References ==
== Further reading ==
Kenneth D. Bailey (1994). Sociology and the New Systems Theory: Toward a Theoretical Synthesis. Albany, NY: SUNY Press.
Kenneth D. Bailey (2006). Living systems theory and social entropy theory. Systems Research and Behavioral Science, 22, 291–300.
James Grier Miller (1978). Living Systems. New York: McGraw-Hill. ISBN 0-87081-363-3.
Miller, J.L., & Miller, J.G. (1992). Greater than the sum of its parts: Subsystems which process both matter-energy and information. Behavioral Science, 37, 1–38.
Humberto Maturana (1978). "Biology of language: The epistemology of reality," in Miller, George A., and Elizabeth Lenneberg (eds.), Psychology and Biology of Language and Thought: Essays in Honor of Eric Lenneberg. Academic Press: 27–63.
Jouko Seppänen (1998). Systems ideology in human and social sciences. In G. Altmann & W.A. Koch (Eds.), Systems: New Paradigms for the Human Sciences (pp. 180–302). Berlin: Walter de Gruyter.
James R. Simms (1999). Principles of Quantitative Living Systems Science. Dordrecht: Kluwer Academic. ISBN 0-306-45979-5.
== External links ==
The Living Systems Theory Of James Grier Miller
James Grier Miller, Living Systems The Basic Concepts (1978) | Wikipedia/Living_systems |
World-systems theory (also known as world-systems analysis or the world-systems perspective) is a multidisciplinary approach to world history and social change which emphasizes the world-system (and not nation states) as the primary (but not exclusive) unit of social analysis. World-systems theorists argue that their theory explains the rise and fall of states, income inequality, social unrest, and imperialism.
The "world-system" refers to the inter-regional and transnational division of labor, which divides the world into core countries, semi-periphery countries, and periphery countries. Core countries have higher-skill, capital-intensive industries, and the rest of the world has low-skill, labor-intensive industries and extraction of raw materials. This constantly reinforces the dominance of the core countries. This structure is unified by the division of labour. It is a world-economy rooted in a capitalist economy. For a time, certain countries have become the world hegemon; during the last few centuries, as the world-system has extended geographically and intensified economically, this status has passed from the Netherlands, to the United Kingdom and (most recently) to the United States.
Immanuel Wallerstein is the main proponent of world-systems theory. Key components of world-systems analysis are the longue durée of Fernand Braudel, the "development of underdevelopment" of Andre Gunder Frank, and the single-society assumption. Longue durée is the concept of gradual change through the day-to-day activities by which social systems are continually reproduced. "Development of underdevelopment" describes the economic processes in the periphery as the opposite of the development in the core: poorer countries are impoverished to enable a few countries to get richer. Lastly, the single-society assumption opposes the multiple-society assumption and involves looking at the world as a whole.
== Background ==
Immanuel Wallerstein has developed the best-known version of world-systems analysis, beginning in the 1970s. Wallerstein traces the rise of the capitalist world-economy from the "long" 16th century (c. 1450–1640). The rise of capitalism, in his view, was an accidental outcome of the protracted crisis of feudalism (c. 1290–1450). Europe (the West) used its advantages and gained control over most of the world economy and presided over the development and spread of industrialization and capitalist economy, indirectly resulting in unequal development.
Though other commentators refer to Wallerstein's project as world-systems "theory," he consistently rejects that term. For Wallerstein, world-systems analysis is a mode of analysis that aims to transcend the structures of knowledge inherited from the 19th century, especially the definition of capitalism, the divisions within the social sciences, and those between the social sciences and history. For Wallerstein, then, world-systems analysis is a "knowledge movement" that seeks to discern the "totality of what has been paraded under the labels of the... human sciences and indeed well beyond". "We must invent a new language," Wallerstein insists, to transcend the illusions of the "three supposedly distinctive arenas" of society, economy and politics. The trinitarian structure of knowledge is grounded in another, even grander, modernist architecture, the distinction of biophysical worlds (including those within bodies) from social ones: "One question, therefore, is whether we will be able to justify something called social science in the twenty-first century as a separate sphere of knowledge." Many other scholars have contributed significant work in this "knowledge movement."
== Origins ==
=== Influences ===
World-systems theory emerged in the 1970s. Its roots can be found in sociology, but it has developed into a highly interdisciplinary field.
World-systems theory was aiming to replace modernization theory, which Wallerstein criticised for three reasons:
its focus on the nation state as the only unit of analysis
its assumption that there is only a single path of evolutionary development for all countries
its disregard of transnational structures that constrain local and national development.
There are three major predecessors of world-systems theory: the Annales school, the Marxist tradition, and dependency theory. The Annales School tradition, represented most notably by Fernand Braudel, influenced Wallerstein to focus on long-term processes and geo-ecological regions as units of analysis. Marxism added a stress on social conflict, a focus on the capital accumulation process and competitive class struggles, a focus on a relevant totality, the transitory nature of social forms and a dialectical sense of motion through conflict and contradiction.
World-systems theory was also significantly influenced by dependency theory, a neo-Marxist explanation of development processes.
Other influences on the world-systems theory come from scholars such as Karl Polanyi, Nikolai Kondratiev and Joseph Schumpeter. These scholars researched business cycles and developed concepts of three basic modes of economic organization: reciprocal, redistributive, and market modes. Wallerstein reframed these concepts into a discussion of mini systems, world empires, and world economies.
Wallerstein sees the development of the capitalist world economy as detrimental to a large proportion of the world's population. Wallerstein views the period since the 1970s as an "age of transition" that will give way to a future world system (or world systems) whose configuration cannot be determined in advance.
Other world-systems thinkers include Oliver Cox, Samir Amin, Giovanni Arrighi, and Andre Gunder Frank, with major contributions by Christopher Chase-Dunn, Beverly Silver, Janet Abu Lughod, Li Minqi, Kunibert Raffer, and others. In sociology, a primary alternative perspective is World Polity Theory, as formulated by John W. Meyer.
=== Dependency theory ===
World-systems analysis builds upon but also differs fundamentally from dependency theory. While accepting world inequality, the world market and imperialism as fundamental features of historical capitalism, Wallerstein broke with orthodox dependency theory's central proposition. For Wallerstein, core countries do not exploit poor countries for three basic reasons.
Firstly, core capitalists exploit workers in all zones of the capitalist world economy (not just the periphery) and therefore, the crucial redistribution between core and periphery is surplus value, not "wealth" or "resources" abstractly conceived. Secondly, core states do not exploit poor states, as dependency theory proposes, because capitalism is organised around an inter-regional and transnational division of labor rather than an international division of labour. Thirdly, economically relevant structures such as metropolitan regions, international unions and bilateral agreements tend to weaken and blur out the economic importance of nation-states and their borders.
During the Industrial Revolution, for example, English capitalists exploited slaves (unfree workers) in the cotton zones of the American South, a peripheral region within a semiperipheral country, the United States.
From a largely Weberian perspective, Fernando Henrique Cardoso described the main tenets of dependency theory as follows:
There is a financial and technological penetration of the periphery and semi-periphery countries by the developed capitalist core countries.
That produces an unbalanced economic structure within the peripheral societies and between them and the central countries.
That leads to limitations upon self-sustained growth in the periphery.
That helps the appearance of specific patterns of class relations.
They require modifications in the role of the state to guarantee the functioning of the economy and the political articulation of a society, which contains, within itself, foci of inarticulateness and structural imbalance.
Dependency and world system theory propose that the poverty and backwardness of poor countries are caused by their peripheral position in the international division of labor. Since the capitalist world system evolved, the distinction between the central and the peripheral states has grown and diverged. In recognizing a tripartite pattern in the division of labor, world-systems analysis criticized dependency theory with its bimodal system of only cores and peripheries.
=== Immanuel Wallerstein ===
The best-known version of the world-systems approach was developed by Immanuel Wallerstein. Wallerstein notes that world-systems analysis calls for a unidisciplinary historical social science and contends that the modern disciplines, products of the 19th century, are deeply flawed because they are not separate logics, as is manifest for example in the de facto overlap of analysis among scholars of the disciplines. Wallerstein offers several definitions of a world-system, defining it in 1974 briefly:
a system is defined as a unit with a single division of labor and multiple cultural systems.
He also offered a longer definition:
...a social system, one that has boundaries, structures, member groups, rules of legitimation, and coherence. Its life is made up of the conflicting forces which hold it together by tension and tear it apart as each group seeks eternally to remold it to its advantage. It has the characteristics of an organism, in that it has a life-span over which its characteristics change in some respects and remain stable in others. One can define its structures as being at different times strong or weak in terms of the internal logic of its functioning.
In 1987, Wallerstein again defined it:
... not the system of the world, but a system that is a world and which can be, most often has been, located in an area less than the entire globe. World-systems analysis argues that the units of social reality within which we operate, whose rules constrain us, are for the most part such world-systems (other than the now extinct, small minisystems that once existed on the earth). World-systems analysis argues that there have been thus far only two varieties of world-systems: world-economies and world empires. A world-empire (examples, the Roman Empire, Han China) are large bureaucratic structures with a single political center and an axial division of labor, but multiple cultures. A world-economy is a large axial division of labor with multiple political centers and multiple cultures. In English, the hyphen is essential to indicate these concepts. "World system" without a hyphen suggests that there has been only one world-system in the history of the world.
Wallerstein characterizes the world system as a set of mechanisms, which redistributes surplus value from the periphery to the core. In his terminology, the core is the developed, industrialized part of the world, and the periphery is the "underdeveloped", typically raw materials-exporting, poor part of the world; the market being the means by which the core exploits the periphery.
In addition to these, Wallerstein defines four temporal features of the world system. Cyclical rhythms represent the short-term fluctuations of the economy, while secular trends mean deeper long-run tendencies, such as general economic growth or decline. The term contradiction means a general controversy in the system, usually concerning some short-term versus long-term trade-off. An example is the problem of underconsumption, wherein driving down wages increases the profit for capitalists in the short term, but in the long term the decrease of wages may have a crucially harmful effect by reducing the demand for the product. The last temporal feature is the crisis: a crisis occurs if a constellation of circumstances brings about the end of the system.
In Wallerstein's view, there have been three kinds of historical systems across human history: "mini-systems", or what anthropologists call bands, tribes, and small chiefdoms, and two types of world-systems, one that is politically unified and one that is not (single-state world empires and multi-polity world economies). World-systems are larger and ethnically diverse. The modern world-system, a capitalist world-economy, which emerged around 1450 to 1550, is unique in being the first and only world-system to have expanded geographically across the entire planet, by about 1900. It is defined, as a world-economy, by having many political units tied together as an interstate system and by a division of labor based on capitalist enterprises.
== Importance ==
World-systems theory can be useful in understanding world history and the core countries' motives for imperialism and other involvements, such as US aid following natural disasters in developing Central American countries or the imposition of regimes on other states. With the interstate system as a system constant, the relative economic power of the three tiers points to the internal inequalities that are on the rise in states that appear to be developing. Some argue, though, that this theory ignores local efforts of innovation that have nothing to do with the global economy, such as the labor patterns implemented in Caribbean sugar plantations. Other modern global topics can be easily traced back to world-systems theory.
As global discussion of climate change and the future of industrial corporations grows, world-systems theory can help to explain the creation of the G-77 group, a coalition of 77 peripheral and semi-peripheral states wanting a seat at the global climate discussion table. The group was formed in 1964, but it now has more than 130 members who advocate for multilateral decision-making. Since its creation, G-77 members have collaborated with two main aims: 1) decreasing their vulnerability based on the relative size of economic influence, and 2) improving outcomes for national development. World-systems theory has also been used to trace the damage of CO2 emissions to the ozone layer. A country's level of entrance into and involvement in the world economy can affect the damage it does to the earth. In general, scientists can make assumptions about a country's CO2 emissions based on GDP. Higher-exporting countries, countries with debt, and countries with social structure turmoil land in the upper-periphery tier. Though more research must be done in this arena, scientists can treat core, semi-periphery, and periphery labels as indicators of CO2 intensity.
In the health realm, studies have shown the effects of less industrialized countries' (the periphery's) acceptance of packaged foods and beverages that are loaded with sugars and preservatives. While core states benefit from dumping large amounts of processed, fatty foods into poorer states, there has been a recorded increase in obesity and related chronic conditions such as diabetes and chronic heart disease. While some aspects of modernization theory have been found to improve the global obesity crisis, a world-systems theory approach identifies holes in the progress.
Knowledge economy and finance now dominate the industry in core states while manufacturing has shifted to semi-periphery and periphery ones. Technology has become a defining factor in the placement of states into core or semi-periphery versus periphery. Wallerstein's theory leaves room for poor countries to move into better economic development, but he also admits that there will always be a need for periphery countries as long as there are core states who derive resources from them. As a final mark of modernity, Wallerstein admits that advocates are the heart of this world-system: “Exploitation and the refusal to accept exploitation as either inevitable or just constitute the continuing antinomy of the modern era”.
== Influence on International Law ==
Third World Approaches to International Law (TWAIL), influenced by world-systems theory, critique the traditional view of international law (IL) as a horizontal system of equal sovereign states. This orthodox perspective, rooted in legal positivism, is challenged by the argument that global capitalism functions as a de facto sovereign power, shaping and enforcing international law in a hierarchical, top-down manner.
Scholars like Antony Anghie have examined the historical role of international law in advancing colonial agendas, aligning this analysis with the Marxian concept of primitive accumulation. This concept situates colonialism within the broader framework of capital accumulation, demonstrating how IL has historically facilitated the exploitation and subordination of non-European states. From this perspective, international law is not a neutral framework of horizontal equality but a tool that mirrors and perpetuates the dominance of global capital. By doing so, it enforces a vertical hierarchy of power, subordinating states and communities to the interests of dominant economic and political forces.
== Characteristics ==
World-systems analysis argues that capitalism, as a historical system, has always integrated a variety of labor forms within a functioning division of labor (world economy). Countries do not have economies but are part of the world economy. Far from being separate societies or worlds, the world economy manifests a tripartite division of labor, with core, semiperipheral and peripheral zones. In the core zones, businesses, with the support of states they operate within, monopolise the most profitable activities of the division of labor.
There are many ways to attribute a specific country to the core, semi-periphery, or periphery. Using an empirically based, sharp, formal definition of "domination" in a two-country relationship, Piana in 2004 defined the "core" as made up of "free countries" dominating others without being dominated, the "semi-periphery" as the countries that are dominated (usually, but not necessarily, by core countries) but at the same time dominating others (usually in the periphery), and the "periphery" as the countries that are dominated. Based on 1998 data, the full list of countries in the three regions, together with a discussion of the methodology, can be found in that work.
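Piana's formal definition lends itself to a simple procedure: given a "dominates" relation between countries, a country's tier follows from whether it dominates and whether it is dominated. The sketch below encodes that rule on a toy relation; the country names and edges are invented for the illustration and have nothing to do with Piana's 1998 data.

```python
# Minimal sketch of Piana-style tier classification from a "dominates" relation.
# Toy data: the mapping below is purely illustrative, not an empirical dataset.
DOMINATES = {
    "A": {"B", "C"},   # A dominates B and C
    "B": {"C"},        # B dominates C
    "C": set(),        # C dominates no one
}

def classify(dominates):
    dominated = {c for targets in dominates.values() for c in targets}
    labels = {}
    for country, targets in dominates.items():
        if targets and country not in dominated:
            labels[country] = "core"            # dominates, is not dominated
        elif targets:
            labels[country] = "semi-periphery"  # dominates and is dominated
        else:
            labels[country] = "periphery"       # is only dominated
    return labels

print(classify(DOMINATES))
# {'A': 'core', 'B': 'semi-periphery', 'C': 'periphery'}
```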
The late 18th and early 19th centuries marked a great turning point in the development of capitalism, in that capitalists achieved power over state and society in the key states, which furthered the industrial revolution marking the rise of capitalism. World-systems analysis contends that capitalism as a historical system formed earlier and that countries do not "develop" in stages; rather, the system does, and events have a different meaning as phases in the development of historical capitalism. From this development emerged the three ideologies of the national developmental mythology (the idea that countries can develop through stages if they pursue the right set of policies): conservatism, liberalism, and radicalism.
Proponents of world-systems analysis see the world stratification system the same way Karl Marx viewed class (ownership versus nonownership of the means of production) and Max Weber viewed class (which, in addition to ownership, stressed occupational skill level in the production process). The core states primarily own and control the major means of production in the world and perform the higher-level production tasks. The periphery nations own very little of the world's means of production (even when they are located in periphery states) and provide less-skilled labour. Like a class system within states, class positions in the world economy result in an unequal distribution of rewards or resources. The core states receive the greatest share of surplus production, and periphery states receive the smallest share. Furthermore, core states are usually able to purchase raw materials and other goods from non-core states at low prices and demand higher prices for their exports to non-core states. Chirot (1986) lists the five most important benefits coming to core states from their domination of the periphery:
Access to a large quantity of raw material
Cheap labour
Enormous profits from direct capital investments
A market for exports
Skilled professional labor through migration of these people from the non-core to the core.
According to Wallerstein, the unique qualities of the modern world system include its capitalistic nature, its truly global nature, and the fact that it is a world economy that has not become politically unified into a world empire.
=== Core states ===
In general, core states:
Are the most economically diversified, wealthy, and powerful both economically and militarily
Have strong central governments controlling extensive bureaucracies and powerful militaries
Have stronger and more complex state institutions that help manage economic affairs internally and externally
Have a sufficiently large tax base, such that state institutions can provide the infrastructure for a strong economy
Are highly industrialised and produce manufactured goods for export instead of raw materials
Increasingly tend to specialise in the information, finance, and service industries
Are more regularly at the forefront of new technologies and new industries. Contemporary examples include the electronics and biotechnology industries. The use of the assembly line is a historic example of this trend.
Have strong bourgeois and working classes
Have significant means of influence over non-core states
Are relatively independent of outside control
Throughout the history of the modern world system, a group of core states has competed for access to the world's resources, economic dominance, and hegemony over periphery states. Occasionally, one core state possessed clear dominance over the others. According to Immanuel Wallerstein, a core state is dominant over all the others when it has a lead in three forms of economic dominance:
Productivity dominance allows a country to develop higher-quality products at a cheaper price compared to other countries.
Productivity dominance may lead to trade dominance. In this case, there is a favorable balance of trade for the dominant state since other countries are buying more of its products than those of others.
Trade dominance may lead to financial dominance. At this point, more money is flowing into the country than is leaving it. Bankers from the dominant state tend to acquire greater control over the world's financial resources.
Military dominance is also likely once a state has reached this point. However, it has been posited that throughout the modern world system, no state has been able to use its military to gain economic dominance. Each of the past dominant states became dominant with fairly small levels of military spending and began to lose economic dominance with military expansion later on. Historically, cores were located in northwestern Europe (England, France, Netherlands) but later appeared in other parts of the world such as the United States, Canada, and Australia.
=== Peripheral states ===
Are the least economically diversified
Have relatively weak governments
Have relatively weak institutions, with tax bases too small to support infrastructural development
Tend to depend on one type of economic activity, often by extracting and exporting raw materials to core states
Tend to be the least industrialized
Are often targets for investments from multinational (or transnational) corporations from core states that come into the country to exploit cheap unskilled labor in order to export back to core states
Have a small bourgeois class and a large peasant class
Tend to have populations with high percentages of poor and uneducated people
Tend to have very high social inequality because of small upper classes that own most of the land and have profitable ties to multinational corporations
Tend to be extensively influenced by core states and their multinational corporations and often forced to follow economic policies that help core states and harm the long-term economic prospects of peripheral states.
Historically, peripheries were found outside Europe, such as in Latin America and today in sub-Saharan Africa.
=== Semi-peripheral states ===
Semi-peripheral states are those that are midway between the core and the periphery. Thus, they have to keep themselves from falling into the category of peripheral states, and at the same time they strive to join the category of core states. Therefore, they tend to apply protectionist policies most aggressively among the three categories of states. They tend to be countries moving towards industrialization and more diversified economies. These regions often have relatively developed and diversified economies but are not dominant in international trade. They tend to export more to peripheral states and import more from core states in trade. According to some scholars, such as Chirot, they are not as subject to outside manipulation as peripheral societies; but according to others (Barfield), they have "periphery-like" relations to the core. While in the sphere of influence of some cores, semiperipheries also tend to exert their own control over some peripheries. Further, semi-peripheries act as buffers between cores and peripheries and thus "...partially deflect the political pressures which groups primarily located in peripheral areas might otherwise direct against core-states" and stabilise the world system.
Semi-peripheries can come into existence from developing peripheries and declining cores. Historically, two examples of semiperipheral states would be Spain and Portugal, which fell from their early core positions but still managed to retain influence in Latin America. Those countries imported silver and gold from their American colonies but then had to use it to pay for manufactured goods from core countries such as England and France. In the 20th century, states like the "settler colonies" of Australia, Canada and New Zealand had a semiperipheral status. In the 21st century, states like Brazil, Russia, India, China, South Africa (BRICS), and Israel are usually considered semiperipheral.
=== Interstate system ===
Between the core, periphery and semi-periphery countries lies a system of interconnected state relationships, or the interstate system. The interstate system arose either as a concomitant process or as a consequence of the development of the capitalist world-system over the course of the “long” 16th century as states began to recognize each other's sovereignty and form agreements and rules between themselves.
Wallerstein wrote that there were no concrete rules about what exactly constitutes an individual state as various indicators of statehood (sovereignty, power, market control etc.) could range from total to nil. There were also no clear rules about which group controlled the state, as various groups located inside, outside, and across the states’ frontiers could seek to increase or decrease state power in order to better profit from a world-economy. Nonetheless, the “relative power continuum of stronger and weaker states has remained relatively unchanged over 400-odd years” implying that while there is no universal state system, an interstate system had developed out of the sum of state actions, which existed to reinforce certain rules and preconditions of statehood. These rules included maintaining consistent relations of production, and regulating the flow of capital, commodities and labor across borders to maintain the price structures of the global market. If weak states attempt to rewrite these rules as they prefer them, strong states will typically intervene to rectify the situation.
The ideology of the interstate system is sovereign equality, and while the system generally presents a set of constraints on the power of individual states, within the system states are “neither sovereign nor equal.” Not only do strong states impose their will on weak states, strong states also impose limitations upon other strong states, and tend to seek strengthened international rules, since enforcing consequences for broken rules can be highly beneficial and confer comparative advantages.
=== External areas ===
External areas are those that maintain socially necessary divisions of labor independent of the capitalist world economy.
== The interpretation of world history ==
Wallerstein traces the origin of today's world-system to the "long 16th century" (a period that began with the discovery of the Americas by Western European sailors and ended with the English Revolution of 1640). And, according to Wallerstein, globalization, or the becoming of the world's system, is a process coterminous with the spread and development of capitalism over the past 500 years.
Janet Abu Lughod argues that a pre-modern world system extensive across Eurasia existed in the 13th century prior to the formation of the modern world-system identified by Wallerstein. She contends that the Mongol Empire played an important role in stitching together the Chinese, Indian, Muslim and European regions in the 13th century, before the rise of the modern world system. In debates, Wallerstein contends that Lughod's system was not a "world-system" because it did not entail integrated production networks, but it was instead a vast trading network.
Andre Gunder Frank goes further and claims that a global world system that includes Asia, Europe and Africa has existed since the 4th millennium BCE. The centre of this system was in Asia, specifically China. Andrey Korotayev goes even further than Frank and dates the beginning of the world system formation to the 10th millennium BCE and connects it with the start of the Neolithic Revolution in the Middle East. According to him, the centre of this system was originally in Western Asia.
Before the 16th century, Europe was dominated by feudal economies. European economies grew from the mid-12th to the 14th century, but from the 14th to the mid-15th century they suffered a major crisis. Wallerstein explains this crisis as caused by the following:
stagnation or even decline of agricultural production, increasing the burden of peasants,
decreased agricultural productivity caused by changing climatological conditions (Little Ice Age),
an increase in epidemics (Black Death),
optimum level of the feudal economy having been reached in its economic cycle; the economy moved beyond it and entered a depression period.
As a response to the failure of the feudal system, European society embraced the capitalist system. Europeans were motivated to develop technology to explore and trade around the world, using their superior military to take control of the trade routes. Europeans exploited their initial small advantages, which led to an accelerating process of accumulation of wealth and power in Europe.
Wallerstein notes that never before had an economic system encompassed that much of the world, with trade links crossing so many political boundaries. In the past, geographically large economic systems existed but were mostly limited to spheres of domination of large empires (such as the Roman Empire); the development of capitalism enabled the world economy to extend beyond individual states. The international division of labor was crucial in determining the relationships that exist between different regions, as well as their labor conditions and political systems. For classification and comparison purposes, Wallerstein introduced the categories of core, semi-periphery, periphery, and external countries. Core countries monopolized capital-intensive production, while the rest of the world could provide only workforce and raw resources. The resulting inequality reinforced existing unequal development.
According to Wallerstein there have only been three periods in which a core state dominated in the modern world-system, with each lasting less than one hundred years. In the initial centuries of the rise of European dominance, Northwestern Europe constituted the core, Mediterranean Europe the semiperiphery, and Eastern Europe and the Western hemisphere (and parts of Asia) the periphery. Around 1450, Spain and Portugal took the early lead when conditions became right for a capitalist world-economy. They led the way in establishing overseas colonies. However, Portugal and Spain lost their lead, primarily by becoming overextended with empire-building. It became too expensive to dominate and protect so many colonial territories around the world.
The first state to gain clear dominance was the Netherlands in the 17th century, after its revolution led to a new financial system that many historians consider revolutionary. An impressive shipbuilding industry also contributed to their economic dominance through more exports to other countries. Eventually, other countries began to copy the financial methods and efficient production created by the Dutch. After the Dutch gained their dominant status, the standard of living rose, pushing up production costs.
Dutch bankers began to go outside of the country seeking profitable investments, and the flow of capital moved, especially to England. By the end of the 17th century, conflict among core states increased as a result of the economic decline of the Dutch. Dutch financial investment helped England gain productivity and trade dominance, and Dutch military support helped England to defeat France, the other country competing for dominance at the time.
In the 19th century, Britain replaced the Netherlands as the hegemon. As a result of the new British dominance, the world system became relatively stable again during the 19th century. The British began to expand globally, with many colonies in the New World, Africa, and Asia. The colonial system began to place a strain on the British military and, along with other factors, led to an economic decline. Again there was a great deal of core conflict after the British lost their clear dominance. This time it was Germany, and later Italy and Japan that provided the new threat.
Industrialization was another ongoing process during British dominance, resulting in the diminishing importance of the agricultural sector. In the 18th century, Britain was Europe's leading industrial and agricultural producer; by 1900, only 10% of England's population was working in the agricultural sector.
By 1900, the modern world system appeared very different from that of a century earlier in that most of the periphery societies had already been colonised by one of the older core states. In 1800, the old European core claimed 35% of the world's territory, but by 1914, it claimed 85%, with the Scramble for Africa closing out the imperial era. If a core state wanted periphery areas to exploit, as the Dutch and British had done, these areas had to be taken from another core state; this the US did by way of the Spanish–American War, and Germany, and later Japan and Italy, attempted to do in the lead-up to World War II. The modern world system was thus geographically global, and even the most remote regions of the world had been integrated into the global economy.
As countries vied for core status, so did the United States. The American Civil War led to more power for the Northern industrial elites, who were now better able to pressure the government for policies helping industrial expansion. Like the Dutch bankers, British bankers were putting more investment toward the United States. The US had a small military budget compared to other industrial states at the time.
The US began to take the place of the British as the new dominant state after World War I. With Japan and Europe in ruins after World War II, the US was able to dominate the modern world system more than any other country in history, while the USSR, and to a lesser extent China, were viewed as its primary threats. At its height, the US accounted for over half of the world's industrial production, owned two-thirds of the world's gold reserves, and supplied one-third of the world's exports.
However, since the end of the Cold War, the future of US hegemony has been questioned by some scholars, as its hegemonic position has been in decline for a few decades. By the end of the 20th century, the core of the wealthy industrialized countries was composed of Western Europe, the United States, Japan and a rather limited selection of other countries. The semiperiphery was typically composed of independent states that had not achieved Western levels of influence, while poor former colonies of the West formed most of the periphery.
== Criticisms ==
World-systems theory has attracted criticisms from its rivals; notably for being too focused on economy and not enough on culture and for being too core-centric and state-centric. William I. Robinson has criticized world-systems theory for its nation-state centrism, state-structuralist approach, and its inability to conceptualize the rise of globalization. Robinson suggests that world-systems theory does not account for emerging transnational social forces and the relationships forged between them and global institutions serving their interests. These forces operate on a global, rather than state system and cannot be understood by Wallerstein's nation-centered approach.
According to Wallerstein himself, critique of the world-systems approach comes from four directions: the positivists, the orthodox Marxists, the state autonomists, and the culturalists. The positivists criticize the approach as too prone to generalization, lacking quantitative data, and failing to put forth a falsifiable proposition. Orthodox Marxists find the world-systems approach deviating too far from orthodox Marxist principles, such as by not giving enough weight to the concept of social class. It is worth noting, however, that "[d]ependency theorists argued that [the beneficiaries of class society, the bourgeoisie,] maintained a dependent relationship because their private interests coincided with the interest of the dominant states." The state autonomists criticize the theory for blurring the boundaries between state and businesses. Further, the positivists and the state autonomists argue that the state should be the central unit of analysis. Finally, the culturalists argue that world-systems theory puts too much importance on the economy and not enough on culture. In Wallerstein's own words:
In short, most of the criticisms of world-systems analysis criticize it for what it explicitly proclaims as its perspective. World-systems analysis views these other modes of analysis as defective and/or limiting in scope and calls for unthinking them.
One of the fundamental conceptual problems of world-system theory is that its actual conceptual units are social systems; the assumptions which define them need to be examined, as well as how they are related to each other and how one changes into another. The essential argument of world-system theory is that in the 16th century a capitalist world economy developed, which could be described as a world system. The following is a theoretical critique concerned with the basic claims of the theory:
"There are today no socialist systems in the world-economy any more than there are feudal systems because there is only one world system. It is a world-economy and it is by definition capitalist in form."
Robert Brenner has pointed out that the prioritization of the world market means the neglect of local class structures and class struggles:
"They fail to take into account either the way in which these class structures themselves emerge as the outcome of class struggles whose results are incomprehensible in terms merely of market forces."
Another criticism is that of reductionism made by Theda Skocpol: she believes the interstate system is far from being a simple superstructure of the capitalist world economy:
"The international states system as a transnational structure of military competition was not originally created by capitalism. Throughout modern world history, it represents an analytically autonomous level [... of] world capitalism, but [is] not reducible to it."
A concept that can be seen partly as critique and mostly as renewal is that of coloniality (Anibal Quijano, 2000, Nepantla, "Coloniality of power, eurocentrism and Latin America"). Issued from the think tank of the "modernity/coloniality" group in Latin America, it re-uses the concepts of the world division of labor and the core/periphery system in its account of coloniality. But, criticizing the "core-centric" origin of world-systems theory and its exclusively economic focus, "coloniality" allows a further conception of how power still operates in a colonial way over worldwide populations (Ramon Grosfogel, "The epistemic decolonial turn", 2007): "by 'colonial situations' I mean the cultural, political, sexual, spiritual, epistemic and economic oppression/exploitation of subordinate racialized/ethnic groups by dominant racialized/ethnic groups with or without the existence of colonial administration". Coloniality covers, so far, several fields such as the coloniality of gender (Maria Lugones), the coloniality of "being" (Maldonado Torres), the coloniality of knowledge (Walter Mignolo) and the coloniality of power (Anibal Quijano).
== Related journals ==
Annales. Histoire, Sciences sociales
Ecology and Society
Journal of World-Systems Research
== See also ==
== References ==
== Further reading ==
Works of Samir Amin; especially 'Empire of Chaos' (1991) and 'Le developpement inegal. Essai sur les formations sociales du capitalisme peripherique' (1973)
Works of Giovanni Arrighi
Volker Bornschier in libraries (WorldCat catalog)
József Böröcz
(2005), 'Redistributing Global Inequality: A Thought Experiment', Economic and Political Weekly, February 26: 886–92.
(1992) 'Dual Dependency and Property Vacuum: Social Change in the State Socialist Semiperiphery' Theory & Society, 21:74-104.
Christopher K. Chase-Dunn in libraries (WorldCat catalog)
Andre Gunder Frank in libraries (WorldCat catalog)
Grinin, L., Korotayev, A. and Tausch A. (2016) Economic Cycles, Crises, and the Global Periphery. Springer International Publishing, Heidelberg, New York, Dordrecht, London, ISBN 978-3-319-17780-9.
Kohler, Gernot; Emilio José Chaves, eds. (2003). Globalization: Critical Perspectives. Hauppauge, New York: Nova Science Publishers. ISBN 1-59033-346-2. With contributions by Samir Amin, Christopher Chase-Dunn, Andre Gunder Frank, Immanuel Wallerstein. Pre-publication download of Chapter 5, "The European Union: global challenge or global governance? 14 world system hypotheses and two scenarios on the future of the Union", pages 93–196, by Arno Tausch, at http://edoc.vifapol.de/opus/volltexte/2012/3587/pdf/049.pdf.
Gotts, Nicholas M. (2007). "Resilience, Panarchy, and World-Systems Analysis". Ecology and Society. 12 (1). doi:10.5751/ES-02017-120124. hdl:10535/3271.
Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS, 2006. ISBN 5-484-00414-4.
Lenin, Vladimir, 'Imperialism, the Highest Stage of Capitalism'
Moore, Jason W. (2000). "Environmental Crises and the Metabolic Rift in World-Historical Perspective," Organization & Environment 13(2), 123–158.
Raffer K. (1993), ‘Trade, transfers, and development: problems and prospects for the twenty-first century’ Aldershot, Hants, England; Brookfield, Vt., USA: E. Elgar Pub. Co.
Raffer K. and Singer H.W. (1996), 'The Foreign Aid Business. Economic Assistance and Development Cooperation' Cheltenham and Brookfield: Edward Elgar.
Osvaldo Sunkel in libraries (WorldCat catalog)
Tausch A. and Christian Ghymers (2006), 'From the "Washington" towards a "Vienna Consensus"? A quantitative analysis on globalization, development and global governance'. Hauppauge, New York: Nova Science.
== External links ==
Fernand Braudel Center for the Study of Economies, Historical Systems and Civilizations closed as of June 2020
Review, A Journal of the Fernand Braudel Center
Institute for Research on World-Systems (IROWS), University of California, Riverside
World-Systems Archive
Working Papers in the World Systems Archive
World-Systems Archive Books
World-Systems Electronic Seminars
Preface to "ReOrient" by Andre Gunder Frank | Wikipedia/World-systems_theory |
Conversation theory is a cybernetic approach to the study of conversation, cognition and learning that may occur between two participants who are engaged in conversation with each other. It presents an experimental framework heavily utilizing human-computer interactions and computer theoretic models as a means to present a scientific theory explaining how conversational interactions lead to the emergence of knowledge between participants. The theory was developed by Gordon Pask, who credits Bernard Scott, Dionysius Kallikourdis, Robin McKinnon-Wood, and others during its initial development and implementation as well as Paul Pangaro during subsequent years.
== Overview ==
Conversation theory may be described as a formal theory of conversational process, as well as a theoretical methodology concerned with concept-forming and concept-sharing between conversational participants. It may be viewed as a framework for examining learning and development through conversational techniques and human-machine interactions, the results of which may then inform approaches to education, educational psychology, and epistemology. While the framework is interpretable as a psychological framework with educational applications (specifically, as a general framework for thinking about teaching and learning), Pask's motivation in developing the theory has been interpreted by some who worked closely with him as developing upon certain theoretical concerns regarding the nature of cybernetic inquiry.
The theory has been noted to have been influenced by a variety of psychological, pedagogical and philosophical sources, such as Lev Vygotsky, R. D. Laing and George H. Mead. Some authors suggest that the kind of human-machine learning interactions documented in conversation theory mirror Vygotsky's descriptions of the zone of proximal development and his descriptions of spontaneous and scientific concepts.
The theory prioritizes learning and teaching approaches related to education. A central idea of the theory is that learning occurs through conversations: For if participant A is to be conscious with participant B of a topic of inquiry, both participants must be able to converse with each other about that topic. Because of this, participants engaging in a discussion about a subject matter make their knowledge claims explicit through the means of such conversational interactions.
The theory is concerned with a variety of "psychological, linguistic, epistemological, social or non-commitally mental events of which there is awareness". Awareness in this sense is not of a person-specific type, i.e., it is not necessarily localized in a single participant. Instead, the type of awareness examined in conversation theory is the kind of joint awareness that may be shared between entities. While there is an acknowledgment of its similarities to phenomenology, the theory extends its analysis to examine cognitive processes. However, the concept of cognition is not viewed as merely being confined to an individual's brain or central nervous system. Instead, cognition may occur at the level of a group of people (leading to the emergence of social awareness), or may characterize certain types of computing machines.
Initial results from the theory led to a distinction in the types of learning strategies participants used during the learning process, whereby students in general gravitated towards holistic or serialist learning strategies (with the optimal mixture producing a versatile learning strategy).
=== Conversation ===
Following Hugh Dubberly and Paul Pangaro, a conversation in the context of conversation theory involves an exchange between two participants whereby each participant is contextualized as a learning system whose internal states are changed through the course of the conversation. What can be discussed through conversation, i.e., topics of discussion, are said to belong to a conversational domain.
Conversation is distinguished from the mere exchange of information as seen in information theory, by the fact that utterances are interpreted within the context of a given perspective of such a learning system. Each participant's meanings and perceptions change during the course of a conversation, and each participant can agree to commit to act in certain ways during the conversation. In this way, conversation permits not only learning but also collaboration through participants coordinating themselves and designating their roles through the means of conversation.
Since meanings are agreed during the course of a conversation, and since purported agreements can be illusory (whereby we think we have the same understanding of a given topic but in fact do not), an empirical approach to the study of conversation would require stable reference points during such conversational exchanges between peers so as to permit reproducible results. Using computer theoretical models of cognition, conversation theory can document these intervals of understanding that arise in the conversations between two participating individuals, such that the development of individual and collective understandings can be analyzed rigorously.
In this way, Pask has been argued to be an early pioneer of AI-based educational approaches, having proposed that advances in computational media may enable conversational forms of interaction to take place between man and machine.
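To make the participant-as-learning-system idea concrete, the following Python sketch (a minimal model invented for illustration; the class and method names are hypothetical, not Pask's) treats each participant as a system whose internal state changes with every utterance it interprets:

# Two participants modelled as learning systems (illustrative sketch only).
class Participant:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # internal state: topic -> current interpretation

    def utter(self, topic):
        # Externalize the current interpretation of a topic.
        return self.beliefs.get(topic, "no stable concept of " + topic)

    def interpret(self, topic, utterance):
        # Hearing an utterance changes internal state: this is what
        # distinguishes conversation from mere information transfer.
        self.beliefs[topic] = utterance

def converse(a, b, topic, rounds=2):
    for _ in range(rounds):
        b.interpret(topic, a.utter(topic))
        a.interpret(topic, b.utter(topic))

alice, bob = Participant("A"), Participant("B")
alice.beliefs["recursion"] = "a procedure defined in terms of itself"
converse(alice, bob, "recursion")
print(bob.utter("recursion"))  # B's state now reflects the exchange

Note that agreement in this sketch is only apparent string equality; in conversation theory proper, agreement over understandings must be tested, since purported agreements can be illusory.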
=== Language ===
The types of languages that conversation theory utilizes in its approach are distinguishable based on a language's role in relation to an experiment in which a conversation is examined as the subject of inquiry; thus, it follows that conversations can be conducted at different levels depending on the role a language has in relation to an experiment. The types of languages are as follows: natural languages, used for general discussion outside the experiment; object languages, which are the subject of inquiry during an experiment; and finally a metalanguage, which is used to talk about the design, management, and results of an experiment.
A natural language $L^{+}$ is treated as an unrestricted language used between a source (say, a participant) and an interrogator or analyst (say, an experimenter). For this reason, it may be considered a language for general discussion in the context of conversation theory.
An object language $L$, meanwhile, has some of the qualities of a natural language (it permits commands, questions, ostension and predication), but is used in conversation theory specifically as the language studied during experiments. Finally, the metalanguage $L^{*}$ is an observational language used by an interrogator or analyst for describing the conversational system under observation, prescribing actions that are permitted within such a system, and posing parameters regarding what may be discussed during an experiment under observation.
The object language $L$ differs from most formal languages by virtue of being "a command and question language[,] not an assertoric language like [a] predicate calculus". Moreover, $L$ is a language primarily dealing with metaphors indicating material analogies, not with the kind of propositions dealing with truth or falsity values. Since conversation theory specifically focuses on learning and development within human subjects, the object language is separated into two distinct modes of conversing.
Conversation theory conceptualises learning as the result of two integrated levels of control: the first level of control, designated $Lev\;0$, designates a set of problem-solving procedures which attempt to attain goals or subgoals, whereas the second level of control, designated $Lev\;1$, denotes various constructive processes that have been acquired by a student through maturation, imprinting and previous learning.
The object language $L$ is then demarcated in conversation theory based on these considerations, whereby it is split between $L^{0}$ and $L^{1}$ lines of inquiry, such that an object language is the ordered pair of such discourse types: $L=\langle L^{0},L^{1}\rangle$.
According to Bernard Scott, $L^{0}$ discourse of an object language may be conceptualized as the level of how, i.e., discourse that is concerned with "how to “do” a topic: how to recognize it, construct it, maintain it and so on". Meanwhile, $L^{1}$ discourse may be conceptualized as the level of why, i.e., it is discourse "concerned with explaining or justifying what a topic means in terms of other topics".
=== Concepts ===
A concept in conversation theory is conceived of as the production, reproduction, and maintenance of a given topic relation $R_{i}$ from other topic relations $R_{j}$, all belonging to a given conversational domain $R$. This implies $R_{i},R_{j}\in R$, where $i$ and $j$ represent numbers on a finite index. A concept must satisfy the twin condition that it must entail other topics ($R_{i}\vdash R_{j}$) and be entailed by them ($R_{j}\vdash R_{i}$).
A concept in the context of conversation theory is not a class, nor a description of a class, nor a stored description: instead, a concept is specifically used to reconstruct, reproduce or stabilize relations. Thus, if $R_{H}$ is the head topic of discussion, then $CON(R_{H})\rightarrow R_{H}$ implies that the concept of that relation produces, reproduces, and maintains that relation.
Now, a concept itself is considered to consist of the ordered pair containing a program and an interpretation:

$CON\triangleq \langle PROG,INTER\rangle$

whereby a program attempts to derive a given topic relation, while an interpretation refers to the compilation of that program. In other words, given a specific topic relation, a program attempts to derive that relation through a series of other topic relations, which are compiled in such a way as to derive the initial topic relation. A concept as defined above is considered to be an $L$-procedure, which is embodied by an underlying processor called an $L$-processor.
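As a rough illustration of the ordered-pair definition, consider the following Python sketch (the encoding of topic relations as predicates is an assumption made for illustration, not Pask's implementation; only the names CON, PROG and INTER mirror the notation above):

# A concept as an ordered pair <PROG, INTER>: a program for deriving a topic
# relation, plus the interpretation (compilation) that executes it (a sketch).
def make_concept(prog, inter):
    def con():
        return inter(prog)  # interpreting the program (re)produces the relation
    return con

P = lambda x: x % 2 == 0   # subordinate topic relation: "x is even"
Q = lambda x: x > 0        # subordinate topic relation: "x is positive"

prog_T = (P, Q)                                              # PROG: derive T from P and Q
inter_T = lambda prog: (lambda x: all(r(x) for r in prog))   # INTER: compile PROG

CON_T = make_concept(prog_T, inter_T)
T = CON_T()         # executing the concept produces the topic relation T
print(T(4), T(3))   # True False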
In this way, Pask envisages concepts as mental organisations that hold a hypothesis and seek to test that hypothesis in order to confirm or deny its validity. This notion of a concept has been noted as formally resembling the TOTE cycle discussed by Miller, Galanter and Pribram. The contents and structure that a concept might have at a given point in its continuous deformation can be represented through an entailment structure.
Such conceptual forms are said to be emergent through conversational interactions. They are encapsulated through entailment structures, which provide a way to visualize an organized and publicly available collection of resultant knowledge. Entailment structures may afford certain advantages compared to certain semantic network structures, as they force semantic relations to be expressed as belonging to coherent structures. An entailment structure is composed of a series of nodes and arrows representing a series of topic relations and the derivations of such topic relations. For example:
In the above illustration, let $T,P,Q\in \{R_{i}\}$, such that there are topic relations that are members of a set of topic relations. Each topic relation is represented by a node, and the entailment by the black arc. It follows that $\langle P,Q\rangle \vdash T$ in the case above, such that the topics P and Q entail the topic T.
Assuming we use the same derivation process for all topics in the above entailment structure, we are left with the product illustrated above: a minimal entailment mesh consisting of a triad of derivations, $\langle P,Q\rangle \vdash T$, $\langle T,Q\rangle \vdash P$, and $\langle P,T\rangle \vdash Q$. The solid arc indicates that a given head topic relation is derived from subordinate topics, whereas the arcs with dotted lines represent how the head topic may be used to derive other topics. Finally:
Two solid arcs permit alternative derivations of the topic T. This can be expressed as $\langle P,Q\rangle \lor \langle R,S\rangle \vdash T$, which reads: either the set containing P and Q, or the set containing R and S, entails T. Lastly, a formal analogy is shown where two topics T and T', belonging to two entailment meshes, are demonstrated to have a one-to-one correspondence with each other. The diamond shape $R$ denotes the analogy relation that can be claimed to exist between any three topics of each entailment mesh.
The relation of one topic T to another T' by an analogy can also be seen as being based on an isomorphism $\Leftrightarrow$ and a semantic distinction $/$ between two individual universes of interpretation $\mathbb{U}$. Assuming an analogy holds for two topics in two distinct entailment meshes, it should hold for all if the analogy is to be considered coherent and stable.
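One way to picture an entailment mesh in code is the following hedged sketch (the dictionary encoding is an assumption made for illustration; Pask did not prescribe a data structure), which stores, for each topic, its alternative derivation sets:

# Entailment mesh as topic -> list of alternative derivation sets (a sketch).
mesh = {
    "T": [{"P", "Q"}, {"R", "S"}],  # <P,Q> v <R,S> |- T (alternative derivations)
    "P": [{"T", "Q"}],
    "Q": [{"P", "T"}],
}

def derivable(topic, known, mesh):
    # A topic is derivable when every member of at least one of its
    # derivation sets is already known.
    return any(ds <= known for ds in mesh.get(topic, []))

print(derivable("T", {"R", "S"}, mesh))  # True: the alternative derivation fires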
=== Cognitive Reflector ===
From conversation theory, Pask developed what he called a "Cognitive Reflector". This is a virtual machine for selecting and executing concepts or topics from an entailment mesh shared by at least a pair of participants. It features an external modelling facility on which agreement between, say, a teacher and pupil may be shown by reproducing public descriptions of behaviour. We see this in essay and report writing or the "practicals" of science teaching.
Lp was Pask's protolanguage, which produced operators like Ap that concurrently execute the concept, Con, of a topic, T, to produce a description, D. Thus:
Ap(Con(T)) => D(T), where => stands for produces.
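Read as code, this production might look like the following sketch (the Python rendering and the function body are illustrative assumptions; only the names Ap, Con, T and D come from Pask's notation):

# Sketch of the Ap operator: executing the concept of a topic T
# produces a public description D(T).
def Ap(con_T):
    relation = con_T()                # execute the concept Con(T)
    return "D(T): " + repr(relation)  # the execution yields a description

print(Ap(lambda: "topic T reconstructed"))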
A succinct account of these operators is presented by Pask. Amongst many insights, he points out that three indexes are required for concurrent execution, two for parallel execution, and one to designate a serial process. He subsumes this complexity by designating participants A, B, etc.
In a commentary toward the end of the same work, Pask states:
The form not the content of the theories (conversation theory and interactions of actors theory) return to and is congruent with the forms of physical theories; such as wave particle duality (the set theoretic unfoldment part of conversation theory is a radiation and its reception is the interpretation by the recipient of the descriptions so exchanged, and vice versa). The particle aspect is the recompilation by the listener of what a speaker is saying. Theories of many universes, one at least for each participant A and one to participant B- are bridged by analogy. As before this is the truth value of any interaction; the metaphor for which is culture itself.
=== Learning strategies ===
In order to facilitate learning, Pask argued that subject matter should be represented in the form of structures which show what is to be learned. These structures exist in a variety of different levels depending upon the extent of the relationships displayed. The critical method of learning according to Conversation Theory is "teachback" in which one person teaches another what they have learned.
Pask identified two different types of learning strategies:
Serialists – Progress through a structure in a sequential fashion
Holists – Look for higher order relations
The ideal is the versatile learner who is neither a vacuous holist ("globe trotter") nor a serialist who knows little of the context of his work.
In learning, at the stage where one's understanding converges or evolves, many cyberneticians describe the act of understanding as a closed loop. Instead of simply "taking in" new information, one goes back over one's understandings and pulls together information that was "triggered", forming a new connection. This connection becomes tighter, and one's understanding of a certain concept is solidified or "stable" (Pangaro, 2003). Furthermore, Gordon Pask emphasized that conflict is the basis for the notion of "calling for" additional information (Pangaro, 1992).
According to Entwistle, the experiments which led to the investigation of the phenomena later denoted by the term learning strategy came about through the implementation of a variety of learning tasks. Initially, this was done by utilising either CASTE, INTUITION, or the Clobbits pseudo-taxonomy. However, given issues resulting from either the time-consuming nature of operating the experiments or the inexactness of experimental conditions, new tests were created in the form of the Spy Ring History test and the Smuggler's test. The former involved a participant having to learn the history of a fictitious spy ring (in other words, the history of a fictitious espionage network): the participant had to learn about the history of five spies in three countries over a period of five years. The comprehension learning component of the test involved learning the similarities and differences between a set of networks, whereas the operation learning aspect involved learning the role each spy played and the sequence of actions that spy performed over a given year.
While Entwistle noted difficulties regarding the length of such tests for groups of students engaged in the Spy Ring History test, the results of the test did seem to correspond with the types of learning strategies discussed. However, it has been noted that while the work of Pask and associates on learning styles has been influential in the development of both conceptual tools and methodology, the Spy Ring History test and the Smuggler's test may have been biased towards STEM students rather than humanities students in their implementation, with Entwistle arguing that the "rote learning of formulae and definitions, together with a positive reaction to solving puzzles and problems of a logical nature, are characteristics more commonly found in science than arts student".
== Applications ==
One potential application of conversation theory that has been studied and developed is as an alternative approach to common search-engine information retrieval algorithms. Unlike PageRank-like algorithms, which determine the priority of a search result based on how many hyperlinks on the web link to it, conversation theory has been used to apply a discursive approach to web search requests.
ThoughtShuffler is an attempt to build a search engine utilizing design principles from conversation theory: in this approach, terms that are input into a search request yield search results relating to other terms that derive, or help provide context to, the meaning of the first term, in a way that mimics the derivation of topics in an entailment structure. For example, given an input search term, a neighbourhood of corresponding terms that comprise the meaning of the first term may be suggested for the user to explore. In doing this, the search engine interface highlights snippets of webpages corresponding to the neighbourhood terms that help provide meaning to the first.
The aim of this design is to provide just enough information for a user to become curious about a topic, in order to induce the intention to explore other subtopics related to the main term input into the search engine.
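A toy sketch of this discursive approach (the entailment data, function name, and depth parameter are invented for illustration; this is not ThoughtShuffler's actual code):

# Toy discursive search: rather than ranking pages by inbound links,
# suggest the neighbourhood of terms that give the query term its meaning.
entailments = {
    "entropy": {"probability", "information", "thermodynamics"},
    "probability": {"measure", "event", "sample space"},
}

def neighbourhood(term, depth=1):
    frontier, seen = {term}, set()
    for _ in range(depth):
        frontier = set().union(*(entailments.get(t, set()) for t in frontier)) - seen
        seen |= frontier
    return seen

print(neighbourhood("entropy"))  # terms a curious user might explore next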
== Gallery ==
== See also ==
Conversational constraints theory
Analogy § Cybernetics
Gordon Pask § Interactions of Actors Theory
Integrative learning
Text and conversation theory
== Footnotes ==
== References ==
=== Citation Sources ===
== Further reading ==
Ranulph Glanville and Karl H. Muller (eds.), Gordon Pask, Philosopher Mechanic- An Introduction to the Cybernetician's Cybernetician edition echoraum 2007 ISBN 978-3-901941-15-3
Aleksej Heinze, Chris Procter, "Use of conversation theory to underpin blended learning", in: International Journal of Teaching and Case Studies (2007) – Vol. 1, No.1/2 pp. 108 – 120
W. R. Klemm, Software Issues for Applying Conversation Theory For Effective Collaboration Via the Internet, Manuscript 2002.
Gordon Pask, Conversation, cognition and learning. New York: Elsevier, 1975.
Gordon Pask, The Cybernetics of Human Learning and Performance, Hutchinson. 1975
Gordon Pask, Conversation Theory, Applications in Education and Epistemology, Elsevier, 1976.
Gordon Pask, Heinz von Foerster's Self-Organisation, the Progenitor of Conversation and Interaction Theories, 1996.
Scott, B. (ed. and commentary) (2011). Gordon Pask: The Cybernetics of Self-Organisation, Learning and Evolution. Papers 1960–1972. Edition Echoraum, 648 pp.
== External links ==
PDFs of Pask's books and key papers at pangaro.com
Conversation Theory – Gordon Pask overview from web.cortland.edu.
Cybernetics And Conversation by Paul Pangaro, 1994–2000.
Conversation Theory: Reasoning about significance and mutuality by Mike Martin and John Dobson,
Conversation Theory developed by the cybernetician Gordon Pask by Yitzhak I. Hayut-Man ea, 1995. | Wikipedia/Conversation_theory |
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.
Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole.
The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems.
== History ==
The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline.
When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF).
In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.
== Concept ==
Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, to identify new methods and research opportunities, similar to what occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor.
=== Origins and traditional scope ===
The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts.
=== Evolution to a broader scope ===
The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and a broader scope.
Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans are seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement."
Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering:
Product Systems Engineering (PSE) is the traditional systems engineering focused on the design of physical systems consisting of hardware and software.
Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems.
Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems.
=== Holistic view ===
Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into:
A Systems Engineering Technical Process
A Systems Engineering Management Process
Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan.
Although there are several models used in industry, depending on their application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback. Examples of such models include the Waterfall model and the VEE model (also called the V model).
=== Interdisciplinary field ===
System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item.
This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.
=== Managing complexity ===
The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system.
The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools can be seen here:
System architecture
System model, modeling, and simulation
Mathematical optimization
System dynamics
Systems analysis
Statistical analysis
Reliability engineering
Decision making
Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged.
=== Scope ===
The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team.
An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to a reduction in costs among other benefits. However, no quantitative survey at a larger scale encompassing a wide variety of industries has been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them.
Methods that allow the early detection of possible failures, as used in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it was first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's system dynamics method, and the Unified Modeling Language (UML), all currently being explored, evaluated, and developed to support the engineering decision process.
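For instance, a minimal stock-and-flow simulation in the spirit of Forrester's system dynamics might look like this Python sketch (the parameter values are arbitrary assumptions):

# Minimal system-dynamics sketch: one stock (inventory) with a constant
# inflow and a proportional outflow, stepped by Euler integration.
stock = 100.0       # initial inventory
production = 12.0   # inflow per time step
dt = 1.0            # time step

for t in range(10):
    shipments = 0.1 * stock                  # outflow proportional to the stock
    stock += (production - shipments) * dt   # feedback: the stock limits itself
    print("t=%d stock=%.1f" % (t + 1, stock))

The negative feedback loop (a larger stock produces larger shipments) drives the stock toward an equilibrium of 120, which is the kind of behavior such models are used to expose.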
== Education ==
Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering), plus practical, real-world experience, to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; degrees that include such material are most often presented as a BS in Industrial Engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either a MS/MEng or Ph.D./EngD degree.
INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively.
Education in systems engineering can be taken as systems-centric or domain-centric:
Systems-centric programs treat systems engineering as a separate discipline and most of the courses are taught focusing on systems engineering principles and practice.
Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering.
Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer.
== Systems engineering topics ==
Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning, to document production, neutral import/export, and more.
=== System ===
There are many definitions of what a system is in the field of systems engineering. Below are a few authoritative definitions:
ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose."
DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective."
IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products."
INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems."
INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected."
ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes."
NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system."
=== Systems engineering processes ===
Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions:
Task definition (informative definition)
Conceptual stage (cardinal definition)
Design stage (formative definition)
Implementation stage (manufacturing definition)
Depending on their application, tools are used for various stages of the systems engineering process:
=== Using models ===
Models play important and diverse roles in systems engineering. A model can be defined in several ways, including:
An abstraction of reality designed to answer specific questions about the real world
An imitation, analog, or representation of a real-world process or structure; or
A conceptual, mathematical, or physical tool to assist a decision-maker.
Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last.
The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, key to successful systems engineering activities are also the methods with which these models are efficiently and effectively managed and used to simulate the systems. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.
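As a simple illustration of such quantitative relationships, the sketch below combines two toy models, one for cost and one for effectiveness (every coefficient is invented for illustration, not drawn from any real trade study):

# Toy trade-study models: estimate cost and a figure of merit for candidate
# designs from known or estimable quantities (all coefficients illustrative).
def cost(mass_kg, power_w):
    return 5000 + 120 * mass_kg + 8 * power_w   # fixed plus variable costs

def effectiveness(antenna_gain_db, power_w):
    return antenna_gain_db * power_w / 100.0    # crude figure of merit

for mass, power, gain in [(40, 200, 12), (55, 150, 15)]:
    print(mass, power, gain, cost(mass, power), effectiveness(gain, power))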
=== Modeling formalisms and graphical representations ===
Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include:
Functional flow block diagram (FFBD)
Model-based design
Data flow diagram (DFD)
N2 chart
IDEF0 diagram
Use case diagram
Sequence diagram
Block diagram
Signal-flow graph
USL function maps and type maps
Enterprise architecture frameworks
A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or each of the above methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system.
Once the requirements are understood, it is now the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for a job. At this point starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.
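A weighted decision matrix reduces to a small computation, as in the following sketch (criteria, weights, and scores are made up for illustration):

# Weighted decision matrix (weights sum to 1; all data invented).
criteria = {"cost": 0.4, "reliability": 0.35, "schedule": 0.25}
options = {
    "Option A": {"cost": 7, "reliability": 9, "schedule": 6},
    "Option B": {"cost": 8, "reliability": 6, "schedule": 9},
}

for name, scores in options.items():
    total = sum(weight * scores[c] for c, weight in criteria.items())
    print(name, round(total, 2))  # A: 7.45, B: 7.55 -> B wins under these weights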
=== Other tools ===
==== Systems Modeling Language ====
Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification and validation of a broad range of complex systems.
==== Lifecycle Modeling Language ====
Lifecycle Modeling Language (LML), is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages.
== Related fields and sub-fields ==
Many related fields may be considered tightly coupled to systems engineering. The following areas have contributed to the development of systems engineering as a distinct entity:
=== Cognitive systems engineering ===
Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. Since its beginning, CSE has become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a joint cognitive system (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. More than 20 years of experience with CSE has been described extensively in the literature.
=== Configuration management ===
Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering: where systems engineering deals with requirements development, allocation to development items, and verification, configuration management deals with requirements capture, traceability to the development item, and audit of the development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or test and verification engineering have proven through objective testing.
=== Control engineering ===
Control engineering and its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.
=== Industrial engineering ===
Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as the mathematical, physical, and social sciences, to specify, predict, and evaluate the results obtained from such systems.
=== Production Systems Engineering ===
Production Systems Engineering (PSE) is an emerging branch of Engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design.
=== Interface design ===
Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features; mechanical, electrical, and logical interfaces should include provisions such as reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design and is a critical aspect of modern systems engineering. Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.
=== Mechatronic engineering ===
Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard, it is almost indistinguishable from Systems Engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice.
=== Operations research ===
Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.
=== Performance engineering ===
Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes.
=== Program management and project management ===
Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as an engineering support tool for assessing interdisciplinary concerns under a management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task, and the dependency links among tasks and their impacts across the system lifecycle, are systems engineering concerns.
=== Proposal engineering ===
Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal.
=== Reliability engineering ===
Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering.
=== Risk management ===
Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. In development, acquisition, or operational activities, including risk in tradeoffs with cost, schedule, and performance features involves iterative configuration management, traceability, and evaluation across domains and throughout the system lifecycle, all of which require the interdisciplinary technical approach of systems engineering. Systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management that is integrated into the overall effort.
=== Safety engineering ===
The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems.
=== Security engineering ===
Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes.
=== Software engineering ===
From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of Systems Engineering.
== See also ==
== References ==
== Further reading ==
Madhavan, Guru (2024). Wicked Problems: How to Engineer a Better World. New York: W.W. Norton & Company. ISBN 978-0-393-65146-1
Blockley, D.; Godfrey, P., Doing it Differently: Systems for Rethinking Infrastructure, Second Edition, ICE Publications, London, 2017.
Buede, D.M., Miller, W.D. The Engineering Design of Systems: Models and Methods, Third Edition, John Wiley and Sons, 2016.
Chestnut, H., Systems Engineering Methods. Wiley, 1967.
Gianni, D. et al. (eds.), Modeling and Simulation-Based Systems Engineering Handbook, CRC Press, 2014 at CRC
Goode, H.H., Robert E. Machol System Engineering: An Introduction to the Design of Large-scale Systems, McGraw-Hill, 1957.
Hitchins, D. (1997) World Class Systems Engineering at hitchins.net.
Lienig, J., Bruemmer, H., Fundamentals of Electronic Systems Design, Springer, 2017 ISBN 978-3-319-55839-4.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons. ISBN 978-1-118-58537-5
MITRE, The MITRE Systems Engineering Guide (pdf)
NASA (2007) Systems Engineering Handbook, NASA/SP-2007-6105 Rev1, December 2007.
NASA (2013) NASA Systems Engineering Processes and Requirements Archived 27 December 2016 at the Wayback Machine NPR 7123.1B, April 2013 NASA Procedural Requirements
Oliver, D.W., et al. Engineering Complex Systems with Models and Objects. McGraw-Hill, 1997.
Parnell, G.S., Driscoll, P.J., Henderson, D.L. (eds.), Decision Making in Systems Engineering and Management, 2nd. ed., Hoboken, NJ: Wiley, 2011. This is a textbook for undergraduate students of engineering.
Ramo, S., St.Clair, R.K. The Systems Approach: Fresh Solutions to Complex Problems Through Combining Science and Practical Common Sense, Anaheim, CA: KNI, Inc, 1998.
Sage, A.P., Systems Engineering. Wiley IEEE, 1992. ISBN 0-471-53639-3.
Sage, A.P., Olson, S.R., Modeling and Simulation in Systems Engineering, 2001.
SEBOK.org, Systems Engineering Body of Knowledge (SEBoK)
Shermon, D. Systems Cost Engineering, Gower Publishing, 2009
Shishko, R., et al. (2005) NASA Systems Engineering Handbook. NASA Center for AeroSpace Information, 2005.
Stevens, R., et al. Systems Engineering: Coping with Complexity. Prentice Hall, 1998.
US Air Force, SMC Systems Engineering Primer & Handbook, 2004
US DoD Systems Management College (2001) Systems Engineering Fundamentals. Defense Acquisition University Press, 2001
US DoD Guide for Integrating Systems Engineering into DoD Acquisition Contracts Archived 29 August 2017 at the Wayback Machine, 2006
US DoD MIL-STD-499 System Engineering Management
== External links ==
ICSEng homepage
INCOSE homepage
INCOSE UK homepage
PPI SE Goldmine homepage
Systems Engineering Body of Knowledge
Systems Engineering Tools
AcqNotes DoD Systems Engineering Overview
NDIA Systems Engineering Division
Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research. This multifaceted research domain necessitates the collaborative efforts of chemists, biologists, mathematicians, physicists, and engineers to decipher the biology of intricate living systems by merging various quantitative molecular measurements with carefully constructed mathematical models. It represents a comprehensive method for comprehending the complex relationships within biological systems. In contrast to conventional biological studies that typically center on isolated elements, systems biology seeks to combine different biological data to create models that illustrate and elucidate the dynamic interactions within a system. This methodology is essential for understanding the complex networks of genes, proteins, and metabolites that influence cellular activities and the traits of organisms. One of the aims of systems biology is to model and discover emergent properties of cells, tissues, and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. By exploring how function emerges from dynamic interactions, systems biology bridges the gaps that exist between molecules and physiological processes.
As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble)
As a series of operational protocols used for performing research, systems biology comprises a cycle of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then the use of the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models.
A comprehensive systems biology approach necessitates: (i) a thorough characterization of an organism concerning its molecular components, the interactions among these molecules, and how these interactions contribute to cellular functions; (ii) a detailed spatio-temporal molecular characterization of a cell (for example, component dynamics, compartmentalization, and vesicle transport); and (iii) an extensive systems analysis of the cell's 'molecular response' to both external and internal perturbations. Furthermore, the data from (i) and (ii) should be synthesized into mathematical models to test knowledge by generating predictions (hypotheses), uncovering new biological mechanisms, assessing the system's behavior derived from (iii), and ultimately formulating rational strategies for controlling and manipulating cells. To tackle these challenges, systems biology must incorporate methods and approaches from various disciplines that have not traditionally interfaced with one another. The emergence of multi-omics technologies has transformed systems biology by providing extensive datasets that cover different biological layers, including genomics, transcriptomics, proteomics, and metabolomics. These technologies enable the large-scale measurement of biomolecules, leading to a more profound comprehension of biological processes and interactions. Increasingly, methods such as network analysis, machine learning, and pathway enrichment are utilized to integrate and interpret multi-omics data, thereby improving our understanding of biological functions and disease mechanisms.
== History ==
=== Holism vs. reductionism ===
It is challenging to trace the origins and beginnings of systems biology. A comprehensive perspective on the human body was central to the medical practices of Greek, Roman, and East Asian traditions, where physicians and thinkers like Hippocrates believed that health and illness were linked to the equilibrium or disruption of bodily fluids known as humors. This holistic perspective persisted in the Western world throughout the 19th and 20th centuries, with prominent physiologists viewing the body as controlled by various systems, including the nervous system, the gastrointestinal system, and the cardiovascular system. In the latter half of the 20th century, however, this way of thinking was largely supplanted by reductionism: To grasp how the body functions properly, one needed to comprehend the role of each component, from tissues and cells to the complete set of intracellular molecular building blocks.
In the 17th century, the triumphs of physics and the advancement of mechanical clockwork prompted a reductionist viewpoint in biology, interpreting organisms as intricate machines made up of simpler elements.
Jan Smuts (1870–1950), naturalist/philosopher and twice Prime Minister of South Africa, coined the commonly used term holism. Whole systems such as cells, tissues, organisms, and populations were proposed to have unique (emergent) properties. The behavior of the whole could not be reassembled from the properties of the individual components alone, and new technologies were necessary to define and understand the behavior of systems.
Even though reductionism and holism are often contrasted with one another, they can be synthesized. One must understand how organisms are built (reductionism), while it is just as important to understand why they are so arranged (systems; holism). Each provides useful insights and answers different questions. However, the study of biological systems requires knowledge about control and design paradigms, as well as principles of structural stability, resilience, and robustness that are not directly inferred from mechanistic information. More profound insight will be gained by employing computer modeling to overcome the complexity in biological systems.
Nevertheless, this perspective was consistently balanced by thinkers who underscored the significance of organization and emergent traits in living systems. This reductionist perspective has achieved remarkable success, and our understanding of biological processes has expanded with incredible speed and intensity. However, alongside these extraordinary advancements, science gradually came to understand that possessing complete information about molecular components alone would not suffice to elucidate the workings of life: the individual components rarely illustrate the function of a complex system. It is now commonly recognized that we need approaches for reconstructing integrated systems from their constituent parts and processes if we are to comprehend biological phenomena and manipulate them in a thoughtful, focused way.
=== Origin of systems biology as a field ===
In 1968, the term "systems biology" was first introduced at a conference. Those within the discipline soon recognized—and this understanding gradually became known to the wider public—that computational approaches were necessary to fully articulate the concepts and potential of systems biology. Specifically, these techniques needed to view biological phenomena as complex, multi-layered, adaptive, and dynamic systems. They had to account for transformations and intricate nonlinearities, thereby allowing for the smooth integration of smaller models ("modules") into larger, well-organized assemblies of models within complex settings. It became clear that mathematics and computation were vital for these methods. An acceleration of systems understanding came with the publication of the first ground-breaking text compiling molecular, physiological, and anatomical individuality in animals, which has been described as a revolution.
Initially, the wider scientific community was reluctant to accept the integration of computational methods and control theory in the exploration of living systems, believing that "biology was too complex to apply mathematics." However, as the new millennium neared, this viewpoint underwent a significant and lasting transformation. More scientists started working on integrating mathematical concepts to understand and solve biological problems. Systems biology has now been widely applied in several fields, including agriculture and medicine.
== Approaches to systems biology ==
=== Top-down approach ===
Top-down systems biology identifies molecular interaction networks by analyzing the correlated behaviors observed in large-scale 'omics' studies. With the advent of 'omics', this top-down strategy has become a leading approach. It begins with an overarching perspective of the system's behavior – examining everything at once – by gathering genome-wide experimental data and seeks to unveil and understand biological mechanisms at a more granular level – specifically, the individual components and their interactions. In this framework of 'top-down' systems biology, the primary goal is to uncover novel molecular mechanisms through a cyclical process that initiates with experimental data, transitions into data analysis and integration to identify correlations among molecule concentrations, and concludes with the development of hypotheses regarding the co- and inter-regulation of molecular groups. These hypotheses then generate new predictions of correlations, which can be explored in subsequent experiments or through additional biochemical investigations. The notable advantages of top-down systems biology lie in its potential to provide comprehensive (i.e., genome-wide) insights and its focus on the metabolome, fluxome, transcriptome, and/or proteome. Top-down methods prioritize overall system states as influencing factors in models and the computational (or optimality) principles that govern the dynamics of the global system. For instance, while the dynamics of (neuro)motor control emerge from the interactions of millions of neurons, one can also characterize the neural motor system as a sort of feedback control system, which directs a 'plant' (the body) and guides movement by minimizing 'cost functions' (e.g., achieving trajectories with minimal jerk).
=== Bottom-up approach ===
Bottom-up systems biology infers the functional characteristics that may arise from a subsystem characterized with a high degree of mechanistic detail using molecular techniques. This approach begins with the foundational elements by developing the interactive behavior (rate equation) of each component process (e.g., enzymatic processes) within a manageable portion of the system. It examines the mechanisms through which functional properties arise in the interactions of known components. Subsequently, these formulations are combined to understand the behavior of the system. The primary goal of this method is to integrate the pathway models into a comprehensive model representing the entire system - the top or whole. As research and understanding advance, these models are often expanded by incorporating additional processes with high mechanistic detail.
The bottom-up approach facilitates the integration and translation of drug-specific in vitro findings to the in vivo human context. This encompasses data collected during the early phases of drug development, such as safety evaluations. When assessing cardiac safety, a purely bottom-up modeling and simulation method entails reconstructing the processes that determine exposure, which includes the plasma (or heart tissue) concentration-time profiles and their electrophysiological implications, ideally incorporating hemodynamic effects and changes in contractility. Achieving this necessitates various models, ranging from single-cell to advanced three-dimensional (3D) multiphase models. Information from multiple in vitro systems that serve as stand-ins for the in vivo absorption, distribution, metabolism, and excretion (ADME) processes enables predictions of drug exposure, while in vitro data on drug-ion channel interactions support the translation of exposure to body surface potentials and the calculation of important electrophysiological endpoints. The separation of data related to the drug, system, and trial design, which is characteristic of the bottom-up approach, allows for predictions of exposure-response relationships considering both inter- and intra-individual variability, making it a valuable tool for evaluating drug effects at a population level. Numerous successful instances of applying physiologically based pharmacokinetic (PBPK) modeling in drug discovery and development have been documented in the literature.
== Associated disciplines ==
According to the interpretation of systems biology as the analysis of large data sets with interdisciplinary tools, a typical application is metabolomics, the complete set of all the metabolic products (metabolites) in the system at the organism, cell, or tissue level.
Items that may be captured in a computer database include:
Phenomics: organismal variation in phenotype as it changes during its life span.
Genomics: organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (e.g., telomere length variation).
Epigenomics/epigenetics: organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (e.g., DNA methylation, histone acetylation and deacetylation).
Transcriptomics: organismal, tissue, or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression.
Interferomics: organismal, tissue, or cell-level transcript correcting factors (e.g., RNA interference).
Proteomics: organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry, or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry). Subdisciplines include phosphoproteomics, glycoproteomics, and other methods to detect chemically modified proteins.
Glycomics: organismal, tissue, or cell-level measurements of carbohydrates.
Lipidomics: organismal, tissue, or cell-level measurements of lipids.
The molecular interactions within the cell are also studied; this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules as well. Related disciplines include neuroelectrodynamics, in which a computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms, and fluxomics, the measurement of the rates of metabolic reactions in a biological system (cell, tissue, or organism).
In approaching a systems biology problem there are two main approaches: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network.
Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Mechanobiology studies forces and physical properties at all scales and their interplay with other regulatory mechanisms; biosemiotics analyzes the system of sign relations of an organism or other biosystems; physiomics is the systematic study of the physiome in biology.
Cancer systems biology is an example of the systems biology approach, which can be distinguished by the specific object of study (tumorigenesis and treatment of cancer). It works with the specific data (patient samples, high-throughput data with particular attention to characterizing the cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based gene knockdown high-throughput screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is the ability to better diagnose cancer, classify it, and better predict the outcome of a suggested treatment, which is a basis for personalized cancer medicine and, in the more distant future, a virtual cancer patient. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours.
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis).
Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suites; network-based approaches for analyzing high dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also includes identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed.
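As a minimal sketch of the weighted correlation network idea mentioned above (not the full WGCNA package), gene–gene correlations can be soft-thresholded by raising their absolute values to a power so that the resulting adjacency matrix approximates a scale-free network; the expression data and the power value below are hypothetical.

```python
import numpy as np

# Hypothetical expression matrix: rows are samples, columns are genes.
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 5))

# Pairwise Pearson correlation between genes (columns).
corr = np.corrcoef(expr, rowvar=False)

# Soft thresholding: raising |correlation| to a power beta suppresses
# weak correlations; beta = 6 is a commonly used default in WGCNA.
beta = 6
adjacency = np.abs(corr) ** beta
np.fill_diagonal(adjacency, 0.0)  # no self-connections

# Connectivity (sum of edge weights per gene); high values suggest hubs.
print(adjacency.sum(axis=0))
```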
== Model and its types ==
=== What is a model? ===
A model serves as a conceptual depiction of objects or processes, highlighting certain characteristics of these items or activities. A model captures only certain facets of reality; however, when created correctly, this limited scope is adequate because the primary goal of modeling is to address specific inquiries. The saying, "essentially, all models are wrong, but some are useful," attributed to the statistician George Box, is a suitable principle for constructing models.
=== Types of models ===
Boolean Models: These models are also known as logical models and represent biological systems using binary states, allowing for the analysis of gene regulatory networks and signaling pathways. They are advantageous for their simplicity and ability to capture qualitative behaviors.
Petri nets (PN): A unique type of bipartite graph consisting of two types of nodes: places and transitions. When a transition is activated, a token is transferred from the input places to the output places; the process is asynchronous and non-deterministic.
Polynomial dynamical systems (PDS): An algebraically based approach that represents a specific type of sequential FDS (Finite Dynamical System) operating over a finite field. Each transition function is an element within a polynomial ring defined over the finite field. It employs advanced rapid techniques from computer algebra and computational algebraic geometry, originating from the Buchberger algorithm, to compute the Gröbner bases of ideals in these rings. An ideal consists of a set of polynomials that remain closed under polynomial combinations.
Differential equation models (ODE and PDE): Ordinary Differential Equations (ODEs) are commonly utilized to represent the temporal dynamics of networks, while Partial Differential Equations (PDEs) are employed to describe behaviors occurring in both space and time, enabling the modeling of pattern formation. These spatiotemporal Diffusion-Reaction Systems demonstrate the emergence of self-organizing patterns, typically articulated by the general local activity principle, which elucidates the factors contributing to complexity and self-organization observed in nature.
Bayesian models: This kind of model is commonly referred to as dynamic models. It utilizes a probabilistic approach that enables the integration of prior knowledge through Bayes' Theorem. A challenge can arise when determining the direction of an interaction.
Finite State Linear Model (FSLM): This model integrates continuous variables (such as protein concentration) with discrete elements (like promoter regions that have a limited number of states) in modeling.
Agent-based models (ABM): Initially created within the fields of social sciences and economics, it models the behavior of individual agents (such as genes, mRNAs (siRNA, miRNA, lncRNA), proteins, and transcription factors) and examines how their interactions influence the larger system, which in this case is the cell.
Rule-based models: In this approach, molecular interactions are simulated using local rules that can be utilized even in the absence of a specific network structure, meaning that the step to infer the network is not required, allowing these network-free methods to avoid the complex challenges associated with network inference.
Piecewise-linear differential equation models (PLDE): The model is composed of a piecewise-linear representation of differential equations using step functions, along with a collection of inequality restrictions for the parameter values.
Stochastic models: Models utilizing the Gillespie algorithm for addressing the chemical master equation provide the likelihood that a particular molecular species will possess a defined molecular population or concentration at a specified future point in time. The Gillespie method is the most computationally intensive option available. In cases where the number of molecules is low or when modeling the effects of molecular crowding is desired, the stochastic approach is preferred. A minimal sketch of the Gillespie method follows this list.
State Space Model (SSM): Linear or non-linear modeling techniques that utilize an abstract state space along with various algorithms, which include Bayesian and other statistical methods, autoregressive models, and Kalman filtering.
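As an illustration of the stochastic models item above, the following is a minimal Python sketch of the Gillespie direct method for a single degradation reaction A → ∅; the rate constant and the initial molecule count are hypothetical.

```python
import random

# Gillespie direct method for one reaction, A -> 0 (degradation),
# with a hypothetical rate constant k and initial molecule count.
k, a, t, t_end = 0.1, 100, 0.0, 50.0
trajectory = [(t, a)]

while a > 0 and t < t_end:
    propensity = k * a                   # total reaction propensity
    t += random.expovariate(propensity)  # exponentially distributed wait
    a -= 1                               # fire the (only) reaction
    trajectory.append((t, a))

print(trajectory[:5])  # first few (time, molecule count) events
```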
=== Creating biological models ===
Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations. These parameter values will be the various kinetic constants required to fully describe the model. The model determines the behavior of species in biological systems and brings new insight into the specific activities of systems. Sometimes it is not possible to gather all reaction rates of a system. Unknown reaction rates are determined by simulating the model with known parameters and target behavior, which provides possible parameter values.
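As a minimal sketch of this workflow, the following models a single reversible reaction A ⇌ B under mass-action kinetics; the rate constants are hypothetical stand-ins for values that would normally come from experiments or parameter fitting.

```python
from scipy.integrate import solve_ivp

# Mass-action kinetics for a reversible reaction A <-> B.
# Mass conservation gives one differential equation per species.
k_f, k_r = 0.3, 0.1  # hypothetical forward and reverse rate constants

def rhs(t, y):
    a, b = y
    flux = k_f * a - k_r * b  # net rate of conversion of A into B
    return [-flux, flux]

# Start with all material as species A and integrate toward steady state.
sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0])
print(sol.y[:, -1])  # concentrations approach the ratio b/a = k_f/k_r
```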
The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict the metabolic phenotypes, using genome-scale models. One of the methods is the flux balance analysis (FBA) approach, by which one can study the biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing the objective function of interest (e.g. maximizing biomass production to predict growth).
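As a minimal sketch of the FBA idea on a hypothetical three-reaction network (uptake → A, A → B, B → biomass), the biomass flux is maximized subject to the steady-state constraint S·v = 0 and flux bounds; genome-scale studies would normally use a dedicated COBRA toolbox rather than a raw linear-programming call.

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S for a toy network; rows are metabolites,
# columns are the reactions v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass).
S = np.array([[1, -1,  0],   # metabolite A: made by v1, consumed by v2
              [0,  1, -1]])  # metabolite B: made by v2, consumed by v3

c = [0, 0, -1]  # maximize biomass flux v3 (linprog minimizes, so negate)
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal fluxes; biomass flux equals the uptake cap of 10
```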
== Tools and databases ==
== Applications of systems biology ==
Systems biology, an interdisciplinary field that combines biology, data analysis, and mathematical modeling, has revolutionized various sectors, including medicine, agriculture, and environmental science. By integrating omics data (genomics, proteomics, metabolomics, etc.), systems biology provides a holistic understanding of complex biological systems, enabling advancements in drug discovery, crop improvement, and environmental impact assessment. This section explores the applications of systems biology across these domains, highlighting both industrial and academic research contributions. Systems biology is used in agriculture to identify the genetic and metabolic components of complex characteristics through trait dissection. It aids in the comprehension of plant-pathogen interactions in disease resistance. It is utilized in nutritional quality to enhance nutritional content through metabolic engineering.
=== Cancer ===
Approaches to cancer systems biology have made it possible to effectively combine experimental data with computer algorithms and, in some cases, to apply actionable targeted medicines for the treatment of cancer. In order to apply innovative cancer systems biology techniques and boost their effectiveness for customizing new, individualized cancer treatment modalities, comprehensive multi-omics data acquired through the sequencing of tumor samples and experimental model systems will be crucial.
Cancer systems biology has the potential to provide insights into intratumor heterogeneity and identify therapeutic options. In particular, enhanced cancer systems biology methods that incorporate not only multi-omics data from tumors, but also extensive experimental models derived from patients can assist clinicians in their decision-making processes, ultimately aiming to address treatment failures in cancer.
=== Drug development ===
Before the 1990s, phenotypic drug discovery formed the foundation of most research in drug discovery, utilizing cellular and animal disease models to find drugs without focusing on a specific molecular target. However, following the completion of the human genome project, target-based drug discovery has become the predominant approach in contemporary pharmaceutical research for various reasons. Gene knockout and transgenic models enable researchers to investigate and gain insights into the function of targets and the mechanisms by which drugs operate on a molecular level. Target-based assays lend themselves better to high-throughput screening, which simplifies the process of identifying second-generation drugs—those that improve upon first-in-class drugs in aspects such as potency, selectivity, and half-life, especially when combined with structure-based drug design. To do this, researchers utilize the three-dimensional structure of target proteins and computational models of interactions between small molecules and those targets to aid in the identification of superior compounds.
Cell systems biology represents a phenotypic drug discovery method that integrates the complexity of human disease biology with combinatorial design to develop assays. BioMAP® systems, founded on the principles of cell systems biology, consist of assays based on primary human cells that are designed to replicate intricate human disease and tissue biology in a feasible in vitro environment. Primary human cell types and co-cultures are activated using combinations of pathway activators to create cell signaling networks that align more closely with human disease. These systems are analyzed by assessing the levels of both secreted proteins and cell surface mediators. The distinct variations in protein readouts resulting from drug effects are recorded in a database that enables users to search for functional similarities (or biological 'read across'). In this method, inhibitors or activators targeting specific pathways are discovered to consistently affect the levels of multiple endpoints, often exhibiting a uniquely defined pattern, so that the resulting signatures can be linked to particular mechanisms of action.
=== Food safety and quality ===
The multi-omics technologies in systems biology can also be used in aspects of food quality and safety. High-throughput omics techniques, including genomics, proteomics, and metabolomics, offer valuable insights into the molecular composition of food products, facilitating the identification of critical elements that affect food quality and safety. For example, integrating omics data can enhance the understanding of the metabolic pathways and associated functional gene patterns that contribute to both the nutritional value and safety of food crops. This comprehensive approach supports the creation of food products that are both nutritious and safe, capable of satisfying the increasing global demand.
=== Environmental systems biology ===
Genomics examines all genes as an evolving system over time, aiming to understand their interactions and effects on biological pathways, networks, and physiology in a broader context compared to genetics. As a result, genomics holds significant potential for discovering clusters of genes associated with complex disorders, aiding in the comprehension and management of diseases induced by environmental factors.
When exploring the interactions between the environment and the genome as contributors to complex diseases, it is clear that the genome itself cannot be altered for the time being. However, once these interactions are recognized, it is feasible to minimize exposure or adjust lifestyle factors related to the environmental aspect of the disease. Gene-environment interactions can occur through direct associations with active metabolites at certain locations within the genome, potentially leading to mutations that could cause human diseases. Indirect interactions with the human genome can take place through intracellular receptors that function as ligand-activated transcription factors, which modulate gene expression and maintain cellular balance, or with an environmental factor that may produce detrimental effects. This type of environmental-gene interaction could be more straightforward to investigate than direct interactions since there are numerous markers of this kind of interaction that are readily measurable before the disease manifests. Examples of this include the expression of cytochrome P450 genes following exposure to environmental substances, such as the polycyclic aromatic hydrocarbon benzo[a]pyrene, which binds to the Ah receptor.
== Technical challenges ==
One of the main challenges in systems biology is the connection between experimental descriptions, observations, data, models, and the assumptions that stem from them. In essence, systems biology must be understood within an information management framework that significantly encompasses experimental life sciences. Models are created using various languages or representation schemes, each suitable for conveying and reasoning about distinct sets of characteristics. There is no single universal language for systems biology that can adequately cover the diverse phenomena we aim to investigate. However, this intricate scenario overlooks two important aspects. Models can be developed in multiple versions over time and by different research teams. Conflicts can occur, and observations may be disputed. Various researchers might produce models in different versions and configurations. The unpredictable elements suggest that systems biology is not likely to yield a definitive collection of established models. Instead, we can expect a rich ecosystem of models to develop within a structure that fosters discussion and cooperation among participants. Challenges also exist in verifying the constraints and creating modeling frameworks with robust compositional strategies. This may create a need to handle models that may conflict with one another, whether between schemes or across different scales. In the end, the goal could involve the creation of personalized models that reflect differences in physiology, as opposed to universal models of biological processes.
Other challenges include the massive amount of data created by high-throughput omics technologies, which places considerable demands on computation and storage. Each analysis in omics can result in data files ranging from terabytes to petabytes, which requires strong computational systems and ample storage solutions to manage and process these datasets effectively. The computational requirements are made more difficult by the necessity for advanced algorithms that can integrate and analyze diverse, high-dimensional data. Approaches like deep learning and network-based methods have displayed potential in tackling these issues, but they also demand significant computational power.
== Artificial intelligence (AI) in systems biology ==
Utilizing AI in Systems Biology enables scientists to uncover novel insights into the intricate relationships present within biological systems, such as those among genes, proteins, and cells. A significant focus within Systems Biology is the application of AI for the analysis of expansive and complex datasets, including multi-omics data produced by high-throughput methods like next-generation sequencing and proteomics. Approaches powered by AI can be employed to detect patterns and correlations within these datasets and to anticipate the behavior of biological systems under varying conditions.
For instance, artificial intelligence can identify genes that are expressed differently across various cancer types or detect small molecules linked to particular disease states. A key difficulty in analyzing multi-omics data is the integration of information from multiple sources. AI can create integrative models that consider the intricate interactions between different types of molecular data. These models may be utilized to uncover new biomarkers or therapeutic targets for diseases, as well as to enhance our understanding of fundamental biological processes. By significantly speeding up our comprehension of complex biological systems, AI has the potential to lead to new treatments and therapies for a range of diseases.
Structural systems biology is a multidisciplinary field that merges systems biology with structural biology to investigate biological systems at the molecular scale. This domain strives for a thorough understanding of how biological molecules interact and function within cells, tissues, and organisms. The integration of AI in structural systems biology has become increasingly vital for examining extensive and complex datasets and modeling the behavior of biological systems. AI facilitates the analysis of protein–protein interaction networks within structural systems biology. These networks can be explored using graph theory and various mathematical methods, uncovering key characteristics such as hubs and modules. AI can also assist in the discovery of new drugs or therapies by predicting the effect of a drug on a particular biological component or pathway.
== See also ==
== References ==
== Further reading ==
Klipp, Edda; Liebermeister, Wolfram; Wierling, Christoph; Kowald, Axel (2016). Systems Biology - A Textbook, 2nd edition. Wiley. ISBN 978-3-527-33636-4.
Asfar S. Azmi, ed. (2012). Systems Biology in Cancer Research and Drug Discovery. Springer. ISBN 978-94-007-4819-4.
Kitano, Hiroaki (15 October 2001). Foundations of Systems Biology. MIT Press. ISBN 978-0-262-11266-6.
Werner, Eric (29 March 2007). "All systems go". Nature. 446 (7135): 493–494. Bibcode:2007Natur.446..493W. doi:10.1038/446493a. provides a comparative review of three books:
Alon, Uri (7 July 2006). An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman & Hall. ISBN 978-1-58488-642-6.
Kaneko, Kunihiko (15 September 2006). Life: An Introduction to Complex Systems Biology. Springer-Verlag. Bibcode:2006lics.book.....K. ISBN 978-3-540-32666-3.
Palsson, Bernhard O. (16 January 2006). Systems Biology: Properties of Reconstructed Networks. Cambridge University Press. ISBN 978-0-521-85903-5.
Werner Dubitzky; Olaf Wolkenhauer; Hiroki Yokota; Kwang-Hyun Cho, eds. (13 August 2013). Encyclopedia of Systems Biology. Springer-Verlag. ISBN 978-1-4419-9864-4.
== External links ==
Media related to Systems biology at Wikimedia Commons
Biological Systems in bio-physics-wiki
Modelling biological systems is a significant task of systems biology and mathematical biology. Computational systems biology aims to develop and use efficient algorithms, data structures, visualization and communication tools with the goal of computer modelling of biological systems. It involves the use of computer simulations of biological systems, including cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks), to both analyze and visualize the complex connections of these cellular processes.
An unexpected emergent property of a complex system may be a result of the interplay of the cause-and-effect among simpler, integrated parts (see biological organisation). Biological systems manifest many important examples of emergent properties in the complex interplay of components. Traditional study of biological systems requires reductive methods in which quantities of data are gathered by category, such as concentration over time in response to a certain stimulus. Computers are critical to analysis and modelling of these data. The goal is to create accurate real-time models of a system's response to environmental and internal stimuli, such as a model of a cancer cell in order to find weaknesses in its signalling pathways, or modelling of ion channel mutations to see effects on cardiomyocytes and in turn, the function of a beating heart.
== Standards ==
By far the most widely accepted standard format for storing and exchanging models in the field is the Systems Biology Markup Language (SBML). The SBML.org website includes a guide to many important software packages used in computational systems biology. A large number of models encoded in SBML can be retrieved from BioModels. Other markup languages with different emphases include BioPAX, CellML and MorpheusML.
== Particular tasks ==
=== Cellular model ===
Creating a cellular model has been a particularly challenging task of systems biology and mathematical biology. It involves the use of computer simulations of the many cellular subsystems, such as the networks of metabolites and enzymes which comprise metabolism, and the transcription, translation, regulation, and induction of gene regulatory networks.
The complex network of biochemical reaction/transport processes and their spatial organization make the development of a predictive model of a living cell a grand challenge for the 21st century, listed as such by the National Science Foundation (NSF) in 2006.
A whole cell computational model for the bacterium Mycoplasma genitalium, including all its 525 genes, gene products, and their interactions, was built by scientists from Stanford University and the J. Craig Venter Institute and published on 20 July 2012 in Cell.
A dynamic computer model of intracellular signaling was the basis for Merrimack Pharmaceuticals to discover the target for their cancer medicine MM-111.
Membrane computing is the task of modelling specifically a cell membrane.
=== Multi-cellular organism simulation ===
An open source simulation of C. elegans at the cellular level is being pursued by the OpenWorm community. So far the physics engine Geppetto has been built, and models of the neural connectome and a muscle cell have been created in the NeuroML format.
=== Protein folding ===
Protein structure prediction is the prediction of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of a protein's tertiary structure from its primary structure. It is one of the most important goals pursued by bioinformatics and theoretical chemistry. Protein structure prediction is of high importance in medicine (for example, in drug design) and biotechnology (for example, in the design of novel enzymes). Every two years, the performance of current methods is assessed in the CASP experiment.
=== Human biological systems ===
==== Brain model ====
The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of this project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, is to study the brain's architectural and functional principles. The project is headed by the Institute's director, Henry Markram. Using a Blue Gene supercomputer running Michael Hines's NEURON software, the simulation does not consist simply of an artificial neural network, but involves a partially biologically realistic model of neurons. It is hoped by its proponents that it will eventually shed light on the nature of consciousness.
There are a number of sub-projects, including the Cajal Blue Brain, coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories in the UK, U.S., and Israel. The Human Brain Project builds on the work of the Blue Brain Project. It is one of six pilot projects in the Future Emerging Technologies Research Program of the European Commission, competing for one billion euros of funding.
==== Model of the immune system ====
The last decade has seen the emergence of a growing number of simulations of the immune system.
==== Virtual liver ====
The Virtual Liver project is a 43 million euro research program funded by the German Government, made up of seventy research groups distributed across Germany. The goal is to produce a virtual liver, a dynamic mathematical model that represents human liver physiology, morphology, and function.
=== Tree model ===
Electronic trees (e-trees) usually use L-systems to simulate growth. L-systems are very important in the field of complexity science and A-life.
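As a minimal sketch of how an L-system grows a structure by parallel string rewriting, the following implements Lindenmayer's original algae rules (A → AB, B → A); interpreting the resulting string as branch geometry is a separate rendering step.

```python
# Lindenmayer's algae L-system: every symbol is rewritten in parallel
# at each step according to the production rules.
rules = {"A": "AB", "B": "A"}

def rewrite(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(rewrite("A", 5))  # ABAABABAABAAB; lengths follow the Fibonacci numbers
```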
A universally accepted system for describing changes in plant morphology at the cellular or modular level has yet to be devised.
The most widely implemented tree generating algorithms are described in the papers "Creation and Rendering of Realistic Trees" and "Real-Time Tree Rendering".
=== Ecological models ===
Ecosystem models are mathematical representations of ecosystems. Typically they simplify complex foodwebs down to their major components or trophic levels, and quantify these as either numbers of organisms, biomass or the inventory/concentration of some pertinent chemical element (for instance, carbon or a nutrient species such as nitrogen or phosphorus).
=== Models in ecotoxicology ===
The purpose of models in ecotoxicology is the understanding, simulation and prediction of effects caused by toxicants in the environment. Most current models describe effects on one of many different levels of biological organization (e.g. organisms or populations). A challenge is the development of models that predict effects across biological scales. Ecotoxicology and models discusses some types of ecotoxicological models and provides links to many others.
=== Modelling of infectious disease ===
It is possible to model the progress of most infectious diseases mathematically to discover the likely outcome of an epidemic or to help manage them by vaccination. This field tries to find parameters for various infectious diseases and to use those parameters to make useful calculations about the effects of a mass vaccination programme.
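A minimal sketch of such a calculation is the classic SIR compartment model below; the transmission rate beta and recovery rate gamma are hypothetical, and their ratio beta/gamma is the basic reproduction number R0 that a mass vaccination programme aims to push below one.

```python
from scipy.integrate import solve_ivp

# SIR epidemic model with hypothetical parameters; y holds the
# susceptible, infectious, and recovered fractions of the population.
beta, gamma = 0.3, 0.1  # transmission and recovery rates, so R0 = 3

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# Start with 1% of the population infectious and integrate the outbreak.
sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0], max_step=1.0)
print(sol.y[:, -1])  # final susceptible/infectious/recovered fractions
```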
== See also ==
Biological data visualization
Biosimulation
Gillespie algorithm
Molecular modelling software
Stochastic simulation
== Notes ==
== References ==
== Sources ==
Antman, S. S.; Marsden, J. E.; Sirovich, L., eds. (2009). Mathematical Physiology (2nd ed.). New York, New York: Springer. ISBN 978-0-387-75846-6.
Barnes, D.J.; Chu, D. (2010), Introduction to Modelling for Biosciences, Springer Verlag
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White. An introductory book on infectious disease modelling and its applications.
== Further reading ==
== External links ==
The Center for Modeling Immunity to Enteric Pathogens (MIEP) | Wikipedia/Computational_systems_biology |
Systems art is art influenced by cybernetics and systems theory, reflecting on natural systems, social systems, and the social signs of the art world itself.
Systems art emerged as part of the first wave of the conceptual art movement in the 1960s and 1970s. Closely related and overlapping terms include anti-form movement, cybernetic art, generative systems, process art, systems aesthetic, systemic art, systemic painting, and systems sculpture.
== Related fields of systems art ==
=== Anti-form movement ===
By the early 1960s, minimalism had emerged as an abstract movement in art, with roots in geometric abstraction via Malevich, the Bauhaus, and Mondrian. This movement rejected the ideas of relational and subjective painting, the complexity of abstract expressionist surfaces, and the emotional zeitgeist and polemics present in action painting. Minimalism argued that extreme simplicity could capture all of the sublime representation needed in art. The term Systemic art was coined by Lawrence Alloway in 1966 to describe the method that artists such as Kenneth Noland, Al Held, and Frank Stella were using to compose abstract paintings.
Associated with painters such as Frank Stella, minimalism in painting, as opposed to other areas, is a modernist movement. Depending on the context, minimalism might be construed as a precursor to the postmodern movement. Some writers classify it as a postmodern movement, noting that early minimalism began and succeeded as a modernist movement, producing advanced works but partially abandoning this project when some artists shifted towards the anti-form movement.
In the late 1960s, the term postminimalism was coined by Robert Pincus-Witten to describe minimalist-derived art that incorporated content and contextual overtones that minimalism had rejected. This term was applied to the work of Eva Hesse, Keith Sonnier, Richard Serra, and new work by former minimalists such as Robert Smithson, Robert Morris, Bruce Nauman, Sol LeWitt, Barry Le Va, and others. Minimalists like Donald Judd, Dan Flavin, Carl Andre, Agnes Martin, John McCracken, and others continued to produce their late modernist paintings and sculptures for the remainder of their careers.
=== Cybernetic art ===
Audio feedback, tape loops, sound synthesis, and computer-generated compositions reflect a cybernetic awareness of information, systems, and cycles. These techniques became widespread in the music industry during the 1960s. The visual effects of electronic feedback became a focus of artistic research in the late 1960s when video equipment first reached the consumer market. For example, Steina and Woody Vasulka used "all manner and combination of audio and video signals to generate electronic feedback in their respective media."
Related work by Edward Ihnatowicz, Wen-Ying Tsai, cybernetician Gordon Pask, and the animist kinetics of Robert Breer and Jean Tinguely contributed to a strain of cybernetic art in the 1960s that was concerned with the shared circuits within and between the living and the technological. During this period, a line of cybernetic art theory also emerged. Writers such as Jonathan Benthall and Gene Youngblood drew on cybernetics. Notable contributors include British artist and theorist Roy Ascott, with his essay "Behaviourist Art and the Cybernetic Vision" published in the journal Cybernetica (1966–67), and American critic and theorist Jack Burnham. In his 1968 work Beyond Modern Sculpture, Burnham develops a theory of cybernetic art that centers on art's drive to imitate and ultimately reproduce life. Additionally, in 1968, curator Jasia Reichardt organized the landmark exhibition Cybernetic Serendipity at the Institute of Contemporary Arts in London.
=== Generative systems ===
Generative art is art that is created through algorithmic processes, using systems defined by computer software, algorithms, or similar mathematical, mechanical, or randomized autonomous methods. Sonia Landy Sheridan established the Generative Systems program at the School of the Art Institute of Chicago in 1970 in response to social changes brought about in part by the computer-robot communications revolution. The program, which brought artists and scientists together, aimed to transform the artist's role from passive to active by exploring contemporary scientific and technological systems and their relation to art and life. Unlike copier art, which was a commercial spin-off, Generative Systems was involved in developing elegant and simple systems intended for creative use by the general public. Generative Systems artists sought to bridge the gap between elite and novice by facilitating communication between the two, thus disseminating first-generation information to a broader audience and bypassing traditional commercial routes.
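As a loose illustration of a generative system, the Python sketch below lets a seeded random walk deposit marks on a character grid, so the resulting image emerges from the rule and the seed rather than from stroke-by-stroke authorial control. It is a toy example of the general idea, not a reconstruction of any Generative Systems work.

```python
# Toy generative system: a seeded random walk "draws" on a character
# grid. Re-running with the same seed reproduces the same image;
# changing the seed yields a new one. Parameters are arbitrary.
import random

def generate(width=40, height=16, steps=400, seed=42):
    random.seed(seed)
    grid = [[" "] * width for _ in range(height)]
    x, y = width // 2, height // 2
    for _ in range(steps):
        grid[y][x] = "*"
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = (x + dx) % width    # wrap around the edges
        y = (y + dy) % height
    return "\n".join("".join(row) for row in grid)

print(generate())
```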
=== Process art ===
Process art is an artistic movement and creative sentiment where the end product of art and craft is not the principal focus. The 'process' in process art refers to the act of creating art: the gathering, sorting, collating, associating, and patterning. Process art emphasizes the actual doing—art as a rite, ritual, and performance. It often involves inherent motivation, rationale, and intentionality. Thus, art is seen as a creative journey or process, rather than merely a final product.
In artistic discourse, the work of Jackson Pollock as a type of action painting is sometimes considered a precursor to process art. Process art, with its use of serendipity, shares similarities with Dada. Themes of change and transience are prominent in the process art movement. According to the Solomon R. Guggenheim Museum, Robert Morris had a groundbreaking exhibition in 1968 that defined the movement. The museum's website notes that "Process artists were involved in issues attendant to the body, random occurrences, improvisation, and the liberating qualities of nontraditional materials such as wax, felt, and latex. Using these, they created eccentric forms in erratic or irregular arrangements produced by actions such as cutting, hanging, and dropping, or organic processes such as growth, condensation, freezing, or decomposition".
=== Systemic art ===
According to Chilvers (2004), "earlier in 1966 the British art critic Lawrence Alloway had coined the term "Systemic art", to describe a type of abstract art characterized by the use of very simple standardized forms, usually geometric in character, either in a single concentrated image, or repeated in a system arranged according to a clearly visible principle of organization. He considered the chevron paintings of Kenneth Noland as examples of Systemic art, and considered this as a branch of Minimal art".
John G. Harries identified common ground in the ideas underlying developments in 20th-century art such as Serial art, Systems art, Constructivism, and Kinetic art. These forms of art often do not stem directly from observations of the external natural environment but from the observation of depicted shapes and their relationships. According to Harries, Systems art represents a deliberate attempt by artists to develop a more flexible frame of reference. Rather than being a cognitive system that leads to the institutionalization of an imposed model, it uses its frame of reference as a model to be emulated. However, transferring the meaning of a picture to its location within a systemic structure does not eliminate the need to define the constitutive elements of the system. Without these definitions, constructing the system becomes challenging.
=== Systemic painting ===
Systemic Painting, according to Auping (1989), "was the title of a highly influential exhibition at the Guggenheim Museum in 1966 assembled and introduction written by Lawrence Alloway as curator. The show contained numerous works that many critics today would consider part of the Minimal art". In the catalogue, Alloway noted that "...paintings, such as those in this exhibition are not, as has been often claimed, impersonal. The personal is not expunged by using a neat technique: anonymity is not a consequence of highly finishing a painting". The term "Systemic Painting" later came to refer to artists who employ systems to make a number of aesthetic decisions before commencing to paint.
=== Systems sculpture ===
According to Feldman (1987), "serial art, serial painting, systems sculpture and ABC art, were art styles of the 1960s and 1970s in which simple geometric configurations are repeated with little or no variation. Sequences becomes important as in mathematics and linguistic context. These works rely on simple arrangements of basic volumes and voids, mechanically produced surfaces, and algebraic permutations of form. The impact on the viewer, however, is anything but simple".
== See also ==
== References ==
== Further reading ==
Vladimir Bonacic (1989), "A Transcendental Concept for Cybernetic Art in the 21st Century", in: Leonardo, Vol. 22, No. 1, Art and the New Biology: Biological Forms and Patterns (1989), pp. 109–111.
Jack Burnham (1968), "Systems Esthetics", in: Artforum (September 1968).
Karen Cham, Jeffrey Johnson (2007), "Complexity Theory: A Science of Cultural Systems?", in: M/C journal, Volume 10 Issue 3 June 2007
Francis Halsall (2007), "Systems Aesthetics and the System as Medium", Systems Art Symposium Whitechapel Art Gallery, 2007.
Pamela Lee (2004), Chronophobia. Cambridge, Massachusetts: MIT Press.
Eddie Price (1974), Systems Art: An Enquiry, City of Birmingham Polytechnic, School of Art Education, ISBN 0-905017-00-5
Edward A. Shanken, "Cybernetics and Art: Cultural Convergence in the 1960s," in Bruce Clarke and Linda Dalrymple Henderson, eds. From Energy to Information: Representation in Science, Technology, Art, and Literature. (Stanford: Stanford University Press, 2002): 255–77.
Edward A. Shanken, "Art in the Information Age: Technology and Conceptual Art," in SIGGRAPH 2001 Electronic Art and Animation Catalog, (New York: ACM SIGGRAPH, 2001): 8–15; expanded and reprinted in Art Inquiry 3: 12 (2001): 7–33 and Leonardo 35:3 (August 2002): 433–38.
Edward A. Shanken, "The House That Jack Built: Jack Burnham’s Concept of Software as a Metaphor for Art," Leonardo Electronic Almanac 6:10 (November 1998). Reprinted in English and Spanish in a minima 12 (2005): 140–51.
Edward A. Shanken, "Reprogramming Systems Aesthetics: A Strategic Historiography," in Simon Penny, et al., eds., Proceedings of the Digital Arts and Culture Conference 2009, DAC: 2009.
Edward A. Shanken, Systems. Whitechapel/MIT Press, 2015.
Luke Skrebowski (2008), "All Systems Go: Recovering Hans Haacke's Systems Art", in Grey Room, Winter 2008, No. 30, Pages 54–83.
== External links ==
Walker, John. "Systems Art". Glossary of Art, Architecture & Design since 1945, 3rd. ed.
Systems Art Symposium, at the Whitechapel Art Gallery in London, 2007.
Observing 'Systems-Art' from a Systems-Theoretical Perspective by Francis Halsall: summary of a presentation at Chart 2005, 2005.
Saturation Point: The online editorial and curatorial project for systems, non-objective and reductive artists working in the UK. | Wikipedia/Systems_art |
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors to developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue that the modern evolutionary synthesis is inadequate in treating genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold that modern evolutionary theory misconceives the nature of living processes.
== Overview ==
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters or that adaptation consists of evolution ‘shaping’ the more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches.
== Developmental systems theory: Topics ==
=== Six Themes of DST ===
Joint Determination by Multiple Causes: Development is a product of multiple interacting sources.
Context Sensitivity and Contingency: Development depends on the current state of the organism.
Extended Inheritance: An organism inherits resources from the environment in addition to genes.
Development as a Process of Construction: The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.
Distributed Control: The idea that no single source of influence has central control over an organism's development.
Evolution as Construction: The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population.
=== A computing metaphor ===
To adopt a computing metaphor, the reductionists (whom developmental systems theory opposes) assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data.
Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either.
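This point has a loose analogue in everyday programming, where the same object can act as a process or as a datum depending on the context of use; the Python sketch below is offered only as an analogy for the DST claim, not as an argument its proponents make in code.

```python
# A function is a process when it is called and a datum when it is
# stored or passed around, so whether something counts as 'process'
# or 'data' depends on the context of use, not on the structure itself.

def double(x):
    return 2 * x

pipeline = [double, str]   # here the functions sit in a list as data
value = 5
for stage in pipeline:     # here each one is applied as a process
    value = stage(value)
print(value)               # -> "10"
```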
=== Fundamental asymmetry ===
For reductionists there is a fundamental asymmetry between different causal factors, whereas DST holds that such asymmetries can be justified only by specific purposes, and argues that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes, or genetic factors of determination and environmental factors of realisation) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism.
=== DST approach ===
One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilised zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termite must adapt.
This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position effects and maternal effects on gene expression, to epigenetic inheritance, to the active construction and intergenerational transmission of enduring niches, developmental systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioural and symbolic – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. DST regards every level of biological structure as susceptible to influence from all the structures by which they are surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of Mendelian genetics, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’.
Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation.
Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied.
== Criticism ==
Philosopher Neven Sesardić, while not dismissive of developmental systems theory, argues that its proponents forget that the interaction between levels is ultimately an empirical issue, which cannot be settled by a priori speculation; Sesardić observes that while the emergence of lung cancer is a highly complicated process involving the combined action of many factors and interactions, it is not unreasonable to believe that smoking has an effect on developing lung cancer. Therefore, though developmental processes are highly interactive, context dependent, and extremely complex, it is incorrect to conclude that main effects of heredity and environment are unlikely to be found in the "messiness". Sesardić argues that the idea that the effect of one factor always depends on what is happening in other factors is an empirical claim, as well as a false one; for example, the bacterium Bacillus thuringiensis produces a protein that is toxic to caterpillars. Genes from this bacterium have been placed into plants vulnerable to caterpillars, and the insects proceed to die when they eat part of the plant, as they consume the toxic protein. Thus, developmental approaches must be assessed on a case-by-case basis, and in Sesardić's view, DST does not offer much if posed only in general terms. Hereditarian psychologist Linda Gottfredson differentiates the "fallacy of so-called 'interactionism'" from the technical use of gene-environment interaction to denote a non-additive environmental effect conditioned on genotype. The over-generalization of "interactionism" cannot render attempts to identify genetic and environmental contributions meaningless: where behavioural genetics attempts to determine the portions of variation accounted for by genetics, environmentalist-developmentalist approaches like DST attempt to determine the typical course of human development and erroneously conclude that because development is interactive, its outcomes are readily changed.
Sesardić also counters the DST claim that it is impossible to determine the respective contributions of genes and environment to a trait. If genes and environment were truly inseparable, it would follow that a trait could not be causally attributed to the environment either; yet DST, while critical of genetic heritability, advocates developmentalist research into environmental effects, which is a logical inconsistency. Barnes et al. made similar criticisms, observing that the innate human capacity for language (deeply genetic) does not determine the specific language spoken (a contextually environmental effect); it is then, in principle, possible to separate the effects of genes and environment. Similarly, Steven Pinker argues that if genes and environment could not actually be separated, then speakers would have a deterministic genetic disposition to learn a specific native language upon exposure. Though seemingly consistent with the idea of gene-environment interaction, Pinker argues this is nonetheless an absurd position, since empirical evidence shows ancestry has no effect on language acquisition; environmental effects are often separable from genetic ones.
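The difference between additive effects and a technical gene-environment interaction can be made concrete with a toy two-by-two example; all numbers below are invented for illustration.

```python
# Toy 2x2 genotype-by-environment tables of trait values. In the
# first table genotype and environment contribute additively; in the
# second, the environment's effect depends on the genotype, which is
# the technical sense of G x E interaction. Numbers are invented.

additive    = {("G1", "E1"): 10, ("G1", "E2"): 14,
               ("G2", "E1"): 16, ("G2", "E2"): 20}
interactive = {("G1", "E1"): 10, ("G1", "E2"): 14,
               ("G2", "E1"): 16, ("G2", "E2"): 30}

def env_effect(table, genotype):
    return table[(genotype, "E2")] - table[(genotype, "E1")]

for name, table in [("additive", additive), ("interactive", interactive)]:
    print(name, {g: env_effect(table, g) for g in ("G1", "G2")})
# additive: switching to E2 adds 4 for both genotypes, so genetic and
# environmental contributions separate cleanly; interactive: E2 adds
# 4 for G1 but 14 for G2, the non-additive case Gottfredson describes.
```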
== Related theories ==
Developmental systems theory is not a narrowly defined collection of ideas, and the boundaries with neighbouring models are porous. Notable related ideas (with key texts) include:
The Baldwin effect
Evolutionary developmental biology
Neural Darwinism
Probabilistic epigenesis
Relational developmental systems
== See also ==
Systems theory
Complex adaptive system
Developmental psychobiology
The Dialectical Biologist - a 1985 book by Richard Levins and Richard Lewontin which describes a related approach.
Living systems
== References ==
=== Bibliography ===
Baldwin, J. Mark (23 August 1895). "Consciousness and Evolution". Science. 2 (34): 219–223. Bibcode:1895Sci.....2..219B. doi:10.1126/science.2.34.219. ISSN 0036-8075. PMID 17835006.
Reprinted as: Baldwin, J. Mark (1896). "Psychology". The American Naturalist. 30 (351): 249–255. doi:10.1086/276362. ISSN 0003-0147. JSTOR 2452622.
Baldwin, J. Mark (1896). "A New Factor in Evolution". The American Naturalist. 30 (354): 441–451. Bibcode:1896ANat...30..441B. doi:10.1086/276408. ISSN 0003-0147. JSTOR 2453130. S2CID 7059820.
Baldwin, J. Mark (1896). "A New Factor in Evolution (Continued)". The American Naturalist. 30 (355): 536–553. Bibcode:1896ANat...30..536B. doi:10.1086/276428. ISSN 0003-0147. JSTOR 2453231.
Baldwin, J. Mark (1902). Development and Evolution. New York: Macmillan. Retrieved 1 January 2020.
Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
Dawkins, R. (1982). The Extended Phenotype. Oxford: Oxford University Press.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Durham, N.C.: Duke University Press.
Edelman, G.M. (1987). Neural Darwinism: Theory of Neuronal Group Selection. New York: Basic Books.
Edelman, G.M. and Tononi, G. (2001). Consciousness. How Mind Becomes Imagination. London: Penguin.
Goodwin, B.C. (1995). How the Leopard Changed its Spots. London: Orion.
Goodwin, B.C. and Saunders, P. (1992). Theoretical Biology. Epigenetic and Evolutionary Order from Complex Systems. Baltimore: Johns Hopkins University Press.
Jablonka, E., and Lamb, M.J. (1995). Epigenetic Inheritance and Evolution. The Lamarckian Dimension. London: Oxford University Press.
Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
Levins, R. and Lewontin, R. (1985). The Dialectical Biologist. London: Harvard University Press.
Lewontin, Richard C. (2000). The Triple Helix: Gene, Organism, and Environment. Harvard University Press. ISBN 0-674-00159-1.
Neumann-Held, E.M. (1999). The gene is dead- long live the gene. Conceptualizing genes the constructionist way. In P. Koslowski (ed.). Sociobiology and Bioeconomics: The Theory of Evolution in Economic and Biological Thinking, pp. 105–137. Berlin: Springer.
Oyama, Susan; Griffiths, Paul E.; Gray, Russell D., eds. (2001). Cycles of contingency : developmental systems and evolution. MIT Press. ISBN 9780262150538.
Gottfredson, Linda (2009). "Logical Fallacies Used to Dismiss the Evidence on Intelligence Testing". In Phelps, Richard P. (ed.). Correcting Fallacies About Educational and Psychological Testing (1st ed.). American Psychological Association. ISBN 978-1-4338-0392-5.
Waddington, C.H. (1957). The Strategy of the Genes. London: Allen and Unwin.
== Further reading ==
Depew, D.J. and Weber, B.H. (1995). Darwinism Evolving. System Dynamics and the Genealogy of Natural Selection. Cambridge, Massachusetts: MIT Press.
Eigen, M. (1992). Steps Towards Life. Oxford: Oxford University Press.
Gray, R.D. (2000). Selfish genes or developmental systems? In Singh, R.S., Krimbas, C.B., Paul, D.B., and Beatty, J. (2000). Thinking about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge University Press: Cambridge. (184-207).
Koestler, A., and Smythies, J.R. (1969). Beyond Reductionism. London: Hutchinson.
Lehrman, D.S. (1953). A critique of Konrad Lorenz’s theory of instinctive behaviour. Quarterly Review of Biology 28: 337-363.
Thelen, E. and Smith, L.B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, Massachusetts: MIT Press.
== External links ==
William Bechtel, Developmental Systems Theory and Beyond presentation, winter 2006. | Wikipedia/Developmental_systems_theory |
Earth system science (ESS) is the application of systems science to the Earth. In particular, it considers interactions and 'feedbacks', through material and energy fluxes, between the Earth's sub-systems' cycles, processes and "spheres"—atmosphere, hydrosphere, cryosphere, geosphere, pedosphere, lithosphere, biosphere, and even the magnetosphere—as well as the impact of human societies on these components. At its broadest scale, Earth system science brings together researchers across both the natural and social sciences, from fields including ecology, economics, geography, geology, glaciology, meteorology, oceanography, climatology, paleontology, sociology, and space science. Like the broader subject of systems science, Earth system science assumes a holistic view of the dynamic interaction between the Earth's spheres and their many constituent subsystems fluxes and processes, the resulting spatial organization and time evolution of these systems, and their variability, stability and instability. Subsets of Earth System science include systems geology and systems ecology, and many aspects of Earth System science are fundamental to the subjects of physical geography and climate science.
== Definition ==
The Science Education Resource Center, Carleton College, offers the following description: "Earth System science embraces chemistry, physics, biology, mathematics and applied sciences in transcending disciplinary boundaries to treat the Earth as an integrated system. It seeks a deeper understanding of the physical, chemical, biological and human interactions that determine the past, current and future states of the Earth. Earth System science provides a physical basis for understanding the world in which we live and upon which humankind seeks to achieve sustainability".
Earth System science has articulated four overarching, definitive and critically important features of the Earth System, which include:
Variability: Many of the Earth System's natural 'modes' and variabilities across space and time are beyond human experience, because of the stability of the recent Holocene. Much Earth System science therefore relies on studies of the Earth's past behaviour and models to anticipate future behaviour in response to pressures.
Life: Biological processes play a much stronger role in the functioning and responses of the Earth System than previously thought. Life appears to be integral to every part of the Earth System.
Connectivity: Processes are connected in ways and across depths and lateral distances that were previously unknown and inconceivable.
Non-linearity: The behaviour of the Earth System is typified by strong non-linearities. This means that abrupt change can result when relatively small changes in a 'forcing function' push the System across a 'threshold'.
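A minimal sketch of such threshold behaviour is a single state variable relaxing toward equilibrium under the non-linear rule dx/dt = x - x^3 + f, a standard toy model of a fold bifurcation; the equation and values here are illustrative, not a model of any real Earth subsystem. For small forcing f the lower stable state persists; just past a critical forcing of about f = 0.385 it vanishes and the state jumps abruptly to the upper branch.

```python
# Relax the state x toward equilibrium under dx/dt = x - x**3 + f
# using Euler steps. Below the critical forcing (~0.385) the system
# stays on the lower branch; just above it, the lower equilibrium
# disappears and x jumps to the upper branch. Illustrative only.

def settle(f, x=-1.0, dt=0.01, steps=10000):
    for _ in range(steps):
        x += (x - x**3 + f) * dt
    return x

for f in (0.0, 0.3, 0.38, 0.39, 0.5):
    print(f"forcing {f:.2f} -> equilibrium {settle(f):+.2f}")
# The small step from f = 0.38 to 0.39 flips the equilibrium from
# about -0.63 to about +1.16: a disproportionate, abrupt response.
```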
== History ==
For millennia, humans have speculated how the physical and living elements on the surface of the Earth combine, with gods and goddesses frequently posited to embody specific elements. The notion that the Earth, itself, is alive was a regular theme of Greek philosophy and religion.
Early scientific interpretations of the Earth system began in the field of geology, initially in the Middle East and China, and largely focused on aspects such as the age of the Earth and the large-scale processes involved in mountain and ocean formation. As geology developed as a science, understanding of the interplay of different facets of the Earth system increased, leading to the inclusion of factors such as the Earth's interior, planetary geology, living systems and Earth-like worlds.
In many respects, the foundational concepts of Earth System science can be seen in the natural philosophy of the 19th-century geographer Alexander von Humboldt. In the 20th century, Vladimir Vernadsky (1863–1945) saw the functioning of the biosphere as a geological force generating a dynamic disequilibrium, which in turn promoted the diversity of life.
In parallel, the field of systems science was developing across numerous other scientific fields, driven in part by the increasing availability and power of computers, and leading to the development of climate models that began to allow the detailed and interacting simulations of the Earth's weather and climate. Subsequent extension of these models has led to the development of "Earth system models" (ESMs) that include facets such as the cryosphere and the biosphere.
In 1983 a NASA committee called the Earth System Science Committee was formed. The earliest reports of NASA's ESSC, Earth System Science: Overview (1986), and the book-length Earth System Science: A Closer View (1988), constitute a major landmark in the formal development of Earth system science. Early works discussing Earth system science, like these NASA reports, generally emphasized the increasing human impacts on the Earth system as a primary driver for the need of greater integration among the life and geo-sciences, making the origins of Earth system science parallel to the beginnings of global change studies and programs.
== Climate science ==
Climatology and climate change have been central to Earth System science since its inception, as evidenced by the prominent place given to climate change in the early NASA reports discussed above. The Earth's climate system is a prime example of an emergent property of the whole planetary system, that is, one which cannot be fully understood without regarding it as a single integrated entity. It is also a system where human impacts have been growing rapidly in recent decades, lending immense importance to the successful development and advancement of Earth System science research. As just one example of the centrality of climatology to the field, the mission statement of one of the earliest centers for Earth System science research, the Earth System Science Center at Pennsylvania State University, reads, "the Earth System Science Center (ESSC) maintains a mission to describe, model, and understand the Earth's climate system".
== Education ==
Earth System science can be studied at a postgraduate level at some universities. In general education, the American Geophysical Union, in cooperation with the Keck Geology Consortium and with support from five divisions within the National Science Foundation, convened a workshop in 1996, "to define common educational goals among all disciplines in the Earth sciences". In its report, participants noted that, "The fields that make up the Earth and space sciences are currently undergoing a major advancement that promotes understanding the Earth as a number of interrelated systems". Recognizing the rise of this systems approach, the workshop report recommended that an Earth System science curriculum be developed with support from the National Science Foundation.
In 2000, the Earth System Science Education Alliance (ESSEA) was launched; it currently includes the participation of more than 40 institutions, with over 3,000 teachers having completed an ESSEA course as of fall 2009.
== Related concepts ==
The concept of earth system law (still in its infancy as of 2021) is a sub-discipline of earth system governance, itself a subfield of earth system science analyzed from a social sciences perspective.
== See also ==
== References ==
== External links ==
Media related to Earth system science at Wikimedia Commons
Earth system science at Nature.com | Wikipedia/Earth_system_science |
Systems theory is the transdisciplinary study of systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or artificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expresses synergy or emergent behavior.
Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimized equifinality.
General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering.
== Overview ==
Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physician Alexander Bogdanov, biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy, and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum, Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori.
As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics.
Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences."
Others remain closer to the direct systems concepts developed by the original systems theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, has studied emergent properties, suggesting that they offer analogues for living systems. The distinction of autopoiesis as made by Humberto Maturana and Francisco Varela represent further developments in this field. Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin among others.
With the modern foundations for a general theory of systems following World War I, Ervin László, in the preface for Bertalanffy's book, Perspectives on General System Theory, points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc":
It (General System Theory) was criticized as pseudoscience and said to be nothing more than an admonishment to attend to things in a holistic way. Such criticisms would have lost their point had it been recognized that von Bertalanffy's general system theory is a perspective or paradigm, and that such basic conceptual frameworks play a key role in the development of exact scientific theory. .. Allgemeine Systemtheorie is not directly consistent with an interpretation often put on 'general system theory,' to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Theorie (or Lehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just as Wissenschaft (or 'Science'). These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whether empirically, axiomatically, or philosophically" represented, while many associate Lehre with theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark." An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations.
A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations have complex social systems; separating the parts from the whole reduces the overall effectiveness of organizations. This differs from conventional models that center on individuals, structures, departments, and units, which separate the part from the whole instead of recognizing the interdependence between groups of individuals, structures, and processes that enable an organization to function.
László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity" which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation.
Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view:
The systems view is a world-view that is based on the discipline of SYSTEM INQUIRY. Central to systems inquiry is the concept of SYSTEM. In the most general sense, system means a configuration of parts connected and joined together by a web of relationships. The Primer Group defines system as a family of relationships among the members acting as a whole. Von Bertalanffy defined system as "elements in standing relationship."
== Applications ==
=== Art ===
=== Biology ===
Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction).
Particularly from the year 2000 onwards, the biosciences use the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties, that is, properties of a system whose theoretical description is possible only with techniques that fall under the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928.
Subdisciplines of systems biology include:
Systems neuroscience
Systems pharmacology
==== Ecology ====
Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems; it can be seen as an application of general systems theory to ecology.
Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
=== Chemistry ===
Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties. Systems chemistry is also related to the origin of life (abiogenesis).
=== Engineering ===
Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs.
==== User-centered design process ====
Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human-computer interaction (HCI) information system. Overlooking this and developing software without input from the future users (mediated by user experience designers) is a serious design flaw that can lead to the complete failure of information systems and to increased stress and mental illness for their users, resulting in increased costs and a huge waste of resources. It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions that lead to serious design flaws and lack of usability.
The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted, with the produced systems discarded before implementation because of entirely preventable mistakes. According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey:
Pure success is the combination of high customer satisfaction with high return on value to the organization. Related figures for the year 2017 are: successful: 14%, challenged: 67%, failed: 19%.
=== Mathematics ===
System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays.
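The flavour of a stock-and-flow model with a time delay can be sketched in a few lines of Python: an inventory stock is pushed toward a target by an ordering rule (a negative feedback loop), but orders arrive only after a supply delay, producing the overshoot characteristic of delayed feedback. All names and values are illustrative, not drawn from any standard system dynamics model.

```python
# Stock-and-flow sketch: the ordering rule is a negative feedback
# loop, but because orders spend `delay` steps in transit, the stock
# overshoots its target before the feedback can correct it.
from collections import deque

def simulate(target=100.0, stock=50.0, delay=5, adjust_time=2.0, steps=20):
    pipeline = deque([0.0] * delay)   # orders already in transit
    history = []
    for _ in range(steps):
        order = max(0.0, (target - stock) / adjust_time)  # feedback rule
        pipeline.append(order)
        stock += pipeline.popleft()   # inflow arrives after the delay
        history.append(stock)
    return history

for t, s in enumerate(simulate()):
    print(t, round(s, 1))   # the stock climbs well past 100, then stalls
```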
=== Social sciences and humanities ===
Systems theory in anthropology
Systems theory in archaeology
Systems theory in political science
==== Psychology ====
Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems.
It received inspiration from systems theory and systems thinking, as well as from the foundational theoretical work of Roger Barker, Gregory Bateson, Humberto Maturana and others. It takes an approach in which groups and individuals are considered as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology."
In systems psychology, characteristics of organizational behaviour (such as individual needs, rewards, expectations, and the attributes of the people interacting with the systems) are considered in order to create an effective system.
=== Informatics ===
System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory and dynamical systems theory.
== History ==
=== Precursors ===
Systems thinking can be traced back to antiquity, whether considering the first systems of written communication with Sumerian cuneiform to Maya numerals, or the feats of engineering with the Egyptian pyramids. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus. Ludwig von Bertalanffy traced systems concepts to the philosophy of Gottfried Leibniz and Nicholas of Cusa's coincidentia oppositorum. While modern systems can seem considerably more complicated, they may embed themselves in history.
Figures like James Joule and Sadi Carnot represent an important step to introduce the systems approach into the (rationalist) hard sciences of the 19th century, also known as the energy transformation. Then, the thermodynamics of this century, by Rudolf Clausius, Josiah Gibbs and others, established the system reference model as a formal scientific object.
Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial age models and thinking, wherein history represents history and math represents math, while the arts and sciences specialization remain separate and many treat teaching as behaviorist conditioning.
The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. The theorists sought holistic methods by developing systems concepts that could integrate with different areas.
Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century.
=== Founding and early development ===
Where assumptions in Western science from Plato and Aristotle to Isaac Newton's Principia (1687) have historically influenced all areas from the hard to social sciences (see, David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems.
Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the university's interdisciplinary Division of the Social Sciences established in 1931.
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science.
"General systems theory" (GST; German: allgemeine Systemlehre) was coined in the 1940s by Ludwig von Bertalanffy, who sought a new approach to the study of living systems. Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946. According to Mike C. Jackson (2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles.
Jackson also claimed that Bertalanffy's work was informed by Alexander Bogdanov's three-volume Tectology (1912–1917), providing the conceptual base for GST. A similar position is held by Richard Mattessich (1978) and Fritjof Capra (1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works.
The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science.
Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy began developing his idea of a general theory of systems as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science in 1950.
In 1954, von Bertalanffy, along with Anatol Rapoport, Ralph W. Gerard, and Kenneth Boulding, came together at the Center for Advanced Study in the Behavioral Sciences in Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." In December that year, a meeting of around 70 people was held in Berkeley to form a society for the exploration and development of GST. The Society for General Systems Research (renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of the American Association for the Advancement of Science (AAAS), specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s like William Ross Ashby, Margaret Mead, Gregory Bateson, and C. West Churchman, among others.
Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology, game theory, and social network analysis. Subjects that were studied included those of complexity, self-organization, connectionism and adaptive systems. In fields like cybernetics, researchers such as Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster examined complex systems mathematically; von Neumann discovered cellular automata and self-reproducing systems with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time, Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. To fulfill this role, Odum developed a general system, or universal language, based on the circuit language of electronics, known as the Energy Systems Language.
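A far simpler relative of the self-reproducing automata von Neumann studied is a one-dimensional elementary cellular automaton, in which each cell's next state depends only on itself and its two neighbours, yet complex global patterns emerge. The Python sketch below implements Wolfram's rule 30; it is illustrative only, and rule 30 itself postdates von Neumann's work.

```python
# One-dimensional elementary cellular automaton. The next state of
# each cell is looked up from the rule number's bits, indexed by the
# 3-cell neighbourhood (left, self, right) read as a binary number.

RULE = 30

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right   # neighbourhood as 0..7
        nxt.append((RULE >> idx) & 1)           # rule bit for that case
    return nxt

cells = [0] * 31 + [1] + [0] * 31               # single live cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```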
The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view. Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues. Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen an ethical view on the subject.
In sociology, systems thinking also began in the 20th century, including Talcott Parsons' action theory and Niklas Luhmann's social systems theory. According to Rudolf Stichweh (2011): "Since its beginnings the social sciences were an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s." Elements of systems thinking can also be seen in the work of James Clerk Maxwell, particularly control theory.
== General systems research and systems inquiry ==
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946. The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications.
There are many definitions of a general system. Properties that definitions often include are: an overall goal of the system; parts of the system and relationships between these parts; and emergent properties of the interaction between the parts that are not exhibited by any part on its own. Derek Hitchins defines a system in terms of entropy, as a collection of parts and relationships between the parts in which the parts and their interrelationships decrease entropy.
Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote:
[T]here exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relationships or "forces" between them. It seems legitimate to ask for a theory, not of systems of a more or less special kind, but of universal principles applying to systems in general.
In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated:
Thus when von Bertalanffy spoke of Allgemeine Systemtheorie it was consistent with his view that he was proposing a new perspective, a new way of doing science. It was not directly consistent with an interpretation often put on "general system theory", to wit, that it is a (scientific) "theory of general systems." To criticize it as such is to shoot at straw men. Von Bertalanffy opened up something much broader and of much greater significance than a single theory (which, as we now know, can always be falsified and has usually an ephemeral existence): he created a new paradigm for the development of theories.
Bertalanffy divided systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry:
philosophy: the ontology, epistemology, and axiology of systems
theory: a set of interrelated concepts and principles applying to all systems
methodology: the set of models, strategies, methods and tools that instrumentalize systems theory and philosophy
application: the application and interaction of the domains
These operate in a recursive relationship, he explained: 'philosophy' and 'theory' are integrated as knowledge, and 'method' and 'application' as action; systems inquiry is thus knowledgeable action.
=== Properties of general systems ===
General systems may be split into a hierarchy of systems, in which there are fewer interactions between the different subsystems than among the components within each subsystem. The alternative is heterarchy, in which all components within the system interact with one another. Sometimes an entire system will be represented inside another system as a part, sometimes referred to as a holon. These hierarchies of systems are studied in hierarchy theory. The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (interact with one another a lot), then the system cannot be decomposed into different systems. The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or for different processes in a system. Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems.
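Simon's distinction can be made concrete with a toy coupling matrix: if interactions within hypothesized subsystems are much stronger than interactions between them, the system is nearly decomposable. The following is a minimal sketch of that idea only; the matrix values, the grouping, and the threshold are invented for the example, not taken from Simon or any source above.

```python
# Sketch of Herbert Simon's "near decomposability": compare average
# interaction strength within hypothesized subsystems to the average
# strength between them. All values here are illustrative inventions.

def coupling_ratio(matrix, groups):
    """Return (mean intra-group strength, mean inter-group strength)."""
    intra, inter = [], []
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if i == j:
                continue
            (intra if groups[i] == groups[j] else inter).append(value)
    return sum(intra) / len(intra), sum(inter) / len(inter)

# Four parts, hypothesized as two subsystems: {0, 1} and {2, 3}.
interaction = [
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.8],
    [0.0, 0.1, 0.8, 0.0],
]
groups = [0, 0, 1, 1]

intra, inter = coupling_ratio(interaction, groups)
print(f"intra={intra:.2f}, inter={inter:.2f}")
if inter == 0:
    print("fully decomposable")
elif intra / inter > 5:  # threshold is an arbitrary illustration
    print("nearly decomposable")
else:
    print("not decomposable into these subsystems")
```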
Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems.
== System types and fields ==
=== Theoretical fields ===
Chaos theory
Complex system
Control theory
Dynamical systems theory
Earth system science
Ecological systems theory
Industrial ecology
Living systems theory
Sociotechnical system
Systemics
Telecoupling
Urban metabolism
World-systems theory
==== Cybernetics ====
Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks.
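The control loop described here can be illustrated with the classic thermostat example: sense the state, compare it with a goal, and act on the difference. The sketch below is a minimal illustration of such a negative-feedback loop; its physical constants (heating rate, heat-loss coefficient, setpoint) are invented for the example, not drawn from the cybernetics literature.

```python
# Minimal sketch of a cybernetic negative-feedback loop: a thermostat
# senses the room temperature, compares it with a goal, and acts on the
# difference. All physical constants are made-up illustration values.

setpoint = 20.0      # goal temperature, degrees C
temperature = 15.0   # current room temperature
outside = 5.0        # ambient temperature the room leaks heat toward

for minute in range(60):
    error = setpoint - temperature                 # information: goal minus state
    heater_on = error > 0                          # control decision from feedback
    if heater_on:
        temperature += 0.8                         # heating effect per minute
    temperature += 0.03 * (outside - temperature)  # passive heat loss
    if minute % 10 == 0:
        state = "on" if heater_on else "off"
        print(f"t={minute:2d} min  T={temperature:5.2f} C  heater={state}")
```

Run for an hour of simulated time, the loop drives the temperature up to the setpoint and then cycles the heater to hold it there, which is the regulatory behavior the definition above describes.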
The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) make general systems a proper subset of cybernetics. In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster.
Threads of cybernetics began in the late 1800s and led to the publication of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics: "Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made."
Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but they do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, emphasizing different tools and methodologies, from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today.
=== System types ===
Biological
Anatomical systems
Nervous
Sensory
Ecological systems
Living systems
Complex
Complex adaptive system
Conceptual
Coordinate
Deterministic (philosophy)
Digital ecosystem
Experimental
Writing
Coupled human–environment
Database
Deterministic (science)
Mathematical
Dynamical system
Formal system
Energy
Holarchical
Information
Measurement
Imperial
Metric
Multi-agent
Nonlinear
Operating
Planetary
Social
Cultural
Economic
Legal
Political
Star
==== Complex adaptive systems ====
Complex adaptive systems (CAS), coined by John H. Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience.
In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features.
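The contrast can be shown in a toy simulation: under negative feedback a disturbance decays back toward equilibrium, while under positive feedback the same disturbance grows. This is a minimal sketch; the gains are arbitrary illustrative values, not parameters from the CAS literature.

```python
# Toy contrast between negative and positive feedback (illustrative gains).
# x is a deviation from equilibrium; each step feeds a fraction of the
# deviation back into the system.

def simulate(gain, steps=10, x=1.0):
    trajectory = [round(x, 3)]
    for _ in range(steps):
        x = x + gain * x  # feedback: a fraction of the deviation is re-applied
        trajectory.append(round(x, 3))
    return trajectory

print("negative feedback (gain -0.5):", simulate(-0.5))  # deviation decays
print("positive feedback (gain +0.5):", simulate(+0.5))  # deviation grows
```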
== See also ==
=== Organizations ===
List of systems sciences organizations
== References ==
== Further reading ==
Ashby, W. Ross. 1956. An Introduction to Cybernetics. Chapman & Hall.
—— 1960. Design for a Brain: The Origin of Adaptive Behavior (2nd ed.). Chapman & Hall.
Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.
von Bertalanffy, Ludwig. 1968. General System Theory: Foundations, Development, Applications. New York: George Braziller.
Burks, Arthur. 1970. Essays on Cellular Automata. University of Illinois Press.
Cherry, Colin. 1957. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: The MIT Press.
Churchman, C. West. 1971. The Design of Inquiring Systems: Basic Concepts of Systems and Organizations. New York: Basic Books.
Checkland, Peter. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Wiley.
Gleick, James. 1997. Chaos: Making a New Science, Random House.
Haken, Hermann. 1983. Synergetics: An Introduction – 3rd Edition, Springer.
Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge: The MIT Press.
Luhmann, Niklas. 2013. Introduction to Systems Theory, Polity.
Macy, Joanna. 1991. Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. SUNY Press.
Maturana, Humberto, and Francisco Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Springer Science & Business Media.
Miller, James Grier. 1978. Living Systems. Mcgraw-Hill.
von Neumann, John. 1951. "The General and Logical Theory of Automata." pp. 1–41 in Cerebral Mechanisms in Behavior.
—— 1956. "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Automata Studies 34: 43–98.
von Neumann, John, and Arthur Burks, eds. 1966. Theory of Self-Reproducing Automata. Illinois University Press.
Parsons, Talcott. 1951. The Social System. The Free Press.
Prigogine, Ilya. 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. W H Freeman & Co.
Simon, Herbert A. 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106.
—— 1996. The Sciences of the Artificial (3rd ed.), vol. 136. The MIT Press.
Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. ISBN 0-252-72546-8.
Adapted from Shannon, Claude. 1948. "A Mathematical Theory of Communication." Bell System Technical Journal 27(3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.
Thom, René. 1972. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Massachusetts
Volk, Tyler. 1995. Metapatterns: Across Space, Time, and Mind. New York: Columbia University Press.
Weaver, Warren. 1948. "Science and Complexity." The American Scientist, pp. 536–544.
Wiener, Norbert. 1965. Cybernetics: Or the Control and Communication in the Animal and the Machine (2nd ed.). Cambridge: The MIT Press.
Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media.
Zadeh, Lotfi. 1962. "From Circuit Theory to System Theory." Proceedings of the IRE 50(5): 856–865.
== External links ==
Systems Thinking at Wikiversity
Systems theory at Principia Cybernetica Web
Introduction to systems thinking – 55 slides
Organizations
International Society for the System Sciences
New England Complex Systems Institute
System Dynamics Society | Wikipedia/General_systems_theory |
Sociotechnical systems (STS) in organizational development is an approach to complex organizational work design that recognizes the interaction between people and technology in workplaces. The term also refers to coherent systems of human relations, technical objects, and cybernetic processes that inhere in large, complex infrastructures. Society, and its constituent substructures, qualify as complex sociotechnical systems.
The term sociotechnical systems was coined by Eric Trist, Ken Bamforth and Fred Emery in the World War II era, based on their work with workers in English coal mines at the Tavistock Institute in London. Sociotechnical systems theory concerns the social aspects of people and society and the technical aspects of organizational structure and processes. Here, technical does not necessarily imply material technology; the focus is on procedures and related knowledge, i.e. it refers to the ancient Greek term techne. "Technical" is used to refer to structure and a broader sense of technicalities. Sociotechnical refers to the interrelatedness of the social and technical aspects of an organization or the society as a whole.
Sociotechnical theory is about joint optimization, with a shared emphasis on achievement of both excellence in technical performance and quality in people's work lives. Sociotechnical theory, as distinct from sociotechnical systems, proposes a number of different ways of achieving joint optimization. They are usually based on designing different kinds of organization, according to which the functional output of different sociotechnical elements leads to system efficiency, productive sustainability, user satisfaction, and change management.
== Overview ==
Sociotechnical refers to the interrelatedness of social and technical aspects of an organization. Sociotechnical theory is founded on two main principles:
One is that the interaction of social and technical factors creates the conditions for successful (or unsuccessful) organizational performance. This interaction consists partly of linear "cause and effect" relationships (the relationships that are normally "designed") and partly of "non-linear", complex, even unpredictable relationships (the good or bad relationships that are often unexpected). Whether designed or not, both types of interaction occur when socio and technical elements are put to work.
The corollary of this, and the second of the two main principles, is that optimization of each aspect alone (socio or technical) tends to increase not only the quantity of unpredictable, "un-designed" relationships, but also the number of those relationships that are injurious to the system's performance.
Therefore, sociotechnical theory is about joint optimization, that is, designing the social system and technical system in tandem so that they work smoothly together. Sociotechnical theory, as distinct from sociotechnical systems, proposes a number of different ways of achieving joint optimization. They are usually based on designing different kinds of organization, ones in which the relationships between socio and technical elements lead to the emergence of productivity and wellbeing, rather than the all-too-frequent case of new technology failing to meet the expectations of designers and users alike.
The scientific literature uses the terms sociotechnical (one word) and socio-technical (hyphenated), as well as sociotechnical theory, sociotechnical system and sociotechnical systems theory. All of these terms appear ubiquitously, but their actual meanings often remain unclear. The key term "sociotechnical" is something of a buzzword, and its varied usage can be unpicked. What can be said about it, though, is that it is most often used, quite correctly, simply to describe any kind of organization that is composed of people and technology.
The key elements of the STS approach include combining the human elements and the technical systems together to enable new possibilities for work and pave the way for technological change (Trist, 1981). The involvement of human elements in negotiations may cause a larger workload initially, but it is crucial that requirements be determined and accommodated prior to implementation, as this is central to the system's success. Due to its mutual causality (Davis, 1977), the STS approach has become widely linked with autonomy, completeness and job satisfaction, as both systems can work together to achieve a goal.
Enid Mumford (1983) defines the socio-technical approach as one that recognizes both technology and people, ensuring that work systems are highly efficient and have characteristics that lead to higher job satisfaction for employees, resulting in a sense of fulfilment, improved quality of work and expectations being exceeded. Mumford concludes that the development of information systems is not a technical issue, but a business organization issue which is concerned with the process of change.
== Principles ==
Some of the central principles of sociotechnical theory were elaborated in a seminal paper by Eric Trist and Ken Bamforth in 1951. This is an interesting case study which, like most of the work in sociotechnical theory, is focused on a form of 'production system' expressive of the era and the contemporary technological systems it contained. The study was based on the paradoxical observation that despite improved technology, productivity was falling, and that despite better pay and amenities, absenteeism was increasing. This particular rational organisation had become irrational. The cause of the problem was hypothesized to be the adoption of a new form of production technology which had created the need for a bureaucratic form of organization (rather like classic command-and-control). In this specific example, technology brought with it a retrograde step in organizational design terms. The analysis that followed introduced the terms "socio" and "technical" and elaborated on many of the core principles of what sociotechnical theory subsequently became.
=== Responsible autonomy ===
Sociotechnical theory was pioneering for its shift in emphasis, a shift towards considering teams or groups as the primary unit of analysis and not the individual. Sociotechnical theory pays particular attention to internal supervision and leadership at the level of the "group" and refers to it as "responsible autonomy". The overriding point is that the ability of individual team members to perform their function is not the only predictor of group effectiveness. There are a range of issues in team cohesion research, for example, that are answered by having the regulation and leadership internal to a group or team.
These, and other factors, play an integral and parallel role in ensuring successful teamwork which sociotechnical theory exploits.
The idea of semi-autonomous groups conveys a number of further advantages. Not least among these, especially in hazardous environments, is the often felt need on the part of people in the organisation for a role in a small primary group. It is argued that such a need arises in cases where the means for effective communication are often somewhat limited. As Carvalho states, this is because "...operators use verbal exchanges to produce continuous, redundant and recursive interactions to successfully construct and maintain individual and mutual awareness...". The immediacy and proximity of trusted team members makes it possible for this to occur. The coevolution of technology and organizations brings with it an expanding array of new possibilities for novel interaction. Responsible autonomy could become more distributed along with the team(s) themselves.
The key to responsible autonomy seems to be to design an organization possessing the characteristics of small groups whilst preventing the "silo-thinking" and "stovepipe" neologisms of contemporary management theory. In order to preserve "...intact the loyalties on which the small group [depend]...the system as a whole [needs to contain] its bad in a way that [does] not destroy its good". In practice, this requires groups to be responsible for their own internal regulation and supervision, with the primary task of relating the group to the wider system falling explicitly to a group leader. This principle, therefore, describes a strategy for removing more traditional command hierarchies.
=== Adaptability ===
Carvajal states that "the rate at which uncertainty overwhelms an organisation is related more to its internal structure than to the amount of environmental uncertainty". Sitter in 1997 offered two solutions for organisations confronted, like the military, with an environment of increased (and increasing) complexity:
"The first option is to restore the fit with the external complexity by an increasing internal complexity. ...This usually means the creation of more staff functions or the enlargement of staff-functions and/or the investment in vertical information systems". Vertical information systems are often confused for "network enabled capability" systems (NEC) but an important distinction needs to be made, which Sitter et al. propose as their second option:
"...the organisation tries to deal with the external complexity by 'reducing' the internal control and coordination needs. ...This option might be called the strategy of 'simple organisations and complex jobs'". This all contributes to a number of unique advantages.
First is the issue of "human redundancy", in which "groups of this kind were free to set their own targets, so that aspiration levels with respect to production could be adjusted to the age and stamina of the individuals concerned". Human redundancy speaks towards the flexibility, ubiquity and pervasiveness of resources within NEC.
The second issue is that of complexity. Complexity lies at the heart of many organisational contexts (there are numerous organizational paradigms that struggle to cope with it). Trist and Bamforth (1951) could have been writing about these with the following passage: "A very large variety of unfavourable and changing environmental conditions is encountered ... many of which are impossible to predict. Others, though predictable, are impossible to alter."
Many types of organisation are clearly motivated by the appealing "industrial age", rational principles of "factory production", a particular approach to dealing with complexity: "In the factory a comparatively high degree of control can be exercised over the complex and moving "figure" of a production sequence, since it is possible to maintain the "ground" in a comparatively passive and constant state". On the other hand, many activities are constantly faced with the possibility of "untoward activity in the 'ground'" of the 'figure-ground' relationship. The central problem, one that appears to be at the nub of many problems that "classic" organisations have with complexity, is that "the instability of the 'ground' limits the applicability ... of methods derived from the factory".
In classic organisations, problems with the moving "figure" and moving "ground" often become magnified through a much larger social space, one in which there is a far greater extent of hierarchical task interdependence. For this reason, the semi-autonomous group, and its ability to make a much more fine-grained response to the "ground" situation, can be regarded as "agile". Added to which, local problems that do arise need not propagate throughout the entire system (to affect the workload and quality of work of many others), because a complex organization doing simple tasks has been replaced by a simpler organization doing more complex tasks. The agility and internal regulation of the group allows problems to be solved locally, without propagation through a larger social space, thus increasing tempo.
=== Whole tasks ===
Another concept in sociotechnical theory is the "whole task". A whole task "has the advantage of placing responsibility for the ... task squarely on the shoulders of a single, small, face-to-face group which experiences the entire cycle of operations within the compass of its membership." The sociotechnical embodiment of this principle is the notion of minimal critical specification. This principle states that "while it may be necessary to be quite precise about what has to be done, it is rarely necessary to be precise about how it is done". This is nowhere better illustrated than by the antithetical example of "working to rule" and the virtual collapse of any system that is subject to the intentional withdrawal of human adaptation to situations and contexts.
The key factor in minimally critically specifying tasks is the responsible autonomy of the group to decide, based on local conditions, how best to undertake the task in a flexible adaptive manner. This principle is isomorphic with ideas like effects-based operations (EBO). EBO asks what goal we want to achieve and what objective we need to reach, rather than what tasks have to be undertaken, when, and how. The EBO concept enables managers to "...manipulate and decompose high level effects. They must then assign lesser effects as objectives for subordinates to achieve. The intention is that subordinates' actions will cumulatively achieve the overall effects desired". In other words, the focus shifts from being a scriptwriter for tasks to instead being a designer of behaviours. In some cases, this can make the task of the manager significantly less arduous.
=== Meaningfulness of tasks ===
Effects-based operations and the notion of a "whole task", combined with adaptability and responsible autonomy, have additional advantages for those at work in the organization. This is because "for each participant the task has total significance and dynamic closure" as well as the requirement to deploy a multiplicity of skills and to have the responsible autonomy in order to select when and how to do so. This is clearly hinting at a relaxation of the myriad of control mechanisms found in more classically designed organizations.
Greater interdependence (through diffuse processes such as globalisation) also brings with it an issue of size, in which "the scale of a task transcends the limits of simple spatio-temporal structure. By this is meant conditions under which those concerned can complete a job in one place at one time, i.e., the situation of the face-to-face, or singular group". In other words, in classic organisations the "wholeness" of a task is often diminished by multiple group integration and spatiotemporal disintegration. The group-based form of organization design proposed by sociotechnical theory, combined with new technological possibilities (such as the internet), provides a response to this often forgotten issue, one that contributes significantly to joint optimisation.
== Topics ==
=== Sociotechnical system ===
A sociotechnical system is the term usually given to any instantiation of socio and technical elements engaged in goal directed behaviour. Sociotechnical systems are a particular expression of sociotechnical theory, although they are not necessarily one and the same thing. Sociotechnical systems theory is a mixture of sociotechnical theory, joint optimisation and so forth and general systems theory. The term sociotechnical system recognises that organizations have boundaries and that transactions occur within the system (and its sub-systems) and between the wider context and dynamics of the environment. It is an extension of Sociotechnical Theory which provides a richer descriptive and conceptual language for describing, analysing and designing organisations. A Sociotechnical System, therefore, often describes a 'thing' (an interlinked, systems based mixture of people, technology and their environment).
Sociotechnical implies that technology, by definition, should not be allowed to be the controlling factor when new work systems are implemented. So, in order to be classified as 'sociotechnical', equal attention must be paid to providing a high quality and satisfying work environment for employees.
The Tavistock researchers proposed that the employees who would be using the new and improved system should participate in determining the required quality of working life improvements. Participative socio-technical design can be achieved by in-depth interviews, questionnaires and the collection of data.
Participative socio-technical design can be conducted through in-depth interviews, the collection of statistics and the analysis of relevant documents. These provide important comparative data that can help prove or disprove the chosen hypotheses. A common approach to participative design is, whenever possible, to use a democratically selected user design group as the key information collectors and decision makers. The design group is backed by a committee of senior staff who can lay the foundations and subsequently oversee the project.
Alter describes sociotechnical analysis and design methods as not being a strong point of information systems practice. The aim of socio-technical designs is to jointly optimise the social and technical systems. The problem, however, is that the technical and social systems, along with the work system and joint optimisation, are not defined as well as they should be.
=== Sustainability ===
Standalone, incremental improvements are not sufficient to address current, let alone future sustainability challenges. These challenges will require deep changes of sociotechnical systems. Theories on innovation systems; sustainable innovations; system thinking and design; and sustainability transitions, among others, have attempted to describe potential changes capable of shifting development towards more sustainable directions.
=== Autonomous work teams ===
Autonomous work teams also called self-managed teams, are an alternative to traditional assembly line methods. Rather than having a large number of employees each do a small operation to assemble a product, the employees are organized into small teams, each of which is responsible for assembling an entire product. These teams are self-managed, and are independent of one another.
In the mid-1970s, Pehr Gyllenhammar created his new “dock assembly” work system at Volvo’s Kalmar Plant. Instead of the traditional flow line system of car production, self-managed teams would assemble the entire car. The idea of worker directors – a director on the company board who is a representative of the workforce – was established through this project and the Swedish government required them in state enterprises.
=== Job enrichment ===
Job enrichment in organizational development, human resources management, and organizational behavior is the process of giving the employee a wider and higher-level scope of responsibility with increased decision-making authority. This is the opposite of job enlargement, which does not involve greater authority but only an increased number of duties.
The concept of minimal critical specification (Mumford, 2006) states that workers should be told what to do but not how to do it; deciding this should be left to their initiative. She says they can be involved in work groups, matrices and networks. The employee should receive the correct objectives but decide how to achieve them.
=== Job enlargement ===
Job enlargement means increasing the scope of a job through extending the range of its job duties and responsibilities. This contradicts the principles of specialisation and the division of labour whereby work is divided into small units, each of which is performed repetitively by an individual worker. Some motivational theories suggest that the boredom and alienation caused by the division of labour can actually cause efficiency to fall.
=== Job rotation ===
Job rotation is an approach to management development, where an individual is moved through a schedule of assignments designed to give him or her a breadth of exposure to the entire operation. Job rotation is also practiced to allow qualified employees to gain more insights into the processes of a company and to increase job satisfaction through job variation. The term job rotation can also mean the scheduled exchange of persons in offices, especially in public offices, prior to the end of incumbency or the legislative period. This was practiced by the German Green Party for some time but has been discontinued.
=== Motivation ===
Motivation in psychology refers to the initiation, direction, intensity and persistence of behavior. Motivation is a temporal and dynamic state that should not be confused with personality or emotion. Motivation is having the desire and willingness to do something. A motivated person can be reaching for a long-term goal such as becoming a professional writer or a more short-term goal like learning how to spell a particular word. Personality invariably refers to more or less permanent characteristics of an individual's state of being (e.g., shy, extrovert, conscientious). As opposed to motivation, emotion refers to temporal states that do not immediately link to behavior (e.g., anger, grief, happiness).
On the view that socio-technical design is a means by which intelligence and skill, combined with emerging technologies, can improve the work-life balance of employees, it is also believed that the aim is to achieve both a safer and more pleasurable workplace and greater democracy in society. Achieving these aims would therefore increase employee motivation and directly and positively influence their ability to express ideas. Enid Mumford's work on redesigning human systems also expressed that it is the role of the facilitator to "keep the members interested and motivated toward the design task, to help them resolve any conflicts".
Mumford states that although technology and organizational structures may change in industry, employee rights and needs must be given high priority. Future commercial success requires motivated workforces who are committed to their employers' interests. This requires companies and managers who are dedicated to creating this motivation and who recognize what is required for it to be achieved. Returning to socio-technical values, objectives and principles may provide an answer.
Mumford reflects on leadership within organisations, as lack of leadership has proven to be the downfall of many companies. As competition increases, employers have lost valued and qualified employees to their competitors. Opportunities such as better job roles and the chance to work one's way up have motivated these employees to join their rivals. Mumford suggests that a delegation of responsibility could help employees stay motivated, as they would feel appreciated and a sense of belonging, thus keeping them in their current organization. Leadership is key, as employees prefer following a structure and knowing that there is opportunity to improve.
When Mumford analysed the role of user participation during two ES projects, a drawback she found was that users found it difficult to see beyond their current practices and to anticipate how things could be done differently. Motivation was found to be another challenge during this process, as users were not interested in participating (Wagner, 2007).
=== Process improvement ===
Process improvement in organizational development is a series of actions taken to identify, analyze and improve existing processes within an organization to meet new goals and objectives. These actions often follow a specific methodology or strategy to create successful results.
=== Task analysis ===
Task analysis is the analysis of how a task is accomplished, including a detailed description of both manual and mental activities, task and element durations, task frequency, task allocation, task complexity, environmental conditions, necessary clothing and equipment, and any other unique factors involved in or required for one or more people to perform a given task. This information can then be used for many purposes, such as personnel selection and training, tool or equipment design, procedure design (e.g., design of checklists or decision support systems) and automation.
=== Job design ===
Job design or work design in organizational development is the application of sociotechnical systems principles and techniques to the humanization of work, for example, through job enrichment. The aims of work design are improved job satisfaction, improved throughput, improved quality and reduced employee problems, e.g., grievances and absenteeism.
=== Deliberations ===
Deliberations are key units of analysis in non-linear, knowledge work. They are 'choice points' that move knowledge work forward. As originated and defined by Cal Pava (1983) in a second-generation development of STS theory, deliberations are patterns of exchange and communication to reduce the equivocality of a problematic issue; for example, for systems engineering work, what features to develop in new software. Deliberations are not discrete decisions—they are a more continuous context for decisions. They have 3 aspects: topics, forums, and participants.
=== Work System Theory (WST) and Work System Method (WSM) ===
WST and WSM simplify the conceptualization of the traditionally complicated sociotechnical systems (STS) approach (Alter, 2015). Extending prior research on STS, which divides the social from the technical aspects, WST combines the two perspectives in a single work system and outlines the framework for WSM, which takes the work system as the system of interest and proposes solutions accordingly (Alter, 2015).
Both WST and WSM are sociotechnical in nature but are framed in terms of work systems; the Work System Method also encourages the use of sociotechnical ideas and values in IS development, use and implementation.
=== Evolution of socio-technical systems ===
Socio-technical design was at first approached exclusively as a social system; the need for joint optimisation of social and technical systems was only realised later. The work divided into primary work, looking into principles and description, and work on how to incorporate technical designs at a macrosocial level.
=== Benefits of seeing sociotechnical systems through a work system lens ===
Analysing and designing sociotechnical systems from a work system perspective eliminates the artificial distinction between the social system and the technical system, and with it the idea of joint optimization. Using a work system lens can bring many benefits, such as:
Viewing the work system as a whole, making it easier to discuss and analyse
More organised approach by even outlining basic understanding of a work system
A readily usable analysis method making it more adaptable for performing analysis of a work system
Does not require guidance by experts and researchers
Reinforces the idea that a work system exists to produce a product(s)/service(s)
Easier to theorize potential staff reductions, job roles changing and reorganizations
Encourages motivation and good will while reducing the stress from monitoring
Conscious that documentation and practice may differ
=== Problems to overcome ===
Differences in cultures across the world
Data theft of company information and networked systems
"Big Brother" effect on employees
Hierarchical imbalance between managers and lower staff
Persuading people out of the old attitude of 'instant fixes' adopted without any real thought of structure
=== Social network / structure ===
The social network perspective first appeared in 1920 at Harvard University, within the Sociology Department. Within information systems, social networks have been used to study the behaviour of teams, organisations and industries. The social network perspective is useful for studying some of the emerging forms of social or organisational arrangements and the roles of ICT.
=== Social media and Artificial Intelligence ===
Recent work on Artificial Intelligence considers large sociotechnical systems, such as social networks and online marketplaces, as agents whose behaviour can be purposeful and adaptive. The behaviour of recommender systems can therefore be analysed in the language and framework of sociotechnical systems, leading also to a new perspective on their legal regulation.
=== Multi-directional inheritance ===
Multi-directional inheritance is the premise that work systems inherit their purpose, meaning and structure from the organisation and reflect the priorities and purposes of the organisation that encompasses them. Fundamentally, this premise includes crucial assumptions about sequencing, timescales, and precedence. The purpose, meaning and structure can derive from multiple contexts and once obtained it can be passed on to the sociotechnical systems that emerge throughout the organisation.
=== Sociological perspective on sociotechnical systems ===
Research interest during the 1990s in the social dimensions of IS, directed to the relationship among IS development, use, and the resultant social and organizational changes, offered fresh insight into the emerging role of ICT within differing organizational contexts, drawing directly on sociological theories of institutions. This sociotechnical research has informed, if not shaped, IS scholarship. Sociological theories have offered a solid basis upon which emerging sociotechnical research has built.
=== ETHICS history ===
The ETHICS (Effective Technical and Human Implementation of Computer Systems) process has been used successfully by Mumford in a variety of projects since its conception during the Turners Asbestos Cement project. Having overlooked a vital request from the customer to discuss and potentially fix the issues found with the current organisation, she gave her advice on making a system; the system was not well received, and Mumford was told they had already been using a similar one. This is when she realised that a participative approach would benefit many future projects.
Enid Mumford's development of ETHICS was a push to remind those in the field that research need not always be done on topics of current interest, and that following immediate trends over one's current research is not always the way forward. It was a reminder that work should always be finished and that we should never, as she said, "write them off with no outcome."
== See also ==
Complex systems – System composed of many interacting components
Cybernetics – Transdisciplinary field concerned with regulatory and purposive systems
Feedback – Process where information about current status is used to influence future status
Human factors – Designing systems to suit their users
Remote work – Employees working from any location
Social machine – Social system
Social network – Social structure made up of a set of social actors
Sociology – Social science that studies human society and its development
Sociotechnology – Study of processes on the intersection of society and technology
Systems theory – Interdisciplinary study of systems
Systems science – Study of the nature of systems
== References ==
== Further reading ==
Kenyon B. De Greene (1973). Sociotechnical systems: factors in analysis, design, and management.
Jose Luis Mate and Andres Silva (2005). Requirements Engineering for Sociotechnical Systems.
Enid Mumford (1985). Sociotechnical Systems Design: Evolving Theory and Practice.
William A. Pasmore and John J. Sherwood (1978). Sociotechnical Systems: A Sourcebook.
William A. Pasmore (1988). Designing Effective Organizations: The Sociotechnical Systems Perspective.
Pascal Salembier, Tahar Hakim Benchekroun (2002). Cooperation and Complexity in Sociotechnical Systems.
Sawyer, S. and Jarrahi, M.H. (2014). The Sociotechnical Perspective. Information Systems and Information Technology, Volume 2 (Computing Handbook Set, Third Edition), edited by Heikki Topi and Allen Tucker. Chapman and Hall/CRC. http://sawyer.syr.edu/publications/2013/sociotechnical%20chapter.pdf
James C. Taylor and David F. Felten (1993). Performance by Design: Sociotechnical Systems in North America.
Eric Trist and H. Murray, eds. (1993). The Social Engagement of Social Science, Volume II: The Socio-Technical Perspective. Philadelphia: University of Pennsylvania Press. http://www.moderntimesworkplace.com/archives/archives.html
James T. Ziegenfuss (1983). Patients' Rights and Organizational Models: Sociotechnical Systems Research on mental health programs.
Hongbin Zha (2006). Interactive Technologies and Sociotechnical Systems: 12th International Conference, VSMM 2006, Xi'an, China, October 18–20, 2006, Proceedings.
Trist, E. (1981). The evolution of socio-technical systems: A conceptual framework and an action research program. Ontario Ministry of Labour, Ontario Quality of Working Life Centre.
Amelsvoort, P., & Mohr, B. (Co-Eds.) (2016). "Co-Creating Humane and Innovative Organizations: Evolutions in the Practice of Socio-Technical System Design": Global STS-D Network Press
Pava, C., 1983. Managing New Office Technology. Free Press, New York, NY.
== External links ==
Ropohl, Günter (1999). "Philosophy of Socio-Technical Systems". Society for Philosophy and Technology Quarterly Electronic Journal. 4 (3): 186–194. doi:10.5840/techne19994311. S2CID 18537088.
JP Vos, The making of strategic realities : an application of the social systems theory of Niklas Luhmann, Technical University of Eindhoven, Department of Technology Management, 2002.
STS Roundtable, an international not-for-profit association of professional and scholarly practitioners of Sociotechnical Systems Theory
IEEE 1st Workshop on Socio-Technical Aspects of Mashups
http://istheory.byu.edu/wiki/Socio-technical_theory
Cartelli, Antonio (2007). "Socio-Technical Theory and Knowledge Construction: Towards New Pedagogical Paradigms?". Proceedings of the 2007 InSITE Conference. Vol. 7. CiteSeerX 10.1.1.381.9353. doi:10.28945/3051.
http://www.moderntimesworkplace.com/archives/archives.html, Archived Vol I, II, & III of The Tavistock Anthology | Wikipedia/Sociotechnical_systems_theory |
Climatology (from Greek κλίμα, klima, "slope"; and -λογία, -logia) or climate science is the scientific study of Earth's climate, typically defined as weather conditions averaged over a period of at least 30 years. Climate concerns the atmospheric condition during an extended to indefinite period of time; weather is the condition of the atmosphere during a relatively brief period of time. The main topics of research are the study of climate variability, mechanisms of climate changes and modern climate change. This topic of study is regarded as part of the atmospheric sciences and a subdivision of physical geography, which is one of the Earth sciences. Climatology includes some aspects of oceanography and biogeochemistry.
The main methods employed by climatologists are the analysis of observations and modelling of the physical processes that determine climate. Short term weather forecasting can be interpreted in terms of knowledge of longer-term phenomena of climate, for instance climatic cycles such as the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation (MJO), the North Atlantic oscillation (NAO), the Arctic oscillation (AO), the Pacific decadal oscillation (PDO), and the Interdecadal Pacific Oscillation (IPO). Climate models are used for a variety of purposes from studying the dynamics of the weather and climate system to predictions of future climate.
== History ==
The Greeks began the formal study of climate; in fact, the word "climate" is derived from the Greek word klima, meaning "slope", referring to the slope or inclination of the Earth's axis. Arguably the most influential classic text concerning climate was On Airs, Water and Places written by Hippocrates about 400 BCE. This work commented on the effect of climate on human health and cultural differences between Asia and Europe. This idea that climate controls which populations excel depending on their climate, or climatic determinism, remained influential throughout history. Chinese scientist Shen Kuo (1031–1095) inferred that climates naturally shifted over an enormous span of time, after observing petrified bamboos found underground near Yanzhou (modern Yan'an, Shaanxi province), a dry-climate area unsuitable at that time for the growth of bamboo.
The invention of thermometers and barometers during the Scientific Revolution allowed for systematic recordkeeping that began as early as 1640–1642 in England. Early climate researchers include Edmund Halley, who published a map of the trade winds in 1686 after a voyage to the southern hemisphere. Benjamin Franklin (1706–1790) first mapped the course of the Gulf Stream for use in sending mail from North America to Europe. Francis Galton (1822–1911) invented the term anticyclone. Helmut Landsberg (1906–1985) fostered the use of statistical analysis in climatology.
During the early 20th century, climatology mostly emphasized the description of regional climates. This descriptive climatology was mainly an applied science, giving farmers and other interested people statistics about what the normal weather was and how great the chances of extreme events were. To do this, climatologists had to define a climate normal, or an average of weather and weather extremes over a period of typically 30 years. While scientists knew of past climate change such as the ice ages, the concept of climate as changing only very gradually was useful for descriptive climatology. This started to change during the decades that followed, and while the history of climate change science started earlier, climate change only became one of the main topics of study for climatologists during the 1970s and afterward.
== Subfields ==
Various subtopics of climatology study different aspects of climate. There are different categorizations of the sub-topics of climatology. The American Meteorological Society for instance identifies descriptive climatology, scientific climatology and applied climatology as the three subcategories of climatology, a categorization based on the complexity and the purpose of the research. Applied climatologists apply their expertise to different industries such as manufacturing and agriculture.
Paleoclimatology is the attempt to reconstruct and understand past climates by examining records such as ice cores and tree rings (dendroclimatology). Paleotempestology uses these same records to help determine hurricane frequency over millennia. Historical climatology is the study of climate as related to human history and is thus concerned mainly with the last few thousand years.
Boundary-layer climatology concerns exchanges in water, energy and momentum near surfaces. Further identified subtopics are physical climatology, dynamic climatology, tornado climatology, regional climatology, bioclimatology, and synoptic climatology. The study of the hydrological cycle over long time scales is sometimes termed hydroclimatology, in particular when studying the effects of climate change on the water cycle.
== Methods ==
The study of contemporary climates incorporates meteorological data accumulated over many years, such as records of rainfall, temperature and atmospheric composition. Knowledge of the atmosphere and its dynamics is also embodied in models, either statistical or mathematical, which help by integrating different observations and testing how well they match. Modeling is used for understanding past, present and potential future climates.
Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical principles which can be expressed as differential equations. These equations are coupled and nonlinear, so that approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic process but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze.
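As a minimal illustration of this numerical approach (a pedagogical sketch, not any production climate model), the simplest zero-dimensional energy balance equation can be stepped forward in time with Euler's method, with a random term added in the spirit of treating weather as a stochastic forcing. The constants below are textbook-style illustration values.

```python
# Euler time-stepping of a zero-dimensional energy balance model,
#   C dT/dt = (1 - albedo) * S / 4 - epsilon * sigma * T**4 + noise,
# as a toy illustration of solving a climate equation numerically.
# All constants are textbook-style illustration values.
import random

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo
EPSILON = 0.61     # effective emissivity, a crude greenhouse stand-in
C = 2.0e8          # effective heat capacity per unit area, J m^-2 K^-1

dt = 86400.0       # time step of one day, in seconds
T = 270.0          # initial global mean temperature, K

for _ in range(365 * 50):                 # integrate for 50 years
    absorbed = (1.0 - ALBEDO) * S / 4.0   # incoming short-wave energy
    emitted = EPSILON * SIGMA * T ** 4    # outgoing long-wave energy
    weather = random.gauss(0.0, 1.0)      # stochastic "weather" forcing
    T += dt * (absorbed - emitted + weather) / C

print(f"temperature after 50 years: {T:.1f} K")  # settles near 288 K
```

The deterministic part of the equation pulls the temperature toward its radiative equilibrium, while the noise term makes the trajectory fluctuate around it, which is the sense in which climate can be treated as a stochastic process approximating unresolved weather.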
=== Climate data ===
The collection of a long record of climate variables is essential for the study of climate. Climatology deals with the aggregate data that meteorologists have recorded. Scientists use both direct and indirect observations of the climate, from Earth observing satellites and scientific instrumentation such as a global network of thermometers, to prehistoric ice extracted from glaciers. As measuring technology changes over time, records of data often cannot be compared directly. As cities are generally warmer than the surrounding areas, urbanization has made it necessary to constantly correct data for this urban heat island effect.
=== Models ===
Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes from study of the dynamics of the weather and climate system to projections of future climate. All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the Earth. Any unbalance results in a change of the average temperature of the Earth. Most climate models include the radiative effects of greenhouse gases such as carbon dioxide. These models predict a trend of increase of surface temperatures, as well as a more rapid increase of temperature at higher latitudes.
Models can range from relatively simple to complex:
A simple radiant heat transfer model that treats the Earth as a single point and averages outgoing energy (a worked equilibrium calculation for this model is given after this list).
This can be expanded vertically (radiative-convective models), or horizontally.
Coupled atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
Earth system models further include the biosphere.
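For the single-point model mentioned first in the list above, the equilibrium temperature follows from balancing absorbed sunlight against emitted thermal radiation. With planetary albedo α ≈ 0.3, solar constant S ≈ 1361 W/m², Stefan–Boltzmann constant σ, and an effective emissivity ε ≈ 0.61 standing in crudely for the greenhouse effect (textbook illustration values, not outputs of any particular model):

{\displaystyle (1-\alpha ){\frac {S}{4}}=\epsilon \sigma T^{4}\quad \Rightarrow \quad T=\left({\frac {(1-\alpha )S}{4\epsilon \sigma }}\right)^{1/4}\approx 288{\text{ K}},}

whereas setting ε = 1 (no greenhouse effect) gives only about 255 K.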
Additionally, they are available with different resolutions, ranging from >100 km to 1 km. High resolutions in global climate models are computationally very demanding, and only a few global datasets exist. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the Earth's land surface areas).
== Topics of research ==
Topics that climatologists study comprise three main categories: climate variability, mechanisms of climatic change, and modern changes of climate.
=== Climatological processes ===
Various factors affect the average state of the atmosphere at a particular location. For instance, midlatitudes will have a pronounced seasonal cycle of temperature whereas tropical regions show little variation of temperature over a year. Another major variable of climate is continentality: the distance to major water bodies such as oceans. Oceans act as a moderating factor, so that land close to it has typically less difference of temperature between winter and summer than areas further from it. The atmosphere interacts with other parts of the climate system, with winds generating ocean currents that transport heat around the globe.
=== Climate classification ===
Classification is an important method of simplifying complicated processes. Different climate classifications have been developed over the centuries, with the first ones in Ancient Greece. How climates are classified depends on what the application is. A wind energy producer will require different information (wind) in a classification than someone more interested in agriculture, for whom precipitation and temperature are more important. The most widely used classification, the Köppen climate classification, was developed during the late nineteenth century and is based on vegetation. It uses monthly data concerning temperature and precipitation.
=== Climate variability ===
There are different types of variability: recurring patterns of temperature or other climate variables. They are quantified with different indices. Much in the way the Dow Jones Industrial Average, which is based on the stock prices of 30 companies, is used to represent the fluctuations of stock prices in general, climate indices are used to represent the essential elements of climate. Climate indices are generally devised with the twin objectives of simplicity and completeness, and each index typically represents the status and timing of the climate factor it stands for. By their very nature, indices are simple, and combine many details into a generalized, overall description of the atmosphere or ocean which can be used to characterize the factors that affect the global climate system.
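As a concrete sketch of how such an index can be built, the code below standardizes two pressure series against their own means and spreads and takes their difference, in the style of the Southern Oscillation Index (which is based on the pressure difference between Tahiti and Darwin). The example series are synthetic placeholders; a real index would use long station climatologies.

import statistics

def standardize(series):
    # Convert a raw series into anomalies in units of standard deviations.
    mean = statistics.fmean(series)
    sd = statistics.stdev(series)
    return [(x - mean) / sd for x in series]

def pressure_difference_index(station_a, station_b):
    """SOI-style index: difference of standardized pressure anomalies."""
    return [a - b for a, b in zip(standardize(station_a), standardize(station_b))]

# Synthetic example series (placeholders for real monthly pressure data):
tahiti_like = [1012.1, 1011.4, 1013.0, 1012.6, 1011.8, 1012.9]
darwin_like = [1008.3, 1009.0, 1007.9, 1008.8, 1009.4, 1008.1]
print(pressure_difference_index(tahiti_like, darwin_like))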
El Niño–Southern Oscillation (ENSO) is a coupled ocean–atmosphere phenomenon in the Pacific Ocean responsible for much of the global variability of temperature, and has a cycle between two and seven years. The North Atlantic oscillation is a mode of variability that is mainly confined to the lower atmosphere, the troposphere. The layer of atmosphere above, the stratosphere, is also capable of creating its own variability, most importantly the Madden–Julian oscillation (MJO), which has a cycle of approximately 30 to 60 days. The Interdecadal Pacific oscillation can create changes in the Pacific Ocean and lower atmosphere on decadal time scales.
=== Climate change ===
Climate change occurs when changes in Earth's climate system result in new weather patterns that remain for an extended period of time, which can range from a few decades to millions of years. The climate system receives nearly all of its energy from the sun, and it also gives off energy to outer space. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling. Climate change also influences the average sea level.
Modern climate change is caused largely by human emissions of greenhouse gases from the burning of fossil fuels, which increase global mean surface temperatures. Increasing temperature is only one aspect of modern climate change, which also includes observed changes in precipitation, storm tracks, and cloudiness. Warmer temperatures are causing further changes in the climate system, such as the widespread melting of glaciers, sea level rise, and shifts of flora and fauna.
== Differences with meteorology ==
In contrast to meteorology, which emphasises short term weather systems lasting no more than a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes of long-term average weather patterns in relation to atmospheric conditions. Climatologists study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and can help predict future climate change.
Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere.
== Use in weather forecasting ==
A relatively difficult method of forecasting, the analog technique requires remembering a previous weather event that is expected to be mimicked by an upcoming event. What makes it a difficult technique is that there is rarely a perfect analog for a future event. Some refer to this type of forecasting as pattern recognition, which remains a useful method of estimating rainfall over data voids such as oceans using knowledge of how satellite imagery relates to precipitation rates over land, as well as of forecasting precipitation amounts and distribution in the future. A variation on this theme, used for medium-range forecasting, is known as teleconnections, in which systems in other locations are used to help determine the location of a system within the surrounding regime. One method of using teleconnections is to use climate indices such as ENSO-related phenomena.
== See also ==
== References ==
=== Books ===
Robinson, Peter J.; Henderson-Sellers, Ann (1999). Contemporary Climatology. Harlow, England: Pearson Prentice Hall. ISBN 0582276314.
Rohli, Robert V.; Vega, Anthony J. (2018). Climatology (fourth ed.). Jones & Bartlett Learning. ISBN 9781284126563.
Rohli, Robert V.; Vega, Anthony J. (2011). Climatology (second ed.). Jones & Bartlett Learning.
Wang, Shih-Yu; Gillies, Robert R., eds. (2012). Modern Climatology. Rijeka, Croatia: InTech. ISBN 978-953-51-0095-9.
== Further reading ==
Jenny Uglow, "What the Weather Is" (review of Sarah Dry, Waters of the World: The Story of the Scientists Who Unraveled the Mysteries of Our Oceans, Atmosphere, and Ice Sheets and Made the Planet Whole, University of Chicago Press, 2019, 332 pp.), The New York Review of Books, vol. LXVI, no. 20 (19 December 2019), pp. 56–58.
== External links ==
Climate Science Special Report – U.S. Global Change Research Program
KNMI Climate Explorer – The Royal Netherlands Meteorological Institute's Climate Explorer graphs climatological relationships of spatial and temporal data.
Climatology as a Profession Archived 2007-07-13 at the Wayback Machine – Amer. Inst. of Physics account of the history of the discipline of climatology in the 20th century
Critical systems thinking (CST) is a systems thinking approach designed to help decision-makers and other stakeholders improve complex problem situations that cross departmental and, often, organizational boundaries. CST sees systems thinking as essential to managing multidimensional 'messes' in which technical, economic, organizational, human, cultural and political elements interact. It is critical in a positive manner because it seeks to capitalize on the strengths of existing approaches while also calling attention to their limitations. CST seeks to allow systems approaches such as systems engineering, system dynamics, organizational cybernetics, soft systems methodology, critical systems heuristics, and others, to be used together, in a responsive and flexible way, to maximize the benefits they can bring.
== History ==
CST has its origins in the 1980s with accounts of how the theoretical partiality of existing systems methodologies limited their ability to guide interventions in the full range of problem situations; calls for pluralism in systems practice; and suggestions about how those disadvantaged by systems designs could be given a voice and have impact. CST was largely developed at the Centre for Systems Studies, University of Hull, based on research by Michael C Jackson, Paul Keys, and Robert L Flood. It came to prominence in 1991 with the publication of three books - Critical Systems Thinking: Directed Readings, Systems Methodology for the Management Sciences, and Creative Problem Solving: Total Systems Intervention. The first was a collection of papers, accompanied by a commentary, which traced the origins and outlined the major themes of the approach. It highlighted the contributions of authors such as Flood, Fuenmayor, Jackson, Mingers, Oliga and Ulrich. The second offered a critique of existing systems approaches from the perspective of social theory, made the case for CST and sought to demonstrate that it could take the lead in enriching theory and practice in the management sciences. The third was the first attempt to show how CST could be used in practice. Since 1991, CST has been taken forward by authors such as Robert L Flood, Michael C Jackson, John Mingers and Gerald Midgley.
== Recent developments ==
Recent developments have centered on the application of CST in practice – in particular Gerald Midgley's 'Systemic Intervention', focusing on boundary critique, and Michael C Jackson's multiperspectival and multimethodological 'Critical Systems Practice' (CSP). Adopting a pragmatist orientation, Jackson has set out, in a series of papers, how the four commitments of CST can be applied in practice. CSP has four main stages – Explore, Produce, Intervene, and Check (EPIC) – and various sub-stages:
Explore the problem situation
view it from a variety of systemic perspectives
identify primary and secondary issues
Produce an appropriate intervention strategy
appreciate the variety of systems approaches
choose appropriate systems methodologies
choose appropriate systems models and methods
structure, schedule and set objectives for the intervention
Intervene flexibly (revisiting the first two stages as necessary)
Check on progress
evaluate the improvements achieved
reflect on the systems approaches used
discuss and agree next steps
== See also ==
Systems engineering
System dynamics
Organizational cybernetics
Soft systems methodology
Systems thinking
Systems theory
== References ==
The prisoner's dilemma is a game theory thought experiment involving two rational agents, each of whom can either cooperate for mutual benefit or betray their partner ("defect") for individual gain. The dilemma arises from the fact that while defecting is rational for each agent, cooperation yields a higher payoff for each. The puzzle was designed by Merrill Flood and Melvin Dresher in 1950 during their work at the RAND Corporation. They invited economist Armen Alchian and mathematician John Williams to play a hundred rounds of the game, observing that Alchian and Williams often chose to cooperate. When asked about the results, John Nash remarked that rational behavior in the iterated version of the game can differ from that in a single-round version. This insight anticipated a key result in game theory: cooperation can emerge in repeated interactions, even in situations where it is not rational in a one-off interaction.
Albert W. Tucker later named the game the "prisoner's dilemma" by framing the rewards in terms of prison sentences. The prisoner's dilemma models many real-world situations involving strategic behavior. In casual usage, the label "prisoner's dilemma" is applied to any situation in which two entities can gain important benefits by cooperating or suffer by failing to do so, but find it difficult or expensive to coordinate their choices.
== Premise ==
This "typical contemporary version" of the game is described in William Poundstone's 1993 book Prisoner's Dilemma:
Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail. The prisoners are given a little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the very same deal. Each prisoner is concerned only with his own welfare—with minimizing his own prison sentence.
This leads to three different possible outcomes for prisoners A and B:
If A and B both remain silent, they will each serve one year in prison.
If one testifies against the other but the other doesn’t, the one testifying will be set free while the other serves three years in prison.
If A and B testify against each other, they will each serve two years.
== Strategy for the prisoner's dilemma ==
Two prisoners are separated into individual rooms and cannot communicate with each other. It is assumed that both prisoners understand the nature of the game, have no loyalty to each other, and will have no opportunity for retribution or reward outside of the game. The normal game is shown below:

                    B stays silent                  B testifies
A stays silent      each serves 1 year              A serves 3 years, B goes free
A testifies         A goes free, B serves 3 years   each serves 2 years
Regardless of what the other decides, each prisoner gets a higher reward by betraying the other ("defecting"). The reasoning involves analyzing both players' best responses: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So, either way, A should defect since defecting is A's best response regardless of B's strategy. Parallel reasoning will show that B should defect.
Defection always results in a better payoff than cooperation, so it is a strictly dominant strategy for both players. Mutual defection is the only strong Nash equilibrium in the game. Since the collectively ideal result of mutual cooperation is irrational from a self-interested standpoint, this Nash equilibrium is not Pareto efficient.
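The best-response reasoning above can be checked mechanically. The brute-force sketch below uses the prison sentences from the premise (lower is better) and confirms that defecting is each player's best response to either choice by the other, so mutual defection is the unique equilibrium; the function names are purely illustrative.

ACTIONS = ("cooperate", "defect")   # cooperate = stay silent, defect = testify

# YEARS[(a, b)] -> (years in prison for A, years in prison for B)
YEARS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response_for_a(b_action):
    # A minimizes prison time given B's fixed action.
    return min(ACTIONS, key=lambda a: YEARS[(a, b_action)][0])

def nash_equilibria():
    # A profile is an equilibrium when neither player can do better alone.
    eqs = []
    for a in ACTIONS:
        for b in ACTIONS:
            a_best = YEARS[(a, b)][0] == min(YEARS[(x, b)][0] for x in ACTIONS)
            b_best = YEARS[(a, b)][1] == min(YEARS[(a, y)][1] for y in ACTIONS)
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

assert best_response_for_a("cooperate") == "defect"
assert best_response_for_a("defect") == "defect"
assert nash_equilibria() == [("defect", "defect")]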
== Generalized form ==
The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue and that each player chooses to either "cooperate" or "defect".
If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T.
This can be expressed in normal form (payoffs listed as Red, Blue):

                   Blue cooperates    Blue defects
Red cooperates     R, R               S, T
Red defects        T, S               P, P

To be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs: T > R > P > S. The payoff relationship R > P implies that mutual cooperation is superior to mutual defection, while the payoff relationships T > R and P > S imply that defection is the dominant strategy for both agents.
== The iterated prisoner's dilemma ==
If two players play the prisoner's dilemma more than once in succession, remember their opponent's previous actions, and are allowed to change their strategy accordingly, the game is called the iterated prisoner's dilemma.
In addition to the general form above, the iterative version also requires that 2R > T + S, to prevent alternating cooperation and defection giving a greater reward than mutual cooperation.
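Both conditions are easy to check programmatically. The sketch below validates a payoff quadruple, using the commonly quoted example values (T, R, P, S) = (5, 3, 1, 0); these numbers are conventional illustrations, not values fixed by the text above.

def is_strong_prisoners_dilemma(T, R, P, S):
    """One-shot condition in the strong sense: T > R > P > S."""
    return T > R > P > S

def suits_iterated_play(T, R, P, S):
    """Iterated condition: 2R > T + S, so alternation cannot beat mutual cooperation."""
    return is_strong_prisoners_dilemma(T, R, P, S) and 2 * R > T + S

assert suits_iterated_play(T=5, R=3, P=1, S=0)
assert not suits_iterated_play(T=10, R=3, P=1, S=0)  # alternation would pay here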
The iterated prisoner's dilemma is fundamental to some theories of human cooperation and trust. Assuming that the game effectively models transactions between two people that require trust, cooperative behavior in populations can be modeled by a multi-player iterated version of the game. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma is also called the "peace-war game".
=== General strategy ===
If the iterated prisoner's dilemma is played a finite number of times and both players know this, then the dominant strategy and Nash equilibrium is to defect in all rounds. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.
For cooperation to emerge between rational players, the number of rounds must be unknown or infinite. In that case, "always defect" may no longer be a dominant strategy. As shown by Robert Aumann in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain cooperation. Specifically, a player may be less willing to cooperate if their counterpart did not cooperate many times, which causes disappointment. Conversely, as time elapses, the likelihood of cooperation tends to rise, owing to the establishment of a "tacit agreement" among participating players. In experimental situations, cooperation can occur even when both participants know how many iterations will be played.
According to a 2019 experimental study in the American Economic Review that tested what strategies real-life subjects used in iterated prisoner's dilemma situations with perfect monitoring, the majority of chosen strategies were always to defect, tit-for-tat, and grim trigger. Which strategy the subjects chose depended on the parameters of the game.
=== Axelrod's tournament and successful strategy conditions ===
Interest in the iterated prisoner's dilemma was kindled by Robert Axelrod in his 1984 book The Evolution of Cooperation, in which he reports on a tournament that he organized of the N-step prisoner's dilemma (with N fixed) in which participants have to choose their strategy repeatedly and remember their previous encounters. Axelrod invited academic colleagues from around the world to devise computer strategies to compete in an iterated prisoner's dilemma tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.
Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behavior from mechanisms that are initially purely selfish, by natural selection.
The winning deterministic strategy was tit for tat, developed and entered into the tournament by Anatol Rapoport. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness": when the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%, depending on the lineup of opponents). This allows for occasional recovery from getting trapped in a cycle of defections.
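Both strategies are short enough to state directly in code. The sketch below is one plausible rendering, with the usual illustrative payoffs (T, R, P, S) = (5, 3, 1, 0) and a 5% forgiveness probability taken as assumptions rather than values fixed by Axelrod's tournament.

import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opponent_history):
    # Cooperate on the first round, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def forgiving_tit_for_tat(own_history, opponent_history, forgiveness=0.05):
    move = tit_for_tat(own_history, opponent_history)
    # Occasionally cooperate anyway instead of retaliating.
    if move == "D" and random.random() < forgiveness:
        return "C"
    return move

def play_match(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

Two tit-for-tat players cooperate throughout, so play_match(tit_for_tat, tit_for_tat) returns (600, 600) over 200 rounds; forgiveness only matters once a defection has entered the history.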
After analyzing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to succeed:
Nice: The strategy will not be the first to defect (this is sometimes referred to as an "optimistic" algorithm), i.e., it will not "cheat" on its opponent for purely self-interested reasons first. Almost all the top-scoring strategies were nice.
Retaliating: The strategy must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate, a very bad choice that will frequently be exploited by "nasty" strategies.
Forgiving: Successful strategies must be forgiving. Though players will retaliate, they will cooperate again if the opponent does not continue to defect. This can stop long runs of revenge and counter-revenge, maximizing points.
Non-envious: The strategy must not strive to score more than the opponent.
In contrast to the one-time prisoner's dilemma game, the optimal strategy in the iterated prisoner's dilemma depends upon the strategies of likely opponents, and how they will react to defections and cooperation. For example, if a population consists entirely of players who always defect, except for one who follows the tit-for-tat strategy, that person is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy is to defect every time. More generally, given a population with a certain percentage of always-defectors with the rest being tit-for-tat players, the optimal strategy depends on the percentage and number of iterations played.
=== Other strategies ===
Deriving the optimal strategy is generally done in two ways:
Bayesian Nash equilibrium: If the statistical distribution of opposing strategies can be determined, an optimal counter-strategy can be derived analytically.
Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy); a minimal sketch of such a simulation follows this list. The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit-for-tat players, but no analytic proof exists that this will always occur.
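The sketch below is a deliberately simplified version of such a population simulation: strategies are paired at random, the lower-scoring half dies off, the higher-scoring half reproduces, and a small mutation rate reintroduces variety. The strategy pool, match length, and mutation rate are all illustrative assumptions.

import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

STRATEGIES = {
    "always_defect":    lambda own, opp: "D",
    "always_cooperate": lambda own, opp: "C",
    "tit_for_tat":      lambda own, opp: "C" if not opp else opp[-1],
}

def match_scores(name_a, name_b, rounds=50):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        a = STRATEGIES[name_a](ha, hb)
        b = STRATEGIES[name_b](hb, ha)
        pa, pb = PAYOFFS[(a, b)]
        sa, sb = sa + pa, sb + pb
        ha.append(a)
        hb.append(b)
    return sa, sb

def evolve(population, generations=30, mutation=0.01):
    for _ in range(generations):
        random.shuffle(population)
        fitness = []
        for i in range(0, len(population) - 1, 2):
            sa, sb = match_scores(population[i], population[i + 1])
            fitness += [(sa, population[i]), (sb, population[i + 1])]
        fitness.sort(reverse=True)                       # high scorers first
        survivors = [name for _, name in fitness[: len(fitness) // 2]]
        offspring = [random.choice(list(STRATEGIES)) if random.random() < mutation
                     else random.choice(survivors)
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return population

Started from, say, 20 copies each of the three strategies, runs of this toy model usually end with tit-for-tat and cooperators dominating, though outcomes vary from run to run, consistent with the initial-mix dependence noted above.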
In the strategy called win-stay, lose-switch (also known as Pavlov), faced with a failure to cooperate, the player switches strategy the next turn. In certain circumstances, Pavlov beats all other strategies by giving preferential treatment to co-players using a similar strategy.
Although tit-for-tat is considered the most robust basic strategy, a team from Southampton University in England introduced a more successful strategy at the 20th-anniversary iterated prisoner's dilemma competition. It relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the competing program's score. As a result, the 2004 Prisoners' Dilemma Tournament results show University of Southampton's strategies in the first three places (and a number of positions towards the bottom), despite having fewer wins and many more losses than the GRIM strategy. The Southampton strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that a team's performance was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing).
Because of this new rule, this competition also has little theoretical significance when analyzing single-agent strategies as compared to Axelrod's seminal tournament. But it provided a basis for analyzing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise.
Long before this new-rules tournament was played, Richard Dawkins, in his book The Selfish Gene, pointed out the possibility of such strategies winning if multiple entries were allowed, but wrote that Axelrod would most likely not have allowed them if they had been submitted. Such strategies also circumvent the rule against communication between players: the Southampton programs' "ten-move dance" allowed them to recognize one another, reinforcing how valuable communication can be in shifting the balance of the game.
Even without implicit collusion between software strategies, tit-for-tat is not always the absolute winner of any given tournament; more precisely, its long-run results over a series of tournaments outperform its rivals, but this does not mean it is the most successful in the short term. The same applies to tit-for-tat with forgiveness and other optimal strategies.
This can also be illustrated using the Darwinian ESS simulation. In such a simulation, tit-for-tat will almost always come to dominate, though nasty strategies will drift in and out of the population because a tit-for-tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. Dawkins showed that here, no static mix of strategies forms a stable equilibrium, and the system will always oscillate between bounds.
=== Stochastic iterated prisoner's dilemma ===
In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities". In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities:
P = {P_cc, P_cd, P_dc, P_dd}, where P_cd is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by X cooperating and Y defecting. If each of the probabilities is either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit-for-tat strategy, written as P = {1, 0, 1, 0}, in which X responds as Y did in the previous encounter. Another is the win-stay, lose-switch strategy, written as P = {1, 0, 0, 1}. It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy that gives the same statistical results, so that only memory-1 strategies need be considered.
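A direct way to explore such strategies is to simulate them. The sketch below draws each player's move from the appropriate cooperation probability and estimates long-run average payoffs by sampling; the payoff values and the assumption that both players cooperate in the unmodelled first round are illustrative choices.

import random

TIT_FOR_TAT = (1.0, 0.0, 1.0, 0.0)           # P = {1, 0, 1, 0}
WIN_STAY_LOSE_SWITCH = (1.0, 0.0, 0.0, 1.0)  # P = {1, 0, 0, 1}

def next_move(strategy, own_last, opp_last):
    # Index the strategy vector by the previous outcome (cc, cd, dc, dd).
    index = {"CC": 0, "CD": 1, "DC": 2, "DD": 3}[own_last + opp_last]
    return "C" if random.random() < strategy[index] else "D"

def estimate_payoffs(p, q, rounds=100_000, R=3, S=0, T=5, P=1):
    table = {("C", "C"): (R, R), ("C", "D"): (S, T),
             ("D", "C"): (T, S), ("D", "D"): (P, P)}
    x = y = "C"                  # assumed opening state
    total_x = total_y = 0
    for _ in range(rounds):
        x, y = next_move(p, x, y), next_move(q, y, x)   # simultaneous update
        px, py = table[(x, y)]
        total_x, total_y = total_x + px, total_y + py
    return total_x / rounds, total_y / rounds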
If P is defined as the above 4-element strategy vector of X and Q = {Q_cc, Q_cd, Q_dc, Q_dd} as the 4-element strategy vector of Y (where the indices are from Y's point of view), a transition matrix M may be defined for X whose ij-th entry is the probability that the outcome of a particular encounter between X and Y will be j given that the previous encounter was i, where i and j are one of the four outcome indices: cc, cd, dc, or dd. For example, from X's point of view, the probability that the outcome of the present encounter is cd given that the previous encounter was cd is equal to M_{cd,cd} = P_cd(1 − Q_dc). Under these definitions, the iterated prisoner's dilemma qualifies as a stochastic process and M is a stochastic matrix, allowing all of the theory of stochastic processes to be applied.
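The matrix M is straightforward to build from the two strategy vectors. In the sketch below, states are ordered (cc, cd, dc, dd) from X's point of view, and Y's vector is re-indexed accordingly, since X's state cd is Y's state dc; the final assertion checks the example from the text.

def transition_matrix(p, q):
    # p = (P_cc, P_cd, P_dc, P_dd) from X's view;
    # re-index q so that entry i is Y's cooperation probability in X's state i.
    qy = (q[0], q[2], q[1], q[3])
    M = []
    for i in range(4):
        cx, cy = p[i], qy[i]
        M.append([cx * cy,                 # next state cc
                  cx * (1 - cy),           # next state cd
                  (1 - cx) * cy,           # next state dc
                  (1 - cx) * (1 - cy)])    # next state dd
    return M

# M_{cd,cd} = P_cd * (1 - Q_dc), as in the example above.
p, q = (0.9, 0.2, 0.7, 0.1), (0.8, 0.5, 0.4, 0.3)
assert transition_matrix(p, q)[1][1] == p[1] * (1 - q[2])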
One result of stochastic theory is that there exists a stationary vector v for the matrix M such that v·M = v. Without loss of generality, it may be specified that v is normalized so that the sum of its four components is unity. The ij-th entry in M^n gives the probability that the outcome of an encounter between X and Y will be j given that the encounter n steps previous was i. In the limit as n approaches infinity, M^n converges to a matrix with fixed values, giving the long-term probabilities of an encounter producing j independently of i. In other words, the rows of M^∞ will be identical, giving the long-term equilibrium outcome probabilities of the iterated prisoner's dilemma without the need to explicitly evaluate a large number of interactions. It can be seen that v is a stationary vector for M^n and particularly M^∞, so that each row of M^∞ will be equal to v. Thus, the stationary vector specifies the equilibrium outcome probabilities for X. Defining S_x = {R, S, T, P} and S_y = {R, T, S, P} as the short-term payoff vectors for the {cc, cd, dc, dd} outcomes (from X's point of view), the equilibrium payoffs for X and Y can now be specified as s_x = v·S_x and s_y = v·S_y, allowing the two strategies P and Q to be compared for their long-term payoffs.
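Numerically, v can be obtained by solving v·M = v together with the normalization constraint, after which the long-run payoffs are simple dot products. The sketch below does this with numpy, repeating the transition-matrix construction from the previous sketch; the strategy and payoff values in the example are arbitrary, and the probabilities are kept strictly between 0 and 1 so that the stationary vector is unique.

import numpy as np

def transition_matrix(p, q):
    qy = (q[0], q[2], q[1], q[3])
    return np.array([[p[i] * qy[i], p[i] * (1 - qy[i]),
                      (1 - p[i]) * qy[i], (1 - p[i]) * (1 - qy[i])]
                     for i in range(4)])

def stationary_vector(M):
    # Solve v (M - I) = 0 subject to the components of v summing to one.
    A = np.vstack([M.T - np.eye(4), np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def long_run_payoffs(p, q, R=3, S=0, T=5, P=1):
    v = stationary_vector(transition_matrix(p, q))
    S_x = np.array([R, S, T, P])   # X's payoffs in states (cc, cd, dc, dd)
    S_y = np.array([R, T, S, P])
    return v @ S_x, v @ S_y

s_x, s_y = long_run_payoffs((0.9, 0.1, 0.9, 0.1), (0.8, 0.4, 0.6, 0.2))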
==== Zero-determinant strategies ====
In 2012, William H. Press and Freeman Dyson published a new class of strategies for the stochastic iterated prisoner's dilemma called "zero-determinant" (ZD) strategies. The long term payoffs for encounters between X and Y can be expressed as the determinant of a matrix which is a function of the two strategies and the short term payoff vectors:
s_x = D(P, Q, S_x) and s_y = D(P, Q, S_y), which do not involve the stationary vector v. Since the determinant function s_y = D(P, Q, f) is linear in f, it follows that α s_x + β s_y + γ = D(P, Q, α S_x + β S_y + γ U), where U = {1, 1, 1, 1}. Any strategies for which D(P, Q, α S_x + β S_y + γ U) = 0 are by definition ZD strategies, and the long-term payoffs obey the relation α s_x + β s_y + γ = 0.
Tit-for-tat is a ZD strategy which is "fair", in the sense of not gaining advantage over the other player. But the ZD space also contains strategies that, in the case of two players, can allow one player to unilaterally set the other player's score or alternatively force an evolutionary player to achieve a payoff some percentage lower than his own. The extorted player could defect, but would thereby hurt himself by getting a lower payoff. Thus, extortion solutions turn the iterated prisoner's dilemma into a sort of ultimatum game. Specifically, X is able to choose a strategy for which
D(P, Q, β S_y + γ U) = 0, unilaterally setting s_y to a specific value within a particular range of values, independent of Y's strategy, offering an opportunity for X to "extort" player Y (and vice versa). But if X tries to set s_x to a particular value, the range of possibilities is much smaller, consisting only of complete cooperation or complete defection.
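Following Press and Dyson's construction, the defining condition can also be used constructively: choosing the strategy so that (p_1 − 1, p_2 − 1, p_3, p_4) = α S_x + β S_y + γ U makes the determinant vanish by construction, which forces α s_x + β s_y + γ = 0 whatever the opponent does. The sketch below builds an extortion strategy with factor χ = 3 for the conventional payoffs (T, R, P, S) = (5, 3, 1, 0) this way and checks the enforced relation numerically; the scale factor φ is an assumption chosen only to keep the entries valid probabilities.

import numpy as np

R_, S_, T_, P_ = 3, 0, 5, 1
S_x = np.array([R_, S_, T_, P_])   # X's payoffs in states (cc, cd, dc, dd)
S_y = np.array([R_, T_, S_, P_])

def zd_strategy(alpha, beta, gamma):
    # (p1 - 1, p2 - 1, p3, p4) = alpha*S_x + beta*S_y + gamma*U
    p = alpha * S_x + beta * S_y + gamma + np.array([1.0, 1.0, 0.0, 0.0])
    assert np.all((p >= 0) & (p <= 1)), "coefficients give invalid probabilities"
    return p

def long_run_payoffs(p, q):
    # Stationary-vector payoff computation, as in the earlier sketch.
    qy = np.array([q[0], q[2], q[1], q[3]])
    M = np.array([[p[i] * qy[i], p[i] * (1 - qy[i]),
                   (1 - p[i]) * qy[i], (1 - p[i]) * (1 - qy[i])]
                  for i in range(4)])
    A = np.vstack([M.T - np.eye(4), np.ones(4)])
    v = np.linalg.lstsq(A, np.array([0, 0, 0, 0, 1.0]), rcond=None)[0]
    return v @ S_x, v @ S_y

# Extortion s_x - P = chi * (s_y - P): alpha = phi, beta = -phi * chi,
# gamma = phi * (chi - 1) * P.
phi, chi = 1 / 26, 3
p = zd_strategy(phi, -phi * chi, phi * (chi - 1) * P_)  # -> (11/13, 1/2, 7/26, 0)

q = np.random.rand(4)                    # an arbitrary opponent
s_x, s_y = long_run_payoffs(p, q)
assert abs((s_x - P_) - chi * (s_y - P_)) < 1e-6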
An extension of the iterated prisoner's dilemma is an evolutionary stochastic iterated prisoner's dilemma, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly because they reduce each other's surplus).
Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is larger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.
While extortionary ZD strategies are not stable in large populations, another ZD class called "generous" strategies is both stable and robust. When the population is not too small, these strategies can supplant any other ZD strategy and even perform well against a broad array of generic strategies for iterated prisoner's dilemma, including win–stay, lose–switch. This was proven specifically for the donation game by Alexander Stewart and Joshua Plotkin in 2013. Generous strategies will cooperate with other cooperative players, and in the face of defection, the generous player loses more utility than its rival. Generous strategies are the intersection of ZD strategies and so-called "good" strategies, which were defined by Ethan Akin to be those for which the player responds to past mutual cooperation with future cooperation and splits expected payoffs equally if he receives at least the cooperative expected payoff. Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate.
=== Continuous iterated prisoner's dilemma ===
Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. In a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit-for-tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit-for-tat-like cooperation are extremely rare even though tit-for-tat seems robust in theoretical models.
== Real-life examples ==
Many instances of human interaction and natural processes have payoff matrices like the prisoner's dilemma's. It is therefore of interest to the social sciences, such as economics, politics, and sociology, as well as to the biological sciences, such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma.
=== Environmental studies ===
In environmental studies, the dilemma is evident in crises such as global climate change. It is argued that all countries will benefit from a stable climate, but any single country is often hesitant to curb CO2 emissions. The immediate benefit to any one country from maintaining current behavior is perceived to be greater than the purported eventual benefit to that country if all countries' behavior was changed, therefore explaining the impasse concerning climate change as of 2007.
An important difference between climate-change politics and the prisoner's dilemma is uncertainty; the extent and pace at which pollution can change climate is not known. The dilemma faced by governments is therefore different from the prisoner's dilemma in that the payoffs of cooperation are unknown. This difference suggests that states will cooperate much less than in a real iterated prisoner's dilemma, so that the probability of avoiding a possible climate catastrophe is much smaller than that suggested by a game-theoretical analysis of the situation using a real iterated prisoner's dilemma.
Thomas Osang and Arundhati Nandy provide a theoretical explanation with proofs for a regulation-driven win-win situation along the lines of Michael Porter's hypothesis, in which government regulation of competing firms is substantial.
=== Animals ===
Cooperative behavior of many animals can be understood as an example of the iterated prisoner's dilemma. Often animals engage in long-term partnerships; for example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors.
Vampire bats are social animals that engage in reciprocal food exchange. Applying the payoffs from the prisoner's dilemma can help explain this behavior.
=== Psychology ===
In addiction research and behavioral economics, George Ainslie points out that addiction can be cast as an intertemporal prisoner's dilemma problem between the present and future selves of the addict. In this case, "defecting" means relapsing, where not relapsing both today and in the future is by far the best outcome. The case where one abstains today but relapses in the future is the worst outcome: in some sense, the discipline and self-sacrifice involved in abstaining today have been "wasted" because the future relapse means that the addict is right back where they started and will have to start over. Relapsing today and tomorrow is a slightly "better" outcome, because while the addict is still addicted, they haven't put in the effort to try to stop. The final case, where one engages in the addictive behavior today while abstaining tomorrow, has the problem that (as in other prisoner's dilemmas) there is an obvious benefit to defecting "today", but tomorrow one will face the same prisoner's dilemma, and the same obvious benefit will be present then, ultimately leading to an endless string of defections.
In The Science of Trust, John Gottman defines good relationships as those where partners know not to enter into mutual defection behavior, or at least not to get dynamically stuck there in a loop. In cognitive neuroscience, fast brain signaling associated with processing different rounds may indicate choices at the next round. Mutual cooperation outcomes entail brain activity changes predictive of how quickly a person will cooperate in kind at the next opportunity; this activity may be linked to basic homeostatic and motivational processes, possibly increasing the likelihood of short-cutting into mutual cooperation.
=== Economics ===
The prisoner's dilemma has been called the E. coli of social psychology, and it has been used widely to research various topics such as oligopolistic competition and collective action to produce a collective good.
Advertising is sometimes cited as a real example of the prisoner's dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The effectiveness of Firm A's advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period, then the advertisement from each firm negates the other's, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses there is no dominant strategy, which makes it slightly different from a prisoner's dilemma. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium.
Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry.
Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoner's dilemma. "Cooperating" typically means agreeing to a price floor, while "defecting" means selling under this minimum level, instantly taking business from other cartel members. Anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.
=== Sport ===
Doping in sport has been cited as an example of a prisoner's dilemma. Two competing athletes have the option to use an illegal and/or dangerous drug to boost their performance. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over the competitor, reduced by the legal and/or medical dangers of having taken the drug. But if both athletes take the drug, the benefits cancel out and only the dangers remain, putting them both in a worse position than if neither had doped.
=== International politics ===
In international relations theory, the prisoner's dilemma is often used to demonstrate why cooperation fails in situations where cooperation between states is collectively optimal but individually suboptimal. A classic example is the security dilemma, whereby an increase in one state's security (such as increasing its military strength) leads other states to fear for their own security, anticipating offensive action. Consequently, security-increasing measures can lead to tensions, escalation or conflict with one or more other parties, producing an outcome which no party truly desires. The security dilemma is particularly intense in situations where it is hard to distinguish offensive weapons from defensive weapons, and offense has the advantage in any conflict over defense.
The prisoner's dilemma has frequently been used by realist international relations theorists to demonstrate why all states (regardless of their internal policies or professed ideology) under international anarchy will struggle to cooperate with one another even when all benefit from such cooperation.
Critics of realism argue that iteration and extending the shadow of the future are solutions to the prisoner's dilemma. When actors play the prisoner's dilemma once, they have incentives to defect, but when they expect to play it repeatedly, they have greater incentives to cooperate.
=== Multiplayer dilemmas ===
Many real-life dilemmas involve multiple players. Although metaphorical, Garrett Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the prisoner's dilemma: each villager makes a choice for personal gain or restraint. The collective reward for unanimous or frequent defection is very low payoffs and the destruction of the commons.
The commons are not always exploited: William Poundstone, in a book about the prisoner's dilemma, describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for people to take a paper without paying (defecting), but very few do, feeling that if they do not pay then neither will others, destroying the system. Subsequent research by Elinor Ostrom, winner of the 2009 Nobel Memorial Prize in Economic Sciences, hypothesized that the tragedy of the commons is oversimplified, with the negative outcome driven by outside influences. Without complicating pressures, groups communicate and manage the commons among themselves for their mutual benefit, enforcing social norms to preserve the resource and achieve the maximum good for the group, an example of effecting the best-case outcome for the prisoner's dilemma.
=== Academic settings ===
The prisoner's dilemma has been used in various academic settings to illustrate the complexities of cooperation and competition. One notable example is the classroom experiment conducted by sociology professor Dan Chambliss at Hamilton College in the 1980s. Starting in 1981, Chambliss proposed that if no student took the final exam, everyone would receive an A, but if even one student took it, those who didn't would receive a zero. In 1988, John Werner, a first-year student, successfully organized his classmates to boycott the exam, demonstrating a practical application of game theory and the prisoner's dilemma concept.
Nearly 25 years later, a similar incident occurred at Johns Hopkins University in 2013. Professor Peter Fröhlich's grading policy scaled final exams according to the highest score, meaning that if everyone received the same score, they would all get an A. Students in Fröhlich's classes organized a boycott of the final exam, ensuring that no one took it. As a result, every student received an A, successfully solving the prisoner's dilemma in a mutually optimal way without iteration. These examples highlight how the prisoner's dilemma can be used to explore cooperative behavior and strategic decision-making in educational contexts.
== Related games ==
=== Closed-bag exchange ===
Douglas Hofstadter suggested that people often find a problem such as the prisoner's dilemma easier to understand when it is illustrated in the form of a simple game or trade-off. One of several examples he used was "closed bag exchange":
Two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or she agreed, or he or she can defect by handing over an empty bag.
=== Friend or Foe? ===
Friend or Foe? is a game show that aired from 2002 to 2003 on the Game Show Network in the US. On the game show, three pairs of people compete. When a pair is eliminated, they play a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the other defects (Foe), the defector gets all the winnings, and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the reward matrix is slightly different from the standard one given above, as the rewards for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If a contestant knows that their opponent is going to vote "Foe", then their own choice does not affect their own winnings. In a specific sense, Friend or Foe has a rewards model between prisoner's dilemma and the game of Chicken.
This is the rewards matrix (each cell shows the shares of the winnings for the first and second contestant):

                        Second plays "Friend"    Second plays "Foe"
First plays "Friend"    50%, 50%                 0%, 100%
First plays "Foe"       100%, 0%                 0%, 0%
This payoff matrix has also been used on the British television programs Trust Me, Shafted, The Bank Job and Golden Balls, and on the American game show Take It All, as well as for the winning couple on the reality shows Bachelor Pad and Love Island. Game data from the Golden Balls series has been analyzed by a team of economists, who found that cooperation was "surprisingly high" for amounts of money that would seem consequential in the real world but were comparatively low in the context of the game.
=== Iterated snowdrift ===
Researchers from the University of Lausanne and the University of Edinburgh have suggested that the "Iterated Snowdrift Game" may more closely reflect real-world social situations, although this model is actually a chicken game. In this model, the risk of being exploited through defection is lower, and individuals always gain from taking the cooperative choice. The snowdrift game imagines two drivers who are stuck on opposite sides of a snowdrift, each of whom is given the option of shoveling snow to clear a path or remaining in their car. A player's highest payoff comes from leaving the opponent to clear all the snow by themselves, but the opponent is still nominally rewarded for their work.
This may better reflect real-world scenarios, the researchers giving the example of two scientists collaborating on a report, both of whom would benefit if the other worked harder. "But when your collaborator doesn't do any work, it's probably better for you to do all the work yourself. You'll still end up with a completed project."
=== Coordination games ===
In coordination games, players must coordinate their strategies for a good outcome. An example is two cars that abruptly meet in a blizzard; each must choose whether to swerve left or right. If both swerve left, or both right, the cars do not collide. The local left- and right-hand traffic convention helps to coordinate their actions.
Symmetrical co-ordination games include Stag hunt and Bach or Stravinsky.
=== Asymmetric prisoner's dilemmas ===
A more general set of games is asymmetric. As in the prisoner's dilemma, the best outcome is cooperation, and there are motives for defection. Unlike the symmetric prisoner's dilemma, though, one player has more to lose and/or more to gain than the other. Some such games have been described as a prisoner's dilemma in which one prisoner has an alibi, hence the term "alibi game".
In experiments, players getting unequal payoffs in repeated games may seek to maximize profits, but only under the condition that both players receive equal payoffs; this may lead to a stable equilibrium strategy in which the disadvantaged player defects every X games, while the other always cooperates. Such behavior may depend on the experiment's social norms around fairness.
== Software ==
Several software packages have been created to run simulations and tournaments of the prisoner's dilemma, some of which have their source code available:
The source code for the second tournament run by Robert Axelrod (written by Axelrod and many contributors in Fortran)
Prison, a library written in Java, last updated in 1998
Axelrod-Python, written in Python
Evoplex, a fast agent-based modeling program released in 2018 by Marcos Cardinot
== In fiction ==
Hannu Rajaniemi set the opening scene of his The Quantum Thief trilogy in a "dilemma prison". The main theme of the series has been described as the "inadequacy of a binary universe" and the ultimate antagonist is a character called the All-Defector. The first book in the series was published in 2010, with the two sequels, The Fractal Prince and The Causal Angel, published in 2012 and 2014, respectively.
A game modeled after the iterated prisoner's dilemma is a central focus of the 2012 video game Zero Escape: Virtue's Last Reward and a minor part in its 2016 sequel Zero Escape: Zero Time Dilemma.
In The Mysterious Benedict Society and the Prisoner's Dilemma by Trenton Lee Stewart, the main characters start by playing a version of the game and escaping from the "prison" altogether. Later, they become actual prisoners and escape once again.
In The Adventure Zone: Balance during The Suffering Game subarc, the player characters are twice presented with the prisoner's dilemma during their time in two liches' domain, once cooperating and once defecting.
In the eighth novel from the author James S. A. Corey, Tiamat's Wrath, Winston Duarte explains the prisoner's dilemma to his 14-year-old daughter, Teresa, to train her in strategic thinking.
The 2008 film The Dark Knight includes a scene loosely based on the problem in which the Joker rigs two ferries, one containing prisoners and the other containing civilians, arming both groups with the means to detonate the bomb on each other's ferries, threatening to detonate them both if they hesitate.
== In reality TV ==
In episode 9 of the second series of the Australian reality TV show The Traitors, three "traitors" reached the End Game. They participated in the "Traitor's Dilemma": they were given a choice to Share or Steal the prize pot. If all three traitors chose to share, then each would receive a third of the pot. If one or two traitors chose to steal and one or two chose to share, then those who shared would win nothing and the pot would be divided among those who stole. If all three chose to steal, then no one would win anything.
== In moral philosophy ==
The prisoner's dilemma is commonly used as a thinking tool in moral philosophy as an illustration of the potential tension between the benefit of the individual and the benefit of the community.
Both the one-shot and the iterated prisoner's dilemma have applications in moral philosophy. Indeed, many moral situations, such as genocide, cannot easily be repeated more than once. Moreover, in many situations, the previous rounds' outcomes are unknown to the players, since they are not necessarily the same (e.g. interaction with a panhandler on the street).
The philosopher David Gauthier uses the prisoner's dilemma to show how morality and rationality can conflict.
Some game theorists have criticized the use of the prisoner's dilemma as a thinking tool in moral philosophy. Kenneth Binmore argued that the prisoner's dilemma does not accurately describe the game played by humanity, which he argues is closer to a coordination game. Brian Skyrms shares this perspective.
Steven Kuhn suggests that these views may be reconciled by considering that moral behavior can modify the payoff matrix of a game, transforming it from a prisoner's dilemma into other games.
=== Pure and impure prisoner's dilemma ===
A prisoner's dilemma is considered "impure" if a mixed strategy may give better expected payoffs than a pure strategy. This creates the interesting possibility that the moral action from a utilitarian perspective (i.e., aiming at maximizing the good of an action) may require randomization of one's strategy, such as cooperating with 80% chance and defecting with 20% chance.
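The arithmetic behind this is just an expectation over the four outcomes. The sketch below computes expected payoffs when each player independently cooperates with some probability, using the conventional (T, R, P, S) = (5, 3, 1, 0) values as an illustrative assumption; raising T to 10 makes the game impure in the sense just described.

def expected_payoffs(pa, pb, R=3, S=0, T=5, P=1):
    """pa, pb: probabilities that players A and B cooperate."""
    e_a = pa * pb * R + pa * (1 - pb) * S + (1 - pa) * pb * T + (1 - pa) * (1 - pb) * P
    e_b = pa * pb * R + pa * (1 - pb) * T + (1 - pa) * pb * S + (1 - pa) * (1 - pb) * P
    return e_a, e_b

# With the usual values (2R > T + S), no mixture beats mutual cooperation
# in total payoff; with T = 10 (T + S > 2R), mixing pays overall:
print(sum(expected_payoffs(0.8, 0.8)))        # 5.52, below the 6.0 of mutual cooperation
print(sum(expected_payoffs(0.8, 0.8, T=10)))  # 7.12, above the 6.0 of mutual cooperation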
== See also ==
== Notes ==
== References ==
=== Bibliography ===
Poundstone, William (1993). Prisoner's Dilemma (1st Anchor Books ed.). New York: Anchor. ISBN 0-385-41580-X.
== Further reading ==
== External links ==
Media related to Prisoner's dilemma at Wikimedia Commons
The Bowerbird's Dilemma – The Prisoner's Dilemma in ornithology – mathematical cartoon by Larry Gonick.
Dixit, Avinash; Nalebuff, Barry (2008). "Prisoner's Dilemma". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267.
Dawkins: Nice Guys Finish First
Axelrod Iterated Prisoner's Dilemma Python library
Play Prisoner's Dilemma on oTree (N/A 11-5-17)
Nicky Case's Evolution of Trust, an example of the donation game
Iterated Prisoner's Dilemma online game by Wayne Davis
What The Prisoner's Dilemma Reveals About Life, The Universe, and Everything by Veritasium
The Internal Family Systems Model (IFS) is an integrative approach to individual psychotherapy developed by Richard C. Schwartz in the 1980s. It combines systems thinking with the view that the mind is made up of relatively discrete subpersonalities, each with its own unique viewpoint and qualities. IFS uses systems psychology, particularly as developed for family therapy, to understand how these collections of subpersonalities are organized.
== Parts ==
IFS posits that the mind is made up of multiple parts, and underlying them is a person's core or true Self. Like members of a family, a person's inner parts can take on extreme roles or subpersonalities. Each part has its own perspective, interests, memories, and viewpoint. A core tenet of IFS is that every part has a positive intent, even if its actions are counterproductive or cause dysfunction. There is no need to fight with, coerce, or eliminate parts; the IFS method promotes internal connection and harmony to bring the mind back into balance.
IFS therapy aims to heal wounded parts and restore mental balance. The first step is to access the core Self and then, from there, understand the different parts in order to heal them.
In the IFS model, there are three general types of parts:
Exiles represent psychological trauma, often from childhood, and they carry the pain and fear. Exiles may become isolated from the other parts and polarize the system. Managers and Firefighters try to protect a person's consciousness by preventing the Exiles' pain from coming to awareness.
Managers take on a preemptive, protective role. They influence the way a person interacts with the external world, protecting the person from harm and preventing painful or traumatic experiences from flooding the person's conscious awareness.
Firefighters emerge when Exiles break out and demand attention. They work to divert attention away from the Exile's hurt and shame, which leads to impulsive and/or inappropriate behaviors like overeating, drug use, and/or violence. They can also distract a person from pain by excessively focusing attention on more subtle activities such as overworking or overmedicating.
== The internal system ==
IFS focuses on the relationships between parts and the core Self. The goal of therapy is to create a cooperative and trusting relationship between the Self and each part.
There are three primary types of relationships between parts: protection, polarization, and alliance.
Protection is provided by Managers and Firefighters. They intend to spare Exiles from harm and protect the individual from the Exile's pain.
Polarization occurs between two parts that battle each other to determine how a person feels or behaves in a certain situation. Each part believes that it must act as it does in order to counter the extreme behavior of the other part. IFS has a method for working with polarized parts.
Alliance is formed between two different parts if they're working together to accomplish the same goal.
== IFS method ==
IFS practitioners report a well-defined therapeutic method for individual therapy based on the following principles. In this description, the term "protector" refers to either a manager or firefighter.
Parts in extreme roles carry "burdens": painful emotions or negative beliefs they have taken on as a result of past harmful experiences, often in childhood. These burdens are not intrinsic to the part and therefore they can be released or "unburdened" through IFS therapy, allowing the part to assume its natural healthy role.
The Self is the agent of psychological healing. Therapists help their clients to access and remain in Self, providing guidance along the way.
Protectors usually can't let go of their protective roles and transform until the Exiles they are protecting have been unburdened.
There is no attempt to work with Exiles until the client has obtained permission from the Protectors who are protecting them. This allegedly makes the method relatively safe, even in working with traumatized parts.
The Self is the natural leader of the internal system. However, because of past harmful incidents or relationships, Protectors have stepped in and taken over for the Self. One Protector after another is activated and takes the lead, causing dysfunctional behavior. Protectors are also frequently in conflict with each other, resulting in internal chaos or stagnation. The aim is for the Protectors to trust the Self and allow it to lead the system, creating internal harmony under its guidance.
The first step is to help the client access the Self. Next, the Self gets to know the Protector(s), its positive intent, and develops a trusting relationship with it. Then, with the Protector's permission, the client accesses the Exile(s) to uncover the childhood incident or relationship that is the source of the burden(s) it carries. The Exile is retrieved from the past situation and guided to release its burdens. Finally, the Protector can then let go of its protective role and assume a healthy one.
== Critiques ==
Therapists Sharon A. Deacon and Jonathan C. Davis suggested that working with one's parts may "be emotional and anxiety-provoking for clients", and that IFS may not work well with delusional, paranoid, or schizophrenic clients who may not be grounded in reality and therefore misuse the idea of "parts".
== See also ==
Dissociation (psychology)
Ego-state therapy
Family therapy
Inner Relationship Focusing
Family Constellations
Intrapersonal communication
Inner Team
Inside Out (2015 film)
== References ==
== Further reading ==
=== Books ===
=== Peer-reviewed articles ===
== External links ==
Official website
Population dynamics is the type of mathematics used to model and study the size and age composition of populations as dynamical systems. Population dynamics is a branch of mathematical biology, and uses mathematical techniques such as differential equations to model behaviour. Population dynamics is also closely related to other mathematical biology fields such as epidemiology, and also uses techniques from evolutionary game theory in its modelling.
== History ==
Population dynamics has traditionally been the dominant branch of mathematical biology, which has a history of more than 220 years, although over the last century the scope of mathematical biology has greatly expanded.
The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model.
A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.
== Logistic function ==
Simplified population models usually start with four key variables (four demographic processes) including death, birth, immigration, and emigration. Mathematical models used to calculate changes in population demographics and evolution hold the assumption of no external influence. Models can be more mathematically complex where "...several competing hypotheses are simultaneously confronted with the data." For example, in a closed system where immigration and emigration does not take place, the rate of change in the number of individuals in a population can be described as:
{\displaystyle {\mathrm {d} N \over \mathrm {d} t}=B-D=bN-dN=(b-d)N=rN,}
where N is the total number of individuals in the specific experimental population being studied, B is the number of births and D is the number of deaths per individual in a particular experiment or model. The algebraic symbols b, d and r stand for the rates of birth, death, and the rate of change per individual in the general population, the intrinsic rate of increase. This formula can be read as the rate of change in the population (dN/dt) is equal to births minus deaths (B − D).
Using these techniques, Malthus' population principle of growth was later transformed into a mathematical model known as the logistic equation:
{\displaystyle {\mathrm {d} N \over \mathrm {d} t}=rN\left(1-{N \over K}\right),}
where N is the population size, r is the intrinsic rate of natural increase, and K is the carrying capacity of the population. The formula can be read as follows: the rate of change in the population (dN/dt) is equal to growth (rN) that is limited by carrying capacity (1 − N/K). From these basic mathematical principles the discipline of population ecology expands into a field of investigation that queries the demographics of real populations and tests these results against the statistical models. The field of population ecology often uses data on life history and matrix algebra to develop projection matrices on fecundity and survivorship. This information is used for managing wildlife stocks and setting harvest quotas.
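A minimal sketch of the logistic equation above, integrated with a simple Euler step; the parameter values are illustrative only, not taken from the text.

```python
# Integrate dN/dt = rN(1 - N/K) forward in time with Euler steps.

def logistic_growth(n0, r, k, dt=0.01, steps=4000):
    """Return the population trajectory under the logistic equation."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt  # dN/dt = rN(1 - N/K)
        trajectory.append(n)
    return trajectory

# A population starting at 10 with r = 0.5 approaches carrying capacity K.
print(logistic_growth(10, 0.5, 1000)[-1])  # ≈ 1000 (the carrying capacity)
```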
== Intrinsic rate of increase ==
The rate at which a population increases in size if there are no density-dependent forces regulating the population is known as the intrinsic rate of increase. It is
{\displaystyle {\mathrm {d} N \over \mathrm {d} t}=rN}
where the derivative dN/dt is the rate of increase of the population, N is the population size, and r is the intrinsic rate of increase. Thus r is the maximum theoretical rate of increase of a population per individual – that is, the maximum population growth rate. The concept is commonly used in insect population ecology or management to determine how environmental factors affect the rate at which pest populations increase. See also exponential population growth and logistic population growth.
== Epidemiology ==
Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of viral spread have been proposed and analysed, and provide important results that may be applied to health policy decisions.
== Geometric populations ==
The mathematical formula below is used to model geometric populations. Such populations grow in discrete reproductive periods between intervals of abstinence, as opposed to populations which grow without designated periods for reproduction. Say that the natural number t is the index of the generation (t = 0 for the first generation, t = 1 for the second generation, etc.). The letter t is used because the index of a generation is time. Say Nt denotes, at generation t, the number of individuals of the population that will reproduce, i.e. the population size at generation t. The population at the next generation, which is the population at time t + 1, is:
{\displaystyle N_{t+1}=N_{t}+B_{t}-D_{t}+I_{t}-E_{t}}
where
Bt is the number of births in the population between generations t and t + 1,
Dt is the number of deaths between generations t and t + 1,
It is the number of immigrants added to the population between generations t and t + 1, and
Et is the number of emigrants moving out of the population between generations t and t + 1.
For the sake of simplicity, we suppose there is no migration to or from the population, but the following method can be applied without this assumption. Mathematically, it means that for all t, It = Et = 0. The previous equation becomes:
{\displaystyle N_{t+1}=N_{t}+B_{t}-D_{t}.}
In general, the number of births and the number of deaths are approximately proportional to the population size. This remark motivates the following definitions.
The birth rate at time t is defined by bt = Bt / Nt.
The death rate at time t is defined by dt = Dt / Nt.
The previous equation can then be rewritten as:
{\displaystyle N_{t+1}=(1+b_{t}-d_{t})N_{t}.}
Then, we assume the birth and death rates do not depend on the time t (which is equivalent to assuming that the numbers of births and deaths are effectively proportional to the population size). This is the core assumption for geometric populations, because with it we obtain a geometric sequence. We then define the geometric rate of increase R = bt − dt to be the birth rate minus the death rate. The geometric rate of increase does not depend on time t, because under our assumption neither the birth rate nor the death rate does. We obtain:
{\displaystyle {\begin{aligned}N_{t+1}&=\left(1+R\right)N_{t}.\end{aligned}}}
This equation means that the sequence (Nt) is geometric with first term N0 and common ratio 1 + R, which we define to be λ. λ is also called the finite rate of increase.
Therefore, by induction, we obtain the expression of the population size at time t:
{\displaystyle N_{t}=\lambda ^{t}N_{0}}
where λt is the finite rate of increase raised to the power of the number of generations.
This last expression is more convenient than the previous one, because it is explicit. For example, say one wants to calculate with a calculator N10, the population at the tenth generation, knowing N0 the initial population and λ the finite rate of increase. With the last formula, the result is immediate by plugging in t = 10, whereas with the previous one it is necessary to know N9, N8, ..., N2 and N1.
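A minimal sketch of the explicit formula Nt = λ^t N0; the values are illustrative, not taken from the text.

```python
# Population size at generation t, given initial size n0 and
# finite rate of increase lam (λ = 1 + R).

def geometric_population(n0, lam, t):
    return n0 * lam ** t

# N10 from N0 = 100 and λ = 1.05, computed directly rather than
# stepping through N1, ..., N9:
print(geometric_population(100, 1.05, 10))  # ≈ 162.9
```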
We can identify three cases:
If λ > 1, i.e. if R > 0, i.e. (with the assumption that both birth and death rate do not depend on time t) if b0 > d0, i.e. if the birth rate is strictly greater than the death rate, then the population size is increasing and tends to infinity. Of course, in real life, a population cannot grow indefinitely: at some point the population lacks resources and so the death rate increases, which invalidates our core assumption because the death rate now depends on time.
If λ < 1, i.e. if R < 0, i.e. (with the assumption that both birth and death rate do not depend on time t) if b0 < d0, i.e. if the birth rate is strictly smaller than the death rate, then the population size is decreasing and tends to 0.
If λ = 1, i.e. if R = 0, i.e. (with the assumption that both birth and death rate do not depend on time t) if b0 = d0, i.e. if the birth rate is equal to the death rate, then the population size is constant, equal to the initial population N0.
=== Doubling time ===
The doubling time (td) of a population is the time required for the population to grow to twice its size. We can calculate the doubling time of a geometric population using the equation: Nt = λt N0 by exploiting our knowledge of the fact that the population (N) is twice its size (2N) after the doubling time.
{\displaystyle {\begin{aligned}N_{t_{d}}&=\lambda ^{t_{d}}N_{0}\\2N_{0}&=\lambda ^{t_{d}}N_{0}\\\lambda ^{t_{d}}&=2\end{aligned}}}
The doubling time can be found by taking logarithms. For instance:
{\displaystyle t_{d}\log _{2}(\lambda )=\log _{2}(2)=1\implies t_{d}={1 \over \log _{2}(\lambda )}}
Or:
{\displaystyle t_{d}\ln(\lambda )=\ln(2)\implies t_{d}={\ln(2) \over \ln(\lambda )}}
Therefore:
{\displaystyle t_{d}={\frac {1}{\log _{2}(\lambda )}}={\frac {0.693...}{\ln(\lambda )}}}
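A minimal sketch of the doubling-time formula just derived; λ = 1.05 is an illustrative value.

```python
import math

# t_d = ln(2) / ln(λ): generations needed for a geometric population
# to double (requires λ > 1).

def doubling_time(lam):
    return math.log(2) / math.log(lam)

print(doubling_time(1.05))  # ≈ 14.2 generations
```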
=== Half-life of geometric populations ===
The half-life of a population is the time taken for the population to decline to half its size. We can calculate the half-life of a geometric population using the equation: Nt = λt N0 by exploiting our knowledge of the fact that the population (N) is half its size (0.5N) after a half-life.
{\displaystyle N_{t_{1/2}}=\lambda ^{t_{1/2}}N_{0}\implies {\frac {1}{2}}N_{0}=\lambda ^{t_{1/2}}N_{0}\implies \lambda ^{t_{1/2}}={\frac {1}{2}}}
where t1/2 is the half-life.
The half-life can be calculated by taking logarithms (see above).
{\displaystyle t_{1/2}={1 \over \log _{0.5}(\lambda )}=-{\ln(2) \over \ln(\lambda )}}
Note that as the population is assumed to decline, λ < 1, so ln(λ) < 0.
=== Mathematical relationship between geometric and logistic populations ===
In geometric populations, R and λ represent growth constants (see above). In logistic populations, however, the intrinsic growth rate, also known as the intrinsic rate of increase (r), is the relevant growth constant. Since generations of reproduction in a geometric population do not overlap (e.g. reproduce once a year) but do in an exponential population, geometric and exponential populations are usually considered to be mutually exclusive. However, both sets of constants share the mathematical relationship below.
The growth equation for exponential populations is
{\displaystyle N_{t}=N_{0}e^{rt}}
where e is Euler's number, a universal constant often applicable in logistic equations, and r is the intrinsic growth rate.
To find the relationship between a geometric population and a logistic population, we assume that Nt is the same for both models, and we expand to the following equality:
{\displaystyle {\begin{aligned}N_{0}e^{rt}&=N_{0}\lambda ^{t}\\e^{rt}&=\lambda ^{t}\\rt&=t\ln(\lambda )\end{aligned}}}
Giving us
{\displaystyle r=\ln(\lambda )}
and
{\displaystyle \lambda =e^{r}.}
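A minimal numeric check of the relationship r = ln(λ) and λ = e^r, using an illustrative λ.

```python
import math

lam = 1.05
r = math.log(lam)                       # r = ln(λ)
assert abs(math.exp(r) - lam) < 1e-12   # λ = e^r recovers the original value
print(r)  # ≈ 0.0488
```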
== Evolutionary game theory ==
Evolutionary game theory was first developed by Ronald Fisher in his 1930 book The Genetical Theory of Natural Selection. In 1973 John Maynard Smith formalised a central concept, the evolutionarily stable strategy.
Population dynamics have been used in several control theory applications. Evolutionary game theory can be used in different industrial or other contexts. Industrially, it is mostly used in multiple-input-multiple-output (MIMO) systems, although it can be adapted for use in single-input-single-output (SISO) systems. Some other examples of applications are military campaigns, water distribution, dispatch of distributed generators, lab experiments, transport problems, communication problems, among others.
== Oscillatory ==
Population sizes in plants oscillate significantly due to annual environmental variation. Plant dynamics experience a higher degree of this seasonality than do mammals, birds, or bivoltine insects. When combined with perturbations due to disease, this often results in chaotic oscillations.
== In popular culture ==
The computer games SimCity and SimEarth and the MMORPG Ultima Online, among others, have tried to simulate some of these population dynamics.
== See also ==
== References ==
== Further reading ==
Andrey Korotayev, Artemy Malkov, and Daria Khaltourina. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. ISBN 5-484-00414-4
Turchin, P. 2003. Complex Population Dynamics: a Theoretical/Empirical Synthesis. Princeton, NJ: Princeton University Press.
Smith, Frederick E. (1952). "Experimental methods in population dynamics: a critique". Ecology. 33 (4): 441–450. Bibcode:1952Ecol...33..441S. doi:10.2307/1931519. JSTOR 1931519.
== External links ==
The Virtual Handbook on Population Dynamics. An online compilation of state-of-the-art basic tools for the analysis of population dynamics with emphasis on benthic invertebrates.
Living systems are life forms (more colloquially, living things) treated as systems. They are said to be open, self-organizing systems that interact with their environment. These systems are maintained by flows of information, energy and matter. Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work.
== Context ==
Some scientists have proposed in the last few decades that a general theory of living systems is required to explain the nature of life. Such a general theory would arise out of the ecological and biological sciences and attempt to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment.
== Theories ==
=== Miller's open systems ===
James Grier Miller's living systems theory is a general theory about the existence of all living systems, their structure, interaction, behavior and development, intended to formalize the concept of life. According to Miller's 1978 book Living Systems, such a system must contain each of twenty "critical subsystems" defined by their functions. Miller considers living systems as a type of system. Below the level of living systems, he defines space and time, matter and energy, information and entropy, levels of organization, and physical and conceptual factors, and above living systems ecological, planetary and solar systems, galaxies, etc. Miller's central thesis is that the multiple levels of living systems (cells, organs, organisms, groups, organizations, societies, supranational systems) are open systems composed of critical and mutually-dependent subsystems that process inputs, throughputs, and outputs of energy and information. Seppänen (1998) says that Miller applied general systems theory on a broad scale to describe all aspects of living systems. Bailey states that Miller's theory is perhaps the "most integrative" social systems theory, clearly distinguishing between matter–energy-processing and information-processing, showing how social systems are linked to biological systems. LST analyzes the irregularities or "organizational pathologies" of systems functioning (e.g., system stress and strain, feedback irregularities, information–input overload). It explicates the role of entropy in social research while it equates negentropy with information and order. It emphasizes both structure and process, as well as their interrelations.
=== Lovelock's Gaia hypothesis ===
The idea that Earth is alive is found in philosophy and religion, but the first scientific discussion of it was by the Scottish geologist James Hutton. In 1785, he stated that Earth was a superorganism and that its proper study should be physiology. The Gaia hypothesis, proposed in the 1960s by James Lovelock, suggests that life on Earth functions as a single organism that defines and maintains environmental conditions necessary for its survival.
=== Morowitz's property of ecosystems ===
A systems view of life treats environmental fluxes and biological fluxes together as a "reciprocity of influence," and a reciprocal relation with environment is arguably as important for understanding life as it is for understanding ecosystems. As Harold J. Morowitz (1992) explains it, life is a property of an ecological system rather than a single organism or species. He argues that an ecosystemic definition of life is preferable to a strictly biochemical or physical one. Robert Ulanowicz (2009) highlights mutualism as the key to understand the systemic, order-generating behaviour of life and ecosystems.
=== Rosen's complex systems biology ===
Robert Rosen devoted a large part of his career, from 1958 onwards, to developing a comprehensive theory of life as a self-organizing complex system, "closed to efficient causation". He defined a system component as "a unit of organization; a part with a function, i.e., a definite relation between part and whole." He identified the "nonfractionability of components in an organism" as the fundamental difference between living systems and "biological machines." He summarised his views in his book Life Itself.
Complex systems biology is a field of science that studies the emergence of complexity in functional organisms from the viewpoint of dynamic systems theory. The latter is also often called systems biology and aims to understand the most fundamental aspects of life. A closely related approach, relational biology, is concerned mainly with understanding life processes in terms of the most important relations, and categories of such relations among the essential functional components of organisms; for multicellular organisms, this has been defined as "categorical biology", or a model representation of organisms as a category theory of biological relations, as well as an algebraic topology of the functional organisation of living organisms in terms of their dynamic, complex networks of metabolic, genetic, and epigenetic processes and signalling pathways. Related approaches focus on the interdependence of constraints, where constraints can be either molecular, such as enzymes, or macroscopic, such as the geometry of a bone or of the vascular system.
=== Bernstein, Byerly and Hopf's Darwinian dynamic ===
Harris Bernstein and colleagues argued in 1983 that the evolution of order in living systems and certain physical systems obeys a common fundamental principle termed the Darwinian dynamic. This was formulated by first considering how macroscopic order is generated in a simple non-biological system far from thermodynamic equilibrium, and then extending consideration to short, replicating RNA molecules. The underlying order-generating process was concluded to be basically similar for both types of systems.
=== Gerard Jagers' operator theory ===
Gerard Jagers' operator theory proposes that life is a general term for the presence of the typical closures found in organisms; the typical closures are a membrane and an autocatalytic set in the cell and that an organism is any system with an organisation that complies with an operator type that is at least as complex as the cell. Life can be modelled as a network of inferior negative feedbacks of regulatory mechanisms subordinated to a superior positive feedback formed by the potential of expansion and reproduction.
=== Kauffman's multi-agent system ===
Stuart Kauffman defines a living system as an autonomous agent or a multi-agent system capable of reproducing itself or themselves, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
=== Budisa, Kubyshkin and Schmidt's four pillars ===
Budisa, Kubyshkin and Schmidt defined cellular life as an organizational unit resting on four pillars/cornerstones: (i) energy, (ii) metabolism, (iii) information and (iv) form. This system is able to regulate and control metabolism and energy supply and contains at least one subsystem that functions as an information carrier (genetic information). Cells as self-sustaining units are parts of different populations that are involved in the unidirectional and irreversible open-ended process known as evolution.
== See also ==
Artificial life – Field of study
Autonomous Agency Theory – viable system theory
Autopoiesis – System capable of producing itself
Biological organization – Hierarchy of complex structures and systems within biological sciences
Biological systems – Complex network which connects several biologically relevant entities
Complex systems – System composed of many interacting components
Earth system science – Scientific study of the Earth's spheres and their natural integrated systems
Extraterrestrial life – Life that does not originate on Earth
Information metabolism – Psychological theory of interaction between biological organisms and their environment
Organism – Individual living life form
Spome – Hypothetical matter-closed, energy-open life support system
Systems biology – Computational and mathematical modeling of complex biological systems
Systems theory – Interdisciplinary study of systems
Viable System Theory – concerns cybernetic processes in relation to the development/evolution of dynamical systems
== References ==
== Further reading ==
Kenneth D. Bailey, (1994). Sociology and the new systems theory: Toward a theoretical synthesis. Albany, NY: SUNY Press.
Kenneth D. Bailey (2006). Living systems theory and social entropy theory. Systems Research and Behavioral Science, 22, 291–300.
James Grier Miller, (1978). Living systems. New York: McGraw-Hill. ISBN 0-87081-363-3
Miller, J.L., & Miller, J.G. (1992). Greater than the sum of its parts: Subsystems which process both matter-energy and information. Behavioral Science, 37, 1–38.
Humberto Maturana (1978), "Biology of language: The epistemology of reality," in Miller, George A., and Elizabeth Lenneberg (eds.), Psychology and Biology of Language and Thought: Essays in Honor of Eric Lenneberg. Academic Press: 27-63.
Jouko Seppänen, (1998). Systems ideology in human and social sciences. In G. Altmann & W.A. Koch (Eds.), Systems: New paradigms for the human sciences (pp. 180–302). Berlin: Walter de Gruyter.
James R. Simms (1999). Principles of Quantitative Living Systems Science. Dordrecht: Kluwer Academic. ISBN 0-306-45979-5
== External links ==
The Living Systems Theory Of James Grier Miller
James Grier Miller, Living Systems: The Basic Concepts (1978)
Network science is an academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes (or vertices) and the connections between the elements or actors as links (or edges). The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."
== Background and history ==
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg written by Leonhard Euler in 1736. Euler's mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry (Sylvester, 1878).
Dénes Kőnig, a Hungarian mathematician and professor, wrote the first book on graph theory, entitled "Theory of finite and infinite graphs", in 1936.
In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that "before the advent of sociometry no one knew what the interpersonal structure of a group 'precisely' looked like". The sociogram was a representation of the social structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls with the exception of one boy who said he liked a single girl. The feeling was not reciprocated. This network representation of social structure was found so intriguing that it was printed in The New York Times. The sociogram has found many applications and has grown into the field of social network analysis.
Probabilistic theory in network science developed as an offshoot of graph theory with Paul Erdős and Alfréd Rényi's eight famous papers on random graphs. For social networks the exponential random graph model or p* is a notational framework used to represent the probability space of a tie occurring in a social network. An alternate approach to network probability structures is the network probability matrix, which models the probability of edges occurring in a network, based on the historic presence or absence of the edge in a sample of networks.
Interest in networks exploded around 2000, following new discoveries that offered a novel mathematical framework to describe different network topologies, leading to the term 'network science'. Albert-László Barabási and Réka Albert discovered the scale-free nature of many real networks, from the WWW to the cell. The scale-free property captures the fact that in real networks hubs coexist with many small-degree vertices, and the authors offered a dynamical model to explain the origin of this scale-free state. Duncan Watts and Steven Strogatz reconciled empirical data on networks with mathematical representation, describing the small-world network.
== Network Classification ==
=== Deterministic Network ===
Deterministic networks are defined in contrast to probabilistic networks. In unweighted deterministic networks, an edge either exists or it does not: 0 usually represents the absence of an edge and 1 its presence. In weighted deterministic networks, the edge value represents the weight of each edge, for example, the strength level.
=== Probabilistic Network ===
In probabilistic networks, the value attached to each edge represents the likelihood that the edge exists. For example, if an edge has a value equal to 0.9, we say its existence probability is 0.9.
== Network properties ==
Often, networks have certain attributes that can be calculated to analyze the properties and characteristics of the network. The behavior of these network properties often defines network models and can be used to analyze how certain models contrast with each other. Many of the definitions for other terms used in network science can be found in Glossary of graph theory.
=== Size ===
The size of a network can refer to the number of nodes N or, less commonly, the number of edges E, which (for connected graphs with no multi-edges) can range from N − 1 (a tree) to Emax (a complete graph). In the case of a simple graph (a network in which at most one (undirected) edge exists between each pair of vertices, and in which no vertices connect to themselves), we have {\displaystyle E_{\max }={\tbinom {N}{2}}=N(N-1)/2}; for directed graphs (with no self-connected nodes), {\displaystyle E_{\max }=N(N-1)}; for directed graphs with self-connections allowed, {\displaystyle E_{\max }=N^{2}}. In the circumstance of a graph within which multiple edges may exist between a pair of vertices, {\displaystyle E_{\max }=\infty }.
=== Density ===
The density D of a network is defined as a normalized ratio between 0 and 1 of the number of edges E to the number of possible edges in a network with N nodes. Network density is a measure of the percentage of "optional" edges that exist in the network and can be computed as
{\displaystyle D={\frac {E-E_{\mathrm {min} }}{E_{\mathrm {max} }-E_{\mathrm {min} }}}}
where Emin and Emax are the minimum and maximum number of edges in a connected network with N nodes, respectively. In the case of simple graphs, Emax is given by the binomial coefficient {\tbinom {N}{2}} and Emin = N − 1, giving density
{\displaystyle D={\frac {E-(N-1)}{E_{\mathrm {max} }-(N-1)}}={\frac {2(E-N+1)}{N(N-3)+2}}}.
Another possible equation is
{\displaystyle D={\frac {T-2N+2}{N(N-3)+2}},}
where the ties T are unidirectional (Wasserman & Faust 1994). This gives a better overview of the network density, because unidirectional relationships can be measured.
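A minimal sketch of the normalized density formula for a connected simple graph; the node and edge counts are illustrative.

```python
# D = (E - E_min) / (E_max - E_min) with E_min = N - 1 (a spanning tree)
# and E_max = N(N-1)/2 (a complete graph), so D is 0 for a tree and 1
# for a complete graph.

def network_density(num_nodes, num_edges):
    e_min = num_nodes - 1
    e_max = num_nodes * (num_nodes - 1) // 2
    return (num_edges - e_min) / (e_max - e_min)

print(network_density(5, 7))  # (7 - 4) / (10 - 4) = 0.5
```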
=== Planar network density ===
The density D of a planar network, where there is no intersection between edges, is defined as the ratio of the number of edges E to the number of possible edges in a network with N nodes, given by a graph with no intersecting edges {\displaystyle (E_{\max }=3N-6)}, giving
{\displaystyle D={\frac {E-N+1}{2N-5}}.}
=== Average degree ===
The degree k of a node is the number of edges connected to it. Closely related to the density of a network is the average degree, {\displaystyle \langle k\rangle ={\tfrac {2E}{N}}} (or, in the case of directed graphs, {\displaystyle \langle k\rangle ={\tfrac {E}{N}}}, the former factor of 2 arising from each edge in an undirected graph contributing to the degree of two distinct vertices). In the ER random graph model G(N, p), we can compute the expected value of ⟨k⟩ (equal to the expected value of k of an arbitrary vertex): a random vertex has N − 1 other vertices in the network available, and with probability p connects to each. Thus,
{\displaystyle \mathbb {E} [\langle k\rangle ]=\mathbb {E} [k]=p(N-1)}.
=== Degree distribution ===
The degree distribution P(k) is a fundamental property of both real networks, such as the Internet and social networks, and of theoretical models. The degree distribution P(k) of a network is defined to be the fraction of nodes in the network with degree k. The simplest network model, for example, the (Erdős–Rényi model) random graph, in which each of n nodes is independently connected (or not) with probability p (or 1 − p), has a binomial distribution of degrees k (or Poisson in the limit of large n). Most real networks, from the WWW to protein interaction networks, however, have a degree distribution that is highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree. For such scale-free networks the degree distribution approximately follows a power law: {\displaystyle P(k)\sim k^{-\gamma }}, where γ is the degree exponent. Such scale-free networks have unexpected structural and dynamical properties, rooted in the diverging second moment of the degree distribution.
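A minimal sketch of computing an empirical degree distribution P(k), the fraction of nodes with degree k; the edge list is a made-up example.

```python
from collections import Counter

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]

degree = Counter()
for u, v in edges:      # each undirected edge contributes to two degrees
    degree[u] += 1
    degree[v] += 1

n = len(degree)
p_k = {k: count / n for k, count in Counter(degree.values()).items()}
print(p_k)  # {3: 0.2, 2: 0.6, 1: 0.2}
```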
=== Average shortest path length (or characteristic path length) ===
The average shortest path length is calculated by finding the shortest path between all pairs of nodes, and taking the average over all paths of the length thereof (the length being the number of intermediate edges contained in the path, i.e., the distance between the two vertices u, v within the graph). This shows us, on average, the number of steps it takes to get from one member of the network to another. The behavior of the expected average shortest path length (that is, the ensemble average of the average shortest path length) as a function of the number of vertices N of a random network model defines whether that model exhibits the small-world effect; if it scales as O(ln N), the model generates small-world nets. For faster-than-logarithmic growth, the model does not produce small worlds. The special case of O(ln ln N) is known as the ultra-small world effect.
=== Diameter of a network ===
As another means of measuring network graphs, we can define the diameter of a network as the longest of all the calculated shortest paths in a network. It is the shortest distance between the two most distant nodes in the network. In other words, once the shortest path length from every node to all other nodes is calculated, the diameter is the longest of all the calculated path lengths. For example, if nodes A-B-C-D are connected in a path, going from A to D gives a diameter of 3 (3 hops, 3 links).
=== Clustering coefficient ===
The clustering coefficient is a measure of an "all-my-friends-know-each-other" property. This is sometimes described as the friends of my friends are my friends. More precisely, the clustering coefficient of a node is the ratio of existing links connecting a node's neighbors to each other to the maximum possible number of such links. The clustering coefficient for the entire network is the average of the clustering coefficients of all the nodes. A high clustering coefficient for a network is another indication of a small world.
The clustering coefficient of the i'th node is
{\displaystyle C_{i}={2e_{i} \over k_{i}{(k_{i}-1)}}\,,}
where ki is the number of neighbours of the i'th node, and ei is the number of connections between these neighbours. The maximum possible number of connections between neighbours is, then,
{\displaystyle {\binom {k}{2}}={{k(k-1)} \over 2}\,.}
From a probabilistic standpoint, the expected local clustering coefficient is the likelihood of a link existing between two arbitrary neighbors of the same node.
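A minimal sketch of the local clustering coefficient Ci = 2ei / (ki(ki − 1)), using an adjacency-set representation; the graph is a made-up example.

```python
def local_clustering(adj, i):
    """Fraction of possible links among node i's neighbours that exist."""
    neighbours = adj[i]
    k = len(neighbours)
    if k < 2:
        return 0.0
    # e_i: count edges between pairs of neighbours (each counted twice)
    e = sum(1 for u in neighbours for v in adj[u] if v in neighbours) // 2
    return 2 * e / (k * (k - 1))

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(local_clustering(adj, 1))  # neighbours {0, 2, 3}; one edge (0, 2) -> 1/3
```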
=== Connectedness ===
The way in which a network is connected plays a large part into how networks are analyzed and interpreted. Networks are classified in four different categories:
Clique/Complete Graph: a completely connected network, where all nodes are connected to every other node. These networks are symmetric in that all nodes have in-links and out-links from all others.
Giant Component: A single connected component which contains most of the nodes in the network.
Weakly Connected Component: A collection of nodes in which there exists a path from any node to any other, ignoring directionality of the edges.
Strongly Connected Component: A collection of nodes in which there exists a directed path from any node to any other.
=== Node centrality ===
Centrality indices produce rankings which seek to identify the most important nodes in a network model. Different centrality indices encode different contexts for the word "importance." The betweenness centrality, for example, considers a node highly important if it forms bridges between many other nodes. The eigenvalue centrality, in contrast, considers a node highly important if many other highly important nodes link to it. Hundreds of such measures have been proposed in the literature.
Centrality indices are only accurate for identifying the most important nodes. The measures are seldom, if ever, meaningful for the remainder of network nodes. Also, their indications are only accurate within their assumed context for importance, and tend to "get it wrong" for other contexts. For example, imagine two separate communities whose only link is an edge between the most junior member of each community. Since any transfer from one community to the other must go over this link, the two junior members will have high betweenness centrality. But, since they are junior, (presumably) they have few connections to the "important" nodes in their community, meaning their eigenvalue centrality would be quite low.
=== Node influence ===
Limitations to centrality measures have led to the development of more general measures. Two examples are the accessibility, which uses the diversity of random walks to measure how accessible the rest of the network is from a given start node, and the expected force, derived from the expected value of the force of infection generated by a node. Both of these measures can be meaningfully computed from the structure of the network alone.
=== Community structure ===
Nodes in a network may be partitioned into groups representing communities. Depending on the context, communities may be distinct or overlapping. Typically, nodes in such communities will be strongly connected to other nodes in the same community, but weakly connected to nodes outside the community. In the absence of a ground truth describing the community structure of a specific network, several algorithms have been developed to infer possible community structures using either supervised or unsupervised clustering methods.
== Network models ==
Network models serve as a foundation to understanding interactions within empirical complex networks. Various random graph generation models produce network structures that may be used in comparison to real-world complex networks.
=== Erdős–Rényi random graph model ===
The Erdős–Rényi model, named for Paul Erdős and Alfréd Rényi, is used for generating random graphs in which edges are set between nodes with equal probabilities. It can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs.
To generate an Erdős–Rényi model {\displaystyle G(n,p)}, two parameters must be specified: the total number of nodes n and the probability p that a random pair of nodes has an edge.
Because the model is generated without bias to particular nodes, the degree distribution is binomial: for a randomly chosen vertex v,
{\displaystyle P(\deg(v)=k)={n-1 \choose k}p^{k}(1-p)^{n-1-k}.}
In this model the clustering coefficient is 0 a.s. The behavior of G(n, p) can be broken into three regions:
Subcritical np < 1: all components are simple and very small; the largest component has size {\displaystyle |C_{1}|=O(\log n)}.
Critical np = 1: {\displaystyle |C_{1}|=O(n^{\frac {2}{3}})}.
Supercritical np > 1: {\displaystyle |C_{1}|\approx yn}, where y = y(np) is the positive solution to the equation {\displaystyle e^{-pny}=1-y}. The largest connected component has high complexity; all other components are simple and small, {\displaystyle |C_{2}|=O(\log n)}.
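A minimal sketch of G(n, p) generation: each of the n(n−1)/2 possible edges is included independently with probability p. The parameter values are illustrative.

```python
import random

def erdos_renyi(n, p, seed=None):
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi(1000, 0.005, seed=1)
print(2 * len(edges) / 1000)  # average degree, close to p(n - 1) ≈ 5
```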
=== Configuration model ===
The configuration model takes a degree sequence or degree distribution (which subsequently is used to generate a degree sequence) as the input, and produces randomly connected graphs in all respects other than the degree sequence. This means that for a given choice of the degree sequence, the graph is chosen uniformly at random from the set of all graphs that comply with this degree sequence. The degree k of a randomly chosen vertex is an independent and identically distributed random variable with integer values. When {\textstyle \mathbb {E} [k^{2}]-2\mathbb {E} [k]>0}, the configuration graph contains the giant connected component, which has infinite size. The rest of the components have finite sizes, which can be quantified with the notion of the size distribution. The probability w(n) that a randomly sampled node is connected to a component of size n is given by convolution powers of the degree distribution:
{\displaystyle w(n)={\begin{cases}{\frac {\mathbb {E} [k]}{n-1}}u_{1}^{*n}(n-2),&n>1,\\u(0)&n=1,\end{cases}}}
where u(k) denotes the degree distribution and {\displaystyle u_{1}(k)={\frac {(k+1)u(k+1)}{\mathbb {E} [k]}}}. The giant component can be destroyed by randomly removing the critical fraction pc of all edges. This process is called percolation on random networks. When the second moment of the degree distribution is finite, {\textstyle \mathbb {E} [k^{2}]<\infty }, this critical edge fraction is given by {\displaystyle p_{c}=1-{\frac {\mathbb {E} [k]}{\mathbb {E} [k^{2}]-\mathbb {E} [k]}}}, and the average vertex-vertex distance l in the giant component scales logarithmically with the total size of the network, {\displaystyle l=O(\log N)}.
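A minimal sketch of the percolation threshold pc = 1 − E[k] / (E[k²] − E[k]) for a configuration-model network; the degree distribution below is a made-up example.

```python
def percolation_threshold(degree_dist):
    """degree_dist maps degree k to probability u(k)."""
    ek = sum(k * u for k, u in degree_dist.items())       # E[k]
    ek2 = sum(k * k * u for k, u in degree_dist.items())  # E[k^2]
    return 1 - ek / (ek2 - ek)

u = {1: 0.3, 2: 0.4, 3: 0.2, 4: 0.1}
# E[k] = 2.1, E[k^2] = 5.3; E[k^2] - 2E[k] > 0, so a giant component exists.
print(percolation_threshold(u))  # 0.34375
```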
In the directed configuration model, the degree of a node is given by two numbers, in-degree kin and out-degree kout, and consequently, the degree distribution is two-variate. The expected number of in-edges and out-edges coincides, so that {\textstyle \mathbb {E} [k_{\text{in}}]=\mathbb {E} [k_{\text{out}}]}. The directed configuration model contains the giant component iff
{\displaystyle 2\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{in}}k_{\text{out}}]-\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{out}}^{2}]-\mathbb {E} [k_{\text{in}}]\mathbb {E} [k_{\text{in}}^{2}]+\mathbb {E} [k_{\text{in}}^{2}]\mathbb {E} [k_{\text{out}}^{2}]-\mathbb {E} [k_{\text{in}}k_{\text{out}}]^{2}>0.}
Note that {\textstyle \mathbb {E} [k_{\text{in}}]} and {\textstyle \mathbb {E} [k_{\text{out}}]} are equal and therefore interchangeable in the latter inequality. The probability that a randomly chosen vertex belongs to a component of size n is given by:
{\displaystyle h_{\text{in}}(n)={\frac {\mathbb {E} [k_{in}]}{n-1}}{\tilde {u}}_{\text{in}}^{*n}(n-2),\;n>1,\;{\tilde {u}}_{\text{in}}={\frac {k_{\text{in}}+1}{\mathbb {E} [k_{\text{in}}]}}\sum \limits _{k_{\text{out}}\geq 0}u(k_{\text{in}}+1,k_{\text{out}}),}
for in-components, and
{\displaystyle h_{\text{out}}(n)={\frac {\mathbb {E} [k_{\text{out}}]}{n-1}}{\tilde {u}}_{\text{out}}^{*n}(n-2),\;n>1,\;{\tilde {u}}_{\text{out}}={\frac {k_{\text{out}}+1}{\mathbb {E} [k_{\text{out}}]}}\sum \limits _{k_{\text{in}}\geq 0}u(k_{\text{in}},k_{\text{out}}+1),}
for out-components.
=== Watts–Strogatz small world model ===
The Watts and Strogatz model is a random graph generation model that produces graphs with small-world properties.
An initial lattice structure is used to generate a Watts–Strogatz model. Each node in the network is initially linked to its ⟨k⟩ closest neighbors. Another parameter is specified as the rewiring probability. Each edge has a probability p that it will be rewired to the graph as a random edge. The expected number of rewired links in the model is {\displaystyle pE=pN\langle k\rangle /2}.
As the Watts–Strogatz model begins as a non-random lattice structure, it has a very high clustering coefficient along with a high average path length. Each rewire is likely to create a shortcut between highly connected clusters. As the rewiring probability increases, the clustering coefficient decreases slower than the average path length. In effect, this allows the average path length of the network to decrease significantly with only slight decreases in the clustering coefficient. Higher values of p force more rewired edges, which, in effect, makes the Watts–Strogatz model a random network.
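A minimal sketch of Watts–Strogatz-style rewiring: start from a ring lattice where each node links to its k nearest neighbours (k even), then rewire each edge with probability p to a random endpoint. Duplicate edges and self-loops are avoided only crudely here; this is a sketch, not a faithful reference implementation.

```python
import random

def watts_strogatz(n, k, p, seed=None):
    rng = random.Random(seed)
    edges = set()
    for i in range(n):                       # build the ring lattice
        for offset in range(1, k // 2 + 1):
            edges.add((i, (i + offset) % n))
    rewired = set()
    for u, v in edges:
        if rng.random() < p:                 # rewire the far endpoint
            v = rng.randrange(n)
            while v == u or (u, v) in rewired or (v, u) in rewired:
                v = rng.randrange(n)
        rewired.add((u, v))
    return rewired

print(len(watts_strogatz(20, 4, 0.1, seed=0)))  # close to N⟨k⟩/2 = 40
```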
=== Barabási–Albert (BA) preferential attachment model ===
The Barabási–Albert model is a random network model used to demonstrate a preferential attachment or a "rich-get-richer" effect. In this model, an edge is most likely to attach to nodes with higher degrees.
The network begins with an initial network of m0 nodes. m0 ≥ 2 and the degree of each node in the initial network should be at least 1, otherwise it will always remain disconnected from the rest of the network.
In the BA model, new nodes are added to the network one at a time. Each new node is connected to m existing nodes with a probability that is proportional to the number of links that the existing nodes already have. Formally, the probability pi that the new node is connected to node i is
{\displaystyle p_{i}={\frac {k_{i}}{\sum _{j}k_{j}}},}
where ki is the degree of node i. Heavily linked nodes ("hubs") tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a "preference" to attach themselves to the already heavily linked nodes.
The degree distribution resulting from the BA model is scale free, in particular, for large degree it is a power law of the form:
{\displaystyle P(k)\sim k^{-3}\,}
Hubs exhibit high betweenness centrality which allows short paths to exist between nodes. As a result, the BA model tends to have very short average path lengths. The clustering coefficient of this model also tends to 0.
The Barabási–Albert model was developed for undirected networks, aiming to explain the universality of the scale-free property, and applied to a wide range of different networks and applications. The directed version of this model is the Price model, which was developed to describe citation networks.
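A minimal sketch of preferential attachment. Degree-proportional sampling is done with the common trick of drawing uniformly from a list in which each node appears once per incident edge end; the parameters are illustrative.

```python
import random

def barabasi_albert(n, m, seed=None):
    rng = random.Random(seed)
    edges = []
    repeated = []  # node i appears here once per edge end it owns
    # start from a small clique of m nodes so the first draws are defined
    for i in range(m):
        for j in range(i + 1, m):
            edges.append((i, j))
            repeated += [i, j]
    for new in range(m, n):
        targets = set()
        while len(targets) < m:           # degree-proportional sampling
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.append((new, t))
            repeated += [new, t]
    return edges

g = barabasi_albert(1000, 2, seed=0)
print(len(g))  # roughly m·n edges; early nodes become hubs
```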
==== Non-linear preferential attachment ====
In non-linear preferential attachment (NLPA), existing nodes in the network gain new edges proportionally to the node degree raised to a constant positive power, α. Formally, this means that the probability that node i gains a new edge is given by
{\displaystyle p_{i}={\frac {k_{i}^{\alpha }}{\sum _{j}k_{j}^{\alpha }}}.}
If α = 1, NLPA reduces to the BA model and is referred to as "linear". If 0 < α < 1, NLPA is referred to as "sub-linear" and the degree distribution of the network tends to a stretched exponential distribution. If α > 1, NLPA is referred to as "super-linear" and a small number of nodes connect to almost all other nodes in the network. For both α < 1 and α > 1, the scale-free property of the network is broken in the limit of infinite system size. However, if α is only slightly larger than 1, NLPA may result in degree distributions which appear to be transiently scale free.
=== Fitness model ===
Another model where the key ingredient is the nature of the vertex has been introduced by Caldarelli et al. Here a link is created between two vertices i, j with a probability given by a linking function f(ηi, ηj) of the fitnesses of the vertices involved. The degree of a vertex i is given by
{\displaystyle k(\eta _{i})=N\int _{0}^{\infty }f(\eta _{i},\eta _{j})\rho (\eta _{j})\,d\eta _{j}}
If k(ηi) is an invertible and increasing function of ηi, then the probability distribution P(k) is given by
{\displaystyle P(k)=\rho (\eta (k))\cdot \eta '(k)}
As a result, if the fitnesses η are distributed as a power law, then so is the node degree. Less intuitively, with a fast-decaying probability distribution such as {\displaystyle \rho (\eta )=e^{-\eta }} together with a linking function of the kind {\displaystyle f(\eta _{i},\eta _{j})=\Theta (\eta _{i}+\eta _{j}-Z)}, with Z a constant and Θ the Heaviside function, we also obtain scale-free networks. Such a model has been successfully applied to describe trade between nations by using GDP as fitness for the various nodes i, j and a linking function of the kind
{\displaystyle {\frac {\delta \eta _{i}\eta _{j}}{1+\delta \eta _{i}\eta _{j}}}.}
=== Exponential random graph models ===
Exponential Random Graph Models (ERGMs) are a family of statistical models for analyzing data from social and other networks. The exponential family is a broad family of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks.
We adopt the notation to represent a random graph {\displaystyle Y\in {\mathcal {Y}}} via a set of n nodes and a collection of tie variables {\displaystyle \{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}}, indexed by pairs of nodes ij, where Yij = 1 if the nodes (i, j) are connected by an edge and Yij = 0 otherwise.
The basic assumption of ERGMs is that the structure in an observed graph y can be explained by a given vector of sufficient statistics s(y) which are a function of the observed network and, in some cases, nodal attributes. The probability of a graph {\displaystyle y\in {\mathcal {Y}}} in an ERGM is defined by:
{\displaystyle P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}}}
where θ is a vector of model parameters associated with s(y) and {\displaystyle c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))} is a normalising constant.
== Network analysis ==
=== Social network analysis ===
Social network analysis examines the structure of relationships between social entities. These entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks were first developed in sociology. Amongst many other applications, social network analysis has been used to understand the diffusion of innovation, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. Similarly, it has been used to study recruitment into political movements and social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained a significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature. In criminology, it is being used to identify influential actors in criminal gangs, offender movements, co-offending, predict criminal activities and make policies.
=== Dynamic network analysis ===
Dynamic network analysis examines the shifting structure of relationships among different classes of entities in complex socio-technical systems, and reflects social stability and changes such as the emergence of new groups, topics, and leaders. Dynamic network analysis focuses on meta-networks composed of multiple types of nodes (entities) and multiple types of links. These entities can be highly varied. Examples include people, organizations, topics, resources, tasks, events, locations, and beliefs.
Dynamic network techniques are particularly useful for assessing trends and changes in networks over time, identification of emergent leaders, and examining the co-evolution of people and ideas.
=== Biological network analysis ===
With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focuses on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Activity motifs are analogous patterns in the attributes of nodes and edges that are over-represented given the network structure. The analysis of biological networks has led to the development of network medicine, which looks at the effect of diseases in the interactome.
=== Semantic network analysis ===
Semantic network analysis is a sub-field of network analysis that focuses on the relationships between words and concepts in a network. Words are represented as nodes and their proximity or co-occurrences in the text are represented as edges. Semantic networks are therefore graphical representations of knowledge and are commonly used in neurolinguistics and natural language processing applications. Semantic network analysis is also used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field.
=== Link analysis ===
Link analysis is a subset of network analysis that explores associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and the financial transactions they have partaken in during a given timeframe, as well as the familial relationships between these subjects, as part of a police investigation. Link analysis here provides the crucial relationships and associations between objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by the medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed.
=== Pandemic analysis ===
The SIR model is one of the most well-known models for predicting the spread of global pandemics within an infectious population.
==== Susceptible to infected ====
{\displaystyle S=\beta \left({\frac {1}{N}}\right)}
The formula above describes the "force" of infection for each susceptible unit in an infectious population, where β is equivalent to the transmission rate of said disease.
To track the change of those susceptible in an infectious population:
{\displaystyle \Delta S=\beta \times S{1 \over N}\,\Delta t}
==== Infected to recovered ====
{\displaystyle \Delta I=\mu I\,\Delta t}
Over time, the number of those infected fluctuates by: the specified rate of recovery, represented by {\displaystyle \mu } (equivalent to one over the average infectious period {\displaystyle {1 \over \tau }}), the number of infectious individuals, {\displaystyle I}, and the change in time, {\displaystyle \Delta t}.
==== Infectious period ====
Whether a population will be overcome by a pandemic, with regards to the SIR model, is dependent on the value of {\displaystyle R_{0}}, or the average number of people infected by a single infected individual:
{\displaystyle R_{0}=\beta \tau ={\beta \over \mu }}
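A minimal sketch of these discrete updates (written, as is standard, with the susceptible-to-infected flow proportional to the infectious fraction I/N; the parameter values and population size are illustrative assumptions):

```python
def sir_step(s, i, r, beta=0.3, mu=0.1, dt=1.0):
    """One discrete SIR step: new infections flow from S to I,
    recoveries mu*I*dt flow from I to R."""
    n = s + i + r
    new_infections = beta * s * (i / n) * dt
    recoveries = mu * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries

s, i, r = 990.0, 10.0, 0.0  # illustrative population of 1000
for _ in range(200):
    s, i, r = sir_step(s, i, r)
print(f"S={s:.0f} I={i:.0f} R={r:.0f}")  # R_0 = beta/mu = 3, so the epidemic spreads
```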
=== Web link analysis ===
Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg's HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' web sites or blogs.
==== PageRank ====
PageRank works by randomly picking "nodes" or websites and then with a certain probability, "randomly jumping" to other nodes. By randomly jumping to these other nodes, it helps PageRank completely traverse the network as some webpages exist on the periphery and would not as readily be assessed.
Each node, {\displaystyle x_{i}}, has a PageRank defined as the sum, over the pages {\displaystyle j} that link to {\displaystyle i}, of one over the number of outlinks, or "out-degree", of {\displaystyle j}, times the "importance", or PageRank, of {\displaystyle j}:
{\displaystyle x_{i}=\sum _{j\rightarrow i}{1 \over N_{j}}x_{j}^{(k)}}
===== Random jumping =====
As explained above, PageRank enlists random jumps in attempts to assign PageRank to every website on the internet. These random jumps find websites that might not be found during the normal search methodologies such as breadth-first search and depth-first search.
An improvement over the aforementioned formula for determining PageRank adds these random jump components. Without the random jumps, some pages would receive a PageRank of 0, which is undesirable.
The first component is {\displaystyle \alpha }, the probability that a random jump will occur. Contrasting with it is the "damping factor", or {\displaystyle 1-\alpha }.
{\displaystyle R{(p)}={\alpha \over N}+(1-\alpha )\sum _{j\rightarrow i}{1 \over N_{j}}x_{j}^{(k)}}
Another way of looking at it:
{\displaystyle R(A)=\sum {R_{B} \over B_{\text{(outlinks)}}}+\cdots +{R_{n} \over n_{\text{(outlinks)}}}}
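A short power-iteration sketch of the damped formula above; the example link structure, the value of α, and the iteration count are illustrative assumptions.

```python
def pagerank(outlinks, alpha=0.15, iterations=50):
    """Iterate x_i <- alpha/N + (1 - alpha) * sum_{j -> i} x_j / outdeg(j).

    outlinks maps each node to the list of nodes it links to.
    """
    nodes = list(outlinks)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new_rank = {v: alpha / n for v in nodes}
        for j, targets in outlinks.items():
            if not targets:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new_rank[v] += (1 - alpha) * rank[j] / n
            else:
                for i in targets:
                    new_rank[i] += (1 - alpha) * rank[j] / len(targets)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))
```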
=== Centrality measures ===
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. Centrality measures are essential when a network analysis has to answer questions such as: "Which nodes in the network should be targeted to ensure that a message or information spreads to all or most nodes in the network?" or conversely, "Which nodes should be targeted to curtail the spread of a disease?". Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and Katz centrality. The objective of network analysis generally determines the type of centrality measure(s) to be used.
Degree centrality of a node in a network is the number of links (edges) incident on the node.
Closeness centrality determines how "close" a node is to other nodes in a network by measuring the sum of the shortest distances (geodesic paths) between that node and all other nodes in the network.
Betweenness centrality determines the relative importance of a node by measuring the amount of traffic flowing through that node to other nodes in the network. This is done by measuring the fraction of paths connecting all pairs of nodes and containing the node of interest. Group Betweenness centrality measures the amount of traffic flowing through a group of nodes.
Eigenvector centrality is a more sophisticated version of degree centrality where the centrality of a node not only depends on the number of links incident on the node but also the quality of those links. This quality factor is determined by the eigenvectors of the adjacency matrix of the network.
Katz centrality of a node is measured by summing the geodesic paths between that node and all (reachable) nodes in the network. These paths are weighted: paths connecting the node with its immediate neighbors carry higher weights than those which connect with nodes farther away from the immediate neighbors.
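In practice these measures are rarely computed by hand; a brief sketch using the NetworkX library (the example graph is an illustrative assumption):

```python
import networkx as nx

# A small illustrative graph.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])

print("degree:     ", nx.degree_centrality(G))
print("closeness:  ", nx.closeness_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
print("eigenvector:", nx.eigenvector_centrality(G))
print("katz:       ", nx.katz_centrality(G))
```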
== Spread of content in networks ==
Content in a complex network can spread via two major methods: conserved spread and non-conserved spread. In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. The pitcher represents the source, and the water represents the spread content. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the source is infinite. Also, any funnels exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases.
=== The SIR model ===
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, {\displaystyle S(t)}; infected, {\displaystyle I(t)}; and recovered, {\displaystyle R(t)}. The compartments used for this model consist of three classes:
{\displaystyle S(t)} is used to represent the number of individuals not yet infected with the disease at time t, or those susceptible to the disease
{\displaystyle I(t)} denotes the number of individuals who have been infected with the disease and are capable of spreading the disease to those in the susceptible category
{\displaystyle R(t)} is the compartment used for those individuals who have been infected and then recovered from the disease. Those in this category are not able to be infected again or to transmit the infection to others.
The flow of this model may be considered as follows:
{\displaystyle {\mathcal {S}}\rightarrow {\mathcal {I}}\rightarrow {\mathcal {R}}}
Using a fixed population, {\displaystyle N=S(t)+I(t)+R(t)}, Kermack and McKendrick derived the following equations:
{\displaystyle {\begin{aligned}{\frac {dS}{dt}}&=-\beta SI\\[8pt]{\frac {dI}{dt}}&=\beta SI-\gamma I\\[8pt]{\frac {dR}{dt}}&=\gamma I\end{aligned}}}
Several assumptions were made in the formulation of these equations: First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease with a rate of {\displaystyle \beta }, which is considered the contact or infection rate of the disease. Therefore, an infected individual makes contact and is able to transmit the disease with {\displaystyle \beta N} others per unit time and the fraction of contacts by an infected with a susceptible is {\displaystyle S/N}. The number of new infections in unit time per infective then is {\displaystyle \beta N(S/N)}, giving the rate of new infections (or those leaving the susceptible category) as {\displaystyle \beta N(S/N)I=\beta SI} (Brauer & Castillo-Chavez, 2001). For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, infectives leave this class to enter the recovered/removed class at a rate {\displaystyle \gamma } per unit time (where {\displaystyle \gamma } represents the mean recovery rate, or {\displaystyle 1/\gamma } the mean infective period). These processes which occur simultaneously are referred to as the Law of Mass Action, a widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned (Daley & Gani, 2005). Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths and therefore, these factors are ignored in this model.
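A minimal sketch integrating the Kermack–McKendrick equations with Euler steps; the parameter values, initial conditions, and step size are illustrative assumptions (note that β multiplies S·I directly, matching the equations above with the population normalised to 1).

```python
def kermack_mckendrick(s, i, r, beta, gamma, dt, steps):
    """Euler integration of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I."""
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

# Illustrative values: population fractions summing to 1.
print(kermack_mckendrick(s=0.99, i=0.01, r=0.0,
                         beta=0.5, gamma=0.1, dt=0.1, steps=1000))
```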
More can be read on this model on the Epidemic model page.
=== The master equation approach ===
A master equation can express the behaviour of an undirected growing network where, at each time step, a new node is added to the network, linked to an old node (randomly chosen and without preference). The initial network is formed by two nodes and two links between them at time {\displaystyle t=2}; this configuration is necessary only to simplify further calculations, so at time {\displaystyle t=n} the network has {\displaystyle n} nodes and {\displaystyle n} links.
The master equation for this network is:
{\displaystyle p(k,s,t+1)={\frac {1}{t}}p(k-1,s,t)+\left(1-{\frac {1}{t}}\right)p(k,s,t),}
where {\displaystyle p(k,s,t)} is the probability that the node {\displaystyle s} has degree {\displaystyle k} at time {\displaystyle t+1}, and {\displaystyle s} is the time step when this node was added to the network. Note that there are only two ways for an old node {\displaystyle s} to have {\displaystyle k} links at time {\displaystyle t+1}:
The node {\displaystyle s} has degree {\displaystyle k-1} at time {\displaystyle t} and will be linked by the new node with probability {\displaystyle 1/t}
The node already has degree {\displaystyle k} at time {\displaystyle t} and will not be linked by the new node.
After simplifying this model, the degree distribution is
{\displaystyle P(k)=2^{-k}.}
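A quick Monte Carlo sketch of this growth rule, comparing the empirical degree distribution with {\displaystyle P(k)=2^{-k}}; the network size and random seed are illustrative assumptions.

```python
import random
from collections import Counter

def grow_network(n=100_000, seed=0):
    """Start with two nodes joined by two links; at each time step attach a
    new node to a uniformly random old node (no preferential attachment)."""
    rng = random.Random(seed)
    degree = [2, 2]  # initial configuration at t = 2
    for _ in range(2, n):
        degree[rng.randrange(len(degree))] += 1
        degree.append(1)
    return degree

degree = grow_network()
counts = Counter(degree)
for k in range(1, 6):
    print(f"k={k}: empirical {counts[k] / len(degree):.4f}  "
          f"predicted {2.0 ** -k:.4f}")
```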
Based on this growing network, an epidemic model is developed following a simple rule: Each time the new node is added and after choosing the old node to link, a decision is made: whether or not this new node will be infected. The master equation for this epidemic model is:
{\displaystyle p_{r}(k,s,t)=r_{t}{\frac {1}{t}}p_{r}(k-1,s,t)+\left(1-{\frac {1}{t}}\right)p_{r}(k,s,t),}
where {\displaystyle r_{t}} represents the decision to infect ({\displaystyle r_{t}=1}) or not ({\displaystyle r_{t}=0}). Solving this master equation, the following solution is obtained:
{\displaystyle {\tilde {P}}_{r}(k)=\left({\frac {r}{2}}\right)^{k}.}
== Multilayer networks ==
Multilayer networks are networks with multiple kinds of relations. Attempts to model real-world systems as multidimensional networks have been used in various fields such as social network analysis, economics, history, urban and international transport, ecology, psychology, medicine, biology, commerce, climatology, physics, computational neuroscience, operations management, and finance.
== Network optimization ==
Network problems that involve finding an optimal way of doing something are studied under the name of combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, critical path analysis and PERT (Program Evaluation & Review Technique).
In recent years, innovative research has emerged focusing on the optimization of network problems. For example, Michael Mann's research published in IEEE addresses the optimization of transportation networks.
== Interdependent networks ==
Interdependent networks are networks where the functioning of nodes in one network depends on the functioning of nodes in another network. In nature, networks rarely appear in isolation; rather, they are typically elements in larger systems and interact with elements in that complex system. Such complex dependencies can have non-trivial effects on one another. A well studied example is the interdependency of infrastructure networks: the power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of a communications network. Though the transportation network does not depend on the power network to function, the communications network does. In such infrastructure networks, the dysfunction of a critical number of nodes in either the power network or the communication network can lead to cascading failures across the system with potentially catastrophic results for the functioning of the whole system. If the two networks were treated in isolation, this important feedback effect would not be seen and predictions of network robustness would be greatly overestimated.
== See also ==
== References ==
== Further reading ==
A First Course in Network Science, F. Menczer, S. Fortunato, C.A. Davis. (Cambridge University Press, 2020). ISBN 9781108471138. GitHub site with tutorials, datasets, and other resources
"Connected: The Power of Six Degrees," https://web.archive.org/web/20111006191031/http://ivl.slis.indiana.edu/km/movies/2008-talas-connected.mov
Cohen, R.; Erez, K. (2000). "Resilience of the Internet to random breakdown". Phys. Rev. Lett. 85 (21): 4626–4628. arXiv:cond-mat/0007048. Bibcode:2000PhRvL..85.4626C. CiteSeerX 10.1.1.242.6797. doi:10.1103/physrevlett.85.4626. PMID 11082612. S2CID 15372152. Archived from the original on 2013-05-12. Retrieved 2011-04-12.
Pu, Cun-Lai; Wen-; Pei, Jiang; Michaelson, Andrew (2012). "Robustness analysis of network controllability" (PDF). Physica A. 391 (18): 4420–4425. Bibcode:2012PhyA..391.4420P. doi:10.1016/j.physa.2012.04.019. Archived from the original (PDF) on 2016-10-13. Retrieved 2013-09-18.
S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From biological networks to the Internet and WWW, Oxford University Press, 2003, ISBN 0-19-851590-1
Linked: The New Science of Networks, A.-L. Barabási (Perseus Publishing, Cambridge)
Scale-Free Networks, G. Caldarelli (Oxford University Press, Oxford)
Network Science, Committee on Network Science for Future Army Applications, National Research Council. The National Academies Press (2005) ISBN 0-309-10026-7
Network Science Bulletin, USMA (2007) ISBN 978-1-934808-00-9
The Structure and Dynamics of Networks Mark Newman, Albert-László Barabási, & Duncan J. Watts (The Princeton Press, 2006) ISBN 0-691-11357-2
Dynamical processes on complex networks, Alain Barrat, Marc Barthelemy, Alessandro Vespignani (Cambridge University Press, 2008) ISBN 978-0-521-87950-7
Network Science: Theory and Applications, Ted G. Lewis (Wiley, March 11, 2009) ISBN 0-470-33188-7
Nexus: Small Worlds and the Groundbreaking Theory of Networks, Mark Buchanan (W. W. Norton & Company, June 2003) ISBN 0-393-32442-7
Six Degrees: The Science of a Connected Age, Duncan J. Watts (W. W. Norton & Company, February 17, 2004) ISBN 0-393-32542-3
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman.
== Introduction to ESEM ==
For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "As a principle [sic] result of Industrial Revolutions and associated changes in human demographics, technology systems, cultures, and economic systems have been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity".
In many ways, ESEM views the earth as a human artifact. "In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion- an Earth Systems Engineering and Management (ESEM) capability".
ESEM has been developed by a few individuals. One of particular note is Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". In fact there are no longer any natural systems left in the world, "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally. The issue is whether we will assume the ethical responsibility to do ESEM rationally and responsibly". Unlike traditional engineering and management processes, "which assume a high degree of knowledge and certainty about the systems behavior and a defined endpoint to the process," ESEM "will be in constant dialog with [the systems], as they – and we and our cultures – change and coevolve together into the future". ESEM is a new concept, however there are a number of fields "such as industrial ecology, adaptive management, and systems engineering that can be relied on to enable rapid progress in developing" ESEM as a discipline.
The premise of ESEM is that science and technology can provide successful and lasting solutions to human-created problems such as environmental pollution and climate-change. This assumption has recently been challenged in Techno-Fix: Why Technology Won't Save Us or the Environment.
== Topics ==
=== Adaptive management ===
Adaptive management is a key aspect of ESEM. Adaptive management is a way of approaching environmental management. It assumes that there is a great deal of uncertainty in environmental systems and holds that there is never a final solution to an earth systems problem. Therefore, once action has been taken, the Earth Systems Engineer will need to be in constant dialogue with the system, watching for changes and how the system evolves. This way of monitoring and managing ecosystems accepts nature's inherent uncertainty and embraces it by never concluding to one certain cure to a problem.
=== Earth systems engineering ===
Earth systems engineering is essentially the use of systems analysis methods in the examination of environmental problems. When analyzing complex environmental systems, there are numerous data sets, stakeholders and variables. It is therefore appropriate to approach such problems with a systems analysis method. Essentially there are "six major phases of a properly-conducted system study". The six phases are as follows:
Determine goals of system
Establish criteria for ranking alternative candidates
Develop alternative solutions
Rank alternative candidates
Iterate
Act
Part of the systems analysis process includes determining the goals of the system. The key components of goal development include the development of a Descriptive Scenario, a Normative Scenario and Transitive Scenario. Essentially, the Descriptive Scenario "describe[s] the situation as it is [and] tell[s] how it got to be that way" (Gibson, 1991). Another important part of the Descriptive Scenario is how it "point[s] out the good features and the unacceptable elements of the status quo". Next, the Normative Scenario shows the final outcome or the way the system should operate under ideal conditions once action has been taken. For the earth systems approach, the "Normative Scenario" will involve the most complicated analysis. The Normative Scenario will deal with stakeholders, creating a common trading zone or location for the free exchange of ideas to come up with a solution of where a system may be restored to or just how exactly a system should be modified. Finally the Transitive scenario comes up with the actual process of changing a system from a Descriptive state to a Normative state. Often, there is not one final solution, as noted in adaptive management. Typically an iterative process ensues as variables and inputs change and the system coevolves with the analysis.
=== Environmental science ===
When examining complex ecosystems there is an inherent need for the earth systems engineer to have a strong understanding of how natural processes function. A training in Environmental Science will be crucial to fully understand the possible unintended and undesired effects of a proposed earth systems design. Fundamental topics such as the carbon cycle or the water cycle are pivotal processes that need to be understood.
=== Ethics and sustainability ===
At the heart of ESEM is the social, ethical and moral responsibility of the earth systems engineer to stakeholders and to the natural system being engineered, to come up with an objective Transitive and Normative scenario. "ESEM is the cultural and ethical context itself". The earth systems engineer will be expected to explore the ethical implications of proposed solutions.
"The perspective of environmental sustainability requires that we ask ourselves how each interaction with the natural environment will affect, and be judged by, our children in the future" ". "There is an increasing awareness that the process of development, left to itself, can cause irreversible damage to the environment, and that the resultant net addition to wealth and human welfare may very well be negative, if not catastrophic". With this notion in mind, there is now a new goal of sustainable environment-friendly development. Sustainable development is an important part to developing appropriate ESEM solutions to complex environmental problems.
=== Industrial ecology ===
Industrial ecology is the notion that major manufacturing and industrial processes need to shift from open loop systems to closed loop systems. This is essentially the recycling of waste to make new products. This reduces refuse and increases the effectiveness of resources. ESEM looks to minimize the impact of industrial processes on the environment, therefore the notion of recycling of industrial products is important to ESEM.
== Case study: Florida Everglades ==
The Florida Everglades system is a prime example of a complex ecological system that underwent an ESEM analysis.
=== Background ===
The Florida Everglades is located in southern Florida. The ecosystem is essentially a subtropical fresh water marsh composed of a variety of flora and fauna. Of particular note is the saw grass and ridge slough formations that make the Everglades unique. Over the course of the past century mankind has had a rising presence in this region. Currently, all of the eastern shore of Florida is developed and the population has increased to over 6 million residents. This increased presence over the years has resulted in the channeling and redirecting of water from its traditional path through the Everglades and into the Gulf of Mexico and Atlantic Ocean. With this there have been a variety of deleterious effects upon the Florida Everglades.
=== Descriptive scenario ===
By 1993, the Everglades had been affected by numerous human developments. The water flow and quality had been affected by everything from the construction of canals and levees, to the series of elevated highways running through the Everglades, to the expansive Everglades Agricultural Area that had contaminated the Everglades with high amounts of nitrogen. The result of this reduced flow of water was dramatic: a 90–95% reduction in wading bird populations, declining fish populations, and salt water intrusion into the ecosystem. If the Florida Everglades were to remain a US landmark, action needed to be taken.
=== Normative scenario ===
It was in 1993 that the Army Corps of Engineers analyzed the system. They determined that an ideal situation would be to "get the water right". In doing so there would be a better flow through the Everglades and a reduced number of canals and levees sending water to tide.
=== Transitive scenario ===
It was from the development of the Normative Scenario that the Army Corps of Engineers developed CERP, the Comprehensive Everglades Restoration Plan. In the plan they created a timeline of projects to be completed, the estimated cost, and the ultimate results of improving the ecosystem by having native flora and fauna prosper. They also outline the human benefits of the project. Not only will the solution be sustainable, as future generations will be able to enjoy the Everglades, but the correction of the water flow and the creation of storage facilities will reduce the occurrence of droughts and water shortages in southern Florida.
== See also ==
Design review
Environmental management
Industrial ecology
Sustainability
Systems engineering
== Publications ==
Allenby, B. R. (2000). Earth systems engineering: the world as human artifact. Bridge 30 (1), 5–13.
Allenby, B. R. (2005). Reconstructing earth: Technology and environment in the age of humans. Washington, DC: Island Press. From https://www.loc.gov/catdir/toc/ecip059/2005006241.html
Allenby, B. R. (2000, Winter). Earth systems engineering and management. IEEE Technology and Society Magazine, 0278-0079(Winter) 10-24.
Davis, Steven, et al. Everglades: The Ecosystem and Its Restoration. Boca Raton: St Lucie Press, 1997.
"Everglades." Comprehensive Everglades Restoration Plan. 10 April 2004. https://web.archive.org/web/20051214102114/http://www.evergladesplan.org/
Gibson, J. E. (1991). How to do A systems analysis and systems analyst decalog. In W. T. Scherer (Ed.), (Fall 2003 ed.) (pp. 29–238). Department of Systems and Information Engineering: U of Virginia. Retrieved October 29, 2005,
Gorman, Michael. (2004). Syllabus Spring Semester 2004. Retrieved October 29, 2005 from https://web.archive.org/web/20110716231016/http://repo-nt.tcc.virginia.edu/classes/ESEM/syllabus.html
Hall, J.W. and O'Connell, P.E. (2007). Earth Systems Engineering: turning vision into action. Civil Engineering, 160(3): 114-122.
Newton, L. H. (2003). Ethics and sustainability: Sustainable development and the moral life. Upper Saddle River, N.J.: Prentice Hall.
== References ==
== External links ==
Class Taught Spring 2004 at The University of Virginia on ESEM
UVA article on Spring 2004 course Archived 2005-11-15 at the Wayback Machine
Class Taught January 2007 at the University of Virginia on ESEM
Allenby Article on ESEM
Centre for Earth Systems Engineering Research @ Newcastle University
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
== Training ==
Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
== History ==
=== Early work ===
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
=== Deep learning breakthroughs in the 1960s and 1970s ===
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976 transfer learning was introduced in neural networks learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
=== Backpropagation ===
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
=== Convolutional neural networks ===
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
=== Recurrent neural networks ===
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist in the early 1980s carried out a debate on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982 the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of a debate where an AI system, a recurrent neural network, contributed to an issue addressed at the same time by cognitive psychology.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate, introduced in 1999. It became the default choice for RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
=== Deep learning ===
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018) based on the Progressive GAN by Tero Karras et al. Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerning deepfakes. Diffusion models (2015) have eclipsed GANs in generative modeling since then, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
It requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
== Models ==
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another, allowing weights to choose the signal between neurons.
=== Artificial neurons ===
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
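A minimal sketch of this computation for a single neuron; the input values, weights, bias, and the choice of a sigmoid activation are illustrative assumptions.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus the bias (the 'activation'),
    passed through a nonlinear activation function (here a sigmoid)."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron(inputs=[0.5, -1.2, 3.0], weights=[0.4, 0.1, -0.6], bias=0.2))
```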
=== Organization ===
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
=== Hyperparameter ===
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
=== Learning ===
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
==== Learning rate ====
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
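A sketch of an update combining the current gradient with the previous change, as described above; the learning rate, momentum coefficient, and toy quadratic loss are illustrative assumptions.

```python
def momentum_step(weight, velocity, gradient, learning_rate=0.1, momentum=0.9):
    """Blend the gradient with the previous change: a momentum near 0
    emphasizes the gradient, a value near 1 emphasizes the last change."""
    velocity = momentum * velocity - learning_rate * gradient
    return weight + velocity, velocity

# Minimise the toy loss L(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, gradient=2 * (w - 3))
print(w)  # converges to 3
```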
==== Cost function ====
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
==== Backpropagation ====
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
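A compact sketch of backpropagation training a one-hidden-layer network by gradient descent; the XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error gradient via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # typically approaches [0, 1, 1, 0]
```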
=== Learning paradigms ===
Machine learning is commonly separated into three main learning paradigms, supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.
==== Supervised learning ====
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
==== Unsupervised learning ====
In unsupervised learning, input data is given along with the cost function, some function of the data {\displaystyle \textstyle x} and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model {\displaystyle \textstyle f(x)=a} where {\displaystyle \textstyle a} is a constant and the cost {\displaystyle \textstyle C=E[(x-f(x))^{2}]}. Minimizing this cost produces a value of {\displaystyle \textstyle a} that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between {\displaystyle \textstyle x} and {\displaystyle \textstyle f(x)}, whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
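A tiny numerical check of the trivial example above, showing gradient descent on the empirical cost C = E[(x − a)²] converging to the sample mean; the data values and learning rate are illustrative assumptions.

```python
data = [2.0, 4.0, 9.0]
a = 0.0
for _ in range(200):
    # Gradient of C = mean((x - a)^2) with respect to a.
    grad = sum(2 * (a - x) for x in data) / len(data)
    a -= 0.1 * grad
print(a, sum(data) / len(data))  # both are 5.0, the mean of the data
```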
==== Reinforcement learning ====
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
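As an illustrative sketch under strong assumptions (a tiny, fully known MDP, whereas reinforcement learning proper would have to estimate these distributions from experience), a lowest-expected-cost policy can be computed by value iteration:

```python
import numpy as np

# Illustrative value iteration over a tiny MDP with known cost and
# transition distributions; all names and sizes are arbitrary.
n_states, n_actions, gamma = 3, 2, 0.9
cost = np.random.rand(n_states, n_actions)         # expected cost c(s, a)
P = np.random.rand(n_states, n_actions, n_states)  # P(s' | s, a)
P /= P.sum(axis=2, keepdims=True)                  # normalize to distributions

V = np.zeros(n_states)
for _ in range(500):
    # Q(s, a) = c(s, a) + gamma * sum_s' P(s'|s, a) * V(s')
    Q = cost + gamma * (P @ V)
    V = Q.min(axis=1)          # minimize long-term (discounted) cost
policy = Q.argmin(axis=1)      # greedy lowest-cost action per state
```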
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine, because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
==== Self-learning ====
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it receives initial emotions (only once) about situations that will be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in the behavioral environment, which contains both desirable and undesirable situations.
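A minimal sketch of this crossbar update loop (the environment function, value scale, and sizes are placeholders, not from the original description):

```python
import numpy as np

# Crossbar adaptive array sketch: W[a, s] holds the learned value of
# taking action a in situation s. All specifics here are illustrative.
n_actions, n_situations = 4, 6
W = np.zeros((n_actions, n_situations))
v = np.random.uniform(-1, 1, n_situations)  # genome: initial emotions v(s)

def step(s, env):
    a = W[:, s].argmax()        # in situation s perform action a
    s_next = env(s, a)          # receive consequence situation s'
    W[a, s] += v[s_next]        # update crossbar memory w(a,s) by v(s')
    return s_next

s = 0
for _ in range(100):            # placeholder environment: random transitions
    s = step(s, lambda s, a: np.random.randint(n_situations))
```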
==== Neuroevolution ====
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
=== Stochastic neural network ===
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
=== Topological deep learning ===
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
=== Other ===
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
==== Modes ====
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
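The three modes differ only in how many samples contribute to each weight update; a schematic comparison (illustrative Python, with a simple linear-model gradient standing in for any network's gradient):

```python
import numpy as np

def grad(W, X, Y):
    # Placeholder: gradient of a squared-error cost for a linear model.
    return (W @ X.T - Y.T) @ X / len(X)

def train(W, X, Y, lr=0.1, batch_size=1):
    idx = np.random.permutation(len(X))      # shuffle the data set
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]    # batch_size=1 -> stochastic,
        W = W - lr * grad(W, X[b], Y[b])     # len(X) -> batch, else mini-batch
    return W

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(64, 3)), rng.normal(size=(64, 2))
W = train(np.zeros((2, 3)), X, Y, batch_size=8)   # mini-batch mode
```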
== Types ==
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; and long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that mix low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads;
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
== Network design ==
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides basic building blocks, and a deep network can then be implemented with a framework such as TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. The Python code snippet provides an overview of the training function, which uses the training dataset, number of hidden layer units, learning rate, and number of iterations as parameters:
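A minimal sketch of such a training function, assuming a single sigmoid hidden layer, a linear output layer, and full-batch gradient descent on a mean-squared-error cost (all names and details are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, n_hidden, learning_rate, n_iterations):
    # Illustrative two-layer network: X is (n_samples, n_inputs),
    # Y is (n_samples, n_outputs); weights are initialized small.
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, Y.shape[1]))
    for _ in range(n_iterations):
        H = sigmoid(X @ W1)              # hidden layer activations
        out = H @ W2                     # linear output layer
        err = out - Y                    # dC/d(out) for an MSE cost
        dW2 = H.T @ err / len(X)         # backpropagate to W2
        dH = err @ W2.T * H * (1 - H)    # back through the sigmoid
        dW1 = X.T @ dH / len(X)          # backpropagate to W1
        W1 -= learning_rate * dW1
        W2 -= learning_rate * dW2
    return W1, W2

X, Y = np.random.rand(32, 4), np.random.rand(32, 1)
W1, W2 = train(X, Y, n_hidden=8, learning_rate=0.5, n_iterations=1000)
```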
== Applications ==
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs can also help mitigate flooding by modelling rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered the long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
== Theoretical properties ==
=== Computational power ===
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
=== Capacity ===
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are known: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, given input data in a specific form. The VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
=== Convergence ===
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used is not guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may cross a saddle point, which can steer convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. As the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
=== Generalization and statistics ===
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters.
Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by assigning a larger prior probability to simpler models, but also in statistical learning theory, where the goal is to minimize two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
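As a hedged sketch of this procedure (the data and names here are synthetic): the validation MSE serves as the variance estimate, and the normal 95% quantile gives the interval width.

```python
import numpy as np

# Synthetic stand-ins for a trained network's validation outputs.
rng = np.random.default_rng(1)
target_val = rng.normal(size=100)
net_output_val = target_val + rng.normal(scale=0.2, size=100)

mse = np.mean((net_output_val - target_val) ** 2)  # variance estimate
y_new = 0.7                                        # some new network output
half_width = 1.96 * np.sqrt(mse)                   # normal 95% quantile
print(y_new - half_width, y_new + half_width)      # ~95% confidence interval
```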
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:
y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}}
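A direct transcription into code (with the standard max-subtraction trick for numerical stability, which does not change the result):

```python
import numpy as np

def softmax(x):
    # Subtracting max(x) avoids overflow without changing the output.
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())   # outputs interpretable as posterior probabilities; sum to 1
```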
== Criticism ==
=== Training ===
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too-large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research is devoted to extrapolating multiple training scenarios from a single training experience, and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right).
=== Theory ===
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything."
One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
=== Hardware ===
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
=== Practical counterexamples ===
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
=== Hybrid approaches ===
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
=== Dataset bias ===
Neural networks are dependent on the quality of the data they are trained on; low-quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering, due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
== Gallery ==
== Recent advancements and future directions ==
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
=== Image processing ===
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
=== Speech recognition ===
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
=== Natural language processing ===
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
=== Control systems ===
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
=== Finance ===
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
=== Medicine ===
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
=== Content creation ===
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where non-player characters (NPCs) can make decisions based on all the characters currently in the game.
== See also ==
== References ==
== Bibliography ==
== External links ==
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science Archived 7 June 2015 at the Wayback Machine
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks Archived 24 January 2011 at the Wayback Machine – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information Archived 9 July 2009 at the Wayback Machine
Sanderson G (5 October 2017). "But what is a Neural Network?". 3Blue1Brown. Archived from the original on 7 November 2021 – via YouTube.
Enterprise systems engineering (ESE) is the discipline that applies systems engineering to the design of an enterprise. As a discipline, it includes a body of knowledge, principles, and processes tailored to the design of enterprise systems.
An enterprise is a complex, socio-technical system that comprises interdependent resources of people, information, and technology that must interact to fulfill a common mission.
Enterprise systems engineering incorporates all the tasks of traditional systems engineering but is further informed by an expansive view of the political, operational, economic, and technological (POET) contexts in which the system(s) under consideration are developed, acquired, modified, maintained, or disposed.
Enterprise systems engineering may be appropriate when the complexity of the enterprise exceeds the scope of the assumptions upon which textbook systems engineering is based. Traditional systems engineering assumptions include relatively stable and well-understood requirements, a system configuration that can be controlled, and a small, easily discernible set of stakeholders.
An enterprise systems engineer must produce a different kind of analysis on the people, technology, and other components of the organization in order to see the whole enterprise. As the enterprise becomes more complex, with more parameters and people involved, it is important to integrate the system as much as possible to enable the organization to achieve a higher standard.
== Elements ==
Four elements are needed for enterprise systems engineering to work. These include development through adaption, strategic technical planning, enterprise governance, and ESE processes (with stages).
=== Development through adaptation ===
Development through adaptation is a way to compromise with the problems and obstacles in complex systems. Over time, the environment changes and adaptation is required to continue development. For example, mobile phones have undergone numerous modifications since their introduction. Initially, the devices were considerably larger than those seen in later iterations. Over time, variations in size and design have been observed across different generations of mobile phones. Additionally, the evolution of mobile data technology from 1G to 5G has influenced the speed and convenience of mobile phone usage.
=== Strategic technical planning ===
Strategic technical planning (STP) gives the enterprise the picture of their aim and objectives. STP components are:
Mission statement
Needs assessment
Technology descriptions and goal statement
Hardware and software requirement
Budget plan
Human Resources
=== Enterprise governance ===
Enterprise governance is defined as 'the set of responsibilities and practices exercised by the board and executive management to provide strategic direction, ensure that objectives are achieved, ascertain that risks are managed appropriately and verify that the organization's resources are used responsibly,' according to CIMA Official Terminology. Enterprise governance supports decisions such as the choice of CEO and executives for the company, and also helps identify the company's risks.
== Processes ==
Four steps comprise the enterprise system engineering process: technology planning (TP); capabilities-based engineering analysis (CBEA); enterprise architecture (EA); and enterprise analysis and assessment (EA&A).
=== Technology planning ===
TP looks for technologies key to the enterprise. This step aims to identify the innovative ideas and choose the technologies that are useful for the enterprise.
=== Capabilities-based engineering analysis ===
CBEA is an analysis method that focuses on elements that the whole enterprise needs. The three steps are purpose formulation, exploratory analysis, and evolutionary planning:
==== Purpose formulation ====
Assess stakeholder Interest – understand what the stakeholders want and like
Specify outcome spaces – find solutions for several conditions and the goal for the operations
Frame capability portfolios – collect fundamental elements
==== Exploratory analysis ====
Assess performance and cost – identify the performance and cost in different conditions and find solutions to improve
Explore concepts – search for new concepts and transform advanced capabilities
Determine the need for more variety – examine the risks and chances and decide whether new ways are needed
==== Evolutionary planning ====
Assess enterprise impacts – investigate the effects on the enterprise in technical and capability aspects
Examine evolution strategies – explore and construct more strategies and evolution route
Develop capability road map – plan for the capability area which includes analysis and decision making which is a tool for assessment and development for the enterprise
=== Enterprise architecture ===
EA is a model that illustrates the vision, network and framework of an organization. The four aspects (according to Michael Platt) are business, application, information and technology. The benefits are improved decision making, increased IT efficiency and reduced losses.
Business – The strategies and processes by the operation of business
Application – Interaction and communication along with the processes used in the company
Information – The logical data and statistics that the organization requires to run properly and actively
Technology – The software and hardware and different operational systems that are used in the company
All the elements are dependent and rely on each other in order to build the infrastructure.
=== Enterprise analysis and assessment ===
Enterprise analysis and assessment aims to assess whether the enterprise is going in the right direction and to help make correct decisions. Qualities required for this step include awareness of technologies, knowing and understanding command and control (C2) issues, and using modeling and simulation (M&S) to explore the implications.
Activities and actions for this event include:
Multi-scale analysis
Early and continuous war fighter operational assessment
Lightweight, portable M&S-based C2 capability representations
Development software available for assessment
Minimal infrastructure
Flexible M&S operator-in-the-loop (OITL), and hardware-in-the-loop (HWIL) capabilities
In-line, continuous performance monitoring and selective forensics
== Traditional systems engineering ==
Traditional systems engineering (TSE) can be understood as the engineering of a sub-system. Its elements:
TSE is conducted by an external designer
It is a stable system which doesn't change automatically
Operation and development are independent of each other
People do not play an important role in it
Massive machines behave in expected ways
A survey compared ESE and TSE. It reported that the two are complementary and interdependent: ESE received a higher rating, while TSE can be seen as part of ESE; a combination of the two could be ideal.
== Applications ==
The two types of ESE application are Information Enterprise Systems Engineering and Social Enterprise Systems Engineering.
=== Information Enterprise Systems Engineering (IESE) ===
IESE is a system built up to meet the requirements and expectations of different stakeholders in the organization. There must be input devices to collect the information and output devices to satisfy the information needs.
There are three different aspects for the framework of IESE:
Functional view
Topology view
Physical view
Also, there are different rules for the IESE model.
Interchangeable point of view
Detailed and well-displayed views, showing the specific methods, solutions and techniques
Consistent views
Supported viewpoints
=== Social Enterprise System Engineering ===
This is a framework that involves planning, analyzing, mapping, and drawing a network of the process for enterprises and stakeholders. Moreover, it creates social value for entrepreneurship and explores and focuses on social and societal issues. It forms a connection between social enterprise and systems engineering. There is a Social Enterprise Systems Engineering V-model, in which two or more social elements are established on the basis of the systems engineering framework: for example, social interface analysis that reviews stakeholders' requirements, and activities and interactions between stakeholders to exchange opinions.
== Opportunity and risk management ==
There are opportunities and risks in ESE: enterprises have to be aggressive in seeking opportunities while finding ways to avoid or minimize risks. An opportunity is a trigger element that may lead to the accomplishment of objectives. A risk is a potential occurrence that will affect the performance of the entire system. There are several reasons for the importance of risk management.
To identify risks beforehand, so that actions can be prepared to prevent or minimize them
Since risks can cost the enterprise, determining the risk events can reduce the amount of loss
Help to know how to allocate the human or technology resources in order to avoid the most critical risks
There are several steps in the enterprise risk and opportunity management process:
Prepare the risk and opportunity plan – Select team and representatives
Identify Risks – Complete risks statements for each risk
Identify Opportunities – People who work at the tactical level and managers must understand the opportunities in order to take further action
Evaluate the Enterprise Risks and Opportunities – To decide which is more critical and vital
Develop the plan – Develop after identification and evaluation with different strategies
== See also ==
Enterprise architecture
Enterprise engineering
Enterprise life cycle
Industrial engineering
Systems engineering
Soft systems methodology
System of systems
System of systems engineering (SoSE)
Risk management plan
Technology roadmap
== References ==
24. "Enterprise Architecture | Centric". Business Consulting. Retrieved 2024-1-30.
== Further reading ==
R.E. Giachetti (2010), Design of Enterprise Systems, CRC Press, Boca Raton, Florida.
Oscar A. Saenz, and Chin-Sheng Chen (2004). "A Framework for Enterprise Systems Engineering"
Robert S. Swarz, and Joseph K. DeRosa (2006). A Framework for Enterprise Systems Engineering Processes
== External links ==
Department of Industrial and Enterprise Systems Engineering University of Illinois at Urbana-Champaign.
MIT Engineering Systems Division
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels.
== Principles and differences from other theories ==
Perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory, and in the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology, it sets out from the concept of circular causality. It therefore shares its theoretical foundation with the concept of plant control, but is distinct from it in emphasizing the control of the internal representation of the physical world.
Plant control theory focuses on the neuro-computational processes of movement generation once a decision to generate the movement has been taken. PCT spotlights the embeddedness of agents in their environment. Therefore, from the perspective of perceptual control, the central problem of motor control consists in finding a sensory input to the system that matches a desired perception.
== History ==
PCT has roots in the physiological insights of Claude Bernard and, in the 20th century, in the research of Walter B. Cannon, as well as in the fields of control systems engineering and cybernetics. Classical negative feedback control was worked out by engineers in the 1930s and 1940s, and further developed by Wiener, Ashby, and others in the early development of the field of cybernetics. Beginning in the 1950s, William T. Powers applied the concepts and methods of engineered control systems to biological control systems, and developed the experimental methodology of PCT.
A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions), but its input, that is, a sensed and transformed function of some state of the environment that the control system's output can affect. Because these sensed and transformed inputs may appear as consciously perceived aspects of the environment, Powers labelled the controlled variable "perception". The theory came to be known as "Perceptual Control Theory" or PCT rather than "Control Theory Applied to Psychology" because control theorists often assert or assume that it is the system's output that is controlled. In PCT it is the internal representation of the state of some variable in the environment—a "perception" in everyday language—that is controlled. The basic principles of PCT were first published by Powers, Clark, and MacFarland as a "general feedback theory of behavior" in 1960, with credits to cybernetic authors Wiener and Ashby. It has been systematically developed since then in the research community that has gathered around it. Initially, it was overshadowed by the cognitive revolution (later supplanted by cognitive science), but has now become better known.
Powers and other researchers in the field point to problems of purpose, causation, and teleology at the foundations of psychology which control theory resolves. From Aristotle through William James and John Dewey it has been recognized that behavior is purposeful and not merely reactive, but how to account for this has been problematic because the only evidence for intentions was subjective. As Powers pointed out, behaviorists following Wundt, Thorndike, Watson, and others rejected introspective reports as data for an objective science of psychology. Only observable behavior could be admitted as data. Such behaviorists modeled environmental events (stimuli) as causing behavioral actions (responses). This causal assumption persists in some models in cognitive psychology that interpose cognitive maps and other postulated information processing between stimulus and response but otherwise retain the assumption of linear causation from environment to behavior, which Richard Marken called an "open-loop causal model of behavioral organization" in contrast to PCT's closed-loop model.
Another, more specific reason that Powers observed for psychologists' rejecting notions of purpose or intention was that they could not see how a goal (a state that did not yet exist) could cause the behavior that led to it. PCT resolves these philosophical arguments about teleology because it provides a model of the functioning of organisms in which purpose has objective status without recourse to introspection, and in which causation is circular around feedback loops.
== Example ==
A simple negative feedback control system is a cruise control system for a car. A cruise control system has a sensor which "perceives" speed as the rate of spin of the drive shaft directly connected to the wheels. It also has a driver-adjustable 'goal' specifying a particular speed. The sensed speed is continuously compared against the specified speed by a device (called a "comparator") which subtracts the currently sensed input value from the stored goal value. The difference (the error signal) determines the throttle setting (the accelerator depression), so that the engine output is continuously varied to prevent the speed of the car from increasing or decreasing from that desired speed as environmental conditions change.
If the speed of the car starts to drop below the goal-speed, for example when climbing a hill, the small increase in the error signal, amplified, causes engine output to increase, which keeps the error very nearly at zero. If the speed begins to exceed the goal, e.g. when going down a hill, the engine is throttled back so as to act as a brake, so again the speed is kept from departing more than a barely detectable amount from the goal speed (brakes being needed only if the hill is too steep). The result is that the cruise control system maintains a speed close to the goal as the car goes up and down hills, and as other disturbances such as wind affect the car's speed. This is all done without any planning of specific actions, and without any blind reactions to stimuli. Indeed, the cruise control system does not sense disturbances such as wind pressure at all, it only senses the controlled variable, speed. Nor does it control the power generated by the engine, it uses the 'behavior' of engine power as its means to control the sensed speed.
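A minimal simulation of such a loop (all constants are illustrative) makes the point concrete: the controller never senses the hill, only the speed, yet its varying output keeps the sensed speed near the goal.

```python
# Minimal cruise-control loop; gains and car dynamics are illustrative.
goal = 100.0        # reference speed (km/h)
speed = 95.0        # sensed (controlled) variable
throttle = 0.0      # output quantity
gain, dt = 2.0, 0.1

for step in range(300):
    hill = 5.0 if 100 <= step < 200 else 0.0       # unsensed disturbance
    error = goal - speed                           # comparator: r - p
    throttle += gain * error * dt                  # integrate error into output
    speed += (throttle - hill - 0.5 * speed) * dt  # simple car dynamics

print(round(speed, 2))   # stays near the 100.0 goal despite the hill
```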
The same principles of negative feedback control (including the ability to nullify the effects of unpredictable external or internal disturbances) apply to living control systems. Implications of these principles are intensively studied, for example, in biological and medical cybernetics and in systems biology.
The thesis of PCT is that animals and people do not control their behavior; rather, they vary their behavior as their means for controlling their perceptions, with or without external disturbances. This contradicts the historical and still widespread assumption that behavior is the final result of stimulus inputs and cognitive plans.
== The methodology of modeling, and PCT as model ==
The principal datum in PCT methodology is the controlled variable. The fundamental step of PCT research, the test for controlled variables, begins with the slow and gentle application of disturbing influences to the state of a variable in the environment which the researcher surmises is already under control by the observed organism. It is essential not to overwhelm the organism's ability to control, since that is what is being investigated. If the organism changes its actions just so as to prevent the disturbing influence from having the expected effect on that variable, that is strong evidence that the experimental action disturbed a controlled variable. It is crucially important to distinguish the perceptions and point of view of the observer from those of the observed organism. It may take a number of variations of the test to isolate just which aspect of the environmental situation is under control, as perceived by the observed organism.
PCT employs a black box methodology. The controlled variable as measured by the observer corresponds quantitatively to a reference value for a perception that the organism is controlling. The controlled variable is thus an objective index of the purpose or intention of those particular behavioral actions by the organism—the goal which those actions consistently work to attain despite disturbances. With few exceptions, in the current state of neuroscience this internally maintained reference value is seldom directly observed as such (e.g. as a rate of firing in a neuron), since few researchers trace the relevant electrical and chemical variables by their specific pathways while a living organism is engaging in what we externally observe as behavior. However, when a working negative feedback system simulated on a digital computer performs essentially identically to observed organisms, then the well understood negative feedback structure of the simulation or model (the white box) is understood to demonstrate the unseen negative feedback structure within the organism (the black box).
Data for individuals are not aggregated for statistical analysis; instead, a generative model is built which replicates the data observed for individuals with very high fidelity (0.95 or better). To build such a model of a given behavioral situation requires careful measurements of three observed variables:
qi, the input quantity (the controlled variable);
qo, the output quantity (the organism's behavioral actions affecting qi);
d, the disturbance (environmental influences on qi other than the organism's output).
A fourth value, the internally maintained reference r (a variable ′setpoint′), is deduced from the value at which the organism is observed to maintain qi, as determined by the test for controlled variables (described at the beginning of this section).
With two variables specified, the controlled input qi and the reference r, a properly designed control system, simulated on a digital computer, produces outputs qo that almost precisely oppose unpredictable disturbances d to the controlled input. Further, the variance from perfect control accords well with that observed for living organisms. Perfect control would result in zero effect of the disturbance, but living organisms are not perfect controllers, and the aim of PCT is to model living organisms. When a computer simulation performs with >95% conformity to experimentally measured values, opposing the effect of unpredictable changes in d by generating (nearly) equal and opposite values of qo, it is understood to model the behavior and the internal control-loop structure of the organism.
By extension, the elaboration of the theory constitutes a general model of cognitive process and behavior. With every specific model or simulation of behavior that is constructed and tested against observed data, the general model that is presented in the theory is exposed to potential challenge that could call for revision or could lead to refutation.
== Mathematics ==
To illustrate the mathematical calculations employed in a PCT simulation, consider a pursuit tracking task in which the participant keeps a mouse cursor aligned with a moving target on a computer monitor.
The model assumes that a perceptual signal within the participant represents the magnitude of the input quantity qi. (This has been demonstrated to be a rate of firing in a neuron, at least at the lowest levels.) In the tracking task, the input quantity is the vertical distance between the target position T and the cursor position C, and the random variation of the target position acts as the disturbance d of that input quantity. This suggests that the perceptual signal p quantitatively represents the cursor position C minus the target position T, as expressed in the equation p=C–T.
Between the perception of target and cursor and the construction of the signal representing the distance between them there is a delay of τ milliseconds, so that the working perceptual signal at time t represents the target-to-cursor distance at a prior time, t – τ. Consequently, the equation used in the model is
1. p(t) = C(t–τ) – T(t–τ)
The negative feedback control system receives a reference signal r which specifies the magnitude of the given perceptual signal which is currently intended or desired. (For the origin of r within the organism, see under "A hierarchy of control", below.) Both r and p are input to a simple neural structure with r excitatory and p inhibitory. This structure is called a "comparator". The effect is to subtract p from r, yielding an error signal e that indicates the magnitude and sign of the difference between the desired magnitude r and the currently input magnitude p of the given perception. The equation representing this in the model is:
2. e = r–p
The error signal e must be transformed to the output quantity qo (representing the participant's muscular efforts affecting the mouse position). Experiments have shown that in the best model for the output function, the mouse velocity Vcursor is proportional to the error signal e by a gain factor G (that is, Vcursor = G*e). Thus, when the perceptual signal p is smaller than the reference signal r, the error signal e has a positive sign, and from it the model computes an upward velocity of the cursor that is proportional to the error.
The next position of the cursor Cnew is the current position Cold plus the velocity Vcursor times the duration dt of one iteration of the program. By simple algebra, we substitute G*e (as given above) for Vcursor, yielding a third equation:
3. Cnew = Cold + G*e*dt
These three simple equations or program steps constitute the simplest form of the model for the tracking task. When these three simultaneous equations are evaluated over and over with similarly distributed random disturbances d of the target position that the human participant experienced, the output positions and velocities of the cursor duplicate the participant's actions in the tracking task above within 4.0% of their peak-to-peak range, in great detail.
This simple model can be refined with a damping factor d which reduces the discrepancy between the model and the human participant to 3.6% when the disturbance d is set to maximum difficulty.
3'. Cnew = Cold + [(G*e)–(d*Cold)]*dt
Detailed discussion of this model in (Powers 2008) includes both source and executable code, with which the reader can verify how well this simple program simulates real behavior. No consideration is needed of possible nonlinearities such as the Weber-Fechner law, potential noise in the system, continuously varying angles at the joints, and many other factors that could afflict performance if this were a simple linear model. No inverse kinematics or predictive calculations are required. The model simply reduces the discrepancy between input p and reference r continuously as it arises in real time, and that is all that is required—as predicted by the theory.
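The three equations transcribe directly into a runnable sketch; the gain, time step, delay, and disturbance below are illustrative placeholders rather than fitted values from any experiment.

```python
import numpy as np

# Equations 1-3 of the tracking model; all constants are illustrative.
G, dt, delay = 5.0, 0.01, 8             # gain, step (s), delay in steps (~tau)
steps = 2000
T = np.cumsum(np.random.default_rng(2).normal(0, 0.05, steps))  # target path
C = np.zeros(steps)                     # cursor positions
r = 0.0                                 # reference: zero target-cursor distance

for t in range(1, steps):
    tau = max(t - delay, 0)
    p = C[tau] - T[tau]                 # eq. 1: delayed perceptual signal
    e = r - p                           # eq. 2: error signal
    C[t] = C[t - 1] + G * e * dt        # eq. 3: integrate output velocity

print(np.mean(np.abs(C[200:] - T[200:])))  # cursor stays near the moving target
```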
== Distinctions from engineering control theory ==
In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'. In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems.
In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization.
Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory with the PCT implementation as a hierarchy of five simple control systems.
== A hierarchy of control ==
Perceptions, in PCT, are constructed and controlled in a hierarchy of levels. For example, visual perception of an object is constructed from differences in light intensity or differences in sensations such as color at its edges. Controlling the shape or location of the object requires altering the perceptions of sensations or intensities (which are controlled by lower-level systems). This organizing principle is applied at all levels, up to the most abstract philosophical and theoretical constructs.
The Russian physiologist Nicolas Bernstein independently came to the same conclusion that behavior has to be multiordinal—organized hierarchically, in layers. A simple problem led to this conclusion at about the same time both in PCT and in Bernstein's work. The spinal reflexes act to stabilize limbs against disturbances. Why do they not prevent centers higher in the brain from using those limbs to carry out behavior? Since the brain obviously does use the spinal systems in producing behavior, there must be a principle that allows the higher systems to operate by incorporating the reflexes, not just by overcoming them or turning them off. The answer is that the reference value (setpoint) for a spinal reflex is not static; rather, it is varied by higher-level systems as their means of moving the limbs (servomechanism). This principle applies to higher feedback loops, as each loop presents the same problem to subsystems above it.
Whereas an engineered control system has a reference value or setpoint adjusted by some external agency, the reference value for a biological control system cannot be set in this way. The setpoint must come from some internal process. If there is a way for behavior to affect it, any perception may be brought to the state momentarily specified by higher levels and then be maintained in that state against unpredictable disturbances. In a hierarchy of control systems, higher levels adjust the goals of lower levels as their means of approaching their own goals set by still-higher systems. This has important consequences for any proposed external control of an autonomous living control system (organism). At the highest level, reference values (goals) are set by heredity or adaptive processes.
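The principle can be made concrete with a hedged two-level sketch in Python. The gains, the goal value, and the first-order "muscle and load" dynamics are hypothetical, and real PCT output functions are often integrating rather than proportional; the point is only that the higher loop never acts on the world directly, but varies the reference of the lower loop.

```python
class Loop:
    """One control loop with a simple proportional output function."""
    def __init__(self, gain):
        self.gain = gain

    def step(self, perception, reference):
        return self.gain * (reference - perception)   # output = G * error

# Hypothetical two-level hierarchy: a position loop does not act on the
# world directly; its output is the *reference* for a velocity loop.
pos_loop, vel_loop = Loop(gain=2.0), Loop(gain=20.0)
position, velocity, disturbance, dt = 0.0, 0.0, 0.7, 0.01
for _ in range(5000):
    ref_velocity = pos_loop.step(position, 5.0)    # higher error sets lower reference
    force = vel_loop.step(velocity, ref_velocity)  # lower loop drives the "muscle"
    velocity += (force - 0.5 * velocity + disturbance) * dt  # simple plant dynamics
    position += velocity * dt
print(round(position, 3))   # settles near the goal of 5.0 despite the disturbance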
== Reorganization in evolution, development, and learning ==
If an organism controls inappropriate perceptions, or if it controls some perceptions to inappropriate values, then it is less likely to bring progeny to maturity, and may die. Consequently, by natural selection successive generations of organisms evolve so that they control those perceptions that, when controlled with appropriate setpoints, tend to maintain critical internal variables at optimal levels, or at least within non-lethal limits. Powers called these critical internal variables "intrinsic variables" (Ashby's "essential variables").
The mechanism that influences the development of structures of perceptions to be controlled is termed "reorganization", a process within the individual organism that is subject to natural selection just as is the evolved structure of individuals within a species.
This "reorganization system" is proposed to be part of the inherited structure of the organism. It changes the underlying parameters and connectivity of the control hierarchy in a random-walk manner. There is a basic continuous rate of change in intrinsic variables which proceeds at a speed set by the total error (and stops at zero error), punctuated by random changes in direction in a hyperspace with as many dimensions as there are critical variables. This is a more or less direct adaptation of Ashby's "homeostat", first adopted into PCT in the 1960 paper and then changed to use E. coli's method of navigating up gradients of nutrients, as described by Koshland (1980).
Reorganization may occur at any level when loss of control at that level causes intrinsic (essential) variables to deviate from genetically determined set points. This is the basic mechanism that is involved in trial-and-error learning, which leads to the acquisition of more systematic kinds of learning processes.
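The E. coli-style scheme lends itself to a compact sketch. The following Python is an illustration under stated assumptions, not Powers' implementation: a parameter vector drifts in a fixed random direction while a stand-in for total intrinsic error keeps falling, and "tumbles" to a new random direction when the error rises.

```python
import math, random

def ecoli_reorganize(evaluate, params, rate=0.05, steps=400, seed=0):
    """E. coli-style reorganization sketch: keep drifting the parameters
    in the current random direction while total error falls; tumble to a
    new random direction when it rises."""
    rng = random.Random(seed)
    n = len(params)

    def new_direction():
        v = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    direction = new_direction()
    best = evaluate(params)
    for _ in range(steps):
        trial = [p + rate * d for p, d in zip(params, direction)]
        err = evaluate(trial)
        if err <= best:                 # error falling: keep swimming
            params, best = trial, err
        else:                           # error rising: tumble
            direction = new_direction()
    return params, best

# Toy stand-in for intrinsic error: squared distance from an unknown optimum.
target = [0.8, -0.3, 1.5]
err_fn = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
params, err = ecoli_reorganize(err_fn, [0.0, 0.0, 0.0])
print([round(p, 2) for p in params], round(err, 4))
```

Despite choosing directions blindly, the procedure reliably descends the error surface, which is the essential point of the E. coli analogy.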
== Psychotherapy: the method of levels (MOL) ==
The reorganization concept has led to a method of psychotherapy called the method of levels (MOL). Using MOL, the therapist aims to help the patient shift his or her awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place.
== Neuroscience ==
=== Learning ===
Currently, no one theory has been agreed upon to explain the synaptic, neuronal or systemic basis of learning. Prominent since 1973, however, is the idea that long-term potentiation (LTP) of populations of synapses induces learning through both pre- and postsynaptic mechanisms. LTP is a form of Hebbian learning, which proposes that high-frequency, tonic activation of a circuit of neurones increases the efficacy with which they are activated and the size of their response to a given stimulus, as compared to a standard neurone (Hebb, 1949). These mechanisms are the principles behind the famously simple summary of Hebbian learning: "Cells that fire together, wire together".
LTP has received much support since it was first observed by Terje Lømo in 1966 and remains the subject of many modern studies and of clinical research. However, possible alternative mechanisms underlying LTP were presented by Enoki, Hu, Hamilton and Fine in 2009, in the journal Neuron. They concede that LTP is the basis of learning, but propose, first, that LTP occurs at individual synapses and that this plasticity is graded (rather than binary) and bidirectional; second, that the synaptic changes are expressed solely presynaptically, via changes in the probability of transmitter release; and finally, that the occurrence of LTP could be age-dependent, since the plasticity of a neonatal brain is higher than that of a mature one. The theories therefore differ: one proposes an on/off occurrence of LTP by pre- and postsynaptic mechanisms, while the other proposes only presynaptic changes, graded ability, and age-dependence.
These theories do agree on one element of LTP, namely, that it must occur through physical changes to the synaptic membrane/s, i.e. synaptic plasticity. Perceptual control theory encompasses both of these views. It proposes the mechanism of 'reorganisation' as the basis of learning. Reorganisation occurs within the inherent control system of a human or animal by restructuring the inter- and intraconnections of its hierarchical organisation, akin to the neuroscientific phenomenon of neural plasticity. This reorganisation initially allows the trial-and-error form of learning, which is seen in babies, and then progresses to more structured learning through association, apparent in infants, and finally to systematic learning, covering the adult ability to learn from both internally and externally generated stimuli and events. In this way, PCT provides a valid model for learning that combines the biological mechanisms of LTP with an explanation of the progression and change of mechanisms associated with developmental ability.
Powers in 2008 produced a simulation of arm co-ordination. He suggested that in order to move the arm, fourteen control systems that control fourteen joint angles are involved, and they reorganise simultaneously and independently. It was found that, for optimum performance, the output functions must be organised in such a way that each control system's output affects only the one environmental variable it is perceiving. In this simulation, the reorganising process works as it should, and just as Powers suggests it works in humans: reducing outputs that cause error and increasing those that reduce error. Initially the disturbances have large effects on the angles of the joints, but over time the joint angles match the reference signals more closely as the system reorganises. Powers suggests that in order to achieve coordination of joint angles to produce desired movements, the brain does not calculate how multiple joint angles must change to produce a given movement, but instead uses negative feedback systems to generate the joint angles that are required. A single reference signal that is varied in a higher-order system can generate a movement that requires several joint angles to change at the same time.
=== Hierarchical organisation ===
Botvinick in 2008 proposed that one of the founding insights of the cognitive revolution was the recognition of hierarchical structure in human behavior. Despite decades of research, however, the computational mechanisms underlying hierarchically organized behavior are still not fully understood. Badre, Hoffman, Cooney & D'Esposito in 2009 proposed that a fundamental goal in cognitive neuroscience is to characterize the functional organization of the frontal cortex that supports the control of action.
Recent neuroimaging data has supported the hypothesis that the frontal lobes are organized hierarchically, such that control is supported in progressively caudal regions as control moves to more concrete specification of action. However, it is still not clear whether lower-order control processors are differentially affected by impairments in higher-order control when between-level interactions are required to complete a task, or whether there are feedback influences of lower-level on higher-level control.
Botvinick in 2008 found that all existing models of hierarchically structured behavior share at least one general assumption: that the hierarchical, part–whole organization of human action is mirrored in the internal or neural representations underlying it. Specifically, the assumption is that there exist representations not only of low-level motor behaviors, but also separable representations of higher-level behavioral units. The latest crop of models provides new insights, but also poses new or refined questions for empirical research, including how abstract action representations emerge through learning, how they interact with different modes of action control, and how they are organized within the prefrontal cortex (PFC).
Perceptual control theory (PCT) can provide an explanatory model of neural organisation that deals with the current issues. PCT describes the hierarchical character of behavior as being determined by control of hierarchically organized perception. Control systems in the body and in the internal environment of billions of interconnected neurons within the brain are responsible for keeping perceptual signals within survivable limits in the unpredictably variable environment from which those perceptions are derived. PCT does not propose that there is an internal model within which the brain simulates behavior before issuing commands to execute that behavior. Instead, one of its characteristic features is the principled lack of cerebral organisation of behavior. Rather, behavior is the organism's variable means to reduce the discrepancy between perceptions and reference values which are based on various external and internal inputs. Behavior must constantly adapt and change for an organism to maintain its perceptual goals. In this way, PCT can provide an explanation of abstract learning through spontaneous reorganisation of the hierarchy. PCT proposes that conflict occurs between disparate reference values for a given perception rather than between different responses, and that learning is implemented as trial-and-error changes of the properties of control systems, rather than any specific response being reinforced. In this way, behavior remains adaptive to the environment as it unfolds, rather than relying on learned action patterns that may not fit.
Hierarchies of perceptual control have been simulated in computer models and have been shown to provide a close match to behavioral data. For example, Marken conducted an experiment comparing the behavior of a perceptual control hierarchy computer model with that of six healthy volunteers in three experiments. The participants were required to keep the distance between a left line and a centre line equal to that of the centre line and a right line. They were also instructed to keep both distances equal to 2 cm. They had 2 paddles in their hands, one controlling the left line and one controlling the middle line. To do this, they had to resist random disturbances applied to the positions of the lines. As the participants achieved control, they managed to nullify the expected effect of the disturbances by moving their paddles. The correlation between the behavior of subjects and the model in all the experiments approached 0.99. It is proposed that the organization of models of hierarchical control systems such as this informs us about the organization of the human subjects whose behavior it so closely reproduces.
== Robotics ==
PCT has significant implications for robotics and artificial intelligence. W. T. Powers introduced the application of PCT to robotics in 1978, early in the era of home computing.
The comparatively simple architecture, a hierarchy of perceptual controllers, has no need for complex models of the external world, inverse kinematics, or computation from input-output mappings. Traditional approaches to robotics generally depend upon the computation of actions in a constrained environment. Robots designed this way are inflexible and clumsy, unable to cope with the dynamic nature of the real world. PCT robots inherently resist and counter the chaotic, unpredictable disturbances to their controlled inputs which occur in an unconstrained environment. The PCT robotics architecture has recently been applied to a number of real-world robotic systems, including robotic rovers, balancing robots and robot arms. Some commercially available robots which demonstrate good control in a naturalistic environment use a control-theoretic architecture which requires much more intensive computation; for example, Boston Dynamics has said that its robots have historically leveraged model predictive control.
== Current situation and prospects ==
The preceding explanation of PCT principles provides justification of how this theory can provide a valid explanation of neural organisation and how it can explain some of the current issues of conceptual models.
Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by adjusting the reference levels (goals) of the lower levels, in effect telling the lower levels what to perceive.
While many computer demonstrations of principles have been developed, the proposed higher levels are difficult to model because too little is known about how the brain works at these levels. Isolated higher-level control processes can be investigated, but models of an extensive hierarchy of control are still only conceptual, or at best rudimentary.
Perceptual control theory has not been widely accepted in mainstream psychology, but it has been used effectively in a considerable range of domains in human factors, clinical psychology, and psychotherapy (the "Method of Levels"); it is the basis for a considerable body of research in sociology; and it has formed the conceptual foundation for the reference model used by a succession of NATO research study groups.
Recent approaches use principles of perceptual control theory to provide new algorithmic foundations for artificial intelligence and machine learning.
== Selected bibliography ==
Cziko, Gary (1995). Without miracles: Universal selection theory and the second Darwinian revolution. Cambridge, MA: MIT Press (A Bradford Book). ISBN 0-262-53147-X
Cziko, Gary (2000). The things we do: Using the lessons of Bernard and Darwin to understand the what, how, and why of our behavior. Cambridge, MA: MIT Press (A Bradford Book). ISBN 0-262-03277-5
Forssell, Dag (Ed.), 2016. Perceptual Control Theory, An Overview of the Third Grand Theory in Psychology: Introductions, Readings, and Resources. Hayward, CA: Living Control Systems Publishing. ISBN 978-1938090134.
Mansell, Warren (Ed.), (2020). The Interdisciplinary Handbook of Perceptual Control Theory: Living Control Systems IV. Cambridge: Academic Press. ISBN 978-0128189481.
Marken, Richard S. (1992) Mind readings: Experimental studies of purpose. Benchmark Publications: New Canaan, CT.
Marken, Richard S. (2002) More mind readings: Methods and models in the study of purpose. Chapel Hill, NC: New View. ISBN 0-944337-43-0
Pfau, Richard H. (2017). Your Behavior: Understanding and Changing the Things You Do. St. Paul, MN: Paragon House. ISBN 9781557789273
Plooij, F. X. (1984). The behavioral development of free-living chimpanzee babies and infants. Norwood, N.J.: Ablex.
Plooij, F. X. (2003). "The trilogy of mind". In M. Heimann (Ed.), Regression periods in human infancy (pp. 185–205). Mahwah, NJ: Erlbaum.
Powers, William T. (1973). Behavior: The control of perception. Chicago: Aldine de Gruyter. ISBN 0-202-25113-6. [2nd exp. ed. = Powers (2005)].
Powers, William T. (1989). Living control systems. [Selected papers 1960–1988.] New Canaan, CT: Benchmark Publications. ISBN 0-9647121-3-X.
Powers, William T. (1992). Living control systems II. [Selected papers 1959–1990.] New Canaan, CT: Benchmark Publications.
Powers, William T. (1998). Making sense of behavior: The meaning of control. New Canaan, CT: Benchmark Publications. ISBN 0-9647121-5-6.
Powers, William T. (2005). Behavior: The control of perception. New Canaan: Benchmark Publications. ISBN 0-9647121-7-2. [2nd exp. ed. of Powers (1973). Chinese tr. (2004) Guongdong Higher Learning Education Press, Guangzhou, China. ISBN 7-5361-2996-3.]
Powers, William T. (2008). Living Control Systems III: The fact of control. [Mathematical appendix by Dr. Richard Kennaway. Includes computer programs for the reader to demonstrate and experimentally test the theory.] New Canaan, CT: Benchmark Publications. ISBN 978-0-9647121-8-8.
Powers, William T., Clark, R. K., and McFarland, R. L. (1960). "A general feedback theory of human behavior [Part 1; Part 2]". Perceptual and Motor Skills 11, 71–88; 309–323.
Powers, William T. and Runkel, Philip J. 2011. Dialogue concerning the two chief approaches to a science of life: Word pictures and correlations versus working models. Hayward, CA: Living Control Systems Publishing ISBN 0-9740155-1-2.
Robertson, R. J. & Powers, W.T. (1990). Introduction to modern psychology: the control-theory view. Gravel Switch, KY: Control Systems Group.
Robertson, R. J., Goldstein, D.M., Mermel, M., & Musgrave, M. (1999). Testing the self as a control system: Theoretical and methodological issues. Int. J. Human-Computer Studies, 50, 571–580.
Runkel, Philip J[ulian]. 1990. Casting Nets and Testing Specimens: Two Grand Methods of Psychology. New York: Praeger. ISBN 0-275-93533-7. [Repr. 2007, Hayward, CA: Living Control Systems Publishing ISBN 0-9740155-7-1.]
Runkel, Philip J[ulian]. (2003). People as living things. Hayward, CA: Living Control Systems Publishing ISBN 0-9740155-0-4
Taylor, Martin M. (1999). "Editorial: Perceptual Control Theory and its Application," International Journal of Human-Computer Studies, Vol 50, No. 6, June 1999, pp. 433–444.
=== Sociology ===
McClelland, Kent (1994). "Perceptual Control and Social Power". Sociological Perspectives. 37 (4): 461–496. doi:10.2307/1389276. JSTOR 1389276. S2CID 144872350.
McClelland, Kent (2004). "The Collective Control of Perceptions: Constructing Order from Conflict". International Journal of Human-Computer Studies. 60: 65–99. doi:10.1016/j.ijhcs.2003.08.003.
McClelland, Kent and Thomas J. Fararo, eds. (2006). Purpose, Meaning, and Action: Control Systems Theories in Sociology. New York: Palgrave Macmillan.
McPhail, Clark. 1991. The Myth of the Madding Crowd. New York: Aldine de Gruyter.
== References ==
== External links ==
=== Articles ===
PCT for the Beginner by William T. Powers (2007)
The Dispute Over Control theory by William T. Powers (1993) – requires access approval
Demonstrations of perceptual control by Gary Cziko (2006)
=== Audio ===
Interview with William T. Powers on origin and history of PCT (Part One, 20060722, 58.7M)
Interview with William T. Powers on origin and history of PCT (Part Two, 20070728, 57.7M)
=== Videos ===
Demonstration of a robot arm with visual servoing and pressure control based on principles of PCT
=== Websites ===
The International Association for Perceptual Control Systems – The IAPCT website.
PCTWeb – Warren Mansell's comprehensive website on PCT.
Living Control Systems Publishing – resources and books about PCT.
Mind Readings – Rick Marken's website on PCT, with many interactive demonstrations.
Method of Levels – Timothy Carey's website on the Method of Levels.
Perceptual Robots – The PCT methodology and architecture applied to robotics.
ResearchGate Project – Recent research products.
Soft systems methodology (SSM) is an organised way of thinking about problematic social situations and about managing change through action. It was developed in England by academics at the Lancaster University Systems Department on the basis of a ten-year action research programme.
== Overview ==
The Soft Systems Methodology was developed primarily by Peter Checkland, through ten years of research with colleagues such as Brian Wilson. The method grew out of numerous earlier systems engineering approaches, primarily in response to the inability of traditional 'hard' systems thinking to account for larger organisational issues involving many complex relationships. SSM's primary use is in the analysis of these complex situations, where there are divergent views about the definition of the problem.
These complex situations are known as "soft problems". They are usually real-world problems in which the goals and purposes are themselves problematic. Examples of soft problems include: How to improve the delivery of health services? and How to manage homelessness among young people? Soft approaches take it as given that people's views of the world, and their preferences within it, change continually.
Depending on the circumstances of a situation, agreeing on the problem may be difficult, as there may be multiple factors to take into consideration, such as the different kinds of methods used to tackle such problems. Checkland accordingly moved away from the idea of 'obvious' problems and began working with problem situations, building conceptual models to serve as a source of questions about the situation; in this way SSM emerged as an organised learning system.
Purposeful activity models are always declared relative to a particular worldview, so they are never models of real-world action; rather, because they are relevant to discourse and argument about real-world action, they came to be called epistemological devices for structuring debate. The distinction between the everyday world and systems thinking draws attention to the conscious use of systems language in developing intellectual devices which are used to structure debate, or an exploration of the problem situation being addressed.
In its 'classic' form the methodology consists of seven steps, with initial appreciation of the problem situation leading to the modelling of several human activity systems that might be thought relevant to the problem situation. These models are used to structure a discussion in which the relevant decision-makers explore the definition of the problem together; only then are they likely to reach a mutual agreement, settling arguments over exactly what kinds of change would be both systemically desirable and feasible in the situation at hand.
Later explanations of the ideas give a more sophisticated view of this systemic method and give more attention to locating the methodology with respect to its philosophical underpinnings. It is the earlier classical view which is most widely used in practice (created by Peter Checkland). A common criticism of this earlier methodology is that it follows an approach that is too linear. Checkland himself agreed that the earlier methodology is 'rather bald'. Most advanced SSM analysts will agree, though, that the classical view is an easy way for inexperienced analysts to learn the SSM methodology.
SSM has been successfully used as a business analysis methodology in various fields. Real-world examples of SSM's wide range of applicability include research applying SSM in the sugar industry leading to improvements in business partner relationships, successful use as an approach in project management by directly involving stakeholders, or aiding in business management by improving communication between stakeholders. It has proven to be a useful analysis approach for teaching and learning processes, as it does not require a specific problem to be identified as its starting point, which has led to "outside of the box" suggestions for improvement. SSM was even used by the UK government as part of the re-evaluation of their Structured Systems Analysis and Design Method (SSADM) system development methodology.
Even professional researchers show the same tendency to distort perceptions of the world rather than to change the mental structures with which we orient ourselves. The failure of classic systems engineering in rich 'management' problem situations during the research programme led to an examination of the adequacy of that kind of systems thinking.
The methodology has been described in several books and many academic articles.
SSM remains the most widely used and practical application of systems thinking, and other systems approaches such as critical systems thinking have incorporated many of its ideas.
== Representation evolution ==
The methodology as a whole developed gradually from 1972 to 1990. During this period, four different representations of SSM were designed, becoming more sophisticated and at the same time less structured and broader in scope.
=== Blocks and arrows (1972) ===
The first studies in the research programme were carried out in 1969, and the first account of what became SSM was published in a paper three years later titled "Towards a systems-based methodology for real-world problem solving" (Checkland 1972). In this paper, soft systems methodology is presented as a sequence of stages with iteration back to previous stages. The sequence was as follows: analysis, root definition of relevant systems, conceptualisation, comparison and definition of changes, selection of change to implement, design of change, and implementation and appraisal.
The overall aim to implement change, rather than to introduce or enhance a system, shows that this thinking was already present in these early experiences, even if the straight arrows in the diagrams and the rectangular blocks in some of the models can now be misleading.
=== Seven stages (1981) ===
Soft systems methodology (SSM) is used to analyse complex organisational and systemic problems that do not have an obvious solution. The methodology incorporates seven steps to arrive at a viable solution for the problem defined. The seven steps are:
Enter the situation in which a problem has been identified
Express the problem situation
Formulate root definitions of relevant systems of purposeful activity
Build conceptual models of the systems named in the root definitions: the root definition describes the core purpose of a system, and each conceptual model sets out the activities the system would need to perform to achieve that purpose. This stage arises from raising concerns and capturing problems within an organisation, then looking into ways they can be solved.
The comparison stage: The systems thinker is to compare the perceived conceptual models against an intuitive perception of a real-world situation or scenario. Checkland defines this stage as the comparison of Stage 4 with Stage 2, formally, "Comparison of 4 with 2". Parts of the problem situation analysed in Stage 2 are to be examined alongside the conceptual model(s) created in Stage 4, this helps to achieve a "complete" comparison.
The problems identified should now be matched with changes that are both feasible and desirable and that will distinctly help the problem situation within the given system. Human activity systems and other aspects of the system should be considered, so that soft systems thinking, and Mumford's needs, can be satisfied by the potential changes. These potential changes should not be acted on until step 7, but they should be feasible enough to act upon to improve the problem situation.
Take action to improve the problem situation
=== Two streams (1988) ===
The two-stream model of SSM recognizes the crucially important role of history in human affairs: for a given group of people, their history determines what will be noticed as significant and how it will be judged. This expression of SSM is presented as an approach embodying not only a logic-based stream of analysis (via activity models) but also a cultural and political stream which enables judgements to be made about the accommodations between conflicting interests that might be reachable by the people concerned and that would enable action to be taken.
This particular expression of SSM removes the dividing line between the world of the problem situation and the systems thinking world.
=== Four main activities (1990) ===
The four-activities model is iconic rather than descriptive, and it subsumes the cultural stream of analysis within the four activities. The seven-stage model gave an approach that applies to real-world situations large and small, in both the public and the private sector. The four main activities were created to capture the more flexible use of SSM and to bring more of the cultural aspects of the workplace into the methodology; they show that SSM does not have to be used rigidly, reflecting real life rather than constraining it. The four activities are:
Finding out about a problem situation, including culturally/politically
Formulating some relevant purposeful activity models: creating specific diagrammatic illustrations of activity processes that occur in an organisation, which show the relevant processes in a structured order and depict any problem situation visually by showing the flow from one action to another. An example would be a 'Conceptual Model', a representation of a system's human actions, or an 'Architecture System Map', a visual representation of the implementation of sections of a software system.
Debating the situation, using the models, seeking from that debate both:
changes which would improve the situation and are regarded as both desirable and (culturally) feasible, and
the accommodations between conflicting interests which will enable action
Taking action in the situation to bring about improvement
== CATWOE ==
In 1975, David Smyth, a researcher in Checkland's department, observed that SSM was most successful when the root definition included certain elements. These elements, captured in the mnemonic CATWOE, identify the people, processes and environment that contribute to a situation, issue or problem requiring analysis.
This is used to prompt thinking about what the business is trying to achieve. In further detail, CATWOE helps explore a system by underlining the roots of the process that turns inputs into outputs, and it helps businesses by analysing the gap between the current system and a more useful one. Business perspectives help the business analyst to consider the impact of any proposed solution on the people involved; this mainly involves stakeholders, allowing them to test the assumptions they have made, since stakeholders will hold different opinions about particular problems and opportunities. Using its six elements, the CATWOE method helps to achieve better, attainable results and to avoid additional problems.
The six elements of CATWOE are:
Customers – Who are the beneficiaries of the highest level business process and how does the issue affect them?
Actors - The person or people directly involved in the transformation (T) part of CATWOE (Checkland & Scholes, 1999, p. 35). Implementation and involvement by the actors allows for the input to be transformed into an output (Checkland & Scholes, 1999, p. 35). Actors are also stakeholders as their actions can affect the transformation process and the system as a whole. As actors are directly involved, they also have a 'holon' by which they interpret the world outside (Checkland & Scholes, 1999, p. 19) and so how they view the situation would impact their work and success.
Transformation process – What is the transformation that lies at the heart of the system: transforming grapes into wine, unsold goods into sold goods, a societal need into a societal need met? Change is the centre of the transformation process: an input is converted into an output that holds greater value. For example, when grapes are converted into wine, the purpose of the transformation is to supply consumers with a product of greater value than the grapes themselves, thereby sustaining the value of the product.
Weltanschauung (or Worldview) – What is the big picture, and what are the wider impacts of the issue? "The word Weltanschauung is a German word that has no real English equivalent. It refers to 'all the things that you take for granted' and is related to our values." The closest translation would be "world view": the collective summary of the stakeholders' beliefs that gives meaning to the root definition and to the model of the human activity system as a whole.
Owner – Who owns the process or situation being investigated and what role will they play in the solution?
Environmental constraints – What are the constraints and limitations that will impact the solution and its success?
CATWOE can also be related to holistic multi-benefit analysis, owing to the multiple perspectives that are taken into consideration. It furthers understanding of the perspectives and concerns of the different stakeholders involved in the human activity systems, adhering to the core values of soft systems thinking by allowing multiple perspectives to be appreciated alongside good knowledge management.
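As an illustration only, an analyst might record the six elements of one root definition as a simple data structure. The field names and the winery entries below are hypothetical and are not part of Checkland and Smyth's formulation:

```python
from dataclasses import dataclass

@dataclass
class CATWOE:
    """Hypothetical record of the six CATWOE elements for one root definition."""
    customers: str        # who benefits from (or is affected by) the system
    actors: str           # who carries out the transformation
    transformation: str   # the input -> output process at the heart of the system
    worldview: str        # the Weltanschauung that makes the transformation meaningful
    owner: str            # who could stop or change the system
    environment: str      # constraints the system takes as given

winery = CATWOE(
    customers="wine consumers",
    actors="winery staff",
    transformation="grapes -> wine",
    worldview="adding value to grapes by turning them into wine",
    owner="winery management",
    environment="food-safety regulation, harvest yields",
)
print(winery.transformation)
```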
== Human activity system ==
A human activity system can be defined as a "notional system (i.e. not existing in any tangible form) where human beings are undertaking some activities that achieve some purpose".
Within most systems there will be many human activity systems integrated within it to form the whole system. Human activity systems can be used in SSM to establish worldviews (Weltanschauung) for people involved in problematic situations. The assumption with all human activity systems is that all actors within them will act accordingly with their own worldviews.
== See also ==
Enterprise modelling
Hard systems
Holism
List of thought processes
Problem structuring methods
Rich picture
Structured systems analysis and design method
Systems theory
Systems philosophy
== References ==
== Further reading ==
=== Books ===
Avison, D., & Fitzgerald, G. (2006). Information Systems Development. methodologies, techniques & tools (4th ed.). McGraw-Hill Education.
Wilson, B. and van Haperen, K. (2015) Soft Systems Thinking, Methodology and the Management of Change (including the history of the systems engineering department at Lancaster University), London: Palgrave MacMillan. ISBN 978-1-137-43268-1.
Checkland, P.B. and J. Scholes (2001) Soft Systems Methodology in Action, in J. Rosenhead and J. Mingers (eds), Rational Analysis for a Problematic World Revisited. Chichester: Wiley
Checkland, P.B. & Poulter, J. (2006) Learning for Action: A short definitive account of Soft Systems Methodology and its use for Practitioners, teachers and Students, Wiley, Chichester. ISBN 0-470-02554-9
Checkland, P.B. Systems Thinking, Systems Practice, John Wiley & Sons Ltd. 1981, 1998. ISBN 0-471-98606-2
Checkland, P.B. and S. Holwell Information, Systems and Information Systems, John Wiley & Sons Ltd. 1998. ISBN 0-471-95820-4
Wilson, B. Systems: Concepts, Methodologies and Applications, John Wiley & Sons Ltd. 1984, 1990. ISBN 0-471-92716-3
Wilson, B. Soft Systems Methodology, John Wiley & Sons Ltd. 2001. ISBN 0-471-89489-3
=== Articles ===
Dale Couprie et al. (2007) Soft Systems Methodology Department of Computer Science, University of Calgary.
Mark P. Mobach, Jos J. van der Werf & F.J. Tromp (2000). The art of modelling in SSM, in papers ISSS meeting 2000.
Ian Bailey (2008) MODAF and Soft Systems. white paper.
Ivanov, K. (1991). Critical systems thinking and information technology. - In J. of Applied Systems Analysis, 18, 39-55. (ISSN 0308-9541). A review of soft systems methodology as related to critical systems thinking.
Michael Rada (2015-12-01). INDUSTRY 5.0 launch. White paper.
Michael Rada (2015-02-03). INDUSTRY 5.0 DEFINITION. White paper.
== External links ==
Peter Checkland homepage.
Models for Change Soft Systems Methodology . Business Process Transformation, 1996.
Soft systems methodology Action research and evaluation on line, 2007.
Checkland and Smyth's CATWOE and Soft Systems Methodology, Business Open Learning Archive 2007.
Dynamical systems theory is an area of mathematics used to describe the behavior of complex dynamical systems, usually by employing differential or difference equations. When differential equations are employed, the theory is called continuous dynamical systems. From a physical point of view, continuous dynamical systems is a generalization of classical mechanics, a generalization where the equations of motion are postulated directly and are not constrained to be Euler–Lagrange equations of a least action principle. When difference equations are employed, the theory is called discrete dynamical systems. When the time variable runs over a set that is discrete over some intervals and continuous over other intervals, or is any arbitrary time-set such as a Cantor set, one gets dynamic equations on time scales. Some situations may also be modeled by mixed operators, such as differential-difference equations.
This theory deals with the long-term qualitative behavior of dynamical systems, and studies the nature of, and when possible the solutions of, the equations of motion of systems that are often primarily mechanical or otherwise physical in nature, such as planetary orbits and the behaviour of electronic circuits, as well as systems that arise in biology, economics, and elsewhere. Much of modern research is focused on the study of chaotic systems and bizarre systems.
This field of study is also called just dynamical systems, mathematical dynamical systems theory or the mathematical theory of dynamical systems.
== Overview ==
Dynamical systems theory and chaos theory deal with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible steady states?", or "Does the long-term behavior of the system depend on its initial condition?"
An important goal is to describe the fixed points, or steady states of a given dynamical system; these are values of the variable that do not change over time. Some of these fixed points are attractive, meaning that if the system starts out in a nearby state, it converges towards the fixed point.
Similarly, one is interested in periodic points, states of the system that repeat after several timesteps. Periodic points can also be attractive. Sharkovskii's theorem is an interesting statement about the number of periodic points of a one-dimensional discrete dynamical system.
Even simple nonlinear dynamical systems often exhibit seemingly random behavior that has been called chaos. The branch of dynamical systems that deals with the clean definition and investigation of chaos is called chaos theory.
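Both behaviors described above can be seen in the logistic map x_{n+1} = r x_n (1 - x_n), a standard one-dimensional discrete dynamical system. The short Python sketch below, with parameter values chosen purely for illustration, shows an attractive fixed point at small r and sensitive dependence on initial conditions at r = 4:

```python
def logistic(r, x):
    return r * x * (1.0 - x)

# Attractive fixed point: for r = 2.5 the map settles at x* = 1 - 1/r = 0.6.
x = 0.2
for _ in range(100):
    x = logistic(2.5, x)
print(round(x, 6))          # -> 0.6

# Sensitive dependence: for r = 4.0 two nearby starting points diverge quickly.
a, b = 0.2, 0.2 + 1e-9
for _ in range(40):
    a, b = logistic(4.0, a), logistic(4.0, b)
print(abs(a - b))           # order-one separation after about 40 steps
```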
== History ==
The concept of dynamical systems theory has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is given implicitly by a relation that gives the state of the system only a short time into the future.
Before the advent of fast computing machines, solving a dynamical system required sophisticated mathematical techniques and could only be accomplished for a small class of dynamical systems.
Some excellent presentations of mathematical dynamic system theory include Beltrami (1998), Luenberger (1979), Padulo & Arbib (1974), and Strogatz (1994).
== Concepts ==
=== Dynamical systems ===
The dynamical system concept is a mathematical formalization for any fixed "rule" that describes the time dependence of a point's position in its ambient space. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, and the number of fish each spring in a lake.
A dynamical system has a state determined by a collection of real numbers, or more generally by a set of points in an appropriate state space. Small changes in the state of the system correspond to small changes in the numbers. The numbers are also the coordinates of a geometrical space—a manifold. The evolution rule of the dynamical system is a fixed rule that describes what future states follow from the current state. The rule may be deterministic (for a given time interval one future state can be precisely predicted given the current state) or stochastic (the evolution of the state can only be predicted with a certain probability).
=== Dynamicism ===
Dynamicism, also termed the dynamic hypothesis in cognitive science or dynamic cognition, is an approach in cognitive science exemplified by the work of philosopher Tim van Gelder. It argues that differential equations are better suited to modelling cognition than more traditional computer models.
=== Nonlinear system ===
In mathematics, a nonlinear system is a system that is not linear—i.e., a system that does not satisfy the superposition principle. Less technically, a nonlinear system is any problem where the variable(s) to solve for cannot be written as a linear sum of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system as long as a particular solution is known.
== Related fields ==
=== Arithmetic dynamics ===
Arithmetic dynamics is a field that emerged in the 1990s that amalgamates two areas of mathematics, dynamical systems and number theory. Classically, discrete dynamics refers to the study of the iteration of self-maps of the complex plane or real line. Arithmetic dynamics is the study of the number-theoretic properties of integer, rational, p-adic, and/or algebraic points under repeated application of a polynomial or rational function.
=== Chaos theory ===
Chaos theory describes the behavior of certain dynamical systems – that is, systems whose state evolves with time – that may exhibit dynamics that are highly sensitive to initial conditions (popularly referred to as the butterfly effect). As a result of this sensitivity, which manifests itself as an exponential growth of perturbations in the initial conditions, the behavior of chaotic systems appears random. This happens even though these systems are deterministic, meaning that their future dynamics are fully defined by their initial conditions, with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.
=== Complex systems ===
Complex systems is a scientific field that studies the common properties of systems considered complex in nature, society, and science. It is also called complex systems theory, complexity science, the study of complex systems, or the sciences of complexity. The key problems of such systems are difficulties with their formal modeling and simulation. From this perspective, complex systems are defined in different research contexts on the basis of their different attributes.
The study of complex systems is bringing new vitality to many areas of science where a more typical reductionist strategy has fallen short. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines including neurosciences, social sciences, meteorology, chemistry, physics, computer science, psychology, artificial life, evolutionary computation, economics, earthquake prediction, molecular biology and inquiries into the nature of living cells themselves.
=== Control theory ===
Control theory is an interdisciplinary branch of engineering and mathematics, in part it deals with influencing the behavior of dynamical systems.
=== Ergodic theory ===
Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics.
=== Functional analysis ===
Functional analysis is the branch of mathematics, and specifically of analysis, concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of functional spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. This usage of the word functional goes back to the calculus of variations, implying a function whose argument is a function. Its use in general has been attributed to mathematician and physicist Vito Volterra and its founding is largely attributed to mathematician Stefan Banach.
=== Graph dynamical systems ===
The concept of graph dynamical systems (GDS) can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of graph dynamical systems is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
=== Projected dynamical systems ===
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation.
=== Symbolic dynamics ===
Symbolic dynamics is the practice of modelling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator.
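A standard small example, sketched here in Python under the assumption that the doubling map x -> 2x mod 1 is the system being coded: with the partition [0, 1/2) mapped to the symbol 0 and [1/2, 1) to the symbol 1, each orbit's itinerary is the binary expansion of its starting point, and applying the map corresponds to the shift operator dropping one symbol.

```python
# Symbolic dynamics sketch: the doubling map x -> 2x mod 1 acts on binary
# expansions as the shift operator (drop the leading binary digit).
def doubling(x):
    return (2.0 * x) % 1.0

def symbol(x):
    return 0 if x < 0.5 else 1

x = 0.3
symbols = []
for _ in range(10):
    symbols.append(symbol(x))
    x = doubling(x)
print(symbols)   # the itinerary reproduces the binary expansion of 0.3
```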
=== System dynamics ===
System dynamics is an approach to understanding the behaviour of systems over time. It deals with internal feedback loops and time delays that affect the behaviour and state of the entire system. What makes using system dynamics different from other approaches to studying systems is the language used to describe feedback loops with stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity.
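A minimal stock-and-flow sketch in Python, with hypothetical rates: a single stock is raised by an inflow and drained by an outflow whose strength grows with crowding, which yields the familiar nonlinear (logistic) approach to a limit even in this very simple structure.

```python
# Minimal stock-and-flow sketch (hypothetical parameters): a population
# stock with a births inflow and a crowding-limited deaths outflow.
stock, dt = 10.0, 0.1
for _ in range(600):
    births = 0.3 * stock                     # inflow proportional to the stock
    deaths = 0.3 * stock * (stock / 100.0)   # outflow grows with crowding
    stock += (births - deaths) * dt          # integrate the net flow
print(round(stock, 2))   # approaches the carrying capacity of 100
```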
=== Topological dynamics ===
Topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology.
== Applications ==
=== In biomechanics ===
In sports biomechanics, dynamical systems theory has emerged in the movement sciences as a viable framework for modeling athletic performance and efficiency. This is unsurprising, since dynamical systems theory has its roots in analytical mechanics. From a psychophysiological perspective, the human movement system is a highly intricate network of co-dependent sub-systems (e.g. respiratory, circulatory, nervous, skeletomuscular, perceptual) that are composed of a large number of interacting components (e.g. blood cells, oxygen molecules, muscle tissue, metabolic enzymes, connective tissue and bone). In dynamical systems theory, movement patterns emerge through generic processes of self-organization found in physical and biological systems. There is, however, no research validation of any of the claims associated with the conceptual application of this framework.
=== In cognitive science ===
Dynamical system theory has been applied in the field of neuroscience and cognitive development, especially in the neo-Piagetian theories of cognitive development. Dynamicists hold that cognitive development is best represented by physical theories rather than by theories based on syntax and AI, and that differential equations are the most appropriate tool for modeling human behavior. These equations are interpreted to represent an agent's cognitive trajectory through state space. In other words, dynamicists argue that psychology should be (or is) the description (via differential equations) of the cognitions and behaviors of an agent under certain environmental and internal pressures. The language of chaos theory is also frequently adopted.
In this view, the learner's mind reaches a state of disequilibrium where old patterns have broken down. This is the phase transition of cognitive development. Self-organization (the spontaneous creation of coherent forms) sets in as activity levels link to each other. Newly formed macroscopic and microscopic structures support each other, speeding up the process. These links form the structure of a new state of order in the mind through a process called scalloping (the repeated building up and collapsing of complex performance). This new state is progressive, discrete, idiosyncratic and unpredictable.
Dynamic systems theory has recently been used to explain a long-unanswered problem in child development referred to as the A-not-B error.
Further, since the middle of the 1990s, cognitive science oriented towards a system-theoretical connectionism has increasingly adopted methods from (nonlinear) "Dynamic Systems Theory (DST)". A variety of neurosymbolic cognitive neuroarchitectures in modern connectionism, considering their mathematical structural core, can be categorized as (nonlinear) dynamical systems. These attempts in neurocognition to merge connectionist cognitive neuroarchitectures with DST come not only from neuroinformatics and connectionism, but also more recently from developmental psychology ("Dynamic Field Theory (DFT)") and from "evolutionary robotics" and "developmental robotics", in connection with the mathematical method of "evolutionary computation (EC)". For an overview see Maurer.
=== In second language development ===
The application of Dynamic Systems Theory to the study of second language acquisition is attributed to Diane Larsen-Freeman, who published an article in 1997 in which she claimed that second language acquisition should be viewed as a developmental process that includes language attrition as well as language acquisition. In her article she claimed that language should be viewed as a system which is dynamic, complex, nonlinear, chaotic, unpredictable, sensitive to initial conditions, open, self-organizing, feedback sensitive, and adaptive.
== See also ==
Related subjects
Related scientists
== Notes ==
== Further reading ==
Abraham, Frederick D.; Abraham, Ralph; Shaw, Christopher D. (1990). A Visual Introduction to Dynamical Systems Theory for Psychology. Aerial Press. ISBN 978-0-942344-09-7. OCLC 24345312.
Beltrami, Edward J. (1998). Mathematics for Dynamic Modeling (2nd ed.). Academic Press. ISBN 978-0-12-085566-7. OCLC 36713294.
Hájek, Otomar (1968). Dynamical systems in the plane. Academic Press. ISBN 9780123172402. OCLC 343328.
Luenberger, David G. (1979). Introduction to dynamic systems: theory, models, and applications. Wiley. ISBN 978-0-471-02594-8. OCLC 4195122.
Michel, Anthony; Kaining Wang; Bo Hu (2001). Qualitative Theory of Dynamical Systems. Taylor & Francis. ISBN 978-0-8247-0526-8. OCLC 45873628.
Padulo, Louis; Arbib, Michael A. (1974). System theory: a unified state-space approach to continuous and discrete systems. Saunders. ISBN 9780721670355. OCLC 947600.
Strogatz, Steven H. (1994). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Addison Wesley. ISBN 978-0-7382-0453-6. OCLC 49839504.
== External links ==
Dynamic Systems Encyclopedia of Cognitive Science entry.
Definition of dynamical system in MathWorld.
DSWeb Dynamical Systems Magazine
Entropy is a scientific concept most commonly associated with states of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems, including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behaviour, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, which has become one of the defining universal constants for the modern International System of Units.
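In modern notation this logarithmic law is written S = k_B ln W, where S is the entropy of the macroscopic state, W is the number of microscopic arrangements compatible with it, and k_B is the Boltzmann constant.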
== History ==
In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1798 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
== Etymology ==
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy (Entropie) after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
== Definitions and descriptions ==
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modelled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
=== State variables and functions of state ===
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
=== Reversible process ===
The entropy change {\textstyle \mathrm {d} S} of a system can be well-defined as a small portion of heat {\textstyle \delta Q_{\mathsf {rev}}} transferred from the surroundings to the system during a reversible process, divided by the temperature {\textstyle T} of the system during this heat transfer:
{\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}}
The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible.
In contrast, an irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from the thermal equilibrium cannot be reversible; the total entropy increases, and the potential for maximum work to be done during the process is lost.
=== Carnot cycle ===
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, the heat {\textstyle Q_{\mathsf {H}}} is transferred from a hot reservoir to a working gas at the constant temperature {\textstyle T_{\mathsf {H}}} during the isothermal expansion stage, and the heat {\textstyle Q_{\mathsf {C}}} is transferred from the working gas to a cold reservoir at the constant temperature {\textstyle T_{\mathsf {C}}} during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce a work {\textstyle W} if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats {\textstyle Q_{\mathsf {H}}} and {\textstyle Q_{\mathsf {C}}}, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of the heat {\textstyle Q_{\mathsf {H}}} is greater than the magnitude of the heat {\textstyle Q_{\mathsf {C}}}. Through the efforts of Clausius and Kelvin, the work {\textstyle W} done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat {\textstyle Q_{\mathsf {H}}} absorbed by the working body of the engine during isothermal expansion:
{\displaystyle W={\frac {T_{\mathsf {H}}-T_{\mathsf {C}}}{T_{\mathsf {H}}}}\cdot Q_{\mathsf {H}}=\left(1-{\frac {T_{\mathsf {C}}}{T_{\mathsf {H}}}}\right)Q_{\mathsf {H}}}
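As a minimal numerical illustration of this formula (the function name and values here are our own, not from the source), a reversible engine's work output can be computed directly from the two reservoir temperatures and the absorbed heat:

```python
# Sketch: work output of a reversible heat engine, W = (1 - T_C/T_H) * Q_H.
def carnot_work(q_hot: float, t_hot: float, t_cold: float) -> float:
    """Work (J) from heat q_hot (J) absorbed at t_hot (K), rejecting to t_cold (K)."""
    if not (0 < t_cold < t_hot):
        raise ValueError("Temperatures must satisfy 0 < t_cold < t_hot (kelvin).")
    efficiency = 1.0 - t_cold / t_hot  # Carnot efficiency
    return efficiency * q_hot

# Example: 1000 J absorbed at 500 K, cold reservoir at 300 K -> 400 J of work.
print(carnot_work(1000.0, 500.0, 300.0))
```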
To derive the Carnot efficiency Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale.
It is known that any work {\textstyle W>0} produced by an engine over a cycle equals the net heat {\textstyle Q_{\Sigma }=\left\vert Q_{\mathsf {H}}\right\vert -\left\vert Q_{\mathsf {C}}\right\vert } absorbed over a cycle. Thus, with the sign convention for a heat {\textstyle Q} transferred in a thermodynamic process ({\textstyle Q>0} for absorption and {\textstyle Q<0} for dissipation) we get:
{\displaystyle W-Q_{\Sigma }=W-\left\vert Q_{\mathsf {H}}\right\vert +\left\vert Q_{\mathsf {C}}\right\vert =W-Q_{\mathsf {H}}-Q_{\mathsf {C}}=0}
Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat, rather than the net heat itself, would be conserved. This means there exists a state function {\textstyle U} with a change of {\textstyle \mathrm {d} U=\delta Q-\mathrm {d} W}. It is called the internal energy and forms a central concept for the first law of thermodynamics.
Finally, comparison of both representations of the work output in a Carnot cycle gives us:
{\displaystyle {\frac {\left\vert Q_{\mathsf {H}}\right\vert }{T_{\mathsf {H}}}}-{\frac {\left\vert Q_{\mathsf {C}}\right\vert }{T_{\mathsf {C}}}}={\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}+{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}=0}
Similarly to the derivation of internal energy, this equality implies the existence of a state function {\textstyle S} with a change of {\textstyle \mathrm {d} S=\delta Q/T} which is conserved over an entire cycle. Clausius called this state function entropy.
In addition, the total change of entropy in both thermal reservoirs over the Carnot cycle is zero too, since the inversion of a heat transfer direction means a sign inversion for the heat transferred during the isothermal stages:
{\displaystyle -{\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}-{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}=\Delta S_{\mathsf {r,H}}+\Delta S_{\mathsf {r,C}}=0}
Here we denote the entropy change for a thermal reservoir by {\textstyle \Delta S_{{\mathsf {r}},i}=-Q_{i}/T_{i}}, where {\textstyle i} is either {\textstyle {\mathsf {H}}} for the hot reservoir or {\textstyle {\mathsf {C}}} for the cold one.
If we consider a heat engine which is less efficient than the Carnot cycle (i.e., the work {\textstyle W} produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as:
{\displaystyle W<\left(1-{\frac {T_{\mathsf {C}}}{T_{\mathsf {H}}}}\right)Q_{\mathsf {H}}}
Substitution of the work {\textstyle W} as the net heat into the inequality above gives us:
{\displaystyle {\frac {Q_{\mathsf {H}}}{T_{\mathsf {H}}}}+{\frac {Q_{\mathsf {C}}}{T_{\mathsf {C}}}}<0}
or in terms of the entropy change {\textstyle \Delta S_{{\mathsf {r}},i}}:
{\displaystyle \Delta S_{\mathsf {r,H}}+\Delta S_{\mathsf {r,C}}>0}
The Carnot cycle and the entropy as defined above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as the Otto, Diesel or Brayton cycle, can be analysed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., a heat engine) that is claimed to produce an efficiency greater than that of Carnot is not viable, because it would violate the second law of thermodynamics.
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics.
=== Classical thermodynamics ===
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur.
According to the Clausius equality, for a reversible cyclic thermodynamic process:
{\displaystyle \oint {\frac {\delta Q_{\mathsf {rev}}}{T}}=0}
which means the line integral {\textstyle \int _{L}{\delta Q_{\mathsf {rev}}/T}} is path-independent. Thus we can define a state function {\textstyle S}, called entropy:
{\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}}
Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
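The Clausius equality can be checked numerically for the ideal-gas Carnot cycle, where heat is exchanged only during the two isothermal stages and the adiabatic stages force the two isothermal volume ratios to be equal. The following sketch assumes an ideal working gas; the function name is ours, not from the source:

```python
# Sketch: sum of Q/T over an ideal-gas Carnot cycle, which should vanish.
import math

R = 8.314462618  # molar gas constant, J/(mol·K)

def carnot_clausius_sum(n: float, t_hot: float, t_cold: float, ratio: float) -> float:
    """Sum of Q/T over the cycle for n moles of ideal gas with isothermal
    expansion/compression volume ratio `ratio` (adiabats exchange no heat)."""
    q_hot = n * R * t_hot * math.log(ratio)     # heat absorbed at T_H
    q_cold = -n * R * t_cold * math.log(ratio)  # heat rejected at T_C
    return q_hot / t_hot + q_cold / t_cold

print(carnot_clausius_sum(1.0, 500.0, 300.0, 2.0))  # 0.0 up to rounding
```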
To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings, and hence the entropy change of the surroundings, is different for the two paths.
We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at absolute zero have an entropy {\textstyle S=0}.
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an amount {\textstyle \Delta E} of energy to the surroundings at the temperature {\textstyle T}, its entropy falls by {\textstyle \Delta S} and at least {\textstyle T\cdot \Delta S} of that energy must be given up to the system's surroundings as heat. Otherwise, the process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined).
=== Statistical mechanics ===
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analysing the statistical behaviour of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or "mixedupness" (in the phrase of Gibbs) which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system, including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1).
Specifically, entropy is a logarithmic measure for a system with a number of states, each with a probability {\textstyle p_{i}} of being occupied (usually given by the Boltzmann distribution):
{\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}}
where {\textstyle k_{\mathsf {B}}} is the Boltzmann constant and the summation is performed over all possible microstates of the system.
In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:
{\displaystyle S=-k_{\mathsf {B}}\left\langle \ln {p}\right\rangle }
This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In the general case the expression is:
{\displaystyle S=-k_{\mathsf {B}}\ \mathrm {tr} {\left({\hat {\rho }}\times \ln {\hat {\rho }}\right)}}
where {\textstyle {\hat {\rho }}} is a density matrix, {\textstyle \mathrm {tr} } is the trace operator and {\textstyle \ln } is the matrix logarithm. The density matrix formalism is not required if the system is in thermal equilibrium so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes it can be taken as the fundamental definition of entropy since all other formulae for {\textstyle S} can be derived from it, but not vice versa.
In what has been called the fundamental postulate of statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability {\textstyle p_{i}=1/\Omega }, where {\textstyle \Omega } is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then in the case of an isolated system the previous formula reduces to:
{\displaystyle S=k_{\mathsf {B}}\ln {\Omega }}
In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble.
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using variables {\textstyle U}, {\textstyle V}, {\textstyle W} and observer B using variables {\textstyle U}, {\textstyle V}, {\textstyle W}, {\textstyle X}. If observer B changes variable {\textstyle X}, then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable {\textstyle X} and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during the experiment.
Entropy can also be defined for any Markov process with reversible dynamics and the detailed balance property.
In his 1896 Lectures on Gas Theory, Boltzmann showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
=== Entropy of a system ===
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalisation has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
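For ideal gases or ideal solutions, the entropy of mixing takes the standard form ΔS_mix = −nR Σ x_i ln x_i, a formula the paragraph alludes to but does not state explicitly; the sketch below (assumed names) evaluates it:

```python
# Sketch: ideal entropy of mixing, ΔS_mix = -n_total * R * Σ x_i ln x_i,
# valid for ideal gases/solutions mixed at the same temperature and pressure.
import math

R = 8.314462618  # molar gas constant, J/(mol·K)

def entropy_of_mixing(moles: list[float]) -> float:
    n_total = sum(moles)
    return -n_total * R * sum((n / n_total) * math.log(n / n_total)
                              for n in moles if n > 0.0)

# Mixing 1 mol + 1 mol of two different ideal gases: 2R ln 2 ≈ 11.53 J/K.
print(entropy_of_mixing([1.0, 1.0]))
```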
=== Equivalence of definitions ===
Proofs of equivalence between the entropy in statistical mechanics (the Gibbs entropy formula):
{\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}}
and the entropy in classical thermodynamics:
{\displaystyle \mathrm {d} S={\frac {\delta Q_{\mathsf {rev}}}{T}}}
together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average {\textstyle U=\left\langle E_{i}\right\rangle }. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under a certain set of postulates.
== Second law of thermodynamics ==
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature {\textstyle T} absorbing an infinitesimal amount of heat {\textstyle \delta q} in a reversible way is given by {\textstyle \delta q/T}. More explicitly, an energy {\textstyle T_{R}S} is not available to do useful work, where {\textstyle T_{R}} is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that the entropy density is locally defined as an intensive quantity. For such systems, a principle of maximum time rate of entropy production may apply. It states that such a system may evolve to a steady state that maximises its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
== Applications ==
=== The fundamental thermodynamic relation ===
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy {\textstyle U} to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure {\textstyle p} bears on the volume {\textstyle V} as the only external parameter, this relation is:
{\displaystyle \mathrm {d} U=T\ \mathrm {d} S-p\ \mathrm {d} V}
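As a small worked check of this relation (values and names here are illustrative only, not from the source), the change in internal energy for tiny changes in entropy and volume can be evaluated directly:

```python
# Sketch: the fundamental thermodynamic relation dU = T dS - p dV,
# applied to small (finite but tiny) changes.
def du(temperature: float, ds: float, pressure: float, dv: float) -> float:
    """Change in internal energy (J) for a small entropy change ds (J/K)
    and volume change dv (m^3) at temperature T (K) and pressure p (Pa)."""
    return temperature * ds - pressure * dv

# Example: T = 300 K, dS = 0.01 J/K, p = 101325 Pa, dV = 1e-6 m^3.
print(du(300.0, 0.01, 101325.0, 1e-6))  # ≈ 2.90 J
```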
Since both internal energy and entropy are monotonic functions of temperature {\textstyle T}, the internal energy is fixed when one specifies the entropy and the volume; hence this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (during such a change the system may be very far out of thermal equilibrium, and the whole-system entropy, pressure, and temperature may then not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
=== Entropy in chemical thermodynamics ===
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation introduces the measurement of entropy change which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from hotter body to cooler one spontaneously.
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of {\textstyle q_{\mathsf {rev}}/T} constitutes each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, {\textstyle \Delta S} must be incorporated in an expression that includes both the system and its surroundings:
{\displaystyle \Delta S_{\mathsf {universe}}=\Delta S_{\mathsf {surroundings}}+\Delta S_{\mathsf {system}}}
Via additional steps this expression becomes the equation of the Gibbs free energy change {\textstyle \Delta G} for reactants and products in the system at constant pressure and temperature {\textstyle T}:
{\displaystyle \Delta G=\Delta H-T\ \Delta S}
where {\textstyle \Delta H} is the enthalpy change and {\textstyle \Delta S} is the entropy change.
The spontaneity of a chemical or physical process is governed by the Gibbs free energy change (ΔG), as defined by the equation ΔG = ΔH − TΔS, where ΔH represents the enthalpy change, ΔS the entropy change, and T the absolute temperature in kelvins. A negative ΔG indicates a thermodynamically favorable (spontaneous) process, while a positive ΔG denotes a non-spontaneous one. When both ΔH and ΔS are positive (endothermic, entropy-increasing), the reaction becomes spontaneous at sufficiently high temperatures, as the TΔS term dominates. Conversely, if both ΔH and ΔS are negative (exothermic, entropy-decreasing), spontaneity occurs only at low temperatures, where the enthalpy term prevails. Reactions with ΔH < 0 and ΔS > 0 (exothermic and entropy-increasing) are spontaneous at all temperatures, while those with ΔH > 0 and ΔS < 0 (endothermic and entropy-decreasing) are non-spontaneous regardless of temperature. These principles underscore the interplay between energy exchange, disorder, and temperature in determining the direction of natural processes, from phase transitions to biochemical reactions.
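The sign analysis above is easy to mechanise. A minimal sketch (function names assumed, not from the source) classifies spontaneity from ΔG = ΔH − TΔS:

```python
# Sketch: spontaneity from the sign of ΔG = ΔH - T·ΔS, as described above.
def gibbs_free_energy_change(delta_h: float, delta_s: float, temperature: float) -> float:
    """ΔG in joules, given ΔH (J), ΔS (J/K) and temperature (K)."""
    return delta_h - temperature * delta_s

def is_spontaneous(delta_h: float, delta_s: float, temperature: float) -> bool:
    return gibbs_free_energy_change(delta_h, delta_s, temperature) < 0.0

# Endothermic, entropy-increasing reaction (ΔH > 0, ΔS > 0): spontaneous
# only above ΔH/ΔS = 400 K, where the T·ΔS term dominates.
print(is_spontaneous(40_000.0, 100.0, 300.0))  # False
print(is_spontaneous(40_000.0, 100.0, 500.0))  # True
```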
=== World's technological capacity to store and communicate entropic information ===
A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information, normalised on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimated that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
=== Entropy balance equation for open systems ===
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat {\textstyle {\dot {Q}}}, flow of shaft work {\textstyle {\dot {W}}_{\mathsf {S}}} and pressure-volume work {\textstyle P{\dot {V}}} across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer {\textstyle {\dot {Q}}/T}, where {\textstyle T} is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalised entropy balance equation, we start with the general balance equation for the change in any extensive quantity {\textstyle \theta } in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that {\textstyle \mathrm {d} \theta /\mathrm {d} t}, i.e. the rate of change of {\textstyle \theta } in the system, equals the rate at which {\textstyle \theta } enters the system at the boundaries, minus the rate at which {\textstyle \theta } leaves the system across the system boundaries, plus the rate at which {\textstyle \theta } is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time {\textstyle t} of the extensive quantity entropy {\textstyle S}, the entropy balance equation is:
{\displaystyle {\frac {\mathrm {d} S}{\mathrm {d} t}}=\sum _{k=1}^{K}{{\dot {M}}_{k}{\hat {S}}_{k}}+{\frac {\dot {Q}}{T}}+{\dot {S}}_{\mathsf {gen}}}
where {\textstyle \sum _{k=1}^{K}{{\dot {M}}_{k}{\hat {S}}_{k}}} is the net rate of entropy flow due to the flows of mass {\textstyle {\dot {M}}_{k}} into and out of the system with entropy per unit mass {\textstyle {\hat {S}}_{k}}, {\textstyle {\dot {Q}}/T} is the rate of entropy flow due to the flow of heat across the system boundary, and {\textstyle {\dot {S}}_{\mathsf {gen}}} is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity.
In case of multiple heat flows the term {\textstyle {\dot {Q}}/T} is replaced by {\textstyle \sum _{j}{{\dot {Q}}_{j}/T_{j}}}, where {\textstyle {\dot {Q}}_{j}} is the heat flow through the {\textstyle j}-th port into the system and {\textstyle T_{j}} is the temperature at the {\textstyle j}-th port.
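A minimal sketch of the balance equation with several mass and heat ports (function name, arguments and units are our assumptions, not from the source):

```python
# Sketch: open-system entropy balance dS/dt = Σ_k Ṁ_k Ŝ_k + Σ_j Q̇_j/T_j + Ṡ_gen.
def entropy_rate(mass_flows, specific_entropies, heat_flows, port_temps, s_gen):
    """All rates in SI units: kg/s, J/(K·kg), W, K, W/K. Returns dS/dt in W/K."""
    mass_term = sum(m * s for m, s in zip(mass_flows, specific_entropies))
    heat_term = sum(q / t for q, t in zip(heat_flows, port_temps))
    return mass_term + heat_term + s_gen

# One inlet (+) and one outlet (-) stream, one heat port, some irreversibility:
print(entropy_rate([2.0, -2.0], [1.2e3, 1.5e3], [5.0e4], [350.0], 12.0))
```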
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term {\textstyle {\dot {S}}_{\mathsf {gen}}} is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation", since it specifies that:
{\displaystyle {\dot {S}}_{\mathsf {gen}}\geq 0}
with zero for a reversible process and positive values for an irreversible one.
== Entropy change formulas for simple processes ==
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
=== Isothermal expansion or compression of an ideal gas ===
For the expansion (or compression) of an ideal gas from an initial volume {\textstyle V_{0}} and pressure {\textstyle P_{0}} to a final volume {\textstyle V} and pressure {\textstyle P} at any constant temperature, the change in entropy is given by:
{\displaystyle \Delta S=nR\ln {\frac {V}{V_{0}}}=-nR\ln {\frac {P}{P_{0}}}}
Here {\textstyle n} is the amount of gas (in moles) and {\textstyle R} is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
=== Cooling and heating ===
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature {\textstyle T_{0}} to a final temperature {\textstyle T}, the entropy change is:
{\displaystyle \Delta S=nC_{\mathrm {P} }\ln {\frac {T}{T_{0}}}}
provided that the constant-pressure molar heat capacity (or specific heat) {\textstyle C_{\mathrm {P} }} is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is:
{\displaystyle \Delta S=nC_{\mathrm {V} }\ln {\frac {T}{T_{0}}}}
where the constant-volume molar heat capacity {\textstyle C_{\mathrm {V} }} is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps: heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:
{\displaystyle \Delta S=nC_{\mathrm {V} }\ln {\frac {T}{T_{0}}}+nR\ln {\frac {V}{V_{0}}}}
Similarly if the temperature and pressure of an ideal gas both vary:
{\displaystyle \Delta S=nC_{\mathrm {P} }\ln {\frac {T}{T_{0}}}-nR\ln {\frac {P}{P_{0}}}}
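A short sketch combining the formulas above for an ideal gas whose temperature and volume both change (names are ours; the monatomic value C_V = 3R/2 is a standard textbook choice, assumed here for the example):

```python
# Sketch: entropy change of n moles of ideal gas when temperature and volume
# both vary, ΔS = n·C_V·ln(T/T0) + n·R·ln(V/V0).
import math

R = 8.314462618  # molar gas constant, J/(mol·K)

def delta_s_ideal_gas(n: float, c_v: float, t0: float, t: float,
                      v0: float, v: float) -> float:
    """ΔS in J/K; c_v is the constant-volume molar heat capacity, assumed constant."""
    return n * c_v * math.log(t / t0) + n * R * math.log(v / v0)

# Monatomic ideal gas (C_V = 3R/2): 1 mol heated 300 K -> 600 K and doubled in volume.
print(delta_s_ideal_gas(1.0, 1.5 * R, 300.0, 600.0, 1.0, 2.0))  # ≈ 14.4 J/K
```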
=== Phase transitions ===
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point {\textstyle T_{\mathsf {m}}}, the entropy of fusion is:
{\displaystyle \Delta S_{\mathsf {fus}}={\frac {\Delta H_{\mathsf {fus}}}{T_{\mathsf {m}}}}.}
Similarly, for vaporisation of a liquid to a gas at the boiling point {\textstyle T_{\mathsf {b}}}, the entropy of vaporisation is:
{\displaystyle \Delta S_{\mathsf {vap}}={\frac {\Delta H_{\mathsf {vap}}}{T_{\mathsf {b}}}}}
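A one-line computation suffices for transition entropies; the sketch below uses approximate textbook values for water's enthalpy of vaporisation as an assumed example, not data from the source:

```python
# Sketch: entropy of a reversible phase transition, ΔS = ΔH / T.
def transition_entropy(delta_h: float, t_transition: float) -> float:
    """ΔS in J/(mol·K) from molar enthalpy ΔH (J/mol) at transition temperature T (K)."""
    return delta_h / t_transition

# Water: ΔH_vap ≈ 40 660 J/mol at T_b = 373.15 K -> ΔS_vap ≈ 109 J/(mol·K).
print(transition_entropy(40_660.0, 373.15))
```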
== Approaches to understanding entropy ==
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
=== Standard textbook definitions ===
The following is a list of additional definitions of entropy from a collection of textbooks:
a measure of energy dispersal at a specific temperature.
a measure of disorder in the universe or of the availability of the energy in a system to do work.
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
=== Order and disorder ===
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measures of the total amount of "disorder" and "order" in the system are each given by:
{\displaystyle {\mathsf {Disorder}}={\frac {C_{\mathsf {D}}}{C_{\mathsf {I}}}}}
{\displaystyle {\mathsf {Order}}=1-{\frac {C_{\mathsf {O}}}{C_{\mathsf {I}}}}}
Here, {\textstyle C_{\mathsf {D}}} is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, {\textstyle C_{\mathsf {I}}} is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and {\textstyle C_{\mathsf {O}}} is the "order" capacity of the system.
=== Energy dispersal ===
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantised energy levels.
Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
=== Relating entropy to energy usefulness ===
It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorised to lead to the heat death of the universe.
=== Entropy and adiabatic accessibility ===
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states {\textstyle X_{0}} and {\textstyle X_{1}} such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state {\textstyle X} is defined as the largest number {\textstyle \lambda } such that {\textstyle X} is adiabatically accessible from a composite state consisting of an amount {\textstyle \lambda } in the state {\textstyle X_{1}} and a complementary amount, {\textstyle (1-\lambda )}, in the state {\textstyle X_{0}}. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
=== Entropy in quantum mechanics ===
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":
{\displaystyle S=-k_{\mathsf {B}}\ \mathrm {tr} {\left({\hat {\rho }}\times \ln {\hat {\rho }}\right)}}
where {\textstyle {\hat {\rho }}} is the density matrix, {\textstyle \mathrm {tr} } is the trace operator and {\textstyle k_{\mathsf {B}}} is the Boltzmann constant.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities {\textstyle p_{i}}:
{\displaystyle S=-k_{\mathsf {B}}\sum _{i}{p_{i}\ln {p_{i}}}}
i.e. in such a basis the density matrix is diagonal.
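A minimal sketch (assuming NumPy is available; the names are ours, not from the source) computes the von Neumann entropy from the eigenvalues of the density matrix, which is equivalent to the trace formula for Hermitian ρ:

```python
# Sketch: von Neumann entropy S = -k_B tr(ρ ln ρ); for diagonalisable ρ this
# reduces to -k_B Σ λ_i ln λ_i over the eigenvalues λ_i.
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def von_neumann_entropy(rho: np.ndarray) -> float:
    eigenvalues = np.linalg.eigvalsh(rho)           # real for Hermitian ρ
    eigenvalues = eigenvalues[eigenvalues > 1e-15]  # 0·ln 0 -> 0
    return -K_B * float(np.sum(eigenvalues * np.log(eigenvalues)))

# Maximally mixed qubit, ρ = I/2: S = k_B ln 2.
print(von_neumann_entropy(np.eye(2) / 2))
```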
Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain.
=== Information theory ===
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities {\textstyle p_{i}} so that:
{\displaystyle H(X)=-\sum _{i=1}^{n}{p(x_{i})\log {p(x_{i})}}}
where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
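A minimal sketch of the Shannon entropy in bits (function name assumed): for eight equally probable messages it returns 3, the number of binary questions needed to identify the message:

```python
# Sketch: Shannon entropy H(X) = -Σ p(x) log2 p(x), in bits.
import math

def shannon_entropy_bits(probabilities: list[float]) -> float:
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# Eight equally probable messages need log2(8) = 3 binary questions (3 bits):
print(shannon_entropy_bits([1.0 / 8] * 8))  # 3.0
```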
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If {\textstyle W} is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is {\textstyle p=1/W}. The Shannon entropy (in nats) is:
{\displaystyle H=-\sum _{i=1}^{W}{p_{i}\ln {p_{i}}}=\ln {W}}
and if entropy is measured in units of {\textstyle k} per nat, then the entropy is given by:
{\displaystyle H=k\ln {W}}
which is the Boltzmann entropy formula, where {\textstyle k} is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the {\textstyle H} function of information theory and using Shannon's other term, "uncertainty", instead.
=== Measurement ===
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with a constant number of particles {\textstyle N} and constant volume {\textstyle V}, and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat ({\textstyle \mathrm {d} U\rightarrow \mathrm {d} Q}):
{\displaystyle T:={\left({\frac {\partial U}{\partial S}}\right)}_{V,N}\ \Rightarrow \ \cdots \ \Rightarrow \ \mathrm {d} S={\frac {\mathrm {d} Q}{T}}}
The resulting relation describes how the entropy changes by {\textstyle \mathrm {d} S} when a small amount of energy {\textstyle \mathrm {d} Q} is introduced into the system at a certain temperature {\textstyle T}.
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero – due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allows the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
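A sketch of the final integration step (the heat-capacity data values below are invented purely for illustration, not measurements from the source): the measured heat-capacity curve is integrated as C_p/T over temperature:

```python
# Sketch: calorimetric entropy by numerically integrating
# dS = dQ/T = C_p(T)/T dT from near absolute zero to 298 K.
def calorimetric_entropy(temps: list[float], heat_capacities: list[float]) -> float:
    """Trapezoidal integration of C_p/T over measured (T, C_p) pairs; J/(mol·K)."""
    s = 0.0
    for (t1, c1), (t2, c2) in zip(zip(temps, heat_capacities),
                                  zip(temps[1:], heat_capacities[1:])):
        s += 0.5 * (c1 / t1 + c2 / t2) * (t2 - t1)
    return s

# Hypothetical Debye-like solid: C_p grows as ~T^3 at low T, flattening near 298 K.
temps = [10.0, 50.0, 100.0, 200.0, 298.0]
cps = [0.05, 6.0, 15.0, 22.0, 24.0]  # illustrative values only, J/(mol·K)
print(calorimetric_entropy(temps, cps))
```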
== Interdisciplinary applications ==
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
=== Philosophy and theoretical physics ===
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. The second law of thermodynamics states that, as time progresses, the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock under these conditions. Since the 19th century, a number of philosophers have drawn upon the concept of entropy to develop novel metaphysical and ethical systems. Examples of this work can be found in the thought of Friedrich Nietzsche, Philipp Mainländer, Claude Lévi-Strauss, Isabelle Stengers, Shannon Mussett, and Drew M. Dalton.
=== Biology ===
Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimisation.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
=== Cosmology ===
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
=== Economics ===
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.
== See also ==
== Notes ==
== References ==
== Further reading ==
== External links ==
"Entropy" at Scholarpedia
Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008
Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle
Khan Academy: entropy lectures, part of Chemistry playlist
Entropy Intuition
More on Entropy
Proof: S (or Entropy) is a valid state variable
Reconciling Thermodynamic and State Definitions of Entropy
Thermodynamic Entropy Definition Clarification
Moriarty, Philip; Merrifield, Michael (2009). "S Entropy". Sixty Symbols. Brady Haran for the University of Nottingham.
The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013.
The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200) | Wikipedia/Entropy |
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as
{\displaystyle P(k)\ \sim \ k^{\boldsymbol {-\gamma }}}
where γ is a parameter whose value is typically in the range 2 < γ < 3 (wherein the second moment (scale parameter) of k^(−γ) is infinite but the first moment is finite), although occasionally it may lie outside these bounds. The name "scale-free" could be explained by the fact that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size".
Preferential attachment and the fitness model have been proposed as mechanisms to explain the power law degree distributions in real networks. Alternative models such as super-linear preferential attachment and second-neighbour preferential attachment may appear to generate transient scale-free networks, but the degree distribution deviates from a power law as networks become very large.
== History ==
In studies of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of citations a paper receives had a heavy-tailed distribution following a Pareto distribution or power law. In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage". However, both works treated citations as scalar quantities rather than as the defining feature of a new class of networks.
The interest in scale-free networks started in 1999 with work by Albert-László Barabási and Réka Albert at the University of Notre Dame who mapped the topology of a portion of the World Wide Web, finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. In a subsequent paper Barabási and Albert showed that the power laws are not a unique property of the WWW, but the feature is present in a few real networks, prompting them to coin the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution.
Barabási and Albert proposed a generative mechanism to explain the appearance of power-law distributions, which they called "preferential attachment". Analytic solutions for this mechanism were presented in 2000 by Dorogovtsev, Mendes and Samukhin and independently by Krapivsky, Redner, and Leyvraz, and later rigorously proved by mathematician Béla Bollobás.
== Overview ==
When the concept of "scale-free" was initially introduced in the context of networks, it primarily referred to a specific trait: a power-law distribution for a given variable k, expressed as {\displaystyle f(k)\propto k^{-\gamma }}.
This property maintains its form when subjected to a continuous scale transformation k → k + εk, evoking parallels with the renormalization group techniques in statistical field theory.
However, there's a key difference. In statistical field theory, the term "scale" often pertains to system size. In the realm of networks, "scale" is a measure of connectivity, generally quantified by a node's degree k, that is, the number of links attached to it. Networks featuring a higher number of high-degree nodes are deemed to have greater connectivity.
The power-law degree distribution enables us to make "scale-free" assertions about the prevalence of high-degree nodes. For instance, we can say that "nodes with triple the average connectivity occur half as frequently as nodes with average connectivity". The specific numerical value of what constitutes "average connectivity" becomes irrelevant, whether it's a hundred or a million.
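Whatever the actual ratio, the point is that it does not depend on the reference degree: for P(k) ∝ k^(−γ), the ratio P(3k₀)/P(k₀) equals 3^(−γ) for any k₀. A minimal numerical check (the exponent γ = 2.5 is an arbitrary choice within the typical range):

```python
gamma = 2.5                  # assumed exponent, within the typical 2 < gamma < 3

def P(k):                    # unnormalized power-law degree distribution
    return k ** -gamma

# The relative frequency of "triple the average" nodes is the same whether
# the average degree is a hundred or a million:
for avg in (100, 1_000_000):
    print(P(3 * avg) / P(avg))   # 3**-gamma ~ 0.064 both times
```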
== Characteristics ==
The most notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. In a random network the maximum degree, or the expected largest hub, scales as k_max ~ log N, where N is the network size, a very slow dependence. In contrast, in scale-free networks the largest hub scales as k_max ~ N^(1/(γ−1)), indicating that hubs grow polynomially with the size of the network.
A key feature of scale-free networks is their high degree heterogeneity, κ = ⟨k²⟩/⟨k⟩, which governs multiple network-based processes, from network robustness to epidemic spreading and network synchronization. While for a random network κ = ⟨k⟩ + 1, i.e. the ratio is independent of the network size N, for a scale-free network κ ~ N^((3−γ)/(γ−1)), which increases with the network size, indicating that for these networks the degree heterogeneity grows as they grow.
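These scaling relations are easy to tabulate. The sketch below assumes γ = 2.5 and a few illustrative network sizes (all values are arbitrary choices for illustration):

```python
import math

gamma = 2.5                                      # assumed power-law exponent
for N in (10**3, 10**5, 10**7):
    k_max_random = math.log(N)                   # expected largest hub, random network
    k_max_sf = N ** (1 / (gamma - 1))            # expected largest hub, scale-free
    kappa_sf = N ** ((3 - gamma) / (gamma - 1))  # degree heterogeneity growth
    print(N, round(k_max_random, 1), round(k_max_sf), round(kappa_sf))
```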
=== Clustering ===
Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. This implies that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such community as a complete graph). In addition, the members of a community also have a few acquaintance relationships to people outside that community. Some people, however, are connected to a large number of communities (e.g., celebrities, politicians). Those people may be considered the hubs responsible for the small-world phenomenon.
At present, the more specific characteristics of scale-free networks vary with the generative mechanism used to create them. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. The random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful for security, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties. Similarly, the clustering coefficient of scale-free networks can vary significantly depending on other topological details.
=== Immunization ===
The question of how to efficiently immunize scale-free networks which represent realistic networks such as the Internet and social networks has been studied extensively. One such strategy is to immunize the largest degree nodes, i.e., targeted (intentional) attacks, since for this case the critical threshold p_c is relatively high and fewer nodes need to be immunized.
However, in many realistic cases the global structure is not available and the largest degree nodes are not known.
Properties of random graphs may change or remain invariant under graph transformations. Mashaghi A. et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Scale free graphs, as such, remain scale free under such transformations.
== Examples ==
Examples of networks found to be scale-free include:
Some social networks, including collaboration networks. Two examples that have been studied extensively are the collaboration of movie actors in films and the co-authorship by mathematicians of papers.
Many kinds of computer networks, including the internet and the webgraph of the World Wide Web.
Some financial networks such as interbank payment networks
Protein–protein interaction networks.
Semantic networks.
Airline networks.
Scale-free topology has also been found in high-temperature superconductors. The qualities of a high-temperature superconductor — a compound in which electrons obey the laws of quantum physics, and flow in perfect synchrony, without friction — appear linked to the fractal arrangements of seemingly random oxygen atoms and lattice distortion.
== Generative models ==
Scale-free networks do not arise by chance alone. Erdős and Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of these random graphs are different from the properties found in scale-free networks, and therefore a model for this growth process is needed.
The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999) rich get richer generative model in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but
proportional to the current in-degree of Web pages. According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power-law but the resulting graph differs from the actual Web graph in other properties such as the presence of small tightly connected communities. More general models and network characteristics have been proposed and studied. For example, Pachon et al. (2018) proposed a variant of the rich get richer generative model which takes into account two different attachment rules: a preferential attachment mechanism and a uniform choice only for the most recent nodes. For a review see the book by Dorogovtsev and Mendes. Some mechanisms such as super-linear preferential attachment and second neighbour attachment generate networks which are transiently scale-free, but deviate from a power law as networks grow large.
A somewhat different generative model for Web links has been suggested by Pennock et al. (2002). They examined communities with interests in a specific topic such as the home pages of universities, public companies, newspapers or scientists, and discarded the major hubs of the Web. In this case, the distribution of links was no longer a power law but resembled a normal distribution. Based on these observations, the authors proposed a generative model that mixes preferential attachment with a baseline probability of gaining a link.
Another generative model is the copy model studied by Kumar et al. (2000),
in which new nodes choose an existent node at random and copy a fraction of the links of the existent node. This also generates a power law.
There are two major components that explain the emergence of the power-law distribution in the Barabási–Albert model: the growth and the preferential attachment.
By "growth" is meant a growth process where, over an extended period of time, new nodes join an already existing system, a network (like the World Wide Web, which has grown by billions of web pages over 10 years). By "preferential attachment" is meant that new nodes prefer to connect to nodes that already have a high number of links with others. Thus, there is a higher probability that more and more nodes will link themselves to the one that already has many links, eventually turning this node into a hub.
Depending on the network, the hubs might either be assortative or disassortative. Assortativity would be found in social networks in which well-connected/famous people would tend to know each other better. Disassortativity would be found in technological (Internet, World Wide Web) and biological (protein interaction, metabolism) networks.
However, the growth of the networks (adding new nodes) is not a necessary condition for creating a scale-free network (see Dangalchev). One possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once the statistical distribution for these vertex properties (fitnesses) is specified, it turns out that in some circumstances static networks also develop scale-free properties.
== Generalized scale-free model ==
There has been a burst of activity in the modeling of scale-free complex networks. The recipe of Barabási and Albert has been followed by several variations and generalizations and the revamping of previous mathematical works.
In today's terms, if a complex network has a power-law distribution of any of its metrics, it's generally considered a scale-free network. Similarly, any model with this feature is called a scale-free model.
=== Features ===
Many real networks are (approximately) scale-free and hence require scale-free models to describe them. In Price's scheme, there are two ingredients needed to build up a scale-free model:
1. Adding or removing nodes. Usually we concentrate on growing the network, i.e. adding nodes.
2. Preferential attachment: the probability Π that a new node will be connected to an "old" node.
Note that some models (see Dangalchev and the Fitness model below) can work also statically, without changing the number of nodes. It should also be kept in mind that the fact that "preferential attachment" models give rise to scale-free networks does not prove that this is the mechanism underlying the evolution of real-world scale-free networks, as there might exist different mechanisms at work in real-world systems that nevertheless give rise to scaling.
=== Examples ===
There have been several attempts to generate scale-free network properties. Here are some examples:
==== The Barabási–Albert model ====
The Barabási–Albert model, an undirected version of Price's model, has a linear preferential attachment
{\displaystyle \Pi (k_{i})={\frac {k_{i}}{\sum _{j}k_{j}}}}
and adds one new node at every time step.
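A compact simulation of this growth rule is sketched below. It uses the common repeated-endpoint-list trick to draw attachment targets with probability proportional to degree; the seed graph and the parameter values are arbitrary choices, and this is an illustrative sketch rather than the authors' reference code:

```python
import random

def barabasi_albert(n, m):
    """Grow an undirected network by linear preferential attachment:
    each new node links to m existing nodes chosen with probability
    proportional to their current degree."""
    # Seed: a complete graph on m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Each node appears in `targets` once per incident edge, so a uniform
    # draw from this list is a degree-proportional draw over nodes.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend((new, old))
    return edges

edges = barabasi_albert(10_000, m=2)   # degree distribution approaches k^-3
```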
(Note that another general feature of Π(k) in real networks is that Π(0) ≠ 0, i.e. there is a nonzero probability that a new node attaches to an isolated node. Thus in general Π(k) has the form Π(k) = A + k^α, where A is the initial attractiveness of the node.)
==== Two-level network model ====
Dangalchev builds a 2-L model by considering the importance of each of the neighbours of a target node in preferential attachment. The attractiveness of a node in the 2-L model depends not only on the number of nodes linked to it but also on the number of links in each of these nodes.
{\displaystyle \Pi (k_{i})={\frac {k_{i}+C\sum _{(i,j)}k_{j}}{\sum _{j}k_{j}+C\sum _{j}k_{j}^{2}}},}
where C is a coefficient between 0 and 1.
A variant of the 2-L model, the k2 model, where first and second neighbour nodes contribute equally to a target node's attractiveness, demonstrates the emergence of transient scale-free networks. In the k2 model, the degree distribution appears approximately scale-free as long as the network is relatively small, but significant deviations from the scale-free regime emerge as the network grows larger. This results in the relative attractiveness of nodes with different degrees changing over time, a feature also observed in real networks.
==== Mediation-driven attachment (MDA) model ====
In the mediation-driven attachment (MDA) model, a new node coming with m edges picks an existing connected node at random and then connects itself, not with that one, but with m of its neighbors, also chosen at random. The probability Π(i) that an existing node i is picked in this way is
{\displaystyle \Pi (i)={\frac {k_{i}}{N}}{\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}.}
The factor {\displaystyle {\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}} is the inverse of the harmonic mean (IHM) of the degrees of the k_i neighbors of node i.
Extensive numerical investigation suggests that for approximately m > 14 the mean IHM value in the large-N limit becomes a constant, which means Π(i) ∝ k_i. This implies that the more links (degree) a node has, the higher its chance of gaining more links, since it can be reached in a larger number of ways through mediators, which essentially embodies the intuitive idea of the rich-get-richer mechanism (or the preferential attachment rule of the Barabási–Albert model). Therefore, the MDA network can be seen to follow the PA rule, but in disguise.
However, for m = 1 it describes a winner-takes-all mechanism, as we find that almost 99% of the nodes have degree one while one node is super-rich in degree. As the value of m increases, the disparity between the super-rich and the poor decreases, and for m > 14 we find a transition from the rich-get-super-richer to the rich-get-richer mechanism.
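A minimal simulation of the MDA growth rule as described above might look as follows (the seed graph and parameter values are arbitrary choices made for illustration):

```python
import random

def mda_network(n, m):
    """Mediation-driven attachment: each new node picks a random existing
    node (the mediator) and links to m of its neighbours chosen at random."""
    # Seed: a complete graph on m + 2 nodes, so every node has >= m neighbours.
    nbrs = {i: {j for j in range(m + 2) if j != i} for i in range(m + 2)}
    for new in range(m + 2, n):
        mediator = random.randrange(new)            # uniform existing node
        picks = random.sample(sorted(nbrs[mediator]),
                              min(m, len(nbrs[mediator])))
        nbrs[new] = set()
        for old in picks:                           # connect to the neighbours,
            nbrs[new].add(old)                      # not to the mediator itself
            nbrs[old].add(new)
    return nbrs

net = mda_network(5_000, m=2)
```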
==== Non-linear preferential attachment ====
The Barabási–Albert model assumes that the probability Π(k) that a node attaches to node i is proportional to the degree k of node i. This assumption involves two hypotheses: first, that Π(k) depends on k, in contrast to random graphs in which Π(k) = p, and second, that the functional form of Π(k) is linear in k.
In non-linear preferential attachment, the form of Π(k) is not linear, and recent studies have demonstrated that the degree distribution depends strongly on the shape of the function Π(k). Krapivsky, Redner, and Leyvraz demonstrate that the scale-free nature of the network is destroyed for nonlinear preferential attachment. The only case in which the topology of the network is scale-free is that in which the preferential attachment is asymptotically linear, i.e.
{\displaystyle \Pi (k_{i})\sim a_{\infty }k_{i}} as k_i → ∞. In this case the rate equation leads to
{\displaystyle P(k)\sim k^{-\gamma }{\text{ with }}\gamma =1+{\frac {\mu }{a_{\infty }}}.}
This way the exponent of the degree distribution can be tuned to any value between 2 and ∞.
==== Hierarchical network model ====
Hierarchical network models are, by design, scale free and have high clustering of nodes.
The iterative construction leads to a hierarchical network. Starting from a fully connected cluster of five nodes, we create four identical replicas connecting the peripheral nodes of each cluster to the central node of the original cluster. From this, we get a network of 25 nodes (N = 25).
Repeating the same process, we can create four more replicas of the original cluster – the four peripheral nodes of each one connect to the central node of the nodes created in the first step. This gives N = 125, and the process can continue indefinitely.
==== Fitness model ====
The idea is that the link between two vertices is not assigned randomly with a probability p equal for all pairs of vertices. Rather, for every vertex j there is an intrinsic fitness x_j, and a link between vertices i and j is created with a probability p(x_i, x_j).
In the case of the World Trade Web it is possible to reconstruct all the properties by using as fitnesses of the countries their GDP, and taking
{\displaystyle p(x_{i},x_{j})={\frac {\delta x_{i}x_{j}}{1+\delta x_{i}x_{j}}}.}
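A direct simulation of this static fitness recipe is straightforward. The sketch below draws hypothetical heavy-tailed fitnesses in place of real GDP figures; the value of δ and the sample size are arbitrary choices:

```python
import random

def fitness_network(fitnesses, delta):
    """Static fitness model: link i-j with probability
    p(x_i, x_j) = delta*x_i*x_j / (1 + delta*x_i*x_j)."""
    n = len(fitnesses)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            w = delta * fitnesses[i] * fitnesses[j]
            if random.random() < w / (1 + w):
                edges.append((i, j))
    return edges

# Hypothetical heavy-tailed "GDP-like" fitnesses:
xs = [random.paretovariate(2.0) for _ in range(500)]
edges = fitness_network(xs, delta=0.01)
```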
==== Hyperbolic geometric graphs ====
Assuming that a network has an underlying hyperbolic geometry, one can use the framework of spatial networks to generate scale-free degree distributions. This heterogeneous degree distribution then simply reflects the negative curvature and metric properties of the underlying hyperbolic geometry.
==== Edge dual transformation to generate scale free graphs with desired properties ====
Starting with scale free graphs with low degree correlation and clustering coefficient, one can generate new graphs with much higher degree correlations and clustering coefficients by applying edge-dual transformation.
==== Uniform-preferential-attachment model (UPA model) ====
The UPA model is a variant of the preferential attachment model (proposed by Pachon et al.) which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1−p) that stresses the rich-get-richer system, and a uniform choice (with probability p) for the most recent nodes. This modification is interesting for studying the robustness of the scale-free behavior of the degree distribution. It is proved analytically that the asymptotic power-law degree distribution is preserved.
== Scale-free ideal networks ==
In the context of network theory a scale-free ideal network is a random network with a degree distribution following the scale-free ideal gas density distribution. These networks are able to reproduce city-size distributions and electoral results by unraveling the size distribution of social groups with information theory on complex networks when a competitive cluster growth process is applied to the network. In models of scale-free ideal networks it is possible to demonstrate that Dunbar's number is the cause of the phenomenon known as the 'six degrees of separation'.
== Novel characteristics ==
For a scale-free network with n nodes and power-law exponent γ > 3, the induced subgraph constructed by vertices with degrees larger than log n × log* n is a scale-free network with γ′ = 2, almost surely.
== The scale-free metric ==
On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al. (2005) offered a potentially more precise "scale-free metric". Briefly, let G be a graph with edge set E, and denote the degree of a vertex v (that is, the number of edges incident to v) by deg(v). Define
{\displaystyle s(G)=\sum _{(u,v)\in E}\deg(u)\cdot \deg(v).}
This is maximized when high-degree nodes are connected to other high-degree nodes. Now define
{\displaystyle S(G)={\frac {s(G)}{s_{\max }}},}
where s_max is the maximum value of s(H) for H in the set of all graphs with degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S(G) is "scale-rich", and a graph G with S(G) close to 1 is "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free".
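Computing s(G) from an edge list is straightforward; finding s_max, by contrast, requires an optimization over all graphs with the same degree sequence and is omitted here. A minimal sketch with two toy graphs:

```python
def s_metric(edges):
    """s(G) = sum over edges (u, v) of deg(u) * deg(v)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(deg[u] * deg[v] for u, v in edges)

star = [(0, i) for i in range(1, 5)]    # hub of degree 4: s = 4*(4*1) = 16
path = [(i, i + 1) for i in range(4)]   # degrees 1,2,2,2,1: s = 2+4+4+2 = 12
print(s_metric(star), s_metric(path))
```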
== Estimating the power law exponent ==
Estimating the power-law exponent γ of a scale-free network is typically done by using maximum likelihood estimation with the degrees of a few uniformly sampled nodes. However, since uniform sampling does not obtain enough samples from the important heavy tail of the power-law degree distribution, this method can yield a large bias and variance. It has recently been proposed to sample random friends (i.e., random ends of random links), who are more likely to come from the tail of the degree distribution as a result of the friendship paradox. Theoretically, maximum likelihood estimation with random friends leads to a smaller bias and a smaller variance compared to the classical approach based on uniform sampling.
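A minimal sketch of the classical estimator, using the continuous-approximation (Hill) form of the maximum likelihood estimate; the degree sample below is hypothetical, and a friendship-paradox variant would simply feed in the degrees of random link endpoints instead of random nodes:

```python
import math

def mle_exponent(degrees, k_min=2):
    """Continuous-approximation (Hill) MLE of the power-law exponent:
    gamma_hat = 1 + n / sum_i ln(k_i / k_min), over degrees k_i >= k_min."""
    ks = [k for k in degrees if k >= k_min]
    return 1 + len(ks) / sum(math.log(k / k_min) for k in ks)

# Hypothetical degree sample drawn from some network:
sample = [2, 3, 2, 5, 2, 8, 3, 2, 21, 4]
print(mle_exponent(sample))   # ~2.6 for this toy sample
```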
== See also ==
Random graph – Graph generated by a random process
Erdős–Rényi model – Two closely related models for generating random graphs
Non-linear preferential attachment
Bose–Einstein condensation (network theory) – model in network science
Scale invariance – Features that do not change if length or energy scales are multiplied by a common factor
Complex network – Network with non-trivial topological features
Webgraph – Graph of connected web pages
Barabási–Albert model – Scale-free network generation algorithm
Bianconi–Barabási model – model in network science
== References ==
== Further reading ==
Albert R.; Barabási A.-L. (2002). "Statistical mechanics of complex networks". Rev. Mod. Phys. 74 (1): 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. doi:10.1103/RevModPhys.74.47. S2CID 60545.
Amaral LAN, Scala A, Barthelemy M, Stanley HE (2000). "Classes of small-world networks". PNAS. 97 (21): 11149–52. arXiv:cond-mat/0001458. Bibcode:2000PNAS...9711149A. doi:10.1073/pnas.200327197. PMC 17168. PMID 11005838.
Barabási, Albert-László (2004). Linked: How Everything is Connected to Everything Else. Perseus Pub. ISBN 0-452-28439-2.
Barabási, Albert-László; Bonabeau, Eric (May 2003). "Scale-Free Networks" (PDF). Scientific American. 288 (5): 50–9. Bibcode:2003SciAm.288e..60B. doi:10.1038/scientificamerican0503-60. PMID 12701331.
Dan Braha; Yaneer Bar-Yam (2004). "Topology of Large-Scale Engineering Problem-Solving Networks" (PDF). Phys. Rev. E. 69 (1): 016113. Bibcode:2004PhRvE..69a6113B. doi:10.1103/PhysRevE.69.016113. PMID 14995673. S2CID 1001176.
Caldarelli G. "Scale-Free Networks" Oxford University Press, Oxford (2007).
Caldarelli G.; Capocci A.; De Los Rios P.; Muñoz M.A. (2002). "Scale-free networks from varying vertex intrinsic fitness". Physical Review Letters. 89 (25): 258702. arXiv:cond-mat/0207366. Bibcode:2002PhRvL..89y8702C. doi:10.1103/PhysRevLett.89.258702. PMID 12484927.
Dangalchev, Ch. (2004). "Generation models for scale-free networks". Physica A. 338 (3–4): 659–671. Bibcode:2004PhyA..338..659D. doi:10.1016/j.physa.2004.01.056.
Dorogovtsev, S.N.; Mendes, J.F.F.; Samukhin, A.N. (2000). "Structure of Growing Networks: Exact Solution of the Barabási—Albert's Model". Phys. Rev. Lett. 85 (21): 4633–6. arXiv:cond-mat/0004434. Bibcode:2000PhRvL..85.4633D. doi:10.1103/PhysRevLett.85.4633. PMID 11082614. S2CID 118876189.
Dorogovtsev, S.N.; Mendes, J.F.F. (2003). Evolution of Networks: from biological networks to the Internet and WWW. Oxford University Press. ISBN 0-19-851590-1.
Dorogovtsev, S.N.; Goltsev A.V.; Mendes, J.F.F. (2008). "Critical phenomena in complex networks". Rev. Mod. Phys. 80 (4): 1275–1335. arXiv:0705.0010. Bibcode:2008RvMP...80.1275D. doi:10.1103/RevModPhys.80.1275. S2CID 3174463.
Dorogovtsev, S.N.; Mendes, J.F.F. (2002). "Evolution of networks". Advances in Physics. 51 (4): 1079–1187. arXiv:cond-mat/0106144. Bibcode:2002AdPhy..51.1079D. doi:10.1080/00018730110112519. S2CID 429546.
Erdős, P.; Rényi, A. (1960). On the Evolution of Random Graphs (PDF). Vol. 5. Publication of the Mathematical Institute of the Hungarian Academy of Science. pp. 17–61.
Faloutsos, M.; Faloutsos, P.; Faloutsos, C. (1999). "On power-law relationships of the internet topology". ACM SIGCOMM Computer Communication Review. 29 (4): 251–262. doi:10.1145/316194.316229.
Li, L.; Alderson, D.; Tanaka, R.; Doyle, J.C.; Willinger, W. (2005). "Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)". arXiv:cond-mat/0501169.
Kumar, R.; Raghavan, P.; Rajagopalan, S.; Sivakumar, D.; Tomkins, A.; Upfal, E. (2000). "Stochastic models for the web graph" (PDF). Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS). Redondo Beach, CA: IEEE CS Press. pp. 57–65.
Matlis, Jan (November 4, 2002). "Scale-Free Networks".
Newman, Mark E.J. (2003). "The structure and function of complex networks". SIAM Review. 45 (2): 167–256. arXiv:cond-mat/0303516. Bibcode:2003SIAMR..45..167N. doi:10.1137/S003614450342480. S2CID 221278130.
Pastor-Satorras, R.; Vespignani, A. (2004). Evolution and Structure of the Internet: A Statistical Physics Approach. Cambridge University Press. ISBN 0-521-82698-5.
Pennock, D.M.; Flake, G.W.; Lawrence, S.; Glover, E.J.; Giles, C.L. (2002). "Winners don't take all: Characterizing the competition for links on the web". PNAS. 99 (8): 5207–11. Bibcode:2002PNAS...99.5207P. doi:10.1073/pnas.032085699. PMC 122747. PMID 16578867.
Robb, John. Scale-Free Networks and Terrorism, 2004.
Keller, E.F. (2005). "Revisiting "scale-free" networks". BioEssays. 27 (10): 1060–8. doi:10.1002/bies.20294. PMID 16163729. Archived from the original on 2011-08-13.
Onody, R.N.; de Castro, P.A. (2004). "Complex Network Study of Brazilian Soccer Player". Phys. Rev. E. 70 (3): 037103. arXiv:cond-mat/0409609. Bibcode:2004PhRvE..70c7103O. doi:10.1103/PhysRevE.70.037103. PMID 15524675. S2CID 31653489.
Kasthurirathna, D.; Piraveenan, M. (2015). "Complex Network Study of Brazilian Soccer Player". Sci. Rep. In Press. | Wikipedia/Scale-free_network |
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process.
For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint.
For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
== Open-loop and closed-loop control ==
== Feedback control systems ==
== Logic control ==
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs.
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists.
== On–off control ==
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on-off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor. When the pressure (PV) drops below the setpoint (SP) the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
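A minimal sketch of such a controller in Python. The hysteresis band is an added practical detail (not mentioned above) that prevents rapid cycling around the setpoint, and all temperature readings are hypothetical:

```python
def on_off(pv, sp, heating, band=0.5):
    """Bang-bang thermostat: switch on below SP - band, off above SP + band;
    inside the deadband the current state is kept to avoid rapid cycling."""
    if pv < sp - band:
        return True
    if pv > sp + band:
        return False
    return heating

state = False
for pv in (18.0, 19.4, 19.8, 20.6, 20.2):   # hypothetical room temperatures
    state = on_off(pv, sp=20.0, heating=state)
    print(pv, state)   # True, True, True, False, False
```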
== Linear control ==
== Fuzzy logic ==
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are fuzzified, the logic is calculated arithmetically, as opposed to Boolean logic, and the outputs are de-fuzzified to control equipment.
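A toy version of the furnace rules above, with triangular memberships and a simple weighted-sum de-fuzzification; the setpoint, spans, and rule outputs are arbitrary choices for illustration:

```python
def mu_too_low(t, sp=200.0, span=50.0):
    """Degree of truth, in [0, 1], of 'the temperature is too low'."""
    return max(0.0, min(1.0, (sp - t) / span))

def mu_too_high(t, sp=200.0, span=50.0):
    """Degree of truth, in [0, 1], of 'the temperature is too high'."""
    return max(0.0, min(1.0, (t - sp) / span))

def fuel_change(t):
    # Rules: "too low -> increase fuel", "too high -> decrease fuel".
    # Partial truths blend arithmetically; the weighted sum de-fuzzifies
    # the rule outputs into a single crisp adjustment.
    return mu_too_low(t) * (+1.0) + mu_too_high(t) * (-1.0)

print(fuel_change(180.0))   # +0.4: partly "too low", so increase fuel
print(fuel_change(200.0))   #  0.0: at the setpoint
print(fuel_change(230.0))   # -0.6: partly "too high", so decrease fuel
```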
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
== Physical implementation ==
The range of control system implementation is from compact controllers often with dedicated software for a particular machine or device, to distributed control systems for industrial process control for a large physical plant.
Logic systems and feedback controllers are usually implemented with programmable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems.
== See also ==
== References ==
== External links ==
SystemControl Create, simulate or HWIL control loops with Python. Includes Kalman filter, LQG control among others.
Semiautonomous Flight Direction - Reference unmannedaircraft.org
Control System Toolbox for design and analysis of control systems.
Control Systems Manufacturer Design and Manufacture of control systems.
Mathematica functions for the analysis, design, and simulation of control systems
Python Control System (PyConSys) Create and simulate control loops with Python. AI for setting PID parameters. | Wikipedia/Control_system |
Rational choice modeling refers to the use of decision theory (the theory of rational choice) as a set of guidelines to help understand economic and social behavior. The theory tries to approximate, predict, or mathematically model human behavior by analyzing the behavior of a rational actor facing the same costs and benefits.
Rational choice models are most closely associated with economics, where mathematical analysis of behavior is standard. However, they are widely used throughout the social sciences, and are commonly applied to cognitive science, criminology, political science, and sociology.
== Overview ==
The basic premise of rational choice theory is that the decisions made by individual actors will collectively produce aggregate social behaviour. The theory also assumes that individuals have preferences among available choice alternatives. These preferences are assumed to be complete and transitive. Completeness refers to the individual being able to say which of the options they prefer (i.e. the individual prefers A over B, B over A, or is indifferent to both). Transitivity is where the individual weakly prefers option A over B and weakly prefers option B over C, leading to the conclusion that the individual weakly prefers A over C. The rational agent will then perform their own cost–benefit analysis, using a variety of criteria, to select what they determine to be the best choice of action.
One version of rationality is instrumental rationality, which involves achieving a goal using the most cost effective method without reflecting on the worthiness of that goal. Duncan Snidal emphasises that the goals are not restricted to self-regarding, selfish, or material interests. They also include other-regarding, altruistic, as well as normative or ideational goals.
Rational choice theory does not claim to describe the choice process, but rather it helps predict the outcome and pattern of choice. It is consequently assumed that the individual is self-interested, or "homo economicus". Here, the individual comes to a decision that optimizes their preferences by balancing costs and benefits.
Rational choice theory has proposed that human action passes through two filters. First, the feasible region is determined: the set of all possible and relevant actions, bounded by the financial, legal, social, physical or emotional restrictions that the agent faces. Second, a choice is made from within the feasible region based on the agent's preference order.
The concept of rationality used in rational choice theory is different from the colloquial and most philosophical use of the word. In this sense, "rational" behaviour can refer to "sensible", "predictable", or "in a thoughtful, clear-headed manner." Rational choice theory uses a much more narrow definition of rationality. At its most basic level, behavior is rational if it is reflective and consistent (across time and different choice situations). More specifically, behavior is only considered irrational if it is logically incoherent, i.e. self-contradictory.
Early neoclassical economists writing about rational choice, including William Stanley Jevons, assumed that agents make consumption choices so as to maximize their happiness, or utility. Contemporary theory bases rational choice on a set of choice axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires) comes from. It mandates just a consistent ranking of the alternatives. Individuals choose the best action according to their personal preferences and the constraints facing them.
== Actions, assumptions, and individual preferences ==
Rational choice theory can be viewed in different contexts. At an individual level, the theory suggests that the agent will decide on the action (or outcome) they most prefer. If the actions (or outcomes) are evaluated in terms of costs and benefits, the choice with the maximum net benefit will be chosen by the rational individual. Rational behaviour is not solely driven by monetary gain, but can also be driven by emotional motives.
The theory can be applied to general settings outside of those identified by costs and benefits. In general, rational decision making entails choosing among all available alternatives the alternative that the individual most prefers. The "alternatives" can be a set of actions ("what to do?") or a set of objects ("what to choose/buy"). In the case of actions, what the individual really cares about are the outcomes that results from each possible action. Actions, in this case, are only an instrument for obtaining a particular outcome.
=== Formal statement ===
The available alternatives are often expressed as a set of objects, for example a set of j exhaustive and exclusive actions:
{\displaystyle A=\{a_{1},\ldots ,a_{i},\ldots ,a_{j}\}}
For example, if a person can choose to vote for either Roger or Sara or to abstain, their set of possible alternatives is:
{\displaystyle A=\{{\text{Vote for Roger, Vote for Sara, Abstain}}\}}
The theory makes two technical assumptions about individuals' preferences over alternatives:
Completeness – for any two alternatives ai and aj in the set, either ai is preferred to aj, or aj is preferred to ai, or the individual is indifferent between ai and aj. In other words, all pairs of alternatives can be compared with each other.
Transitivity – if alternative a1 is preferred to a2, and alternative a2 is preferred to a3, then a1 is preferred to a3.
Together these two assumptions imply that given a set of exhaustive and exclusive actions to choose from, an individual can rank the elements of this set in terms of his preferences in an internally consistent way (the ranking constitutes a total ordering, minus some assumptions), and the set has at least one maximal element.
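These two axioms are mechanical enough to check in code. A minimal sketch over a hypothetical three-alternative weak preference relation (the specific preferences are an arbitrary example):

```python
from itertools import combinations, permutations

A = {"Vote for Roger", "Vote for Sara", "Abstain"}
# Hypothetical weak preferences: (a, b) means "a is at least as good as b".
R = {("Vote for Sara", "Vote for Roger"),
     ("Vote for Roger", "Abstain"),
     ("Vote for Sara", "Abstain")} | {(a, a) for a in A}

complete = all((a, b) in R or (b, a) in R for a, b in combinations(A, 2))
transitive = all((a, c) in R
                 for a, b, c in permutations(A, 3)
                 if (a, b) in R and (b, c) in R)
print(complete, transitive)   # True True -> a consistent ranking exists
```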
The preference between two alternatives can be:
Strict preference occurs when an individual prefers a1 to a2 and does not view them as equally preferred.
Weak preference implies that individual either strictly prefers a1 over a2 or is indifferent between them.
Indifference occurs when an individual neither prefers a1 to a2, nor a2 to a1. Since (by completeness) the individual does not refuse a comparison, they must therefore be indifferent in this case.
Research since the 1980s has sought to develop models that weaken these assumptions and argue that some cases of such behaviour can still be considered rational. However, the Dutch book theorems show that weakening any of the Von Neumann–Morgenstern axioms comes at a major cost of internal coherence. The most severe consequences are associated with violating independence of irrelevant alternatives and transitive preferences, or with fully abandoning completeness rather than weakening it to "asymptotic" completeness.
== Utility maximization ==
Often preferences are described by their utility function or payoff function. This is an ordinal number that an individual assigns over the available actions, such as:
{\displaystyle u\left(a_{i}\right)>u\left(a_{j}\right).}
The individual's preferences are then expressed as the relation between these ordinal assignments. For example, if an individual prefers the candidate Sara over Roger over abstaining, their preferences would have the relation:
{\displaystyle u\left({\text{Sara}}\right)>u\left({\text{Roger}}\right)>u\left({\text{abstain}}\right).}
A preference relation that, as above, satisfies completeness and transitivity, and, in addition, continuity, can be equivalently represented by a utility function.
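In code, choosing rationally over a finite set then reduces to a single max over utilities. A minimal sketch with arbitrary ordinal values consistent with the preference above (only their order matters):

```python
# Hypothetical ordinal utilities consistent with Sara > Roger > abstain.
u = {"Vote for Sara": 3, "Vote for Roger": 2, "Abstain": 1}
print(max(u, key=u.get))   # "Vote for Sara"
```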
== Benefits ==
The rational choice approach allows preferences to be represented as real-valued utility functions. Economic decision making then becomes a problem of maximizing this utility function, subject to constraints (e.g. a budget). This has many advantages. It provides a compact theory that makes empirical predictions with a relatively sparse model – just a description of the agent's objectives and constraints. Furthermore, optimization theory is a well-developed field of mathematics. These two factors make rational choice models tractable compared to other approaches to choice. Most importantly, this approach is strikingly general. It has been used to analyze not only personal and household choices about traditional economic matters like consumption and savings, but also choices about education, marriage, child-bearing, migration, crime and so on, as well as business decisions about output, investment, hiring, entry, exit, etc. with varying degrees of success.
In the field of political science rational choice theory has been used to help predict human decision making and model for the future; therefore it is useful in creating effective public policy, and enables the government to develop solutions quickly and efficiently.
Despite the empirical shortcomings of rational choice theory, the flexibility and tractability of rational choice models (and the lack of equally powerful alternatives) lead to them still being widely used.
== Applications ==
Rational choice theory has become increasingly employed in social sciences other than economics, such as sociology, evolutionary theory and political science in recent decades. It has had far-reaching impacts on the study of political science, especially in fields like the study of interest groups, elections, behaviour in legislatures, coalitions, and bureaucracy. In these fields, the use of rational choice theory to explain broad social phenomena is the subject of controversy.
=== Rational choice theory in political science ===
Rational choice theory provides a framework to explain why groups of rational individuals can come to collectively irrational decisions. For example, while at the individual level a group of people may have common interests, applying a rational choice framework to their individually rational preferences can explain group-level outcomes that fail to accomplish any one individual's preferred objectives. Rational choice theory provides a framework to describe outcomes like this as the product of rational agents performing their own cost–benefit analysis to maximize their self-interests, a process that doesn't always align with the group's preferences.
==== Rational choice in voting behavior ====
Rational choice theory helps explain significant shifts in voting behaviour, the most significant of which occur in times of economic trouble. As an example in economic policy, economist Anthony Downs concluded that a high-income voter ‘votes for whatever party he believes would provide him with the highest utility income from government action’, using rational choice theory to explain people's income as the justification for their preferred tax rate.
Downs' work provides a framework for analyzing tax-rate preference in a rational choice framework. He argues that an individual votes if it is in their rational interest to do so. Downs models this utility function as B + D > C, where B is the benefit of the voter winning, D is the satisfaction derived from voting and C is the cost of voting. It is from this that we can determine that parties have moved their policy outlook to be more centric in order to maximise the number of voters they have for support. It is from this very simple framework that more complex adjustments can be made to describe the success of politicians as an outcome of their ability or failure to satisfy the utility function of individual voters.
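The voting calculus is simple enough to state directly in code (all numeric values below are hypothetical):

```python
def votes(B, D, C):
    """Downs' calculus of voting: the citizen votes iff B + D > C."""
    return B + D > C

# Hypothetical values: a small expected benefit B can still justify voting
# when the satisfaction D derived from voting outweighs the cost C.
print(votes(B=0.1, D=0.5, C=0.4))   # True
print(votes(B=0.1, D=0.1, C=0.4))   # False
```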
==== Rational choice theory in international relations ====
Rational choice theory has become one of the major tools used to study international relations. Proponents of its use in this field typically assume that states and the policies crafted at the national level are the outcome of self-interested, politically shrewd actors including, but not limited to, politicians, lobbyists, businesspeople, activists, regular voters and any other individual in the national audience. The use of rational choice theory as a framework to predict political behavior has led to a rich literature that describes the trajectory of policy to varying degrees of success. For example, some scholars have examined how states can make credible threats to deter other states from a (nuclear) attack. Others have explored under what conditions states wage war against each other. Yet others have investigated under what circumstances the threat and imposition of international economic sanctions tend to succeed and when they are likely to fail.
=== Rational choice theory in social interactions ===
Rational choice theory and social exchange theory involve looking at all social relations in the form of costs and rewards, both tangible and intangible.
According to Abell, rational choice theory means "understanding individual actors... as acting, or more likely interacting, in a manner such that they can be deemed to be doing the best they can for themselves, given their objectives, resources, circumstances, as they see them". Rational choice theory has been used to comprehend complex social phenomena that derive from the actions and motivations of individuals. Individuals are often highly motivated by their wants and needs.
Making calculative decisions is considered rational action. Individuals often make calculative decisions in social situations by weighing the pros and cons of an action taken towards a person. The decision to act on a rational decision is also dependent on the unforeseen benefits of the friendship. Homans mentions that the actions of humans are motivated by punishment or rewards. This reinforcement through punishments or rewards also determines the course of action taken by a person in a social situation. Individuals are motivated by mutual reinforcement and are also fundamentally motivated by the approval of others. Attaining the approval of others is, along with money, a generalized means of exchange in both social and economic exchanges. Economic exchange involves the exchange of goods or services. Social exchange is the exchange of approval and certain other valued behaviors.
Rational choice theory in this instance heavily emphasizes the individual's interest as a starting point for making social decisions. Despite differing viewpoints about rational choice theory, it all comes down to the individual as the basic unit of theory. Even though sharing, cooperation and cultural norms emerge, they all stem from an individual's initial concern for the self.
G. S. Becker offers an example of how rational choice can be applied to personal decisions, specifically the rationale behind decisions on whether to marry or divorce another individual. Due to the self-serving drive from which the theory of rational choice is derived, Becker concludes that people marry if the expected utility from such a marriage exceeds the utility one would gain from remaining single, and in the same way couples would separate should the utility of being together be less than expected and provide less (economic) benefit than being separated would. Since the theory behind rational choice is that individuals will take the course of action that best serves their personal interests, when considering relationships it is still assumed that they will display such a mentality due to deep-rooted, self-interested aspects of human nature.
Social exchange theory and rational choice theory both come down to an individual's efforts to meet their own personal needs and interests through the choices they make. Even though some choices may be made sincerely for the welfare of others at that point in time, both theories point to the benefits received in return. These returns may be received immediately or in the future, be they tangible or not.
Coleman discussed a number of theories to elaborate on the premises and promises of rational choice theory. One of the concepts he introduced was trust: "individuals place trust, in both judgement and performance of others, based on rational considerations of what is best, given the alternatives they confront". In a social situation, there has to be a level of trust among the individuals. He noted that this level of trust is a consideration that an individual takes into account before deciding on a rational action towards another individual. It affects the social situation as one navigates the risks and benefits of an action. By assessing the possible outcomes or alternatives to an action for another individual, the person is making a calculated decision. In another situation, such as making a bet, you are calculating the possible loss and how much can be won. If the chance of winning outweighs the cost of losing, the rational decision is to place the bet. Therefore, the decision to place trust in another individual involves the same rational calculations that are involved in the decision of making a bet.
Even though rational choice theory is used in both economic and social settings, there are some similarities and differences. The concepts of reward and reinforcement are parallel to each other, while the concept of cost is parallel to the concept of punishment. However, the underlying assumptions differ between the two contexts. In a social setting, the focus is often on current or past reinforcements, with no guarantee of immediate tangible or intangible returns from another individual in the future. In economics, decisions are made with heavier emphasis on future rewards.
Despite their differing focus, both perspectives primarily reflect on how individuals make different rational decisions when given immediate or long-term circumstances to consider in their decision making.
== Criticism ==
Both the assumptions and the behavioral predictions of rational choice theory have sparked criticism from various camps.
=== The limits of rationality ===
As mentioned above, some economists, such as Herbert Simon, have developed models of bounded rationality, which hope to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Simon argues that factors such as imperfect information, uncertainty and time constraints all limit our rationality, and therefore our decision-making. His contrast between 'satisficing' and 'optimizing' suggests that, because of these factors, we sometimes settle for a decision that is good enough rather than the best decision. Other economists have developed further theories of human decision-making that allow for the roles of uncertainty, institutions, and the determination of individual tastes by the socioeconomic environment (cf. Fernandez-Huerga, 2008).
=== Philosophical critiques ===
Martin Hollis and Edward J. Nell's 1975 book offers both a philosophical critique of neo-classical economics and an innovation in the field of economic methodology. Further, they outlined an alternative vision to neo-classicism based on a rationalist theory of knowledge. Within neo-classicism, the authors addressed consumer behaviour (in the form of indifference curves and simple versions of revealed preference theory) and marginalist producer behaviour in both product and factor markets. Both are based on rational optimizing behaviour. They consider imperfect as well as perfect markets, since neo-classical thinking embraces many market varieties and disposes of a whole system for their classification. However, the authors believe that the issues arising from basic maximizing models have extensive implications for econometric methodology (Hollis and Nell, 1975, p. 2). In particular, it is this class of models – rational behaviour as maximizing behaviour – which provides support for specification and identification. And this, they argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused: demands are made of it that it cannot fulfill. Ultimately, individuals do not always act rationally or conduct themselves in a utility-maximising manner.
Duncan K. Foley (2003, p. 1) has also provided an important criticism of the concept of rationality and its role in economics. He argued that:

“Rationality” has played a central role in shaping and establishing the hegemony of contemporary mainstream economics. As the specific claims of robust neoclassicism fade into the history of economic thought, an orientation toward situating explanations of economic phenomena in relation to rationality has increasingly become the touchstone by which mainstream economists identify themselves and recognize each other. This is not so much a question of adherence to any particular conception of rationality, but of taking rationality of individual behavior as the unquestioned starting point of economic analysis.
Foley (2003, p. 9) went on to argue that:

The concept of rationality, to use Hegelian language, represents the relations of modern capitalist society one-sidedly. The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at matters systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The well-known limitations of rational-actor theory, its static quality, its logical antinomies, its vulnerability to arguments of infinite regress, its failure to develop a progressive concrete research program, can all be traced to this starting-point.
More recently, Edward J. Nell and Karim Errouaki (2011, Ch. 1) argued that:

The DNA of neoclassical economics is defective. Neither the induction problem nor the problems of methodological individualism can be solved within the framework of neoclassical assumptions. The neoclassical approach is to call on rational economic man to solve both. Economic relationships that reflect rational choice should be ‘projectible’. But that attributes a deductive power to ‘rational’ that it cannot have consistently with positivist (or even pragmatist) assumptions (which require deductions to be simply analytic). To make rational calculations projectible, the agents may be assumed to have idealized abilities, especially foresight; but then the induction problem is out of reach because the agents of the world do not resemble those of the model. The agents of the model can be abstract, but they cannot be endowed with powers actual agents could not have. This also undermines methodological individualism; if behaviour cannot be reliably predicted on the basis of the ‘rational choices of agents’, a social order cannot reliably follow from the choices of agents.
=== Psychological critiques ===
The validity of rational choice theory has been strongly contested by the results of research in behavioral psychology. The revision or alternative theory that arose from these discrepancies is called prospect theory, and the 'doubly-divergent' critique of rational choice theory implicit in prospect theory has sometimes been presented as such a revision or alternative. Daniel Kahneman's work in this tradition has been notably elaborated by research undertaken and supervised by Jonathan Haidt and other scholars.
=== Empirical critiques ===
In their 1994 work Pathologies of Rational Choice Theory, Donald P. Green and Ian Shapiro argue that the empirical outputs of rational choice theory have been limited. They contend that much of the applicable literature, at least in political science, was done with weak statistical methods and that, when corrected, many of the empirical outcomes no longer hold. Seen in this light, rational choice theory has contributed very little to the overall understanding of political interaction – an amount certainly disproportionately weak relative to its prominence in the literature. Yet they concede that cutting-edge research by scholars well-versed in the general scholarship of their fields (such as work on the U.S. Congress by Keith Krehbiel, Gary Cox, and Mat McCubbins) has generated valuable scientific progress.
=== Methodological critiques ===
Schram and Caterino (2006) contains a fundamental methodological criticism of rational choice theory for promoting the view that the natural science model is the only appropriate methodology in social science and that political science should follow this model, with its emphasis on quantification and mathematization. Schram and Caterino argue instead for methodological pluralism. The same argument is made by William E. Connolly, who in his work Neuropolitics shows that advances in neuroscience further illuminate some of the problematic practices of rational choice theory.
=== Sociological critiques ===
Pierre Bourdieu fiercely opposed rational choice theory as grounded in a misunderstanding of how social agents operate. Bourdieu argued that social agents do not continuously calculate according to explicit rational and economic criteria. According to Bourdieu, social agents operate according to an implicit practical logic – a practical sense – and bodily dispositions. Social agents act according to their "feel for the game" (the "feel" being, roughly, habitus, and the "game" being the field).
Other social scientists, inspired in part by Bourdieu's thinking, have expressed concern about the inappropriate use of economic metaphors in other contexts, suggesting that this may have political implications. Their argument is that by treating everything as a kind of "economy", such metaphors make a particular vision of the way an economy works seem more natural. Thus, they suggest, rational choice is as much ideological as it is scientific.
==== Criticism based on motivational assumptions ====
Rational choice theorists discuss individual values and structural elements as equally important determinants of outcomes. However, for methodological reasons, in empirical applications more emphasis is usually placed on social structural determinants. Therefore, in line with structural functionalism and social network analysis perspectives, rational choice explanations are considered mainstream in sociology.
==== Criticism based on the assumption of realism ====
Some of the scepticism among sociologists regarding rational choice stems from what they see as its lack of realist assumptions. Social research has shown that social agents often act on habit or impulse, or on the power of emotion. Social agents anticipate the expected consequences of options in stock markets and economic crises and choose among them through collective "emotional drives", implying social forces rather than "rational" choices.
However, such sociological critiques commonly rest on a misunderstanding of rational choice theory. Rational choice theory does not explain what a rational person would do in a given situation; that falls under decision theory. Rational choice theory instead focuses on social outcomes rather than individual outcomes, where social outcomes are identified as stable equilibria in which no individual has an incentive to deviate from their course of action. The orientation of individuals' behaviour toward such social outcomes may be unintended or undesirable; the conclusions generated in such cases are therefore relegated to the "study of irrational behaviour".
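To make "stable equilibrium" concrete, the sketch below checks best responses in a 2×2 game in Python; the payoff values are hypothetical and chosen to show a stable but socially undesirable outcome:

```python
# Minimal Nash-equilibrium check for a 2x2 game (illustrative payoffs).
# payoffs[r][c] = (row player's payoff, column player's payoff)
payoffs = [
    [(3, 3), (0, 4)],   # a prisoner's-dilemma-like structure
    [(4, 0), (1, 1)],
]

def is_equilibrium(r, c):
    """True if neither player gains by unilaterally deviating from (r, c)."""
    row_ok = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in (0, 1))
    col_ok = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in (0, 1))
    return row_ok and col_ok

for r in (0, 1):
    for c in (0, 1):
        if is_equilibrium(r, c):
            print(f"stable outcome: ({r}, {c}) with payoffs {payoffs[r][c]}")
# Only (1, 1) prints: mutual defection is the outcome no individual will
# deviate from, even though (0, 0) would be better for both players.
```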
=== Criticism based on the biopolitical paradigm ===
The basic assumptions of rational choice theory do not take into account external factors (social, cultural, economic) that interfere with autonomous decision-making. Representatives of the biopolitical paradigm such as Michel Foucault drew attention to the micro-power structures that shape the soul, body and mind and thus top-down impose certain decisions on individuals. Humans – according to the assumptions of the biopolitical paradigm – therefore conform to dominant social and cultural systems rather than to their own subjectively defined goals, which they would seek to achieve through rational and optimal decisions.
=== Critiques on the basis of evolutionary psychology ===
An evolutionary psychology perspective suggests that many of the seeming contradictions and biases regarding rational choice can be explained as rational in the context of maximizing biological fitness in the ancestral environment, though not necessarily in the current one. Thus, when living at subsistence level, where a reduction of resources may have meant death, it may have been rational to place a greater value on losses than on gains. Proponents argue that this may also explain differences between groups.
=== Critiques on the basis of emotion research ===
Proponents of emotional choice theory criticize the rational choice paradigm by drawing on new findings from emotion research in psychology and neuroscience. They point out that rational choice theory is generally based on the assumption that decision-making is a conscious and reflective process based on thoughts and beliefs. It presumes that people decide on the basis of calculation and deliberation. However, cumulative research in neuroscience suggests that only a small part of the brain's activities operate at the level of conscious reflection. The vast majority of its activities consist of unconscious appraisals and emotions. The significance of emotions in decision-making has generally been ignored by rational choice theory, according to these critics. Moreover, emotional choice theorists contend that the rational choice paradigm has difficulty incorporating emotions into its models, because it cannot account for the social nature of emotions. Even though emotions are felt by individuals, psychologists and sociologists have shown that emotions cannot be isolated from the social environment in which they arise. Emotions are inextricably intertwined with people's social norms and identities, which are typically outside the scope of standard rational choice models. Emotional choice theory seeks to capture not only the social but also the physiological and dynamic character of emotions. It represents a unitary action model to organize, explain, and predict the ways in which emotions shape decision-making.
=== The difference between public and private spheres ===
Herbert Gintis has also provided an important criticism of rational choice theory. He argued that rationality differs between the public sphere (what one does in collective action) and the private sphere (what one does in one's private life). Gintis argues that this is because "models of rational choice in the private sphere treat agents' choices as instrumental", whereas "behaviour in the public sphere, by contrast, is largely non-instrumental because it is non-consequential": individuals make no difference to the outcome, "much as single molecules make no difference to the properties of the gas" (Gintis). This exposes a weakness of rational choice theory: in situations such as voting in an election, the rational decision for the individual would be not to vote, since a single vote makes no difference to the outcome of the election. However, if everyone were to act in this way, democratic society would collapse, as no one would vote. Rational choice theory therefore does not describe everything in the economic and political world, and other factors of human behaviour are at play.
== See also ==
== Notes ==
== References ==
Abella, Alex (2008). Soldiers of Reason: The RAND Corporation and the Rise of the American Empire. New York: Harcourt.
Allingham, Michael (2002). Choice Theory: A Very Short Introduction, Oxford, ISBN 978-0192803030.
Anand, P. (1993). Foundations of Rational Choice Under Risk. Oxford: Oxford University Press.
Amadae, S. M. (2003). Rationalizing Capitalist Democracy: The Cold War Origins of Rational Choice Liberalism. Chicago: University of Chicago Press.
Arrow, Kenneth J. ([1987] 1989). "Economic Theory and the Hypothesis of Rationality," in The New Palgrave: Utility and Probability, pp. 25–39.
Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press
Bicchieri, Cristina (2003). “Rationality and Game Theory”, in The Handbook of Rationality, The Oxford Reference Library of Philosophy, Oxford University Press.
Maquieira, Cristian (January 2019). Japan's Withdrawal from the International Whaling Commission: A Disaster That Could Have Been Avoided. Available at: [2] (accessed November 2019).
Downs, Anthony (1957). An Economic Theory of Democracy. New York: Harper.
Downs, Anthony (1957). "An Economic Theory of Political Action in a Democracy", Journal of Political Economy, Vol. 65, No. 2, pp. 135–150.
Coleman, James S. (1990). Foundations of Social Theory
Dixon, Huw (2001), Surfing Economics, Pearson. Especially chapters 7 and 8
Elster, Jon (1979). Ulysses and the Sirens, Cambridge University Press.
Elster, Jon (1989). Nuts and Bolts for the Social Sciences, Cambridge University Press.
Elster, Jon (2007). Explaining Social Behavior – more Nuts and Bolts for the Social Sciences, Cambridge University Press.
Fernandez-Huerga (2008). "The Economic Behavior of Human Beings: The Institutionalist/Post-Keynesian Model", Journal of Economic Issues, vol. 42, no. 3, September.
Schram, Sanford F. and Brian Caterino, eds. (2006). Making Political Science Matter: Debating Knowledge, Research, and Method. New York and London: New York University Press.
Walsh, Vivian (1996). Rationality, Allocation, and Reproduction, Oxford. Description and scroll to chapter-preview links.
Martin Hollis and Edward J. Nell (1975) Rational Economic Man. Cambridge: Cambridge University Press.
Foley, D. K. (1989) Ideology and Methodology. An unpublished lecture to Berkeley graduate students in 1989 discussing personal and collective survival strategies for non-mainstream economists.
Foley, D.K. (1998). Introduction (chapter 1) in Peter S. Albin, Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems. Princeton: Princeton University Press.
Foley, D. K. (2003). Rationality and Ideology in Economics. Lecture in the World Political Economy course at the Graduate Faculty, New School University.
Boland, L. (1982) The Foundations of Economic Method. London: George Allen & Unwin
Edward J. Nell and Errouaki, K. (2011) Rational Econometric Man. Cheltenham: E. Elgar.
Pierre Bourdieu (2005) The Social Structures of the Economy, Polity 2005
Calhoun, C. et al. (1992) "Pierre Bourdieu: Critical Perspectives." University of Chicago Press.
Gary Browning, Abigail Halcli, Frank Webster, 2000, Understanding Contemporary Society: Theories of the Present, London, Sage Publications
Grenfell, M (2011) "Bourdieu, Language and Linguistics" London, Continuum.
Grenfell, M. (ed) (2008) "Pierre Bourdieu: Key concepts" London, Acumen Press
Gintis, Herbert. "Rational Choice and Political Behaviour: A Lecture by Herbert Gintis". Centre for the Study of Governance and Society (CSGS), YouTube video, 23:57, 21 November 2018.
== Further reading ==
Gilboa, Itzhak (2010). Rational Choice. Cambridge, MA: MIT Press.
Green, Donald P., and Justin Fox (2007). "Rational Choice Theory," in The SAGE Handbook of Social Science Methodology, edited by William Outhwaite and Stephen P. Turner. London: Sage, pp. 269–282.
Kydd, Andrew H. (2008). "Methodological Individualism and Rational Choice," The Oxford Handbook of International Relations, edited by Christian Reus-Smit and Duncan Snidal. Oxford: Oxford University Press, pp. 425–443.
Mas-Colell, A., M. D. Whinston, and J. R. Green (1995). Microeconomic Theory. Oxford: Oxford University Press.
Nedergaard, Peter (July 2006). "The 2003 reform of the Common Agricultural Policy: against all odds or rational explanations?" (PDF). Journal of European Integration. 28 (3): 203–223. doi:10.1080/07036330600785749. S2CID 154437960.
== External links ==
Rational Choice Theory at the Stanford Encyclopedia of Philosophy
Rational Choice Theory – Article by John Scott
The New Nostradamus – on the use by Bruce Bueno de Mesquita of rational choice theory in political forecasting
To See The Future, Use The Logic Of Self-Interest – NPR audio clip | Wikipedia/Rational_choice_theory |
Biochemical systems theory is a mathematical modelling framework for biochemical systems, based on ordinary differential equations (ODE), in which biochemical processes are represented using power-law expansions in the variables of the system.
This framework, which became known as Biochemical Systems Theory, has been developed since the 1960s by Michael Savageau, Eberhard Voit and others for the systems analysis of biochemical processes. According to Cornish-Bowden (2007) they "regarded this as a general theory of metabolic control, which includes both metabolic control analysis and flux-oriented theory as special cases".
== Representation ==
The dynamics of a species is represented by a differential equation with the structure:

{\displaystyle {\frac {dX_{i}}{dt}}=\sum _{j=1}^{n_{f}}\mu _{ij}\gamma _{j}\prod _{k=1}^{n_{d}}X_{k}^{f_{jk}}}

where Xi represents one of the nd variables of the model (metabolite concentrations, protein concentrations or levels of gene expression), j represents the nf biochemical processes affecting the dynamics of the species, and μij (stoichiometric coefficients), γj (rate constants) and fjk (kinetic orders) are three different kinds of parameters defining the dynamics of the system.
The principal difference of power-law models with respect to other ODE models used in biochemical systems is that the kinetic orders can be non-integer numbers. A kinetic order can have even negative value when inhibition is modeled. In this way, power-law models have a higher flexibility to reproduce the non-linearity of biochemical systems.
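As a minimal illustration (a sketch with hypothetical rate constants and kinetic orders, not a model from the BST literature), the following Python code integrates a two-variable power-law model in which a negative kinetic order expresses inhibition of one variable's production by the other:

```python
# Sketch: two-variable power-law model with non-integer and negative
# kinetic orders, integrated with SciPy. Parameter values are hypothetical.
from scipy.integrate import solve_ivp

g1, g2, g3, g4 = 2.0, 1.0, 1.5, 1.0

def rhs(t, x):
    x1, x2 = x
    # dX1/dt = g1 * X2^(-0.5) - g2 * X1^0.8   (X2 inhibits production of X1)
    # dX2/dt = g3 * X1^0.5    - g4 * X2^0.9
    return [g1 * x2**-0.5 - g2 * x1**0.8,
            g3 * x1**0.5 - g4 * x2**0.9]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0])
print(sol.y[:, -1])  # approximate steady-state concentrations
```

The non-integer exponents (0.8, 0.9, 0.5) and the negative exponent −0.5 are exactly the kinetic orders that give power-law models their flexibility.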
Models using power-law expansions have been used during the last 35 years to model and analyze several kinds of biochemical systems including metabolic networks, genetic networks and recently in cell signalling.
== See also ==
Dynamical systems
Ludwig von Bertalanffy
Systems theory
== References ==
== Literature ==
Books:
M.A. Savageau, Biochemical systems analysis: a study of function and design in molecular biology, Reading, MA, Addison–Wesley, 1976.
E.O. Voit (ed), Canonical Nonlinear Modeling. S-System Approach to Understanding Complexity, Van Nostrand Reinhold, NY, 1991.
E.O. Voit, Computational Analysis of Biochemical Systems. A Practical Guide for Biochemists and Molecular Biologists, Cambridge University Press, Cambridge, U.K., 2000.
N.V. Torres and E.O. Voit, Pathway Analysis and Optimization in Metabolic Engineering, Cambridge University Press, Cambridge, U.K., 2002.
Scientific articles:
M.A. Savageau, Biochemical systems analysis: I. Some mathematical properties of the rate law for the component enzymatic reactions in: J. Theor. Biol. 25, pp. 365–369, 1969.
M.A. Savageau, Development of fractal kinetic theory for enzyme-catalysed reactions and implications for the design of biochemical pathways in: Biosystems 47(1-2), pp. 9–36, 1998.
M.R. Atkinson et al., Design of gene circuits using power-law models, in: Cell 113, pp. 597–607, 2003.
F. Alvarez-Vasquez et al., Simulation and validation of modelled sphingolipid metabolism in Saccharomyces cerevisiae, in: Nature 433(7024), pp. 425–430, 2005.
J. Vera et al., Power-law models of signal transduction pathways, in: Cellular Signalling (doi:10.1016/j.cellsig.2007.01.029), 2007.
Eberhard O. Voit, Applications of Biochemical Systems Theory, 2006.
== External links ==
[1] Savageau Lab at UC Davis
[2] Voit Lab at GA Tech | Wikipedia/Biochemical_systems_theory |
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays.
== Overview ==
System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design.
Convenient graphical user interface (GUI) system dynamics software had developed into user-friendly versions by the 1990s and has since been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions and control. The best-known SD model is probably 1972's The Limits to Growth, which forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios.
System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts.
== History ==
System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics.
During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language, SIMPLE (Simulation of Industrial Management Problems with Lots of Equations), in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and DYNAMO became the industry-standard system dynamics language for over thirty years. Forrester published the first, and still classic, book in the field, Industrial Dynamics, in 1961.
From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life teaching what he called "Economic Dynamics", which could be considered a precursor of modern Non-equilibrium economics.
The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.
== Topics in systems dynamics ==
The primary elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays.
As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans.
=== Causal loop diagrams ===
In the system dynamics methodology, a problem or a system (e.g., ecosystem, political system or mechanical system) may be represented as a causal loop diagram. A causal loop diagram is a simple map of a system with all its constituent components and their interactions. By capturing interactions and consequently the feedback loops (see figure below), a causal loop diagram reveals the structure of a system. By understanding the structure of a system, it becomes possible to ascertain a system's behavior over a certain time period.
The causal loop diagram of the new product introduction may look as follows:
There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow.
The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters.
Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone.
=== Stock and flow diagrams ===
Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software.
A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock.
In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one.
=== Equations ===
The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this.
The steps involved in a simulation are:
Define the problem boundary.
Identify the most important stocks and flows that change these stock levels.
Identify sources of information that impact the flows.
Identify the main feedback loops.
Draw a causal loop diagram that links the stocks, flows and sources of information.
Write the equations that determine the flows.
Estimate the parameters and initial conditions. These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information.
Simulate the model and analyse results.
In this example, the equations that change the two stocks via the flow are:
{\displaystyle {\mbox{Potential adopters}}=-\int _{0}^{t}{\mbox{New adopters}}\,dt}

{\displaystyle {\mbox{Adopters}}=\int _{0}^{t}{\mbox{New adopters}}\,dt}
=== Equations in discrete time ===
List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15 :
{\displaystyle 1)\ {\mbox{Probability that contact has not yet adopted}}={\mbox{Potential adopters}}/({\mbox{Potential adopters}}+{\mbox{Adopters}})}

{\displaystyle 2)\ {\mbox{Imitators}}=q\cdot {\mbox{Adopters}}\cdot {\mbox{Probability that contact has not yet adopted}}}

{\displaystyle 3)\ {\mbox{Innovators}}=p\cdot {\mbox{Potential adopters}}}

{\displaystyle 4)\ {\mbox{New adopters}}={\mbox{Innovators}}+{\mbox{Imitators}}}

{\displaystyle 4.1)\ {\mbox{Potential adopters}}\ -={\mbox{New adopters}}}

{\displaystyle 4.2)\ {\mbox{Adopters}}\ +={\mbox{New adopters}}}

{\displaystyle p=0.03}

{\displaystyle q=0.4}
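These discrete-time equations translate directly into a short simulation. The sketch below is in Python; the initial stock levels are illustrative assumptions, since the text above fixes only p and q:

```python
# Discrete-time simulation of the adoption model, years 1 to 15.
# p and q are from the text; initial stock levels are assumed.
p, q = 0.03, 0.4
potential_adopters, adopters = 1_000_000.0, 0.0

for year in range(1, 16):
    prob_not_adopted = potential_adopters / (potential_adopters + adopters)  # eq. 1
    imitators = q * adopters * prob_not_adopted                              # eq. 2
    innovators = p * potential_adopters                                      # eq. 3
    new_adopters = innovators + imitators                                    # eq. 4
    potential_adopters -= new_adopters                                       # eq. 4.1
    adopters += new_adopters                                                 # eq. 4.2
    print(f"year {year:2d}: adopters = {adopters:12.0f}")
```

Printing the adopter stock each year reproduces the s-shaped growth described in the next subsection.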
==== Dynamic simulation results ====
The dynamic simulation results show that the behaviour of the system would be to have growth in adopters that follows a classic s-curve shape.
The increase in adopters is very slow initially, followed by exponential growth for a period and ultimately by saturation.
=== Equations in continuous time ===
To get intermediate values and better accuracy, the model can be run in continuous time: the number of time units is multiplied and the values that change the stock levels are divided proportionally. In this example, the 15 years are multiplied by 4 to obtain 60 quarters, and the value of the flow is divided by 4.
Dividing the value is the simplest with the Euler method, but other methods could be employed instead, such as Runge–Kutta methods.
List of the equations in continuous time, for quarters 1 to 60:
They are the same equations as in the section Equations in discrete time above, except that equations 4.1 and 4.2 are replaced by the following:
{\displaystyle 10)\ {\mbox{Valve New adopters}}={\mbox{New adopters}}\cdot TimeStep}

{\displaystyle 10.1)\ {\mbox{Potential adopters}}\ -={\mbox{Valve New adopters}}}

{\displaystyle 10.2)\ {\mbox{Adopters}}\ +={\mbox{Valve New adopters}}}

{\displaystyle TimeStep=1/4}
In the stock and flow diagram below, the intermediate flow 'Valve New adopters' computes the equation:

{\displaystyle {\mbox{Valve New adopters}}={\mbox{New adopters}}\cdot TimeStep}
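Under the Euler method, the continuous-time version amounts to scaling each flow by the time step. A sketch in Python (again with assumed initial stocks):

```python
# Euler integration of the same adoption model with TimeStep = 1/4 (quarters).
p, q, time_step = 0.03, 0.4, 0.25
potential_adopters, adopters = 1_000_000.0, 0.0  # assumed initial stocks

for quarter in range(1, 61):
    prob_not_adopted = potential_adopters / (potential_adopters + adopters)
    new_adopters = p * potential_adopters + q * adopters * prob_not_adopted
    valve_new_adopters = new_adopters * time_step   # eq. 10
    potential_adopters -= valve_new_adopters        # eq. 10.1
    adopters += valve_new_adopters                  # eq. 10.2

print(f"adopters after 15 years: {adopters:.0f}")
```

A smaller time step, or a Runge–Kutta scheme as noted above, improves accuracy at the cost of more computation.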
== Application ==
System dynamics has found application in a wide range of areas, for example population, agricultural, epidemiological, ecological and economic systems, which usually interact strongly with each other.
System dynamics also has various "back of the envelope" management applications. It is a potent tool to:
Teach system thinking reflexes to persons being coached
Analyze and compare assumptions and mental models about the way things work
Gain qualitative insight into the workings of a system or the consequences of a decision
Recognize archetypes of dysfunctional systems in everyday practice
Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies.
System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.
A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the 2008 financial crisis.
=== Example: Growth and decline of companies ===
The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. Several features of this figure are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. The third is that thicker lines are used to identify the feedback loops and links on which the author wishes the audience to focus. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of the figure alone.
=== Example: Piston motion ===
Objective: study of a crank–connecting rod system, modeled here with a system dynamics model. Two different full descriptions of the physical system with related systems of equations can be found here (in English) and here (in French); they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length.
System dynamics modeling: the system is now modeled according to a stock and flow system dynamics logic. The figure below shows the stock and flow diagram.
Simulation: the behavior of the crank-connecting rod dynamic system can then be simulated. The next figure is a 3D simulation created using procedural animation. Variables of the model animate all parts of this animation: crank, radius, angular frequency, rod length, and piston position.
== See also ==
== References ==
== Further reading ==
Kypuros, Javier (2013). System dynamics and control with bond graph modeling. Boca Raton: Taylor & Francis. ISBN 978-1466560758.
Forrester, Jay W. (1961). Industrial Dynamics. M.I.T. Press.
Forrester, Jay W. (1969). Urban Dynamics. Pegasus Communications. ISBN 978-1-883823-39-9.
Meadows, Donella H. (1972). The Limits to Growth. New York: Universe Books. ISBN 978-0-87663-165-2.
Morecroft, John (2007). Strategic Modelling and Business Dynamics: A Feedback Systems Approach. John Wiley & Sons. ISBN 978-0-470-01286-4.
Roberts, Edward B. (1978). Managerial Applications of System Dynamics. Cambridge: MIT Press. ISBN 978-0-262-18088-7.
Randers, Jorgen (1980). Elements of the System Dynamics Method. Cambridge: MIT Press. ISBN 978-0-915299-39-3.
Senge, Peter (1990). The Fifth Discipline. Currency. ISBN 978-0-385-26095-4.
Sterman, John D. (2000). Business Dynamics: Systems thinking and modeling for a complex world. McGraw Hill. ISBN 978-0-07-231135-8.
== External links ==
System Dynamics Society
Introducing System Dynamics – a study prepared for the U.S. Department of Energy
Desert Island Dynamics "An Annotated Survey of the Essential System Dynamics Literature"
True World : Temporal Reasoning Universal Elaboration : System Dynamics software used for diagrams in this article (free) | Wikipedia/System_Dynamics |
The basic study of systems design is the understanding of component parts and their interaction with one another.
Systems design has appeared in a variety of fields, including sustainability, computer/software architecture, and sociology.
== Product Development ==
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured.
Thus in product development, systems design involves the process of defining and developing systems, such as interfaces and data, for an electronic control system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.
=== Physical design ===
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Input requirement,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
User Interface Design
Data Design
Process Design
=== Architecture design ===
Designing the overall structure of a system focuses on creating a scalable, reliable, and efficient system. For example, services like Google, Twitter, Facebook, Amazon, and Netflix exemplify large-scale distributed systems. Here are key considerations:
Functional and non-functional requirements
Capacity estimation
Usage of relational and/or NoSQL databases
Vertical scaling, horizontal scaling, sharding
Load balancing
Primary-secondary replication
Cache and CDN
Stateless and Stateful servers
Datacenter georouting
Message Queue, Publish-Subscribe Architecture
Performance Metrics Monitoring and Logging
Build, test, configure deploy automation
Finding single point of failure
API Rate Limiting (see the sketch after this list)
Service Level Agreement
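As one concrete illustration of the considerations above, API rate limiting is commonly implemented with a token bucket. The sketch below is a minimal single-process Python version; the capacity and refill rate are arbitrary example values, and a production deployment would typically keep the bucket state in a shared store such as Redis:

```python
# Minimal token-bucket rate limiter (illustrative parameters).
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity            # maximum burst size
        self.tokens = capacity              # start full
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, refill_per_sec=5)  # ~5 requests/second
if not bucket.allow():
    print("429 Too Many Requests")
```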
=== Machine Learning Systems Design ===
Machine learning systems design focuses on building scalable, reliable, and efficient systems that integrate machine learning (ML) models to solve real-world problems. ML systems require careful consideration of data pipelines, model training, and deployment infrastructure. ML systems are often used in applications such as recommendation engines, fraud detection, and natural language processing.
Key components to consider when designing ML systems include:
Problem Definition: Clearly define the problem, data requirements, and evaluation metrics. Success criteria often involve accuracy, latency, and scalability.
Data Pipeline: Build automated pipelines to collect, clean, transform, and validate data.
Model Selection and Training: Choose appropriate algorithms (e.g., linear regression, decision trees, neural networks) and train models using frameworks like TensorFlow or PyTorch.
Deployment and Serving: Deploy trained models to production environments using scalable architectures such as containerized services (e.g., Docker and Kubernetes).
Monitoring and Maintenance: Continuously monitor model performance, retrain as necessary, and ensure data drift is addressed.
Designing an ML system involves balancing trade-offs between accuracy, latency, cost, and maintainability, while ensuring system scalability and reliability. The discipline overlaps with MLOps, a set of practices that unifies machine learning development and operations to ensure smooth deployment and lifecycle management of ML systems.
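A minimal sketch of the training-and-persisting portion of such a system, using scikit-learn (the dataset and the success threshold are placeholder assumptions; a real system would add the pipeline, serving, and monitoring concerns listed above):

```python
# Sketch: train, evaluate against a success criterion, and persist a model
# for a serving layer to load. Dataset and threshold are placeholders.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
assert accuracy > 0.9, "model fails the (assumed) success criterion"

joblib.dump(model, "model.joblib")  # artifact picked up by the serving layer
```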
== See also ==
== References ==
== Further reading ==
Bentley, Lonnie D.; Dittman, Kevin C.; Whitten, Jeffrey L. (2004) [1986]. System analysis and design methods.
Churchman, C. West (1971). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. New York: Basic Books. ISBN 0-465-01608-1.
Gosling, William (1962). The design of engineering systems. New York: Wiley.
Hawryszkiewycz, Igor T. (1994). Introduction to system analysis and design. Prentice Hall PTR.
Levin, Mark S. (2015). Modular system design and evaluation. Springer.
Maier, Mark W.; Rechtin, Eberhardt (2000). The Art of System Architecting (Second ed.). Boca Raton: CRC Press.
J. H. Saltzer; D. P. Reed; D. D. Clark (1 November 1984). "End-to-end arguments in system design" (PDF). ACM Transactions on Computer Systems. 2 (4): 277–288. doi:10.1145/357401.357402. ISSN 0734-2071. S2CID 215746877. Wikidata Q56503280.
Whitten, Jeffrey L.; Bentley, Lonnie D.; Dittman, Kevin C. (2004). Fundamentals of system analysis and design methods.
== External links ==
Interactive System Design. Course by Chris Johnson, 1993
[1] Course by Prof. Birgit Weller, 2020 | Wikipedia/Systems_design |