In mathematics, weak topology is an alternative term for certain initial topologies, often on topological vector spaces or spaces of linear operators, for instance on a Hilbert space. The term is most commonly used for the initial topology of a topological vector space (such as a normed vector space) with respect to its continuous dual. The remainder of this article will deal with this case, which is one of the concepts of functional analysis.
One may call subsets of a topological vector space weakly closed (respectively, weakly compact, etc.) if they are closed (respectively, compact, etc.) with respect to the weak topology. Likewise, functions are sometimes called weakly continuous (respectively, weakly differentiable, weakly analytic, etc.) if they are continuous (respectively, differentiable, analytic, etc.) with respect to the weak topology.
== History ==
Starting in the early 1900s, David Hilbert and Frigyes Riesz made extensive use of weak convergence. The early pioneers of functional analysis did not elevate norm convergence above weak convergence and oftentimes viewed weak convergence as preferable. In 1929, Banach introduced weak convergence for normed spaces and also introduced the analogous weak-* convergence. The weak topology is called topologie faible in French and schwache Topologie in German.
== The weak and strong topologies ==
Let 𝕂 be a topological field, namely a field with a topology such that addition, multiplication, and division are continuous. In most applications 𝕂 will be either the field of complex numbers or the field of real numbers with the familiar topologies.
=== Weak topology with respect to a pairing ===
Both the weak topology and the weak* topology are special cases of a more general construction for pairings, which we now describe.
The benefit of this more general construction is that any definition or result proved for it applies to both the weak topology and the weak* topology, obviating the need for many duplicate definitions, theorem statements, and proofs. This is also why the weak* topology is frequently referred to as the "weak topology": it is just an instance of the weak topology in the setting of this more general construction.
Suppose (X, Y, b) is a pairing of vector spaces over a topological field 𝕂 (i.e. X and Y are vector spaces over 𝕂 and b : X × Y → 𝕂 is a bilinear map).
Notation. For all x ∈ X, let b(x, •) : Y → 𝕂 denote the linear functional on Y defined by y ↦ b(x, y). Similarly, for all y ∈ Y, let b(•, y) : X → 𝕂 be defined by x ↦ b(x, y).
Definition. The weak topology on X induced by Y (and b) is the weakest topology on X, denoted by 𝜎(X, Y, b) or simply 𝜎(X, Y), making all maps b(•, y) : X → 𝕂 continuous, as y ranges over Y.
The weak topology on Y is now automatically defined as described in the article Dual system. However, for clarity, we now repeat it.
Definition. The weak topology on Y induced by X (and b) is the weakest topology on Y, denoted by 𝜎(Y, X, b) or simply 𝜎(Y, X), making all maps b(x, •) : Y → 𝕂 continuous, as x ranges over X.
If the field 𝕂 has an absolute value |⋅|, then the weak topology 𝜎(X, Y, b) on X is induced by the family of seminorms p_y : X → ℝ defined by
p_y(x) := |b(x, y)|
for all y ∈ Y and x ∈ X. This shows that weak topologies are locally convex.
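As a concrete illustration, the following minimal sketch (an illustration added here, not part of the original article; the vectors are arbitrary assumed data) checks numerically that each p_y is a seminorm but not a norm for the dot-product pairing b(x, y) = x·y on ℝ³: it vanishes on the hyperplane orthogonal to y.

```python
import numpy as np

# Pairing b(x, y) = x . y on R^3; p_y(x) = |b(x, y)| is the seminorm
# induced by a fixed y, as in the definition above.
def p(y, x):
    return abs(np.dot(x, y))

y = np.array([1.0, 2.0, 0.0])
x1 = np.array([0.5, -1.0, 3.0])
x2 = np.array([2.0, 0.0, -1.0])

# Seminorm axioms hold: absolute homogeneity and the triangle inequality.
assert np.isclose(p(y, 3.0 * x1), 3.0 * p(y, x1))
assert p(y, x1 + x2) <= p(y, x1) + p(y, x2) + 1e-12

# But p_y is not a norm: it vanishes on nonzero vectors orthogonal to y,
# e.g. (0, 0, 1); only the whole family {p_y : y in Y} separates points.
assert p(y, np.array([0.0, 0.0, 1.0])) == 0.0
```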
Assumption. We will henceforth assume that 𝕂 is either the real numbers ℝ or the complex numbers ℂ.
==== Canonical duality ====
We now consider the special case where Y is a vector subspace of the algebraic dual space of X (i.e. a vector space of linear functionals on X).
There is a pairing, denoted by (X, Y, ⟨⋅, ⋅⟩) or (X, Y), called the canonical pairing, whose bilinear map ⟨⋅, ⋅⟩ is the canonical evaluation map, defined by ⟨x, x′⟩ = x′(x) for all x ∈ X and x′ ∈ Y. Note in particular that ⟨⋅, x′⟩ is just another way of denoting x′, i.e. ⟨⋅, x′⟩ = x′(⋅).
Assumption. If Y is a vector subspace of the algebraic dual space of X then we will assume that they are associated with the canonical pairing ⟨X, Y⟩.
In this case, the weak topology on X (resp. the weak topology on Y), denoted by 𝜎(X,Y) (resp. by 𝜎(Y,X)) is the weak topology on X (resp. on Y) with respect to the canonical pairing ⟨X, Y⟩.
The topology σ(X,Y) is the initial topology of X with respect to Y.
If Y is a vector space of linear functionals on X, then the continuous dual of X with respect to the topology σ(X,Y) is precisely equal to Y (Rudin 1991, Theorem 3.10).
==== The weak and weak* topologies ====
Let X be a topological vector space (TVS) over 𝕂, that is, X is a 𝕂 vector space equipped with a topology so that vector addition and scalar multiplication are continuous. We call the topology that X starts with the original, starting, or given topology (the reader is cautioned against using the terms "initial topology" and "strong topology" to refer to the original topology since these already have well-known meanings, so using them may cause confusion). We may define a possibly different topology on X using the topological or continuous dual space X*, which consists of all linear functionals from X into the base field 𝕂 that are continuous with respect to the given topology.
Recall that ⟨⋅, ⋅⟩ is the canonical evaluation map defined by ⟨x, x′⟩ = x′(x) for all x ∈ X and x′ ∈ X*, where in particular ⟨⋅, x′⟩ = x′(⋅) = x′.
Definition. The weak topology on X is the weak topology on X with respect to the canonical pairing ⟨X, X*⟩. That is, it is the weakest topology on X making all maps x′ = ⟨⋅, x′⟩ : X → 𝕂 continuous, as x′ ranges over X*.
Definition. The weak topology on X* is the weak topology on X* with respect to the canonical pairing ⟨X, X*⟩. That is, it is the weakest topology on X* making all maps ⟨x, ⋅⟩ : X* → 𝕂 continuous, as x ranges over X. This topology is also called the weak* topology.
We give alternative definitions below.
=== Weak topology induced by the continuous dual space ===
Alternatively, the weak topology on a TVS X is the initial topology with respect to the family X*. In other words, it is the coarsest topology on X such that each element of X* remains a continuous function.
A subbase for the weak topology is the collection of sets of the form φ⁻¹(U) where φ ∈ X* and U is an open subset of the base field 𝕂. In other words, a subset of X is open in the weak topology if and only if it can be written as a union of (possibly infinitely many) sets, each of which is an intersection of finitely many sets of the form φ⁻¹(U).
From this point of view, the weak topology is the coarsest polar topology.
=== Weak convergence ===
The weak topology is characterized by the following condition: a net (x_λ) in X converges in the weak topology to the element x of X if and only if φ(x_λ) converges to φ(x) in ℝ or ℂ for all φ ∈ X*.
In particular, if x_n is a sequence in X, then x_n converges weakly to x if
φ(x_n) → φ(x)
as n → ∞ for all φ ∈ X*. In this case, it is customary to write x_n →ʷ x or, sometimes, x_n ⇀ x.
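To make this concrete, here is a small numerical sketch (not from the article; the truncation dimension and the choice of functional are assumptions) of the standard example that the orthonormal basis vectors e_n of ℓ² converge weakly to 0 but not in norm: for any fixed y ∈ ℓ², ⟨e_n, y⟩ = y_n → 0, while ‖e_n‖ = 1 for every n.

```python
import numpy as np

# Work in a finite truncation R^N of l^2; e_n is the n-th standard basis vector.
N = 10_000
rng = np.random.default_rng(0)

# A fixed functional on l^2, represented (via Riesz) by a square-summable y.
y = rng.normal(size=N) / np.arange(1, N + 1)   # decays fast enough to lie in l^2

for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0
    # phi(e_n) = <e_n, y> = y_n -> 0, the weak-convergence condition ...
    print(f"n={n:5d}  phi(e_n)={np.dot(e_n, y):+.6f}  ||e_n||={np.linalg.norm(e_n):.1f}")
    # ... while ||e_n|| = 1 for all n, so e_n does not converge to 0 in norm.
```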
=== Other properties ===
If X is equipped with the weak topology, then addition and scalar multiplication remain continuous operations, and X is a locally convex topological vector space.
If X is a normed space, then the dual space X* is itself a normed vector space by using the norm
\|\phi\| = \sup_{\|x\| \leq 1} |\phi(x)|.
This norm gives rise to a topology, called the strong topology, on X*. This is the topology of uniform convergence. The uniform and strong topologies are generally different for other spaces of linear maps; see below.
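For a finite-dimensional illustration (a sketch with assumed data, not part of the article): on ℝⁿ with the Euclidean norm, the functional φ(x) = ⟨a, x⟩ has dual norm ‖φ‖ = ‖a‖₂ by Cauchy–Schwarz, which the following code checks against a brute-force supremum over random unit vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=5)                 # phi(x) = <a, x> on (R^5, ||.||_2)

# Brute-force sup_{||x|| <= 1} |phi(x)| over random unit vectors.
xs = rng.normal(size=(200_000, 5))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
sup_estimate = np.abs(xs @ a).max()

# By Cauchy-Schwarz the supremum is attained at x = a/||a||, so ||phi|| = ||a||_2.
print(f"sampled sup ~ {sup_estimate:.4f},  exact ||a||_2 = {np.linalg.norm(a):.4f}")
```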
== Weak-* topology ==
The weak* topology is an important example of a polar topology.
A space X can be embedded into its double dual X** by
x ↦ T_x, where T_x : X* → 𝕂 is given by T_x(φ) = φ(x).
Thus T : X → X** is an injective linear mapping, though not necessarily surjective (spaces for which this canonical embedding is surjective are called reflexive). The weak-* topology on X* is the weak topology induced by the image of T, i.e. by T(X) ⊂ X**. In other words, it is the coarsest topology such that the maps T_x, defined by T_x(φ) = φ(x) from X* to the base field ℝ or ℂ, remain continuous.
Weak-* convergence
A net φ_λ in X* is convergent to φ in the weak-* topology if it converges pointwise:
φ_λ(x) → φ(x)
for all x ∈ X. In particular, a sequence of φ_n ∈ X* converges to φ provided that φ_n(x) → φ(x) for all x ∈ X. In this case, one writes φ_n →ʷ* φ as n → ∞.
Weak-* convergence is sometimes called simple convergence or pointwise convergence. Indeed, it coincides with the pointwise convergence of linear functionals.
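A standard illustration (the test functions below are assumptions of this sketch, not from the article): in the dual of C([0,1]), the evaluation functionals δ_{1/n}(f) = f(1/n) converge weak-* to δ₀, because f(1/n) → f(0) for every continuous f, even though the dual-norm distance ‖δ_{1/n} − δ₀‖ equals 2 for every n.

```python
import numpy as np

# Evaluation functionals delta_t(f) = f(t) on C([0, 1]).
def delta(t):
    return lambda f: f(t)

# A few continuous test functions; pointwise convergence on every such f
# is exactly weak-* convergence of delta_{1/n} to delta_0.
tests = {"exp": np.exp, "cos": np.cos, "sqrt": np.sqrt}

for n in [1, 10, 100, 1000]:
    vals = {name: delta(1.0 / n)(f) for name, f in tests.items()}
    print(n, {k: round(float(v), 4) for k, v in vals.items()})
# Each column tends to f(0) (= 1, 1, 0 respectively), although in the dual
# norm the distance ||delta_{1/n} - delta_0|| stays equal to 2 for every n.
```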
=== Properties ===
If X is a separable (i.e. one with a countable dense subset) locally convex space and H is a norm-bounded subset of its continuous dual space, then H endowed with the weak* (subspace) topology is a metrizable topological space. However, for infinite-dimensional spaces, the metric cannot be translation-invariant. If X is a separable metrizable locally convex space, then the weak* topology on the continuous dual space of X is separable.
Properties on normed spaces
By definition, the weak* topology is weaker than the weak topology on X*. An important fact about the weak* topology is the Banach–Alaoglu theorem: if X is normed, then the closed unit ball in X* is weak*-compact (more generally, the polar in X* of a neighborhood of 0 in X is weak*-compact). Moreover, the closed unit ball in a normed space X is compact in the weak topology if and only if X is reflexive.
In more generality, let F be a locally compact valued field (e.g., the reals, the complex numbers, or any of the p-adic number systems). Let X be a normed topological vector space over F, compatible with the absolute value in F. Then in X*, the topological dual space of continuous F-valued linear functionals on X, all norm-closed balls are compact in the weak* topology.
If X is a normed space, a version of the Heine-Borel theorem holds. In particular, a subset of the continuous dual is weak* compact if and only if it is weak* closed and norm-bounded. This implies, in particular, that when X is an infinite-dimensional normed space then the closed unit ball at the origin in the dual space of X does not contain any weak* neighborhood of 0 (since any such neighborhood is norm-unbounded). Thus, even though norm-closed balls are compact, X* is not weak* locally compact.
If X is a normed space, then X is separable if and only if the weak* topology on the closed unit ball of X* is metrizable, in which case the weak* topology is metrizable on norm-bounded subsets of X*. If a normed space X has a dual space that is separable (with respect to the dual-norm topology) then X is necessarily separable. If X is a Banach space, the weak* topology is not metrizable on all of X* unless X is finite-dimensional.
== Examples ==
=== Hilbert spaces ===
Consider, for example, the difference between strong and weak convergence of functions in the Hilbert space L²(ℝⁿ). Strong convergence of a sequence ψ_k ∈ L²(ℝⁿ) to an element ψ means that
\int_{\mathbb{R}^n} |\psi_k - \psi|^2 \,\mathrm{d}\mu \to 0
as k → ∞. Here the notion of convergence corresponds to the norm on L².
In contrast, weak convergence only demands that
\int_{\mathbb{R}^n} \bar{\psi}_k f \,\mathrm{d}\mu \to \int_{\mathbb{R}^n} \bar{\psi} f \,\mathrm{d}\mu
for all functions f ∈ L² (or, more typically, all f in a dense subset of L² such as a space of test functions, if the sequence {ψ_k} is bounded). For given test functions, the relevant notion of convergence only corresponds to the topology used in ℂ.
For example, in the Hilbert space L²(0,π), the sequence of functions
\psi_k(x) = \sqrt{2/\pi}\,\sin(kx)
forms an orthonormal basis. In particular, the (strong) limit of ψ_k as k → ∞ does not exist. On the other hand, by the Riemann–Lebesgue lemma, the weak limit exists and is zero.
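The following numerical sketch (the test function f and grid are assumptions of this edit) illustrates exactly this: the inner products ⟨ψ_k, f⟩ decay to 0 for a fixed f ∈ L²(0,π), while ‖ψ_k‖ = 1 for every k, so no subsequence can converge strongly to 0.

```python
import numpy as np

# Discretize L^2(0, pi); psi_k(x) = sqrt(2/pi) sin(kx) is orthonormal.
x = np.linspace(0.0, np.pi, 200_001)
dx = x[1] - x[0]
f = np.exp(-x) * (1.0 + x)          # an arbitrary fixed element of L^2(0, pi)

for k in [1, 4, 16, 64, 256]:
    psi_k = np.sqrt(2.0 / np.pi) * np.sin(k * x)
    inner = np.trapz(psi_k * f, dx=dx)         # <psi_k, f> -> 0 (Riemann-Lebesgue)
    norm = np.sqrt(np.trapz(psi_k**2, dx=dx))  # ||psi_k|| = 1 for all k
    print(f"k={k:4d}  <psi_k, f>={inner:+.6f}  ||psi_k||={norm:.4f}")
```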
=== Distributions ===
One normally obtains spaces of distributions by forming the strong dual of a space of test functions (such as the compactly supported smooth functions on ℝⁿ). In an alternative construction of such spaces, one can take the weak dual of a space of test functions inside a Hilbert space such as L². Thus one is led to consider the idea of a rigged Hilbert space.
=== Weak topology induced by the algebraic dual ===
Suppose that X is a vector space and X# is the algebraic dual space of X (i.e. the vector space of all linear functionals on X). If X is endowed with the weak topology induced by X#, then the continuous dual space of X is X#, every bounded subset of X is contained in a finite-dimensional vector subspace of X, and every vector subspace of X is closed and has a topological complement.
== Operator topologies ==
If X and Y are topological vector spaces, the space L(X,Y) of continuous linear operators f : X → Y may carry a variety of different possible topologies. The naming of such topologies depends on the kind of topology one is using on the target space Y to define operator convergence (Yosida 1980, IV.7 Topologies of linear maps). There is, in general, a vast array of possible operator topologies on L(X,Y), whose naming is not entirely intuitive.
For example, the strong operator topology on L(X,Y) is the topology of pointwise convergence. For instance, if Y is a normed space, then this topology is defined by the seminorms indexed by x ∈ X:
f \mapsto \|f(x)\|_Y.
More generally, if a family of seminorms Q defines the topology on Y, then the seminorms p_{q,x} on L(X,Y) defining the strong topology are given by
p_{q,x} : f \mapsto q(f(x)),
indexed by q ∈ Q and x ∈ X.
In particular, see the weak operator topology and weak* operator topology.
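As an illustration of how these seminorms act (a finite-dimensional sketch with assumed data, not from the article): the powers of the backward shift on ℓ² converge to 0 in the strong operator topology, i.e. every seminorm f ↦ ‖f(x)‖ of them tends to 0, even though their operator norms stay equal to 1, so there is no convergence in the uniform (norm) topology.

```python
import numpy as np

# Truncate l^2 to R^N; L is the backward shift, (L x)_k = x_{k+1}.
N = 500
L = np.eye(N, k=1)

rng = np.random.default_rng(2)
x = rng.normal(size=N) / np.arange(1, N + 1)   # a fixed vector in l^2

for n in [1, 10, 100, 400]:
    Ln = np.linalg.matrix_power(L, n)
    # Seminorm p_x(f) = ||f(x)||: the values ||L^n x|| tend to 0 ...
    print(f"n={n:3d}  ||L^n x|| = {np.linalg.norm(Ln @ x):.6f}")
# ... so L^n -> 0 in the strong operator topology, yet ||L^n|| = 1 for all
# n < N (L^n maps the basis vector e_{n+1} to e_1), so the powers do not
# converge in the operator-norm (uniform) topology.
```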
== See also ==
Eberlein compactum, a compact set in the weak topology
Weak convergence (Hilbert space)
Weak-star operator topology
Weak convergence of measures
Topologies on spaces of linear maps
Topologies on the set of operators on a Hilbert space
Vague topology
== References ==
== Bibliography ==
Conway, John B. (1994), A Course in Functional Analysis (2nd ed.), Springer-Verlag, ISBN 0-387-97245-5
Folland, G.B. (1999). Real Analysis: Modern Techniques and Their Applications (Second ed.). John Wiley & Sons, Inc. ISBN 978-0-471-31716-6.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Pedersen, Gert (1989), Analysis Now, Springer, ISBN 0-387-96788-5
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Willard, Stephen (February 2004). General Topology. Courier Dover Publications. ISBN 9780486434797.
Yosida, Kosaku (1980), Functional analysis (6th ed.), Springer, ISBN 978-3-540-58654-8
In mathematics, the spectrum of a C*-algebra or dual of a C*-algebra A, denoted Â, is the set of unitary equivalence classes of irreducible *-representations of A. A *-representation π of A on a Hilbert space H is irreducible if, and only if, there is no closed subspace K different from H and {0} which is invariant under all operators π(x) with x ∈ A. We implicitly assume that irreducible representation means non-null irreducible representation, thus excluding trivial (i.e. identically 0) representations on one-dimensional spaces. As explained below, the spectrum  is also naturally a topological space; this is similar to the notion of the spectrum of a ring.
One of the most important applications of this concept is to provide a notion of dual object for any locally compact group. This dual object is suitable for formulating a Fourier transform and a Plancherel theorem for unimodular separable locally compact groups of type I and a decomposition theorem for arbitrary representations of separable locally compact groups of type I. The resulting duality theory for locally compact groups is however much weaker than the Tannaka–Krein duality theory for compact topological groups or Pontryagin duality for locally compact abelian groups, both of which are complete invariants. That the dual is not a complete invariant is easily seen as the dual of any finite-dimensional full matrix algebra Mn(C) consists of a single point.
== Primitive spectrum ==
The topology of Â can be defined in several equivalent ways. We first define it in terms of the primitive spectrum.
The primitive spectrum of A is the set of primitive ideals Prim(A) of A, where a primitive ideal is the kernel of a non-zero irreducible *-representation. The set of primitive ideals is a topological space with the hull-kernel topology (or Jacobson topology). This is defined as follows: If X is a set of primitive ideals, its hull-kernel closure is
\overline{X} = \left\{\rho \in \operatorname{Prim}(A) : \rho \supseteq \bigcap_{\pi \in X} \pi\right\}.
Hull-kernel closure is easily shown to be an idempotent operation, that is
\overline{\overline{X}} = \overline{X},
and it can be shown to satisfy the Kuratowski closure axioms. As a consequence, it can be shown that there is a unique topology τ on Prim(A) such that the closure of a set X with respect to τ is identical to the hull-kernel closure of X.
Since unitarily equivalent representations have the same kernel, the map π ↦ ker(π) factors through a surjective map
\operatorname{k} : \hat{A} \to \operatorname{Prim}(A).
We use the map k to define the topology on  as follows:
Definition. The open sets of  are inverse images k−1(U) of open subsets U of Prim(A). This is indeed a topology.
The hull-kernel topology is an analogue for non-commutative rings of the Zariski topology for commutative rings.
The topology on  induced from the hull-kernel topology has other characterizations in terms of states of A.
== Examples ==
=== Commutative C*-algebras ===
The spectrum of a commutative C*-algebra A coincides with the Gelfand dual of A (not to be confused with the dual A′ of the Banach space A). In particular, suppose X is a compact Hausdorff space. Then there is a natural homeomorphism
\operatorname{I} : X \cong \operatorname{Prim}(\operatorname{C}(X)).
This mapping is defined by
\operatorname{I}(x) = \{f \in \operatorname{C}(X) : f(x) = 0\}.
I(x) is a closed maximal ideal in C(X) so is in fact primitive. For details of the proof, see the Dixmier reference. For a commutative C*-algebra,
\hat{A} \cong \operatorname{Prim}(A).
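For intuition in the simplest case, here is a finite-dimensional sketch (the algebra and labels are assumptions of this edit, not from the article): take A = C(X) for the three-point space X = {0, 1, 2}, realized as diagonal 3×3 matrices. The irreducible representations are the coordinate evaluations, and the primitive ideals are their kernels, recovering the points of X.

```python
import numpy as np

# A = C(X) for X = {0, 1, 2}, realized as diagonal matrices diag(f(0), f(1), f(2)).
def as_element(f_values):
    return np.diag(np.asarray(f_values, dtype=complex))

# The irreducible *-representations are 1-dimensional: pi_x(f) = f(x).
def pi(x):
    return lambda a: a[x, x]

f = as_element([2.0, 0.0, -1.0])
print([pi(x)(f).real for x in range(3)])   # [2.0, 0.0, -1.0], the values of f

# ker(pi_1) = {f : f(1) = 0} is the primitive ideal I(1); f above lies in it.
print(pi(1)(f) == 0)                       # True: f is in the ideal I(1)
# So A-hat = {pi_0, pi_1, pi_2} and Prim(A) = {I(0), I(1), I(2)} = X as a set.
```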
=== The C*-algebra of bounded operators ===
Let H be a separable infinite-dimensional Hilbert space. L(H) has two norm-closed *-ideals: I0 = {0} and the ideal K = K(H) of compact operators. Thus as a set, Prim(L(H)) = {I0, K}. Now
{K} is a closed subset of Prim(L(H)).
The closure of {I0} is Prim(L(H)).
Thus Prim(L(H)) is a non-Hausdorff space.
The spectrum of L(H) on the other hand is much larger. There are many inequivalent irreducible representations with kernel K(H) or with kernel {0}.
=== Finite-dimensional C*-algebras ===
Suppose A is a finite-dimensional C*-algebra. It is known that A is isomorphic to a finite direct sum of full matrix algebras:
A \cong \bigoplus_{e \in \operatorname{min}(A)} Ae,
where min(A) are the minimal central projections of A. The spectrum of A is canonically isomorphic to min(A) with the discrete topology. For finite-dimensional C*-algebras, we also have the isomorphism
\hat{A} \cong \operatorname{Prim}(A).
== Other characterizations of the spectrum ==
The hull-kernel topology is easy to describe abstractly, but in practice for C*-algebras associated to locally compact topological groups, other characterizations of the topology on the spectrum in terms of positive definite functions are desirable.
In fact, the topology on  is intimately connected with the concept of weak containment of representations as is shown by the following:
Theorem. Let S be a subset of Â. Then the following are equivalent for an irreducible representation π:
The equivalence class of π in  is in the closure of S
Every state associated to π, that is one of the form
f_\xi(x) = \langle \xi \mid \pi(x)\xi \rangle
with ‖ξ‖ = 1, is the weak limit of states associated to representations in S.
The second condition means exactly that π is weakly contained in S.
The GNS construction is a recipe for associating states of a C*-algebra A to representations of A. By one of the basic theorems associated to the GNS construction, a state f is pure if and only if the associated representation πf is irreducible. Moreover, the mapping κ : PureState(A) → Â defined by f ↦ πf is a surjective map.
From the previous theorem one can easily prove the following:
Theorem. The mapping
\kappa : \operatorname{PureState}(A) \to \hat{A}
given by the GNS construction is continuous and open.
=== The space Irrn(A) ===
There is yet another characterization of the topology on  which arises by considering the space of representations as a topological space with an appropriate pointwise convergence topology. More precisely, let n be a cardinal number and let Hn be the canonical Hilbert space of dimension n.
Irrn(A) is the space of irreducible *-representations of A on Hn with the point-weak topology. In terms of convergence of nets, this topology is defined by π_i → π if and only if
\langle \pi_i(x)\xi \mid \eta \rangle \to \langle \pi(x)\xi \mid \eta \rangle \quad \forall \xi, \eta \in H_n,\ x \in A.
It turns out that this topology on Irrn(A) is the same as the point-strong topology, i.e. πi → π if and only if
\pi_i(x)\xi \to \pi(x)\xi \ \text{normwise} \quad \forall \xi \in H_n,\ x \in A.
Theorem. Let Ân be the subset of  consisting of equivalence classes of representations whose underlying Hilbert space has dimension n. The canonical map Irrn(A) → Ân is continuous and open. In particular, Ân can be regarded as the quotient topological space of Irrn(A) under unitary equivalence.
Remark. The piecing together of the various Ân can be quite complicated.
== Mackey–Borel structure ==
 is a topological space and thus can also be regarded as a Borel space. A famous conjecture of G. Mackey proposed that a separable locally compact group is of type I if and only if the Borel space is standard, i.e. is isomorphic (in the category of Borel spaces) to the underlying Borel space of a complete separable metric space. Mackey called Borel spaces with this property smooth. This conjecture was proved by James Glimm for separable C*-algebras in the 1961 paper listed in the references below.
Definition. A non-degenerate *-representation π of a separable C*-algebra A is a factor representation if and only if the center of the von Neumann algebra generated by π(A) is one-dimensional. A C*-algebra A is of type I if and only if any separable factor representation of A is a finite or countable multiple of an irreducible one.
Examples of separable locally compact groups G such that C*(G) is of type I are connected (real) nilpotent Lie groups and connected real semi-simple Lie groups. Thus the Heisenberg groups are all of type I. Compact and abelian groups are also of type I.
Theorem. If A is separable, Â is smooth if and only if A is of type I.
The result implies a far-reaching generalization of the structure of representations of separable type I C*-algebras and correspondingly of separable locally compact groups of type I.
== Algebraic primitive spectra ==
Since a C*-algebra A is a ring, we can also consider the set of primitive ideals of A, where A is regarded algebraically. For a ring an ideal is primitive if and only if it is the annihilator of a simple module. It turns out that for a C*-algebra A, an ideal is algebraically primitive if and only if it is primitive in the sense defined above.
Theorem. Let A be a C*-algebra. Any algebraically irreducible representation of A on a complex vector space is algebraically equivalent to a topologically irreducible *-representation on a Hilbert space. Topologically irreducible *-representations on a Hilbert space are algebraically isomorphic if and only if they are unitarily equivalent.
This is the Corollary of Theorem 2.9.5 of the Dixmier reference.
If G is a locally compact group, the topology on dual space of the group C*-algebra C*(G) of G is called the Fell topology, named after J. M. G. Fell.
== References ==
J. Dixmier, C*-Algebras, North-Holland, 1977 (a translation of Les C*-algèbres et leurs représentations)
J. Dixmier, Les C*-algèbres et leurs représentations, Gauthier-Villars, 1969.
J. Glimm, Type I C*-algebras, Annals of Mathematics, vol 73, 1961.
G. Mackey, The Theory of Group Representations, The University of Chicago Press, 1955.
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. They are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L² space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely, as spectral values of linear operators in Hilbert space.
These formulations of quantum mechanics continue to be used today. At the heart of the description are ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables.
Prior to the development of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of differential geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the development of quantum mechanics (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space.
== History of the formalism ==
=== The "old quantum theory" and the need for new mathematics ===
In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, h, is now called the Planck constant in his honor.
In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons.
All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of the Planck constant were actually allowed. The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time.
In 1923, de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system.
The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity.
=== The "new quantum theory" ===
Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent.
Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization.
Already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics – the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. In fact, in these early years, linear algebra was not generally popular with physicists in its present form.
Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in many types of generalizations of the field.
The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and 1928 book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations.
=== Later developments ===
The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases.
Path integral formulation
Phase-space formulation of quantum mechanics & geometric quantization
Quantum field theory in curved spacetime
Axiomatic, algebraic and constructive quantum field theory
C*-algebra formalism
Generalized statistical model of quantum mechanics
A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself.
Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. The issue of hidden variables has become in part an experimental issue with the help of quantum optics.
== Postulates of quantum mechanics ==
A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a phase space formulated by symplectic manifold, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states, observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible to map this Hilbert-space picture to a phase space formulation, invertibly. See below.)
The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms.
=== Description of the state of a system ===
Each isolated physical system is associated with a (topologically) separable complex Hilbert space H with inner product ⟨φ|ψ⟩.
Separability is a mathematically convenient hypothesis, with the physical interpretation that the state is uniquely determined by countably many observations. Quantum states can be identified with equivalence classes in H, where two vectors (of length 1) represent the same state if they differ only by a phase factor:
|\psi_k\rangle \sim |\psi_l\rangle \iff |\psi_k\rangle = e^{i\alpha}|\psi_l\rangle, \quad \alpha \in \mathbb{R}.
As such, a quantum state is an element of a projective Hilbert space, conventionally termed a "ray".
Accompanying Postulate I is the composite system postulate:
In the presence of quantum entanglement, the quantum state of the composite system cannot be factored as a tensor product of states of its local constituents; instead, it is expressed as a sum, or superposition, of tensor products of states of component subsystems. A subsystem in an entangled composite system generally cannot be described by a state vector (or a ray), but instead is described by a density operator; such a quantum state is known as a mixed state. The density operator of a mixed state is a trace class, nonnegative (positive semi-definite) self-adjoint operator ρ normalized to be of trace 1. In turn, any density operator of a mixed state can be represented as a subsystem of a larger composite system in a pure state (see purification theorem).
In the absence of quantum entanglement, the quantum state of the composite system is called a separable state. The density matrix of a bipartite system in a separable state can be expressed as
\rho = \sum_k p_k \rho_1^k \otimes \rho_2^k, \quad \text{where } \sum_k p_k = 1.
If there is only a single non-zero p_k, then the state can be expressed just as ρ = ρ₁ ⊗ ρ₂, and is called simply separable or a product state.
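A small numerical sketch of this decomposition (the particular qubit states and weights are assumptions, not from the article): build ρ = Σ_k p_k ρ₁ᵏ ⊗ ρ₂ᵏ with Kronecker products and verify it is a valid density operator.

```python
import numpy as np

def pure(vec):
    """Rank-one density matrix |v><v| for a normalized state vector."""
    v = np.asarray(vec, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two product terms with weights p_k summing to 1 (a separable 2-qubit state).
p = [0.3, 0.7]
terms = [(pure([1, 0]), pure([0, 1])),
         (pure([1, 1]), pure([1, -1]))]

rho = sum(pk * np.kron(r1, r2) for pk, (r1, r2) in zip(p, terms))

# Checks: trace 1, Hermitian, positive semi-definite.
print(np.isclose(np.trace(rho).real, 1.0))
print(np.allclose(rho, rho.conj().T))
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))
```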
=== Measurement on a system ===
==== Description of physical quantities ====
Physical observables are represented by Hermitian matrices on H. Since these operators are Hermitian, their eigenvalues are always real, and represent the possible outcomes/results from measuring the corresponding observable. If the spectrum of the observable is discrete, then the possible results are quantized.
==== Results of measurement ====
By spectral theory, we can associate a probability measure to the values of A in any state ψ. We can also show that the possible values of the observable A in any state must belong to the spectrum of A. The expectation value (in the sense of probability theory) of the observable A for the system in state represented by the unit vector ψ ∈ H is ⟨ψ|A|ψ⟩. If we represent the state ψ in the basis formed by the eigenvectors of A, then the square of the modulus of the component attached to a given eigenvector is the probability of observing its corresponding eigenvalue.
For a mixed state ρ, the expected value of A in the state ρ is tr(Aρ), and the probability of obtaining an eigenvalue a_n in a discrete, nondegenerate spectrum of the corresponding observable A is given by
\mathbb{P}(a_n) = \operatorname{tr}(|a_n\rangle\langle a_n|\,\rho) = \langle a_n|\rho|a_n\rangle.
If the eigenvalue a_n has degenerate, orthonormal eigenvectors {|a_{n1}⟩, |a_{n2}⟩, …, |a_{nm}⟩}, then the projection operator onto the eigensubspace can be defined as the identity operator in the eigensubspace:
P_n = |a_{n1}\rangle\langle a_{n1}| + |a_{n2}\rangle\langle a_{n2}| + \dots + |a_{nm}\rangle\langle a_{nm}|,
and then
\mathbb{P}(a_n) = \operatorname{tr}(P_n \rho).
Postulates II.a and II.b are collectively known as the Born rule of quantum mechanics.
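A minimal numerical sketch of the Born rule (the observable and state below are assumed example data): eigendecompose a Hermitian A, build the projector onto each eigenspace, and compute ℙ(a_n) = tr(P_n ρ).

```python
import numpy as np

# A Hermitian observable on C^2 (Pauli-X) and a mixed state rho.
A = np.array([[0, 1], [1, 0]], dtype=complex)
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)

evals, evecs = np.linalg.eigh(A)

probs = {}
for n in range(len(evals)):
    v = evecs[:, [n]]
    P_n = v @ v.conj().T                           # projector |a_n><a_n|
    probs[round(evals[n].real)] = np.trace(P_n @ rho).real  # P(a_n) = tr(P_n rho)

print(probs)                                   # {-1: 0.25, 1: 0.75}
print(np.isclose(sum(probs.values()), 1.0))    # probabilities sum to 1
# The expectation tr(A rho) matches the probability-weighted sum of eigenvalues.
print(np.isclose(np.trace(A @ rho).real, sum(a * p for a, p in probs.items())))
```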
==== Effect of measurement on the state ====
When a measurement is performed, only one result is obtained (according to some interpretations of quantum mechanics). This is modeled mathematically as the processing of additional information from the measurement, confining the probabilities of an immediate second measurement of the same observable. In the case of a discrete, non-degenerate spectrum, two sequential measurements of the same observable will always give the same value assuming the second immediately follows the first. Therefore, the state vector must change as a result of measurement, and collapse onto the eigensubspace associated with the eigenvalue measured.
For a mixed state ρ, after obtaining an eigenvalue a_n in a discrete, nondegenerate spectrum of the corresponding observable A, the updated state is given by
\rho' = \frac{P_n \rho P_n^\dagger}{\operatorname{tr}(P_n \rho P_n^\dagger)}.
If the eigenvalue a_n has degenerate, orthonormal eigenvectors {|a_{n1}⟩, |a_{n2}⟩, …, |a_{nm}⟩}, then the projection operator onto the eigensubspace is
P_n = |a_{n1}\rangle\langle a_{n1}| + |a_{n2}\rangle\langle a_{n2}| + \dots + |a_{nm}\rangle\langle a_{nm}|.
Postulate II.c is sometimes called the "state update rule" or "collapse rule"; together with the Born rule (Postulates II.a and II.b), they form a complete representation of measurements, and are sometimes collectively called the measurement postulate(s).
Note that the projection-valued measures (PVM) described in the measurement postulate(s) can be generalized to positive operator-valued measures (POVM), which is the most general kind of measurement in quantum mechanics. A POVM can be understood as the effect on a component subsystem when a PVM is performed on a larger, composite system (see Naimark's dilation theorem).
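Continuing the earlier sketch (same assumed observable and state, not from the article), the following applies the state update rule and confirms that an immediate second measurement of the same observable returns the same eigenvalue with probability 1.

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)

evals, evecs = np.linalg.eigh(A)
v = evecs[:, [1]]                 # eigenvector for the outcome a_n = +1
P = v @ v.conj().T

# State update (collapse): rho' = P rho P^dagger / tr(P rho P^dagger).
rho_post = P @ rho @ P.conj().T
rho_post /= np.trace(rho_post).real

# Immediate re-measurement of A gives the same outcome with probability 1.
print(np.isclose(np.trace(P @ rho_post).real, 1.0))   # True
```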
=== Time evolution of a system ===
The Schrödinger equation describes how a state vector evolves in time. Depending on the text, it may be derived from some other assumptions, motivated on heuristic grounds, or asserted as a postulate. Derivations include using the de Broglie relation between wavelength and momentum or path integrals.
Equivalently, the time evolution postulate can be stated as:
For a closed system in a mixed state ρ, the time evolution is
\rho(t) = U(t; t_0)\,\rho(t_0)\,U^\dagger(t; t_0).
The evolution of an open quantum system can be described by quantum operations (in an operator sum formalism) and quantum instruments, and generally does not have to be unitary.
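A minimal sketch of the closed-system postulate (assuming a two-level Hamiltonian and units with ħ = 1, both choices of this edit): build U(t) = exp(−iHt) with the matrix exponential and check that the evolution preserves the trace and positivity of ρ.

```python
import numpy as np
from scipy.linalg import expm

# Two-level Hamiltonian (units with hbar = 1) and an initial mixed state.
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)
rho0 = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)

t = 1.3
U = expm(-1j * H * t)                 # U(t; 0) = exp(-i H t)

rho_t = U @ rho0 @ U.conj().T         # rho(t) = U rho(0) U^dagger

print(np.allclose(U @ U.conj().T, np.eye(2)))        # U is unitary
print(np.isclose(np.trace(rho_t).real, 1.0))         # trace is preserved
print(np.all(np.linalg.eigvalsh(rho_t) >= -1e-12))   # positivity is preserved
```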
=== Other implications of the postulates ===
Physical symmetries act on the Hilbert space of quantum states unitarily or antiunitarily due to Wigner's theorem (supersymmetry is another matter entirely).
Density operators are those that are in the closure of the convex hull of the one-dimensional orthogonal projectors. Conversely, one-dimensional orthogonal projectors are extreme points of the set of density operators. Physicists also call one-dimensional orthogonal projectors pure states and other density operators mixed states.
One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article.
Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below.
=== Spin ===
In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position r and time t as continuous variables, ψ = ψ(r, t). For spin wavefunctions the spin is an additional discrete variable: ψ = ψ(r, t, σ), where σ takes the values:
\sigma = -S\hbar, -(S-1)\hbar, \dots, 0, \dots, +(S-1)\hbar, +S\hbar.
That is, the state of a single particle with spin S is represented by a (2S + 1)-component spinor of complex-valued wave functions.
Two classes of particles with very different behaviour are bosons which have integer spin (S = 0, 1, 2, ...), and fermions possessing half-integer spin (S = 1⁄2, 3⁄2, 5⁄2, ...).
=== Symmetrization postulate ===
In quantum mechanics, two particles can be distinguished from one another in two ways. By performing a measurement of intrinsic properties of each particle, particles of different types can be distinguished. Otherwise, if the particles are identical, their trajectories can be tracked, which distinguishes the particles based on the locality of each particle. While the second method is permitted in classical mechanics (i.e. all classical particles are treated as distinguishable), the same cannot be said for quantum mechanical particles, since the process is infeasible due to the fundamental uncertainty principles that govern small scales. Hence the requirement of indistinguishability of quantum particles is presented by the symmetrization postulate. The postulate is applicable to a system of bosons or fermions, for example, in predicting the spectra of the helium atom. The postulate, explained in the following sections, can be stated as follows:
Exceptions can occur when the particles are constrained to two spatial dimensions, where the existence of particles known as anyons is possible; these are said to have a continuum of statistical properties spanning the range between fermions and bosons. The connection between the behaviour of identical particles and their spin is given by the spin–statistics theorem.
It can be shown that two particles localized in different regions of space can still be represented using a symmetrized/antisymmetrized wavefunction and that independent treatment of these wavefunctions gives the same result. Hence the symmetrization postulate is applicable in the general case of a system of identical particles.
==== Exchange degeneracy ====
In a system of identical particles, let P be the exchange operator, which acts on the wavefunction as:
P\bigl(\cdots |\psi\rangle |\phi\rangle \cdots\bigr) \equiv \cdots |\phi\rangle |\psi\rangle \cdots
If a physical system of identical particles is given, the wavefunctions of the particles may be known from observation, but they cannot be attributed to individual particles. Thus, the exchanged wavefunction above represents the same physical state as the original state, which implies that the wavefunction is not unique. This is known as exchange degeneracy.
More generally, consider a linear combination of such states, |Ψ⟩. For the best representation of the physical system, we expect this to be an eigenvector of P, since the exchange operator is not expected to give completely different vectors in projective Hilbert space. Since P² = 1, the possible eigenvalues of P are +1 and −1. The states |Ψ⟩ of an identical-particle system are represented as symmetric for the +1 eigenvalue or antisymmetric for the −1 eigenvalue, as follows:
P|\cdots n_i, n_j \cdots; S\rangle = +|\cdots n_i, n_j \cdots; S\rangle
P|\cdots n_i, n_j \cdots; A\rangle = -|\cdots n_i, n_j \cdots; A\rangle
The explicit symmetric/antisymmetric form of |Ψ⟩ is constructed using a symmetrizer or antisymmetrizer operator. Particles that form symmetric states are called bosons and those that form antisymmetric states are called fermions. The relation of spin to this classification is given by the spin–statistics theorem, which shows that integer-spin particles are bosons and half-integer-spin particles are fermions.
==== Pauli exclusion principle ====
The property of spin relates to another basic property concerning systems of N identical particles: the Pauli exclusion principle, which is a consequence of the following permutation behaviour of an N-particle wave function; again in the position representation one must postulate that for the transposition of any two of the N particles one always should have
\psi(\dots, \mathbf{r}_i, \sigma_i, \dots, \mathbf{r}_j, \sigma_j, \dots) = (-1)^{2S}\,\psi(\dots, \mathbf{r}_j, \sigma_j, \dots, \mathbf{r}_i, \sigma_i, \dots),
i.e., on transposition of the arguments of any two particles the wavefunction should reproduce, apart from a prefactor (−1)^{2S} which is +1 for bosons, but (−1) for fermions.
Electrons are fermions with S = 1/2; quanta of light are bosons with S = 1.
Due to the form of the anti-symmetrized wavefunction:
\Psi_{n_1 \cdots n_N}^{(A)}(x_1, \ldots, x_N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \psi_{n_1}(x_1) & \psi_{n_1}(x_2) & \cdots & \psi_{n_1}(x_N) \\ \psi_{n_2}(x_1) & \psi_{n_2}(x_2) & \cdots & \psi_{n_2}(x_N) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_{n_N}(x_1) & \psi_{n_N}(x_2) & \cdots & \psi_{n_N}(x_N) \end{vmatrix}
if the wavefunction of each particle is completely determined by a set of quantum numbers, then two fermions cannot share the same set of quantum numbers, since the resulting function cannot be anti-symmetrized (i.e. the above formula gives zero). The same cannot be said of bosons, since their wavefunction is:
|x_1 x_2 \cdots x_N; S\rangle = \frac{\prod_j n_j!}{N!} \sum_p \left|x_{p(1)}\right\rangle \left|x_{p(2)}\right\rangle \cdots \left|x_{p(N)}\right\rangle
where n_j is the number of particles with the same wavefunction.
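To illustrate the fermionic case numerically (a sketch with assumed single-particle orbitals, not from the article): the Slater determinant changes sign under exchange of two particle coordinates and vanishes identically when two particles occupy the same orbital, which is the content of the Pauli exclusion principle.

```python
import math
import numpy as np

def slater(orbitals, xs):
    """Antisymmetrized N-particle amplitude: det[psi_{n_i}(x_j)] / sqrt(N!)."""
    M = np.array([[phi(x) for x in xs] for phi in orbitals])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(xs)))

# Three assumed single-particle orbitals (Hermite-like functions).
orbs = [lambda x: np.exp(-x**2),
        lambda x: x * np.exp(-x**2),
        lambda x: (2 * x**2 - 1) * np.exp(-x**2)]

xs = [0.3, -0.7, 1.1]
a = slater(orbs, xs)
b = slater(orbs, [xs[1], xs[0], xs[2]])        # exchange particles 1 and 2
print(np.isclose(a, -b))                        # True: the sign flips

# Pauli exclusion: two fermions in the same orbital give two identical
# rows, so the determinant (and hence the wavefunction) vanishes.
print(np.isclose(slater([orbs[0], orbs[0], orbs[2]], xs), 0.0))
```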
==== Exceptions for symmetrization postulate ====
In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories "supersymmetric" theories also exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension d = 2 can one construct entities where (−1)^{2S} is replaced by an arbitrary complex number with magnitude 1, called anyons. In relativistic quantum mechanics, the spin–statistics theorem shows, under a certain set of assumptions, that integer-spin particles are classified as bosons and half-integer-spin particles are classified as fermions. Anyons, which form neither symmetric nor antisymmetric states, are said to have fractional spin.
Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties.
== Mathematical structure of quantum mechanics ==
=== Pictures of dynamics ===
=== Representations ===
The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus with a more intuitive link to the classical limit thereof. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics.
The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent.
=== Time as an operator ===
The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter s, and in that case the time t becomes an additional generalized coordinate of the physical system. At the quantum level, translations in s would be generated by a "Hamiltonian" H − E, where E is the energy operator and H is the "ordinary" Hamiltonian. However, since s is an unphysical parameter, physical states must be left invariant by "s-evolution", and so the physical state space is the kernel of H − E (this requires the use of a rigged Hilbert space and a renormalization of the norm).
This is related to the quantization of constrained systems and quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable.
== Problem of measurement ==
The picture given in the preceding paragraphs is sufficient for description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement. The von Neumann description of quantum measurement of an observable A, when the system is prepared in a pure state ψ is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain):
Let A have spectral resolution
{\displaystyle A=\int \lambda \,d\operatorname {E} _{A}(\lambda ),}
where EA is the resolution of the identity (also called projection-valued measure) associated with A. Then the probability of the measurement outcome lying in an interval B of R is ‖EA(B)ψ‖². In other words, the probability is obtained by integrating the characteristic function of B against the countably additive measure
{\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle .}
If the measured value is contained in B, then immediately after the measurement, the system will be in the (generally non-normalized) state EA(B)ψ. If the measured value does not lie in B, replace B by its complement for the above state.
For example, suppose the state space is the n-dimensional complex Hilbert space Cn and A is a Hermitian matrix with eigenvalues λi, with corresponding eigenvectors ψi. The projection-valued measure associated with A, EA, is then
{\displaystyle \operatorname {E} _{A}(B)=|\psi _{i}\rangle \langle \psi _{i}|,}
where B is a Borel set containing only the single eigenvalue λi. If the system is prepared in state
{\displaystyle |\psi \rangle }
then the probability of a measurement returning the value λi can be calculated by integrating the spectral measure
{\displaystyle \langle \psi \mid \operatorname {E} _{A}\psi \rangle }
over Bi. This gives trivially
{\displaystyle \langle \psi |\psi _{i}\rangle \langle \psi _{i}\mid \psi \rangle =|\langle \psi \mid \psi _{i}\rangle |^{2}.}
The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate.
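In the finite-dimensional example just given, the probabilities and post-measurement states can be computed directly from the eigendecomposition. A minimal sketch (the observable and prepared state below are arbitrary illustrative choices):

import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]])    # an illustrative Hermitian observable on C^2
psi = np.array([1.0, 0.0], dtype=complex)  # prepared pure state

evals, evecs = np.linalg.eigh(A)           # spectral resolution of A
for lam, v in zip(evals, evecs.T):
    amplitude = np.vdot(v, psi)            # <psi_i | psi>
    prob = abs(amplitude) ** 2             # probability of outcome lam
    post = v * amplitude                   # E_A({lam}) psi, the (unnormalized) post-measurement state
    print(f"outcome {lam:+.4f}: probability {prob:.4f}, post-state {post}")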
A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections
{\displaystyle |\psi _{i}\rangle \langle \psi _{i}|}
by a finite set of positive operators
{\displaystyle F_{i}F_{i}^{*}}
whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes {λ1 ... λn} is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is λi. Instead of collapsing to the (unnormalized) state
{\displaystyle |\psi _{i}\rangle \langle \psi _{i}|\psi \rangle }
after the measurement, the system now will be in the state
{\displaystyle F_{i}|\psi \rangle .}
Since the Fi Fi* operators need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds.
The same formulation applies to general mixed states.
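The POVM rule can be illustrated in the same finite-dimensional setting. A sketch with an arbitrary family of three positive operators summing to the identity (this particular family is an illustrative choice, not from the source):

import numpy as np

e0 = np.array([[1.0], [0.0]], dtype=complex)
e1 = np.array([[0.0], [1.0]], dtype=complex)
F = [np.sqrt(2 / 3) * (e0 @ e0.conj().T),   # three measurement operators F_i with
     np.sqrt(2 / 3) * (e1 @ e1.conj().T),   # sum_i F_i F_i^* equal to the identity
     np.sqrt(1 / 3) * np.eye(2, dtype=complex)]
assert np.allclose(sum(f @ f.conj().T for f in F), np.eye(2))  # resolution of identity

psi = np.array([[1.0], [1.0]], dtype=complex) / np.sqrt(2)
for i, f in enumerate(F):
    post = f @ psi                               # post-measurement state F_i |psi>
    prob = float(np.real(post.conj().T @ post))  # probability of outcome i
    print(f"outcome {i}: probability {prob:.4f}")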
In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace.
== List of mathematical tools ==
Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new.
The main tools include:
linear algebra: complex numbers, eigenvectors, eigenvalues
functional analysis: Hilbert spaces, linear operators, spectral theory
differential equations: partial differential equations, separation of variables, ordinary differential equations, Sturm–Liouville theory, eigenfunctions
harmonic analysis: Fourier transforms
== See also ==
List of mathematical topics in quantum theory
Quantum foundations
Symmetry in quantum mechanics
== Notes ==
== References ==
Bäuerle, Gerard G. A.; de Kerf, Eddy A. (1990). Lie Algebras, Part 1: Finite and Infinite Dimensional Lie Algebras and Applications in Physics. Studies in Mathematical Physics. Amsterdam: North Holland. ISBN 0-444-88776-8.
Byron, Frederick W.; Fuller, Robert W. (1992). Mathematics of Classical and Quantum Physics. New York: Courier Corporation. ISBN 978-0-486-67164-2.
Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2020). Quantum mechanics. Volume 2: Angular momentum, spin, and approximation methods. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA. ISBN 978-3-527-82272-0.
Dirac, P. A. M. (1925). "The Fundamental Equations of Quantum Mechanics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 109 (752): 642–653. Bibcode:1925RSPSA.109..642D. doi:10.1098/rspa.1925.0150.
Edwards, David A. (1979). "The mathematical foundations of quantum mechanics". Synthese. 42 (1). Springer Science and Business Media LLC: 1–70. doi:10.1007/bf00413704. ISSN 0039-7857. S2CID 46969028.
Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge. Sudbury, Mass.: Jones & Bartlett Learning. ISBN 978-0-7637-2470-2.
Jauch, J. M.; Wigner, E. P.; Yanase, M. M. (1997). "Some Comments Concerning Measurements in Quantum Mechanics". Part I: Particles and Fields. Part II: Foundations of Quantum Mechanics. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 475–482. doi:10.1007/978-3-662-09203-3_52. ISBN 978-3-642-08179-8.
Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623. S2CID 121930907.
Streater, Raymond Frederick; Wightman, Arthur Strong (2000). PCT, Spin and Statistics, and All that. Princeton, NJ: Princeton University Press. ISBN 978-0-691-07062-9.
Sakurai, Jun John; Napolitano, Jim (2021). Modern quantum mechanics (3rd ed.). Cambridge: Cambridge University Press. ISBN 978-1-108-47322-4.
Weyl, Hermann (1950) [1931]. The Theory of Groups and Quantum Mechanics. Translated by Robertson, H. P. Dover.
== Further reading ==
Auyang, Sunny Y. (1995). How is Quantum Field Theory Possible?. New York, NY: Oxford University Press on Demand. ISBN 978-0-19-509344-5.
Emch, Gérard G. (1972). Algebraic Methods in Statistical Mechanics and Quantum Field Theory. New York: John Wiley & Sons. ISBN 0-471-23900-3.
Giachetta, Giovanni; Mangiarotti, Luigi; Sardanashvily, Gennadi (2005). Geometric and Algebraic Topological Methods in Quantum Mechanics. WORLD SCIENTIFIC. arXiv:math-ph/0410040. doi:10.1142/5731. ISBN 978-981-256-129-9.
Gleason, Andrew M. (1957). "Measures on the Closed Subspaces of a Hilbert Space". Journal of Mathematics and Mechanics. 6 (6). Indiana University Mathematics Department: 885–893. JSTOR 24900629.
Hall, Brian C. (2013). Quantum Theory for Mathematicians. Graduate Texts in Mathematics. Vol. 267. New York, NY: Springer New York. Bibcode:2013qtm..book.....H. doi:10.1007/978-1-4614-7116-5. ISBN 978-1-4614-7115-8. ISSN 0072-5285. S2CID 117837329.
Jauch, Josef Maria (1968). Foundations of Quantum Mechanics. Reading, Mass.: Addison-Wesley. ISBN 0-201-03298-8.
Jost, R. (1965). The General Theory of Quantized Fields. Lectures in applied mathematics. American Mathematical Society.
Kuhn, Thomas S. (1987). Black-Body Theory and the Quantum Discontinuity, 1894-1912. Chicago: University of Chicago Press. ISBN 978-0-226-45800-7.
Landsman, Klaas (2017). Foundations of Quantum Theory. Fundamental Theories of Physics. Vol. 188. Cham: Springer International Publishing. doi:10.1007/978-3-319-51777-3. ISBN 978-3-319-51776-6. ISSN 0168-1222.
Mackey, George W. (2004). Mathematical Foundations of Quantum Mechanics. Mineola, N.Y: Courier Corporation. ISBN 978-0-486-43517-6.
McMahon, David (2013). Quantum Mechanics Demystified, 2nd Edition (PDF). New York, NY: McGraw-Hill Prof Med/Tech. ISBN 978-0-07-176563-3.
Moretti, Valter (2017). Spectral Theory and Quantum Mechanics. Unitext. Vol. 110. Cham: Springer International Publishing. doi:10.1007/978-3-319-70706-8. ISBN 978-3-319-70705-1. ISSN 2038-5714. S2CID 125121522.
Moretti, Valter (2019). Fundamental Mathematical Structures of Quantum Theory. Cham: Springer International Publishing. doi:10.1007/978-3-030-18346-2. ISBN 978-3-030-18345-5. S2CID 197485828.
Prugovecki, Eduard (2006). Quantum Mechanics in Hilbert Space. Mineola, NY: Courier Dover Publications. ISBN 978-0-486-45327-9.
Reed, Michael; Simon, Barry (1972). Methods of Modern Mathematical Physics. New York: Academic Press. ISBN 978-0-12-585001-8.
Shankar, R. (2013). Principles of Quantum Mechanics (PDF). Springer. ISBN 978-1-4615-7675-4.
Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics (PDF). Providence, R.I: American Mathematical Soc. ISBN 978-0-8218-4660-5.
von Neumann, John (2018). Mathematical Foundations of Quantum Mechanics. Princeton Oxford: Princeton University Press. ISBN 978-0-691-17856-1.
Weaver, Nik (2001). Mathematical Quantization. Chapman and Hall/CRC. doi:10.1201/9781420036237. ISBN 978-0-429-07514-8. | Wikipedia/Mathematical_formulation_of_quantum_mechanics |
In mathematics, and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
== Formal definition ==
Let (X, Σ) and (Y, T) be measurable spaces, meaning that X and Y are sets equipped with respective σ-algebras Σ and T.
A function f : X → Y is said to be measurable if for every E ∈ T the pre-image of E under f is in Σ; that is, for all E ∈ T,
{\displaystyle f^{-1}(E):=\{x\in X\mid f(x)\in E\}\in \Sigma .}
That is, σ(f) ⊆ Σ, where σ(f) is the σ-algebra generated by f. If f : X → Y is a measurable function, one writes
{\displaystyle f\colon (X,\Sigma )\rightarrow (Y,\mathrm {T} )}
to emphasize the dependency on the σ-algebras Σ and T.
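On finite sets the definition can be verified exhaustively, since a σ-algebra is then just a finite family of subsets. A small illustrative Python sketch that tests every preimage:

def preimage(f, E, X):
    return frozenset(x for x in X if f[x] in E)

def is_measurable(f, X, sigma_X, sigma_Y):
    # f is measurable iff the preimage of every E in sigma_Y lies in sigma_X.
    return all(preimage(f, E, X) in sigma_X for E in sigma_Y)

X = {1, 2, 3}
# sigma-algebra on X generated by the partition {{1}, {2, 3}}:
sigma_X = {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset(X)}
sigma_Y = {frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}

f = {1: "a", 2: "b", 3: "b"}  # constant on each block of the partition
g = {1: "a", 2: "a", 3: "b"}  # separates 2 from 3
print(is_measurable(f, X, sigma_X, sigma_Y))  # True
print(is_measurable(g, X, sigma_X, sigma_Y))  # False: preimage of {"a"} is {1, 2}, not in sigma_X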
== Term usage variations ==
The choice of σ-algebras in the definition above is sometimes implicit and left up to the context. For example, for ℝ, ℂ, or other topological spaces, the Borel algebra (generated by all the open sets) is a common choice. Some authors define measurable functions as exclusively real-valued ones with respect to the Borel algebra.
If the values of the function lie in an infinite-dimensional vector space, other non-equivalent definitions of measurability, such as weak measurability and Bochner measurability, exist.
== Notable classes of measurable functions ==
Random variables are by definition measurable functions defined on probability spaces.
If (X, Σ) and (Y, T) are Borel spaces, a measurable function f : (X, Σ) → (Y, T) is also called a Borel function. Continuous functions are Borel functions but not all Borel functions are continuous. However, a measurable function is nearly a continuous function; see Luzin's theorem. If a Borel function happens to be a section of a map π : Y → X, it is called a Borel section.
A Lebesgue measurable function is a measurable function f : (ℝ, ℒ) → (ℂ, ℬℂ), where ℒ is the σ-algebra of Lebesgue measurable sets and ℬℂ is the Borel algebra on the complex numbers ℂ.
Lebesgue measurable functions are of interest in mathematical analysis because they can be integrated. In the case f : X → ℝ, f is Lebesgue measurable if and only if {f > α} = {x ∈ X : f(x) > α} is measurable for all α ∈ ℝ. This is also equivalent to any of {f ≥ α}, {f < α}, {f ≤ α} being measurable for all α, or the preimage of any open set being measurable. Continuous functions, monotone functions, step functions, semicontinuous functions, Riemann-integrable functions, and functions of bounded variation are all Lebesgue measurable. A function f : X → ℂ is measurable if and only if its real and imaginary parts are measurable.
== Properties of measurable functions ==
The sum and product of two complex-valued measurable functions are measurable. So is the quotient, so long as there is no division by zero.
If f : (X, Σ1) → (Y, Σ2) and g : (Y, Σ2) → (Z, Σ3) are measurable functions, then so is their composition g ∘ f : (X, Σ1) → (Z, Σ3).
If f : (X, Σ1) → (Y, Σ2) and g : (Y, Σ3) → (Z, Σ4) are measurable functions, their composition g ∘ f : X → Z need not be (Σ1, Σ4)-measurable unless Σ3 ⊆ Σ2. Indeed, two Lebesgue-measurable functions may be constructed in such a way as to make their composition non-Lebesgue-measurable.
The (pointwise) supremum, infimum, limit superior, and limit inferior of a sequence (viz., countably many) of real-valued measurable functions are all measurable as well.
The pointwise limit of a sequence of measurable functions fn : X → Y is measurable, where Y is a metric space (endowed with the Borel algebra). This is not true in general if Y is non-metrizable. The corresponding statement for continuous functions requires stronger conditions than pointwise convergence, such as uniform convergence.
== Non-measurable functions ==
Real-valued functions encountered in applications tend to be measurable; however, it is not difficult to prove the existence of non-measurable functions. Such proofs rely on the axiom of choice in an essential way, in the sense that Zermelo–Fraenkel set theory without the axiom of choice does not prove the existence of such functions.
In any measure space (X, Σ) with a non-measurable set A ⊂ X, A ∉ Σ, one can construct a non-measurable indicator function:
{\displaystyle \mathbf {1} _{A}:(X,\Sigma )\to \mathbb {R} ,\quad \mathbf {1} _{A}(x)={\begin{cases}1&{\text{ if }}x\in A\\0&{\text{ otherwise}},\end{cases}}}
where ℝ is equipped with the usual Borel algebra. This is a non-measurable function since the preimage of the measurable set {1} is the non-measurable A.
As another example, any non-constant function f : X → ℝ is non-measurable with respect to the trivial σ-algebra Σ = {∅, X}, since the preimage of any point in the range is some proper, nonempty subset of X, which is not an element of the trivial Σ.
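This trivial-σ-algebra example can be confirmed with the same kind of exhaustive preimage check used above (again purely illustrative):

def preimage(f, E, X):
    return frozenset(x for x in X if f[x] in E)

def is_measurable(f, X, sigma_X, sigma_Y):
    return all(preimage(f, E, X) in sigma_X for E in sigma_Y)

X = {1, 2}
trivial = {frozenset(), frozenset(X)}  # the trivial sigma-algebra {∅, X}
sigma_Y = {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})}

nonconstant = {1: 0, 2: 1}
constant = {1: 1, 2: 1}
print(is_measurable(nonconstant, X, trivial, sigma_Y))  # False: preimage of {1} is {2}, a proper nonempty subset
print(is_measurable(constant, X, trivial, sigma_Y))     # True: constant functions are always measurable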
== See also ==
Bochner measurable function
Bochner space – Type of topological space
Lp space – Function spaces generalizing finite-dimensional p norm spaces; vector spaces of measurable functions: the Lp spaces
Measure-preserving dynamical system – Subject of study in ergodic theory
Vector measure
Weakly measurable function
== Notes ==
== External links ==
Measurable function at Encyclopedia of Mathematics
Borel function at Encyclopedia of Mathematics
In mathematics, a functional is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author).
In linear algebra, it is synonymous with a linear form, which is a linear mapping from a vector space V into its field of scalars (that is, it is an element of the dual space V∗).
In functional analysis and related fields, it refers to a mapping from a space X into the field of real or complex numbers. In functional analysis, the term linear functional is a synonym of linear form; that is, it is a scalar-valued linear map. Depending on the author, such mappings may or may not be assumed to be linear, or to be defined on the whole space X.
In computer science, it is synonymous with a higher-order function, which is a function that takes one or more functions as arguments or returns them.
This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions.
In the case where the space X is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function".
However, the fact that X is a space of functions is not mathematically essential, so this older definition is no longer prevalent.
The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian.
== Details ==
=== Duality ===
The mapping x0 ↦ f(x0) is a function, where x0 is an argument of a function f. At the same time, the mapping of a function to the value of the function at a point, f ↦ f(x0), is a functional; here, x0 is a parameter.
Provided that f is a linear function from a vector space to the underlying scalar field, the above linear maps are dual to each other, and in functional analysis both are called linear functionals.
=== Definite integral ===
Integrals such as
{\displaystyle f\mapsto I[f]=\int _{\Omega }H(f(x),f'(x),\ldots )\;\mu (\mathrm {d} x)}
form a special class of functionals. They map a function f into a real number, provided that H is real-valued. Examples include:
the area underneath the graph of a positive function f:
{\displaystyle f\mapsto \int _{x_{0}}^{x_{1}}f(x)\;\mathrm {d} x}
the Lp norm of a function on a set E:
{\displaystyle f\mapsto \left(\int _{E}|f|^{p}\;\mathrm {d} x\right)^{1/p}}
the arclength of a curve in 2-dimensional Euclidean space:
{\displaystyle f\mapsto \int _{x_{0}}^{x_{1}}{\sqrt {1+|f'(x)|^{2}}}\;\mathrm {d} x}
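Each of these integral functionals is straightforward to approximate numerically. A sketch using NumPy quadrature (the test function f(x) = sin x on [0, π] is an arbitrary illustrative choice):

import numpy as np

x = np.linspace(0.0, np.pi, 10_001)
f = np.sin(x)
fp = np.gradient(f, x)                         # numerical derivative f'

area = np.trapz(f, x)                          # area under the graph, about 2
lp_norm = np.trapz(np.abs(f) ** 2, x) ** 0.5   # L^2 norm, about sqrt(pi/2)
arclength = np.trapz(np.sqrt(1 + fp ** 2), x)  # arclength of the graph, about 3.82
print(area, lp_norm, arclength)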
=== Inner product spaces ===
Given an inner product space X and a fixed vector x⃗ ∈ X, the map defined by y⃗ ↦ x⃗ ⋅ y⃗ is a linear functional on X. The set of vectors y⃗ such that x⃗ ⋅ y⃗ is zero is a vector subspace of X, called the null space or kernel of the functional, or the orthogonal complement of x⃗, denoted {x⃗}⊥.
For example, taking the inner product with a fixed function g ∈ L2([−π, π]) defines a (linear) functional on the Hilbert space L2([−π, π]) of square integrable functions on [−π, π]:
{\displaystyle f\mapsto \langle f,g\rangle =\int _{[-\pi ,\pi ]}{\bar {f}}g}
=== Locality ===
If a functional's value can be computed for small segments of the input curve and then summed to find the total value, the functional is called local. Otherwise it is called non-local. For example:
{\displaystyle F(y)=\int _{x_{0}}^{x_{1}}y(x)\;\mathrm {d} x}
is local while
{\displaystyle F(y)={\frac {\int _{x_{0}}^{x_{1}}y(x)\;\mathrm {d} x}{\int _{x_{0}}^{x_{1}}(1+[y(x)]^{2})\;\mathrm {d} x}}}
is non-local. This occurs commonly when integrals appear separately in the numerator and denominator of an expression, as in calculations of the center of mass.
== Functional equations ==
The traditional usage also applies when one talks about a functional equation, meaning an equation between functionals: an equation
F
=
G
{\displaystyle F=G}
between functionals can be read as an 'equation to solve', with solutions being themselves functions. In such equations there may be several sets of variable unknowns, like when it is said that an additive map
f
{\displaystyle f}
is one satisfying Cauchy's functional equation:
f
(
x
+
y
)
=
f
(
x
)
+
f
(
y
)
for all
x
,
y
.
{\displaystyle f(x+y)=f(x)+f(y)\qquad {\text{ for all }}x,y.}
== Derivative and integration ==
Functional derivatives are used in Lagrangian mechanics. They are derivatives of functionals; that is, they carry information on how a functional changes when the input function changes by a small amount.
Richard Feynman used functional integrals as the central idea in his sum over the histories formulation of quantum mechanics. This usage implies an integral taken over some function space.
== See also ==
Linear form – Linear map from a vector space to its field of scalars
Optimization (mathematics) – Study of mathematical algorithms for optimization problems
Tensor – Algebraic object with geometric applications
== References ==
Axler, Sheldon (December 18, 2014), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer (published 2015), ISBN 978-3-319-11079-0
Kolmogorov, Andrey; Fomin, Sergei V. (2012) [1957]. Elements of the Theory of Functions and Functional Analysis. Dover Books on Mathematics. New York: Dover Books. ISBN 978-1-61427-304-2. OCLC 912495626.
Lang, Serge (2002), "III. Modules, §6. The dual space and dual module", Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, pp. 142–146, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001
Wilansky, Albert (October 17, 2008) [1970]. Topology for Analysis. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-46903-4. OCLC 227923899.
Sobolev, V.I. (2001) [1994], "Functional", Encyclopedia of Mathematics, EMS Press
Linear functional at the nLab
Nonlinear functional at the nLab
Rowland, Todd. "Functional". MathWorld.
Rowland, Todd. "Linear functional". MathWorld. | Wikipedia/Functional_(mathematics) |
In mathematics, the closed graph theorem may refer to one of several basic results characterizing continuous functions in terms of their graphs.
Each gives conditions when functions with closed graphs are necessarily continuous.
A blog post by T. Tao lists several closed graph theorems throughout mathematics.
== Graphs and maps with closed graphs ==
If f : X → Y is a map between topological spaces then the graph of f is the set
{\displaystyle \Gamma _{f}:=\{(x,f(x)):x\in X\}}
or equivalently,
{\displaystyle \Gamma _{f}:=\{(x,y)\in X\times Y:y=f(x)\}}
It is said that the graph of f is closed if Γf is a closed subset of X × Y (with the product topology).
Any continuous function into a Hausdorff space has a closed graph (see § Closed graph theorem in point-set topology)
Consider any linear map L : X → Y between two topological vector spaces whose topologies are (Cauchy) complete with respect to translation-invariant metrics. If in addition (1a) L is sequentially continuous in the sense of the product topology, then L is continuous and its graph, Gr L, is necessarily closed. Conversely, if L is such a linear map for which, in place of (1a), the graph of L is (1b) known to be closed in the Cartesian product space X × Y, then L is continuous and therefore necessarily sequentially continuous.
=== Examples of continuous maps that do not have a closed graph ===
If X is any space then the identity map Id : X → X is continuous but its graph, which is the diagonal ΓId := {(x, x) : x ∈ X}, is closed in X × X if and only if X is Hausdorff. In particular, if X is not Hausdorff then Id : X → X is continuous but does not have a closed graph.
Let X denote the real numbers ℝ with the usual Euclidean topology and let Y denote ℝ with the indiscrete topology (where note that Y is not Hausdorff and that every function valued in Y is continuous). Let f : X → Y be defined by f(0) = 1 and f(x) = 0 for all x ≠ 0. Then f : X → Y is continuous but its graph is not closed in X × Y.
== Closed graph theorem in point-set topology ==
In point-set topology, the closed graph theorem states the following: if X is a topological space, Y is a compact Hausdorff space, and f : X → Y is a function, then f is continuous if and only if its graph is closed in X × Y.
If X, Y are compact Hausdorff spaces, then the theorem can also be deduced from the open mapping theorem for such spaces; see § Relation to the open mapping theorem.
Non-Hausdorff spaces are rarely seen, but non-compact spaces are common. An example of a non-compact Y is the real line, which allows the discontinuous function with closed graph
{\displaystyle f(x)={\begin{cases}{\frac {1}{x}}{\text{ if }}x\neq 0,\\0{\text{ else}}\end{cases}}}.
Also, closed linear operators in functional analysis (linear operators with closed graphs) are typically not continuous.
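The example f(x) = 1/x (with f(0) = 0) can be probed numerically: along any sequence x_n → 0 with x_n ≠ 0, the graph points (x_n, f(x_n)) escape every bounded set, so they converge to no point at all; closedness of the graph is therefore never violated even though f is discontinuous at 0. A tiny illustrative sketch:

def f(x):
    return 1.0 / x if x != 0 else 0.0

# The graph points (x_n, f(x_n)) along x_n = 1/n leave every bounded set,
# so they cannot converge to (0, f(0)) = (0, 0) or to any other point of R^2:
for n in (1, 10, 100, 1000):
    x = 1.0 / n
    print(f"x = {x:.4f}, f(x) = {f(x):.1f}")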
=== For set-valued functions ===
== In functional analysis ==
If T : X → Y is a linear operator between topological vector spaces (TVSs) then we say that T is a closed operator if the graph of T is closed in X × Y when X × Y is endowed with the product topology.
The closed graph theorem is an important result in functional analysis that guarantees that a closed linear operator is continuous under certain conditions.
The original result has been generalized many times.
A well-known version of the closed graph theorem is the following: a linear operator between two Banach spaces is continuous if and only if its graph is closed.
The theorem is a consequence of the open mapping theorem; see § Relation to the open mapping theorem below (conversely, the open mapping theorem in turn can be deduced from the closed graph theorem).
== Relation to the open mapping theorem ==
Often, the closed graph theorems are obtained as corollaries of the open mapping theorems in the following way. Let f : X → Y be any map. Then it factors as
{\displaystyle f:X{\overset {i}{\to }}\Gamma _{f}{\overset {q}{\to }}Y}.
Now, i is the inverse of the projection p : Γf → X. So, if the open mapping theorem holds for p, i.e., p is an open mapping, then i is continuous and then f is continuous (as the composition of continuous maps).
For example, the above argument applies if f is a linear operator between Banach spaces with closed graph, or if f is a map with closed graph between compact Hausdorff spaces.
== See also ==
Almost open linear map – Map that satisfies a condition similar to that of being an open map
Barrelled space – Type of topological vector space
Closed graph – Graph of a map closed in the product space
Closed linear operator – Linear operator whose graph is closed
Discontinuous linear map
Kakutani fixed-point theorem – Fixed-point theorem for set-valued functions
Open mapping theorem (functional analysis) – Condition for a linear operator to be open
Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
Webbed space – Space where open mapping and closed graph theorems hold
Zariski's main theorem – Theorem of algebraic geometry and commutative algebra
== Notes ==
== References ==
== Bibliography ==
Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
Folland, Gerald B. (1984), Real Analysis: Modern Techniques and Their Applications (1st ed.), John Wiley & Sons, ISBN 978-0-471-80958-6
Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704.
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. (accessible to patrons with print disabilities)
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive.
"Proof of closed graph theorem". PlanetMath. | Wikipedia/Closed_graph_theorem |
In functional analysis and related areas of mathematics, the group algebra is any of various constructions to assign to a locally compact group an operator algebra (or more generally a Banach algebra), such that representations of the algebra are related to representations of the group. As such, they are similar to the group ring associated to a discrete group.
== The algebra Cc(G) of continuous functions with compact support ==
If G is a locally compact Hausdorff group, G carries an essentially unique left-invariant countably additive Borel measure μ called a Haar measure. Using the Haar measure, one can define a convolution operation on the space Cc(G) of complex-valued continuous functions on G with compact support; Cc(G) can then be given any of various norms and the completion will be a group algebra.
To define the convolution operation, let f and g be two functions in Cc(G). For t in G, define
{\displaystyle [f*g](t)=\int _{G}f(s)g\left(s^{-1}t\right)\,d\mu (s).}
The fact that f ∗ g is continuous is immediate from the dominated convergence theorem. Also,
{\displaystyle \operatorname {Support} (f*g)\subseteq \operatorname {Support} (f)\cdot \operatorname {Support} (g)}
where the dot stands for the product in G. Cc(G) also has a natural involution defined by:
{\displaystyle f^{*}(s)={\overline {f(s^{-1})}}\,\Delta (s^{-1})}
where Δ is the modular function on G. With this involution, it is a *-algebra.
Theorem. With the norm:
{\displaystyle \|f\|_{1}:=\int _{G}|f(s)|\,d\mu (s),}
Cc(G) becomes an involutive normed algebra with an approximate identity.
The approximate identity can be indexed on a neighborhood basis of the identity consisting of compact sets. Indeed, if V is a compact neighborhood of the identity, let fV be a non-negative continuous function supported in V such that
{\displaystyle \int _{V}f_{V}(g)\,d\mu (g)=1.}
Then {fV}V is an approximate identity. A group algebra has an identity, as opposed to just an approximate identity, if and only if the topology on the group is the discrete topology.
Note that for discrete groups, Cc(G) is the same thing as the complex group ring C[G].
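For a finite group the Haar measure is counting measure, the integral defining the convolution becomes a finite sum, and Cc(G) = C[G] can be worked with directly. A sketch for the cyclic group G = Z/nZ (an illustrative choice; there s⁻¹t = (t − s) mod n, so convolution is circular convolution):

n = 5  # G = Z/nZ, with counting measure as the Haar measure

def conv(f, g):
    # (f * g)(t) = sum over s of f(s) g(s^{-1} t), with s^{-1} t = (t - s) mod n.
    return [sum(f[s] * g[(t - s) % n] for s in range(n)) for t in range(n)]

delta_e = [1, 0, 0, 0, 0]  # point mass at the identity element
f = [1, 2, 0, 0, 1]
g = [0, 1, 3, 0, 0]
h = [2, 0, 0, 1, 1]

assert conv(delta_e, f) == f == conv(f, delta_e)   # delta_e is a true identity: G is discrete
assert conv(conv(f, g), h) == conv(f, conv(g, h))  # convolution is associative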
The importance of the group algebra is that it captures the unitary representation theory of G as shown in the following
Theorem. Let G be a locally compact group. If U is a strongly continuous unitary representation of G on a Hilbert space H, then
{\displaystyle \pi _{U}(f)=\int _{G}f(g)U(g)\,d\mu (g)}
is a non-degenerate bounded *-representation of the normed algebra Cc(G). The map U ↦ πU
is a bijection between the set of strongly continuous unitary representations of G and non-degenerate bounded *-representations of Cc(G). This bijection respects unitary equivalence and strong containment. In particular, πU is irreducible if and only if U is irreducible.
Non-degeneracy of a representation π of Cc(G) on a Hilbert space Hπ means that
{\displaystyle \left\{\pi (f)\xi :f\in \operatorname {C} _{c}(G),\xi \in H_{\pi }\right\}}
is dense in Hπ.
== The convolution algebra L1(G) ==
It is a standard theorem of measure theory that the completion of Cc(G) in the L1(G) norm is isomorphic to the space L1(G) of equivalence classes of functions which are integrable with respect to the Haar measure, where, as usual, two functions are regarded as equivalent if and only if they differ only on a set of Haar measure zero.
Theorem. L1(G) is a Banach *-algebra with the convolution product and involution defined above and with the L1 norm. L1(G) also has a bounded approximate identity.
=== The group C*-algebra C*(G) ===
Let C[G] be the group ring of a discrete group G.
For a locally compact group G, the group C*-algebra C*(G) of G is defined to be the C*-enveloping algebra of L1(G), i.e. the completion of Cc(G) with respect to the largest C*-norm:
{\displaystyle \|f\|_{C^{*}}:=\sup _{\pi }\|\pi (f)\|,}
where π ranges over all non-degenerate *-representations of Cc(G) on Hilbert spaces. When G is discrete, it follows from the triangle inequality that, for any such π, one has:
{\displaystyle \|\pi (f)\|\leq \|f\|_{1},}
hence the norm is well-defined.
It follows from the definition that, when G is a discrete group, C*(G) has the following universal property: any *-homomorphism from C[G] to some B(H) (the C*-algebra of bounded operators on some Hilbert space H) factors through the inclusion map:
{\displaystyle \mathbf {C} [G]\hookrightarrow C_{\max }^{*}(G).}
== The reduced group C*-algebra Cr*(G) ==
The reduced group C*-algebra Cr*(G) is the completion of Cc(G) with respect to the norm
{\displaystyle \|f\|_{C_{r}^{*}}:=\sup \left\{\|f*g\|_{2}:\|g\|_{2}=1\right\},}
where
{\displaystyle \|f\|_{2}={\sqrt {\int _{G}|f|^{2}\,d\mu }}}
is the L2 norm. Since the completion of Cc(G) with regard to the L2 norm is a Hilbert space, the Cr* norm is the norm of the bounded operator acting on L2(G) by convolution with f and thus a C*-norm.
Equivalently, Cr*(G) is the C*-algebra generated by the image of the left regular representation on ℓ2(G).
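For a finite group the reduced norm is effectively computable: convolution by f is a linear operator on ℓ2(G), and its operator (spectral) norm is the reduced C*-norm of f. A sketch, again for the illustrative choice G = Z/nZ, where this operator is a circulant matrix:

import numpy as np

n = 5
f = np.array([1.0, 2.0, 0.0, 0.0, 1.0])

# Matrix of the operator g -> f * g on l^2(Z/nZ); entry (t, s) is f((t - s) mod n).
L = np.array([[f[(t - s) % n] for s in range(n)] for t in range(n)])

reduced_norm = np.linalg.norm(L, 2)  # operator norm = reduced C*-norm of f
l1_norm = np.abs(f).sum()
print(reduced_norm, l1_norm)
assert reduced_norm <= l1_norm + 1e-12  # the C*-norm is dominated by the L^1 norm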
In general, Cr*(G) is a quotient of C*(G). The reduced group C*-algebra is isomorphic to the non-reduced group C*-algebra defined above if and only if G is amenable.
== von Neumann algebras associated to groups ==
The group von Neumann algebra W*(G) of G is the enveloping von Neumann algebra of C*(G).
For a discrete group G, we can consider the Hilbert space ℓ2(G) for which G is an orthonormal basis. Since G operates on ℓ2(G) by permuting the basis vectors, we can identify the complex group ring C[G] with a subalgebra of the algebra of bounded operators on ℓ2(G). The weak closure of this subalgebra, NG, is a von Neumann algebra.
The center of NG can be described in terms of those elements of G whose conjugacy class is finite. In particular, if the identity element of G is the only group element with that property (that is, G has the infinite conjugacy class property), the center of NG consists only of complex multiples of the identity.
NG is isomorphic to the hyperfinite type II1 factor if and only if G is countable, amenable, and has the infinite conjugacy class property.
== See also ==
Graph algebra
Incidence algebra
Hecke algebra of a locally compact group
Path algebra
Groupoid algebra
Stereotype algebra
Stereotype group algebra
Hopf algebra
== Notes ==
== References ==
Lang, S. (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 978-1-4613-0041-0.
Vinberg, E. (10 April 2003). A Course in Algebra. Graduate Studies in Mathematics. Vol. 56. American Mathematical Society. doi:10.1090/gsm/056. ISBN 978-0-8218-3413-8.
Dixmier, Jacques (1982). C*-algebras. North-Holland. ISBN 978-0-444-86391-1.
Kirillov, Aleksandr A. (1976). Elements of the Theory of Representations. Grundlehren der mathematischen Wissenschaften. Vol. 220. Springer-Verlag. doi:10.1007/978-3-642-66243-0. ISBN 978-3-642-66245-4.
Loomis, Lynn H. (19 July 2011). Introduction to Abstract Harmonic Analysis (Dover Books on Mathematics) by Lynn H. Loomis (2011) Paperback. Dover Publications. ISBN 978-0-486-48123-4.
A.I. Shtern (2001) [1994], "Group algebra of a locally compact group", Encyclopedia of Mathematics, EMS Press
This article incorporates material from Group $C^*$-algebra on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
In linear algebra, a sublinear function (or functional as is more often used in functional analysis), also called a quasi-seminorm or a Banach functional, on a vector space X is a real-valued function with only some of the properties of a seminorm.
In functional analysis the name Banach functional is sometimes used, reflecting that they are most commonly used when applying a general formulation of the Hahn–Banach theorem.
The notion of a sublinear function was introduced by Stefan Banach when he proved his version of the Hahn-Banach theorem.
There is also a different notion in computer science, described below, that also goes by the name "sublinear function."
== Definitions ==
Let X be a vector space over a field 𝕂, where 𝕂 is either the real numbers ℝ or complex numbers ℂ. A real-valued function p : X → ℝ on X is called a sublinear function (or a sublinear functional if 𝕂 = ℝ), and also sometimes called a quasi-seminorm or a Banach functional, if it has these two properties:
Positive homogeneity/Nonnegative homogeneity: p(rx) = r p(x) for all real r ≥ 0 and all x ∈ X. This condition holds if and only if p(rx) = r p(x) for all positive real r > 0 and all x ∈ X.
Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X. This subadditivity condition requires p to be real-valued.
A function p : X → ℝ is called positive or nonnegative if p(x) ≥ 0 for all x ∈ X, although some authors define positive to instead mean that p(x) ≠ 0 whenever x ≠ 0; these definitions are not equivalent.
It is a symmetric function if p(−x) = p(x) for all x ∈ X.
Every subadditive symmetric function is necessarily nonnegative.
A sublinear function on a real vector space is symmetric if and only if it is a seminorm.
A sublinear function on a real or complex vector space is a seminorm if and only if it is a balanced function or, equivalently, if and only if p(ux) ≤ p(x) for every unit-length scalar u (satisfying |u| = 1) and every x ∈ X.
The set of all sublinear functions on X, denoted by X#, can be partially ordered by declaring p ≤ q if and only if p(x) ≤ q(x) for all x ∈ X. A sublinear function is called minimal if it is a minimal element of X# under this order. A sublinear function is minimal if and only if it is a real linear functional.
== Examples and sufficient conditions ==
Every norm, seminorm, and real linear functional is a sublinear function.
The identity function ℝ → ℝ on X := ℝ is an example of a sublinear function (in fact, it is even a linear functional) that is neither positive nor a seminorm; the same is true of this map's negation x ↦ −x.
More generally, for any real a ≤ b, the map
{\displaystyle {\begin{alignedat}{4}S_{a,b}:\;&&\mathbb {R} &&\;\to \;&\mathbb {R} \\[0.3ex]&&x&&\;\mapsto \;&{\begin{cases}ax&{\text{ if }}x\leq 0\\bx&{\text{ if }}x\geq 0\\\end{cases}}\\\end{alignedat}}}
is a sublinear function on X := ℝ and moreover, every sublinear function p : ℝ → ℝ is of this form; specifically, if a := −p(−1) and b := p(1) then a ≤ b and p = S_{a,b}.
If p and q are sublinear functions on a real vector space X then so is the map x ↦ max{p(x), q(x)}. More generally, if 𝒫 is any non-empty collection of sublinear functionals on a real vector space X and if, for all x ∈ X, q(x) := sup{p(x) : p ∈ 𝒫}, then q is a sublinear functional on X.
A function p : X → ℝ which is subadditive, convex, and satisfies p(0) ≤ 0 is also positively homogeneous (the latter condition p(0) ≤ 0 is necessary, as the example of p(x) := √(x² + 1) on X := ℝ shows). If p is positively homogeneous, it is convex if and only if it is subadditive. Therefore, assuming p(0) ≤ 0, any two properties among subadditivity, convexity, and positive homogeneity imply the third.
== Properties ==
Every sublinear function is a convex function: for 0 ≤ t ≤ 1,
{\displaystyle {\begin{alignedat}{3}p(tx+(1-t)y)&\leq p(tx)+p((1-t)y)&&\quad {\text{ subadditivity}}\\&=tp(x)+(1-t)p(y)&&\quad {\text{ nonnegative homogeneity}}\\\end{alignedat}}}
If p : X → ℝ is a sublinear function on a vector space X then p(0) = 0 ≤ p(x) + p(−x) for every x ∈ X, which implies that at least one of p(x) and p(−x) must be nonnegative; that is, for every x ∈ X, 0 ≤ max{p(x), p(−x)}.
Moreover, when p : X → ℝ is a sublinear function on a real vector space then the map q : X → ℝ defined by q(x) := max{p(x), p(−x)} is a seminorm.
Subadditivity of p : X → ℝ guarantees that for all vectors x, y ∈ X,
p(x) − p(y) ≤ p(x − y),
−p(x) ≤ p(−x),
so if p is also symmetric then the reverse triangle inequality will hold for all vectors x, y ∈ X:
|p(x) − p(y)| ≤ p(x − y).
Defining ker p := p⁻¹(0), subadditivity also guarantees that for all x ∈ X, the value of p on the set x + (ker p ∩ −ker p) = {x + k : p(k) = 0 = p(−k)} is constant and equal to p(x).
In particular, if ker p = p⁻¹(0) is a vector subspace of X then −ker p = ker p and the assignment x + ker p ↦ p(x), which will be denoted by p̂, is a well-defined real-valued sublinear function on the quotient space X / ker p that satisfies p̂⁻¹(0) = ker p.
If p is a seminorm then p̂ is just the usual canonical norm on the quotient space X / ker p.
Adding bc to both sides of the hypothesis p(x) + ac < inf p(x + aK) (where p(x + aK) := {p(x + ak) : k ∈ K}) and combining that with the conclusion gives
{\displaystyle p(x)+ac+bc~<~\inf _{}p(x+aK)+bc~\leq ~p(x+a\mathbf {z} )+bc~<~\inf _{}p(x+a\mathbf {z} +bK)}
which yields many more inequalities, including, for instance,
{\displaystyle p(x)+ac+bc~<~p(x+a\mathbf {z} )+bc~<~p(x+a\mathbf {z} +b\mathbf {z} )}
in which an expression on one side of a strict inequality < can be obtained from the other by replacing the symbol c with z (or vice versa) and moving the closing parenthesis to the right (or left) of an adjacent summand (all other symbols remain fixed and unchanged).
=== Associated seminorm ===
If p : X → ℝ is a real-valued sublinear function on a real vector space X (or if X is complex, then when it is considered as a real vector space) then the map q(x) := max{p(x), p(−x)} defines a seminorm on the real vector space X called the seminorm associated with p.
A sublinear function p on a real or complex vector space is a symmetric function if and only if p = q, where q(x) := max{p(x), p(−x)} as before.
More generally, if p : X → ℝ is a real-valued sublinear function on a (real or complex) vector space X then q(x) := sup_{|u| = 1} p(ux) = sup{p(ux) : u is a unit scalar} will define a seminorm on X if this supremum is always a real number (that is, never equal to ∞).
=== Relation to linear functionals ===
If p is a sublinear function on a real vector space X then the following are equivalent:
p is a linear functional;
p(x) + p(−x) ≤ 0 for every x ∈ X;
p(x) + p(−x) = 0 for every x ∈ X;
p is a minimal sublinear function.
If p is a sublinear function on a real vector space X then there exists a linear functional f on X such that f ≤ p.
If X is a real vector space, f is a linear functional on X, and p is a positive sublinear function on X, then f ≤ p on X if and only if f⁻¹(1) ∩ {x ∈ X : p(x) < 1} = ∅.
==== Dominating a linear functional ====
A real-valued function f defined on a subset of a real or complex vector space X is said to be dominated by a sublinear function p if f(x) ≤ p(x) for every x that belongs to the domain of f.
If f : X → ℝ is a real linear functional on X then f is dominated by p (that is, f ≤ p) if and only if −p(−x) ≤ f(x) ≤ p(x) for every x ∈ X.
Moreover, if p is a seminorm or some other symmetric map (which by definition means that p(−x) = p(x) holds for all x) then f ≤ p if and only if |f| ≤ p.
=== Continuity ===
Suppose {\displaystyle X} is a topological vector space (TVS) over the real or complex numbers and {\displaystyle p} is a sublinear function on {\displaystyle X.} Then the following are equivalent:
{\displaystyle p} is continuous;
{\displaystyle p} is continuous at 0;
{\displaystyle p} is uniformly continuous on {\displaystyle X};
and if {\displaystyle p} is positive then this list may be extended to include:
{\displaystyle \{x\in X:p(x)<1\}} is open in {\displaystyle X.}
If {\displaystyle X} is a real TVS, {\displaystyle f} is a linear functional on {\displaystyle X,} and {\displaystyle p} is a continuous sublinear function on {\displaystyle X,} then {\displaystyle f\leq p} on {\displaystyle X} implies that {\displaystyle f} is continuous.
=== Relation to Minkowski functions and open convex sets ===
==== Relation to open convex sets ====
== Operators ==
The concept can be extended to operators that are homogeneous and subadditive.
This requires only that the codomain be, say, an ordered vector space to make sense of the conditions.
== Computer science definition ==
In computer science, a function {\displaystyle f:\mathbb {Z} ^{+}\to \mathbb {R} } is called sublinear if
{\displaystyle \lim _{n\to \infty }{\frac {f(n)}{n}}=0,}
or {\displaystyle f(n)\in o(n)} in asymptotic notation (notice the small {\displaystyle o}).
Formally, {\displaystyle f(n)\in o(n)} if and only if, for any given {\displaystyle c>0,} there exists an {\displaystyle N} such that {\displaystyle f(n)<cn} for {\displaystyle n\geq N.} That is, {\displaystyle f} grows slower than any linear function.
The two meanings should not be confused: while a Banach functional is convex, almost the opposite is true for functions of sublinear growth: every function {\displaystyle f(n)\in o(n)} can be upper-bounded by a concave function of sublinear growth.
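A short illustration (the sample functions are arbitrary choices of mine): for sublinear f the ratio f(n)/n tends to 0, while for a linear f it does not.

```python
import math

# Illustrative sketch (the sample functions are assumptions of this example):
# f(n)/n -> 0 certifies sublinear growth in the computer-science sense.

for f, name in [(math.sqrt, "sqrt(n)"), (math.log, "log(n)"), (lambda n: 0.5 * n, "n/2")]:
    print(name, [round(f(10**k) / 10**k, 6) for k in range(1, 7)])
# sqrt(n) and log(n): the ratios decrease toward 0;  n/2: the ratio stays at 0.5
```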
== See also ==
Asymmetric norm – Generalization of the concept of a norm
Auxiliary normed space
Hahn–Banach theorem – Theorem on extension of bounded linear functionals
Linear functional – Linear map from a vector space to its field of scalars
Minkowski functional – Function made from a set
Norm (mathematics) – Length in a vector space
Seminorm – Mathematical function
Superadditivity – Property of a function
== Notes ==
Proofs
== References ==
== Bibliography ==
Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser. ISBN 978-0-8176-4998-2. OCLC 710154895.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even order by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
== Introduction ==
Spectral theory for second order ordinary differential equations on a compact interval was developed by Jacques Charles François Sturm and Joseph Liouville in the nineteenth century and is now known as Sturm–Liouville theory. In modern language, it is an application of the spectral theorem for compact operators due to David Hilbert. In his dissertation, published in 1910, Hermann Weyl extended this theory to second order ordinary differential equations with singularities at the endpoints of the interval, now allowed to be infinite or semi-infinite. He simultaneously developed a spectral theory adapted to these special operators and introduced boundary conditions in terms of his celebrated dichotomy between limit points and limit circles.
In the 1920s, John von Neumann established a general spectral theorem for unbounded self-adjoint operators, which Kunihiko Kodaira used to streamline Weyl's method. Kodaira also generalised Weyl's method to singular ordinary differential equations of even order and obtained a simple formula for the spectral measure. The same formula had also been obtained independently by E. C. Titchmarsh in 1946 (scientific communication between Japan and the United Kingdom had been interrupted by World War II). Titchmarsh had followed the method of the German mathematician Emil Hilb, who derived the eigenfunction expansions using complex function theory instead of operator theory. Other methods avoiding the spectral theorem were later developed independently by Levitan, Levinson and Yoshida, who used the fact that the resolvent of the singular differential operator could be approximated by compact resolvents corresponding to Sturm–Liouville problems for proper subintervals. Another method was found by Mark Grigoryevich Krein; his use of direction functionals was subsequently generalised by Izrail Glazman to arbitrary ordinary differential equations of even order.
Weyl applied his theory to Carl Friedrich Gauss's hypergeometric differential equation, thus obtaining a far-reaching generalisation of the transform formula of Gustav Ferdinand Mehler (1881) for the Legendre differential equation, rediscovered by the Russian physicist Vladimir Fock in 1943, and usually called the Mehler–Fock transform. The corresponding ordinary differential operator is the radial part of the Laplacian operator on 2-dimensional hyperbolic space. More generally, the Plancherel theorem for SL(2,R) of Harish Chandra and Gelfand–Naimark can be deduced from Weyl's theory for the hypergeometric equation, as can the theory of spherical functions for the isometry groups of higher dimensional hyperbolic spaces. Harish Chandra's later development of the Plancherel theorem for general real semisimple Lie groups was strongly influenced by the methods Weyl developed for eigenfunction expansions associated with singular ordinary differential equations. Equally importantly the theory also laid the mathematical foundations for the analysis of the Schrödinger equation and scattering matrix in quantum mechanics.
== Solutions of ordinary differential equations ==
=== Reduction to standard form ===
Let D be the second order differential operator on (a, b) given by
{\displaystyle Df(x)=-p(x)f''(x)+r(x)f'(x)+q(x)f(x),}
where p is a strictly positive continuously differentiable function and q and r are continuous real-valued functions.
For x0 in (a, b), define the Liouville transformation ψ by
{\displaystyle \psi (x)=\int _{x_{0}}^{x}p(t)^{-1/2}\,dt}
If
{\displaystyle U:L^{2}(a,b)\mapsto L^{2}(\psi (a),\psi (b))}
is the unitary operator defined by
{\displaystyle (Uf)(\psi (x))=f(x)\times \left(\psi '(x)\right)^{-1/2},\ \ \forall x\in (a,b)}
then
{\displaystyle U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}g=g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}}
and
{\displaystyle {\begin{aligned}U{\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}U^{-1}g&=\left(U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}\right)\times \left(U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}\right)g\\[1ex]&={\frac {\mathrm {d} }{\mathrm {d} \psi }}\left[g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}\right]\cdot \psi '+{\frac {1}{2}}\left[g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}\right]\cdot {\frac {\psi ''}{\psi '}}\\[1ex]&=g''\psi '^{2}+2g'\psi ''+{\frac {1}{2}}g\cdot \left[{\frac {\psi '''}{\psi '}}-{\frac {1}{2}}{\frac {\psi ''^{2}}{\psi '^{2}}}\right]\end{aligned}}}
Hence,
{\displaystyle UDU^{-1}g=-g''+Rg'+Qg,}
where
{\displaystyle R={\frac {p'+r}{p^{1/2}}}}
and
{\displaystyle Q=q-{\frac {rp'}{4p}}+{\frac {p''}{4}}-{\frac {5p'^{2}}{16p}}}
The term in g′ can be removed using an Euler integrating factor. If S′/S = R/2, then h = Sg satisfies
{\displaystyle (SUDU^{-1}S^{-1})h=-h''+Vh,}
where the potential V is given by
{\displaystyle V=Q+{\frac {S''}{S}}}
The differential operator can thus always be reduced to one of the form
{\displaystyle Df=-f''+qf.}
=== Existence theorem ===
The following is a version of the classical Picard existence theorem for second order differential equations with values in a Banach space E.
Let α, β be arbitrary elements of E, A a bounded operator on E and q a continuous function on [a, b].
Then, for c = a or c = b, the differential equation
{\displaystyle Df=Af}
has a unique solution f in C2([a,b], E) satisfying the initial conditions
{\displaystyle f(c)=\beta \,,\;f'(c)=\alpha .}
In fact a solution of the differential equation with these initial conditions is equivalent to a solution
of the integral equation
f
=
h
+
T
f
{\displaystyle f=h+Tf}
with T the bounded linear map on C([a,b], E) defined by
T
f
(
x
)
=
∫
c
x
K
(
x
,
y
)
f
(
y
)
d
y
,
{\displaystyle Tf(x)=\int _{c}^{x}K(x,y)f(y)\,dy,}
where K is the Volterra kernel
K
(
x
,
t
)
=
(
x
−
t
)
(
q
(
t
)
−
A
)
{\displaystyle K(x,t)=(x-t)(q(t)-A)}
and
{\displaystyle h(x)=\alpha (x-c)+\beta .}
Since ‖Tk‖ tends to 0, this integral equation has a unique solution given by the Neumann series
f
=
(
I
−
T
)
−
1
h
=
h
+
T
h
+
T
2
h
+
T
3
h
+
⋯
{\displaystyle f=(I-T)^{-1}h=h+Th+T^{2}h+T^{3}h+\cdots }
This iterative scheme is often called Picard iteration after the French mathematician Charles Émile Picard.
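A minimal numerical sketch of this scheme (the scalar case A = λ, the potential q = 0, the grid, and the iteration count are illustrative choices, not from the article): it iterates f ← h + Tf, evaluating the Volterra operator through the identity ∫c^x (x − t)g(t) dt = ∫c^x ∫c^s g(t) dt ds, i.e. by two cumulative integrations, and compares with the closed-form solution.

```python
import numpy as np

# Picard iteration for -f'' + q f = lam f with f(c) = beta, f'(c) = alpha.
# All concrete choices below (q = 0, lam = -1, the grid) are assumptions of
# this sketch; with q = 0, lam = -1 the equation is f'' = f.

a, b, c = 0.0, 1.0, 0.0
alpha, beta, lam = 1.0, 1.0, -1.0
q = lambda t: np.zeros_like(t)

x = np.linspace(a, b, 2001)
h = alpha * (x - c) + beta

def cumtrapz(y):                           # cumulative trapezoidal integral from c
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

f = h.copy()
for _ in range(60):                        # Picard iteration / Neumann series
    f = h + cumtrapz(cumtrapz((q(x) - lam) * f))

exact = beta * np.cosh(x - c) + alpha * np.sinh(x - c)
print(np.max(np.abs(f - exact)))           # ~1e-7: trapezoidal discretisation error
```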
=== Fundamental eigenfunctions ===
If f is twice continuously differentiable (i.e. C2) on (a, b) satisfying Df = λf, then f is called an eigenfunction of D with eigenvalue λ.
In the case of a compact interval [a, b] and q continuous on [a, b], the existence theorem implies that for c = a or c = b and every complex number λ there is a unique C2 eigenfunction fλ on [a, b] with fλ(c) and f′λ(c) prescribed. Moreover, for each x in [a, b], fλ(x) and f′λ(x) are holomorphic functions of λ.
For an arbitrary interval (a, b) and q continuous on (a, b), the existence theorem implies that for c in (a, b) and every complex number λ there is a unique C2 eigenfunction fλ on (a, b) with fλ(c) and f′λ(c) prescribed. Moreover, for each x in (a, b), fλ(x) and f′λ(x) are holomorphic functions of λ.
=== Green's formula ===
If f and g are C2 functions on (a, b), the Wronskian W(f, g) is defined by
{\displaystyle W(f,g)(x)=f(x)g'(x)-f'(x)g(x).}
Green's formula - which in this one-dimensional case is a simple integration by parts - states that for x, y in (a, b)
{\displaystyle \int _{x}^{y}(Df)g-f(Dg)\,dt=W(f,g)(y)-W(f,g)(x).}
When q is continuous and f, g are C2 on the compact interval [a, b], this formula also holds for x = a or y = b.
When f and g are eigenfunctions for the same eigenvalue, then
{\displaystyle {\frac {d}{dx}}W(f,g)=0,}
so that W(f, g) is independent of x.
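This constancy is easy to observe numerically. Below is a minimal sketch (the potential, eigenvalue, step size and RK4 stepper are my choices, not the article's): it integrates two solutions of −f″ + qf = λf and tracks their Wronskian.

```python
import numpy as np

# Integrate two solutions of -f'' + q f = lam f (q, lam are assumptions of this
# sketch) and check that W(f, g) = f g' - f' g stays constant along the interval.

q, lam = np.cos, 2.0

def rhs(x, y):                 # y = (f, f'),  so  f'' = (q(x) - lam) f
    return np.array([y[1], (q(x) - lam) * y[0]])

def rk4_step(x, y, h):
    k1 = rhs(x, y); k2 = rhs(x + h/2, y + h/2 * k1)
    k3 = rhs(x + h/2, y + h/2 * k2); k4 = rhs(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

h, x = 1e-3, 0.0
f, g = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # independent initial data, W = 1
wronskians = []
for _ in range(5000):
    f, g = rk4_step(x, f, h), rk4_step(x, g, h)
    x += h
    wronskians.append(f[0] * g[1] - f[1] * g[0])
print(min(wronskians), max(wronskians))              # both ~1.0
```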
== Classical Sturm–Liouville theory ==
Let [a, b] be a finite closed interval, q a real-valued continuous function on [a, b] and let H0 be the space of C2 functions f on [a, b] satisfying the Robin boundary conditions
{\displaystyle {\begin{cases}\cos \alpha \,f(a)-\sin \alpha \,f'(a)=0,\\[0.5ex]\cos \beta \,f(b)-\sin \beta \,f'(b)=0,\end{cases}}}
with inner product
{\displaystyle (f,g)=\int _{a}^{b}f(x){\overline {g(x)}}\,dx.}
In practice usually one of the two standard boundary conditions:
Dirichlet boundary condition f(c) = 0
Neumann boundary condition f′(c) = 0
is imposed at each endpoint c = a, b.
The differential operator D given by
{\displaystyle Df=-f''+qf}
acts on H0. A function f in H0 is called an eigenfunction of D (for the above choice of boundary values) if Df = λ f for some complex number λ, the corresponding eigenvalue. By Green's formula, D is formally self-adjoint on H0, since the Wronskian W(f, g) vanishes if both f, g satisfy the boundary conditions:
{\displaystyle (Df,g)=(f,Dg),\quad {\text{ for }}f,g\in H_{0}.}
As a consequence, exactly as for a self-adjoint matrix in finite dimensions,
the eigenvalues of D are real;
the eigenspaces for distinct eigenvalues are orthogonal.
It turns out that the eigenvalues can be described by the maximum-minimum principle of Rayleigh–Ritz (see below). In fact it is easy to see a priori that the eigenvalues are bounded below because the operator D is itself bounded below on H0: there is a constant M such that (Df, f) ≥ M (f, f) for all f in H0.
In fact, integrating by parts,
{\displaystyle (Df,f)=\left[-f'{\overline {f}}\right]_{a}^{b}+\int |f'|^{2}+\int q|f|^{2}.}
For Dirichlet or Neumann boundary conditions, the first term vanishes and the inequality holds with M = inf q.
For general Robin boundary conditions the first term can be estimated using an elementary Peter-Paul version of Sobolev's inequality:
"Given ε > 0, there is constant R > 0 such that |f(x)|2 ≤ ε (f′, f′) + R (f, f) for all f in C1[a, b]."
In fact, since
{\displaystyle |f(b)-f(x)|\leq (b-a)^{1/2}\cdot \|f'\|_{2},}
only an estimate for f(b) is needed and this follows by replacing f(x) in the above inequality by (x − a)n·(b − a)−n·f(x) for n sufficiently large.
=== Green's function (regular case) ===
From the theory of ordinary differential equations, there are unique fundamental eigenfunctions φλ(x), χλ(x) such that
D φλ = λ φλ, φλ(a) = sin α, φλ'(a) = cos α
D χλ = λ χλ, χλ(b) = sin β, χλ'(b) = cos β
which at each point, together with their first derivatives, depend holomorphically on λ. Define
{\displaystyle \omega (\lambda )=W(\phi _{\lambda },\chi _{\lambda }),}
an entire holomorphic function of λ.
This function ω(λ) plays the role of the characteristic polynomial of D. Indeed, the uniqueness of the fundamental eigenfunctions implies that its zeros are precisely the eigenvalues of D and that each non-zero eigenspace is one-dimensional. In particular there are at most countably many eigenvalues of D and, if there are infinitely many, they must tend to infinity. It turns out that the zeros of ω(λ) also have multiplicity one (see below).
If λ is not an eigenvalue of D on H0, define the Green's function by
{\displaystyle G_{\lambda }(x,y)={\begin{cases}\phi _{\lambda }(x)\chi _{\lambda }(y)/\omega (\lambda )&{\text{ for }}x\geq y\\[1ex]\chi _{\lambda }(x)\phi _{\lambda }(y)/\omega (\lambda )&{\text{ for }}y\geq x.\end{cases}}}
This kernel defines an operator on the inner product space C[a,b] via
{\displaystyle (G_{\lambda }f)(x)=\int _{a}^{b}G_{\lambda }(x,y)f(y)\,dy.}
Since Gλ(x,y) is continuous on [a, b] × [a, b], it defines a Hilbert–Schmidt operator on the Hilbert space completion H of C[a, b] = H1 (or equivalently of the dense subspace H0), taking values in H1. This operator carries H1 into H0. When λ is real, Gλ(x,y) = Gλ(y,x) is also real, so defines a self-adjoint operator on H. Moreover,
Gλ (D − λ) = I on H0
Gλ carries H1 into H0, and (D − λ) Gλ = I on H1.
Thus the operator Gλ can be identified with the resolvent (D − λ)−1.
=== Spectral theorem ===
In fact let T = Gλ for λ large and negative. Then T defines a compact self-adjoint operator on the Hilbert space H.
By the spectral theorem for compact self-adjoint operators, H has an orthonormal basis consisting of eigenvectors ψn of T with Tψn = μn ψn, where μn tends to zero. The range of T contains H0 so is dense. Hence 0 is not an eigenvalue of T. The resolvent properties of T imply that ψn lies in H0 and that
{\displaystyle D\psi _{n}=\left(\lambda +{\frac {1}{\mu _{n}}}\right)\psi _{n}}
The minimax principle follows because if
{\displaystyle \lambda (G)=\min _{f\perp G}{\frac {(Df,f)}{(f,f)}},}
then λ(G) = λk for the linear span of the first k − 1 eigenfunctions. For any other (k − 1)-dimensional subspace G, some f in the linear span of the first k eigenvectors must be orthogonal to G. Hence λ(G) ≤ (Df,f)/(f,f) ≤ λk.
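The minimax characterisation can be checked numerically. Below is a minimal sketch (the potential, the interval [0, π] with Dirichlet conditions, and the grid are illustrative choices of mine): it discretises D = −d²/dx² + q by finite differences and checks that the k-th eigenvalue lies between k² + min q and k² + max q, as the minimax principle predicts.

```python
import numpy as np

# Finite-difference check of the minimax bounds for -f'' + q f on [0, pi] with
# Dirichlet boundary conditions (q and the grid size are assumptions of this sketch).

n = 1200
grid = np.linspace(0.0, np.pi, n + 2)
x, dx = grid[1:-1], grid[1] - grid[0]
q = np.sin(3 * x)

A = (np.diag(2.0 / dx**2 + q)
     + np.diag(-np.ones(n - 1) / dx**2, 1)
     + np.diag(-np.ones(n - 1) / dx**2, -1))
lam = np.linalg.eigvalsh(A)

for k in range(1, 6):   # lam_k should sit in [k^2 + min q, k^2 + max q]
    print(k, q.min() + k**2 <= lam[k - 1] <= q.max() + k**2, round(lam[k - 1], 4))
```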
=== Wronskian as a Fredholm determinant ===
For simplicity, suppose that m ≤ q(x) ≤ M on [0, π] with Dirichlet boundary conditions. The minimax principle shows that
{\displaystyle n^{2}+m\leq \lambda _{n}(D)\leq n^{2}+M.}
It follows that the resolvent (D − λ)−1 is a trace-class operator whenever λ is not an eigenvalue of D and hence that the Fredholm determinant det(I − μ(D − λ)−1) is defined.
The Dirichlet boundary conditions imply that
{\displaystyle \omega (\lambda )=\phi _{\lambda }(b).}
Using Picard iteration, Titchmarsh showed that φλ(b), and hence ω(λ), is an entire function of finite order 1/2:
{\displaystyle \omega (\lambda )={\mathcal {O}}\left(e^{\sqrt {|\lambda |}}\right)}
At a zero μ of ω(λ), φμ(b) = 0. Moreover,
{\displaystyle \psi (x)=\partial _{\lambda }\varphi _{\lambda }(x)|_{\lambda =\mu }}
satisfies (D − μ)ψ = φμ. Thus
{\displaystyle \omega (\lambda )=(\lambda -\mu )\psi (b)+{\mathcal {O}}((\lambda -\mu )^{2})}
This implies that μ is a simple zero of ω(λ): otherwise ψ(b) = 0, so that ψ would have to lie in H0. But then
{\displaystyle (\phi _{\mu },\phi _{\mu })=((D-\mu )\psi ,\phi _{\mu })=(\psi ,(D-\mu )\phi _{\mu })=0,}
a contradiction.
On the other hand, the distribution of the zeros of the entire function ω(λ) is already known from the minimax principle.
By the Hadamard factorization theorem, it follows that
{\displaystyle \omega (\lambda )=C\prod (1-\lambda /\lambda _{n}),}
for some non-zero constant C.
Hence
{\displaystyle \det(I-\mu (D-\lambda )^{-1})=\prod \left(1-{\mu \over \lambda _{n}-\lambda }\right)=\prod {1-(\lambda +\mu )/\lambda _{n} \over 1-\lambda /\lambda _{n}}={\omega (\lambda +\mu ) \over \omega (\lambda )}.}
In particular if 0 is not an eigenvalue of D
{\displaystyle \omega (\mu )=\omega (0)\cdot \det(I-\mu D^{-1}).}
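For q = 0 this can be checked directly. A minimal numerical sketch (the normalisations are mine, assuming Dirichlet conditions on [0, π], where the eigenvalues are n² and ω(λ) = sin(π√λ)/√λ for the fundamental eigenfunction φλ(x) = sin(√λ x)/√λ):

```python
import numpy as np

# Hadamard product check for q = 0 on [0, pi] with Dirichlet conditions
# (normalisations are assumptions of this sketch): prod_n (1 - mu/n^2)
# should equal omega(mu)/omega(0) = sin(pi sqrt(mu)) / (pi sqrt(mu)).

mu = 0.37
n = np.arange(1, 200_001)
hadamard = np.prod(1.0 - mu / n**2)
omega_ratio = np.sin(np.pi * np.sqrt(mu)) / (np.pi * np.sqrt(mu))
print(hadamard, omega_ratio)   # agree to ~6 digits (the product is truncated)
```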
== Tools from abstract spectral theory ==
=== Functions of bounded variation ===
A function ρ(x) of bounded variation on a closed interval [a, b] is a complex-valued function such that its total variation V(ρ), the supremum of the variations
{\displaystyle \sum _{r=0}^{k-1}|\rho (x_{r+1})-\rho (x_{r})|}
over all dissections
{\displaystyle a=x_{0}<x_{1}<\dots <x_{k}=b}
is finite. The real and imaginary parts of ρ are real-valued functions of bounded variation. If ρ is real-valued and normalised so that ρ(a) = 0, it has a canonical decomposition as the difference of two bounded non-decreasing functions:
{\displaystyle \rho (x)=\rho _{+}(x)-\rho _{-}(x),}
where ρ+(x) and ρ–(x) are the total positive and negative variation of ρ over [a, x].
If f is a continuous function on [a, b] its Riemann–Stieltjes integral with respect to ρ
{\displaystyle \int _{a}^{b}f(x)\,d\rho (x)}
is defined to be the limit of approximating sums
{\displaystyle \sum _{r=0}^{k-1}f(x_{r})(\rho (x_{r+1})-\rho (x_{r}))}
as the mesh of the dissection, given by sup |xr+1 − xr|, tends to zero.
This integral satisfies
{\displaystyle \left|\int _{a}^{b}f(x)\,d\rho (x)\right|\leq V(\rho )\cdot \|f\|_{\infty }}
and thus defines a bounded linear functional dρ on C[a, b] with norm ‖dρ‖ = V(ρ).
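A minimal numerical sketch (the integrand f and integrator ρ are my choices, not the article's): it forms the approximating sums above on a fine dissection and compares with the exact value.

```python
import numpy as np

# Riemann-Stieltjes approximating sums  sum f(x_r) (rho(x_{r+1}) - rho(x_r)).
# The concrete f and rho below are assumptions of this sketch.

f = np.cos
rho = lambda x: np.minimum(x, 1.0)   # bounded variation: rises on [0, 1], then flat

a, b, k = 0.0, 2.0, 200_000
x = np.linspace(a, b, k + 1)
approx = np.sum(f(x[:-1]) * np.diff(rho(x)))

# here d(rho) = dx on [0, 1] and 0 afterwards, so the exact value is sin(1)
print(approx, np.sin(1.0))
```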
Every bounded linear functional μ on C[a, b] has an absolute value |μ| defined for non-negative f by
{\displaystyle |\mu |(f)=\sup _{0\leq |g|\leq f}|\mu (g)|.}
The form |μ| extends linearly to a bounded linear form on C[a, b] with norm ‖μ‖ and satisfies the characterizing inequality
{\displaystyle |\mu (f)|\leq |\mu |(|f|)}
for f in C[a, b]. If μ is real, i.e. is real-valued on real-valued functions, then
{\displaystyle \mu =|\mu |-(|\mu |-\mu )\equiv \mu _{+}-\mu _{-}}
gives a canonical decomposition as a difference of positive forms, i.e. forms that are non-negative on non-negative functions.
Every positive form μ extends uniquely to the linear span of non-negative bounded lower semicontinuous functions g by the formula
{\displaystyle \mu (g)=\lim \mu (f_{n}),}
where the non-negative continuous functions fn increase pointwise to g.
The same therefore applies to an arbitrary bounded linear form μ, so that a function ρ of bounded variation may be defined by
{\displaystyle \rho (x)=\mu (\chi _{[a,x]}),}
where χA denotes the characteristic function of a subset A of [a, b]. Thus μ = dρ and ‖μ‖ = ‖dρ‖.
Moreover μ+ = dρ+ and μ– = dρ–.
This correspondence between functions of bounded variation and bounded linear forms is a special case of the Riesz representation theorem.
The support of μ = dρ is the complement of all points x in [a, b] where ρ is constant on some neighborhood of x; by definition it is a closed subset A of [a, b]. Moreover, μ((1 − χA)f) = 0, so that μ(f) = 0 if f vanishes on A.
=== Spectral measure ===
Let H be a Hilbert space and {\displaystyle T} a self-adjoint bounded operator on H with {\displaystyle 0\leq T\leq I}, so that the spectrum {\displaystyle \sigma (T)} of {\displaystyle T} is contained in {\displaystyle [0,1]}. If {\displaystyle p(t)} is a complex polynomial, then by the spectral mapping theorem
{\displaystyle \sigma (p(T))=p(\sigma (T))}
and hence
{\displaystyle \|p(T)\|\leq \|p\|_{\infty }}
where {\displaystyle \|\cdot \|_{\infty }} denotes the uniform norm on C[0, 1]. By the Weierstrass approximation theorem, polynomials are uniformly dense in C[0, 1]. It follows that {\displaystyle f(T)} can be defined for every {\displaystyle f\in C[0,1]}, with
{\displaystyle \sigma (f(T))=f(\sigma (T))} and {\displaystyle \|f(T)\|\leq \|f\|_{\infty }.}
If {\displaystyle 0\leq g\leq 1} is a lower semicontinuous function on [0, 1], for example the characteristic function {\displaystyle \chi _{[0,\alpha ]}} of a subinterval of [0, 1], then {\displaystyle g} is a pointwise increasing limit of non-negative {\displaystyle f_{n}\in C[0,1]}.
If {\displaystyle \xi } is a vector in H, then the vectors {\displaystyle \eta _{n}=f_{n}(T)\xi } form a Cauchy sequence in H, since, for {\displaystyle n\geq m},
{\displaystyle \|\eta _{n}-\eta _{m}\|^{2}\leq (\eta _{n},\xi )-(\eta _{m},\xi ),}
and {\displaystyle (\eta _{n},\xi )=(f_{n}(T)\xi ,\xi )} is bounded and increasing, so has a limit.
It follows that {\displaystyle g(T)} can be defined by
{\displaystyle g(T)\xi =\lim f_{n}(T)\xi .}
If {\displaystyle \xi } and η are vectors in H, then
{\displaystyle \mu _{\xi ,\eta }(f)=(f(T)\xi ,\eta )}
defines a bounded linear form {\displaystyle \mu _{\xi ,\eta }} on C[0, 1]. By the Riesz representation theorem
{\displaystyle \mu _{\xi ,\eta }=d\rho _{\xi ,\eta }}
for a unique normalised function {\displaystyle \rho _{\xi ,\eta }} of bounded variation on [0, 1]. {\displaystyle d\rho _{\xi ,\eta }} (or sometimes slightly incorrectly {\displaystyle \rho _{\xi ,\eta }} itself) is called the spectral measure determined by {\displaystyle \xi } and η.
The operator {\displaystyle g(T)} is accordingly uniquely characterised by the equation
{\displaystyle (g(T)\xi ,\eta )=\mu _{\xi ,\eta }(g)=\int _{0}^{1}g(\lambda )\,d\rho _{\xi ,\eta }(\lambda ).}
The spectral projection {\displaystyle E(\lambda )} is defined by
{\displaystyle E(\lambda )=\chi _{[0,\lambda ]}(T),}
so that
{\displaystyle \rho _{\xi ,\eta }(\lambda )=(E(\lambda )\xi ,\eta ).}
It follows that
{\displaystyle g(T)=\int _{0}^{1}g(\lambda )\,dE(\lambda ),}
which is understood in the sense that for any vectors {\displaystyle \xi } and {\displaystyle \eta },
{\displaystyle (g(T)\xi ,\eta )=\int _{0}^{1}g(\lambda )\,d(E(\lambda )\xi ,\eta )=\int _{0}^{1}g(\lambda )\,d\rho _{\xi ,\eta }(\lambda ).}
For a single vector {\displaystyle \xi ,\,\mu _{\xi }=\mu _{\xi ,\xi }} is a positive form on [0, 1] (in other words proportional to a probability measure on [0, 1]) and {\displaystyle \rho _{\xi }=\rho _{\xi ,\xi }} is non-negative and non-decreasing. Polarisation shows that all the forms {\displaystyle \mu _{\xi ,\eta }} can naturally be expressed in terms of such positive forms, since
{\displaystyle \mu _{\xi ,\eta }={\frac {1}{4}}\left(\mu _{\xi +\eta }+i\mu _{\xi +i\eta }-\mu _{\xi -\eta }-i\mu _{\xi -i\eta }\right)}
If the vector {\displaystyle \xi } is such that the linear span of the vectors {\displaystyle (T^{n}\xi )} is dense in H, i.e. {\displaystyle \xi } is a cyclic vector for {\displaystyle T}, then the map {\displaystyle U} defined by
{\displaystyle U(f)=f(T)\xi ,\,C[0,1]\rightarrow H}
satisfies
{\displaystyle (Uf_{1},Uf_{2})=\int _{0}^{1}f_{1}(\lambda ){\overline {f_{2}(\lambda )}}\,d\rho _{\xi }(\lambda ).}
Let {\displaystyle L_{2}([0,1],d\rho _{\xi })} denote the Hilbert space completion of {\displaystyle C[0,1]} associated with the possibly degenerate inner product on the right hand side.
Thus {\displaystyle U} extends to a unitary transformation of {\displaystyle L_{2}([0,1],\rho _{\xi })} onto H. {\displaystyle UTU^{\ast }} is then just multiplication by {\displaystyle \lambda } on {\displaystyle L_{2}([0,1],d\rho _{\xi })}; and more generally {\displaystyle Uf(T)U^{\ast }} is multiplication by {\displaystyle f(\lambda )}. In this case, the support of {\displaystyle d\rho _{\xi }} is exactly {\displaystyle \sigma (T)}, so that {\displaystyle T} is realised as multiplication by the coordinate function on a space of square integrable functions on its spectrum.
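In finite dimensions the construction is transparent. A minimal numpy sketch (the matrix, the vector ξ and the test function g are arbitrary choices of mine): for a real symmetric T with 0 ≤ T ≤ I, the spectral measure dρξ puts mass |(ξ, vk)|² at each eigenvalue λk, and (g(T)ξ, ξ) equals the integral of g against dρξ.

```python
import numpy as np

# Finite-dimensional spectral measure (all concrete choices are assumptions
# of this sketch): rho_xi(s) = (E(s) xi, xi) is a step function whose jumps
# are the squared components of xi in the eigenbasis.

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = A + A.T
w, V = np.linalg.eigh(A)
T = V @ np.diag((w - w.min()) / (w.max() - w.min())) @ V.T   # spectrum now in [0, 1]

lam, V = np.linalg.eigh(T)
xi = rng.standard_normal(6)
weights = (V.T @ xi)**2                  # point masses of d rho_xi
rho = lambda s: weights[lam <= s].sum()  # rho_xi(s) = (E(s) xi, xi)

gT_xi = V @ (np.exp(lam) * (V.T @ xi))   # g(T) xi for g = exp, via the eigenbasis
print(xi @ gT_xi, np.sum(np.exp(lam) * weights))   # the two numbers agree
print(rho(0.5))                          # the mass carried by eigenvalues <= 1/2
```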
== Weyl–Titchmarsh–Kodaira theory ==
The eigenfunction expansion associated with singular differential operators of the form
{\displaystyle Df=-(pf')'+qf}
on an open interval (a, b) requires an initial analysis of the behaviour of the fundamental eigenfunctions near the endpoints a and b to determine possible boundary conditions there. Unlike the regular Sturm–Liouville case, in some circumstances spectral values of D can have multiplicity 2. In the development outlined below standard assumptions will be imposed on p and q that guarantee that the spectrum of D has multiplicity one everywhere and is bounded below. This includes almost all important applications; modifications required for the more general case will be discussed later.
Having chosen the boundary conditions, as in the classical theory the resolvent of D, (D + R)−1 for R large and positive, is given by an operator T corresponding to a Green's function constructed from two fundamental eigenfunctions. In the classical case T was a compact self-adjoint operator; in this case T is just a self-adjoint bounded operator with 0 ≤ T ≤ I. The abstract theory of spectral measure can therefore be applied to T to give the eigenfunction expansion for D.
The central idea in the proof of Weyl and Kodaira can be explained informally as follows. Assume that the spectrum of D lies in [1, ∞) and that T = D−1 and let
{\displaystyle E(\lambda )=\chi _{[\lambda ^{-1},1]}(T)}
be the spectral projection of D corresponding to the interval [1, λ]. For an arbitrary function f define
{\displaystyle f(x,\lambda )=(E(\lambda )f)(x).}
f(x, λ) may be regarded as a differentiable map into the space of functions of bounded variation ρ; or equivalently as a differentiable map
{\displaystyle x\mapsto (d_{\lambda }f)(x)}
into the Banach space E of bounded linear functionals dρ on C[α,β] whenever [α, β] is a compact subinterval of [1, ∞).
Weyl's fundamental observation was that dλ f satisfies a second order ordinary differential equation taking values in E:
{\displaystyle D(d_{\lambda }f)=\lambda \cdot d_{\lambda }f.}
After imposing initial conditions on the first two derivatives at a fixed point c, this equation can be solved explicitly in terms of the two fundamental eigenfunctions and the "initial value" functionals
{\displaystyle (d_{\lambda }f)(c)=d_{\lambda }f(c,\cdot ),\quad (d_{\lambda }f)^{\prime }(c)=d_{\lambda }f_{x}(c,\cdot ).}
This point of view may now be turned on its head: f(c, λ) and fx(c, λ) may be written as
{\displaystyle f(c,\lambda )=(f,\xi _{1}(\lambda )),\quad f_{x}(c,\lambda )=(f,\xi _{2}(\lambda )),}
where ξ1(λ) and ξ2(λ) are given purely in terms of the fundamental eigenfunctions.
The functions of bounded variation
{\displaystyle \sigma _{ij}(\lambda )=(\xi _{i}(\lambda ),\xi _{j}(\lambda ))}
determine a spectral measure on the spectrum of D and can be computed explicitly from the behaviour of the fundamental eigenfunctions (the Titchmarsh–Kodaira formula).
=== Limit circle and limit point for singular equations ===
Let q(x) be a continuous real-valued function on (0, ∞) and let D be the second order differential operator
{\displaystyle Df=-f''+qf}
on (0, ∞). Fix a point c in (0, ∞) and, for complex λ, let
{\displaystyle \varphi _{\lambda },\theta _{\lambda }}
be the unique fundamental eigenfunctions of D on (0, ∞) satisfying
{\displaystyle (D-\lambda )\varphi _{\lambda }=0,\quad (D-\lambda )\theta _{\lambda }=0}
together with the initial conditions at c
{\displaystyle \varphi _{\lambda }(c)=1,\,\varphi _{\lambda }'(c)=0,\,\theta _{\lambda }(c)=0,\,\theta _{\lambda }'(c)=1.}
Then their Wronskian satisfies
{\displaystyle W(\varphi _{\lambda },\theta _{\lambda })=\varphi _{\lambda }\theta _{\lambda }'-\theta _{\lambda }\varphi _{\lambda }'\equiv 1,}
since it is constant and equal to 1 at c.
Let λ be non-real and 0 < x < ∞. If the complex number {\displaystyle \mu } is such that {\displaystyle f=\varphi +\mu \theta }
satisfies the boundary condition
{\displaystyle \cos \beta \,f(x)-\sin \beta \,f'(x)=0}
for some {\displaystyle \beta } (or, equivalently, {\displaystyle f'(x)/f(x)} is real) then, using integration by parts, one obtains
{\displaystyle \operatorname {Im} (\lambda )\int _{c}^{x}|\varphi +\mu \theta |^{2}=\operatorname {Im} (\mu ).}
Therefore, the set of μ satisfying this equation is not empty. This set is a circle in the complex μ-plane. Points μ in its interior are characterized by
{\displaystyle \int _{c}^{x}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}}
if x > c and by
{\displaystyle \int _{x}^{c}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}}
if x < c.
Let Dx be the closed disc enclosed by the circle. By definition these closed discs are nested and decrease as x approaches 0 or ∞. So in the limit, the circles tend either to a limit circle or a limit point at each end. If
{\displaystyle \mu } is a limit point or a point on the limit circle at 0 or ∞, then {\displaystyle f=\varphi +\mu \theta } is square integrable (L2) near 0 or ∞, since {\displaystyle \mu } lies in Dx for all x > c (in the ∞ case) and so
{\displaystyle \int _{c}^{x}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}}
is bounded independent of x. In particular:
there are always non-zero solutions of Df = λf which are square integrable near 0 resp. ∞;
in the limit circle case all solutions of Df = λf are square integrable near 0 resp. ∞.
The radius of the disc Dx can be calculated to be
{\displaystyle \left|{1 \over {2\operatorname {Im} (\lambda )\int _{c}^{x}|\theta |^{2}}}\right|}
and this implies that in the limit point case {\displaystyle \theta } cannot be square integrable near 0 resp. ∞. Therefore, we have a converse to the second statement above:
in the limit point case there is exactly one non-zero solution (up to scalar multiples) of Df = λf which is square integrable near 0 resp. ∞.
On the other hand, if Dg = λ′ g for another value λ′, then
{\displaystyle h(x)=g(x)-(\lambda ^{\prime }-\lambda )\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))g(y)\,dy}
satisfies Dh = λh, so that
{\displaystyle g(x)=c_{1}\varphi _{\lambda }+c_{2}\theta _{\lambda }+(\lambda ^{\prime }-\lambda )\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))g(y)\,dy.}
This formula may also be obtained directly by the variation of constant method from (D − λ)g = (λ′ − λ)g.
Using this to estimate g, it follows that
the limit point/limit circle behaviour at 0 or ∞ is independent of the choice of λ.
More generally if Dg = (λ – r) g for some function r(x), then
{\displaystyle g(x)=c_{1}\varphi _{\lambda }+c_{2}\theta _{\lambda }-\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))r(y)g(y)\,dy.}
From this it follows that
if r is continuous at 0, then D + r is limit point or limit circle at 0 precisely when D is,
so that in particular
if q(x) − a/x2 is continuous at 0, then D is limit point at 0 if and only if a ≥ 3/4.
Similarly
if r has a finite limit at ∞, then D + r is limit point or limit circle at ∞ precisely when D is,
so that in particular
if q has a finite limit at ∞, then D is limit point at ∞.
Many more elaborate criteria to be limit point or limit circle can be found in the mathematical literature.
=== Green's function (singular case) ===
Consider the differential operator
{\displaystyle D_{0}f=-(p_{0}f')'+q_{0}f}
on (0, ∞) with q0 positive and continuous on (0, ∞) and p0 continuously differentiable in [0, ∞), positive in (0, ∞) and p0(0) = 0.
Moreover, assume that after reduction to standard form D0 becomes the equivalent operator
{\displaystyle Df=-f''+qf}
on (0, ∞) where q has a finite limit at ∞. Thus
D is limit point at ∞.
At 0, D may be either limit circle or limit point. In either case there is an eigenfunction Φ0 with DΦ0 = 0 and Φ0 square integrable near 0. In the limit circle case, Φ0 determines a boundary condition at 0:
{\displaystyle W(f,\Phi _{0})(0)=0.}
For complex λ, let Φλ and Χλ satisfy
(D – λ)Φλ = 0, (D – λ)Χλ = 0
Χλ square integrable near infinity
Φλ square integrable at 0 if 0 is limit point
Φλ satisfies the boundary condition above if 0 is limit circle.
Let
{\displaystyle \omega (\lambda )=W(\Phi _{\lambda },\mathrm {X} _{\lambda }),}
a constant which vanishes precisely when Φλ and Χλ are proportional, i.e. λ is an eigenvalue of D for these boundary conditions.
On the other hand, this cannot occur if Im λ ≠ 0 or if λ is negative.
Indeed, if D f = λf with q0 – λ ≥ δ > 0, then by Green's formula (Df,f) = (f,Df), since W(f,f*) is constant. So λ must be real. If f is taken to be real-valued in the D0 realization, then for 0 < x < y
{\displaystyle [p_{0}ff']_{x}^{y}=\int _{x}^{y}(q_{0}-\lambda )|f|^{2}+p_{0}(f')^{2}.}
Since p0(0) = 0 and f is integrable near 0, p0f f′ must vanish at 0. Setting x = 0, it follows that f(y) f′(y) > 0, so that f2 is increasing, contradicting the square integrability of f near ∞.
Thus, adding a positive scalar to q, it may be assumed that
{\displaystyle \omega (\lambda )\neq 0~~{\text{ if }}\lambda \notin [1,\infty ).}
If ω(λ) ≠ 0, the Green's function Gλ(x,y) at λ is defined by
{\displaystyle G_{\lambda }(x,y)={\begin{cases}\Phi _{\lambda }(x)\mathrm {X} _{\lambda }(y)/\omega (\lambda )&(x\leq y),\\[1ex]\mathrm {X} _{\lambda }(x)\Phi _{\lambda }(y)/\omega (\lambda )&(x\geq y).\end{cases}}}
and is independent of the choice of Φλ and Χλ.
In the examples there will be a third "bad" eigenfunction Ψλ defined and holomorphic for λ not in [1, ∞) such that Ψλ satisfies the boundary conditions at neither 0 nor ∞. This means that for λ not in [1, ∞)
W(Φλ,Ψλ) is nowhere vanishing;
W(Χλ,Ψλ) is nowhere vanishing.
In this case Χλ is proportional to Φλ + m(λ) Ψλ, where
{\displaystyle m(\lambda )=-W(\Phi _{\lambda },\mathrm {X} _{\lambda })/W(\Psi _{\lambda },\mathrm {X} _{\lambda }).}
Let H1 be the space of square integrable continuous functions on (0, ∞) and let H0 be
the space of C2 functions f on (0, ∞) of compact support if D is limit point at 0
the space of C2 functions f on (0, ∞) with W(f, Φ0) = 0 at 0 and with f = 0 near ∞ if D is limit circle at 0.
Define T = G0 by
{\displaystyle (Tf)(x)=\int _{0}^{\infty }G_{0}(x,y)f(y)\,dy.}
Then T D = I on H0, D T = I on H1 and the operator D is bounded below on H0:
{\displaystyle (Df,f)\geq (f,f).}
Thus T is a self-adjoint bounded operator with 0 ≤ T ≤ I.
Formally T = D−1. The corresponding operators Gλ defined for λ not in [1, ∞) can be formally identified with
{\displaystyle (D-\lambda )^{-1}=T(I-\lambda T)^{-1}}
and satisfy Gλ (D – λ) = I on H0, (D – λ)Gλ = I on H1.
=== Spectral theorem and Titchmarsh–Kodaira formula ===
Kodaira gave a streamlined version of Weyl's original proof. (M.H. Stone had previously shown how part of Weyl's work could be simplified using von Neumann's spectral theorem.)
In fact for T = D−1 with 0 ≤ T ≤ I, the spectral projection E(λ) of T is defined by
{\displaystyle E(\lambda )=\chi _{[\lambda ^{-1},1]}(T)}
It is also the spectral projection of D corresponding to the interval [1, λ].
For f in H1 define
{\displaystyle f(x,\lambda )=(E(\lambda )f)(x).}
f(x, λ) may be regarded as a differentiable map into the space of functions ρ of bounded variation; or equivalently as a differentiable map
{\displaystyle x\mapsto (d_{\lambda }f)(x)}
into the Banach space E of bounded linear functionals dρ on C[α, β] for any compact subinterval [α, β] of [1, ∞).
The functionals (or measures) dλ f(x) satisfy the following E-valued second order ordinary differential equation:
{\displaystyle D(d_{\lambda }f)=\lambda \cdot d_{\lambda }f,}
with initial conditions at c in (0, ∞)
{\displaystyle (d_{\lambda }f)(c)=d_{\lambda }f(c,\cdot )=\mu ^{(0)},\quad (d_{\lambda }f)^{\prime }(c)=d_{\lambda }f_{x}(c,\cdot )=\mu ^{(1)}.}
If φλ and χλ are the special eigenfunctions adapted to c, then
{\displaystyle d_{\lambda }f(x)=\varphi _{\lambda }(x)\mu ^{(0)}+\chi _{\lambda }(x)\mu ^{(1)}.}
Moreover,
{\displaystyle \mu ^{(k)}=d_{\lambda }(f,\xi _{\lambda }^{(k)}),}
where
{\displaystyle \xi _{\lambda }^{(k)}=DE(\lambda )\eta ^{(k)},}
with
{\displaystyle \eta _{z}^{(0)}(y)=G_{z}(c,y),\,\,\,\,\eta _{z}^{(1)}(y)=\partial _{x}G_{z}(c,y),\,\,\,\,(z\notin [1,\infty )).}
(As the notation suggests, ξλ(0) and ξλ(1) do not depend on the choice of z.)
Setting
{\displaystyle \sigma _{ij}(\lambda )=(\xi _{\lambda }^{(i)},\xi _{\lambda }^{(j)}),}
it follows that
{\displaystyle d_{\lambda }(E(\lambda )\eta _{z}^{(i)},\eta _{z}^{(j)})=|\lambda -z|^{-2}\cdot d_{\lambda }\sigma _{ij}(\lambda ).}
On the other hand, there are holomorphic functions a(λ), b(λ) such that
φλ + a(λ) χλ is proportional to Φλ;
φλ + b(λ) χλ is proportional to Χλ.
Since W(φλ, χλ) = 1, the Green's function is given by
{\displaystyle G_{\lambda }(x,y)={\begin{cases}{\dfrac {(\varphi _{\lambda }(x)+a(\lambda )\chi _{\lambda }(x))(\varphi _{\lambda }(y)+b(\lambda )\chi _{\lambda }(y))}{b(\lambda )-a(\lambda )}}&(x\leq y),\\[1ex]{\dfrac {(\varphi _{\lambda }(x)+b(\lambda )\chi _{\lambda }(x))(\varphi _{\lambda }(y)+a(\lambda )\chi _{\lambda }(y))}{b(\lambda )-a(\lambda )}}&(y\leq x).\end{cases}}}
Direct calculation shows that
{\displaystyle (\eta _{z}^{(i)},\eta _{z}^{(j)})=\operatorname {Im} M_{ij}(z)/\operatorname {Im} z,}
where the so-called characteristic matrix Mij(z) is given by
{\displaystyle M_{00}(z)={\frac {a(z)b(z)}{a(z)-b(z)}},\,\,M_{01}(z)=M_{10}(z)={\frac {a(z)+b(z)}{2(a(z)-b(z))}},\,\,M_{11}(z)={\frac {1}{a(z)-b(z)}}.}
Hence
{\displaystyle \int _{-\infty }^{\infty }(\operatorname {Im} z)\cdot |\lambda -z|^{-2}\,d\sigma _{ij}(\lambda )=\operatorname {Im} M_{ij}(z),}
which immediately implies
{\displaystyle \sigma _{ij}(\lambda )=\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\frac {1}{\pi }}\int _{\delta }^{\lambda +\delta }\operatorname {Im} M_{ij}(t+i\varepsilon )\,dt.}
(This is a special case of the "Stieltjes inversion formula".)
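A minimal numerical sketch of the inversion (the discrete measure, ε and the window are my choices; 1/π is the conventional normalisation): it recovers the mass of a measure inside a window from boundary values of its Cauchy transform M(z) = ∫ dσ(s)/(s − z).

```python
import numpy as np

# Stieltjes inversion for a discrete measure (atoms, masses, eps and the window
# are assumptions of this sketch):  d sigma ~ (1/pi) Im M(t + i eps) dt.

atoms = np.array([1.0, 2.5, 4.0])        # support of d sigma
masses = np.array([0.2, 0.5, 0.3])       # point masses

eps = 1e-4
t = np.linspace(0.5, 3.0, 250_001)       # window containing the atoms at 1.0 and 2.5
im_M = (masses * eps / ((atoms - t[:, None])**2 + eps**2)).sum(axis=1)
print(np.trapz(im_M, t) / np.pi)         # ~0.7 = the sigma mass inside (0.5, 3.0)
```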
Setting ψλ(0) = φλ and ψλ(1) = χλ, it follows that
{\displaystyle (E(\mu )f)(x)=\sum _{i,j}\int _{0}^{\mu }\int _{0}^{\infty }\psi _{\lambda }^{(i)}(x)\psi _{\lambda }^{(j)}(y)f(y)\,dy\,d\sigma _{ij}(\lambda )=\int _{0}^{\mu }\int _{0}^{\infty }\Phi _{\lambda }(x)\Phi _{\lambda }(y)f(y)\,dy\,d\rho (\lambda ).}
This identity is equivalent to the spectral theorem and Titchmarsh–Kodaira formula.
== Application to the hypergeometric equation ==
The Mehler–Fock transform concerns the eigenfunction expansion associated with the Legendre differential operator D
{\displaystyle Df=-((x^{2}-1)f')'=-(x^{2}-1)f''-2xf'}
on (1, ∞). The eigenfunctions are the Legendre functions
{\displaystyle P_{-1/2+i{\sqrt {\lambda }}}(\cosh r)={1 \over 2\pi }\int _{0}^{2\pi }\left({\sin \theta +ie^{-r}\cos \theta \over \cos \theta -ie^{-r}\sin \theta }\right)^{{1 \over 2}+i{\sqrt {\lambda }}}\,d\theta }
with eigenvalue λ ≥ 0. The two Mehler–Fock transformations are
{\displaystyle Uf(\lambda )=\int _{1}^{\infty }f(x)\,P_{-1/2+i{\sqrt {\lambda }}}(x)\,dx}
and
{\displaystyle U^{-1}g(x)=\int _{0}^{\infty }g(\lambda )\,P_{-1/2+i{\sqrt {\lambda }}}(x)\,{1 \over 2}\tanh \pi {\sqrt {\lambda }}\,d\lambda .}
(Often this is written in terms of the variable τ = √λ.)
Mehler and Fock studied this differential operator because it arose as the radial component of the Laplacian on 2-dimensional hyperbolic space.
More generally, consider the group G = SU(1,1) consisting of complex matrices of the form
{\displaystyle {\begin{bmatrix}\alpha &\beta \\{\overline {\beta }}&{\overline {\alpha }}\end{bmatrix}}}
with determinant |α|2 − |β|2 = 1.
== Application to the hydrogen atom ==
== Generalisations and alternative approaches ==
A Weyl function can be defined at a singular endpoint a, giving rise to a singular version of Weyl–Titchmarsh–Kodaira theory. This applies for example to the case of radial Schrödinger operators
{\displaystyle Df=-f''+{\frac {\ell (\ell +1)}{x^{2}}}f+V(x)f,\qquad x\in (0,\infty )}
The whole theory can also be extended to the case where the coefficients are allowed to be measures.
== Gelfand–Levitan theory ==
== Notes ==
== References ==
=== Citations ===
=== Bibliography ===
In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.)
If {\displaystyle f} is a function, say a numerical function of a real number, and {\displaystyle M} is an operator, there is no particular reason why the expression {\displaystyle f(M)} should make sense. If it does, then we are no longer using {\displaystyle f} on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning. This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of {\displaystyle f(x)=x^{2}} and {\displaystyle M} an {\displaystyle n\times n} matrix. The idea of a functional calculus is to create a principled approach to this kind of overloading of the notation.
The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilates an operator {\displaystyle T}. This family is an ideal in the ring of polynomials. Furthermore, it is a nontrivial ideal: let {\displaystyle N} be the finite dimension of the algebra of matrices; then {\displaystyle \{I,T,T^{2},\ldots ,T^{N}\}} is linearly dependent. So
{\displaystyle \sum _{i=0}^{N}\alpha _{i}T^{i}=0}
for some scalars {\displaystyle \alpha _{i}}, not all equal to 0. This implies that the polynomial {\displaystyle \sum _{i=0}^{N}\alpha _{i}x^{i}} lies in the ideal. Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial {\displaystyle m}. Multiplying by a unit if necessary, we can choose {\displaystyle m} to be monic. When this is done, the polynomial {\displaystyle m} is precisely the minimal polynomial of {\displaystyle T}. This polynomial gives deep information about {\displaystyle T}. For instance, a scalar {\displaystyle \alpha } is an eigenvalue of {\displaystyle T} if and only if {\displaystyle \alpha } is a root of {\displaystyle m}. Also, sometimes {\displaystyle m} can be used to calculate the exponential of {\displaystyle T} efficiently.
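In the matrix case these statements are easy to check numerically. A minimal sketch (the 2×2 Jordan block is my example, not from the article): the characteristic polynomial annihilates T (Cayley–Hamilton), so the annihilating ideal is nontrivial; here the monic generator, the minimal polynomial, is (x − 2)², and its root 2 is the only eigenvalue.

```python
import numpy as np

# Annihilating polynomials of a matrix (the example matrix is an assumption of
# this sketch): verify Cayley-Hamilton and identify the minimal polynomial.

T = np.array([[2.0, 1.0],
              [0.0, 2.0]])

coeffs = np.poly(T)                       # characteristic polynomial: x^2 - 4x + 4
P = sum(c * np.linalg.matrix_power(T, k)
        for k, c in enumerate(reversed(coeffs)))
print(np.allclose(P, 0))                  # True: a nontrivial polynomial kills T

M = T - 2.0 * np.eye(2)
print(np.allclose(M @ M, 0), np.allclose(M, 0))   # True False: (x-2)^2 works, (x-2) does not
```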
The polynomial calculus is not as informative in the infinite-dimensional case. Consider the unilateral shift with the polynomial calculus; the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be.
== See also ==
Borel functional calculus – Branch of functional analysis
Continuous functional calculus – branch of functional analysis
Direct integral – Generalization of the concept of direct sum in mathematics
Holomorphic functional calculus
== References ==
"Functional calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
== External links ==
Media related to Functional calculus at Wikimedia Commons
Functional linguistics is an approach to the study of language characterized by taking systematically into account the speaker's and the hearer's side, and the communicative needs of the speaker and of the given language community. Linguistic functionalism emerged in the 1920s and 1930s from Ferdinand de Saussure's systematic structuralist approach to language (1916).
Functionalism sees functionality of language and its elements to be the key to understanding linguistic processes and structures. Functional theories of language propose that since language is fundamentally a tool, it is reasonable to assume that its structures are best analyzed and understood with reference to the functions they carry out. These include the tasks of conveying meaning and contextual information.
Functional theories of grammar belong to structural and, broadly, humanistic linguistics, considering language as being created by the community, and linguistics as relating to systems theory. Functional theories take into account the context where linguistic elements are used and study the way they are instrumentally useful or functional in the given environment. This means that pragmatics is given an explanatory role, along with semantics. The formal relations between linguistic elements are assumed to be functionally-motivated. Functionalism is sometimes contrasted with formalism, but this does not exclude functional theories from creating grammatical descriptions that are generative in the sense of formulating rules that distinguish grammatical or well-formed elements from ungrammatical elements.
Simon Dik characterizes the functional approach as follows:
In the functional paradigm a language is in the first place conceptualized as an instrument of social interaction among human beings, used with the intention of establishing communicative relationships. Within this paradigm one attempts to reveal the instrumentality of language with respect to what people do and achieve with it in social interaction. A natural language, in other words, is seen as an integrated part of the communicative competence of the natural language user. (2, p. 3)
Functional theories of grammar can be divided on the basis of geographical origin or base (though it simplifies many aspects): European functionalist theories include Functional (discourse) grammar and Systemic functional grammar (among others), while American functionalist theories include Role and reference grammar and West Coast functionalism. Since the 1970s, studies by American functional linguists in languages other than English from Asia, Africa, Australia and the Americas (like Mandarin Chinese and Japanese), led to insights about the interaction of form and function, and the discovery of functional motivations for grammatical phenomena, which apply also to the English language.
== History ==
=== 1920s to 1970s: early developments ===
The establishment of functional linguistics follows from a shift from structural to functional explanation in 1920s sociology. Prague, at the crossroads of western European structuralism and Russian formalism, became an important centre for functional linguistics.
The shift was related to the organic analogy exploited by Émile Durkheim and Ferdinand de Saussure. Saussure had argued in his Course in General Linguistics that the 'organism' of language should be studied anatomically, and not in respect with its environment, to avoid the false conclusions made by August Schleicher and other social Darwinists. The post-Saussurean functionalist movement sought ways to account for the 'adaptation' of language to its environment while still remaining strictly anti-Darwinian.
Russian émigrés Roman Jakobson and Nikolai Trubetzkoy disseminated insights of Russian grammarians in Prague, but also the evolutionary theory of Lev Berg, arguing for teleology of language change. As Berg's theory failed to gain popularity outside the Soviet Union, the organic aspect of functionalism diminished, and Jakobson adopted a standard model of functional explanation from Ernst Nagel's philosophy of science. It is, then, the same mode of explanation as in biology and social sciences; but it became emphasised that the word 'adaptation' is not to be understood in linguistics in the same meaning as in biology.
Work on functionalist linguistics by the Prague school resumed in the 1950s after a hiatus caused by World War II and Stalinism. In North America, Joseph Greenberg published his 1963 seminal paper on language universals that not only revived the field of linguistic typology, but also the approach of seeking functional explanations for typological patterns. Greenberg's approach has been highly influential for the movement of North American functionalism that formed from the early 1970s, which has since been characterized by a profound interest in typology. Greenberg's paper was influenced by the Prague School and in particular it was written in response to Jakobson's call for an 'implicational typology'. While North American functionalism was initially influenced by the functionalism of the Prague school, such influence has been later discontinued.
=== 1980s onward: name controversy ===
The term 'functionalism' or 'functional linguistics' became controversial in the 1980s with the rise of a new wave of evolutionary linguistics. Johanna Nichols argued that the meaning of 'functionalism' had changed, that the terms formalism and functionalism should be taken as referring to generative grammar and to the emergent linguistics of Paul Hopper and Sandra Thompson, respectively, and that the term structuralism should be reserved for frameworks derived from the Prague linguistic circle. William Croft subsequently argued that it is a fact agreed on by all linguists that form does not follow from function. He proposed that functionalism should be understood as autonomous linguistics, opposing the idea that language arises functionally from the need to express meaning:
"The notion of autonomy emerges from an undeniable fact of all languages, 'the curious lack of accord ... between form and function'"
Croft explains that, until the 1970s, functionalism related to semantics and pragmatics, or the 'semiotic function'. But around the 1980s the notion of function shifted from semiotics to "external function", proposing a neo-Darwinian view of language change as based on natural selection. Croft proposes that 'structuralism' and 'formalism' should both be taken as referring to generative grammar, and 'functionalism' to usage-based and cognitive linguistics, while neither André Martinet's functionalism, Systemic functional linguistics, nor Functional discourse grammar properly represents any of the three concepts.
The situation was further complicated by the arrival of evolutionary psychological thinking in linguistics, with Steven Pinker, Ray Jackendoff and others hypothesising that the human language faculty, or universal grammar, could have developed through normal evolutionary processes, thus defending an adaptational explanation of the origin and evolution of the language faculty. This brought about a functionalism versus formalism debate, with Frederick Newmeyer arguing that the evolutionary psychological approach to linguistics should also be considered functionalist.
The terms functionalism and functional linguistics nonetheless continue to be used by the Prague linguistic circle and its derivatives, including SILF, Danish functional school, Systemic functional linguistics and Functional discourse grammar; and the American framework Role and reference grammar which sees itself as the midway between formal and functional linguistics.
== Functional analysis ==
Since the earliest work of the Prague School, language has been conceived as a functional system, where the term system refers back to Saussure's structuralist approach. The term function seems to have been introduced by Vilém Mathesius, possibly under the influence of work in sociology. Functional analysis is the examination of how linguistic elements function on different layers of linguistic structure, and how these levels interact with each other. Functions exist on all levels of grammar, even in phonology, where the phoneme has the function of distinguishing between lexical material.
Syntactic functions: (e.g. Subject and Object), defining different perspectives in the presentation of a linguistic expression.
Semantic functions: (Agent, Patient, Recipient, etc.), describing the role of participants in states of affairs or actions expressed.
Pragmatic functions: (Theme and Rheme, Topic and Focus, Predicate), defining the informational status of constituents, determined by the pragmatic context of the verbal interaction.
== Functional explanation ==
In the functional mode of explanation, a linguistic structure is explained with an appeal to its function. Functional linguistics takes as its starting point the notion that communication is the primary purpose of language. Therefore, general phonological, morphosyntactic and semantic phenomena are thought of as being motivated by the needs of people to communicate successfully with each other. Thus, the perspective is taken that the organisation of language reflects its use value.
Many prominent functionalist approaches, like Role and reference grammar and Functional discourse grammar, are also typologically oriented: they aim their analysis cross-linguistically, rather than only at a single language like English (as is typical of formalist/generativist approaches).
=== Economy ===
The concept of economy is metaphorically transferred from a social or economic context to a linguistic level. It is considered a regulating force in language maintenance: by controlling the impact of language change and of internal and external conflicts of the system, the economy principle ensures that systemic coherence is maintained without increasing energy cost. This is why all human languages, no matter how different they are, have high functional value, based on a compromise between the competing motivations of ease for the speaker (simplicity or inertia) and ease for the hearer (clarity or energeia).
The principle of economy was elaborated by the French structural–functional linguist André Martinet. Martinet's concept is similar to Zipf's principle of least effort; although the idea had been discussed by various linguists in the late 19th and early 20th century. The functionalist concept of economy is not to be confused with economy in generative grammar.
=== Information structure ===
Some key adaptations of functional explanation are found in the study of information structure. Based on earlier linguists' work, Prague Circle linguists Vilém Mathesius, Jan Firbas and others elaborated the concept of theme–rheme relations (topic and comment) to study pragmatic concepts such as sentence focus, and givenness of information, to successfully explain word-order variation. The method has been used widely in linguistics to uncover word-order patterns in the languages of the world. Its importance, however, is limited to within-language variation, with no apparent explanation of cross-linguistic word order tendencies.
=== Functional principles ===
Several principles from pragmatics have been proposed as functional explanations of linguistic structures, often in a typological perspective.
Theme first: languages prefer placing the theme before the rheme; and the subject typically carries the role of the theme; therefore, most languages have subject before object in their basic word order.
Animate first: similarly, since subjects are more likely to be animate, they are more likely to precede the object.
Given before new: already established information comes before new information.
First things first: more important or more urgent information comes before other information.
Lightness: light (short) constituents are ordered before heavy (long) constituents.
Uniformity: word-order choices are generalised. For example, languages tend to have either prepositions or postpositions; and not both equally.
Functional load: elements within a linguistic sub-system are made distinct to avoid confusion.
Orientation: role-indicating particles including adpositions and subordinators are oriented to their semantic head.
== Frameworks ==
There are several distinct grammatical frameworks that employ a functional approach.
The structuralist functionalism of the Prague school was the earliest functionalist framework developed in the 1920s.
André Martinet's Functional Syntax, with two major books, A functional view of language (1962) and Studies in Functional Syntax (1975). Martinet is one of the most famous French linguists and can be regarded as the father of French functionalism. Founded by Martinet and his colleagues, SILF (Société internationale de linguistique fonctionnelle) is an international organisation of functional linguistics which operates mainly in French.
Simon Dik's Functional Grammar, originally developed in the 1970s and 80s, has been influential and inspired many other functional theories. It has been developed into Functional Discourse Grammar by the linguist Kees Hengeveld.
Michael Halliday's systemic functional grammar (SFG) argues that the explanation of how language works "needed to be grounded in a functional analysis, since language had evolved in the process of carrying out certain critical functions as human beings interacted with their ... 'eco-social' environment". Halliday draws on the work of Bühler and Malinowski, as well as his doctoral supervisor J.R. Firth. Notably, Halliday's former student Robin Fawcett has developed a version of SFG called the "Cardiff Grammar" which is distinct from the "Sydney Grammar" as developed by the later Halliday and his colleagues in Australia. The link between Firthian and Hallidayan linguistics and the philosophy of Alfred North Whitehead also deserves a mention.
Role and reference grammar, developed by Robert Van Valin, employs a functional analytical framework with a somewhat formal mode of description. In RRG, the description of a sentence in a particular language is formulated in terms of its semantic structure and communicative functions, as well as the grammatical procedures used to express these meanings.
Danish functional grammar combines Saussurean/Hjelmslevian structuralism with a focus on pragmatics and discourse.
Interactional linguistics, based on Conversation Analysis, considers linguistic structures as related to the functions of e.g. action and turn-taking in interaction.
Construction grammar is a family of different theories some of which may be considered functional, such as Croft's Radical Construction Grammar.
Relational Network Theory (RNT) or Neurocognitive Linguistics (NCL), originally developed by Sydney Lamb, may be considered functionalist in the sense of being a usage-based model. In RNT, the description of linguistic structure is formulated as networks of realizational relationships, such that all linguistic units are defined only by what they realize and are realized by. RNT networks have been hypothesized to be implemented by cortical minicolumns in the human neocortex.
== See also ==
Theory of language
Functional grammar (disambiguation)
Thematic relation
Morphosyntactic alignment
Linguistic typology
== References ==
== Further reading ==
Van Valin Jr, R. D. (2003) Functional linguistics, ch. 13 in The handbook of linguistics, pp. 319–336. | Wikipedia/Functional_analysis_(linguistics) |
In the theory of von Neumann algebras, a part of the mathematical field of functional analysis, Tomita–Takesaki theory is a method for constructing modular automorphisms of von Neumann algebras from the polar decomposition of a certain involution. It is essential for the theory of type III factors, and has led to a good structure theory for these previously intractable objects.
The theory was introduced by Minoru Tomita (1967), but his work was hard to follow and mostly unpublished, and little notice was taken of it until Masamichi Takesaki (1970) wrote an account of Tomita's theory.
== Modular automorphisms of a state ==
Suppose that M is a von Neumann algebra acting on a Hilbert space H, and Ω is a cyclic and separating vector of H of norm 1. (Cyclic means that MΩ is dense in H, and separating means that the map from M to MΩ is injective.) We write {\displaystyle \phi } for the vector state {\displaystyle \phi (x)=(x\Omega ,\Omega )} of M, so that H is constructed from {\displaystyle \phi } using the Gelfand–Naimark–Segal construction. Since Ω is separating, {\displaystyle \phi } is faithful.
We can define a (not necessarily bounded) antilinear operator S0 on H with dense domain MΩ by setting {\displaystyle S_{0}(m\Omega )=m^{*}\Omega } for all m in M, and similarly we can define a (not necessarily bounded) antilinear operator F0 on H with dense domain M′Ω by setting {\displaystyle F_{0}(m\Omega )=m^{*}\Omega } for m in M′, where M′ is the commutant of M.
These operators are closable, and we denote their closures by S and F = S*. They have polar decompositions
{\displaystyle {\begin{aligned}S=J|S|&=J\Delta ^{\frac {1}{2}}=\Delta ^{-{\frac {1}{2}}}J\\F=J|F|&=J\Delta ^{-{\frac {1}{2}}}=\Delta ^{\frac {1}{2}}J\end{aligned}}}
where {\displaystyle J=J^{-1}=J^{*}} is an antilinear isometry of H called the modular conjugation and {\displaystyle \Delta =S^{*}S=FS} is a positive (hence, self-adjoint) and densely defined operator called the modular operator.
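These objects can be computed explicitly in finite dimensions. The following Python sketch (our own illustration, not part of the standard sources; it assumes numpy and scipy, and the helper names are hypothetical) takes M = Mn(C) acting by left multiplication on the Hilbert space of n × n matrices with the Hilbert–Schmidt inner product, and Ω = ρ^{1/2} for a full-rank density matrix ρ. In this model Jξ = ξ* and Δξ = ρξρ^{-1}, and the polar decomposition S = JΔ^{1/2} can be checked numerically.

import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
n = 3

# A full-rank density matrix rho; Omega = rho^{1/2} is cyclic and separating
# for M = M_n(C) acting by left multiplication on (M_n(C), <a,b> = Tr(a* b)).
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = a @ a.conj().T + np.eye(n)
rho /= np.trace(rho).real
omega = fractional_matrix_power(rho, 0.5)

def S(xi):            # closure of S_0: S(x Omega) = x* Omega
    x = xi @ np.linalg.inv(omega)
    return x.conj().T @ omega

def J(xi):            # modular conjugation: xi -> xi*
    return xi.conj().T

def Delta(xi):        # modular operator: xi -> rho xi rho^{-1}
    return rho @ xi @ np.linalg.inv(rho)

def Delta_half(xi):   # Delta^{1/2}: xi -> rho^{1/2} xi rho^{-1/2}
    return omega @ xi @ np.linalg.inv(omega)

xi = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
print(np.allclose(S(xi), J(Delta_half(xi))))                         # S = J Delta^{1/2}
print(np.allclose(J(Delta(J(xi))), np.linalg.inv(rho) @ xi @ rho))   # J Delta J = Delta^{-1}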
=== Commutation theorem ===
The main result of Tomita–Takesaki theory states that
{\displaystyle \Delta ^{it}M\Delta ^{-it}=M}
for all t and that
{\displaystyle JMJ=M',}
the commutant of M.
There is a 1-parameter group of modular automorphisms {\displaystyle \sigma _{t}^{\phi }} of M associated with the state {\displaystyle \phi }, defined by
{\displaystyle \sigma _{t}^{\phi }(x)=\Delta ^{it}x\Delta ^{-it}.}
The modular conjugation operator J and the 1-parameter unitary group {\displaystyle \Delta ^{it}} satisfy
{\displaystyle J\Delta ^{it}J=\Delta ^{it}}
and
{\displaystyle J\Delta J=\Delta ^{-1}.}
== The Connes cocycle ==
The modular automorphism group of a von Neumann algebra M depends on the choice of state φ. Connes discovered that changing the state does not change the image of the modular automorphism in the outer automorphism group of M. More precisely, given two faithful states φ and ψ of M, we can find unitary elements ut of M for all real t such that
{\displaystyle \sigma _{t}^{\psi }(x)=u_{t}\,\sigma _{t}^{\phi }(x)\,u_{t}^{-1}}
so that the modular automorphisms differ by inner automorphisms, and moreover ut satisfies the 1-cocycle condition
{\displaystyle u_{s+t}=u_{s}\,\sigma _{s}^{\phi }(u_{t}).}
In particular, there is a canonical homomorphism from the additive group of reals to the outer automorphism group of M, that is independent of the choice of faithful state.
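In the finite-dimensional model used above, the state with density matrix ρ has modular group σtφ(x) = ρ^{it} x ρ^{-it}, and the cocycle relating two states is u_t = ρψ^{it} ρφ^{-it}. The following Python sketch (our own illustration, assuming numpy; the names are hypothetical helpers) checks both displayed identities numerically.

import numpy as np

n = 3

def random_density(seed):
    r = np.random.default_rng(seed)
    a = r.normal(size=(n, n)) + 1j * r.normal(size=(n, n))
    rho = a @ a.conj().T + np.eye(n)
    return rho / np.trace(rho).real

rho_phi, rho_psi = random_density(2), random_density(3)

def power_it(rho, t):     # rho^{it} via the spectral decomposition
    w, v = np.linalg.eigh(rho)
    return (v * np.exp(1j * t * np.log(w))) @ v.conj().T

def sigma(rho, t, x):     # modular automorphism: sigma_t(x) = rho^{it} x rho^{-it}
    u = power_it(rho, t)
    return u @ x @ u.conj().T

def u(t):                 # Connes cocycle (D psi : D phi)_t = rho_psi^{it} rho_phi^{-it}
    return power_it(rho_psi, t) @ power_it(rho_phi, -t)

rng = np.random.default_rng(4)
x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
s, t = 0.7, -1.3

print(np.allclose(sigma(rho_psi, t, x),
                  u(t) @ sigma(rho_phi, t, x) @ np.linalg.inv(u(t))))  # inner conjugation
print(np.allclose(u(s + t), u(s) @ sigma(rho_phi, s, u(t))))           # 1-cocycle condition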
== KMS states ==
The term KMS state comes from the Kubo–Martin–Schwinger condition in quantum statistical mechanics.
A KMS state {\displaystyle \phi } on a von Neumann algebra M with a given 1-parameter group of automorphisms αt is a state fixed by the automorphisms such that for every pair of elements A, B of M there is a bounded continuous function F on the strip 0 ≤ Im(t) ≤ 1, holomorphic in the interior, such that
{\displaystyle {\begin{aligned}F(t)&=\phi (A\alpha _{t}(B)),\\F(t+i)&=\phi (\alpha _{t}(B)A).\end{aligned}}}
Takesaki and Winnink showed that any (faithful semifinite normal) state {\displaystyle \phi } is a KMS state for the 1-parameter group of modular automorphisms {\displaystyle \sigma _{-t}^{\phi }}. Moreover, this characterizes the modular automorphisms of {\displaystyle \phi }.
(There is often an extra parameter, denoted by β, used in the theory of KMS states. In the description above this has been normalized to be 1 by rescaling the 1-parameter family of automorphisms.)
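For a concrete check, one can take M = Mn(C) with a Hermitian Hamiltonian H, the Gibbs state φ(x) = Tr(e^{-H}x)/Tr(e^{-H}) and αt(x) = e^{itH} x e^{-itH}; then F extends analytically to the strip and the KMS boundary condition holds with β = 1. The following Python sketch is our own minimal illustration, assuming numpy and scipy.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n = 3

h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = h + h.conj().T                       # Hamiltonian
Z = np.trace(expm(-H))

def phi(x):                              # Gibbs state at inverse temperature beta = 1
    return np.trace(expm(-H) @ x) / Z

def alpha(z, x):                         # alpha_z(x) = e^{izH} x e^{-izH}, z may be complex
    return expm(1j * z * H) @ x @ expm(-1j * z * H)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def F(z):                                # analytic continuation of t -> phi(A alpha_t(B))
    return phi(A @ alpha(z, B))

t = 0.6
print(np.allclose(F(t + 1j), phi(alpha(t, B) @ A)))   # KMS: F(t + i) = phi(alpha_t(B) A)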
== Structure of type III factors ==
We have seen above that there is a canonical homomorphism δ from the group of reals to the outer automorphism group of a von Neumann algebra, given by modular automorphisms. The kernel of δ is an important invariant of the algebra. For simplicity assume that the von Neumann algebra is a factor. Then the possibilities for the kernel of δ are:
The whole real line. In this case δ is trivial and the factor is type I or II.
A proper dense subgroup of the real line. Then the factor is called a factor of type III0.
A discrete subgroup generated by some x > 0. Then the factor is called a factor of type IIIλ with 0 < λ = exp(−2π/x) < 1, or sometimes a Powers factor.
The trivial group 0. Then the factor is called a factor of type III1. (This is in some sense the generic case.)
== Left Hilbert algebras ==
The main results of Tomita–Takesaki theory were proved using left and right Hilbert algebras.
A left Hilbert algebra is an algebra {\displaystyle {\mathfrak {A}}} with involution x → x♯ and an inner product (·,·) such that
Left multiplication by a fixed a ∈ {\displaystyle {\mathfrak {A}}} is a bounded operator.
♯ is the adjoint; in other words (xy, z) = (y, x♯z).
The involution ♯ is preclosed.
The subalgebra spanned by all products xy is dense in {\displaystyle {\mathfrak {A}}} with respect to the inner product.
A right Hilbert algebra is defined similarly (with an involution ♭) with left and right reversed in the conditions above.
A (unimodular) Hilbert algebra is a left Hilbert algebra for which ♯ is an isometry, in other words (x, y) = (y♯, x♯). In this case the involution is denoted by x* instead of x♯ and coincides with the modular conjugation J. This is the classical special case of Hilbert algebras: the modular operator is trivial and the corresponding von Neumann algebra is a direct sum of type I and type II von Neumann algebras.
Examples:
If M is a von Neumann algebra acting on a Hilbert space H with a cyclic separating unit vector v, then put {\displaystyle {\mathfrak {A}}} = Mv and define (xv)(yv) = xyv and (xv)♯ = x*v. The vector v is the identity of {\displaystyle {\mathfrak {A}}}, so {\displaystyle {\mathfrak {A}}} is a unital left Hilbert algebra.
If G is a locally compact group, then the vector space of all continuous complex functions on G with compact support is a right Hilbert algebra if multiplication is given by convolution, and x♭(g) = x(g−1)*.
For a fixed left Hilbert algebra {\displaystyle {\mathfrak {A}}}, let H be its Hilbert space completion. Left multiplication by x yields a bounded operator λ(x) on H and hence a *-homomorphism λ of {\displaystyle {\mathfrak {A}}} into B(H). The *-algebra {\displaystyle \lambda ({\mathfrak {A}})} generates the von Neumann algebra
{\displaystyle {\cal {R}}_{\lambda }({\mathfrak {A}})=\lambda ({\mathfrak {A}})^{\prime \prime }.}
Tomita's key discovery concerned the remarkable properties of the closure of the operator ♯ and its polar decomposition. If S denotes this closure (a conjugate-linear unbounded operator), let Δ = S* S, a positive unbounded operator. Let S = J Δ1/2 denote its polar decomposition. Then J is a conjugate-linear isometry satisfying
{\displaystyle S=S^{-1},\qquad J^{2}=I,\qquad J\Delta J=\Delta ^{-1},\qquad S=\Delta ^{-1/2}J.}
Δ is called the modular operator and J the modular conjugation.
In Takesaki (2003, pp. 5–17), there is a self-contained proof of the main commutation theorem of Tomita–Takesaki:
{\displaystyle \Delta ^{it}{\cal {R}}_{\lambda }({\mathfrak {A}})\Delta ^{-it}={\cal {R}}_{\lambda }({\mathfrak {A}})\quad {\text{and}}\quad J{\cal {R}}_{\lambda }({\mathfrak {A}})J={\cal {R}}_{\lambda }({\mathfrak {A}})^{\prime }.}
The proof hinges on evaluating the operator integral:
{\displaystyle e^{s/2}\Delta ^{1/2}\,(\Delta +e^{s})^{-1}=\int _{-\infty }^{\infty }{\frac {e^{-ist}}{e^{\pi t}+e^{-\pi t}}}\,\Delta ^{it}\,\mathrm {d} t.}
By the spectral theorem, this is equivalent to proving the identity with e^{x} replacing Δ; the scalar identity then follows by contour integration. It reflects the well-known fact that, with a suitable normalisation, the function {\displaystyle \operatorname {sech} } is its own Fourier transform.
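Concretely, with Δ replaced by the scalar e^{x} the identity reads e^{(x+s)/2}/(e^{x}+e^{s}) = ∫ e^{i(x-s)t}/(e^{πt}+e^{-πt}) dt = (1/2) sech((x-s)/2). This can be confirmed numerically; the sketch below is our own illustration, assuming numpy and scipy (the imaginary part of the integrand is odd, so only the cosine part is integrated).

import numpy as np
from scipy.integrate import quad

def lhs(x, s):
    return np.exp((x + s) / 2) / (np.exp(x) + np.exp(s))

def rhs(x, s):
    # e^{i(x-s)t} contributes only its cosine part: the denominator is even in t.
    integrand = lambda t: np.cos((x - s) * t) / (np.exp(np.pi * t) + np.exp(-np.pi * t))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

for x, s in [(0.3, -1.1), (2.0, 0.5)]:
    print(np.isclose(lhs(x, s), rhs(x, s)),
          np.isclose(lhs(x, s), 0.5 / np.cosh((x - s) / 2)))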
== Notes ==
== References ==
Borchers, H. J. (2000), "On revolutionizing quantum field theory with Tomita's modular theory", Journal of Mathematical Physics, 41 (6): 3604–3673, Bibcode:2000JMP....41.3604B, doi:10.1063/1.533323, MR 1768633
Longer version with proofs
Bratteli, O.; Robinson, D.W. (1987), Operator Algebras and Quantum Statistical Mechanics 1, Second Edition, Springer-Verlag, ISBN 3-540-17093-6
Connes, Alain (1973), "Une classification des facteurs de type III" (PDF), Annales Scientifiques de l'École Normale Supérieure, 4e série, 6 (2): 133–252, doi:10.24033/asens.1247
Connes, Alain (1994), Non-commutative geometry, Boston, MA: Academic Press, ISBN 978-0-12-185860-5
Dixmier, Jacques (1981), von Neumann algebras, North-Holland Mathematical Library, vol. 27, translated by F. Jellet, Amsterdam: North-Holland, ISBN 978-0-444-86308-9, MR 0641217
Inoue, A. (2001) [1994], "Tomita–Takesaki theory", Encyclopedia of Mathematics, EMS Press
Longo, Roberto (1978), "A simple proof of the existence of modular automorphisms in approximately finite-dimensional von Neumann algebras", Pacific J. Math., 75: 199–205, doi:10.2140/pjm.1978.75.199, hdl:2108/19146
Nakano, Hidegorô (1950), "Hilbert algebras", The Tohoku Mathematical Journal, Second Series, 2: 4–23, doi:10.2748/tmj/1178245666, MR 0041362
Pedersen, G.K. (1979), C* algebras and their automorphism groups, London Mathematical Society Monographs, vol. 14, Academic Press, ISBN 0-12-549450-5
Rieffel, M.A.; van Daele, A. (1977), "A bounded operator approach to Tomita–Takesaki theory", Pacific J. Math., 69: 187–221, doi:10.2140/pjm.1977.69.187
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Shtern, A.I. (2001) [1994], "Hilbert algebra", Encyclopedia of Mathematics, EMS Press
Summers, S. J. (2006), "Tomita–Takesaki Modular Theory", in Françoise, Jean-Pierre; Naber, Gregory L.; Tsun, Tsou Sheung (eds.), Encyclopedia of mathematical physics, Academic Press/Elsevier Science, Oxford, arXiv:math-ph/0511034, Bibcode:2005math.ph..11034S, ISBN 978-0-12-512660-1, MR 2238867
Sunder, V. S. (1987), An Invitation to von Neumann Algebras, Universitext, Springer, doi:10.1007/978-1-4613-8669-8, ISBN 978-0-387-96356-3
Strătilă, Şerban; Zsidó, László (1979), Lectures on von Neumann algebras. Revision of the 1975 original., translated by Silviu Teleman, Tunbridge Wells: Abacus Press, ISBN 0-85626-109-2
Strătilă, Şerban (1981), Modular theory in operator algebras, translated by Şerban Strătilă, Tunbridge Wells: Abacus Press, ISBN 0-85626-190-4
Takesaki, M. (1970), Tomita's theory of modular Hilbert algebras and its applications, Lecture Notes Math., vol. 128, Springer, doi:10.1007/BFb0065832, ISBN 978-3-540-04917-3
Takesaki, Masamichi (2003), Theory of operator algebras. II, Encyclopaedia of Mathematical Sciences, vol. 125, Berlin, New York: Springer-Verlag, ISBN 978-3-540-42914-2, MR 1943006
Tomita, Minoru (1967), "On canonical forms of von Neumann algebras", Fifth Functional Analysis Sympos. (Tôhoku Univ., Sendai, 1967) (in Japanese), Tôhoku Univ., Sendai: Math. Inst., pp. 101–102, MR 0284822
Tomita, M. (1967), Quasi-standard von Neumann algebras, mimographed note, unpublished | Wikipedia/Tomita–Takesaki_theory |
In mathematics, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars (often, the real numbers or the complex numbers).
If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted Hom(V, k), or, when the field k is understood,
{\displaystyle V^{*}}; other notations are also used, such as {\displaystyle V'}, {\displaystyle V^{\#}} or {\displaystyle V^{\vee }.}
When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).
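For instance, the following minimal numpy sketch (our own illustration) represents a functional on R3 as a row vector and checks linearity:

import numpy as np

a = np.array([[2.0, -1.0, 3.0]])      # a linear functional on R^3, as a row vector
x = np.array([[1.0], [4.0], [0.5]])   # a vector, as a column
y = np.array([[0.0], [2.0], [-1.0]])

print(a @ x)                          # f(x) = 2*1 - 1*4 + 3*0.5 = -0.5
print(np.allclose(a @ (3.0 * x + y), 3.0 * (a @ x) + a @ y))   # linearity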
== Examples ==
The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of k).
Indexing into a vector: The second element of a three-vector is given by the one-form {\displaystyle [0,1,0].} That is, the second element of {\displaystyle [x,y,z]} is {\displaystyle [0,1,0]\cdot [x,y,z]=y.}
Mean: The mean element of an {\displaystyle n}-vector is given by the one-form {\displaystyle \left[1/n,1/n,\ldots ,1/n\right].} That is, {\displaystyle \operatorname {mean} (v)=\left[1/n,1/n,\ldots ,1/n\right]\cdot v.}
Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location.
Net present value of a net cash flow, {\displaystyle R(t),} is given by the one-form {\displaystyle w(t)=(1+i)^{-t}} where {\displaystyle i} is the discount rate. That is, {\displaystyle \mathrm {NPV} (R(t))=\langle w,R\rangle =\int _{t=0}^{\infty }{\frac {R(t)}{(1+i)^{t}}}\,dt.}
=== Linear functionals in Rn ===
Suppose that vectors in the real coordinate space {\displaystyle \mathbb {R} ^{n}} are represented as column vectors
{\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}
For each row vector {\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}} there is a linear functional {\displaystyle f_{\mathbf {a} }} defined by
{\displaystyle f_{\mathbf {a} }(\mathbf {x} )=a_{1}x_{1}+\cdots +a_{n}x_{n},}
and each linear functional can be expressed in this form.
This can be interpreted as either the matrix product or the dot product of the row vector {\displaystyle \mathbf {a} } and the column vector {\displaystyle \mathbf {x} }:
{\displaystyle f_{\mathbf {a} }(\mathbf {x} )=\mathbf {a} \cdot \mathbf {x} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.}
=== Trace of a square matrix ===
The trace {\displaystyle \operatorname {tr} (A)} of a square matrix {\displaystyle A} is the sum of all elements on its main diagonal. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a vector space from the set of all {\displaystyle n\times n} matrices. The trace is a linear functional on this space because {\displaystyle \operatorname {tr} (sA)=s\operatorname {tr} (A)} and {\displaystyle \operatorname {tr} (A+B)=\operatorname {tr} (A)+\operatorname {tr} (B)} for all scalars {\displaystyle s} and all {\displaystyle n\times n} matrices {\displaystyle A{\text{ and }}B.}
=== (Definite) Integration ===
Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral
{\displaystyle I(f)=\int _{a}^{b}f(x)\,dx}
is a linear functional from the vector space {\displaystyle C[a,b]} of continuous functions on the interval {\displaystyle [a,b]} to the real numbers. The linearity of {\displaystyle I} follows from the standard facts about the integral:
{\displaystyle {\begin{aligned}I(f+g)&=\int _{a}^{b}[f(x)+g(x)]\,dx=\int _{a}^{b}f(x)\,dx+\int _{a}^{b}g(x)\,dx=I(f)+I(g)\\I(\alpha f)&=\int _{a}^{b}\alpha f(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx=\alpha I(f).\end{aligned}}}
=== Evaluation ===
Let {\displaystyle P_{n}} denote the vector space of real-valued polynomial functions of degree {\displaystyle \leq n} defined on an interval {\displaystyle [a,b].} If {\displaystyle c\in [a,b],} then let {\displaystyle \operatorname {ev} _{c}:P_{n}\to \mathbb {R} } be the evaluation functional {\displaystyle \operatorname {ev} _{c}f=f(c).} The mapping {\displaystyle f\mapsto f(c)} is linear since
{\displaystyle {\begin{aligned}(f+g)(c)&=f(c)+g(c)\\(\alpha f)(c)&=\alpha f(c).\end{aligned}}}
If {\displaystyle x_{0},\ldots ,x_{n}} are {\displaystyle n+1} distinct points in {\displaystyle [a,b],} then the evaluation functionals {\displaystyle \operatorname {ev} _{x_{i}},} {\displaystyle i=0,\ldots ,n} form a basis of the dual space of {\displaystyle P_{n}} (Lax (1996) proves this last fact using Lagrange interpolation).
=== Non-example ===
A function {\displaystyle f} having the equation of a line {\displaystyle f(x)=a+rx} with {\displaystyle a\neq 0} (for example, {\displaystyle f(x)=1+2x}) is not a linear functional on {\displaystyle \mathbb {R} }, since it is not linear. It is, however, affine-linear.
== Visualization ==
In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).
== Applications ==
=== Application to quadrature ===
If {\displaystyle x_{0},\ldots ,x_{n}} are {\displaystyle n+1} distinct points in [a, b], then the linear functionals {\displaystyle \operatorname {ev} _{x_{i}}:f\mapsto f\left(x_{i}\right)} defined above form a basis of the dual space of Pn, the space of polynomials of degree {\displaystyle \leq n.} The integration functional I is also a linear functional on Pn, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients {\displaystyle a_{0},\ldots ,a_{n}} for which
{\displaystyle I(f)=a_{0}f(x_{0})+a_{1}f(x_{1})+\dots +a_{n}f(x_{n})}
for all {\displaystyle f\in P_{n}.}
This forms the foundation of the theory of numerical quadrature.
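For example, the coefficients a0, …, an can be found by requiring exactness on the monomials 1, x, …, x^n, which gives a Vandermonde linear system. A minimal numpy sketch (our own illustration, with arbitrarily chosen nodes):

import numpy as np

lo, hi = 0.0, 1.0                          # integrate over [lo, hi]
nodes = np.array([0.0, 0.25, 0.6, 1.0])    # n + 1 = 4 distinct points
m = len(nodes)

# Exactness on x^k: sum_i a_i x_i^k = (hi^{k+1} - lo^{k+1}) / (k+1), k = 0..n.
V = np.vander(nodes, increasing=True).T    # V[k, i] = x_i^k
moments = np.array([(hi**(k + 1) - lo**(k + 1)) / (k + 1) for k in range(m)])
coeffs = np.linalg.solve(V, moments)

f = lambda x: 2 * x**3 - x + 1             # degree 3 <= n, so the rule is exact
print(np.isclose(coeffs @ f(nodes), 1.0))  # the integral of f over [0, 1] is exactly 1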
=== In quantum mechanics ===
Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti–isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.
=== Distributions ===
In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
== Dual vectors and bilinear forms ==
Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V∗ : v ↦ v∗ such that
{\displaystyle v^{*}(w):=\langle v,w\rangle \quad \forall w\in V,}
where the bilinear form on V is denoted {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle } (for instance, in Euclidean space, {\displaystyle \langle v,w\rangle =v\cdot w} is the dot product of v and w).
The inverse isomorphism is V∗ → V : v∗ ↦ v, where v is the unique element of V such that {\displaystyle \langle v,w\rangle =v^{*}(w)} for all {\displaystyle w\in V.} The above defined vector v∗ ∈ V∗ is said to be the dual vector of {\displaystyle v\in V.}
In an infinite-dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping V → V∗ from V into its continuous dual space V∗.
== Relationship to bases ==
=== Basis of the dual space ===
Let the vector space V have a basis {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\dots ,\mathbf {e} _{n}}, not necessarily orthogonal. Then the dual space {\displaystyle V^{*}} has a basis {\displaystyle {\tilde {\omega }}^{1},{\tilde {\omega }}^{2},\dots ,{\tilde {\omega }}^{n}} called the dual basis defined by the special property that
{\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})={\begin{cases}1&{\text{if }}i=j\\0&{\text{if }}i\neq j.\end{cases}}}
Or, more succinctly,
{\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})=\delta _{ij}}
where {\displaystyle \delta _{ij}} is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.
A linear functional {\displaystyle {\tilde {u}}} belonging to the dual space {\displaystyle V^{*}} can be expressed as a linear combination of basis functionals, with coefficients ("components") ui,
{\displaystyle {\tilde {u}}=\sum _{i=1}^{n}u_{i}\,{\tilde {\omega }}^{i}.}
Then, applying the functional {\displaystyle {\tilde {u}}} to a basis vector {\displaystyle \mathbf {e} _{j}} yields
{\displaystyle {\tilde {u}}(\mathbf {e} _{j})=\sum _{i=1}^{n}\left(u_{i}\,{\tilde {\omega }}^{i}\right)\mathbf {e} _{j}=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left(\mathbf {e} _{j}\right)\right]}
due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then
{\displaystyle {\begin{aligned}{\tilde {u}}({\mathbf {e} }_{j})&=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\mathbf {e} }_{j}\right)\right]\\&=\sum _{i}u_{i}{\delta }_{ij}\\&=u_{j}.\end{aligned}}}
So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
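In coordinates, if the basis vectors e_j are the columns of an invertible matrix E, then the dual basis functionals are the rows of E^{-1}. A minimal numpy sketch (our own illustration):

import numpy as np

E = np.array([[1.0, 1.0, 0.0],     # columns: a non-orthogonal basis of R^3
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
W = np.linalg.inv(E)               # rows: the dual basis functionals

print(np.allclose(W @ E, np.eye(3)))   # omega~^i(e_j) = delta_ij

u = np.array([[2.0, -1.0, 0.5]])   # a functional u~, as a row vector
coeffs = u @ E                     # components u_j = u~(e_j)
print(np.allclose(coeffs @ W, u))  # u~ = sum_j u_j omega~^j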
=== The dual basis and inner product ===
When the space V carries an inner product, then it is possible to write explicitly a formula for the dual basis of a given basis. Let V have (not necessarily orthogonal) basis {\displaystyle \mathbf {e} _{1},\dots ,\mathbf {e} _{n}.} In three dimensions (n = 3), the dual basis can be written explicitly
{\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )={\frac {1}{2}}\left\langle {\frac {\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon ^{ijk}\,(\mathbf {e} _{j}\times \mathbf {e} _{k})}{\mathbf {e} _{1}\cdot \mathbf {e} _{2}\times \mathbf {e} _{3}}},\mathbf {v} \right\rangle ,}
for {\displaystyle i=1,2,3,} where ε is the Levi-Civita symbol and {\displaystyle \langle \cdot ,\cdot \rangle } the inner product (or dot product) on V.
In higher dimensions, this generalizes as follows
{\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )=\left\langle {\frac {\sum _{1\leq i_{2}<i_{3}<\dots <i_{n}\leq n}\varepsilon ^{ii_{2}\dots i_{n}}(\star \mathbf {e} _{i_{2}}\wedge \cdots \wedge \mathbf {e} _{i_{n}})}{\star (\mathbf {e} _{1}\wedge \cdots \wedge \mathbf {e} _{n})}},\mathbf {v} \right\rangle ,}
where {\displaystyle \star } is the Hodge star operator.
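With the dot product on R3 the formula reduces to the classical reciprocal basis, e.g. ω̃^1 = (e2 × e3)/(e1 · e2 × e3). A quick numpy check (our own illustration):

import numpy as np

e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0])
e3 = np.array([1.0, 0.0, 1.0])
vol = e1 @ np.cross(e2, e3)            # scalar triple product e1 . (e2 x e3)

w1 = np.cross(e2, e3) / vol            # reciprocal (dual) basis vectors
w2 = np.cross(e3, e1) / vol
w3 = np.cross(e1, e2) / vol

W = np.array([w1, w2, w3])
E = np.array([e1, e2, e3]).T
print(np.allclose(W @ E, np.eye(3)))   # omega~^i(e_j) = delta_ij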
== Over a ring ==
Modules over a ring are generalizations of vector spaces, which removes the restriction that coefficients belong to a field. Given a module M over a ring R, a linear form on M is a linear map from M to R, where the latter is considered as a module over itself. The space of linear forms is always denoted Homk(V, k), whether k is a field or not. It is a right module if V is a left module.
The existence of "enough" linear forms on a module is equivalent to projectivity.
== Change of field ==
Suppose that {\displaystyle X} is a vector space over {\displaystyle \mathbb {C} .} Restricting scalar multiplication to {\displaystyle \mathbb {R} } gives rise to a real vector space {\displaystyle X_{\mathbb {R} }} called the realification of {\displaystyle X.} Any vector space {\displaystyle X} over {\displaystyle \mathbb {C} } is also a vector space over {\displaystyle \mathbb {R} ,} endowed with a complex structure; that is, there exists a real vector subspace {\displaystyle X_{\mathbb {R} }} such that we can (formally) write {\displaystyle X=X_{\mathbb {R} }\oplus X_{\mathbb {R} }i} as {\displaystyle \mathbb {R} }-vector spaces.
=== Real versus complex linear functionals ===
Every linear functional on {\displaystyle X} is complex-valued while every linear functional on {\displaystyle X_{\mathbb {R} }} is real-valued. If {\displaystyle \dim X\neq 0} then a linear functional on either one of {\displaystyle X} or {\displaystyle X_{\mathbb {R} }} is non-trivial (meaning not identically {\displaystyle 0}) if and only if it is surjective (because if {\displaystyle \varphi (x)\neq 0} then for any scalar {\displaystyle s,} {\displaystyle \varphi \left((s/\varphi (x))x\right)=s}), where the image of a linear functional on {\displaystyle X} is {\displaystyle \mathbb {C} } while the image of a linear functional on {\displaystyle X_{\mathbb {R} }} is {\displaystyle \mathbb {R} .} Consequently, the only function on {\displaystyle X} that is both a linear functional on {\displaystyle X} and a linear functional on {\displaystyle X_{\mathbb {R} }} is the trivial functional; in other words, {\displaystyle X^{\#}\cap X_{\mathbb {R} }^{\#}=\{0\},} where {\displaystyle \,{\cdot }^{\#}} denotes the space's algebraic dual space.
However, every {\displaystyle \mathbb {C} }-linear functional on {\displaystyle X} is an {\displaystyle \mathbb {R} }-linear operator (meaning that it is additive and homogeneous over {\displaystyle \mathbb {R} }), but unless it is identically {\displaystyle 0,} it is not an {\displaystyle \mathbb {R} }-linear functional on {\displaystyle X} because its range (which is {\displaystyle \mathbb {C} }) is 2-dimensional over {\displaystyle \mathbb {R} .} Conversely, a non-zero {\displaystyle \mathbb {R} }-linear functional has range too small to be a {\displaystyle \mathbb {C} }-linear functional as well.
=== Real and imaginary parts ===
If {\displaystyle \varphi \in X^{\#}} then denote its real part by {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi } and its imaginary part by {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .} Then {\displaystyle \varphi _{\mathbb {R} }:X\to \mathbb {R} } and {\displaystyle \varphi _{i}:X\to \mathbb {R} } are linear functionals on {\displaystyle X_{\mathbb {R} }} and {\displaystyle \varphi =\varphi _{\mathbb {R} }+i\varphi _{i}.}
The fact that {\displaystyle z=\operatorname {Re} z-i\operatorname {Re} (iz)=\operatorname {Im} (iz)+i\operatorname {Im} z} for all {\displaystyle z\in \mathbb {C} } implies that for all {\displaystyle x\in X,}
{\displaystyle {\begin{alignedat}{4}\varphi (x)&=\varphi _{\mathbb {R} }(x)-i\varphi _{\mathbb {R} }(ix)\\&=\varphi _{i}(ix)+i\varphi _{i}(x)\\\end{alignedat}}}
and consequently, that {\displaystyle \varphi _{i}(x)=-\varphi _{\mathbb {R} }(ix)} and {\displaystyle \varphi _{\mathbb {R} }(x)=\varphi _{i}(ix).}
The assignment {\displaystyle \varphi \mapsto \varphi _{\mathbb {R} }} defines a bijective {\displaystyle \mathbb {R} }-linear operator {\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}} whose inverse is the map {\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}} defined by the assignment {\displaystyle g\mapsto L_{g}} that sends {\displaystyle g:X_{\mathbb {R} }\to \mathbb {R} } to the linear functional {\displaystyle L_{g}:X\to \mathbb {C} } defined by
{\displaystyle L_{g}(x):=g(x)-ig(ix)\quad {\text{ for all }}x\in X.}
The real part of {\displaystyle L_{g}} is {\displaystyle g} and the bijection {\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}} is an {\displaystyle \mathbb {R} }-linear operator, meaning that {\displaystyle L_{g+h}=L_{g}+L_{h}} and {\displaystyle L_{rg}=rL_{g}} for all {\displaystyle r\in \mathbb {R} } and {\displaystyle g,h\in X_{\mathbb {R} }^{\#}.}
Similarly for the imaginary part, the assignment {\displaystyle \varphi \mapsto \varphi _{i}} induces an {\displaystyle \mathbb {R} }-linear bijection {\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}} whose inverse is the map {\displaystyle X_{\mathbb {R} }^{\#}\to X^{\#}} defined by sending {\displaystyle I\in X_{\mathbb {R} }^{\#}} to the linear functional on {\displaystyle X} defined by {\displaystyle x\mapsto I(ix)+iI(x).}
This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray), and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.
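On X = C^n the correspondence can be checked directly: starting from a C-linear functional φ, its real part g = Re φ determines φ via L_g(x) = g(x) − i g(ix). A minimal numpy sketch (our own illustration):

import numpy as np

rng = np.random.default_rng(7)
n = 4
c = rng.normal(size=n) + 1j * rng.normal(size=n)

phi = lambda x: c @ x                  # a C-linear functional on C^n
g = lambda x: (c @ x).real             # its real part: an R-linear functional
L_g = lambda x: g(x) - 1j * g(1j * x)  # reconstruction L_g(x) = g(x) - i g(ix)

x = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.isclose(L_g(x), phi(x)))      # L_{Re phi} = phi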
=== Properties and relationships ===
Suppose {\displaystyle \varphi :X\to \mathbb {C} } is a linear functional on {\displaystyle X} with real part {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi } and imaginary part {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .} Then {\displaystyle \varphi =0} if and only if {\displaystyle \varphi _{\mathbb {R} }=0} if and only if {\displaystyle \varphi _{i}=0.}
Assume that {\displaystyle X} is a topological vector space. Then {\displaystyle \varphi } is continuous if and only if its real part {\displaystyle \varphi _{\mathbb {R} }} is continuous, if and only if its imaginary part {\displaystyle \varphi _{i}} is continuous. That is, either all three of {\displaystyle \varphi ,\varphi _{\mathbb {R} },} and {\displaystyle \varphi _{i}} are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". In particular, {\displaystyle \varphi \in X^{\prime }} if and only if {\displaystyle \varphi _{\mathbb {R} }\in X_{\mathbb {R} }^{\prime }} where the prime denotes the space's continuous dual space.
Let {\displaystyle B\subseteq X.} If {\displaystyle uB\subseteq B} for all scalars {\displaystyle u\in \mathbb {C} } of unit length (meaning {\displaystyle |u|=1}) then
{\displaystyle \sup _{b\in B}|\varphi (b)|=\sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|.}
Similarly, if {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi :X\to \mathbb {R} } denotes the imaginary part of {\displaystyle \varphi } then {\displaystyle iB\subseteq B} implies
{\displaystyle \sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|=\sup _{b\in B}\left|\varphi _{i}(b)\right|.}
If {\displaystyle X} is a normed space with norm {\displaystyle \|\cdot \|} and if {\displaystyle B=\{x\in X:\|x\|\leq 1\}} is the closed unit ball then the supremums above are the operator norms (defined in the usual way) of {\displaystyle \varphi ,\varphi _{\mathbb {R} },} and {\displaystyle \varphi _{i}} so that
{\displaystyle \|\varphi \|=\left\|\varphi _{\mathbb {R} }\right\|=\left\|\varphi _{i}\right\|.}
This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces.
If {\displaystyle X} is a complex Hilbert space with a (complex) inner product {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle } that is antilinear in its first coordinate (and linear in the second) then {\displaystyle X_{\mathbb {R} }} becomes a real Hilbert space when endowed with the real part of {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle .} Explicitly, this real inner product on {\displaystyle X_{\mathbb {R} }} is defined by {\displaystyle \langle x|y\rangle _{\mathbb {R} }:=\operatorname {Re} \langle x|y\rangle } for all {\displaystyle x,y\in X} and it induces the same norm on {\displaystyle X} as {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle } because {\displaystyle {\sqrt {\langle x|x\rangle _{\mathbb {R} }}}={\sqrt {\langle x|x\rangle }}} for all vectors {\displaystyle x.} Applying the Riesz representation theorem to {\displaystyle \varphi \in X^{\prime }} (resp. to {\displaystyle \varphi _{\mathbb {R} }\in X_{\mathbb {R} }^{\prime }}) guarantees the existence of a unique vector {\displaystyle f_{\varphi }\in X} (resp. {\displaystyle f_{\varphi _{\mathbb {R} }}\in X_{\mathbb {R} }}) such that {\displaystyle \varphi (x)=\left\langle f_{\varphi }|\,x\right\rangle } (resp. {\displaystyle \varphi _{\mathbb {R} }(x)=\left\langle f_{\varphi _{\mathbb {R} }}|\,x\right\rangle _{\mathbb {R} }}) for all vectors {\displaystyle x.} The theorem also guarantees that {\displaystyle \left\|f_{\varphi }\right\|=\|\varphi \|_{X^{\prime }}} and {\displaystyle \left\|f_{\varphi _{\mathbb {R} }}\right\|=\left\|\varphi _{\mathbb {R} }\right\|_{X_{\mathbb {R} }^{\prime }}.} It is readily verified that {\displaystyle f_{\varphi }=f_{\varphi _{\mathbb {R} }}.} Now {\displaystyle \left\|f_{\varphi }\right\|=\left\|f_{\varphi _{\mathbb {R} }}\right\|} and the previous equalities imply that {\displaystyle \|\varphi \|_{X^{\prime }}=\left\|\varphi _{\mathbb {R} }\right\|_{X_{\mathbb {R} }^{\prime }},} which is the same conclusion that was reached above.
== In infinite dimensions ==
Below, all vector spaces are over either the real numbers {\displaystyle \mathbb {R} } or the complex numbers {\displaystyle \mathbb {C} .}
If {\displaystyle V} is a topological vector space, the space of continuous linear functionals (the continuous dual) is often simply called the dual space. If {\displaystyle V} is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.
A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that {\displaystyle |f|\leq p.}
=== Characterizing closed subspaces ===
Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed, and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.
==== Hyperplanes and maximal subspaces ====
A vector subspace {\displaystyle M} of {\displaystyle X} is called maximal if {\displaystyle M\subsetneq X} (meaning {\displaystyle M\subseteq X} and {\displaystyle M\neq X}) and there does not exist a vector subspace {\displaystyle N} of {\displaystyle X} such that {\displaystyle M\subsetneq N\subsetneq X.} A vector subspace {\displaystyle M} of {\displaystyle X} is maximal if and only if it is the kernel of some non-trivial linear functional on {\displaystyle X} (that is, {\displaystyle M=\ker f} for some linear functional {\displaystyle f} on {\displaystyle X} that is not identically 0). An affine hyperplane in {\displaystyle X} is a translate of a maximal vector subspace. By linearity, a subset {\displaystyle H} of {\displaystyle X} is an affine hyperplane if and only if there exists some non-trivial linear functional {\displaystyle f} on {\displaystyle X} such that
{\displaystyle H=f^{-1}(1)=\{x\in X:f(x)=1\}.}
If {\displaystyle f} is a linear functional and {\displaystyle s\neq 0} is a scalar then
{\displaystyle f^{-1}(s)=s\left(f^{-1}(1)\right)=\left({\tfrac {1}{s}}f\right)^{-1}(1).}
This equality can be used to relate different level sets of {\displaystyle f.} Moreover, if {\displaystyle f\neq 0} then the kernel of {\displaystyle f} can be reconstructed from the affine hyperplane {\displaystyle H:=f^{-1}(1)} by {\displaystyle \ker f=H-H.}
==== Relationships between multiple linear functionals ====
Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other).
This fact can be generalized to the following theorem.
If f is a non-trivial linear functional on X with kernel N, {\displaystyle x\in X} satisfies {\displaystyle f(x)=1,} and U is a balanced subset of X, then {\displaystyle N\cap (x+U)=\varnothing } if and only if {\displaystyle |f(u)|<1} for all {\displaystyle u\in U.}
=== Hahn–Banach theorem ===
Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of {\displaystyle \mathbb {R} .} However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done.
=== Equicontinuity of families of linear functionals ===
Let X be a topological vector space (TVS) with continuous dual space {\displaystyle X'.} For any subset H of {\displaystyle X',} the following are equivalent:
H is equicontinuous;
H is contained in the polar of some neighborhood of {\displaystyle 0} in X;
the (pre)polar of H is a neighborhood of {\displaystyle 0} in X.
If H is an equicontinuous subset of {\displaystyle X'} then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull. Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of {\displaystyle X'} is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).
== See also ==
Discontinuous linear map
Locally convex topological vector space – Vector space with a topology defined by convex open sets
Positive linear functional – ordered vector space with a partial order
Multilinear form – Map from multiple vectors to an underlying field of scalars, linear in each argument
Topological vector space – Vector space with a notion of nearness
== Notes ==
=== Footnotes ===
=== Proofs ===
== References ==
== Bibliography ==
Axler, Sheldon (2015), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-3-319-11079-0
Bishop, Richard; Goldberg, Samuel (1980), "Chapter 4", Tensor Analysis on Manifolds, Dover Publications, ISBN 0-486-64039-6
Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
Dunford, Nelson (1988). Linear operators. New York: Interscience Publishers. ISBN 0-471-60848-3. OCLC 18412261.
Halmos, Paul Richard (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics (1958 2nd ed.), Springer, ISBN 0-387-90093-4
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008), A (Terse) Introduction to Linear Algebra, American Mathematical Society, ISBN 978-0-8218-4419-9
Lax, Peter (1996), Linear algebra, Wiley-Interscience, ISBN 978-0-471-11111-5
Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schutz, Bernard (1985), "Chapter 3", A first course in general relativity, Cambridge, UK: Cambridge University Press, ISBN 0-521-27703-5
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Tu, Loring W. (2011), An Introduction to Manifolds, Universitext (2nd ed.), Springer, ISBN 978-0-8218-4419-9
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. | Wikipedia/Linear_functional |
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives.
The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x² − 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000.
Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, thermodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics (Schrödinger equation, Pauli equation etc.). They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.
Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, where the meaning of a solution depends on the context of the problem, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "universal theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.
Ordinary differential equations can be viewed as a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.
== Introduction ==
A function u(x, y, z) of three variables is "harmonic" or "a solution of the Laplace equation" if it satisfies the condition
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=0.}
Such functions were widely studied in the 19th century due to their relevance for classical mechanics; for example, the equilibrium temperature distribution of a homogeneous solid is a harmonic function. Given an explicit function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance
{\displaystyle u(x,y,z)={\frac {1}{\sqrt {x^{2}-2x+y^{2}+z^{2}+1}}}}
and
{\displaystyle u(x,y,z)=2x^{2}-y^{2}-z^{2}}
are both harmonic while
{\displaystyle u(x,y,z)=\sin(xy)+z}
is not. It may be surprising that the two examples of harmonic functions are of such strikingly different form. This is a reflection of the fact that they are not, in any immediate way, special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist.
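Such checks are easy to automate. The following sketch (assuming the Python library sympy is available; the helper name laplacian is our own) verifies the three examples above symbolically:

```python
# Symbolic check that the first two examples are harmonic and the third is not.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def laplacian(u):
    """Sum of the second partial derivatives with respect to x, y, z."""
    return sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

u1 = 1 / sp.sqrt(x**2 - 2*x + y**2 + z**2 + 1)
u2 = 2*x**2 - y**2 - z**2
u3 = sp.sin(x*y) + z

print(sp.simplify(laplacian(u1)))   # 0, so u1 is harmonic
print(sp.simplify(laplacian(u2)))   # 0, so u2 is harmonic
print(sp.simplify(laplacian(u3)))   # -(x**2 + y**2)*sin(x*y), not harmonic
```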
The nature of this failure can be seen more concretely in the case of the following PDE: for a function v(x, y) of two variables, consider the equation
{\displaystyle {\frac {\partial ^{2}v}{\partial x\partial y}}=0.}
It can be directly checked that any function v of the form v(x, y) = f(x) + g(y), for any single-variable functions f and g whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDEs, one generally has the free choice of functions.
The nature of this choice varies from PDE to PDE. To understand it for any given equation, existence and uniqueness theorems are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate.
To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function". Otherwise, speaking only in terms such as "a function of two variables", it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself.
The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions.
Let B denote the unit-radius disk around the origin in the plane. For any continuous function U on the unit circle, there is exactly one function u on B such that
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0}
and whose restriction to the unit circle is given by U.
For any functions f and g on the real line R, there is exactly one function u on R × (−1, 1) such that
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}-{\frac {\partial ^{2}u}{\partial y^{2}}}=0}
and with u(x, 0) = f(x) and ∂u/∂y(x, 0) = g(x) for all values of x.
Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function.
If u is a function on R2 with
{\displaystyle {\frac {\partial }{\partial x}}{\frac {\frac {\partial u}{\partial x}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}+{\frac {\partial }{\partial y}}{\frac {\frac {\partial u}{\partial y}}{\sqrt {1+\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial u}{\partial y}}\right)^{2}}}}=0,}
then there are numbers a, b, and c with u(x, y) = ax + by + c.
In contrast to the earlier examples, this PDE is nonlinear, owing to the square roots and the squares. A linear PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and any constant multiple of any solution is also a solution.
== Definition ==
A partial differential equation is an equation that involves an unknown function of {\displaystyle n\geq 2} variables and (some of) its partial derivatives. That is, for the unknown function {\displaystyle u:U\rightarrow \mathbb {R} ,} of variables {\displaystyle x=(x_{1},\dots ,x_{n})} belonging to the open subset {\displaystyle U} of {\displaystyle \mathbb {R} ^{n}}, the {\displaystyle k^{th}}-order partial differential equation is defined as
{\displaystyle F[D^{k}u,D^{k-1}u,\dots ,Du,u,x]=0,}
where {\displaystyle F:\mathbb {R} ^{n^{k}}\times \mathbb {R} ^{n^{k-1}}\dots \times \mathbb {R} ^{n}\times \mathbb {R} \times U\rightarrow \mathbb {R} ,} and {\displaystyle D} is the partial derivative operator.
=== Notation ===
When writing PDEs, it is common to denote partial derivatives using subscripts. For example:
{\displaystyle u_{x}={\frac {\partial u}{\partial x}},\quad u_{xx}={\frac {\partial ^{2}u}{\partial x^{2}}},\quad u_{xy}={\frac {\partial ^{2}u}{\partial y\,\partial x}}={\frac {\partial }{\partial y}}\left({\frac {\partial u}{\partial x}}\right).}
In the general situation that u is a function of n variables, then ui denotes the first partial derivative relative to the i-th input, uij denotes the second partial derivative relative to the i-th and j-th inputs, and so on.
The Greek letter Δ denotes the Laplace operator; if u is a function of n variables, then
{\displaystyle \Delta u=u_{11}+u_{22}+\cdots +u_{nn}.}
In the physics literature, the Laplace operator is often denoted by ∇2; in the mathematics literature, ∇2u may also denote the Hessian matrix of u.
== Classification ==
=== Linear and nonlinear equations ===
A PDE is called linear if it is linear in the unknown and its derivatives. For example, for a function u of x and y, a second order linear PDE is of the form
{\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+a_{5}(x,y)u_{x}+a_{6}(x,y)u_{y}+a_{7}(x,y)u=f(x,y)}
where ai and f are functions of the independent variables x and y only. (Often the mixed-partial derivatives uxy and uyx will be equated, but this is not required for the discussion of linearity.)
If the ai are constants (independent of x and y) then the PDE is called linear with constant coefficients. If f is zero everywhere then the linear PDE is homogeneous, otherwise it is inhomogeneous. (This is separate from asymptotic homogenization, which studies the effects of high-frequency oscillations in the coefficients upon solutions to PDEs.)
Nearest to linear PDEs are semi-linear PDEs, where only the highest order derivatives appear as linear terms, with coefficients that are functions of the independent variables. The lower order derivatives and the unknown function may appear arbitrarily. For example, a general second order semi-linear PDE in two variables is
{\displaystyle a_{1}(x,y)u_{xx}+a_{2}(x,y)u_{xy}+a_{3}(x,y)u_{yx}+a_{4}(x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0}
In a quasilinear PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives:
{\displaystyle a_{1}(u_{x},u_{y},u,x,y)u_{xx}+a_{2}(u_{x},u_{y},u,x,y)u_{xy}+a_{3}(u_{x},u_{y},u,x,y)u_{yx}+a_{4}(u_{x},u_{y},u,x,y)u_{yy}+f(u_{x},u_{y},u,x,y)=0}
Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion.
A PDE without any linearity properties is called fully nonlinear, and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation, which arises in differential geometry.
=== Second order equations ===
The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial and boundary conditions and to the smoothness of the solutions. Assuming uxy = uyx, the general linear second-order PDE in two independent variables has the form
{\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+\cdots {\mbox{(lower order terms)}}=0,}
where the coefficients A, B, C... may depend upon x and y. If A2 + B2 + C2 > 0 over a region of the xy-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:
{\displaystyle Ax^{2}+2Bxy+Cy^{2}+\cdots =0.}
More precisely, replacing ∂x by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.
Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B2 − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B2 − AC due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)2 − 4AC = 4(B2 − AC), with the factor of 4 dropped for simplicity.
B2 − AC < 0 (elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x < 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}+u_{yy}+\cdots =0,}
where x and y correspond to the changed variables. This justifies the Laplace equation as an example of this type.
B2 − AC = 0 (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x = 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}+\cdots =0,}
where x corresponds to the changed variables. This justifies the heat equation, which is of the form {\textstyle u_{t}-u_{xx}+\cdots =0}, as an example of this type.
B2 − AC > 0 (hyperbolic partial differential equation): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x > 0. By change of variables, the equation can always be expressed in the form:
{\displaystyle u_{xx}-u_{yy}+\cdots =0,}
where x and y correspond to the changed variables. This justifies the wave equation as an example of this type.
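The two-variable classification is mechanical enough to express in a few lines. The following Python sketch (the function name and sample coefficients are our own illustration) applies the discriminant test to the three standard examples and to the Euler–Tricomi equation uxx = x uyy, for which A = 1, B = 0, C = −x:

```python
# Classify a second-order linear PDE at a point from the coefficients of
# A u_xx + 2B u_xy + C u_yy + (lower-order terms) = 0.
def classify(A, B, C):
    d = B**2 - A*C          # discriminant under the 2B convention
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0: elliptic
print(classify(1, 0, 0))    # heat equation (u_xx is the only 2nd-order term): parabolic
print(classify(1, 0, -1))   # wave equation u_xx - u_yy = 0: hyperbolic

# Euler-Tricomi equation u_xx = x u_yy, i.e. A = 1, B = 0, C = -x:
for xval in (-1.0, 0.0, 1.0):
    print(xval, classify(1, 0, -xval))   # elliptic, parabolic, hyperbolic
```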
If there are n independent variables x1, x2 , …, xn, a general linear partial differential equation of second order has the form
{\displaystyle Lu=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{i,j}{\frac {\partial ^{2}u}{\partial x_{i}\partial x_{j}}}\quad +{\text{lower-order terms}}=0.}
The classification depends upon the signature of the eigenvalues of the coefficient matrix ai,j.
Elliptic: the eigenvalues are all positive or all negative.
Parabolic: the eigenvalues are all positive or all negative, except one that is zero.
Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues.
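The following sketch (assuming numpy and a symmetric coefficient matrix; the function name and tolerance are our own choices) implements this eigenvalue test for the four cases above:

```python
# Classify a second-order linear PDE in n variables from the signature of the
# eigenvalues of its (symmetric) coefficient matrix a_ij.
import numpy as np

def classify(a, tol=1e-12):
    eig = np.linalg.eigvalsh(np.asarray(a, dtype=float))
    pos = int(np.sum(eig > tol))
    neg = int(np.sum(eig < -tol))
    zero = len(eig) - pos - neg
    if zero == 0 and (pos == 0 or neg == 0):
        return "elliptic"
    if zero == 1 and (pos == 0 or neg == 0):
        return "parabolic"
    if zero == 0 and min(pos, neg) == 1:
        return "hyperbolic"
    if zero == 0 and pos > 1 and neg > 1:
        return "ultrahyperbolic"
    return "degenerate"

print(classify(np.eye(3)))                # Laplace operator: elliptic
print(classify(np.diag([1, 1, -1])))      # wave operator: hyperbolic
print(classify(np.diag([1, 1, 0])))       # heat operator: parabolic
print(classify(np.diag([1, 1, -1, -1])))  # ultrahyperbolic
```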
The theory of elliptic, parabolic, and hyperbolic equations has been studied for centuries, largely centered around or based upon the standard examples of the Laplace equation, the heat equation, and the wave equation.
However, the classification only depends on the linearity of the second-order terms and is therefore applicable to semi- and quasilinear PDEs as well. The basic types also extend to hybrids such as the Euler–Tricomi equation, which varies from elliptic to hyperbolic in different regions of the domain, as well as to higher-order PDEs, but such knowledge is more specialized.
=== Systems of first-order equations and characteristic surfaces ===
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices Aν are m by m matrices for ν = 1, 2, …, n. The partial differential equation takes the form
{\displaystyle Lu=\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial u}{\partial x_{\nu }}}+B=0,}
where the coefficient matrices Aν and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form
{\displaystyle \varphi (x_{1},x_{2},\ldots ,x_{n})=0,}
where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes:
{\displaystyle Q\left({\frac {\partial \varphi }{\partial x_{1}}},\ldots ,{\frac {\partial \varphi }{\partial x_{n}}}\right)=\det \left[\sum _{\nu =1}^{n}A_{\nu }{\frac {\partial \varphi }{\partial x_{\nu }}}\right]=0.}
The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential equation. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do not determine the normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S.
A first-order system Lu = 0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S.
A first-order system is hyperbolic at a point if there is a spacelike surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation Q(λξ + η) = 0 has m real roots λ1, λ2, …, λm. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ) = 0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has m sheets, and the axis ζ = λξ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
== Analytical solutions ==
=== Separation of variables ===
Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a feature of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on its variables, space and time, can be written as a product of terms that each depend on a single variable, and then see if this can be made to solve the problem.
In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve.
This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately.
This generalizes to the method of characteristics, and is also used in integral transforms.
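For instance, the following sketch (numpy assumed; the initial condition is an arbitrary choice) applies separation of variables to the heat equation ut = uxx on the interval [0, π] with zero boundary values: each separated mode sin(nx)e−n²t solves the PDE and the boundary conditions, and a Fourier sine series of the initial data assembles the full solution.

```python
import numpy as np

L = np.pi
x = np.linspace(0, L, 201)
f = x * (L - x)                 # initial condition u(x, 0)

def sine_coeffs(f, x, nmax):
    # b_n = (2/L) * integral of f(x) sin(n x) dx, via the trapezoidal rule.
    return [2.0 / L * np.trapz(f * np.sin(n * x), x) for n in range(1, nmax + 1)]

def u(x, t, coeffs):
    # Superposition of separated modes sin(n x) * exp(-n**2 * t).
    return sum(b * np.sin(n * x) * np.exp(-n**2 * t)
               for n, b in enumerate(coeffs, start=1))

b = sine_coeffs(f, x, nmax=50)
print(u(x, 0.0, b)[100], f[100])    # the series reproduces the initial data
print(u(x, 0.5, b)[100])            # the solution has decayed by t = 0.5
```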
=== Method of characteristics ===
The characteristic surface in n = 2-dimensional space is called a characteristic curve.
In special cases, one can find characteristic curves on which the first-order PDE reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics.
More generally, applying the method to first-order PDEs in higher dimensions, one may find characteristic surfaces.
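A minimal sketch for the simplest case, the constant-coefficient advection equation ut + c ux = 0 (numpy assumed; the speed and initial profile are arbitrary choices): along each characteristic line x(t) = x0 + ct the PDE reduces to the ODE du/dt = 0, so the solution is constant along characteristics.

```python
import numpy as np

c = 2.0
u0 = lambda x: np.exp(-x**2)    # initial profile u(x, 0)

def u(x, t):
    # Trace the characteristic through (x, t) back to the initial line t = 0.
    return u0(x - c * t)

x = np.linspace(-5, 5, 11)
print(u(x, 0.0))    # the initial data
print(u(x, 1.0))    # the same profile translated right by c*t = 2
```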
=== Integral transform ===
An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator.
An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves.
If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation is an example of the use of a Fourier integral.
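On a periodic domain the diagonalization is a few lines of numpy (a sketch; the grid and the discontinuous initial data are arbitrary choices): the FFT turns the heat equation into independent modes, each decaying as exp(−k²t).

```python
import numpy as np

n, length = 256, 2 * np.pi
x = np.linspace(0, length, n, endpoint=False)
u0 = np.sign(np.sin(x))                          # a square wave
k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers

def heat_solve(u0, t):
    # Each Fourier mode of the heat equation decays as exp(-k**2 * t).
    return np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-k**2 * t)))

print(np.max(np.abs(heat_solve(u0, 0.0) - u0)))  # ~0: t = 0 returns the data
print(heat_solve(u0, 0.05)[:4])                  # the jumps are smoothed out
```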
=== Change of variables ===
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes equation
{\displaystyle {\frac {\partial V}{\partial t}}+{\tfrac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0}
is reducible to the heat equation
{\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {\partial ^{2}u}{\partial x^{2}}}}
by the change of variables
{\displaystyle {\begin{aligned}V(S,t)&=v(x,\tau ),\\[5px]x&=\ln \left(S\right),\\[5px]\tau &={\tfrac {1}{2}}\sigma ^{2}(T-t),\\[5px]v(x,\tau )&=e^{-\alpha x-\beta \tau }u(x,\tau ).\end{aligned}}}
=== Fundamental solution ===
Inhomogeneous equations can often be solved (for constant coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source {\displaystyle P(D)u=\delta }), then taking the convolution with the right-hand side to get the solution.
This is analogous in signal processing to understanding a filter by its impulse response.
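A one-dimensional sketch of the idea (numpy assumed; the right-hand side is an arbitrary choice): the fundamental solution of d²/dx² on the real line is Φ(x) = |x|/2, since Φ″ = δ, so convolving Φ with f yields a solution of u″ = f, just as a filter's output is the convolution of its impulse response with the input.

```python
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)
f = np.exp(-x**2)               # right-hand side of u'' = f
phi = np.abs(x) / 2             # fundamental solution of d^2/dx^2

u = np.convolve(f, phi, mode="same") * dx    # u = phi * f (discretized)
u_xx = np.gradient(np.gradient(u, dx), dx)   # numerical second derivative
print(np.max(np.abs(u_xx - f)[200:-200]))    # small away from the edges
```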
=== Superposition principle ===
The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x. The same principle can be observed in PDEs where the solutions may be real or complex and additive. If u1 and u2 are solutions of a linear PDE in some function space R, then u = c1u1 + c2u2 with any constants c1 and c2 is also a solution of that PDE in the same function space.
=== Methods for non-linear equations ===
There are no generally applicable analytical methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis).
Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems.
The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations.
In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers.
=== Lie group method ===
From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source, and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.
A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, finding their Lax pairs, recursion operators, and Bäcklund transforms, and finally finding exact analytic solutions to the PDE.
Symmetry methods have been applied to study differential equations arising in mathematics, physics, engineering, and many other disciplines.
=== Semi-analytical methods ===
The Adomian decomposition method, the Lyapunov artificial small parameter method, and He's homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods and, except for the Lyapunov method, are independent of small physical parameters as compared to the well-known perturbation theory, thus giving these methods greater flexibility and solution generality.
== Numerical solutions ==
The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well as other kinds of methods called meshfree methods, which were made to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods, and especially its exceptionally efficient higher-order version hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), interpolating element-free Galerkin method (IEFGM), etc.
=== Finite element method ===
The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc.
=== Finite difference method ===
Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.
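For example, the explicit forward-time, centered-space scheme for the heat equation ut = uxx replaces both derivatives by difference quotients; the numpy sketch below (grid sizes are arbitrary choices) is only stable when dt ≤ dx²/2.

```python
import numpy as np

nx, dx = 101, 0.01
dt = 0.4 * dx**2                 # respects the stability limit dt <= dx**2 / 2
u = np.zeros(nx)
u[nx // 2] = 1.0 / dx            # approximate point source in the middle

for _ in range(500):             # march forward in time
    u[1:-1] += dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[0] = u[-1] = 0.0           # homogeneous Dirichlet boundary values

print(u.max())          # the initial spike (height 100) has spread and decayed
print(u.sum() * dx)     # ~1: little heat has reached the boundary yet
```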
=== Finite volume method ===
Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design.
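A numpy sketch of this flux bookkeeping for the 1D advection equation ut + (cu)x = 0 on a periodic mesh follows (upwind face fluxes, assuming c > 0; all parameters are arbitrary choices); because each face flux leaves one cell and enters its neighbour, the total of the cell averages is conserved to machine precision.

```python
import numpy as np

nx, dx, c = 200, 0.05, 1.0
dt = 0.5 * dx / c                # CFL condition for stability
x = np.arange(nx) * dx
u = np.where(np.abs(x - 3) < 1, 1.0, 0.0)   # a square pulse of cell averages

total_before = u.sum() * dx
for _ in range(100):
    flux = c * u                 # upwind: each face takes the left cell's value
    u[1:] -= dt / dx * (flux[1:] - flux[:-1])
    u[0] -= dt / dx * (flux[0] - flux[-1])  # periodic wrap-around
print(total_before, u.sum() * dx)           # identical totals
```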
=== Neural networks ===
== Weak solutions ==
Weak solutions are functions that satisfy the PDE in a sense other than the classical one. The meaning of this term may differ with context, and one of the most commonly used definitions is based on the notion of distributions.
An example for the definition of a weak solution is as follows:
Consider the boundary-value problem given by:
{\displaystyle {\begin{aligned}Lu&=f\quad {\text{in }}U,\\u&=0\quad {\text{on }}\partial U,\end{aligned}}}
where {\displaystyle Lu=-\sum _{i,j}\partial _{j}(a^{ij}\partial _{i}u)+\sum _{i}b^{i}\partial _{i}u+cu} denotes a second-order partial differential operator in divergence form.
We say that {\displaystyle u\in H_{0}^{1}(U)} is a weak solution if
{\displaystyle \int _{U}[\sum _{i,j}a^{ij}(\partial _{i}u)(\partial _{j}v)+\sum _{i}b^{i}(\partial _{i}u)v+cuv]dx=\int _{U}fvdx}
for every {\displaystyle v\in H_{0}^{1}(U)}, which can be derived by a formal integration by parts.
An example of a weak solution is as follows:
{\displaystyle \phi (x)=-{\frac {1}{4\pi }}{\frac {1}{|x|}}}
is a weak solution satisfying {\displaystyle \nabla ^{2}\phi =\delta {\text{ in }}R^{3}}
in distributional sense, as formally,
{\displaystyle \int _{R^{3}}\nabla ^{2}\phi (x)\psi (x)dx=\int _{R^{3}}\phi (x)\nabla ^{2}\psi (x)dx=\psi (0){\text{ for }}\psi \in C_{c}^{\infty }(R^{3}).}
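The delta function in this identity is concentrated at the origin; away from it, φ is an ordinary harmonic function, which can be checked symbolically (a sketch assuming sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = -1 / (4 * sp.pi * r)

lap = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
print(sp.simplify(lap))    # 0, valid wherever r != 0
```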
== Theoretical studies ==
As a branch of pure mathematics, the theoretical study of PDEs focuses on the criteria for a solution to exist and on the properties of solutions; finding an explicit formula is often secondary.
=== Well-posedness ===
Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have:
an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE
by continuously changing the free choices, one continuously changes the corresponding solution
This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity", in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed.
=== Regularity ===
Regularity refers to the integrability and differentiability of weak solutions, which can often be represented by Sobolev spaces.
This problem arises due to the difficulty in searching for classical solutions. Researchers often tend to find weak solutions at first and then find out whether they are smooth enough to be qualified as classical solutions.
Results from functional analysis are often used in this field of study.
== See also ==
Some common PDEs
Acoustic wave equation
Burgers' equation
Continuity equation
Heat equation
Helmholtz equation
Klein–Gordon equation
Jacobi equation
Lagrange equation
Lorenz equation
Laplace's equation
Maxwell's equations
Navier–Stokes equation
Poisson's equation
Reaction–diffusion system
Schrödinger equation
Wave equation
Types of boundary conditions
Dirichlet boundary condition
Neumann boundary condition
Robin boundary condition
Cauchy problem
Various topics
Jet bundle
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Numerical partial differential equations
Partial differential algebraic equation
Recurrence relation
Stochastic processes and boundary value problems
== Notes ==
== References ==
== Further reading ==
Cajori, Florian (1928). "The Early History of Partial Differential Equations and of Partial Differentiation and Integration" (PDF). The American Mathematical Monthly. 35 (9): 459–467. doi:10.2307/2298771. JSTOR 2298771. Archived from the original (PDF) on 2018-11-23. Retrieved 2016-05-15.
Nirenberg, Louis (1994). "Partial differential equations in the first half of the century." Development of mathematics 1900–1950 (Luxembourg, 1992), 479–515, Birkhäuser, Basel.
Brezis, Haïm; Browder, Felix (1998). "Partial Differential Equations in the 20th Century". Advances in Mathematics. 135 (1): 76–144. doi:10.1006/aima.1997.1713.
== External links ==
"Differential equation, partial", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Partial Differential Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
Partial Differential Equations: Index at EqWorld: The World of Mathematical Equations.
Partial Differential Equations: Methods at EqWorld: The World of Mathematical Equations.
Example problems with solutions at exampleproblems.com
Partial Differential Equations at mathworld.wolfram.com
Partial Differential Equations with Mathematica
Partial Differential Equations in Cleve Moler: Numerical Computing with MATLAB
Partial Differential Equations at nag.com
Sanderson, Grant (April 21, 2019). "But what is a partial differential equation?". 3Blue1Brown. Archived from the original on 2021-11-02 – via YouTube.
In convex geometry, the Mahler volume of a centrally symmetric convex body is a dimensionless quantity that is associated with the body and is invariant under linear transformations. It is named after German-English mathematician Kurt Mahler. It is known that the shapes with the largest possible Mahler volume are the balls and solid ellipsoids; this is now known as the Blaschke–Santaló inequality. The still-unsolved Mahler conjecture states that the minimum possible Mahler volume is attained by a hypercube.
== Definition ==
A convex body in Euclidean space is defined as a compact convex set with non-empty interior. If
If {\displaystyle B} is a centrally symmetric convex body in {\displaystyle n}-dimensional Euclidean space, the polar body {\displaystyle B^{\circ }} is another centrally symmetric body in the same space, defined as the set {\displaystyle \left\{x\mid x\cdot y\leq 1{\text{ for all }}y\in B\right\}.}
The Mahler volume of {\displaystyle B} is the product of the volumes of {\displaystyle B} and {\displaystyle B^{\circ }}.
If {\displaystyle T} is an invertible linear transformation, then {\displaystyle (TB)^{\circ }=(T^{-1})^{\ast }B^{\circ }}. Applying {\displaystyle T} to {\displaystyle B} multiplies its volume by {\displaystyle \det T} and multiplies the volume of {\displaystyle B^{\circ }} by {\displaystyle \det(T^{-1})^{\ast }}. As these determinants are multiplicative inverses, the overall Mahler volume of {\displaystyle B} is preserved by linear transformations.
== Examples ==
The polar body of an {\displaystyle n}-dimensional unit sphere is itself another unit sphere. Thus, its Mahler volume is just the square of its volume, {\displaystyle {\frac {\Gamma (3/2)^{2n}4^{n}}{\Gamma ({\frac {n}{2}}+1)^{2}}}} where {\displaystyle \Gamma } is the Gamma function.
By affine invariance, any ellipsoid has the same Mahler volume.
The polar body of a polyhedron or polytope is its dual polyhedron or dual polytope. In particular, the polar body of a cube or hypercube is an octahedron or cross polytope. Its Mahler volume can be calculated as {\displaystyle {\frac {4^{n}}{\Gamma (n+1)}}.}
The Mahler volume of the sphere is larger than the Mahler volume of the hypercube by a factor of approximately {\displaystyle \left({\tfrac {\pi }{2}}\right)^{n}}.
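These example values are easy to tabulate (a sketch assuming scipy for the Gamma function; the function names are our own):

```python
from scipy.special import gamma

def mahler_ball(n):
    # Square of the volume of the n-dimensional unit ball.
    return gamma(1.5) ** (2 * n) * 4 ** n / gamma(n / 2 + 1) ** 2

def mahler_cube(n):
    # Hypercube times its polar cross polytope: 4**n / n!.
    return 4 ** n / gamma(n + 1)

for n in (2, 3, 4, 10):
    print(n, mahler_ball(n), mahler_cube(n), mahler_ball(n) / mahler_cube(n))
    # The ratio grows roughly like (pi/2)**n, up to polynomial factors.
```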
== Extreme shapes ==
The Blaschke–Santaló inequality states that the shapes with maximum Mahler volume are the spheres and ellipsoids. The three-dimensional case of this result was proven by Wilhelm Blaschke (1917); the full result was proven much later by Luis Santaló (1949) using a technique known as Steiner symmetrization by which any centrally symmetric convex body can be replaced with a more sphere-like body without decreasing its Mahler volume.
The shapes with the minimum known Mahler volume are hypercubes, cross polytopes, and more generally the Hanner polytopes which include these two types of shapes, as well as their affine transformations. The Mahler conjecture states that the Mahler volume of these shapes is the smallest of any n-dimensional symmetric convex body; it remains unsolved when {\displaystyle n\geq 4}. As Terry Tao writes:
The main reason why this conjecture is so difficult is that unlike the upper bound, in which there is essentially only one extremiser up to affine transformations (namely the ball), there are many distinct extremisers for the lower bound - not only the cube and the octahedron, but also products of cubes and octahedra, polar bodies of products of cubes and octahedra, products of polar bodies of… well, you get the idea. It is really difficult to conceive of any sort of flow or optimisation procedure which would converge to exactly these bodies and no others; a radically different type of argument might be needed.
Bourgain & Milman (1987) proved that the Mahler volume is bounded below by {\displaystyle c^{n}} times the volume of a sphere for some absolute constant {\displaystyle c>0}, matching the scaling behavior of the hypercube volume but with a smaller constant. Kuperberg (2008) proved that, more concretely, one can take {\displaystyle c={\tfrac {1}{2}}} in this bound. A result of this type is known as a reverse Santaló inequality.
== Partial results ==
The 2-dimensional case of the Mahler conjecture has been solved by Mahler and the 3-dimensional case by Iriyeh and Shibata.
It is known that each of the Hanner polytopes is a strict local minimizer for the Mahler volume in the class of origin-symmetric convex bodies endowed with the Banach–Mazur distance. This was first proven by Nazarov, Petrov, Ryabogin, and Zvavitch for the unit cube, and later generalized to all Hanner polytopes by Jaegil Kim.
The Mahler conjecture holds for zonotopes.
The Mahler conjecture holds in the class of unconditional bodies, that is, convex bodies invariant under reflection on each coordinate hyperplane {xi = 0}. This was first proven by Saint-Raymond in 1980. Later, a much shorter proof was found by Meyer. This was further generalized to convex bodies with symmetry groups that are more general reflection groups. The minimizers are then not necessarily Hanner polytopes, but were found to be regular polytopes corresponding to the reflection groups.
Reisner et al. (2010) showed that a minimizer of the Mahler volume must have Gaussian curvature equal to zero almost everywhere on its boundary, suggesting strongly that a minimal body is a polytope.
== For asymmetric bodies ==
The Mahler volume can be defined in the same way, as the product of the volume and the polar volume, for convex bodies whose interior contains the origin regardless of symmetry. Mahler conjectured that, for this generalization, the minimum volume is obtained by a simplex, with its centroid at the origin. As with the symmetric Mahler conjecture, reverse Santaló inequalities are known showing that the minimum volume is at least within an exponential factor of the simplex.
== Notes ==
== References ==
Blaschke, Wilhelm (1917). "Uber affine Geometrie VII: Neue Extremeingenschaften von Ellipse und Ellipsoid". Ber. Verh. Sächs. Akad. Wiss. Leipzig Math.-Phys. Kl. (in German). 69. Leipzig: 412–420.
Bourgain, Jean; Milman, Vitali D. (1987). "New volume ratio properties for convex symmetric bodies in {\displaystyle \mathbb {R} ^{n}}". Inventiones Mathematicae. 88 (2): 319–340. Bibcode:1987InMat..88..319B. doi:10.1007/BF01388911. MR 0880954.
Kuperberg, Greg (2008). "From the Mahler conjecture to Gauss linking integrals". Geometric and Functional Analysis. 18 (3): 870–892. arXiv:math/0610904. doi:10.1007/s00039-008-0669-4. MR 2438998.
Nazarov, Fedor; Petrov, Fedor; Ryabogin, Dmitry; Zvavitch, Artem (2010). "A remark on the Mahler conjecture: local minimality of the unit cube". Duke Mathematical Journal. 154 (3): 419–430. arXiv:0905.0867. doi:10.1215/00127094-2010-042. MR 2730574.
Santaló, Luis A. (1949). "An affine invariant for convex bodies of {\displaystyle n}-dimensional space". Portugaliae Mathematica (in Spanish). 8: 155–161. MR 0039293.
Tao, Terence (March 8, 2007). "Open question: the Mahler conjecture on convex bodies". Revised and reprinted in Tao, Terence (2009). "3.8 Mahler's conjecture for convex bodies". Structure and Randomness: Pages from Year One of a Mathematical Blog. American Mathematical Society. pp. 216–219. ISBN 978-0-8218-4695-7.
The modified discrete cosine transform (MDCT) is a transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped: it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block. This overlapping, in addition to the energy-compaction qualities of the DCT, makes the MDCT especially attractive for signal compression applications, since it helps to avoid artifacts stemming from the block boundaries. As a result of these advantages, the MDCT is the most widely used lossy compression technique in audio data compression. It is employed in most modern audio coding standards, including MP3, Dolby Digital (AC-3), Vorbis (Ogg), Windows Media Audio (WMA), ATRAC, Cook, Advanced Audio Coding (AAC), High-Definition Coding (HDC), LDAC, Dolby AC-4, and MPEG-H 3D Audio, as well as speech coding standards such as AAC-LD (LD-MDCT), G.722.1, G.729.1, CELT, and Opus.
The discrete cosine transform (DCT) was first proposed by Nasir Ahmed in 1972, and demonstrated by Ahmed with T. Natarajan and K. R. Rao in 1974. The MDCT was later proposed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley (1986) to develop the MDCT's underlying principle of time-domain aliasing cancellation (TDAC), described below. (There also exists an analogous transform, the MDST, based on the discrete sine transform, as well as other, rarely used, forms of the MDCT based on different types of DCT or DCT/DST combinations.)
In MP3, the MDCT is not applied to the audio signal directly, but rather to the output of a 32-band polyphase quadrature filter (PQF) bank. The output of this MDCT is postprocessed by an alias reduction formula to reduce the typical aliasing of the PQF filter bank. Such a combination of a filter bank with an MDCT is called a hybrid filter bank or a subband MDCT. AAC, on the other hand, normally uses a pure MDCT; only the (rarely used) MPEG-4 AAC-SSR variant (by Sony) uses a four-band PQF bank followed by an MDCT. Similar to MP3, ATRAC uses stacked quadrature mirror filters (QMF) followed by an MDCT.
== Definition ==
As a lapped transform, the MDCT is somewhat unusual compared to other Fourier-related transforms in that it has half as many outputs as inputs (instead of the same number). In particular, it is a linear function
{\displaystyle F\colon \mathbf {R} ^{2N}\to \mathbf {R} ^{N}}
(where R denotes the set of real numbers). The 2N real numbers x0, ..., x2N−1 are transformed into the N real numbers X0, ..., XN−1 according to the formula
{\displaystyle X_{k}=\sum _{n=0}^{2N-1}x_{n}\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}+{\frac {N}{2}}\right)\left(k+{\frac {1}{2}}\right)\right].}
The normalization coefficient in front of this transform, here unity, is an arbitrary convention and differs between treatments. Only the product of the normalizations of the MDCT and the IMDCT, below, is constrained.
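A direct O(N²) transcription of the formula (a numpy sketch; production codecs use the fast factorizations described under Computation below) makes the 2N-in, N-out shape explicit:

```python
import numpy as np

def mdct(x):
    # Direct evaluation of X_k = sum_n x_n cos[(pi/N)(n + 1/2 + N/2)(k + 1/2)].
    N = len(x) // 2
    n, k = np.arange(2 * N), np.arange(N)
    C = np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))
    return x @ C

X = mdct(np.random.randn(16))   # 2N = 16 inputs
print(X.shape)                  # (8,): N outputs
```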
=== Inverse transform ===
The inverse MDCT is known as the IMDCT. Because there are different numbers of inputs and outputs, at first glance it might seem that the MDCT should not be invertible. However, perfect invertibility is achieved by adding the overlapped IMDCTs of subsequent overlapping blocks, causing the errors to cancel and the original data to be retrieved; this technique is known as time-domain aliasing cancellation (TDAC).
The IMDCT transforms N real numbers X0, ..., XN−1 into 2N real numbers y0, ..., y2N−1 according to the formula
{\displaystyle y_{n}={\frac {1}{N}}\sum _{k=0}^{N-1}X_{k}\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}+{\frac {N}{2}}\right)\left(k+{\frac {1}{2}}\right)\right].}
Like for the DCT-IV, an orthogonal transform, the inverse has the same form as the forward transform.
In the case of a windowed MDCT with the usual window normalization (see below), the normalization coefficient in front of the IMDCT should be multiplied by 2 (i.e., becoming 2/N).
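The following self-contained numpy sketch (function names are our own) implements both transforms and verifies TDAC numerically: adding the overlapping halves of the IMDCTs of two 50%-overlapped blocks recovers the shared block exactly.

```python
import numpy as np

def cosmat(N):
    # The 2N-by-N matrix of cos[(pi/N)(n + 1/2 + N/2)(k + 1/2)].
    n, k = np.arange(2 * N), np.arange(N)
    return np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))

def mdct(x):
    return x @ cosmat(len(x) // 2)

def imdct(X):
    return cosmat(len(X)) @ X / len(X)

N = 8
A, B, C = (np.random.randn(N) for _ in range(3))
y1 = imdct(mdct(np.concatenate([A, B])))    # (A - A_R, B + B_R) / 2
y2 = imdct(mdct(np.concatenate([B, C])))    # (B - B_R, C + C_R) / 2
print(np.allclose(y1[N:] + y2[:N], B))      # True: the aliasing cancels
```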
=== Computation ===
Although the direct application of the MDCT formula would require O(N2) operations, it is possible to compute the same thing with only O(N log N) complexity by recursively factorizing the computation, as in the fast Fourier transform (FFT). One can also compute MDCTs via other transforms, typically a DFT (FFT) or a DCT, combined with O(N) pre- and post-processing steps. Also, as described below, any algorithm for the DCT-IV immediately provides a method to compute the MDCT and IMDCT of even size.
== Window functions ==
In typical signal-compression applications, the transform properties are further improved by using a window function wn (n = 0, ..., 2N − 1) that is multiplied with xn in the MDCT and with yn in the IMDCT formulas above, in order to avoid discontinuities at the n = 0 and 2N boundaries by making the function go smoothly to zero at those points. (That is, the window function is applied to the data before the MDCT or after the IMDCT.) In principle, x and y could have different window functions, and the window function could also change from one block to the next (especially for the case where data blocks of different sizes are combined), but for simplicity we consider the common case of identical window functions for equal-sized blocks.
The transform remains invertible (that is, TDAC works), for a symmetric window wn = w2N−1−n, as long as w satisfies the Princen–Bradley condition:
{\displaystyle w_{n}^{2}+w_{n+N}^{2}=1.}
Various window functions are used. A window that produces a form known as a modulated lapped transform (MLT) is given by
{\displaystyle w_{n}=\sin \left[{\frac {\pi }{2N}}\left(n+{\frac {1}{2}}\right)\right]}
and is used for MP3 and MPEG-2 AAC, and
{\displaystyle w_{n}=\sin \left({\frac {\pi }{2}}\sin ^{2}\left[{\frac {\pi }{2N}}\left(n+{\frac {1}{2}}\right)\right]\right)}
for Vorbis. AC-3 uses a Kaiser–Bessel derived (KBD) window, and MPEG-4 AAC can also use a KBD window.
Note that windows applied to the MDCT are different from windows used for some other types of signal analysis, since they must fulfill the Princen–Bradley condition. One of the reasons for this difference is that MDCT windows are applied twice, for both the MDCT (analysis) and the IMDCT (synthesis).
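Both windows quoted above, and the condition they satisfy, can be checked in a few lines (a numpy sketch; function names are our own):

```python
import numpy as np

def sine_window(two_N):
    n = np.arange(two_N)
    return np.sin(np.pi / two_N * (n + 0.5))    # MLT window, as in MP3/AAC

def vorbis_window(two_N):
    return np.sin(np.pi / 2 * sine_window(two_N) ** 2)

for w in (sine_window(16), vorbis_window(16)):
    N = len(w) // 2
    # Princen-Bradley condition: w_n**2 + w_{n+N}**2 = 1.
    print(np.allclose(w[:N] ** 2 + w[N:] ** 2, 1.0))   # True, True
```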
== Relationship to DCT-IV and origin of TDAC ==
As can be seen by inspection of the definitions, for even N the MDCT is essentially equivalent to a DCT-IV, where the input is shifted by N/2 and two N-blocks of data are transformed at once. By examining this equivalence more carefully, important properties like TDAC can be easily derived.
In order to define the precise relationship to the DCT-IV, one must realize that the DCT-IV corresponds to alternating even/odd boundary conditions: even at its left boundary (around n = −1/2), odd at its right boundary (around n = N − 1/2), and so on (instead of periodic boundaries as for a DFT). This follows from the identities
{\displaystyle \cos \left[{\frac {\pi }{N}}\left(-n-1+{\frac {1}{2}}\right)\left(k+{\frac {1}{2}}\right)\right]=\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)\left(k+{\frac {1}{2}}\right)\right]}
and
{\displaystyle \cos \left[{\frac {\pi }{N}}\left(2N-n-1+{\frac {1}{2}}\right)\left(k+{\frac {1}{2}}\right)\right]=-\cos \left[{\frac {\pi }{N}}\left(n+{\frac {1}{2}}\right)\left(k+{\frac {1}{2}}\right)\right].}
Thus, if its inputs are an array x of length N, we can imagine extending this array to (x, −xR, −x, xR, ...) and so on, where xR denotes x in reverse order.
Consider an MDCT with 2N inputs and N outputs, where we divide the inputs into four blocks (a, b, c, d) each of size N/2. If we shift these to the right by N/2 (from the +N/2 term in the MDCT definition), then (b, c, d) extend past the end of the N DCT-IV inputs, so we must "fold" them back according to the boundary conditions described above.
Thus, the MDCT of 2N inputs (a, b, c, d) is exactly equivalent to a DCT-IV of the N inputs: (−cR − d, a − bR), where R denotes reversal as above.
In this way, any algorithm to compute the DCT-IV can be trivially applied to the MDCT.
Similarly, the IMDCT formula above is precisely 1/2 of the DCT-IV (which is its own inverse), where the output is extended (via the boundary conditions) to a length 2N and shifted back to the left by N/2. The inverse DCT-IV would simply give back the inputs (−cR − d, a − bR) from above. When this is extended via the boundary conditions and shifted, one obtains
IMDCT(MDCT(a, b, c, d)) = (a − bR, b − aR, c + dR, d + cR)/2.
Half of the IMDCT outputs are thus redundant, as b − aR = −(a − bR)R, and likewise for the last two terms. If we group the input into bigger blocks A,B of size N, where A = (a, b) and B = (c, d), we can write this result in a simpler way:
IMDCT(MDCT(A, B)) = (A − AR, B + BR)/2.
One can now understand how TDAC works. Suppose that one computes the MDCT of the subsequent, 50% overlapped, 2N block (B, C). The IMDCT will then yield, analogous to the above: (B − BR, C + CR)/2. When this is added with the previous IMDCT result in the overlapping half, the reversed terms cancel and one obtains simply B, recovering the original data.
=== Origin of TDAC ===
The origin of the term "time-domain aliasing cancellation" is now clear. The use of input data that extend beyond the boundaries of the logical DCT-IV causes the data to be aliased in the same way that frequencies beyond the Nyquist frequency are aliased to lower frequencies, except that this aliasing occurs in the time domain instead of the frequency domain: we cannot distinguish the contributions of a and of bR to the MDCT of (a, b, c, d), or equivalently, to the result of IMDCT(MDCT(a, b, c, d)) = (a − bR, b − aR, c + dR, d + cR)/2.
The combinations c − dR and so on have precisely the right signs for the combinations to cancel when they are added.
For odd N (which are rarely used in practice), N/2 is not an integer, so the MDCT is not simply a shift permutation of a DCT-IV. In this case, the additional shift by half a sample means that the MDCT/IMDCT becomes equivalent to the DCT-III/II, and the analysis is analogous to the above.
=== Smoothness and discontinuities ===
We have seen above that the MDCT of 2N inputs (a, b, c, d) is equivalent to a DCT-IV of the N inputs (−cR − d, a − bR).
The DCT-IV is designed for the case where the function at the right boundary is odd, and therefore the values near the right boundary are close to 0. If the input signal is smooth, this is the case: the rightmost components of a and bR are consecutive in the input sequence (a, b, c, d), and therefore their difference is small.
Let us look at the middle of the interval: if we rewrite the above expression as (−cR − d, a − bR) = (−d, a) − (b, c)R, the second term, (b, c)R, gives a smooth transition in the middle.
However, in the first term, (−d, a), there is a potential discontinuity where the right end of −d meets the left end of a.
This is the reason for using a window function that reduces the components near the boundaries of the input sequence (a, b, c, d) towards 0.
=== TDAC for the windowed MDCT ===
Above, the TDAC property was proved for the ordinary MDCT, showing that adding IMDCTs of subsequent blocks in their overlapping half recovers the original data. The derivation of this inverse property for the windowed MDCT is only slightly more complicated.
Consider two overlapping consecutive sets of 2N inputs (A,B) and (B,C), for blocks A,B,C of size N.
Recall from above that when {\displaystyle (A,B)} and {\displaystyle (B,C)} are MDCTed, IMDCTed, and added in their overlapping half, we obtain {\displaystyle (B+B_{R})/2+(B-B_{R})/2=B}, the original data.
Now we suppose that we multiply both the MDCT inputs and the IMDCT outputs by a window function of length 2N. As above, we assume a symmetric window function, which is therefore of the form {\displaystyle (W,W_{R})} where W is a length-N vector and R denotes reversal as before. Then the Princen–Bradley condition can be written as {\displaystyle W^{2}+W_{R}^{2}=(1,1,\ldots )}, with the squares and additions performed elementwise.
Therefore, instead of MDCTing {\displaystyle (A,B)}, we now MDCT {\displaystyle (WA,W_{R}B)} (with all multiplications performed elementwise). When this is IMDCTed and multiplied again (elementwise) by the window function, the last-N half becomes:
{\displaystyle W_{R}\cdot (W_{R}B+(W_{R}B)_{R})=W_{R}\cdot (W_{R}B+WB_{R})=W_{R}^{2}B+WW_{R}B_{R}}.
(Note that we no longer have the multiplication by 1/2, because the IMDCT normalization differs by a factor of 2 in the windowed case.)
Similarly, the windowed MDCT and IMDCT of {\displaystyle (B,C)} yields, in its first-N half: {\displaystyle W\cdot (WB-W_{R}B_{R})=W^{2}B-WW_{R}B_{R}}.
When we add these two halves together, we obtain:
{\displaystyle (W_{R}^{2}B+WW_{R}B_{R})+(W^{2}B-WW_{R}B_{R})=\left(W_{R}^{2}+W^{2}\right)B=B,}
recovering the original data.
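The derivation can be confirmed numerically; the self-contained sketch below (numpy assumed; the sine window is one valid choice) chains window, MDCT, IMDCT with the 2/N normalization noted above, and window again for two overlapped blocks, and checks perfect reconstruction of the shared block.

```python
import numpy as np

def cosmat(N):
    n, k = np.arange(2 * N), np.arange(N)
    return np.cos(np.pi / N * np.outer(n + 0.5 + N / 2, k + 0.5))

N = 8
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))   # sine (MLT) window

def analysis(block):            # window, then MDCT
    return (w * block) @ cosmat(N)

def synthesis(X):               # IMDCT with 2/N normalization, then window
    return w * (cosmat(N) @ X) * 2.0 / N

A, B, C = (np.random.randn(N) for _ in range(3))
y1 = synthesis(analysis(np.concatenate([A, B])))
y2 = synthesis(analysis(np.concatenate([B, C])))
print(np.allclose(y1[N:] + y2[:N], B))      # True: perfect reconstruction
```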
== See also ==
Discrete cosine transform
Other overlapping windowed Fourier transforms include:
Modulated complex lapped transform
Short-time Fourier transform
Welch's method
Audio coding format
Audio compression (data)
== References ==
== Bibliography ==
Henrique S. Malvar, Signal Processing with Lapped Transforms (Artech House: Norwood MA, 1992).
A. W. Johnson and A. B. Bradley, "Adaptive transform coding incorporating time domain aliasing cancellation," Speech Comm. 6, 299-308 (1987).
For algorithms, see examples:
Chi-Min Liu and Wen-Chieh Lee, "A unified fast algorithm for cosine modulated filterbanks in current audio standards", J. Audio Engineering 47 (12), 1061-1075 (1999).
V. Britanak and K. R. Rao, "A new fast algorithm for the unified forward and inverse MDCT/MDST computation," Signal Processing 82, 433-459 (2002)
Vladimir Nikolajevic and Gerhard Fettweis, "Computation of forward and inverse MDCT using Clenshaw's recurrence formula," IEEE Trans. Sig. Proc. 51 (5), 1439-1444 (2003)
Che-Hong Chen, Bin-Da Liu, and Jar-Ferr Yang, "Recursive architectures for realizing modified discrete cosine transform and its inverse," IEEE Trans. Circuits Syst. II: Analog Dig. Sig. Proc. 50 (1), 38-45 (2003)
J.S. Wu, H.Z. Shu, L. Senhadji, and L.M. Luo, "Mixed-radix algorithm for the computation of forward and inverse MDCTs," IEEE Trans. Circuits Syst. I: Reg. Papers 56 (4), 784-794 (2009)
V. Britanak, "A survey of efficient MDCT implementations in MP3 audio coding standard: retrospective and state-of-the-art," Signal. Process. 91 (4), 624-672(2011) | Wikipedia/Modified_discrete_cosine_transform |
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding an expected distortion D.
== Introduction ==
Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions.
Rate–distortion theory was created by Claude Shannon in his foundational work on information theory.
In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of on-going discussion. In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since we know that most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video) the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.
== Distortion functions ==
Distortion functions measure the cost of representing a symbol x by an approximated symbol x̂. Typical distortion functions are the Hamming distortion and the squared-error distortion.
=== Hamming distortion ===
{\displaystyle d(x,{\hat {x}})={\begin{cases}0&{\text{if }}x={\hat {x}}\\1&{\text{if }}x\neq {\hat {x}}\end{cases}}}
=== Squared-error distortion ===
{\displaystyle d(x,{\hat {x}})=\left(x-{\hat {x}}\right)^{2}}
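Both measures are immediate to state in code. A minimal sketch in Python (function names are illustrative, not from any particular library):

def hamming_distortion(x, x_hat):
    # 0 if the reproduction matches the source symbol, 1 otherwise
    return 0 if x == x_hat else 1

def squared_error_distortion(x, x_hat):
    # squared difference between source and reproduction
    return (x - x_hat) ** 2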
== Rate–distortion functions ==
The functions that relate the rate and distortion are found as the solution of the following minimization problem:
{\displaystyle \inf _{Q_{Y\mid X}(y\mid x)}I_{Q}(Y;X){\text{ subject to }}D_{Q}\leq D^{*}.}
Here Q_{Y|X}(y|x), sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) Y for a given input (original signal) X, and I_Q(Y;X) is the mutual information between Y and X, defined as
{\displaystyle I(Y;X)=H(Y)-H(Y\mid X)}
where H(Y) and H(Y|X) are the entropy of the output signal Y and the conditional entropy of the output signal given the input signal, respectively:
{\displaystyle H(Y)=-\int _{-\infty }^{\infty }P_{Y}(y)\log _{2}(P_{Y}(y))\,dy}
{\displaystyle H(Y\mid X)=-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }Q_{Y\mid X}(y\mid x)P_{X}(x)\log _{2}(Q_{Y\mid X}(y\mid x))\,dx\,dy.}
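For discrete alphabets the integrals above become sums, and the mutual information can be computed directly from a joint probability table. A minimal NumPy sketch (the function name is illustrative):

import numpy as np

def mutual_information_bits(p_xy):
    # p_xy: joint probability table p_xy[x, y], entries summing to 1
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X, column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y, row vector
    mask = p_xy > 0                          # skip zero-probability cells
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(mutual_information_bits(p_xy))  # about 0.278 bits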
The problem can also be formulated as a distortion–rate function, where we find the infimum over achievable distortions for a given rate constraint. The relevant expression is:
{\displaystyle \inf _{Q_{Y\mid X}(y\mid x)}E[D_{Q}[X,Y]]{\text{ subject to }}I_{Q}(Y;X)\leq R.}
The two formulations lead to functions which are inverses of each other.
The mutual information can be understood as a measure of the 'prior' uncertainty the receiver has about the sender's signal, H(Y), diminished by the uncertainty that remains after receiving information about the sender's signal, H(Y|X). The decrease in uncertainty is due to the communicated amount of information, which is I(Y;X).
As an example, if there is no communication at all, then H(Y|X) = H(Y) and I(Y;X) = 0. Alternatively, if the communication channel is perfect and the received signal Y is identical to the signal X at the sender, then H(Y|X) = 0 and I(Y;X) = H(X) = H(Y).
In the definition of the rate–distortion function, D_Q and D* are the distortion between X and Y for a given Q_{Y|X}(y|x) and the prescribed maximum distortion, respectively. When we use the mean squared error as the distortion measure, we have (for amplitude-continuous signals):
{\displaystyle D_{Q}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }P_{X,Y}(x,y)(x-y)^{2}\,dx\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }Q_{Y\mid X}(y\mid x)P_{X}(x)(x-y)^{2}\,dx\,dy.}
As the above equations show, calculating a rate–distortion function requires a stochastic description of the input X in terms of its PDF P_X(x); one then seeks the conditional PDF Q_{Y|X}(y|x) that minimizes the rate for a given distortion D*. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well.
An analytical solution to this minimization problem is often difficult to obtain; two of the best-known exceptions are given next. The rate–distortion function of any source is known to obey several fundamental properties, the most important being that it is a continuous, monotonically decreasing convex (U-shaped) function; the shape of the function in the examples is thus typical (even measured rate–distortion functions in real life tend to have very similar forms).
Although analytical solutions to this problem are scarce, there are upper and lower bounds to these functions, including the famous Shannon lower bound (SLB), which in the case of squared error and memoryless sources states that, for arbitrary sources with finite differential entropy,
{\displaystyle R(D)\geq h(X)-h(D)}
where h(D) is the differential entropy of a Gaussian random variable with variance D. This lower bound extends to sources with memory and to other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low-distortion regime for a wide class of sources, and on some occasions it actually coincides with the rate–distortion function. Shannon lower bounds can generally be found if the distortion between any two numbers can be expressed as a function of the difference between the values of these two numbers.
The Blahut–Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate–distortion functions of arbitrary finite input/output alphabet sources and much work has been done to extend it to more general problem instances.
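The core iteration is short. The following is a minimal sketch (not tied to any particular library; the function name, the fixed iteration count, and the Bernoulli example are only illustrative) in which each value of the Lagrange multiplier beta traces out one point (D, R) of the curve:

import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    # p_x: source distribution, shape (nx,); d: distortion matrix d[x, y], shape (nx, ny)
    # beta: Lagrange multiplier trading rate against distortion
    ny = d.shape[1]
    q_y = np.full(ny, 1.0 / ny)               # initial guess for the output marginal
    for _ in range(n_iter):
        w = q_y * np.exp(-beta * d)           # unnormalized test channel
        Q = w / w.sum(axis=1, keepdims=True)  # optimal Q(y|x) for the current q_y
        q_y = p_x @ Q                         # output marginal induced by the test channel
    D = float(np.sum(p_x[:, None] * Q * d))                  # expected distortion
    R = float(np.sum(p_x[:, None] * Q * np.log2(Q / q_y)))   # mutual information in bits
    return D, R

# Bernoulli(0.5) source under Hamming distortion
p = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(blahut_arimoto(p, dist, beta=3.0))

A production implementation would replace the fixed iteration count with a convergence test and add numerical safeguards; sweeping beta traces the whole curve.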
The computation of the rate–distortion function requires knowledge of the underlying distribution, which is often unavailable in contemporary applications in data science and machine learning. This challenge can be addressed using deep-learning-based estimators of the rate–distortion function. These estimators are typically referred to as 'neural estimators' and involve the optimization of a parametrized variational form of the rate–distortion objective.
When working with stationary sources with memory, it is necessary to modify the definition of the rate–distortion function; it must be understood as a limit taken over sequences of increasing length:
{\displaystyle R(D)=\lim _{n\rightarrow \infty }R_{n}(D)}
where
{\displaystyle R_{n}(D)={\frac {1}{n}}\inf _{Q_{Y^{n}\mid X^{n}}\in {\mathcal {Q}}}I(Y^{n},X^{n})}
and
{\displaystyle {\mathcal {Q}}=\{Q_{Y^{n}\mid X^{n}}(Y^{n}\mid X^{n},X_{0}):E[d(X^{n},Y^{n})]\leq D\}}
where superscripts denote a complete sequence up to that time and the subscript 0 indicates the initial state.
=== Memoryless (independent) Gaussian source with squared-error distortion ===
If we assume that X is a Gaussian random variable with variance σ², and if we assume that successive samples of the signal X are stochastically independent (or equivalently, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate–distortion function:
{\displaystyle R(D)={\begin{cases}{\frac {1}{2}}\log _{2}(\sigma _{x}^{2}/D),&{\text{if }}0\leq D\leq \sigma _{x}^{2}\\0,&{\text{if }}D>\sigma _{x}^{2}.\end{cases}}}
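In code this is a one-line case distinction; a minimal sketch (names illustrative):

from math import log2

def rate_gaussian(D, sigma2):
    # R(D) for a memoryless Gaussian source, in bits per sample
    return 0.5 * log2(sigma2 / D) if 0 < D <= sigma2 else 0.0

print(rate_gaussian(0.25, 1.0))  # 1.0 bit per sample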
The following figure shows what this function looks like:
Rate–distortion theory tells us that 'no compression system exists that performs outside the gray area'. The closer a practical compression system comes to the red (lower) bound, the better it performs. As a general rule, this bound can be attained only by increasing the coding block length parameter. Nevertheless, even at unit blocklengths one can often find good (scalar) quantizers that operate at distances from the rate–distortion function that are practically relevant.
This rate–distortion function holds only for Gaussian memoryless sources. It is known that the Gaussian source is the most "difficult" source to encode: for a given mean squared error, it requires the greatest number of bits. The performance of a practical compression system working on, say, images may well be below the R(D) lower bound shown.
=== Memoryless (independent) Bernoulli source with Hamming distortion ===
The rate–distortion function of a Bernoulli random variable with Hamming distortion is given by:
{\displaystyle R(D)=\left\{{\begin{matrix}H_{b}(p)-H_{b}(D),&0\leq D\leq \min {(p,1-p)}\\0,&D>\min {(p,1-p)}\end{matrix}}\right.}
where H_b denotes the binary entropy function.
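A corresponding sketch, with the binary entropy function written out explicitly (names illustrative):

from math import log2

def binary_entropy(p):
    # H_b(p) in bits; 0 by convention at p = 0 or p = 1
    return -p * log2(p) - (1 - p) * log2(1 - p) if 0 < p < 1 else 0.0

def rate_bernoulli(D, p):
    # R(D) for a Bernoulli(p) source under Hamming distortion
    return binary_entropy(p) - binary_entropy(D) if D < min(p, 1 - p) else 0.0

print(rate_bernoulli(0.1, 0.5))  # about 0.531 bits per symbol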
Plot of the rate–distortion function for p = 0.5:
== Connecting rate-distortion theory to channel capacity ==
Suppose we want to transmit information about a source to the user with a distortion not exceeding D. Rate–distortion theory tells us that at least R(D) bits/symbol of information from the source must reach the user. We also know from Shannon's channel coding theorem that if the source entropy is H bits/symbol, and the channel capacity is C (where C < H), then H − C bits/symbol will be lost when transmitting this information over the given channel. For the user to have any hope of reconstructing with a maximum distortion D, we must impose the requirement that the information lost in transmission does not exceed the maximum tolerable loss of H − R(D) bits/symbol. This means that the channel capacity must be at least as large as R(D).
== See also ==
Blahut–Arimoto algorithm – Class of algorithms in information theory
Data compression – Compact encoding of digital data
Decorrelation – Process of reducing correlation within one or more signals
Rate–distortion optimization – decision algorithm used in video compression
Sphere packing – Geometrical structure
White noise – Type of signal in signal processing
== References ==
== External links ==
Marzen, Sarah; DeDeo, Simon. "PyRated: a python package for rate distortion theory". PyRated is a very simple Python package to do the most basic calculation in rate-distortion theory: the determination of the "codebook" and the transmission rate R, given a utility function (distortion matrix) and a Lagrange multiplier beta.
VcDemo Image and Video Compression Learning Tool | Wikipedia/Rate–distortion_theory |
Molecular dynamics (MD) is a computer simulation method for analyzing the physical movements of atoms and molecules. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields. The method is applied mostly in chemical physics, materials science, and biophysics.
Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned, generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated.
For systems that obey the ergodic hypothesis, the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed "statistical mechanics by numbers" and "Laplace's vision of Newtonian mechanics" of predicting the future by animating nature's forces and allowing insight into molecular motion on an atomic scale.
== History ==
MD was originally developed in the early 1950s, following earlier successes with Monte Carlo simulations—which themselves date back to the eighteenth century, in the Buffon's needle problem for example—but was popularized for statistical mechanics at Los Alamos National Laboratory by Marshall Rosenbluth and Nicholas Metropolis in what is known today as the Metropolis–Hastings algorithm. Interest in the time evolution of N-body systems dates much earlier to the seventeenth century, beginning with Isaac Newton, and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the Solar System. Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre. Numerical calculations with these algorithms can be considered to be MD done "by hand".
As early as 1941, integration of the many-body equations of motion was carried out with analog computers. Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J.D. Bernal describes this process in 1962, writing: "... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption." Following the discovery of microscopic particles and the development of computers, interest expanded beyond the proving ground of gravitational systems to the statistical properties of matter. In an attempt to understand the origin of irreversibility, Enrico Fermi proposed in 1953, and published in 1955, the use of the early computer MANIAC I, also at Los Alamos National Laboratory, to solve the time evolution of the equations of motion for a many-body system subject to several choices of force laws. Today, this seminal work is known as the Fermi–Pasta–Ulam–Tsingou problem. The time evolution of the energy from the original work is shown in the figure to the right.
In 1957, Berni Alder and Thomas Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, in perhaps the first realistic simulation of matter, J.B. Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. In 1964, Aneesur Rahman published simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Today, the Lennard-Jones potential is still one of the most frequently used intermolecular potentials. It is used for describing simple substances (a.k.a. Lennard-Jonesium) for conceptual and model studies and as a building block in many force fields of real substances.
== Areas of application and limits ==
First used in theoretical physics, the molecular dynamics method gained popularity in materials science soon afterward, and since the 1970s it has also been commonly used in biochemistry and biophysics. MD is frequently used to refine 3-dimensional structures of proteins and other macromolecules based on experimental constraints from X-ray crystallography or NMR spectroscopy. In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth and ion subplantation, and to examine the physical properties of nanotechnological devices that have not or cannot yet be created. In biophysics and structural biology, the method is frequently applied to study the motions of macromolecules such as proteins and nucleic acids, which can be useful for interpreting the results of certain biophysical experiments and for modeling interactions with other molecules, as in ligand docking. In principle, MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil. MD can also be used to compute other thermodynamic properties such as drug solubilities and free energies of solvation including in polymers.
The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is NMR spectroscopy. MD-derived structure predictions can be tested through community-wide experiments in Critical Assessment of Protein Structure Prediction (CASP), although the method has historically had limited success in this area. Michael Levitt, who shared the Nobel Prize partly for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "... a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Improvements in computational resources permitting more and longer MD trajectories, combined with modern improvements in the quality of force field parameters, have yielded some improvements in both structure prediction and homology model refinement, without reaching the point of practical utility in these areas; many identify force field parameters as a key area for further development.
MD simulation has been reported for pharmacophore development and drug design. For example, Pinto et al. implemented MD simulations of Bcl-xL complexes to calculate average positions of critical amino acids involved in ligand binding. Carlson et al. implemented molecular dynamics simulations to identify compounds that complement a receptor while causing minimal disruption to the conformation and flexibility of the active site. Snapshots of the protein at constant time intervals during the simulation were overlaid to identify conserved binding regions (conserved in at least three out of eleven frames) for pharmacophore development. Spyrakis et al. relied on a workflow of MD simulations, fingerprints for ligands and proteins (FLAP) and linear discriminant analysis (LDA) to identify the best ligand-protein conformations to act as pharmacophore templates based on retrospective ROC analysis of the resulting pharmacophores. In an attempt to ameliorate structure-based drug discovery modeling, vis-à-vis the need for many modeled compounds, Hatmal et al. proposed a combination of MD simulation and ligand-receptor intermolecular contacts analysis to discern critical intermolecular contacts (binding interactions) from redundant ones in a single ligand–protein complex. Critical contacts can then be converted into pharmacophore models that can be used for virtual screening.
An important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges. This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are usually calculated using the dielectric constant of a vacuum, even though the surrounding aqueous solution has a much higher dielectric constant. Thus, using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is only applicable in a vacuum. However, all types of van der Waals forces are ultimately of electrostatic origin and therefore depend on dielectric properties of the environment. The direct measurement of attraction forces between different materials (as Hamaker constant) shows that "the interaction between hydrocarbons across water is about 10% of that across vacuum". The environment-dependence of van der Waals forces is neglected in standard simulations, but can be included by developing polarizable force fields.
== Design constraints ==
The design of a molecular dynamics simulation should account for the available computational power. Simulation size (n = number of particles), timestep, and total time duration must be selected so that the calculation can finish within a reasonable time period. However, the simulations should be long enough to be relevant to the time scales of the natural processes being studied. To make statistically valid conclusions from the simulations, the time span simulated should match the kinetics of the natural process. Otherwise, it is analogous to making conclusions about how a human walks when only looking at less than one footstep. Most scientific publications about the dynamics of proteins and DNA use data from simulations spanning nanoseconds (10−9 s) to microseconds (10−6 s). To obtain these simulations, several CPU-days to CPU-years are needed. Parallel algorithms allow the load to be distributed among CPUs; an example is the spatial or force decomposition algorithm.
During a classical MD simulation, the most CPU-intensive task is the evaluation of the potential as a function of the particles' internal coordinates. Within that energy evaluation, the most expensive part is the non-bonded or non-covalent one. In big O notation, common molecular dynamics simulations scale as O(n²) if all pair-wise electrostatic and van der Waals interactions must be accounted for explicitly. This computational cost can be reduced by employing electrostatics methods such as particle mesh Ewald summation (O(n log n)), particle-particle-particle mesh (P3M), or good spherical cutoff methods (O(n)).
Another factor that impacts total CPU time needed by a simulation is the size of the integration timestep. This is the time length between evaluations of the potential. The timestep must be chosen small enough to avoid discretization errors (i.e., smaller than the period related to fastest vibrational frequency in the system). Typical timesteps for classical MD are on the order of 1 femtosecond (10−15 s). This value may be extended by using algorithms such as the SHAKE constraint algorithm, which fix the vibrations of the fastest atoms (e.g., hydrogens) into place. Multiple time scale methods have also been developed, which allow extended times between updates of slower long-range forces.
For simulating molecules in a solvent, a choice should be made between explicit and implicit solvent. Explicit solvent particles (such as the TIP3P, SPC/E and SPC-f water models) must be computed expensively by the force field, while implicit solvents use a mean-field approach. Using an explicit solvent is computationally expensive, requiring the inclusion of roughly ten times more particles in the simulation. But the granularity and viscosity of an explicit solvent are essential to reproduce certain properties of the solute molecules. This is especially important for reproducing chemical kinetics.
In all kinds of molecular dynamics simulations, the simulation box size must be large enough to avoid boundary condition artifacts. Boundary conditions are often treated by choosing fixed values at the edges (which may cause artifacts), or by employing periodic boundary conditions in which one side of the simulation loops back to the opposite side, mimicking a bulk phase (which may cause artifacts too).
=== Microcanonical ensemble (NVE) ===
In the microcanonical ensemble, the system is isolated from changes in moles (N), volume (V), and energy (E). It corresponds to an adiabatic process with no heat exchange. A microcanonical molecular dynamics trajectory may be seen as an exchange of potential and kinetic energy, with total energy being conserved. For a system of N particles with coordinates X and velocities V, the following pair of first-order differential equations may be written in Newton's notation as
{\displaystyle F(X)=-\nabla U(X)=M{\dot {V}}(t)}
{\displaystyle V(t)={\dot {X}}(t).}
The potential energy function U(X) of the system is a function of the particle coordinates X. It is referred to simply as the potential in physics, or the force field in chemistry. The first equation comes from Newton's laws of motion; the force F acting on each particle in the system can be calculated as the negative gradient of U(X).
For every time step, each particle's position X and velocity V may be integrated with a symplectic integrator method such as Verlet integration. The time evolution of X and V is called a trajectory. Given the initial positions (e.g., from theoretical knowledge) and velocities (e.g., randomized Gaussian), we can calculate all future (or past) positions and velocities.
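A minimal sketch of one such update, the velocity Verlet step, for a generic force function (illustrative only, with NumPy-style array arithmetic assumed; production integrators add constraints, thermostats, and neighbor-list bookkeeping):

def velocity_verlet_step(x, v, force, m, dt):
    # Advance positions x and velocities v by one timestep dt.
    # force: callable returning F(x) = -grad U(x); m: masses broadcastable against x
    f = force(x)
    v_half = v + 0.5 * dt * f / m                  # first half-kick
    x_new = x + dt * v_half                        # drift
    v_new = v_half + 0.5 * dt * force(x_new) / m   # second half-kick
    return x_new, v_new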
One frequent source of confusion is the meaning of temperature in MD. Commonly we have experience with macroscopic temperatures, which involve a huge number of particles; temperature, however, is a statistical quantity. If there is a large enough number of atoms, the statistical temperature can be estimated from the instantaneous temperature, which is found by equating the kinetic energy of the system to nkBT/2, where n is the number of degrees of freedom of the system.
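Written out, the estimate is one line of arithmetic; a sketch assuming NumPy arrays for velocities and masses, with k_B in matching units (and neglecting constraint corrections to the degree-of-freedom count):

def instantaneous_temperature(v, m, k_B):
    # v: (N, 3) array of velocities; m: (N,) array of masses
    kinetic = 0.5 * float((m * (v ** 2).sum(axis=1)).sum())  # total kinetic energy
    n_dof = v.size                                           # ~3N degrees of freedom
    return 2.0 * kinetic / (n_dof * k_B)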
A temperature-related phenomenon arises due to the small number of atoms that are used in MD simulations. For example, consider simulating the growth of a copper film starting with a substrate containing 500 atoms and a deposition energy of 100 eV. In the real world, the 100 eV from the deposited atom would rapidly be transported through and shared among a large number of atoms (10^10 or more) with no big change in temperature. When there are only 500 atoms, however, the substrate is almost immediately vaporized by the deposition. Something similar happens in biophysical simulations. The temperature of the system in NVE is naturally raised when macromolecules such as proteins undergo exothermic conformational changes and binding.
=== Canonical ensemble (NVT) ===
In the canonical ensemble, amount of substance (N), volume (V) and temperature (T) are conserved. It is also sometimes called constant temperature molecular dynamics (CTMD). In NVT, the energy of endothermic and exothermic processes is exchanged with a thermostat.
A variety of thermostat algorithms are available to add and remove energy from the boundaries of an MD simulation in a more or less realistic way, approximating the canonical ensemble. Popular methods to control temperature include velocity rescaling, the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat, the Andersen thermostat and Langevin dynamics. The Berendsen thermostat might introduce the flying ice cube effect, which leads to unphysical translations and rotations of the simulated system.
It is not trivial to obtain a canonical ensemble distribution of conformations and velocities using these algorithms. How this depends on system size, thermostat choice, thermostat parameters, time step and integrator is the subject of many articles in the field.
=== Isothermal–isobaric (NPT) ensemble ===
In the isothermal–isobaric ensemble, amount of substance (N), pressure (P) and temperature (T) are conserved. In addition to a thermostat, a barostat is needed. It corresponds most closely to laboratory conditions with a flask open to ambient temperature and pressure.
In the simulation of biological membranes, isotropic pressure control is not appropriate. For lipid bilayers, pressure control occurs under constant membrane area (NPAT) or constant surface tension "gamma" (NPγT).
=== Generalized ensembles ===
The replica exchange method is a generalized ensemble. It was originally created to deal with the slow dynamics of disordered spin systems. It is also called parallel tempering. The replica exchange MD (REMD) formulation tries to overcome the multiple-minima problem by exchanging the temperature of non-interacting replicas of the system running at several temperatures.
== Potentials in MD simulations ==
A molecular dynamics simulation requires the definition of a potential function, or a description of the terms by which the particles in the simulation will interact. In chemistry and biology this is usually referred to as a force field and in materials physics as an interatomic potential. Potentials may be defined at many levels of physical accuracy; those most commonly used in chemistry are based on molecular mechanics and embody a classical mechanics treatment of particle-particle interactions that can reproduce structural and conformational changes but usually cannot reproduce chemical reactions.
The reduction from a fully quantum description to a classical potential entails two main approximations. The first one is the Born–Oppenheimer approximation, which states that the dynamics of electrons are so fast that they can be considered to react instantaneously to the motion of their nuclei. As a consequence, they may be treated separately. The second one treats the nuclei, which are much heavier than electrons, as point particles that follow classical Newtonian dynamics. In classical molecular dynamics, the effect of the electrons is approximated as one potential energy surface, usually representing the ground state.
When finer levels of detail are needed, potentials based on quantum mechanics are used; some methods attempt to create hybrid classical/quantum potentials where the bulk of the system is treated classically but a small region is treated as a quantum system, usually undergoing a chemical transformation.
=== Empirical potentials ===
Empirical potentials used in chemistry are frequently called force fields, while those used in materials physics are called interatomic potentials.
Most force fields in chemistry are empirical and consist of a summation of bonded forces associated with chemical bonds, bond angles, and bond dihedrals, and non-bonded forces associated with van der Waals forces and electrostatic charge. Empirical potentials represent quantum-mechanical effects in a limited way through ad hoc functional approximations. These potentials contain free parameters such as atomic charge, van der Waals parameters reflecting estimates of atomic radius, and equilibrium bond length, angle, and dihedral; these are obtained by fitting against detailed electronic calculations (quantum chemical simulations) or experimental physical properties such as elastic constants, lattice parameters and spectroscopic measurements.
Because of the non-local nature of non-bonded interactions, they involve at least weak interactions between all particles in the system. Their calculation is normally the bottleneck in the speed of MD simulations. To lower the computational cost, force fields employ numerical approximations such as shifted cutoff radii, reaction field algorithms, particle mesh Ewald summation, or the newer particle–particle-particle–mesh (P3M).
Chemistry force fields commonly employ preset bonding arrangements (an exception being ab initio dynamics), and thus are unable to model the process of chemical bond breaking and reactions explicitly. On the other hand, many of the potentials used in physics, such as those based on the bond order formalism, can describe several different coordinations of a system and bond breaking. Examples of such potentials include the Brenner potential for hydrocarbons and its further developments for the C-Si-H and C-O-H systems. The ReaxFF potential can be considered a fully reactive hybrid between bond order potentials and chemistry force fields.
=== Pair potentials versus many-body potentials ===
The potential functions representing the non-bonded energy are formulated as a sum over interactions between the particles of the system. The simplest choice, employed in many popular force fields, is the "pair potential", in which the total potential energy can be calculated from the sum of energy contributions between pairs of atoms. Therefore, these force fields are also called "additive force fields". An example of such a pair potential is the non-bonded Lennard-Jones potential (also termed the 6–12 potential), used for calculating van der Waals forces.
{\displaystyle U(r)=4\varepsilon \left[\left({\frac {\sigma }{r}}\right)^{12}-\left({\frac {\sigma }{r}}\right)^{6}\right]}
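A direct transcription of this potential, returning the pair energy together with the force magnitude −dU/dr (a sketch; parameter names mirror the equation):

def lennard_jones(r, epsilon, sigma):
    # Pair energy U(r) and force magnitude -dU/dr for the 6-12 potential
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6 ** 2 - sr6)
    f = 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r
    return u, f

Such a pair term could serve as the force routine inside an integrator like the velocity Verlet step sketched above.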
Another example is the Born (ionic) model of the ionic lattice. The first term in the next equation is Coulomb's law for a pair of ions, the second term is the short-range repulsion explained by Pauli's exclusion principle and the final term is the dispersion interaction term. Usually, a simulation only includes the dipolar term, although sometimes the quadrupolar term is also included. When nl = 6, this potential is also called the Coulomb–Buckingham potential.
{\displaystyle U_{ij}(r_{ij})={\frac {z_{i}z_{j}}{4\pi \epsilon _{0}}}{\frac {1}{r_{ij}}}+A_{l}\exp {\frac {-r_{ij}}{p_{l}}}+C_{l}r_{ij}^{-n_{l}}+\cdots }
In many-body potentials, the potential energy includes the effects of three or more particles interacting with each other. In simulations with pairwise potentials, global interactions in the system also exist, but they occur only through pairwise terms. In many-body potentials, the potential energy cannot be found by a sum over pairs of atoms, as these interactions are calculated explicitly as a combination of higher-order terms. In the statistical view, the dependency between the variables cannot in general be expressed using only pairwise products of the degrees of freedom. For example, the Tersoff potential, which was originally used to simulate carbon, silicon, and germanium, and has since been used for a wide range of other materials, involves a sum over groups of three atoms, with the angles between the atoms being an important factor in the potential. Other examples are the embedded-atom method (EAM), the EDIP, and the Tight-Binding Second Moment Approximation (TBSMA) potentials, where the electron density of states in the region of an atom is calculated from a sum of contributions from surrounding atoms, and the potential energy contribution is then a function of this sum.
=== Semi-empirical potentials ===
Semi-empirical potentials make use of the matrix representation from quantum mechanics. However, the values of the matrix elements are found through empirical formulae that estimate the degree of overlap of specific atomic orbitals. The matrix is then diagonalized to determine the occupancy of the different atomic orbitals, and empirical formulae are used once again to determine the energy contributions of the orbitals.
There are a wide variety of semi-empirical potentials, termed tight-binding potentials, which vary according to the atoms being modeled.
=== Polarizable potentials ===
Most classical force fields implicitly include the effect of polarizability, e.g., by scaling up the partial charges obtained from quantum chemical calculations. These partial charges are stationary with respect to the mass of the atom. But molecular dynamics simulations can explicitly model polarizability with the introduction of induced dipoles through different methods, such as Drude particles or fluctuating charges. This allows for a dynamic redistribution of charge between atoms which responds to the local chemical environment.
For many years, polarizable MD simulations have been touted as the next generation. For homogeneous liquids such as water, increased accuracy has been achieved through the inclusion of polarizability. Some promising results have also been achieved for proteins. However, it is still uncertain how best to approximate polarizability in a simulation. The point becomes more important when a particle experiences different environments during its simulation trajectory, e.g., translocation of a drug through a cell membrane.
=== Potentials in ab initio methods ===
In classical molecular dynamics, one potential energy surface (usually the ground state) is represented in the force field. This is a consequence of the Born–Oppenheimer approximation. In excited states, chemical reactions or when a more accurate representation is needed, electronic behavior can be obtained from first principles using a quantum mechanical method, such as density functional theory. This is named Ab Initio Molecular Dynamics (AIMD). Due to the cost of treating the electronic degrees of freedom, the computational burden of these simulations is far higher than classical molecular dynamics. For this reason, AIMD is typically limited to smaller systems and shorter times.
Ab initio quantum mechanical and chemical methods may be used to calculate the potential energy of a system on the fly, as needed for conformations in a trajectory. This calculation is usually made in the close neighborhood of the reaction coordinate. Although various approximations may be used, these are based on theoretical considerations, not on empirical fitting. Ab initio calculations produce a vast amount of information that is not available from empirical methods, such as density of electronic states or other electronic properties. A significant advantage of using ab initio methods is the ability to study reactions that involve breaking or formation of covalent bonds, which correspond to multiple electronic states. Moreover, ab initio methods also allow recovering effects beyond the Born–Oppenheimer approximation using approaches like mixed quantum-classical dynamics.
=== Hybrid QM/MM ===
QM (quantum-mechanical) methods are very powerful. However, they are computationally expensive, while the MM (classical or molecular mechanics) methods are fast but suffer from several limits (they require extensive parameterization; the energy estimates obtained are not very accurate; they cannot be used to simulate reactions where covalent bonds are broken or formed; and they are limited in their ability to provide accurate details regarding the chemical environment). A new class of methods has emerged that combines the good points of QM (accuracy) and MM (speed) calculations. These methods are termed mixed or hybrid quantum-mechanical and molecular mechanics methods (hybrid QM/MM).
The most important advantage of the hybrid QM/MM method is speed. The cost of doing classical molecular dynamics (MM) in the most straightforward case scales as O(n²), where n is the number of atoms in the system. This is mainly due to the electrostatic interactions term (every particle interacts with every other particle). However, the use of a cutoff radius, periodic pair-list updates and, more recently, variants of the particle mesh Ewald (PME) method have reduced this to between O(n) and O(n²). In other words, if a system with twice as many atoms is simulated, then it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations typically scale as O(n³) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(n^2.7)). To overcome the limit, a small part of the system is treated quantum-mechanically (typically the active site of an enzyme) and the remaining system is treated classically.
In more sophisticated implementations, QM/MM methods exist to treat both light nuclei susceptible to quantum effects (such as hydrogens) and electronic states. This allows generating hydrogen wave-functions (similar to electronic wave-functions). This methodology has been useful in investigating phenomena such as hydrogen tunneling. One example where QM/MM methods have provided new discoveries is the calculation of hydride transfer in the enzyme liver alcohol dehydrogenase. In this case, quantum tunneling is important for the hydrogen, as it determines the reaction rate.
=== Coarse-graining and reduced representations ===
At the other end of the detail scale are coarse-grained and lattice models. Instead of explicitly representing every atom of the system, one uses "pseudo-atoms" to represent groups of atoms. MD simulations on very large systems may require such large computer resources that they cannot easily be studied by traditional all-atom methods. Similarly, simulations of processes on long timescales (beyond about 1 microsecond) are prohibitively expensive, because they require so many time steps. In these cases, one can sometimes tackle the problem by using reduced representations, which are also called coarse-grained models.
Examples of coarse-graining (CG) methods are discontinuous molecular dynamics (CG-DMD) and Go-models. Coarse-graining is sometimes done using larger pseudo-atoms. Such united-atom approximations have been used in MD simulations of biological membranes. Implementation of such an approach on systems where electrical properties are of interest can be challenging owing to the difficulty of using a proper charge distribution on the pseudo-atoms. The aliphatic tails of lipids are represented by a few pseudo-atoms by gathering 2 to 4 methylene groups into each pseudo-atom.
The parameterization of these very coarse-grained models must be done empirically, by matching the behavior of the model to appropriate experimental data or all-atom simulations. Ideally, these parameters should account for both enthalpic and entropic contributions to free energy in an implicit way. When coarse-graining is done at higher levels, the accuracy of the dynamic description may be less reliable. But very coarse-grained models have been used successfully to examine a wide range of questions in structural biology, liquid crystal organization, and polymer glasses.
Examples of applications of coarse-graining:
protein folding and protein structure prediction studies are often carried out using one, or a few, pseudo-atoms per amino acid;
liquid crystal phase transitions have been examined in confined geometries and/or during flow using the Gay-Berne potential, which describes anisotropic species;
polymer glasses during deformation have been studied using simple harmonic or FENE springs to connect spheres described by the Lennard-Jones potential;
DNA supercoiling has been investigated using 1–3 pseudo-atoms per basepair, and at even lower resolution;
Packaging of double-helical DNA into bacteriophage has been investigated with models where one pseudo-atom represents one turn (about 10 basepairs) of the double helix;
RNA structure in the ribosome and other large systems has been modeled with one pseudo-atom per nucleotide.
The simplest form of coarse-graining is the united atom (sometimes called extended atom) and was used in most early MD simulations of proteins, lipids, and nucleic acids. For example, instead of treating all four atoms of a CH3 methyl group explicitly (or all three atoms of CH2 methylene group), one represents the whole group with one pseudo-atom. It must, of course, be properly parameterized so that its van der Waals interactions with other groups have the proper distance-dependence. Similar considerations apply to the bonds, angles, and torsions in which the pseudo-atom participates. In this kind of united atom representation, one typically eliminates all explicit hydrogen atoms except those that have the capability to participate in hydrogen bonds (polar hydrogens). An example of this is the CHARMM 19 force-field.
The polar hydrogens are usually retained in the model, because proper treatment of hydrogen bonds requires a reasonably accurate description of the directionality and the electrostatic interactions between the donor and acceptor groups. A hydroxyl group, for example, can be both a hydrogen bond donor, and a hydrogen bond acceptor, and it would be impossible to treat this with one OH pseudo-atom. About half the atoms in a protein or nucleic acid are non-polar hydrogens, so the use of united atoms can provide a substantial savings in computer time.
=== Machine Learning Force Fields ===
Machine learning force fields (MLFFs) represent one approach to modeling interatomic interactions in molecular dynamics simulations. MLFFs can achieve accuracy close to that of ab initio methods. Once trained, MLFFs are much faster than direct quantum mechanical calculations. MLFFs address the limitations of traditional force fields by learning complex potential energy surfaces directly from high-level quantum mechanical data. Several software packages now support MLFFs, including VASP and open-source libraries like DeePMD-kit and SchNetPack.
== Incorporating solvent effects ==
In many simulations of a solute-solvent system the main focus is on the behavior of the solute, with little interest in the behavior of the solvent, particularly for those solvent molecules residing in regions far from the solute molecule. Solvents may influence the dynamic behavior of solutes via random collisions and by imposing a frictional drag on the motion of the solute through the solvent. The use of non-rectangular periodic boundary conditions, stochastic boundaries and solvent shells can all help reduce the number of solvent molecules required and enable a larger proportion of the computing time to be spent instead on simulating the solute. It is also possible to incorporate the effects of a solvent without needing any explicit solvent molecules present. One example of this approach is to use a potential of mean force (PMF), which describes how the free energy changes as a particular coordinate is varied. The free energy change described by the PMF contains the averaged effects of the solvent.
Without incorporating the effects of solvent, simulations of macromolecules (such as proteins) may yield unrealistic behavior, and even small molecules may adopt more compact conformations due to favourable van der Waals forces and electrostatic interactions that would be dampened in the presence of a solvent.
== Long-range forces ==
A long-range interaction is an interaction in which the spatial interaction falls off no faster than r^(−d), where d is the dimensionality of the system. Examples include charge–charge interactions between ions and dipole–dipole interactions between molecules. Modelling these forces presents quite a challenge, as they are significant over a distance that may be larger than half the box length in simulations of many thousands of particles. Though one solution would be to significantly increase the box length, this brute-force approach is less than ideal, as the simulation would become computationally very expensive. Spherically truncating the potential is also out of the question, as unrealistic behaviour may be observed when the distance is close to the cut-off distance.
== Steered molecular dynamics (SMD) ==
Steered molecular dynamics (SMD) simulations, or force probe simulations, apply forces to a protein in order to manipulate its structure by pulling it along desired degrees of freedom. These experiments can be used to reveal structural changes in a protein at the atomic level. SMD is often used to simulate events such as mechanical unfolding or stretching.
There are two typical protocols of SMD: one in which the pulling velocity is held constant, and one in which the applied force is constant. Typically, part of the studied system (e.g., an atom in a protein) is restrained by a harmonic potential. Forces are then applied to specific atoms at either a constant velocity or a constant force. Umbrella sampling is used to move the system along the desired reaction coordinate by varying, for example, the forces, distances, and angles manipulated in the simulation. Through umbrella sampling, all of the system's configurations, both high-energy and low-energy, are adequately sampled. Each configuration's change in free energy can then be calculated as the potential of mean force. A popular method of computing the PMF is the weighted histogram analysis method (WHAM), which analyzes a series of umbrella sampling simulations.
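As an illustration of the constant-velocity protocol, the force from a harmonic restraint whose anchor moves at fixed speed can be written in a few lines; this is a schematic one-dimensional fragment (all names are illustrative), not the implementation of any particular MD package:

def smd_pulling_force(x, t, x0, v_pull, k):
    # Harmonic spring of stiffness k whose anchor moves at constant velocity:
    # F(t) = k * ((x0 + v_pull * t) - x)
    return k * ((x0 + v_pull * t) - x)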
Many important applications of SMD are in the field of drug discovery and the biomolecular sciences. For example, SMD was used to investigate the stability of Alzheimer's protofibrils, to study the protein–ligand interaction in cyclin-dependent kinase 5, and even to show the effect of an electric field on a thrombin (protein) and aptamer (nucleotide) complex, among many other studies.
== Examples of applications ==
Molecular dynamics is used in many fields of science.
The first MD simulation of a simplified biological folding process was published in 1975. This simulation, published in Nature, paved the way for the vast area of modern computational protein folding.
The first MD simulation of a biological process was published in 1976. This simulation, published in Nature, paved the way for understanding protein motion as essential to function, not just an accessory.
MD is the standard method to treat collision cascades in the heat spike regime, i.e., the effects that energetic neutron and ion irradiation have on solids and solid surfaces.
The following biophysical examples illustrate notable efforts to produce simulations of systems of very large size (a complete virus) or very long simulation times (up to 1.112 milliseconds):
MD simulation of the full satellite tobacco mosaic virus (STMV) (2006, Size: 1 million atoms, Simulation time: 50 ns, program: NAMD) This virus is a small, icosahedral plant virus that worsens the symptoms of infection by Tobacco Mosaic Virus (TMV). Molecular dynamics simulations were used to probe the mechanisms of viral assembly. The entire STMV particle consists of 60 identical copies of one protein that make up the viral capsid (coating), and a 1063-nucleotide single-stranded RNA genome. One key finding is that the capsid is very unstable when there is no RNA inside. The simulation would take one 2006 desktop computer around 35 years to complete. It was thus done on many processors in parallel with continuous communication between them.
Folding simulations of the Villin Headpiece in all-atom detail (2006, Size: 20,000 atoms; Simulation time: 500 μs = 500,000 ns, Program: Folding@home) This simulation was run on 200,000 CPUs of participating personal computers around the world. These computers had the Folding@home program installed, a large-scale distributed computing effort coordinated by Vijay Pande at Stanford University. The kinetic properties of the Villin Headpiece protein were probed by using many independent, short trajectories run by CPUs without continuous real-time communication. One method employed was the Pfold value analysis, which measures the probability of folding before unfolding for a specific starting conformation. Pfold gives information about transition state structures and an ordering of conformations along the folding pathway. Each trajectory in a Pfold calculation can be relatively short, but many independent trajectories are needed.
Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom application-specific integrated circuits (ASICs) and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton is a 1.112-millisecond simulation of NTL9 at 355 K; a second, independent 1.073-millisecond simulation of this configuration was also performed (and many other simulations of over 250 μs continuous chemical time). In How Fast-Folding Proteins Fold, researchers Kresten Lindorff-Larsen, Stefano Piana, Ron O. Dror, and David E. Shaw discuss "the results of atomic-level molecular dynamics simulations, over periods ranging between 100 μs and 1 ms, that reveal a set of common principles underlying the folding of 12 structurally diverse proteins." Examination of these diverse long trajectories, enabled by specialized, custom hardware, allows them to conclude that "In most cases, folding follows a single dominant route in which elements of the native structure appear in an order highly correlated with their propensity to form in the unfolded state." In a separate study, Anton was used to conduct a 1.013-millisecond simulation of the native-state dynamics of bovine pancreatic trypsin inhibitor (BPTI) at 300 K.
Another important application of the MD method benefits from its ability to perform 3-dimensional characterization and analysis of microstructural evolution at the atomic scale.
MD simulations are used in the characterization of grain size evolution, for example, when describing wear and friction of nanocrystalline Al and Al(Zr) materials. Dislocation evolution and grain size evolution were analyzed during the friction process in this simulation. Since the MD method provides full information about the microstructure, the grain size evolution was calculated in 3D using the Polyhedral Template Matching, Grain Segmentation, and Graph clustering methods. In such a simulation, the MD method provided an accurate measurement of grain size. Making use of this information, the actual grain structures were extracted, measured, and presented. Compared to the traditional method of using SEM with a single 2-dimensional slice of the material, MD provides a 3-dimensional and accurate way to characterize the microstructural evolution at the atomic scale.
== Molecular dynamics algorithms ==
Screened Coulomb potentials implicit solvent model
=== Integrators ===
Symplectic integrator
Verlet–Stoermer integration
Runge–Kutta integration
Beeman's algorithm
Constraint algorithms (for constrained systems)
=== Short-range interaction algorithms ===
Cell lists
Verlet list
Bonded interactions
=== Long-range interaction algorithms ===
Ewald summation
Particle mesh Ewald summation (PME)
Particle–particle-particle–mesh (P3M)
Shifted force method
=== Parallelization strategies ===
Domain decomposition method (Distribution of system data for parallel computing)
=== Ab-initio molecular dynamics ===
Car–Parrinello molecular dynamics
== Specialized hardware for MD simulations ==
Anton – A specialized, massively parallel supercomputer designed to execute MD simulations
MDGRAPE – A special purpose system built for molecular dynamics simulations, especially protein structure prediction
== Graphics card as hardware for MD simulations ==
== See also ==
== References ==
=== General references ===
== External links ==
The GPUGRID.net Project (GPUGRID.net)
The Blue Gene Project (IBM)
Materials modelling and computer simulation codes
A few tips on molecular dynamics
Movie of MD simulation of water (YouTube) | Wikipedia/Molecular_dynamics |
The move-to-front (MTF) transform is an encoding of data (typically a stream of bytes) designed to improve the performance of entropy encoding techniques of compression. When efficiently implemented, it is fast enough that its benefits usually justify including it as an extra step in a data compression algorithm.
This algorithm was first published by Boris Ryabko under the name of "book stack" in 1980. Subsequently, it was rediscovered by J.K. Bentley et al. in 1986, as attested in the explanatory note.
== The transform ==
The main idea is that each symbol in the data is replaced by its index in the stack of “recently used symbols”. For example, long sequences of identical symbols are replaced by as many zeroes, whereas when a symbol that has not been used in a long time appears, it is replaced with a large number. Thus at the end the data is transformed into a sequence of integers; if the data exhibits a lot of local correlations, then these integers tend to be small.
Let us give a precise description. Assume for simplicity that the symbols in the data are bytes.
Each byte value is encoded by its index in a list of bytes, which changes over the course of the algorithm. The list is initially in order by byte value (0, 1, 2, 3, ..., 255). Therefore, the first byte is always encoded by its own value. However, after encoding a byte, that value is moved to the front of the list before continuing to the next byte.
An example will shed some light on how the transform works. Imagine instead of bytes, we are encoding values in a–z. We wish to transform the following sequence:
bananaaa
By convention, the list is initially (abcdefghijklmnopqrstuvwxyz). The first letter in the sequence is b, which appears at index 1 (the list is indexed from 0 to 25). We write a 1 to the output stream:
1
The b moves to the front of the list, producing (bacdefghijklmnopqrstuvwxyz). The next letter is a, which now appears at index 1. So we add a 1 to the output stream. We have:
1,1
and we move the letter a to the front of the list. Continuing this way, we find that the sequence is encoded by:
1,1,13,1,1,1,0,0
It is easy to see that the transform is reversible. Simply maintain the same list and decode by replacing each index in the encoded stream with the letter at that index in the list. Note the difference between this and the encoding method: The index in the list is used directly instead of looking up each value for its index.
i.e. you start again with (abcdefghijklmnopqrstuvwxyz). You take the "1" of the encoded block and look it up in the list, which results in "b". Then move the "b" to front which results in (bacdef...). Then take the next "1", look it up in the list, this results in "a", move the "a" to front ... etc.
== Implementation ==
Details of implementation are important for performance, particularly for decoding. For encoding, no clear advantage is gained by using a linked list, so using an array to store the list is acceptable, with worst-case performance O(nk), where n is the length of the data to be encoded and k is the number of values (generally a constant for a given implementation).
The typical performance is better because frequently-used symbols are more likely to be at the front and will produce earlier hits. This is also the idea behind a Move-to-front self-organizing list.
However, for decoding, we can use specialized data structures to greatly improve performance.
=== Python ===
This is a possible implementation of the move-to-front algorithm in Python.
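The original listing is not reproduced in this copy, so the following is a minimal sketch of one possible implementation. The dictionary is initialized to the 256 byte values in order, as in the description above; the input word "mississippi" is an assumption made here to match the example discussed below.

```python
# A minimal move-to-front encoder/decoder over the 256 byte values.

def mtf_encode(data: bytes) -> list[int]:
    stack = list(range(256))            # the "recently used symbols" stack
    out = []
    for byte in data:
        index = stack.index(byte)       # position of the symbol in the stack
        out.append(index)
        stack.pop(index)                # move the symbol to the front
        stack.insert(0, byte)
    return out

def mtf_decode(indices: list[int]) -> bytes:
    stack = list(range(256))
    out = bytearray()
    for index in indices:
        byte = stack.pop(index)         # the index selects the symbol directly
        out.append(byte)
        stack.insert(0, byte)
    return bytes(out)

if __name__ == "__main__":
    word = b"mississippi"               # assumed input word for the example
    encoded = mtf_encode(word)
    print(encoded)
    assert mtf_decode(encoded) == word  # the transform is reversible
```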
In this example we can see the MTF code taking advantage of the three repetitive i's in the input word. The common dictionary here, however, is less than ideal since it is initialized with more commonly used ASCII printable characters put after little-used control codes, against the MTF code's design intent of keeping what's commonly used in the front. If one rotates the dictionary to put the more-used characters in earlier places, a better encoding can be obtained.
== Use in practical data compression algorithms ==
The MTF transform takes advantage of local correlation of frequencies to reduce the entropy of a message. Indeed, recently used letters stay towards the front of the list; if use of letters exhibits local correlations, this will result in a large number of small numbers such as "0"'s and "1"'s in the output.
However, not all data exhibits this type of local correlation, and for some messages, the MTF transform may actually increase the entropy.
An important use of the MTF transform is in Burrows–Wheeler transform based compression. The Burrows–Wheeler transform is very good at producing a sequence that exhibits local frequency correlation from text and certain other special classes of data. Compression benefits greatly from following up the Burrows–Wheeler transform with an MTF transform before the final entropy-encoding step.
=== Example ===
As an example, imagine we wish to compress Hamlet's soliloquy (To be, or not to be...). We can calculate the size of this message to be 7033 bits. Naively, we might try to apply the MTF transform directly. The result is a message with 7807 bits (higher than the original). The reason is that English text does not in general exhibit a high level of local frequency correlation. However, if we first apply the Burrows–Wheeler transform, and then the MTF transform, we get a message with 6187 bits. Note that the Burrows–Wheeler transform does not decrease the entropy of the message; it only reorders the bytes in a way that makes the MTF transform more effective.
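The bit counts quoted above are entropy-based size estimates. As an illustrative sketch (not a reproduction of the study's exact figures, which depend on the symbol model used), the order-0 Shannon size of any text can be computed as follows:

```python
import math
from collections import Counter

def entropy_coded_size_bits(text: str) -> float:
    """Shannon-optimal size of `text` in bits under an order-0 symbol model."""
    counts = Counter(text)
    total = len(text)
    # Each occurrence of a symbol with count c costs -log2(c/total) bits.
    return -sum(c * math.log2(c / total) for c in counts.values())

print(round(entropy_coded_size_bits("to be or not to be")))
```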
One problem with the basic MTF transform is that it makes the same changes for any character, regardless of frequency, which can result in diminished compression, as characters that occur rarely may push frequent characters to higher values. Various alterations and alternatives have been developed for this reason. One common change is to restrict how far characters beyond a certain position may be moved toward the front. Another is to use an algorithm that keeps a running count of each character's local frequency and uses these values to choose the characters' order at any point. Many of these transforms still reserve zero for repeat characters, since these are often the most common in data after the Burrows–Wheeler transform.
== Move-to-front linked-list ==
The term Move To Front (MTF) is also used in a slightly different context, as a type of dynamic linked list. In an MTF list, each element is moved to the front when it is accessed. This ensures that, over time, the more frequently accessed elements are easier to access.
== References ==
== External links ==
"Move to front" by Arturo San Emeterio Campos | Wikipedia/Move-to-front_transform |
Formal science is a branch of science studying disciplines concerned with abstract structures described by formal systems, such as logic, mathematics, statistics, theoretical computer science, artificial intelligence, information theory, game theory, systems theory, decision theory and theoretical linguistics. Whereas the natural sciences and social sciences seek to characterize physical systems and social systems, respectively, using theoretical and empirical methods, the formal sciences use language tools concerned with characterizing abstract structures described by formal systems and the deductions that can be made from them. The formal sciences aid the natural and social sciences by providing information about the structures used to describe the physical world, and what inferences may be made about them.
== Branches ==
Logic (also a branch of philosophy)
Mathematics
Statistics
Systems science
Data science
Information theory
Computer science
Cryptography
== Differences from other sciences ==
One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts.
Because of their non-empirical nature, formal sciences are construed by outlining a set of axioms and definitions from which other statements (theorems) are deduced. For this reason, in Rudolf Carnap's logical-positivist conception of the epistemology of science, theories belonging to formal sciences are understood to contain no synthetic statements, instead containing only analytic statements.
== See also ==
== References ==
== Further reading ==
Mario Bunge (1985). Philosophy of Science and Technology. Springer.
Mario Bunge (1998). Philosophy of Science. Rev. ed. of: Scientific research. Berlin, New York: Springer-Verlag, 1967.
C. West Churchman (1940). Elements of Logic and Formal Science, J.B. Lippincott Co., New York.
James Franklin (1994). The formal sciences discover the philosophers' stone. In: Studies in History and Philosophy of Science. Vol. 25, No. 4, pp. 513–533, 1994
Stephen Leacock (1906). Elements of Political Science. Houghton, Mifflin Co, 417 pp.
Popper, Karl R. (2002) [1959]. The Logic of Scientific Discovery. New York, NY: Routledge Classics. ISBN 0-415-27844-9. OCLC 59377149.
Bernt P. Stigum (1990). Toward a Formal Science of Economics. MIT Press
Marcus Tomalin (2006), Linguistics and the Formal Sciences. Cambridge University Press
William L. Twining (1997). Law in Context: Enlarging a Discipline. 365 pp.
== External links ==
Media related to Formal sciences at Wikimedia Commons
Interdisciplinary conferences — Foundations of the Formal Sciences | Wikipedia/Formal_science |
In mathematics and empirical science, quantification (or quantitation) is the act of counting and measuring that maps human sense observations and experiences into quantities. Quantification in this sense is fundamental to the scientific method.
== Natural science ==
Some measure of the undisputed general importance of quantification in the natural sciences can be gleaned from the following comments:
"these are mere facts, but they are quantitative facts and the basis of science."
It seems to be held as universally true that "the foundation of quantification is measurement."
There is little doubt that "quantification provided a basis for the objectivity of science."
In ancient times, "musicians and artists ... rejected quantification, but merchants, by definition, quantified their affairs, in order to survive, made them visible on parchment and paper."
Any reasonable "comparison between Aristotle and Galileo shows clearly that there can be no unique lawfulness discovered without detailed quantification."
Even today, "universities use imperfect instruments called 'exams' to indirectly quantify something they call knowledge."
This meaning of quantification comes under the heading of pragmatics.
In some instances in the natural sciences a seemingly intangible concept may be quantified by creating a scale—for example, a pain scale in medical research, or a discomfort scale at the intersection of meteorology and human physiology such as the heat index measuring the combined perceived effect of heat and humidity, or the wind chill factor measuring the combined perceived effects of cold and wind.
== Social sciences ==
In the social sciences, quantification is an integral part of economics and psychology. Both disciplines gather data – economics by empirical observation and psychology by experimentation – and both use statistical techniques such as regression analysis to draw conclusions from it.
In some instances a seemingly intangible property may be quantified by asking subjects to rate something on a scale—for example, a happiness scale or a quality-of-life scale—or by the construction of a scale by the researcher, as with the index of economic freedom. In other cases, an unobservable variable may be quantified by replacing it with a proxy variable with which it is highly correlated—for example, per capita gross domestic product is often used as a proxy for standard of living or quality of life.
Frequently in the use of regression, the presence or absence of a trait is quantified by employing a dummy variable, which takes on the value 1 in the presence of the trait or the value 0 in the absence of the trait.
Quantitative linguistics is an area of linguistics that relies on quantification. For example, indices of grammaticalization of morphemes, such as phonological shortness, dependence on surroundings, and fusion with the verb, have been developed and found to be significantly correlated across languages with stage of evolution of function of the morpheme.
== Hard versus soft science ==
The ease of quantification is one of the features used to distinguish hard and soft sciences from each other. Scientists often consider hard sciences to be more scientific or rigorous, but this is disputed by social scientists who maintain that appropriate rigor includes the qualitative evaluation of the broader contexts of qualitative data. In some social sciences such as sociology, quantitative data are difficult to obtain, either because laboratory conditions are not present or because the issues involved are conceptual but not directly quantifiable. Thus in these cases qualitative methods are preferred.
== See also ==
Calibration
Internal standard
Isotope dilution
Physical quantity
Quantitative analysis (chemistry)
Standard addition
== References ==
== Further reading ==
Crosby, Alfred W. (1996) The Measure of Reality: Quantification and Western Society, 1250–1600. Cambridge University Press.
Wiese, Heike, 2003. Numbers, Language, and the Human Mind. Cambridge University Press. ISBN 0-521-83182-2. | Wikipedia/Quantification_(science) |
In telecommunications, average bitrate (ABR) refers to the average amount of data transferred per unit of time, usually measured per second, commonly for digital music or video. An MP3 file, for example, that has an average bit rate of 128 kbit/s transfers, on average, 128,000 bits every second. It can have higher bitrate and lower bitrate parts, and the average bitrate for a certain timeframe is obtained by dividing the number of bits used during the timeframe by the number of seconds in the timeframe. Bitrate is not reliable as a standalone measure of audio or video quality, since more efficient compression methods use lower bitrates to encode material at a similar quality.
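As a minimal sketch of this definition (the file size and duration below are illustrative assumptions, not taken from any particular recording):

```python
def average_bitrate(total_bits: int, seconds: float) -> float:
    """Average bitrate in bits per second: total bits divided by total seconds."""
    return total_bits / seconds

# A 4 MiB audio clip lasting 262 seconds averages roughly 128 kbit/s.
print(average_bitrate(4 * 1024 * 1024 * 8, 262))
```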
Average bitrate can also refer to a form of variable bitrate (VBR) encoding in which the encoder will try to reach a target average bitrate or file size while allowing the bitrate to vary between different parts of the audio or video. As it is a form of variable bitrate, this allows more complex portions of the material to use more bits and less complex areas to use fewer bits. However, bitrate will not vary as much as in variable bitrate encoding. At a given bitrate, VBR is usually higher quality than ABR, which is higher quality than CBR (constant bitrate). ABR encoding is desirable for users who want the general benefits of VBR encoding (an optimum bitrate from frame to frame) but with a relatively predictable file size. Two-pass encoding is usually needed for accurate ABR encoding, as on the first pass the encoder has no way of knowing what parts of the audio or video need the highest bitrates to be encoded.
== See also ==
Variable bitrate
Constant bitrate
== References ==
== External links ==
"Average Bitrate in LAME encoder", Knowledgebase (wiki), Hydrogenaudio.
"An explanation (sort of) of ABR — GPSYCHO-Average Bit Rate (ABR)", Lame, Sourceforge. | Wikipedia/Average_bitrate |
Because the mathematical expressions for information theory developed by Claude Shannon and Ralph Hartley in the 1940s are similar to the mathematics of statistical thermodynamics worked out by Ludwig Boltzmann and J. Willard Gibbs in the 1870s, in which the concept of entropy is central, Shannon was persuaded to employ the same term 'entropy' for his measure of uncertainty. Information entropy is often presumed to be equivalent to physical (thermodynamic) entropy.
== Equivalence of form of the defining expressions ==
The defining expression for entropy in the theory of statistical mechanics, established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s, is of the form

$$S = -k_{\text{B}} \sum_i p_i \ln p_i,$$

where $p_i$ is the probability of the microstate $i$ taken from an equilibrium ensemble, and $k_{\text{B}}$ is the Boltzmann constant.
The defining expression for entropy in the theory of information established by Claude E. Shannon in 1948 is of the form

$$H = -\sum_i p_i \log_b p_i,$$

where $p_i$ is the probability of the message $m_i$ taken from the message space $M$, and $b$ is the base of the logarithm used. Common values of $b$ are 2, Euler's number $e$, and 10, and the unit of entropy is shannon (or bit) for $b = 2$, nat for $b = e$, and hartley for $b = 10$.
Mathematically H may also be seen as an average information, taken over the message space, because when a certain message occurs with probability pi, the information quantity −log(pi) (called information content or self-information) will be obtained.
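A short numerical sketch of the two defining expressions, using illustrative probabilities; it shows that the Gibbs and Shannon forms differ only in the logarithm base and the constant $k_{\text{B}}$:

```python
import math

def shannon_entropy(probs, base=2.0):
    """H = -sum p_i log_b p_i, in shannons (b=2), nats (b=e) or hartleys (b=10)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def gibbs_entropy(probs):
    """S = -k_B sum p_i ln p_i, in J/K."""
    return K_B * shannon_entropy(probs, base=math.e)

probs = [0.5, 0.25, 0.25]               # illustrative distribution
print(shannon_entropy(probs))           # 1.5 shannons
print(shannon_entropy(probs, math.e))   # ~1.04 nats (1 nat is about 1.44 shannons)
print(gibbs_entropy(probs))             # the same number in nats, times k_B
```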
If all the microstates are equiprobable (a microcanonical ensemble), the statistical thermodynamic entropy reduces to the form, as given by Boltzmann,

$$S = k_{\text{B}} \ln W,$$

where $W$ is the number of microstates that corresponds to the macroscopic thermodynamic state. Therefore $S$ depends on temperature.
If all the messages are equiprobable, the information entropy reduces to the Hartley entropy

$$H = \log_b |M|,$$

where $|M|$ is the cardinality of the message space $M$.
The logarithm in the thermodynamic definition is the natural logarithm. It can be shown that the Gibbs entropy formula, with the natural logarithm, reproduces all of the properties of the macroscopic classical thermodynamics of Rudolf Clausius. (See article: Entropy (statistical views)).
The logarithm can also be taken to the natural base in the case of information entropy. This is equivalent to choosing to measure information in nats instead of the usual bits (or more formally, shannons). In practice, information entropy is almost always calculated using base-2 logarithms, but this distinction amounts to nothing other than a change in units. One nat is about 1.44 shannons.
For a simple compressible system that can only perform volume work, the first law of thermodynamics becomes

$$dE = -p\,dV + T\,dS.$$
But one can equally well write this equation in terms of what physicists and chemists sometimes call the 'reduced' or dimensionless entropy, $\sigma = S/k_{\text{B}}$, so that

$$dE = -p\,dV + k_{\text{B}}T\,d\sigma.$$

Just as $S$ is conjugate to $T$, so $\sigma$ is conjugate to $k_{\text{B}}T$ (the energy that is characteristic of $T$ on a molecular scale).
Thus the definitions of entropy in statistical mechanics (the Gibbs entropy formula $S = -k_{\text{B}} \sum_i p_i \log p_i$) and in classical thermodynamics ($dS = \delta Q_{\text{rev}}/T$, and the fundamental thermodynamic relation) are equivalent for the microcanonical ensemble, and for statistical ensembles describing a thermodynamic system in equilibrium with a reservoir, such as the canonical ensemble, grand canonical ensemble, and isothermal–isobaric ensemble. This equivalence is commonly shown in textbooks. However, the equivalence between the thermodynamic definition of entropy and the Gibbs entropy is not general but instead an exclusive property of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under certain additional postulates.
== Theoretical relationship ==
Despite the foregoing, there is a difference between the two quantities. The information entropy Η can be calculated for any probability distribution (if the "message" is taken to be that the event i which had probability pi occurred, out of the space of the events possible), while the thermodynamic entropy S refers to thermodynamic probabilities pi specifically. The difference is more theoretical than actual, however, because any probability distribution can be approximated arbitrarily closely by some thermodynamic system.
Moreover, a direct connection can be made between the two. If the probabilities in question are the thermodynamic probabilities pi: the (reduced) Gibbs entropy σ can then be seen as simply the amount of Shannon information needed to define the detailed microscopic state of the system, given its macroscopic description. Or, in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more". To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the average of the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.
Furthermore, the prescription to find the equilibrium distributions of statistical mechanics—such as the Boltzmann distribution—by maximising the Gibbs entropy subject to appropriate constraints (the Gibbs algorithm) can be seen as something not unique to thermodynamics, but as a principle of general relevance in statistical inference, if it is desired to find a maximally uninformative probability distribution, subject to certain constraints on its averages. (These perspectives are explored further in the article Maximum entropy thermodynamics.)
The Shannon entropy in information theory is sometimes expressed in units of bits per symbol. The physical entropy may be on a "per quantity" basis (h) which is called "intensive" entropy instead of the usual total entropy which is called "extensive" entropy. The "shannons" of a message (Η) are its total "extensive" information entropy, equal to h times the number of bits in the message.
A direct and physically real relationship between $h$ and $S$ can be found by assigning a symbol to each microstate that occurs per mole, kilogram, volume, or particle of a homogeneous substance, then calculating the $h$ of these symbols. By theory or by observation, the symbols (microstates) will occur with different probabilities and this will determine $h$. If there are $N$ moles, kilograms, volumes, or particles of the unit substance, the relationship between $h$ (in bits per unit substance) and physical extensive entropy in nats is:

$$S = k_{\text{B}} \ln(2)\, N h$$
where $\ln(2)$ is the conversion factor from base 2 of Shannon entropy to the natural base $e$ of physical entropy. $Nh$ is the amount of information in bits needed to describe the state of a physical system with entropy $S$. Landauer's principle demonstrates the reality of this by stating that the minimum energy $E$ required (and therefore heat $Q$ generated) by an ideally efficient memory change or logic operation, in irreversibly erasing or merging $Nh$ bits of information, is $S$ times the temperature,

$$E = Q = T k_{\text{B}} \ln(2)\, N h,$$
where $h$ is in informational bits and $E$ and $Q$ are in physical joules. This has been experimentally confirmed.
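As a numerical sketch of this principle (the temperature and bit counts below are illustrative assumptions):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_heat(bits: float, temperature_k: float = 300.0) -> float:
    """Minimum heat Q = T * k_B * ln(2) * bits dissipated by erasing `bits`."""
    return temperature_k * K_B * math.log(2) * bits

print(landauer_heat(1))     # ~2.87e-21 J for a single bit at 300 K
print(landauer_heat(8e9))   # erasing a gigabyte (8e9 bits) is still only ~2.3e-11 J
```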
Temperature is a measure of the average kinetic energy per particle in an ideal gas (kelvins = 2/3 joules/kB), so the J/K units of kB are dimensionless (joule/joule). kB is the conversion factor from energy in 3/2 kelvins to joules for an ideal gas. If kinetic energy measurements per particle of an ideal gas were expressed as joules instead of kelvins, kB in the above equations would be replaced by 3/2. This shows that S is a true statistical measure of microstates that does not have a fundamental physical unit other than the units of information, in this case nats, which is just a statement of which logarithm base was chosen by convention.
== Information is physical ==
=== Szilard's engine ===
A physical thought experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell's demon scenario (and a reversal of the Joule expansion thought experiment).
Consider Maxwell's set-up, but with only a single gas particle in a box. If the demon knows which half of the box the particle is in (equivalent to a single bit of information), it can close a shutter between the two halves of the box, move a piston unopposed into the empty half of the box, and then extract $k_{\text{B}} T \ln 2$ joules of useful work if the shutter is opened again. The particle can then be left to isothermally expand back to its original equilibrium occupied volume. In just the right circumstances, therefore, the possession of a single bit of Shannon information (a single bit of negentropy in Brillouin's term) really does correspond to a reduction in the entropy of the physical system. The global entropy is not decreased, but information-to-free-energy conversion is possible.
This thought experiment has been physically demonstrated, using a phase-contrast microscope equipped with a high-speed camera connected to a computer, acting as the demon. In this experiment, information-to-energy conversion is performed on a Brownian particle by means of feedback control; that is, by synchronizing the work given to the particle with the information obtained on its position. Computing energy balances for different feedback protocols has confirmed that the Jarzynski equality requires a generalization that accounts for the amount of information involved in the feedback.
=== Landauer's principle ===
In fact one can generalise: any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system.
Thus, Rolf Landauer argued in 1961, if one were to imagine starting with those degrees of freedom in a thermalised state, there would be a real reduction in thermodynamic entropy if they were then re-set to a known state. This can only be achieved under information-preserving microscopically deterministic dynamics if the uncertainty is somehow dumped somewhere else – i.e. if the entropy of the environment (or the non information-bearing degrees of freedom) is increased by at least an equivalent amount, as required by the Second Law, by gaining an appropriate quantity of heat: specifically kT ln(2) of heat for every 1 bit of randomness erased.
On the other hand, Landauer argued, there is no thermodynamic objection to a logically reversible operation potentially being achieved in a physically reversible way in the system. It is only logically irreversible operations – for example, the erasing of a bit to a known state, or the merging of two computation paths – which must be accompanied by a corresponding entropy increase. When information is physical, all processing of its representations, i.e. generation, encoding, transmission, decoding and interpretation, are natural processes where entropy increases by consumption of free energy.
Applied to the Maxwell's demon/Szilard engine scenario, this suggests that it might be possible to "read" the state of the particle into a computing apparatus with no entropy cost; but only if the apparatus has already been SET into a known state, rather than being in a thermalised state of uncertainty. To SET (or RESET) the apparatus into this state will cost all the entropy that can be saved by knowing the state of Szilard's particle.
In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.
== Negentropy ==
Shannon entropy has been related by physicist Léon Brillouin to a concept sometimes called negentropy. In 1953, Brillouin derived a general equation stating that the changing of an information bit value requires at least kT ln(2) energy. This is the same energy as the work Leo Szilard's engine produces in the idealistic case, which in turn equals the same quantity found by Landauer. In his book, he further explored this problem, concluding that any cause of a bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount, kT ln(2), of energy. Consequently, acquiring information about a system's microstates is associated with an entropy production, while erasure yields entropy production only when the bit value is changing. Setting up a bit of information in a sub-system originally in thermal equilibrium results in a local entropy reduction. However, there is no violation of the second law of thermodynamics, according to Brillouin, since a reduction in any local system's thermodynamic entropy results in an increase in thermodynamic entropy elsewhere. In this way, Brillouin clarified the meaning of negentropy, which was considered controversial because its earlier understanding could yield Carnot efficiency higher than one. Additionally, the relationship between energy and information formulated by Brillouin has been proposed as a connection between the number of bits that the brain processes and the energy it consumes: Collell and Fauquet argued that De Castro analytically found the Landauer limit as the thermodynamic lower bound for brain computations. However, even though evolution is supposed to have "selected" the most energetically efficient processes, the physical lower bounds are not realistic quantities in the brain. Firstly, because the minimum processing unit considered in physics is the atom/molecule, which is distant from the actual way that the brain operates; and, secondly, because neural networks incorporate important redundancy and noise factors that greatly reduce their efficiency. Laughlin et al. were the first to provide explicit quantities for the energetic cost of processing sensory information. Their findings in blowflies revealed that for visual sensory data, the cost of transmitting one bit of information is around 5 × 10⁻¹⁴ joules, or equivalently 10⁴ ATP molecules. Thus, neural processing efficiency is still far from Landauer's limit of kT ln(2) J, but as a curious fact, it is still much more efficient than modern computers.
In 2009, Mahulikar & Herwig redefined thermodynamic negentropy as the specific entropy deficit of the dynamically ordered sub-system relative to its surroundings. This definition enabled the formulation of the Negentropy Principle, which is mathematically shown to follow from the 2nd Law of Thermodynamics, during order existence.
== Quantum theory ==
Hirschman showed, cf. Hirschman uncertainty, that Heisenberg's uncertainty principle can be expressed as a particular lower bound on the sum of the classical distribution entropies of the quantum observable probability distributions of a quantum mechanical state, the square of the wave-function, in coordinate, and also momentum space, when expressed in Planck units. The resulting inequalities provide a tighter bound on the uncertainty relations of Heisenberg.
It is meaningful to assign a "joint entropy", because positions and momenta are quantum conjugate variables and are therefore not jointly observable. Mathematically, they have to be treated as a joint distribution.
Note that this joint entropy is not equivalent to the Von Neumann entropy, −Tr ρ lnρ = −⟨lnρ⟩.
Hirschman's entropy is said to account for the full information content of a mixture of quantum states.
(Dissatisfaction with the Von Neumann entropy from quantum information points of view has been expressed by Stotland, Pomeransky, Bachmat and Cohen, who have introduced a yet different definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows distinction between the minimum uncertainty entropy of pure states, and the excess statistical entropy of mixtures.)
== See also ==
== References ==
== Further reading ==
Bennett, C.H. (1973). "Logical reversibility of computation". IBM J. Res. Dev. 17 (6): 525–532. doi:10.1147/rd.176.0525.
Brillouin, Léon (2004), Science And Information Theory (second ed.), Dover, ISBN 978-0-486-43918-1. [Republication of 1962 original.]
Frank, Michael P. (May–June 2002). "Physical Limits of Computing". Computing in Science and Engineering. 4 (3): 16–25. Bibcode:2002CSE.....4c..16F. CiteSeerX 10.1.1.429.1618. doi:10.1109/5992.998637. OSTI 1373456. S2CID 499628.
Greven, Andreas; Keller, Gerhard; Warnecke, Gerald, eds. (2003). Entropy. Princeton University Press. ISBN 978-0-691-11338-8. (A highly technical collection of writings giving an overview of the concept of entropy as it appears in various disciplines.)
Kalinin, M.I.; Kononogov, S.A. (2005), "Boltzmann's constant, the energy meaning of temperature, and thermodynamic irreversibility", Measurement Techniques, 48 (7): 632–636, Bibcode:2005MeasT..48..632K, doi:10.1007/s11018-005-0195-9, S2CID 118726162.
Koutsoyiannis, D. (2011), "Hurst–Kolmogorov dynamics as a result of extremal entropy production", Physica A, 390 (8): 1424–1432, Bibcode:2011PhyA..390.1424K, doi:10.1016/j.physa.2010.12.035.
Landauer, R. (1993). "Information is Physical". Proc. Workshop on Physics and Computation PhysComp'92. Los Alamitos: IEEE Comp. Sci.Press. pp. 1–4. doi:10.1109/PHYCMP.1992.615478. ISBN 978-0-8186-3420-8. S2CID 60640035.
Landauer, R. (1961). "Irreversibility and Heat Generation in the Computing Process". IBM J. Res. Dev. 5 (3): 183–191. doi:10.1147/rd.53.0183.
Leff, H.S.; Rex, A.F., eds. (1990). Maxwell's Demon: Entropy, Information, Computing. Princeton NJ: Princeton University Press. ISBN 978-0-691-08727-6.
Middleton, D. (1960). An Introduction to Statistical Communication Theory. McGraw-Hill.
Shannon, Claude E. (July–October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:10338.dmlcz/101429. (as PDF)
== External links ==
Information Processing and Thermodynamic Entropy Stanford Encyclopedia of Philosophy.
An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science — a wikibook on the interpretation of the concept of entropy. | Wikipedia/Entropy_in_thermodynamics_and_information_theory |
In cryptography, a brute-force attack or exhaustive key search is a cryptanalytic attack that consists of an attacker submitting many possible keys or passwords with the hope of eventually guessing correctly. This strategy can theoretically be used to break any form of encryption that is not information-theoretically secure. However, in a properly designed cryptosystem the chance of successfully guessing the key is negligible.
When cracking passwords, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones due to diversity of characters.
Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack, with 'anti-hammering' for countermeasures.
== Basic concept ==
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.
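A minimal sketch of this exponential growth; the guessing rate is an illustrative assumption, and on average half the keyspace must be searched before the correct password is found:

```python
def candidates(charset_size: int, length: int) -> int:
    """Number of passwords an exhaustive search must consider."""
    return charset_size ** length

GUESSES_PER_SECOND = 1e9   # assumed attacker speed, purely illustrative

for length in (6, 8, 10):
    n = candidates(95, length)          # 95 printable ASCII characters
    avg_days = n / 2 / GUESSES_PER_SECOND / 86400
    print(f"length {length}: {n:.3e} candidates, ~{avg_days:.2e} days on average")
```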
== Theoretical limits ==
The resources required for a brute-force attack grow exponentially with increasing key size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT · ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle. Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring doing the actual computing to check it) would, theoretically, require 2¹²⁸ − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ≈10¹⁸ joules, which is equivalent to consuming 30 gigawatts of power for one year. This is equal to 30×10⁹ W × 365×24×3600 s = 9.46×10¹⁷ J or 262.7 TWh (about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0 (see Bremermann's limit).
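The ≈10¹⁸ J estimate can be reproduced directly from the Landauer bound; this sketch counts one minimal bit erasure per flip, which is the idealization used above:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K

flips = 2**128 - 1                          # bit flips to cycle a 128-bit key
energy_j = flips * K_B * T * math.log(2)    # Landauer minimum per flip
print(f"{energy_j:.2e} J")                  # ~9.8e17 J, on the order of 1e18 J
print(f"{energy_j / 3.6e15:.1f} TWh")       # ~271 TWh, comparable to the figure above
```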
However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generate entropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (see reversible computing), though no such computers are known to have been constructed.
As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is modern graphics processing unit (GPU) technology, the other is the field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance advantage, FPGAs from their energy efficiency per cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks. In the case of GPUs some hundreds, and in the case of FPGAs some thousands, of processing units are available, making them much better suited to cracking passwords than conventional processors. For instance, in 2022, 8 Nvidia RTX 4090 GPUs were linked together to test password strength using the software Hashcat, with results showing that 200 billion eight-character NTLM password combinations could be cycled through in 48 minutes.
Various publications in the field of cryptographic analysis have proved the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA Cluster computer consumes the same energy as a single PC (600 W), but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions, from a single FPGA PCI Express card up to dedicated FPGA computers. WPA and WPA2 encryption have successfully been brute-force attacked with the workload reduced by a factor of 50 in comparison to conventional CPUs, and by some hundreds in the case of FPGAs.
Advanced Encryption Standard (AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute-force requires 2¹²⁸ times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 has a speed of 100 petaFLOPS which could theoretically check 100 trillion (10¹⁴) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10⁵⁵ years to exhaust the 256-bit key space.
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effective random number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute-force have nevertheless been cracked because the key space to search through was found to be much smaller than originally thought, because of a lack of entropy in their pseudorandom number generators. These include Netscape's implementation of Secure Sockets Layer (SSL) (cracked by Ian Goldberg and David Wagner in 1995) and a Debian/Ubuntu edition of OpenSSL discovered in 2008 to be flawed. A similar lack of implemented entropy led to the breaking of Enigma's code.
== Credential recycling ==
Credential recycling is the hacking practice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling is pass the hash, where unsalted hashed credentials are stolen and re-used without first being brute-forced.
== Unbreakable codes ==
Certain types of encryption, by their mathematical properties, cannot be defeated by brute-force. An example of this is one-time pad cryptography, where every cleartext bit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by the Venona project, generally relies not on pure cryptography, but upon mistakes in its implementation, such as the key pads not being truly random, intercepted keypads, or operators making mistakes.
== Countermeasures ==
In case of an offline attack where the attacker has gained access to the encrypted material, one can try key combinations without the risk of discovery or interference. In case of online attacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring a CAPTCHA answer or employing multi-factor authentication), and/or locking accounts out after unsuccessful login attempts. Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site. Additionally, the MITRE D3FEND framework provides structured recommendations for defending against brute-force attacks by implementing strategies such as network traffic filtering, deploying decoy credentials, and invalidating authentication caches.
== Reverse brute-force attack ==
In a reverse brute-force attack (also called password spraying), a single (usually common) password is tested against multiple usernames or encrypted files. The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
== See also ==
Bitcoin mining
Cryptographic key length
Distributed.net
Hail Mary Cloud
Key derivation function
MD5CRK
Metasploit Express
Side-channel attack
TWINKLE and TWIRL
Unicity distance
RSA Factoring Challenge
Secure Shell
== Notes ==
== References ==
== External links ==
RSA-sponsored DES-III cracking contest
Demonstration of a brute-force device designed to guess the passcode of locked iPhones running iOS 10.3.3
How We Cracked the Code Book Ciphers – Essay by the winning team of the challenge in The Code Book | Wikipedia/Brute_force_attack |
In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable $Y$ given that the value of another random variable $X$ is known. Here, information is measured in shannons, nats, or hartleys. The entropy of $Y$ conditioned on $X$ is written as $\mathrm{H}(Y|X)$.
== Definition ==
The conditional entropy of $Y$ given $X$ is defined as

$$\mathrm{H}(Y|X) = -\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\,\log\frac{p(x,y)}{p(x)},$$

where $\mathcal{X}$ and $\mathcal{Y}$ denote the support sets of $X$ and $Y$.

Note: Here, the convention is that the expression $0\log 0$ should be treated as being equal to zero. This is because $\lim_{\theta\to 0^{+}}\theta\log\theta = 0$.
Intuitively, notice that by definition of expected value and of conditional probability, $\mathrm{H}(Y|X)$ can be written as $\mathrm{H}(Y|X) = \mathbb{E}[f(X,Y)]$, where $f$ is defined as $f(x,y) := -\log\left(\frac{p(x,y)}{p(x)}\right) = -\log(p(y|x))$. One can think of $f$ as associating each pair $(x,y)$ with a quantity measuring the information content of $(Y=y)$ given $(X=x)$. This quantity is directly related to the amount of information needed to describe the event $(Y=y)$ given $(X=x)$. Hence by computing the expected value of $f$ over all pairs of values $(x,y)\in\mathcal{X}\times\mathcal{Y}$, the conditional entropy $\mathrm{H}(Y|X)$ measures how much information, on average, the variable $X$ encodes about $Y$.
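A small numerical sketch of this definition, computing $\mathrm{H}(Y|X)$ directly from an illustrative joint probability table:

```python
import math

# Joint distribution p(x, y) as a nested dict: p[x][y] (illustrative values)
p = {
    0: {0: 0.25, 1: 0.25},
    1: {0: 0.40, 1: 0.10},
}

def conditional_entropy(p):
    """H(Y|X) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(x) ), in bits."""
    h = 0.0
    for x, row in p.items():
        px = sum(row.values())              # marginal p(x)
        for pxy in row.values():
            if pxy > 0:                     # 0 log 0 is treated as 0
                h -= pxy * math.log2(pxy / px)
    return h

print(conditional_entropy(p))   # ~0.86 bits
```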
== Motivation ==
Let $\mathrm{H}(Y|X=x)$ be the entropy of the discrete random variable $Y$ conditioned on the discrete random variable $X$ taking a certain value $x$. Denote the support sets of $X$ and $Y$ by $\mathcal{X}$ and $\mathcal{Y}$. Let $Y$ have probability mass function $p_Y(y)$. The unconditional entropy of $Y$ is calculated as $\mathrm{H}(Y) := \mathbb{E}[\operatorname{I}(Y)]$, i.e.

$$\mathrm{H}(Y) = \sum_{y\in\mathcal{Y}} \Pr(Y=y)\,\mathrm{I}(y) = -\sum_{y\in\mathcal{Y}} p_Y(y)\log_2 p_Y(y),$$

where $\operatorname{I}(y_i)$ is the information content of the outcome of $Y$ taking the value $y_i$. The entropy of $Y$ conditioned on $X$ taking the value $x$ is defined by:

$$\mathrm{H}(Y|X=x) = -\sum_{y\in\mathcal{Y}} \Pr(Y=y|X=x)\log_2 \Pr(Y=y|X=x).$$

Note that $\mathrm{H}(Y|X)$ is the result of averaging $\mathrm{H}(Y|X=x)$ over all possible values $x$ that $X$ may take. Also, if the above sum is taken over a sample $y_1,\dots,y_n$, the expected value $E_X[\mathrm{H}(y_1,\dots,y_n\mid X=x)]$ is known in some domains as equivocation.

Given discrete random variables $X$ with image $\mathcal{X}$ and $Y$ with image $\mathcal{Y}$, the conditional entropy of $Y$ given $X$ is defined as the weighted sum of $\mathrm{H}(Y|X=x)$ for each possible value of $x$, using $p(x)$ as the weights:: 15

$$\begin{aligned}\mathrm{H}(Y|X)\ &\equiv \sum_{x\in\mathcal{X}} p(x)\,\mathrm{H}(Y|X=x)\\&= -\sum_{x\in\mathcal{X}} p(x)\sum_{y\in\mathcal{Y}} p(y|x)\,\log_2 p(y|x)\\&= -\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x)\,p(y|x)\,\log_2 p(y|x)\\&= -\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)}.\end{aligned}$$
== Properties ==
=== Conditional entropy equals zero ===
$\mathrm{H}(Y|X) = 0$ if and only if the value of $Y$ is completely determined by the value of $X$.
=== Conditional entropy of independent random variables ===
Conversely, $\mathrm{H}(Y|X) = \mathrm{H}(Y)$ if and only if $Y$ and $X$ are independent random variables.
=== Chain rule ===
Assume that the combined system determined by two random variables $X$ and $Y$ has joint entropy $\mathrm{H}(X,Y)$, that is, we need $\mathrm{H}(X,Y)$ bits of information on average to describe its exact state. Now if we first learn the value of $X$, we have gained $\mathrm{H}(X)$ bits of information. Once $X$ is known, we only need $\mathrm{H}(X,Y) - \mathrm{H}(X)$ bits to describe the state of the whole system. This quantity is exactly $\mathrm{H}(Y|X)$, which gives the chain rule of conditional entropy:

$$\mathrm{H}(Y|X) = \mathrm{H}(X,Y) - \mathrm{H}(X).$$ : 17

The chain rule follows from the above definition of conditional entropy:

$$\begin{aligned}\mathrm{H}(Y|X)&=\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}p(x,y)\log\left(\frac{p(x)}{p(x,y)}\right)\\&=\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}p(x,y)\,(\log(p(x))-\log(p(x,y)))\\&=-\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}p(x,y)\log(p(x,y))+\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}p(x,y)\log(p(x))\\&=\mathrm{H}(X,Y)+\sum_{x\in\mathcal{X}}p(x)\log(p(x))\\&=\mathrm{H}(X,Y)-\mathrm{H}(X).\end{aligned}$$

In general, a chain rule for multiple random variables holds:

$$\mathrm{H}(X_1,X_2,\ldots,X_n) = \sum_{i=1}^{n}\mathrm{H}(X_i|X_1,\ldots,X_{i-1})$$ : 22

It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
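The discrete chain rule can be verified numerically on any joint table; a sketch with an illustrative distribution:

```python
import math

# Joint distribution p(x, y) over pairs (illustrative values)
p = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.40, (1, 1): 0.10}

def H(dist):
    """Shannon entropy in bits of a probability mapping."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

h_xy = H(p)                                     # joint entropy H(X,Y)
px = {}
for (x, _), q in p.items():
    px[x] = px.get(x, 0.0) + q                  # marginal p(x)
h_x = H(px)

# H(Y|X) computed directly from its definition
h_y_given_x = -sum(q * math.log2(q / px[x]) for (x, _), q in p.items())
assert math.isclose(h_y_given_x, h_xy - h_x)    # chain rule: H(Y|X) = H(X,Y) - H(X)
print(h_y_given_x, h_xy - h_x)
```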
=== Bayes' rule ===
Bayes' rule for conditional entropy states

$$\mathrm{H}(Y|X) = \mathrm{H}(X|Y) - \mathrm{H}(X) + \mathrm{H}(Y).$$

Proof. $\mathrm{H}(Y|X) = \mathrm{H}(X,Y) - \mathrm{H}(X)$ and $\mathrm{H}(X|Y) = \mathrm{H}(Y,X) - \mathrm{H}(Y)$. Symmetry entails $\mathrm{H}(X,Y) = \mathrm{H}(Y,X)$. Subtracting the two equations implies Bayes' rule.

If $Y$ is conditionally independent of $Z$ given $X$ we have:

$$\mathrm{H}(Y|X,Z) = \mathrm{H}(Y|X).$$
=== Other properties ===
For any $X$ and $Y$:

$$\begin{aligned}\mathrm{H}(Y|X)&\leq \mathrm{H}(Y)\\\mathrm{H}(X,Y)&=\mathrm{H}(X|Y)+\mathrm{H}(Y|X)+\operatorname{I}(X;Y),\\\mathrm{H}(X,Y)&=\mathrm{H}(X)+\mathrm{H}(Y)-\operatorname{I}(X;Y),\\\operatorname{I}(X;Y)&\leq \mathrm{H}(X),\end{aligned}$$

where $\operatorname{I}(X;Y)$ is the mutual information between $X$ and $Y$.

For independent $X$ and $Y$:

$$\mathrm{H}(Y|X) = \mathrm{H}(Y) \quad \text{and} \quad \mathrm{H}(X|Y) = \mathrm{H}(X)$$

Although the specific-conditional entropy $\mathrm{H}(X|Y=y)$ can be either less or greater than $\mathrm{H}(X)$ for a given random variate $y$ of $Y$, $\mathrm{H}(X|Y)$ can never exceed $\mathrm{H}(X)$.
== Conditional differential entropy ==
=== Definition ===
The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called conditional differential (or continuous) entropy. Let $X$ and $Y$ be continuous random variables with a joint probability density function $f(x,y)$. The differential conditional entropy $h(X|Y)$ is defined as: 249

$$h(X|Y) = -\int_{\mathcal{X},\mathcal{Y}} f(x,y)\log f(x|y)\,dx\,dy.$$
=== Properties ===
In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative.
As in the discrete case there is a chain rule for differential entropy:
$$h(Y|X) = h(X,Y) - h(X)$$ : 253

Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.

Joint differential entropy is also used in the definition of the mutual information between continuous random variables:

$$\operatorname{I}(X,Y) = h(X) - h(X|Y) = h(Y) - h(Y|X),$$

and $h(X|Y) \leq h(X)$ with equality if and only if $X$ and $Y$ are independent.: 253
=== Relation to estimator error ===
The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any Gaussian random variable $X$, observation $Y$ and estimator $\widehat{X}$ the following holds:: 255

$$\mathbb{E}\left[\bigl(X - \widehat{X}(Y)\bigr)^2\right] \geq \frac{1}{2\pi e}\, e^{2h(X|Y)}$$
This is related to the uncertainty principle from quantum mechanics.
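As a sketch of the bound in the simplest case: for a Gaussian $X$ with standard deviation $\sigma$ and an uninformative observation, $h(X|Y) = h(X) = \tfrac{1}{2}\ln(2\pi e\sigma^2)$, and the right-hand side reduces to $\sigma^2$, which is attained by the constant estimator $\widehat{X} = \mathbb{E}[X]$:

```python
import math

sigma = 2.0  # illustrative standard deviation
h_x = 0.5 * math.log(2 * math.pi * math.e * sigma**2)  # differential entropy, nats

bound = math.exp(2 * h_x) / (2 * math.pi * math.e)     # (1/2πe) e^{2h(X|Y)}
print(bound, sigma**2)                                 # both equal 4.0
assert math.isclose(bound, sigma**2)                   # the bound is tight here
```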
== Generalization to quantum theory ==
In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.
== See also ==
Entropy (information theory)
Mutual information
Conditional quantum entropy
Variation of information
Entropy power inequality
Likelihood function
== References == | Wikipedia/Conditional_entropy |
In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source.
More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies

$$\operatorname{E}_{x\sim P}[\ell(d(x))] \geq \operatorname{E}_{x\sim P}[-\log_b(P(x))],$$

where $\ell$ is the function specifying the number of symbols in a code word, $d$ is the coding function, $b$ is the number of symbols used to make output codes and $P$ is the probability of the source symbol. An entropy coding attempts to approach this lower bound.
Two of the most common entropy coding techniques are Huffman coding and arithmetic coding.
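A sketch of Huffman coding on an illustrative distribution, checking the source coding bound $H \le \mathbb{E}[\ell] < H + 1$ that holds for a binary Huffman code:

```python
import heapq
import math

def huffman_code_lengths(probs):
    """Code length per symbol for a binary Huffman code."""
    # Heap items: (probability, tiebreak, symbol indices in this subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1              # each merge adds one bit to the subtree
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]             # illustrative source distribution
lengths = huffman_code_lengths(probs)
expected = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)
print(lengths, expected, entropy)        # [1, 2, 3, 3], 1.9 vs ~1.846
assert entropy <= expected < entropy + 1
```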
If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful.
These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding).
Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows combination of the compression ratio of arithmetic coding with a processing cost similar to Huffman coding.
== Entropy as a measure of similarity ==
Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data and already existing classes of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression. The coder with the best compression is probably the coder trained on the data that was most similar to the unknown data.
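A minimal sketch of this classification-by-compression idea, with zlib standing in for a per-class trained entropy coder; the function names and training data are illustrative:

```python
import zlib

def cost(blob: bytes) -> int:
    return len(zlib.compress(blob, 9))

def classify(sample: bytes, classes: dict) -> str:
    # The class whose training data lets the sample compress best (smallest
    # size increase when the sample is appended) is the closest match.
    return min(classes, key=lambda c: cost(classes[c] + sample) - cost(classes[c]))

training = {"english": b"the quick brown fox jumps over the lazy dog " * 20,
            "digits": b"3141592653589793238462643383279502884197 " * 20}
print(classify(b"the lazy dog sleeps", training))   # english
```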
== See also ==
Arithmetic coding
Asymmetric numeral systems (ANS)
Context-adaptive binary arithmetic coding (CABAC)
Huffman coding
Range coding
== References ==
== External links ==
Information Theory, Inference, and Learning Algorithms, by David MacKay (2003), gives an introduction to Shannon theory and data compression, including the Huffman coding and arithmetic coding.
Source Coding, by T. Wiegand and H. Schwarz (2011).
In signal processing, a lapped transform is a type of linear discrete block transformation where the basis functions of the transformation overlap the block boundaries, yet the number of coefficients overall resulting from a series of overlapping block transforms remains the same as if a non-overlapping block transform had been used.
Lapped transforms substantially reduce the blocking artifacts that otherwise occur with block transform coding techniques, in particular those using the discrete cosine transform. The best known example is the modified discrete cosine transform used in the MP3, Vorbis, AAC, and Opus audio codecs.
Although the best-known application of lapped transforms has been for audio coding, they have also been used for video and image coding and various other applications. They are used in video coding for coding I-frames in VC-1 and for image coding in the JPEG XR format. More recently, a form of lapped transform has also been used in the development of the Daala video coding format.
== References ==
In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
The set of states a system can occupy is known as its state space. In a discrete system, the state space is countable and often finite. The system's internal behaviour or interaction with its environment consists of separately occurring individual actions or events, such as accepting input or producing output, that may or may not cause the system to change its state. Examples of such systems are digital logic circuits and components, automata and formal language, computer programs, and computers.
The output of a digital circuit or deterministic computer program at any time is completely determined by its current inputs and its state.
== Digital logic circuit state ==
Digital logic circuits can be divided into two types: combinational logic, whose output signals are dependent only on its present input signals, and sequential logic, whose outputs are a function of both the current inputs and the past history of inputs. In sequential logic, information from past inputs is stored in electronic memory elements, such as flip-flops. The stored contents of these memory elements, at a given point in time, is collectively referred to as the circuit's state and contains all the information about the past to which the circuit has access.
Since each binary memory element, such as a flip-flop, has only two possible states, one or zero, and there is a finite number of memory elements, a digital circuit has only a certain finite number of possible states. If N is the number of binary memory elements in the circuit, the maximum number of states a circuit can have is 2^N.
== Program state ==
Similarly, a computer program stores data in variables, which represent storage locations in the computer's memory. The contents of these memory locations, at any given point in the program's execution, are called the program's state.
A more specialized definition of state is used for computer programs that operate serially or sequentially on streams of data, such as parsers, firewalls, communication protocols and encryption. Serial programs operate on the incoming data characters or packets sequentially, one at a time. In some of these programs, information about previous data characters or packets received is stored in variables and used to affect the processing of the current character or packet. This is called a stateful protocol and the data carried over from the previous processing cycle is called the state. In others, the program has no information about the previous data stream and starts fresh with each data input; this is called a stateless protocol.
Imperative programming is a programming paradigm (way of designing a programming language) that describes computation in terms of the program state, and of the statements which change the program state. Changes of state are implicit, managed by the program runtime, so that a subroutine has visibility of the changes of state made by other parts of the program, known as side effects.
In declarative programming languages, the program describes the desired results and doesn't specify changes to the state directly.
In functional programming, state is usually represented with temporal logic as explicit variables that represent the program state at each step of a program execution: a state variable is passed as an input parameter of a state-transforming function, which returns the updated state as part of its return value. A pure functional subroutine only has visibility of changes of state represented by the state variables in its scope.
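A minimal sketch of the contrast in Python (the names are illustrative): the imperative version mutates shared program state as a side effect, while the functional version threads the state through explicitly:

```python
# Imperative style: the subroutine mutates shared program state (a side effect).
count = 0

def bump_implicit() -> int:
    global count
    count += 1
    return count

# Functional style: the current state is an explicit input, and the updated
# state is returned as part of the result; the function itself stays pure.
def bump_explicit(state: int) -> tuple[int, int]:
    new_state = state + 1
    return new_state, new_state

s = 0
s, value = bump_explicit(s)   # the caller threads the state through each call
```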
== Finite-state machines ==
The output of a sequential circuit or computer program at any time is completely determined by its current inputs and current state. Since each binary memory element has only two possible states, 0 or 1, the total number of different states a circuit can assume is finite, and fixed by the number of memory elements. If there are N binary memory elements, a digital circuit can have at most 2^N distinct states. The concept of state is formalized in an abstract mathematical model of computation called a finite-state machine, used to design both sequential digital circuits and computer programs.
== Examples ==
An example of an everyday device that has a state is a television set. To change the channel of a TV, the user usually presses a channel up or channel down button on the remote control, which sends a coded message to the set. In order to calculate the new channel that the user desires, the digital tuner in the television must have stored in it the number of the current channel it is on. It then adds one or subtracts one from this number to get the number for the new channel, and adjusts the TV to receive that channel. This new number is then stored as the current channel. Similarly, the television also stores a number that controls the level of volume produced by the speaker. Pressing the volume up or volume down buttons increments or decrements this number, setting a new level of volume. Both the current channel and current volume numbers are part of the TV's state. They are stored in non-volatile memory, which preserves the information when the TV is turned off, so when it is turned on again the TV will return to its previous station and volume level.
As another example, the state of a microprocessor is the contents of all the memory elements in it: the accumulators, storage registers, data caches, and flags. When computers such as laptops go into hibernation mode to save energy by shutting down the processor, the state of the processor is stored on the computer's hard disk, so it can be restored when the computer comes out of hibernation, and the processor can take up operations where it left off.
== See also ==
Data (computing)
== References ==
In information theory, joint entropy is a measure of the uncertainty associated with a set of variables.
== Definition ==
The joint Shannon entropy (in bits) of two discrete random variables X and Y with images {\displaystyle {\mathcal {X}}} and {\displaystyle {\mathcal {Y}}} is defined as: 16
{\displaystyle \mathrm {H} (X,Y)=-\sum _{x\in {\mathcal {X}}}\sum _{y\in {\mathcal {Y}}}P(x,y)\log _{2}[P(x,y)]}
where x and y are particular values of X and Y, respectively, {\displaystyle P(x,y)} is the joint probability of these values occurring together, and {\displaystyle P(x,y)\log _{2}[P(x,y)]} is defined to be 0 if {\displaystyle P(x,y)=0}.
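A short computation of this definition in Python; the joint distribution below is a made-up example:

```python
import math

# Joint distribution P(x, y), stored sparsely; the probabilities sum to 1.
P = {(0, 0): 0.5, (0, 1): 0.25, (1, 1): 0.25}

# Pairs with P(x, y) = 0 are simply absent, matching the 0-convention above.
H_XY = -sum(p * math.log2(p) for p in P.values())
print(H_XY)   # 1.5 bits
```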
For more than two random variables {\displaystyle X_{1},...,X_{n}} this expands to
{\displaystyle \mathrm {H} (X_{1},...,X_{n})=-\sum _{x_{1}\in {\mathcal {X}}_{1}}...\sum _{x_{n}\in {\mathcal {X}}_{n}}P(x_{1},...,x_{n})\log _{2}[P(x_{1},...,x_{n})]}
where {\displaystyle x_{1},...,x_{n}} are particular values of {\displaystyle X_{1},...,X_{n}}, respectively, {\displaystyle P(x_{1},...,x_{n})} is the probability of these values occurring together, and {\displaystyle P(x_{1},...,x_{n})\log _{2}[P(x_{1},...,x_{n})]} is defined to be 0 if {\displaystyle P(x_{1},...,x_{n})=0}.
== Properties ==
=== Nonnegativity ===
The joint entropy of a set of random variables is a nonnegative number.
{\displaystyle \mathrm {H} (X,Y)\geq 0}
{\displaystyle \mathrm {H} (X_{1},\ldots ,X_{n})\geq 0}
=== Greater than individual entropies ===
The joint entropy of a set of variables is greater than or equal to the maximum of all of the individual entropies of the variables in the set.
{\displaystyle \mathrm {H} (X,Y)\geq \max \left[\mathrm {H} (X),\mathrm {H} (Y)\right]}
{\displaystyle \mathrm {H} {\bigl (}X_{1},\ldots ,X_{n}{\bigr )}\geq \max _{1\leq i\leq n}{\Bigl \{}\mathrm {H} {\bigl (}X_{i}{\bigr )}{\Bigr \}}}
=== Less than or equal to the sum of individual entropies ===
The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set. This is an example of subadditivity. This inequality is an equality if and only if X and Y are statistically independent.: 30
{\displaystyle \mathrm {H} (X,Y)\leq \mathrm {H} (X)+\mathrm {H} (Y)}
{\displaystyle \mathrm {H} (X_{1},\ldots ,X_{n})\leq \mathrm {H} (X_{1})+\ldots +\mathrm {H} (X_{n})}
== Relations to other entropy measures ==
Joint entropy is used in the definition of conditional entropy: 22
{\displaystyle \mathrm {H} (X|Y)=\mathrm {H} (X,Y)-\mathrm {H} (Y)\,},
and
{\displaystyle \mathrm {H} (X_{1},\dots ,X_{n})=\sum _{k=1}^{n}\mathrm {H} (X_{k}|X_{k-1},\dots ,X_{1})}.
For two variables X and Y, this means that
{\displaystyle \mathrm {H} (X,Y)=\mathrm {H} (X|Y)+\mathrm {H} (Y)=\mathrm {H} (Y|X)+\mathrm {H} (X)}.
Joint entropy is also used in the definition of mutual information: 21
{\displaystyle \operatorname {I} (X;Y)=\mathrm {H} (X)+\mathrm {H} (Y)-\mathrm {H} (X,Y)\,}.
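A quick numerical check of both relations, reusing the sparse joint-distribution representation from the example above (the specific probabilities are illustrative):

```python
import math
from collections import defaultdict

P = {(0, 0): 0.5, (0, 1): 0.25, (1, 1): 0.25}

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions of X and Y.
Px, Py = defaultdict(float), defaultdict(float)
for (x, y), p in P.items():
    Px[x] += p
    Py[y] += p

H_X_given_Y = -sum(p * math.log2(p / Py[y]) for (x, y), p in P.items())
print(math.isclose(H_X_given_Y, H(P) - H(Py)))   # H(X|Y) = H(X,Y) - H(Y)
print(H(Px) + H(Py) - H(P))                       # I(X;Y), about 0.311 bits
```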
In quantum information theory, the joint entropy is generalized into the joint quantum entropy.
== Joint differential entropy ==
=== Definition ===
The above definition is for discrete random variables and just as valid in the case of continuous random variables. The continuous version of discrete joint entropy is called joint differential (or continuous) entropy. Let X and Y be continuous random variables with a joint probability density function {\displaystyle f(x,y)}. The differential joint entropy {\displaystyle h(X,Y)} is defined as: 249
{\displaystyle h(X,Y)=-\int _{{\mathcal {X}},{\mathcal {Y}}}f(x,y)\log f(x,y)\,dxdy}
For more than two continuous random variables {\displaystyle X_{1},...,X_{n}} the definition is generalized to:
{\displaystyle h(X_{1},\ldots ,X_{n})=-\int f(x_{1},\ldots ,x_{n})\log f(x_{1},\ldots ,x_{n})\,dx_{1}\ldots dx_{n}}
The integral is taken over the support of f. It is possible that the integral does not exist, in which case we say that the differential entropy is not defined.
=== Properties ===
As in the discrete case, the joint differential entropy of a set of random variables is less than or equal to the sum of the entropies of the individual random variables:
{\displaystyle h(X_{1},X_{2},\ldots ,X_{n})\leq \sum _{i=1}^{n}h(X_{i})}: 253
The following chain rule holds for two random variables:
{\displaystyle h(X,Y)=h(X|Y)+h(Y)}
In the case of more than two random variables this generalizes to:: 253
{\displaystyle h(X_{1},X_{2},\ldots ,X_{n})=\sum _{i=1}^{n}h(X_{i}|X_{1},X_{2},\ldots ,X_{i-1})}
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:
{\displaystyle \operatorname {I} (X,Y)=h(X)+h(Y)-h(X,Y)}
== References ==
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.
== Definition ==
A function {\displaystyle \psi \,\in \,L^{2}(\mathbb {R} )} is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is, a complete orthonormal system for the Hilbert space of square-integrable functions on the real line.
The Hilbert basis is constructed as the family of functions {\displaystyle \{\psi _{jk}:\,j,\,k\,\in \,\mathbb {Z} \}} by means of dyadic translations and dilations of {\displaystyle \psi \,},
{\displaystyle \psi _{jk}(x)=2^{\frac {j}{2}}\psi \left(2^{j}x-k\right),}
for integers {\displaystyle j,\,k\,\in \,\mathbb {Z} }.
If, under the standard inner product on {\displaystyle L^{2}\left(\mathbb {R} \right)},
{\displaystyle \langle f,g\rangle =\int _{-\infty }^{\infty }f(x){\overline {g(x)}}dx,}
this family is orthonormal, then it is an orthonormal system:
{\displaystyle {\begin{aligned}\langle \psi _{jk},\psi _{lm}\rangle &=\int _{-\infty }^{\infty }\psi _{jk}(x){\overline {\psi _{lm}(x)}}dx,\\&=\delta _{jl}\delta _{km},\end{aligned}}}
where {\displaystyle \delta _{jl}\,} is the Kronecker delta.
Completeness is satisfied if every function {\displaystyle f\,\in \,L^{2}\left(\mathbb {R} \right)} may be expanded in the basis as
{\displaystyle f(x)=\sum _{j,k=-\infty }^{\infty }c_{jk}\psi _{jk}(x)}
with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual.
The integral wavelet transform is the integral transform defined as
{\displaystyle \left[W_{\psi }f\right](a,b)={\frac {1}{\sqrt {|a|}}}\int _{-\infty }^{\infty }{\overline {\psi \left({\frac {x-b}{a}}\right)}}f(x)dx\,}
The wavelet coefficients {\displaystyle c_{jk}} are then given by
{\displaystyle c_{jk}=\left[W_{\psi }f\right]\left(2^{-j},k2^{-j}\right)}
Here, {\displaystyle a=2^{-j}} is called the binary dilation or dyadic dilation, and {\displaystyle b=k2^{-j}} is the binary or dyadic position.
== Principle ==
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape, imposing a restriction on choosing suitable basis functions. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing,
{\displaystyle \Delta t\Delta \omega \geq {\frac {1}{2}}}
where t represents time and ω angular frequency ({\displaystyle \omega =2\pi f}, where f is ordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window, the larger the value of {\displaystyle \Delta t}.
When {\displaystyle \Delta t} is large:
Bad time resolution
Good frequency resolution
Low frequency, large scaling factor
When {\displaystyle \Delta t} is small:
Good time resolution
Bad frequency resolution
High frequency, small scaling factor
In other words, the basis function ψ can be regarded as an impulse response of a system with which the function x(t) has been filtered. The transformed signal provides information about the time and the frequency. Therefore, the wavelet transformation contains information similar to the short-time Fourier transform, but with additional special properties of the wavelets, which show up at the resolution in time at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note however that the frequency resolution decreases for increasing frequencies while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the figure.
This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varying functions, the frequency resolution is remarkable.
Another example: the analysis of three superposed sinusoidal signals
{\displaystyle y(t)\;=\;\sin(2\pi f_{0}t)\;+\;\sin(4\pi f_{0}t)\;+\;\sin(8\pi f_{0}t)}
with STFT and wavelet-transformation.
== Wavelet compression ==
Wavelet compression is a form of data compression well suited for image compression (sometimes also video compression and audio compression). Notable implementations are JPEG 2000, DjVu and ECW for still images, JPEG XS, CineForm, and the BBC's Dirac. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy.
Using a wavelet transform, the wavelet compression methods are adequate for representing transients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used.
The discrete wavelet transform has been successfully applied to the compression of electrocardiograph (ECG) signals. In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is utilized, employing linear prediction.
Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well. But smooth, periodic signals are better compressed using other methods, particularly traditional harmonic analysis in the frequency domain with Fourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, the Vorbis audio codec primarily uses the modified discrete cosine transform to compress audio (which is generally smooth and periodic), however allows the addition of a hybrid wavelet filter bank for improved reproduction of transients.
See Diary Of An x264 Developer: The problems with wavelets (2010) for discussion of practical issues of current methods using wavelets for video compression.
=== Method ===
First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run length encoded.
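A minimal sketch of this transform, quantize, entropy-code pipeline, assuming the PyWavelets (pywt) package; zlib stands in for a real entropy coder, and the quantization step size and random stand-in image are illustrative:

```python
import numpy as np
import pywt
import zlib

image = np.random.rand(256, 256).astype(np.float32)   # stand-in for an image

# 1. Wavelet transform: as many coefficients as pixels, no compression yet.
coeffs = pywt.wavedec2(image, "haar", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# 2. Quantize: most detail coefficients collapse to small integers.
step = 0.05
q = np.round(arr / step).astype(np.int16)

# 3. Entropy-code the quantized coefficients.
payload = zlib.compress(q.tobytes())
print(len(payload), "bytes vs", image.nbytes, "bytes raw")

# Decode: dequantize and invert the transform.
recon = pywt.waverec2(
    pywt.array_to_coeffs(q.astype(np.float32) * step, slices,
                         output_format="wavedec2"),
    "haar")
```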
A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints".
=== Evaluation ===
==== Requirement for image compression ====
For most natural images, the spectrum density of lower frequency is higher. As a result, information of the low frequency signal (reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression:
Being able to transform more original image into the reference signal.
Highest fidelity reconstruction based on the reference signal.
Should not lead to artifacts in the image reconstructed from the reference signal alone.
==== Requirement for shift variance and ringing behavior ====
Wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below:
The transformation system contains two analysis filters (a low pass filter {\displaystyle h_{0}(n)} and a high pass filter {\displaystyle h_{1}(n)}), a decimation process, an interpolation process, and two synthesis filters ({\displaystyle g_{0}(n)} and {\displaystyle g_{1}(n)}). The compression and reconstruction system generally involves the low frequency components, namely the analysis filter {\displaystyle h_{0}(n)} for image compression and the synthesis filter {\displaystyle g_{0}(n)} for reconstruction. To evaluate such a system, we can input an impulse {\displaystyle \delta (n-n_{i})} and observe its reconstruction {\displaystyle h(n-n_{i})}; the optimal wavelets are those that bring minimum shift variance and sidelobes to {\displaystyle h(n-n_{i})}. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters:
By observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e. it is less shift variant).
Another important issue for image compression and reconstruction is the system's oscillatory behavior, which might lead to severe undesired artifacts in the reconstructed image. To achieve this, the wavelet filters should have a large peak to sidelobe ratio.
So far we have discussed the one-dimensional transformation of the image compression system. This issue can be extended to two dimensions, for which a more general term, shiftable multiscale transforms, has been proposed.
==== Derivation of impulse response ====
As mentioned earlier, impulse response can be used to evaluate the image compression/reconstruction system.
For the input sequence {\displaystyle x(n)=\delta (n-n_{i})}, the reference signal {\displaystyle r_{1}(n)} after one level of decomposition is obtained by passing {\displaystyle x(n)*h_{0}(n)} through decimation by a factor of two, where {\displaystyle h_{0}(n)} is a low pass filter. Similarly, the next reference signal {\displaystyle r_{2}(n)} is obtained by passing {\displaystyle r_{1}(n)*h_{0}(n)} through decimation by a factor of two. After L levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every {\displaystyle 2^{L}} samples:
{\displaystyle h_{A}^{(L)}(n,n_{i})=f_{h0}^{(L)}(n-n_{i}/2^{L})}.
On the other hand, to reconstruct the signal x(n), we can consider a reference signal {\displaystyle r_{L}(n)=\delta (n-n_{j})}. If the detail signals {\displaystyle d_{i}(n)} are equal to zero for {\displaystyle 1\leq i\leq L}, then the reference signal at the previous stage ({\displaystyle L-1} stage) is {\displaystyle r_{L-1}(n)=g_{0}(n-2n_{j})}, which is obtained by interpolating {\displaystyle r_{L}(n)} and convolving with {\displaystyle g_{0}(n)}. Similarly, the procedure is iterated to obtain the reference signal {\displaystyle r(n)} at stages {\displaystyle L-2,L-3,....,1}. After L iterations, the synthesis impulse response is calculated:
{\displaystyle h_{s}^{(L)}(n,n_{i})=f_{g0}^{(L)}(n/2^{L}-n_{j})},
which relates the reference signal {\displaystyle r_{L}(n)} and the reconstructed signal.
To obtain the overall L level analysis/synthesis system, the analysis and synthesis responses are combined as below:
{\displaystyle h_{AS}^{(L)}(n,n_{i})=\sum _{k}f_{h0}^{(L)}(k-n_{i}/2^{L})f_{g0}^{(L)}(n/2^{L}-k)}.
Finally, the peak to first sidelobe ratio and the average second sidelobe of the overall impulse response {\displaystyle h_{AS}^{(L)}(n,n_{i})} can be used to evaluate the wavelet image compression performance.
== Comparison with Fourier transform and time-frequency analysis ==
Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function. The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses.
== Other practical applications ==
The wavelet transform can provide us with the frequency of the signals and the time associated with those frequencies, making it very convenient for application in numerous fields. For instance, signal processing of accelerations for gait analysis, for fault detection, for the analysis of seasonal displacements of landslides, for design of low power pacemakers and also in ultra-wideband (UWB) wireless communications.
== Time-causal wavelets ==
For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. Time-causal wavelet representations have been developed by Szu et al. and by Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation.
== Synchro-squeezed transform ==
Synchro-squeezed transform can significantly enhance temporal and frequency resolution of time-frequency representation obtained using conventional wavelet transform.
== See also ==
Binomial QMF (also known as Daubechies wavelet)
Biorthogonal nearly coiflet basis, which shows that wavelet for image compression can also be nearly coiflet (nearly orthogonal)
Chirplet transform
Complex wavelet transform
Constant-Q transform
Continuous wavelet transform
Daubechies wavelet
Discrete wavelet transform
DjVu format uses wavelet-based IW44 algorithm for image compression
Dual wavelet
ECW, a wavelet-based geospatial image format designed for speed and processing efficiency
Gabor wavelet
Haar wavelet
JPEG 2000, a wavelet-based image compression standard
Least-squares spectral analysis
Morlet wavelet
Multiresolution analysis
MrSID, the image format developed from original wavelet compression research at Los Alamos National Laboratory (LANL)
S transform
Scaleograms, a type of spectrogram generated using wavelets instead of a short-time Fourier transform
Set partitioning in hierarchical trees
Short-time Fourier transform
Stationary wavelet transform
Time–frequency representation
Wavelet
== References ==
== External links ==
Amara Graps (June 1995). "An Introduction to Wavelets". IEEE Computational Science and Engineering. 2 (2): 50–61. doi:10.1109/99.388960.
Robi Polikar (January 12, 2001). "The Wavelet Tutorial".
Concise Introduction to Wavelets by René Puschinger
842, 8-4-2, or EFT is a data compression algorithm. It is a variation on Lempel–Ziv compression with a limited dictionary length. With typical data, 842 gives 80 to 90 percent of the compression of LZ77 with much faster throughput and less memory use. Hardware implementations also provide minimal use of energy and minimal chip area.
842 compression can be used for virtual memory compression, for databases — especially column-oriented stores, and when streaming input-output — for example to do backups or to write to log files.
== Algorithm ==
The algorithm operates on blocks of 8 bytes with sub-phrases of 8, 4 and 2 bytes. A hash of each phrase is used to look up a hash table with offsets to a sliding window buffer of past encoded data. Matches can be replaced by the offset, so the result for each block can be some mixture of matched data and new literal data.
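A toy sketch of the phrase-matching idea only, emitting (match, length, offset) or literal operations for 8-, 4- and 2-byte sub-phrases; the real 842 bitstream format, hash table, and bounded sliding window are not reproduced here:

```python
def toy_phrase_scan(data: bytes):
    table = {}   # phrase -> offset of an earlier occurrence
    ops = []
    i = 0
    while i < len(data):
        for length in (8, 4, 2):                 # try longest sub-phrase first
            phrase = data[i:i + length]
            if len(phrase) == length and phrase in table:
                ops.append(("match", length, table[phrase]))
                break
        else:
            length = 1
            ops.append(("literal", data[i:i + 1]))
        for j in range(i, i + length):           # index phrases we pass over
            for plen in (8, 4, 2):
                p = data[j:j + plen]
                if len(p) == plen:
                    table.setdefault(p, j)
        i += length
    return ops

print(toy_phrase_scan(b"abcdefghabcdefgh"))      # second half becomes one match
```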
== Implementations ==
IBM added hardware accelerators and instructions for 842 compression to their Power processors from POWER7+ onward. In addition, POWER9 and Power10 added hardware acceleration for the RFC 1951 Deflate algorithm, which is used by zlib and gzip.
A device driver for hardware-assisted 842 compression on a POWER processor was added to the Linux kernel in 2011. More recently, Linux can fall back to a software implementation, which is much slower. zram, a Linux kernel module for compressed RAM drives, can be configured to use 842.
Researchers have implemented 842 using graphics processing units and found about 30x faster decompression using dedicated GPUs. An open source library provides 842 for CUDA and OpenCL. An FPGA implementation of 842 demonstrated 13 times better throughput than a software implementation.
== References ==
Frame rate, most commonly expressed in frame/s, frames per second or FPS, is typically the frequency (rate) at which consecutive images (frames) are captured or displayed. This definition applies to film and video cameras, computer animation, and motion capture systems. In these contexts, frame rate may be used interchangeably with frame frequency and refresh rate, which are expressed in hertz. Additionally, in the context of computer graphics performance, FPS is the rate at which a system, particularly a GPU, is able to generate frames, and refresh rate is the frequency at which a display shows completed frames. In electronic camera specifications frame rate refers to the maximum possible rate frames could be captured, but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate.
== Human vision ==
The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Modulated light (such as a computer display) is perceived as stable by the majority of participants in studies when the rate is higher than 50 Hz. This perception of modulated light as steady is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. With regard to image recognition, people have been found to recognize a specific image in an unbroken series of different images, each of which lasts as little as 13 milliseconds. Persistence of vision sometimes accounts for very short single-millisecond visual stimulus having a perceived duration of between 100 ms and 400 ms. Multiple stimuli that are very short are sometimes perceived as a single stimulus, such as a 10 ms green flash of light immediately followed by a 10 ms red flash of light perceived as a single yellow flash of light.
== Film and video ==
=== Silent film ===
Early silent films had stated frame rates anywhere from 16 to 24 frames per second (FPS), but since the cameras were hand-cranked, the rate often changed during the scene to fit the mood. Projectionists could also change the frame rate in the theater by adjusting a rheostat controlling the voltage powering the film-carrying mechanism in the projector. Film companies often intended for theaters to show their silent films at a higher frame rate than that at which they were filmed. These frame rates were enough for the sense of motion, but it was perceived as jerky motion. To minimize the perceived flicker, projectors employed dual- and triple-blade shutters, so each frame was displayed two or three times, increasing the flicker rate to 48 or 72 hertz and reducing eye strain. Thomas Edison said that 46 frames per second was the minimum needed for the eye to perceive motion: "Anything less will strain the eye." In the mid to late 1920s, the frame rate for silent film increased to 20–26 FPS.
=== Sound film ===
When sound film was introduced in 1926, variations in film speed were no longer tolerated, as the human ear is more sensitive than the eye to changes in frequency. Many theaters had shown silent films at 22 to 26 FPS, which is why the industry chose 24 FPS for sound film as a compromise. From 1927 to 1930, as various studios updated equipment, the rate of 24 FPS became standard for 35 mm sound film. At 24 FPS, the film travels through the projector at a rate of 456 millimetres (18.0 in) per second. This allowed simple two-blade shutters to give a projected series of images at 48 per second, satisfying Edison's recommendation. Many modern 35 mm film projectors use three-blade shutters to give 72 images per second—each frame is flashed on screen three times.
=== Animation ===
In drawn animation, moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frame per second), meaning there are only 12 drawings per second. Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production cost.
Animation for most "Saturday morning cartoons" was produced as cheaply as possible and was most often shot on "threes" or even "fours", i.e. three or four frames per drawing. This translates to only 8 or 6 drawings per second respectively. Anime is also usually drawn on threes or twos.
=== Modern video standards ===
Due to the mains frequency of electric grids, analog television broadcasting was developed with frame rates of 50 Hz (most of the world) or 60 Hz (Canada, US, Mexico, Philippines, Japan, South Korea). The frequency of the electricity grid was extremely stable, and it was therefore logical to use it for synchronization.
The introduction of color television technology made it necessary to lower that 60 FPS frequency by 0.1% to avoid "dot crawl", a display artifact appearing on legacy black-and-white displays, showing up on highly-color-saturated surfaces. It was found that by lowering the frame rate by 0.1%, the undesirable effect was minimized.
As of 2021, video transmission standards in North America, Japan, and South Korea are still based on 60/1.001 ≈ 59.94 images per second. Two sizes of images are typically used: 1920×1080 ("1080i/p") and 1280×720 ("720p"). Confusingly, interlaced formats are customarily stated at 1/2 their image rate, 29.97/25 FPS, and double their image height, but these statements are purely custom; in each format, 60 images per second are produced. A resolution of 1080i produces 59.94 or 50 1920×540 images, each squashed to half-height in the photographic process and stretched back to fill the screen on playback in a television set. The 720p format produces 59.94/50 or 29.97/25 1280×720p images, not squeezed, so that no expansion or squeezing of the image is necessary. This confusion was industry-wide in the early days of digital video software, with much software being written incorrectly, the developers believing that only 29.97 images were expected each second, which was incorrect. While it was true that each picture element was polled and sent only 29.97 times per second, the pixel location immediately below that one was polled 1/60 of a second later, part of a completely separate image for the next 1/60-second frame.
At its native 24 FPS rate, film could not be displayed on 60 Hz video without the necessary pulldown process, often leading to "judder": to convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice, while every even frame is tripled. This creates uneven motion, appearing stroboscopic. Other conversions have similar uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly sampled for standard frame rates such as 24, 48 and 60 FPS film or 25, 30, 50 or 60 FPS video. Of course these higher frame rates may also be displayed at their native rates.
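A sketch of this 3:2 pulldown pattern: 24 film frames become 60 video images by tripling even-indexed frames and doubling odd-indexed ones:

```python
def pulldown_24_to_60(frames):
    out = []
    for idx, frame in enumerate(frames):
        # 12 frames shown three times + 12 frames shown twice = 60 images.
        out.extend([frame] * (3 if idx % 2 == 0 else 2))
    return out

second_of_film = list(range(24))
print(len(pulldown_24_to_60(second_of_film)))   # 60
```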
=== Electronic camera specifications ===
In electronic camera specifications frame rate refers to the maximum possible rate frames that can be captured (e.g. if the exposure time were set to near-zero), but in practice, other settings (such as exposure time) may reduce the actual frequency to a lower number than the frame rate.
== Computer games ==
In computer video games, frame rate plays an important part in the experience as, unlike film, games are rendered in real-time. 60 frames per second has for a long time been considered the minimum frame rate for smoothly animated game play. Video games designed for PAL markets, before the sixth generation of video game consoles, had lower frame rates by design due to the 50 Hz output. This noticeably made fast-paced games, such as racing or fighting games, run slower; less frequently developers accounted for the frame rate difference and altered the game code to achieve (nearly) identical pacing across both regions, with varying degrees of success. Computer monitors marketed to competitive PC gamers can hit 360 Hz, 500 Hz, or more. High frame rates make action scenes look less blurry, such as sprinting through the wilderness in an open world game, spinning rapidly to face an opponent in a first-person shooter, or keeping track of details during an intense fight in a multiplayer online battle arena. Input latency is also reduced. Some people may have difficulty perceiving the differences between high frame rates, though.
Frame time is related to frame rate, but it measures the time between frames. A game could maintain an average of 60 frames per second but appear choppy because of a poor frame time. Game reviews sometimes average the worst 1% of frame rates, reported as the 99th percentile, to measure how choppy the game appears. A small difference between the average frame rate and 99th percentile would generally indicate a smooth experience. To mitigate the choppiness of poorly optimized games, players can set frame rate caps closer to their 99% percentile.
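A sketch of the "1% low" computation described here, taking the 99th percentile of frame times (the slowest 1% of frames) and expressing it as a frame rate; the simulated frame-time distribution is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
frame_times_ms = rng.lognormal(mean=np.log(16.7), sigma=0.25, size=10_000)

avg_fps = 1000.0 / frame_times_ms.mean()
low1_fps = 1000.0 / np.percentile(frame_times_ms, 99)   # worst 1% of frames
print(f"average {avg_fps:.1f} FPS, 1% low {low1_fps:.1f} FPS")
```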
When a game's frame rate is different than the display's refresh rate, screen tearing can occur. Vsync mitigates this, but it caps the frame rate to the display's refresh rate, increases input lag, and introduces judder. Variable refresh rate displays automatically set their refresh rate equal to the game's frame rate, as long as it is within the display's supported range.
== Frame rate up-conversion ==
Frame rate up-conversion (FRC) is the process of increasing the temporal resolution of a video sequence by synthesizing one or more intermediate frames between two consecutive frames. A low frame rate causes aliasing, yields abrupt motion artifacts, and degrades the video quality. Consequently, the temporal resolution is an important factor affecting video quality. Algorithms for FRC are widely used in applications, including visual quality enhancement, video compression and slow-motion video generation.
=== Methods ===
Most FRC methods can be categorized into optical flow or kernel-based and pixel hallucination-based methods.
==== Flow-based FRC ====
Flow-based methods linearly combine predicted optical flows between two input frames to approximate flows from the target intermediate frame to the input frames. They also propose flow reversal (projection) for more accurate image warping. Moreover, there are algorithms that give different weights of overlapped flow vectors depending on the object depth of the scene via a flow projection layer.
==== Pixel hallucination-based FRC ====
Pixel hallucination-based methods use deformable convolution to the center frame generator by replacing optical flows with offset vectors. There are algorithms that also interpolate middle frames with the help of deformable convolution in the feature domain. However, since these methods directly hallucinate pixels unlike the flow-based FRC methods, the predicted frames tend to be blurry when fast-moving objects are present.
== See also ==
Delta timing
Federal Standard 1037C
Film-out
Flicker fusion threshold
Glossary of video terms
High frame rate
List of motion picture film formats
Micro stuttering
MIL-STD-188
Movie projector
Time-lapse photography
Video compression
== References ==
== External links ==
"Temporal Rate Conversion"—a very detailed guide about the visual interference of TV, video & PC (Wayback Machine copy)
Compare frames per second: which looks better?—a web tool to visually compare differences in frame rate and motion blur.
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to various topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering, and mechanical engineering, as well as other complex fields such as meteorology.
Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824) who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854 which stated, "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle known as the Carnot cycle and gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat.
The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics.
== Introduction ==
A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law defines the existence of a quantity called entropy, that describes the direction, thermodynamically, that a system can evolve and quantifies the state of order of a system and that can be used to quantify the useful work that can be extracted from the system.
In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.
This article is focused mainly on classical thermodynamics which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.
== History ==
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.
The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865.
During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed, by studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in such a manner, one can determine if a process would occur spontaneously. Also Pierre Duhem in the 19th century wrote about chemical thermodynamics. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes.
== Etymology ==
Thermodynamics has an intricate etymology.
By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, thermo- ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word dynamics ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power".
In 1849, the adjective thermo-dynamic is used by William Thomson.
In 1854, the noun thermo-dynamics is used by Thomson and William Rankine to represent the science of generalized heat engines.
Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, instead using the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology.
== Branches of thermodynamics ==
The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.
=== Classical thermodynamics ===
Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium, that uses macroscopic, measurable properties. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large scale, and measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics.
=== Statistical mechanics ===
Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level.
=== Chemical thermodynamics ===
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation.
=== Equilibrium thermodynamics ===
Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings.
=== Non-equilibrium thermodynamics ===
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
== Laws of thermodynamics ==
Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following.
=== Zeroth law ===
The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers.
The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law.
=== First law ===
The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy,
Δ
U
{\displaystyle \Delta U}
, of a thermodynamic system is equal to the energy gained as heat,
Q
{\displaystyle Q}
, less the thermodynamic work,
W
{\displaystyle W}
, done by the system on its surroundings.
Δ
U
=
Q
−
W
{\displaystyle \Delta U=Q-W}
.
where
Δ
U
{\displaystyle \Delta U}
denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible),
Q
{\displaystyle Q}
denotes the quantity of energy supplied to the system as heat, and
W
{\displaystyle W}
denotes the amount of thermodynamic work done by the system on its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible; work
W
{\displaystyle W}
done by a system on its surrounding requires that the system's internal energy
U
{\displaystyle U}
decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat
Q
{\displaystyle Q}
by an external energy source or as work by an external machine acting on the system (so that
U
{\displaystyle U}
is recovered) to make the system work continuously.
For processes that include transfer of matter, a further statement is needed: With due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then
U
0
=
U
1
+
U
2
{\displaystyle U_{0}=U_{1}+U_{2}}
,
where U0 denotes the internal energy of the combined system, and U1 and U2 denote the internal energies of the respective separated systems.
Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state.
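As a minimal worked example of the sign convention in $\Delta U = Q - W$ (the numbers below are illustrative, not from the text):

```python
# First law of thermodynamics for a closed system: delta_U = Q - W.
# Illustrative numbers: a gas absorbs 500 J as heat (Q > 0) and does
# 200 J of work on its surroundings (W > 0).
Q = 500.0   # heat added to the system, in joules
W = 200.0   # work done BY the system on its surroundings, in joules

delta_U = Q - W
print(f"Change in internal energy: {delta_U:+.0f} J")  # prints +300 J
```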
=== Second law ===
A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body.
The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, no general physical principle is known that determines the rates of approach to thermodynamic equilibrium (though several have been proposed), and thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium.
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos.
=== Third law ===
The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".
Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0° R (degrees Rankine).
== System models ==
An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional, but which serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities.
Matter or energy that passes across the boundary so as to effect a change in the internal energy of the system needs to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole.
Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant volume process might occur. If the piston is allowed to move that boundary is movable while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case and a second fixed imaginary boundary across the exhaust nozzle.
Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: isolated systems, in which neither matter nor energy may cross the boundary; closed systems, in which energy may cross the boundary but matter may not; and open systems, in which both matter and energy may cross the boundary.
As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium.
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium. Thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state are said to be reversible processes.
== States and processes ==
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.
Several commonly studied thermodynamic processes are:
Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at a constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at a constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy
== Instrumentation ==
There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV=nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system.
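As a sketch of how such an idealized gas thermometer "defines" temperature through $pV = nRT$ (the sample parameters below are illustrative assumptions):

```python
# Idealized constant-pressure gas thermometer: the measured volume of a
# fixed gas sample indicates temperature via the ideal gas law pV = nRT.
R = 8.314         # molar gas constant, J/(mol*K)
n = 0.040         # amount of gas in the sample, mol (illustrative)
p = 101_325.0     # constant pressure, Pa (1 atm)

def temperature_from_volume(volume_m3: float) -> float:
    """Return the temperature in kelvin indicated by a volume reading."""
    return p * volume_m3 / (n * R)

print(temperature_from_volume(0.000982))  # ~299 K, about room temperature
```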
A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as temperature reservoir when used to cool power plants.
== Conjugate variables ==
The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are:
Pressure-volume (the mechanical parameters);
Temperature-entropy (thermal parameters);
Chemical potential-particle number (material parameters).
== Potentials ==
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics.
The five most well known potentials are:

Internal energy: $U$
Helmholtz free energy: $F = U - TS$
Enthalpy: $H = U + pV$
Gibbs free energy: $G = U + pV - TS$
Landau potential (grand potential): $\Omega = U - TS - \sum_{i}\mu_{i}N_{i}$

where $T$ is the temperature, $S$ the entropy, $p$ the pressure, $V$ the volume, $\mu$ the chemical potential, $N$ the number of particles in the system, and $i$ the count of particle types in the system.
Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
== Axiomatic thermodynamics ==
Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics.
The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came after differed in that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states.
== Applied fields ==
== See also ==
Thermodynamic process path
=== Lists and timelines ===
List of important publications in thermodynamics
List of textbooks on thermodynamics and statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics
Thermodynamic equations
== Notes ==
== References ==
== Further reading ==
Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 978-0-674-75325-9. OCLC 32826343. A nontechnical introduction, good on historical and interpretive matters.
Kazakov, Andrei; Muzny, Chris D.; Chirico, Robert D.; Diky, Vladimir V.; Frenkel, Michael (2008). "Web Thermo Tables – an On-Line Version of the TRC Thermodynamic Tables". Journal of Research of the National Institute of Standards and Technology. 113 (4): 209–220. doi:10.6028/jres.113.016. ISSN 1044-677X. PMC 4651616. PMID 27096122.
Gibbs J.W. (1928). The Collected Works of J. Willard Gibbs Thermodynamics. New York: Longmans, Green and Co. Vol. 1, pp. 55–349.
Guggenheim E.A. (1933). Modern thermodynamics by the methods of Willard Gibbs. London: Methuen & co. ltd.
Denbigh K. (1981). The Principles of Chemical Equilibrium: With Applications in Chemistry and Chemical Engineering. London: Cambridge University Press.
Stull, D.R., Westrum Jr., E.F. and Sinke, G.C. (1969). The Chemical Thermodynamics of Organic Compounds. London: John Wiley and Sons, Inc.
Bazarov I.P. (2010). Thermodynamics: Textbook. St. Petersburg: Lan publishing house. p. 384. ISBN 978-5-8114-1003-3. 5th ed. (in Russian)
Bawendi Moungi G., Alberty Robert A. and Silbey Robert J. (2004). Physical Chemistry. J. Wiley & Sons, Incorporated.
Alberty Robert A. (2003). Thermodynamics of Biochemical Reactions. Wiley-Interscience.
Alberty Robert A. (2006). Biochemical Thermodynamics: Applications of Mathematica. Vol. 48. John Wiley & Sons, Inc. pp. 1–458. ISBN 978-0-471-75798-6. PMID 16878778.
Dill Ken A., Bromberg Sarina (2011). Molecular Driving Forces: Statistical Thermodynamics in Biology, Chemistry, Physics, and Nanoscience. Garland Science. ISBN 978-0-8153-4430-8.
M. Scott Shell (2015). Thermodynamics and Statistical Mechanics: An Integrated Approach. Cambridge University Press. ISBN 978-1107656789.
Douglas E. Barrick (2018). Biomolecular Thermodynamics: From Theory to Applications. CRC Press. ISBN 978-1-4398-0019-5.
The following titles are more technical:
Bejan, Adrian (2016). Advanced Engineering Thermodynamics (4 ed.). Wiley. ISBN 978-1-119-05209-8.
Cengel, Yunus A., & Boles, Michael A. (2002). Thermodynamics – an Engineering Approach. McGraw Hill. ISBN 978-0-07-238332-4. OCLC 45791449.
Dunning-Davies, Jeremy (1997). Concise Thermodynamics: Principles and Applications. Horwood Publishing. ISBN 978-1-8985-6315-0. OCLC 36025958.
Kroemer, Herbert & Kittel, Charles (1980). Thermal Physics. W.H. Freeman Company. ISBN 978-0-7167-1088-2. OCLC 32932988.
== External links ==
Media related to Thermodynamics at Wikimedia Commons
Callendar, Hugh Longbourne (1911). "Thermodynamics" . Encyclopædia Britannica. Vol. 26 (11th ed.). pp. 808–814.
Thermodynamics Data & Property Calculation Websites
Thermodynamics Educational Websites
Biochemistry Thermodynamics
Thermodynamics and Statistical Mechanics
Engineering Thermodynamics – A Graphical Approach
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Constructor theory is a proposal for a new mode of explanation in fundamental physics in the language of ergodic theory, developed by physicists David Deutsch and Chiara Marletto, at the University of Oxford, since 2012. Constructor theory expresses physical laws exclusively in terms of which physical transformations, or tasks, are possible versus which are impossible, and why. By allowing such counterfactual statements into fundamental physics, it allows new physical laws to be expressed, such as the constructor theory of information.
== Overview ==
The fundamental elements of the theory are tasks: the abstract specifications of transformations as input–output pairs of attributes. A task is impossible if there is a law of physics that forbids its being performed with arbitrarily high accuracy, and possible otherwise. When it is possible, a constructor for it can be built, again with arbitrary accuracy and reliability. A constructor is an entity that can cause the task to occur while retaining the ability to cause it again. Examples of constructors include a heat engine (a thermodynamic constructor), a catalyst (a chemical constructor) or a computer program controlling an automated factory (an example of a programmable constructor).
The theory was developed by physicists David Deutsch and Chiara Marletto. It draws together ideas from diverse areas, including thermodynamics, statistical mechanics, information theory, and quantum computation.
Quantum mechanics and all other physical theories are claimed to be subsidiary theories, and quantum information becomes a special case of superinformation.
Chiara Marletto's constructor theory of life builds on constructor theory.
== Motivations ==
According to Deutsch, current theories of physics, based on quantum mechanics, do not adequately explain why some transformations between states of being are possible and some are not. For example, a drop of dye can dissolve in water, but thermodynamics shows that the reverse transformation, of the dye clumping back together, is effectively impossible. We do not know at a quantum level why this should be so. Constructor theory provides an explanatory framework built on the transformations themselves, rather than the components.
Information has the property that a given statement might have said something else, and one of these alternatives would not be true. The untrue alternative is said to be "counterfactual". Conventional physical theories do not model such counterfactuals. However, the link between information and such physical ideas as the entropy in a thermodynamic system is so strong that they are sometimes identified. For example, the area of a black hole's event horizon is a measure both of the hole's entropy and of the information that it contains, as per the Bekenstein bound. Constructor theory is an attempt to bridge this gap, providing a physical model that can express counterfactuals, thus allowing the laws of information and computation to be viewed as laws of physics.
== Outline ==
In constructor theory, a transformation or change is described as a task. A constructor is a physical entity that is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory, everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible and impossible tasks. Counterfactuals are thus fundamental statements, and the properties of information may be described by physical laws. If a system has a set of attributes, then the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set. If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.
Information, or a given task, does not rely on a specific constructor. Any suitable constructor will serve. This ability of information to be carried on different physical systems or media is described as interoperability and arises as the principle that the combination of two information media is also an information medium.
Media capable of carrying out quantum computations are called superinformation media and are characterised by specific properties. Broadly, certain copying tasks on their states are impossible tasks. This is claimed to give rise to all the known differences between quantum and classical information.
== See also ==
Calculating Space
Computability theory
Undecidable problem
Quantum circuit
Generalized probabilistic theory
== References ==
== Bibliography ==
Deutsch, David (December 2013). "Constructor theory". Synthese. 190 (18): 4331–4359. arXiv:1210.7439. Bibcode:2013Synth.190.4331D. doi:10.1007/s11229-013-0279-z. S2CID 16083339.
Marletto, Chiara (2021). The Science of Can and Can't. Penguin. ISBN 9780525521921.
== External links ==
Official website
"Deeper Than Quantum Mechanics—David Deutsch’s New Theory of Reality" Mediums.com's The Physics arXiv Blog. 28 May 2014.
Kehoe, J.; "To What Extent Do We See with Mathematics?". Scientific American Guest blog. 2013.
"Formulating Science in Terms of Possible and Impossible Tasks". edge.org. 12 June 2014.
"Reconstructing physics: The universe is information". NewScientist.com (Two leading quantum physicists say information is key to understanding the universe. Their constructor theory puts it centre stage). 21 May 2014. | Wikipedia/Constructor_theory |
A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also referred to as a cryptographic random number generator (CRNG).
== Background ==
Most cryptographic applications require random numbers, for example:
key generation
initialization vectors
nonces
salts in certain signature schemes, including ECDSA and RSASSA-PSS
token generation
The "quality" of the randomness required for these applications varies. For example, creating a nonce in some protocols needs only uniqueness. On the other hand, the generation of a master key requires a higher quality, such as more entropy. And in the case of one-time pads, the information-theoretic guarantee of perfect secrecy only holds if the key material comes from a true random source with high entropy, and thus just any kind of pseudorandom number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-quality source, generally the operating system's randomness API. However, unexpected correlations have been found in several such ostensibly independent processes. From an information-theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, numbers are needed with more randomness than the available entropy can provide. Also, the processes to extract randomness from a running system are slow in actual practice. In such instances, a CSPRNG can sometimes be used. A CSPRNG can "stretch" the available entropy over more bits.
== Requirements ==
The requirements of an ordinary PRNG are also satisfied by a cryptographically secure PRNG, but the reverse is not true. CSPRNG requirements fall into two groups:
They pass statistical randomness tests:
Every CSPRNG should satisfy the next-bit test. That is, given the first k bits of a random sequence, there is no polynomial-time algorithm that can predict the (k+1)th bit with probability of success non-negligibly better than 50%. Andrew Yao proved in 1982 that a generator passing the next-bit test will pass all other polynomial-time statistical tests for randomness.
They hold up well under serious attack, even when part of their initial or running state becomes available to an attacker:
Every CSPRNG should withstand "state compromise extension attacks". In the event that part or all of its state has been revealed (or guessed correctly), it should be impossible to reconstruct the stream of random numbers prior to the revelation. Additionally, if there is an entropy input while running, it should be infeasible to use knowledge of the input's state to predict future conditions of the CSPRNG state.
For instance, if the PRNG under consideration produces output by computing bits of pi in sequence, starting from some unknown point in the binary expansion, it may well satisfy the next-bit test and thus be statistically random, as pi is conjectured to be a normal number. However, this algorithm is not cryptographically secure; an attacker who determines which bit of pi is currently in use (i.e. the state of the algorithm) will be able to calculate all preceding bits as well.
Most PRNGs are not suitable for use as CSPRNGs and will fail on both counts. First, while most PRNGs' outputs appear random to assorted statistical tests, they do not resist determined reverse engineering. Specialized statistical tests may be found specially tuned to such a PRNG that shows the random numbers not to be truly random. Second, for most PRNGs, when their state has been revealed, all past random numbers can be retrodicted, allowing an attacker to read all past messages, as well as future ones.
CSPRNGs are designed explicitly to resist this type of cryptanalysis.
== Definitions ==
In the asymptotic setting, a family of deterministic polynomial time computable functions $G_{k} \colon \{0,1\}^{k} \to \{0,1\}^{p(k)}$ for some polynomial $p$ is a pseudorandom number generator (PRNG, or PRG in some references) if it stretches the length of its input ($p(k) > k$ for any $k$), and if its output is computationally indistinguishable from true randomness, i.e. for any probabilistic polynomial time algorithm $A$, which outputs 1 or 0 as a distinguisher,

$\left|\Pr_{x \gets \{0,1\}^{k}}[A(G(x)) = 1] - \Pr_{r \gets \{0,1\}^{p(k)}}[A(r) = 1]\right| < \mu(k)$

for some negligible function $\mu$. (The notation $x \gets X$ means that $x$ is chosen uniformly at random from the set $X$.)
There is an equivalent characterization: for any function family $G_{k} \colon \{0,1\}^{k} \to \{0,1\}^{p(k)}$, $G$ is a PRNG if and only if the next output bit of $G$ cannot be predicted by a polynomial time algorithm.
A forward-secure PRNG with block length $t(k)$ is a PRNG $G_{k} \colon \{0,1\}^{k} \to \{0,1\}^{k} \times \{0,1\}^{t(k)}$, where the input string $s_{i}$ of length $k$ is the current state at period $i$, and the output ($s_{i+1}$, $y_{i}$) consists of the next state $s_{i+1}$ and the pseudorandom output block $y_{i}$ of period $i$, that withstands state compromise extensions in the following sense: if the initial state $s_{1}$ is chosen uniformly at random from $\{0,1\}^{k}$, then for any $i$, the sequence $(y_{1}, y_{2}, \dots, y_{i}, s_{i+1})$ must be computationally indistinguishable from $(r_{1}, r_{2}, \dots, r_{i}, s_{i+1})$, in which the $r_{i}$ are chosen uniformly at random from $\{0,1\}^{t(k)}$.
Any PRNG $G \colon \{0,1\}^{k} \to \{0,1\}^{p(k)}$ can be turned into a forward-secure PRNG with block length $p(k) - k$ by splitting its output into the next state and the actual output. This is done by setting $G(s) = G_{0}(s) \Vert G_{1}(s)$, in which $|G_{0}(s)| = |s| = k$ and $|G_{1}(s)| = p(k) - k$; then $G$ is a forward-secure PRNG with $G_{0}$ as the next state and $G_{1}$ as the pseudorandom output block of the current period.
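A minimal sketch of this split construction, with SHA-256 under distinct labels standing in for the length-doubling generator $G$ (an illustrative assumption; SHA-256 is not a proven PRG):

```python
import hashlib

STATE_BYTES = 32  # k = 256 bits

def prg(seed: bytes) -> bytes:
    """Stand-in length-doubling generator G: {0,1}^k -> {0,1}^{2k}.

    Illustrative only: G is modeled with SHA-256 under two labels; a
    proof-backed construction would use an actual PRG.
    """
    return (hashlib.sha256(b"state" + seed).digest() +
            hashlib.sha256(b"output" + seed).digest())

def next_block(state: bytes) -> tuple[bytes, bytes]:
    """Forward-secure step: G(s) = G0(s) || G1(s).

    G0(s) becomes the next state, G1(s) is the output block. Discarding
    the old state means earlier outputs cannot be reconstructed from a
    later state compromise.
    """
    expanded = prg(state)
    return expanded[:STATE_BYTES], expanded[STATE_BYTES:]

state = b"\x00" * STATE_BYTES  # in practice: seeded from an entropy source
for period in range(3):
    state, block = next_block(state)
    print(period, block.hex())
```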
== Entropy extraction ==
Santha and Vazirani proved that several bit streams with weak randomness can be combined to produce a higher-quality, quasi-random bit stream.
Even earlier, John von Neumann proved that a simple algorithm can remove a considerable amount of the bias in any bit stream, which should be applied to each bit stream before using any variation of the Santha–Vazirani design.
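Von Neumann's debiasing step is simple enough to sketch: read non-overlapping bit pairs, emit the first bit of each 01 or 10 pair, and discard 00 and 11 pairs (this assumes the input bits are independent):

```python
import random

def von_neumann_extract(bits):
    """Debias a stream of independent but possibly biased bits.

    Non-overlapping pairs: 01 emits 0, 10 emits 1, 00/11 are discarded.
    """
    return [b1 for b1, b2 in zip(bits[0::2], bits[1::2]) if b1 != b2]

# A heavily biased stream still yields unbiased (though fewer) output bits.
biased = [1 if random.random() < 0.9 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))  # mean close to 0.5
```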
== Designs ==
CSPRNG designs are divided into two classes:
Designs based on cryptographic primitives such as ciphers and cryptographic hashes
Designs based on mathematical problems thought to be hard
=== Designs based on cryptographic primitives ===
A secure block cipher can be converted into a CSPRNG by running it in counter mode using, for example, a special construct that the NIST in SP 800-90A calls CTR DRBG. CTR_DRBG typically uses Advanced Encryption Standard (AES).
AES-CTR_DRBG is often used as a random number generator in systems that use AES encryption.
The NIST CTR_DRBG scheme erases the key after the requested randomness is output by running additional cycles. This is wasteful from a performance perspective, but does not immediately cause issues with forward secrecy. However, realizing the performance implications, the NIST recommends an "extended AES-CTR-DRBG interface" for its Post-Quantum Cryptography Project submissions. This interface allows multiple sets of randomness to be generated without intervening erasure, only erasing when the user explicitly signals the end of requests. As a result, the key could remain in memory for an extended time if the "extended interface" is misused. Newer "fast-key-erasure" RNGs erase the key with randomness as soon as randomness is requested.
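A fast-key-erasure generator of the kind just described can be sketched with AES-CTR via the Python `cryptography` package; the class name and structure here are illustrative, not NIST's CTR_DRBG:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class FastKeyErasureRNG:
    """Sketch of a fast-key-erasure RNG: each request produces the caller's
    randomness plus a fresh key, and the old key is discarded immediately."""

    def __init__(self):
        self._key = os.urandom(32)  # initial seed from the OS entropy source

    def random_bytes(self, n: int) -> bytes:
        # AES-256-CTR as the keystream generator. The all-zero nonce is
        # safe here only because the key is replaced on every call.
        cipher = Cipher(algorithms.AES(self._key), modes.CTR(b"\x00" * 16))
        stream = cipher.encryptor().update(b"\x00" * (32 + n))
        self._key = stream[:32]   # first 32 bytes become the next key
        return stream[32:]        # the rest is the caller's randomness

rng = FastKeyErasureRNG()
print(rng.random_bytes(16).hex())
```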
A stream cipher can be converted into a CSPRNG. This has been done with RC4, ISAAC, and ChaCha20, to name a few.
A cryptographically secure hash might also be a base of a good CSPRNG, using, for example, a construct that NIST calls Hash DRBG.
An HMAC primitive can be used as a base of a CSPRNG, for example, as part of the construct that NIST calls HMAC DRBG.
=== Number-theoretic designs ===
The Blum Blum Shub algorithm has a security proof based on the difficulty of the quadratic residuosity problem. Since the only known way to solve that problem is to factor the modulus, it is generally regarded that the difficulty of integer factorization provides a conditional security proof for the Blum Blum Shub algorithm. However the algorithm is very inefficient and therefore impractical unless extreme security is needed.
The Blum–Micali algorithm has a security proof based on the difficulty of the discrete logarithm problem but is also very inefficient.
Daniel Brown of Certicom wrote a 2006 security proof for Dual EC DRBG, based on the assumed hardness of the Decisional Diffie–Hellman assumption, the x-logarithm problem, and the truncated point problem. The 2006 proof explicitly assumes a lower outlen (amount of bits provided per iteration) than in the Dual_EC_DRBG standard, and that the P and Q in the Dual_EC_DRBG standard (which were revealed in 2013 to be probably backdoored by NSA) are replaced with non-backdoored values.
=== Practical schemes ===
"Practical" CSPRNG schemes not only include an CSPRNG algorithm, but also a way to initialize ("seed") it while keeping the seed secret. A number of such schemes have been defined, including:
Implementations of /dev/random in Unix-like systems.
Yarrow, which attempts to evaluate the entropic quality of its seeding inputs, and uses SHA-1 and 3DES internally. Yarrow was used in macOS and other Apple operating systems until about December 2019, after which they switched to Fortuna.
Fortuna, the successor to Yarrow, which does not attempt to evaluate the entropic quality of its inputs; it uses SHA-256 and "any good block cipher". Fortuna is used in FreeBSD. Apple changed to Fortuna for most or all Apple OSs beginning around Dec. 2019.
The Linux kernel CSPRNG, which uses ChaCha20 to generate data, and BLAKE2s to ingest entropy.
arc4random, a CSPRNG in Unix-like systems that seeds from /dev/random. It was originally based on RC4, but all main implementations now use ChaCha20.
CryptGenRandom, part of Microsoft's CryptoAPI, offered on Windows. Different versions of Windows use different implementations.
ANSI X9.17 standard (Financial Institution Key Management (wholesale)), which has been adopted as a FIPS standard as well. It takes as input a TDEA (keying option 2) key bundle k and (the initial value of) a 64-bit random seed s. Each time a random number is required, it executes the following steps:
Obviously, the technique is easily generalized to any block cipher; AES has been suggested. If the key k is leaked, the entire X9.17 stream can be predicted; this weakness is cited as a reason for creating Yarrow.
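The per-request steps alluded to above are, in outline: encrypt a timestamp to get an intermediate value $T$, output $R = E_k(T \oplus s)$, and update the seed as $s \leftarrow E_k(T \oplus R)$. A sketch, using AES in place of TDEA per the generalization just noted (class and variable names are our own):

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class AnsiX917RNG:
    """Sketch of the ANSI X9.17 generator. The standard uses TDEA with
    64-bit blocks; AES-128 stands in here, so blocks are 16 bytes."""

    def __init__(self, key: bytes, seed: bytes):
        self._enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        self._seed = seed  # one block

    def next_block(self) -> bytes:
        t = self._enc.update(time.time_ns().to_bytes(16, "big"))  # T = E_k(timestamp)
        r = self._enc.update(_xor(t, self._seed))                 # R = E_k(T xor s)
        self._seed = self._enc.update(_xor(t, r))                 # s = E_k(T xor R)
        return r

rng = AnsiX917RNG(os.urandom(16), os.urandom(16))
print(rng.next_block().hex())
```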
All these above-mentioned schemes, save for X9.17, also mix the state of a CSPRNG with an additional source of entropy. They are therefore not "pure" pseudorandom number generators, in the sense that the output is not completely determined by their initial state. This addition aims to prevent attacks even if the initial state is compromised.
== Standards ==
Several CSPRNGs have been standardized. For example:
FIPS 186-4
NIST SP 800-90A
NIST SP 800-90A Rev.1
ANSI X9.17-1985 Appendix C
ANSI X9.31-1998 Appendix A.2.4
ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference is maintained by NIST.
There are also standards for statistical testing of new CSPRNG designs:
A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special Publication 800-22.
== Security flaws ==
=== NSA kleptographic backdoor in the Dual_EC_DRBG PRNG ===
The Guardian and The New York Times reported in 2013 that the National Security Agency (NSA) inserted a backdoor into a pseudorandom number generator (PRNG) of NIST SP 800-90A, which allows the NSA to readily decrypt material that was encrypted with the aid of Dual EC DRBG. Both papers reported that, as independent security experts long suspected, the NSA had been introducing weaknesses into CSPRNG standard 800-90; this being confirmed for the first time by one of the top-secret documents leaked to The Guardian by Edward Snowden. The NSA worked covertly to get its own version of the NIST draft security standard approved for worldwide use in 2006. The leaked document states that "eventually, NSA became the sole editor". In spite of the known potential for a kleptographic backdoor and other known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Security continued using Dual_EC_DRBG until the backdoor was confirmed in 2013. RSA Security received a $10 million payment from the NSA to do so.
=== DUHK attack ===
On October 23, 2017, Shaanan Cohney, Matthew Green, and Nadia Heninger, cryptographers at the University of Pennsylvania and Johns Hopkins University, released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2 where hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm, stating "an attacker can brute-force encrypted data to discover the rest of the encryption parameters and deduce the master encryption key used to encrypt web sessions or virtual private network (VPN) connections."
=== Japanese PURPLE cipher machine ===
During World War II, Japan used a cipher machine for diplomatic communications; the United States was able to crack it and read its messages, mostly because the "key values" used were insufficiently random.
== References ==
== External links ==
RFC 4086, Randomness Requirements for Security
Java "entropy pool" for cryptographically secure unpredictable random numbers. Archived 2008-12-02 at the Wayback Machine
Java standard class providing a cryptographically strong pseudo-random number generator (PRNG).
Cryptographically Secure Random number on Windows without using CryptoAPI
Conjectured Security of the ANSI-NIST Elliptic Curve RNG, Daniel R. L. Brown, IACR ePrint 2006/117.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator, Daniel R. L. Brown and Kristian Gjosteen, IACR ePrint 2007/048. To appear in CRYPTO 2007.
Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator, Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/190.
Efficient Pseudorandom Generators Based on the DDH Assumption, Reza Rezaeian Farashahi and Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/321.
Analysis of the Linux Random Number Generator, Zvi Gutterman and Benny Pinkas and Tzachy Reinman.
NIST Statistical Test Suite documentation and software download.
In computer networking, linear network coding is a program in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations.
Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network.
It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands remains hard; such problems can be NP-hard and even undecidable.
== Encoding and decoding ==
In a linear network coding problem, a group of nodes $P$ is involved in moving data from $S$ source nodes to $K$ sink nodes. Each node generates new packets as linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically of size $GF(2^{s})$.
More formally, each node $p_{k}$ with indegree $\mathrm{InDeg}(p_{k}) = S$ generates a message $X_{k}$ from the linear combination of received messages $\{M_{i}\}_{i=1}^{S}$ by the formula

$X_{k} = \sum_{i=1}^{S} g_{k}^{i} \cdot M_{i},$

where the values $g_{k}^{i}$ are coefficients selected from $GF(2^{s})$. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value $X_{k}$ along with the coefficients $g_{k}^{i}$ used in the $k^{\text{th}}$ level.
Sink nodes receive these network-coded messages and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix: in reduced row echelon form, decoded packets correspond to rows of the form $e_{i} = [0 \dots 0\,1\,0 \dots 0]$.
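A toy sketch of this encode/decode cycle over the binary field $GF(2)$, where multiplication is AND and addition is XOR (real deployments more often use $GF(2^{8})$; the sizes here are illustrative):

```python
import random

M, PACKET_BITS = 4, 8  # generation size and packet length (toy sizes)
originals = [[random.randint(0, 1) for _ in range(PACKET_BITS)]
             for _ in range(M)]

def encode():
    """One coded packet: random GF(2) coefficients, payload = XOR-combination."""
    coeffs = [random.randint(0, 1) for _ in range(M)]
    payload = [0] * PACKET_BITS
    for c, pkt in zip(coeffs, originals):
        if c:
            payload = [a ^ b for a, b in zip(payload, pkt)]
    return coeffs + payload  # the coefficients travel with the packet

def decode(rows):
    """Gaussian elimination over GF(2); returns the originals if solvable."""
    rows = [r[:] for r in rows]
    for col in range(M):
        pivot = next((r for r in rows if r[col] == 1
                      and all(r[c] == 0 for c in range(col))), None)
        if pivot is None:
            return None  # not enough linearly independent packets yet
        for r in rows:
            if r is not pivot and r[col] == 1:
                for i in range(len(r)):
                    r[i] ^= pivot[i]
    decoded = [r for r in rows if any(r[:M])]   # drop redundant all-zero rows
    decoded.sort(key=lambda r: r[:M].index(1))  # pivot rows are now e_1 ... e_M
    return [r[M:] for r in decoded]

received = [encode() for _ in range(M + 2)]  # a few extra, in case of dependence
print(decode(received) == originals)  # usually True (False if combinations were dependent)
```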
== Background ==
A network is represented by a directed graph $\mathcal{G} = (V, E, C)$, where $V$ is the set of nodes or vertices, $E$ is the set of directed links (or edges), and $C$ gives the capacity of each link of $E$. Let $T(s,t)$ be the maximum possible throughput from node $s$ to node $t$. By the max-flow min-cut theorem, $T(s,t)$ is upper bounded by the minimum capacity of all cuts between these two nodes, i.e. the minimum over cuts of the sum of the capacities of the edges crossing the cut.

Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario, known as the max-flow min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial time algorithm.

However, the situation in the multicast scenario is more complicated; in fact, such an upper bound cannot be reached using traditional routing ideas. Ahlswede et al. proved that it can be achieved if additional computing tasks (combining incoming packets into one or several outgoing packets) can be done in the intermediate nodes.
== The Butterfly Network ==
The butterfly network is often used to illustrate how linear network coding can outperform routing. Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot).
If only routing were allowed, then the central link would be able to carry A or B, but not both. Suppose we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously. Meanwhile, it takes four time slots in total for both destination nodes to know A and B.
Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput.
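The arithmetic behind this figure is one-bit XOR; a toy sketch:

```python
# Butterfly network over GF(2): the bottleneck link carries A + B (XOR),
# and each destination recovers the symbol it did not receive directly.
A, B = 1, 0

relay = A ^ B                  # single center link carries the sum A + B

left_recovers_B = A ^ relay    # left gets A directly, plus A + B
right_recovers_A = B ^ relay   # right gets B directly, plus A + B

assert left_recovers_B == B and right_recovers_A == A
print("both destinations decoded A and B")
```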
== Random Linear Network Coding ==
Random linear network coding (RLNC) is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen uniformly at random from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. Note, however, that although random linear network coding has excellent throughput performance, a receiver that obtains an insufficient number of packets is extremely unlikely to be able to recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets.
=== Operation and key parameters ===
There are three key parameters in RLNC. The first is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original $M$ packets form a block, usually called a generation; the number of original packets combined and recombined together is the generation size. The second parameter is the packet size. Usually, the size of the original packets is fixed. In the case of unequally-sized packets, these can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size can be the size of the maximum transmission unit (MTU) of the underlying network protocol; for example, it can be around 1500 bytes in an Ethernet frame. The third key parameter is the Galois field used. In practice, the most commonly used Galois fields are binary extension fields, and the most commonly used field sizes are the binary field $GF(2)$ and the so-called binary-8 field ($GF(2^{8})$). In the binary field, each element is one bit long, while in binary-8 it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed number of symbols (Galois field elements), and since all the operations are performed over Galois fields, the size of the packets does not change with subsequent linear combinations.
The sources and the intermediate nodes can combine any subset of the original and previously coded packets performing linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as shown in the picture from the example.
To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. In the following example, we will illustrate this process.
Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation whose coefficients are known, since they are appended to the packet. In these equations, each of the original $M$ packets is an unknown. To solve the linear system of equations, the destination needs at least $M$ linearly independent equations (packets).
==== Example ====
In the figure, we can see an example of two packets linearly combined into a new coded packet. In the example, we have two packets, namely packet $f$ and packet $e$. The generation size of our example is two; we know this because each packet has two coding coefficients ($C_{ij}$) appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have the coding coefficients $[0,1]$ or $[1,0]$ appended, which means that it is constructed by a linear combination of zero times one of the packets plus one times the other packet. Any coded packet would have other coefficients appended. In our example, packet $f$ has the coefficients $[C_{11}, C_{12}]$ appended. Since network coding can be applied at any layer of the communication protocol, these packets can have a header from the other layers, which is ignored in the network coding operations.
Now, let us assume that the network node wants to produce a new coded packet combining packet $f$ and packet $e$. In RLNC, it will randomly choose two coding coefficients, $d_{1}$ and $d_{2}$ in the example. The node will multiply each symbol of packet $f$ by $d_{1}$, and each symbol of packet $e$ by $d_{2}$. Then, it will add the results symbol-wise to produce the new coded data. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packets.
=== Misconceptions ===
Linear network coding is still a relatively new subject, but it has been researched extensively over the last twenty years. Nevertheless, some common objections to it no longer hold:
Decoding computational complexity: Network coding decoders have improved over the years. Nowadays, the algorithms are highly efficient and parallelizable. In 2016, on Intel Core i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets, and today's algorithms can be heavily parallelized, increasing the encoding and decoding goodput even further.
Transmission overhead: It is usually thought that the transmission overhead of network coding is high due to the coding coefficients appended to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows. Each packet has $M$ coding coefficients appended, and the size of each coefficient is the number of bits needed to represent one element of the Galois field. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8). With these numbers, each packet needs $M \cdot \log_2(q) = 256$ bits, i.e. 32 bytes, of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only 2%.
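Spelled out as arithmetic (with the sizes assumed above):

```python
import math

M = 32                # generation size: packets per generation
q = 256               # Galois field size (binary-8): one byte per coefficient
packet_bytes = 1500   # Ethernet MTU

coeff_bytes = M * math.log2(q) / 8            # 32 coefficients * 1 byte each
print(coeff_bytes)                            # 32.0
print(f"{coeff_bytes / packet_bytes:.1%}")    # ~2.1% overhead per packet
```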
Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the size of the Galois field and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of $q$ elements and a generation size of $M$ packets. If the destination has not received any coded packet, we say it has $M$ degrees of freedom; then almost any coded packet will be useful and innovative. In fact, only the zero packet (all coding coefficients zero) will be non-innovative. The probability of generating the zero packet is equal to the probability of each of the $M$ coding coefficients being equal to the zero element of the Galois field, i.e., the probability of a non-innovative packet is $1/q^{M}$. With each successive innovative reception, it can be shown that the exponent in this probability is reduced by one: when the destination has received $M-1$ innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of a non-innovative packet is $1/q$. We can use this knowledge to calculate the expected number of linearly dependent packets per generation. In the worst-case scenario, when the Galois field used contains only two elements ($q = 2$), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If our generation size is 32 or 64 packets, this represents an overhead of 5% or 2.5%, respectively. If we use the binary-8 field ($q = 256$), the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the major contributors to the overhead due to linear dependencies, there are RLNC-based protocols, such as tunable sparse network coding, that exploit this knowledge. These protocols introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to limit the overhead due to linear dependencies.
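The 1.6-packet figure can be checked numerically: when $d$ degrees of freedom are still missing, a random coded packet is innovative with probability $1 - q^{-d}$, so the wasted transmissions per degree of freedom form a geometric tail. A sketch:

```python
def expected_dependent_packets(q: int, M: int) -> float:
    """Expected number of linearly dependent (wasted) coded packets while
    collecting M innovative ones, coefficients uniform over GF(q)."""
    total = 0.0
    for dof in range(1, M + 1):            # degrees of freedom still missing
        p_innovative = 1 - q ** (-dof)     # prob. a random packet helps
        total += 1 / p_innovative - 1      # expected extra (geometric) draws
    return total

print(round(expected_dependent_packets(2, 32), 2))  # ~1.61 for GF(2)
print(expected_dependent_packets(256, 32))          # ~0.004 for GF(2^8)
```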
== Applications ==
Over the years, multiple researchers and companies have integrated network coding solutions into their applications. We can list some of the applications of network coding in different areas:
VoIP: The performance of streaming services such as VoIP over wireless mesh networks can be improved with network coding by reducing the network delay and jitter.
Video and audio streaming and conferencing: The performance of MPEG-4 traffic in terms of delay, packet loss, and jitter over wireless networks prone to packet erasures can be improved with RLNC. In the case of audio streaming over wireless mesh networks, the packet delivery ratio, latency, and jitter performance of the network can be significantly increased when using RLNC instead of packet forwarding-based protocols such as simplified multicast forwarding and partial dominant pruning. The performance improvements of network coding for video conferencing are not only theoretical. In 2016, researchers built a real-world testbed of 15 wireless Android devices to evaluate the feasibility of network-coding-based video conference systems. Their results showed large improvements in packet delivery ratio and overall user experience, especially over poor-quality links, compared to multicasting technologies based on packet forwarding.
Software-defined wide area networks (SD-WAN): Large industrial IoT wireless networks can benefit from network coding. Researchers showed that network coding and its channel bundling capabilities improved the performance of SD-WANs with a large number of nodes with multiple cellular connections. Nowadays, companies such as Barracuda are employing RLNC-based solutions due to their advantages in low latency, small footprint on computing devices, and low overhead.
Channel bundling: Due to the statelessness characteristics of RLNC, it can be used to efficiently perform channel bundling, i.e., the transmission of information through multiple network interfaces. Since the coded packets are randomly generated, and the state of the code traverses the network together with the coded packets, a source can achieve bundling without much planning just by sending coded packets through all its network interfaces. The destination can decode the information once enough coded packets arrive, irrespective of the network interface. A video demonstration of the channel bundling capabilities of RLNC is available.
5G private networks: RLNC can be integrated into the 5G NR standard to improve the performance of video delivery over 5G systems. In 2018, a demo presented at the Consumer Electronics Show demonstrated a practical deployment of RLNC with NFV and SDN technologies to improve video quality against packet loss due to congestion at the core network.
Remote collaboration.
Augmented reality remote support and training.
Remote vehicle driving applications.
Connected cars networks.
Gaming applications such as low latency streaming and multiplayer connectivity.
Healthcare applications.
Industry 4.0.
Satellite networks.
Agricultural sensor fields.
In-flight entertainment networks.
Major security and firmware updates for mobile product families.
Smart city infrastructure.
Information-centric networking and named data networking: Linear network coding can improve the network efficiency of information-centric networking solutions by exploiting the multi-source multicast nature of such systems. It has been shown that RLNC can be integrated into distributed content delivery networks such as IPFS to increase data availability while reducing storage resources.
Alternative to forward error correction and automatic repeat requests in traditional and wireless networks with packet loss, such as Coded TCP and Multi-user ARQ
Protection against network attacks such as snooping, eavesdropping, replay, or data corruption.
Digital file distribution and P2P file sharing, e.g. Avalanche filesystem from Microsoft
Distributed storage
Throughput increase in wireless mesh networks, e.g.: COPE, CORE, Coding-aware routing, and B.A.T.M.A.N.
Buffer and delay reduction in spatial sensor networks: Spatial buffer multiplexing
Wireless broadcast: RLNC can reduce the number of packet transmissions for a single-hop wireless multicast network, and hence improve network bandwidth
Distributed file sharing
Low-complexity video streaming to mobile device
Device-to-device extensions
== See also ==
Secret sharing protocol
Homomorphic signatures for network coding
Triangular network coding
== References ==
Fragouli, C.; Le Boudec, J. & Widmer, J. "Network coding: An instant primer", Computer Communication Review, 2006. doi:10.1145/1111322.1111337
Ali Farzamnia, Sharifah K. Syed-Yusof, Norsheila Fisa "Multicasting Multiple Description Coding Using p-Cycle Network Coding", KSII Transactions on Internet and Information Systems, Vol 7, No 12, 2013. doi:10.3837/tiis.2013.12.009
== External links ==
Network Coding Homepage
A network coding bibliography
Raymond W. Yeung, Information Theory and Network Coding, Springer 2008, http://iest2.ie.cuhk.edu.hk/~whyeung/book2/
Raymond W. Yeung et al., Network Coding Theory, now Publishers, 2005, http://iest2.ie.cuhk.edu.hk/~whyeung/netcode/monograph.html
Christina Fragouli et al., Network Coding: An Instant Primer, ACM SIGCOMM 2006, http://infoscience.epfl.ch/getfile.py?mode=best&recid=58339.
Avalanche Filesystem, http://research.microsoft.com/en-us/projects/avalanche/default.aspx
Random Network Coding, https://web.archive.org/web/20060618083034/http://www.mit.edu/~medard/coding1.htm
Digital Fountain Codes, http://www.icsi.berkeley.edu/~luby/
Coding-Aware Routing, https://web.archive.org/web/20081011124616/http://arena.cse.sc.edu/papers/rocx.secon06.pdf
MIT offers a course: Introduction to Network Coding
Network coding: Networking's next revolution?
Coding-aware protocol design for wireless networks: http://scholarcommons.sc.edu/etd/230/ | Wikipedia/Network_coding |
Sequitur (or Nevill-Manning–Witten algorithm) is a recursive algorithm developed by Craig Nevill-Manning and Ian H. Witten in 1997 that infers a hierarchical structure (context-free grammar) from a sequence of discrete symbols. The algorithm operates in linear space and time. It can be used in data compression software applications.
== Constraints ==
The sequitur algorithm constructs a grammar by substituting repeating phrases in the given sequence with new rules and therefore produces a concise representation of the sequence. For example, if the sequence is
S→abcab,
the algorithm will produce
S→AcA, A→ab.
While scanning the input sequence, the algorithm follows two constraints for generating its grammar efficiently: digram uniqueness and rule utility.
=== Digram uniqueness ===
Whenever a new symbol is scanned from the sequence, it is appended to the last scanned symbol to form a new digram. If this digram has been formed earlier, a new rule is made to replace both occurrences of the digram.
Therefore, it ensures that no digram occurs more than once in the grammar. For example, in the sequence S→abaaba, when the first four symbols are already scanned, digrams formed are ab, ba, aa. When the fifth symbol is read, a new digram 'ab' is formed which exists already. Therefore, both instances of 'ab' are replaced by a new rule (say, A) in S. Now, the grammar becomes S→AaAa, A→ab, and the process continues until no repeated digram exists in the grammar.
=== Rule utility ===
This constraint ensures that all the rules are used more than once in the right sides of all the productions of the grammar, i.e., if a rule occurs just once, it should be removed from the grammar and its occurrence should be substituted with the symbols from which it is created. For example, in the above example, if one scans the last symbol and applies digram uniqueness for 'Aa', then the grammar will produce: S→BB, A→ab, B→Aa. Now, rule 'A' occurs only once in the grammar in B→Aa. Therefore, A is deleted and finally the grammar becomes
S→BB, B→aba.
This constraint helps reduce the number of rules in the grammar.
== Method summary ==
The algorithm works by scanning a sequence of terminal symbols and building a list of all the symbol pairs which it has read. Whenever a second occurrence of a pair is discovered, the two occurrences are replaced in the sequence by an invented nonterminal symbol, the list of symbol pairs is adjusted to match the new sequence, and scanning continues. If a nonterminal symbol is used only in the definition of the symbol just created, it is replaced by its definition and removed from the set of defined nonterminal symbols. Once the scanning has been completed, the transformed sequence can be interpreted as the top-level rule in a grammar for the original sequence. The rule definitions for the nonterminal symbols which it contains can be found in the list of symbol pairs. Those rule definitions may themselves contain additional nonterminal symbols whose rule definitions can also be read from elsewhere in the list of symbol pairs.
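The two constraints can be illustrated with a deliberately naive implementation. The sketch below (Python) runs in quadratic rather than linear time — the real algorithm maintains a digram index over doubly linked rule bodies — but it reproduces the grammars from the examples above:

```python
# Naive sketch of Sequitur-style grammar inference (quadratic time).
# Illustrates the two constraints only: digram uniqueness and rule
# utility. Rule names are drawn from a small fixed alphabet, which is
# enough for short example inputs.

def infer_grammar(sequence):
    rules = {"S": list(sequence)}              # rule name -> right-hand side
    fresh = iter("ABCDEFGHIJKLMNOPQRTUVWXYZ")  # new nonterminal names

    def repeated_digram():
        # Any digram occurring twice without overlap, anywhere in the grammar.
        seen = {}
        for name, rhs in rules.items():
            for i in range(len(rhs) - 1):
                dg = (rhs[i], rhs[i + 1])
                if dg in seen and (seen[dg][0] != name or i - seen[dg][1] > 1):
                    return dg
                seen.setdefault(dg, (name, i))
        return None

    while True:
        dg = repeated_digram()
        if dg is None:
            break
        new = next(fresh)
        rules[new] = list(dg)
        for name, rhs in rules.items():        # enforce digram uniqueness
            if name == new:
                continue
            i = 0
            while i < len(rhs) - 1:
                if (rhs[i], rhs[i + 1]) == dg:
                    rhs[i:i + 2] = [new]
                i += 1
        for name in [n for n in rules if n != "S"]:   # enforce rule utility
            uses = sum(r.count(name) for n, r in rules.items() if n != name)
            if uses == 1:                      # inline rules used only once
                body = rules.pop(name)
                for rhs in rules.values():
                    while name in rhs:
                        j = rhs.index(name)
                        rhs[j:j + 1] = body
    return {n: "".join(r) for n, r in rules.items()}

print(infer_grammar("abcab"))    # {'S': 'AcA', 'A': 'ab'}
print(infer_grammar("abaaba"))   # {'S': 'BB', 'B': 'aba'}
```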
== See also ==
Context-free grammar
Data compression
Lossless data compression
Straight-line grammar
Byte pair encoding
== References ==
== External links ==
sequitur.info – the reference Sequitur algorithm implementation in C++, Java, and other languages | Wikipedia/Sequitur_algorithm |
A key in cryptography is a piece of information, usually a string of numbers or letters stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, the key can be of different sizes and varieties, but in all cases, the strength of the encryption relies on the security of the key being maintained. A key's security strength is dependent on its algorithm, the size of the key, the generation of the key, and the process of key exchange.
== Scope ==
The key is what is used to encrypt data from plaintext to ciphertext. There are different methods for utilizing keys and encryption.
=== Symmetric cryptography ===
Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption.
=== Asymmetric cryptography ===
Asymmetric cryptography has separate keys for encrypting and decrypting. These keys are known as the public and private keys, respectively.
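As a concrete illustration, the following sketch contrasts the two settings using the third-party Python package cryptography (an assumed dependency; any comparable library would do): one shared secret key for symmetric encryption, and an RSA key pair where the public key encrypts and the private key decrypts.

```python
# Contrast of symmetric and asymmetric keys, using the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: the same secret key encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
token = f.encrypt(b"attack at dawn")
assert f.decrypt(token) == b"attack at dawn"

# Asymmetric: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"attack at dawn", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"attack at dawn"
```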
== Purpose ==
Since the key protects the confidentiality and integrity of the system, it is important that it be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, it is important to maintain the confidentiality of the key. Kerckhoffs's principle states that the entire security of the cryptographic system relies on the secrecy of the key.
== Key sizes ==
Key size is the number of bits in the key defined by the algorithm. This size defines the upper bound of the cryptographic algorithm's security. The larger the key size, the longer it will take before the key is compromised by a brute force attack. Since perfect secrecy is not feasible for key algorithms, research is now more focused on computational security.
In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were broken ever more quickly. In response, minimum symmetric key lengths were increased.
Currently, 2048-bit RSA is commonly used, which is sufficient for current systems. However, current RSA key sizes would all be cracked quickly with a powerful quantum computer.
"The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher."
== Key generation ==
To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can directly be generated by using the output of a Random Bit Generator (RBG), a system that generates a sequence of unpredictable and unbiased bits. A RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be indirectly created during a key-agreement transaction, from another key or from a password.
Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
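For illustration, a key can be drawn from the operating system's cryptographically secure random source; the sketch below uses Python's standard secrets module (the 32-byte length is an illustrative choice):

```python
# Drawing key material from the OS cryptographically secure RNG.
import math
import secrets

key = secrets.token_bytes(32)        # 256 bits of key material
print(key.hex())

# Dice as an entropy source: each roll of a fair die carries
# log2(6) ~ 2.58 bits, so ~100 rolls yield over 256 bits of entropy.
print(100 * math.log2(6))            # ~258.5 bits
```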
== Establishment scheme ==
The security of a key is dependent on how a key is exchanged between parties. Establishing a secured communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities. Key agreement and key transport are the two types of key exchange scheme used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up to be sent indirectly. All parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material that is chosen by the sender is transported to the receiver. Either symmetric key or asymmetric key techniques can be used in both schemes.
The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms. In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. RSA, on the other hand, is a form of asymmetric key system consisting of three steps: key generation, encryption, and decryption.
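The following toy example (Python, with deliberately tiny and insecure numbers) shows the Diffie–Hellman mechanics: both parties derive the same shared secret without ever transmitting it.

```python
# Toy Diffie-Hellman exchange over an insecure channel.
# The numbers are deliberately tiny and insecure; real deployments
# use groups of 2048 bits or more.
p, g = 23, 5              # public modulus and generator
a, b = 6, 15              # private values of the two parties

A = pow(g, a, p)          # one party transmits A = 8
B = pow(g, b, p)          # the other transmits B = 19

shared_1 = pow(B, a, p)   # computed from the received B
shared_2 = pow(A, b, p)   # computed from the received A
assert shared_1 == shared_2 == 2   # same secret, never transmitted
```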
Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends key confirmation to be integrated into a key establishment scheme to validate its implementations.
== Management ==
Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically includes three steps of establishing, storing and using keys. The base of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols.
== Key vs password ==
A password is a memorized series of characters including letters, digits, and other special symbols that is used to verify identity. It is often produced by a human user or password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words. On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm that is difficult to guess, or it can replace the password altogether. A key is generated based on random or pseudo-random data and can often be unreadable to humans.
A password is less safe than a cryptographic key due to its low entropy, randomness, and human-readable properties. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password's weakness. Various methods such as adding a salt or key stretching may be used in the generation.
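As an illustration, the sketch below derives key material from a password with PBKDF2 from Python's standard library; the salt and iteration count are illustrative choices:

```python
# Deriving a key from a password with PBKDF2 (Python standard library).
import hashlib
import os

salt = os.urandom(16)                          # random per-password salt
key = hashlib.pbkdf2_hmac("sha256",
                          b"correct horse battery staple",
                          salt,
                          600_000,             # key stretching iterations
                          dklen=32)
print(key.hex())                               # 256-bit derived key
```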
== See also ==
== References == | Wikipedia/Key_(cryptography) |
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time.
The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second.
In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds to 8 bit/s, since
1 byte = 8 bits. However, if stop bits, start bits, and parity bits need to be factored in, a higher number of bits per second will be required to achieve a throughput of the same number of bytes.
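A small example of the framing overhead (Python; the 8-N-1 serial framing is an illustrative choice):

```python
# Byte throughput with and without serial framing bits.
# 8-N-1 framing (an illustrative choice): 1 start + 8 data + 1 stop bit.
line_bit_rate = 9600                # bit/s on the wire
print(line_bit_rate / 8)            # 1200 B/s if every bit were payload
print(line_bit_rate / 10)           # 960 B/s once framing is factored in
```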
== Prefixes ==
When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used, thus:
Binary prefixes are sometimes used for bit rates.
The International Standard (IEC 80000-13) specifies different symbols for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s).
== In data communications ==
=== Gross bit rate ===
In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead.
In case of serial communications, the gross bit rate is related to the bit transmission time {\displaystyle T_{\text{b}}} as:
{\displaystyle R_{\text{b}}={1 \over T_{\text{b}}}.}
The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment.
For most line codes and modulation methods:
{\displaystyle {\text{symbol rate}}\leq {\text{gross bit rate}}}
More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with {\displaystyle 2^{N}} different voltage levels can transfer {\displaystyle N} bits per pulse. A digital modulation method (or passband transmission scheme) using {\displaystyle 2^{N}} different symbols, for example {\displaystyle 2^{N}} amplitudes, phases or frequencies, can transfer {\displaystyle N} bits per symbol. This results in:
{\displaystyle {\text{gross bit rate}}={\text{symbol rate}}\times N}
An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in:
{\displaystyle {\text{gross bit rate}}={\text{symbol rate}}/2}
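These relations are easy to evaluate numerically; the sketch below (Python, with illustrative symbol rates) computes the gross bit rate for a binary line code, a 16-symbol modulation, and Manchester coding:

```python
# Gross bit rate from symbol rate and number of modulation symbols.
import math

def gross_bit_rate(symbol_rate, levels):
    return symbol_rate * math.log2(levels)    # N = log2(M) bits per symbol

print(gross_bit_rate(2400, 2))        # binary line code: 2400 bit/s
print(gross_bit_rate(2400, 16))       # 16 symbols (e.g. 16-QAM): 9600 bit/s
print(gross_bit_rate(2400, 2) / 2)    # Manchester coding: 1200 bit/s
```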
A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law:
{\displaystyle {\text{symbol rate}}\leq {\text{Nyquist rate}}=2\times {\text{bandwidth}}}
In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation:
{\displaystyle {\text{symbol rate}}\leq {\text{bandwidth}}}
In case of parallel communication, the gross bit rate is given by
{\displaystyle \sum _{i=1}^{n}{\frac {\log _{2}{M_{i}}}{T_{i}}}}
where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the ith channel, and Ti is the symbol duration time, expressed in seconds, for the ith channel.
=== Information rate ===
The physical layer net bitrate, information rate, useful bit rate, payload rate, net data transfer rate, coded transmission rate, effective data rate or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the data rate measured at a reference point in the interface between the data link layer and physical layer, and may consequently include data link and higher layer overhead.
In modems and wireless systems, link adaptation (automatic adaptation of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance between the sender and the receiver is very short. Some operating systems and network equipment may detect the "connection speed" (informal language) of a network access technology or communication device, implying the current net bit rate. The term line rate in some textbooks is defined as gross bit rate, in others as net bit rate.
The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following.
net bit rate ≤ gross bit rate × code rate
The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition.
For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is the net bit rate of between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes.
The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel) of 64+64+16 = 144 kbit/s also refers to the payload data rates, while the D channel signalling rate is 16 kbit/s.
The net bit rate of the Ethernet 100BASE-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code.
In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10BASE-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud.
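The code-rate relation can be checked against the Ethernet examples above (Python, illustrative):

```python
# Net bit rate from gross bit rate and code rate (illustrative values).
def net_bit_rate(gross_bit_rate, code_rate):
    return gross_bit_rate * code_rate

# 100BASE-TX: 125 Mbit/s gross, 4B5B code rate 4/5 -> 100 Mbit/s net.
print(net_bit_rate(125e6, 4 / 5) / 1e6, "Mbit/s")

# 10BASE-T has no FEC (code rate 1), so net equals gross at 10 Mbit/s,
# even though Manchester coding doubles the pulse rate to 20 megabaud.
print(net_bit_rate(10e6, 1) / 1e6, "Mbit/s")
```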
The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation – slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. Due to data compression, the actual data transmission rate or throughput (see below) may be higher.
The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link.
net bit rate ≤ channel capacity
The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s.
=== Network throughput ===
The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the data link layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput.
=== Goodput (data transfer rate) ===
Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packets retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight.
As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions.
If no data compression is provided by the network equipment or protocols, we have the following relation:
goodput ≤ throughput ≤ maximum throughput ≤ net bit rate
for a certain communication path.
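A small example of the file transfer rate calculation (Python; the file size and transfer time are made-up values):

```python
# File transfer rate (goodput) from file size and transfer time.
def goodput(file_size_bytes, transfer_time_s):
    return file_size_bytes * 8 / transfer_time_s   # bit/s

print(goodput(25_000_000, 20) / 1e6)   # 25 MB in 20 s -> 10.0 Mbit/s
```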
=== Progress trends ===
These are examples of physical layer net bit rates in proposed communication standard interfaces and devices:
== Multimedia ==
In digital multimedia, bit rate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors:
The original material may be sampled at different frequencies.
The samples may use different numbers of bits.
The data may be encoded by different schemes.
The information may be digitally compressed by different algorithms or to different degrees.
Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played.
If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment.
The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight.
For real-time streaming multimedia, the encoding bit rate is the goodput that is required to avoid playback interruption.
The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data.
A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate.
The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard.
=== Audio ===
==== CD-DA ====
Compact Disc Digital Audio (CD-DA) uses 44,100 samples per second, each with a bit depth of 16, a format sometimes abbreviated as "16-bit / 44.1 kHz". CD-DA is also stereo, using a left and right channel, so the amount of audio data per second is double that of mono, where only a single channel is used.
The bit rate of PCM audio data can be calculated with the following formula:
{\displaystyle {\text{bit rate}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}}
For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows:
{\displaystyle 44,100\times 16\times 2=1,411,200\ {\text{bit/s}}=1,411.2\ {\text{kbit/s}}}
The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula:
{\displaystyle {\text{size in bits}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}\times {\text{time}}.}
The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight:
{\displaystyle {\text{size in bytes}}={\frac {\text{size in bits}}{8}}}
Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage:
{\displaystyle {\frac {44,100\times 16\times 2\times 4,800}{8}}=846,720,000\ {\text{bytes}}\approx 847\ {\text{MB}}\approx 807.5\ {\text{MiB}}}
where MiB is mebibytes with binary prefix Mi, meaning {\displaystyle 2^{20}} = 1,048,576.
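The CD-DA figures above can be reproduced directly (Python):

```python
# Reproducing the CD-DA figures above.
sample_rate, bit_depth, channels = 44_100, 16, 2

bit_rate = sample_rate * bit_depth * channels
print(bit_rate)                  # 1411200 bit/s = 1411.2 kbit/s

seconds = 80 * 60                # 80 minutes of audio
size_bytes = bit_rate * seconds // 8
print(size_bytes)                # 846720000 bytes
print(round(size_bytes / 1e6))   # ~847 MB (decimal megabytes)
print(size_bytes / 2**20)        # ~807.5 MiB (binary mebibytes)
```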
==== MP3 ====
The MP3 audio format provides lossy data compression. Audio quality improves with increasing bitrate:
32 kbit/s – generally acceptable only for speech
96 kbit/s – generally used for speech or low-quality streaming
128 or 160 kbit/s – mid-range bitrate quality
192 kbit/s – medium quality bitrate
256 kbit/s – a commonly used high-quality bitrate
320 kbit/s – highest level supported by the MP3 standard
==== Other audio ====
700 bit/s – lowest bit rate of the open-source speech codec Codec2; Codec2 sounds much better at 1.2 kbit/s
800 bit/s – minimum necessary for recognizable speech, using the special-purpose FS-1015 speech codecs
2.15 kbit/s – minimum bitrate available through the open-source Speex codec
6 kbit/s – minimum bitrate available through the open-source Opus codec
8 kbit/s – telephone quality using speech codecs
32–500 kbit/s – lossy audio as used in Ogg Vorbis
256 kbit/s – Digital Audio Broadcasting (DAB) MP2 bit rate required to achieve a high quality signal
292 kbit/s – Sony Adaptive Transform Acoustic Coding (ATRAC) for use on the MiniDisc Format
400 kbit/s–1,411 kbit/s – lossless audio as used in formats such as Free Lossless Audio Codec, WavPack, or Monkey's Audio to compress CD audio
1,411.2 kbit/s – Linear PCM sound format of CD-DA
5,644.8 kbit/s – DSD, which is a trademarked implementation of PDM sound format used on Super Audio CD.
6.144 Mbit/s – E-AC-3 (Dolby Digital Plus), an enhanced coding system based on the AC-3 codec
9.6 Mbit/s – DVD-Audio, a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio is not intended to be a video delivery format and is not the same as video DVDs containing concert films or music videos. These discs cannot be played on a standard DVD player without the DVD-Audio logo.
18 Mbit/s – advanced lossless audio codec based on Meridian Lossless Packing (MLP)
=== Video ===
16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes)
128–384 kbit/s – business-oriented videoconferencing quality using video compression
400 kbit/s – YouTube 240p videos (using H.264)
750 kbit/s – YouTube 360p videos (using H.264)
1 Mbit/s – YouTube 480p videos (using H.264)
1.15 Mbit/s max – VCD quality (using MPEG1 compression)
2.5 Mbit/s – YouTube 720p videos (using H.264)
3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression)
3.8 Mbit/s – YouTube 720p60 (60 FPS) videos (using H.264)
4.5 Mbit/s – YouTube 1080p videos (using H.264)
6.8 Mbit/s – YouTube 1080p60 (60 FPS) videos (using H.264)
9.8 Mbit/s max – DVD (using MPEG2 compression)
8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression)
19 Mbit/s approximate – HDV 720p (using MPEG2 compression)
24 Mbit/s max – AVCHD (using MPEG4 AVC compression)
25 Mbit/s approximate – HDV 1080i (using MPEG2 compression)
29.4 Mbit/s max – HD DVD
40 Mbit/s max – 1080p Blu-ray Disc (using MPEG2, MPEG4 AVC or VC-1 compression)
250 Mbit/s max – DCP (using JPEG 2000 compression)
1.4 Gbit/s – 10-bit 4:4:4 uncompressed 1080p at 24 FPS
=== Notes ===
For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.) the actual bit rates used by some of the compared-to devices may be significantly higher than listed above. For example, telephone circuits using μ-law or A-law companding (pulse code modulation) yield 64 kbit/s.
== See also ==
== References ==
== External links ==
Live Video Streaming Bitrate Calculator Calculate bitrate for video and live streams
DVD-HQ bit rate calculator Calculate bit rate for various types of digital video media.
Maximum PC - Do Higher MP3 Bit Rates Pay Off?
Valid8 Data Rate Calculator | Wikipedia/Information_rate |
Bayesian inference ( BAY-zee-ən or BAY-zhən) is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
== Introduction to Bayes' rule ==
=== Formal explanation ===
Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:
{\displaystyle P(H\mid E)={\frac {P(E\mid H)\cdot P(H)}{P(E)}},}
where
H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.
{\displaystyle P(H)}, the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed.
E, the evidence, corresponds to new data that were not used in computing the prior probability.
{\displaystyle P(H\mid E)}, the posterior probability, is the probability of H given E, i.e., after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence.
{\displaystyle P(E\mid H)} is the probability of observing E given H and is called the likelihood. As a function of E with H fixed, it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H.
{\displaystyle P(E)} is sometimes termed the marginal likelihood or "model evidence". This factor is the same for all possible hypotheses being considered (as is evident from the fact that the hypothesis H does not appear anywhere in the symbol, unlike for all the other factors) and hence does not factor into determining the relative probabilities of different hypotheses. Note that {\displaystyle P(E)>0} (else one has {\displaystyle 0/0}).
For different values of H, only the factors {\displaystyle P(H)} and {\displaystyle P(E\mid H)}, both in the numerator, affect the value of {\displaystyle P(H\mid E)} – the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).
In cases where {\displaystyle \neg H} ("not H"), the logical negation of H, is a valid likelihood, Bayes' rule can be rewritten as follows:
{\displaystyle {\begin{aligned}P(H\mid E)&={\frac {P(E\mid H)P(H)}{P(E)}}\\\\&={\frac {P(E\mid H)P(H)}{P(E\mid H)P(H)+P(E\mid \neg H)P(\neg H)}}\\\\&={\frac {1}{1+\left({\frac {1}{P(H)}}-1\right){\frac {P(E\mid \neg H)}{P(E\mid H)}}}}\\\end{aligned}}}
because {\displaystyle P(E)=P(E\mid H)P(H)+P(E\mid \neg H)P(\neg H)} and {\displaystyle P(H)+P(\neg H)=1.}
This focuses attention on the term {\displaystyle \left({\tfrac {1}{P(H)}}-1\right){\tfrac {P(E\mid \neg H)}{P(E\mid H)}}.}
If that term is approximately 1, then the probability of the hypothesis given the evidence, {\displaystyle P(H\mid E)}, is about {\displaystyle {\tfrac {1}{2}}}: the hypothesis is about as likely as not. If that term is very small, close to zero, then the probability of the hypothesis given the evidence, {\displaystyle P(H\mid E)}, is close to 1: the hypothesis is quite likely. If that term is very large, much larger than 1, then the hypothesis, given the evidence, is quite unlikely. If the hypothesis (without consideration of evidence) is unlikely, then {\displaystyle P(H)} is small (but not necessarily astronomically small), {\displaystyle {\tfrac {1}{P(H)}}} is much larger than 1, and this term can be approximated as {\displaystyle {\tfrac {P(E\mid \neg H)}{P(E\mid H)\cdot P(H)}}}, so that relevant probabilities can be compared directly to each other.
One quick and easy way to remember the equation is to use the rule of multiplication:
{\displaystyle P(E\cap H)=P(E\mid H)P(H)=P(H\mid E)P(E).}
=== Alternatives to Bayesian updating ===
Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational.
Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Hacking wrote: "And neither the Dutch book argument nor any other in the personalist arsenal of proofs of the probability axioms entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
Indeed, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics") following the publication of Richard C. Jeffrey's rule, which applies Bayes' rule to the case where the evidence itself is assigned a probability. The additional hypotheses needed to uniquely require Bayesian updating have been deemed to be substantial, complicated, and unsatisfactory.
== Inference over exclusive and exhaustive possibilities ==
If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole.
=== General formulation ===
Suppose a process is generating independent and identically distributed events {\displaystyle E_{n},\ n=1,2,3,\ldots }, but the probability distribution is unknown. Let the event space {\displaystyle \Omega } represent the current state of belief for this process. Each model is represented by event {\displaystyle M_{m}}. The conditional probabilities {\displaystyle P(E_{n}\mid M_{m})} are specified to define the models. {\displaystyle P(M_{m})} is the degree of belief in {\displaystyle M_{m}}. Before the first inference step, {\displaystyle \{P(M_{m})\}} is a set of initial prior probabilities. These must sum to 1, but are otherwise arbitrary.
Suppose that the process is observed to generate {\displaystyle E\in \{E_{n}\}}. For each {\displaystyle M\in \{M_{m}\}}, the prior {\displaystyle P(M)} is updated to the posterior {\displaystyle P(M\mid E)}. From Bayes' theorem:
{\displaystyle P(M\mid E)={\frac {P(E\mid M)}{\sum _{m}{P(E\mid M_{m})P(M_{m})}}}\cdot P(M).}
Upon observation of further evidence, this procedure may be repeated.
=== Multiple observations ===
For a sequence of independent and identically distributed observations {\displaystyle \mathbf {E} =(e_{1},\dots ,e_{n})}, it can be shown by induction that repeated application of the above is equivalent to
{\displaystyle P(M\mid \mathbf {E} )={\frac {P(\mathbf {E} \mid M)}{\sum _{m}{P(\mathbf {E} \mid M_{m})P(M_{m})}}}\cdot P(M),}
where
{\displaystyle P(\mathbf {E} \mid M)=\prod _{k}{P(e_{k}\mid M)}.}
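The equivalence of sequential and batch updating can be checked numerically. The sketch below (Python, with made-up coin-bias models) updates a two-model prior one observation at a time and compares the result with a single batch update over the whole sequence:

```python
# Sequential vs. batch Bayesian updating over two discrete models
# (illustrative coin-bias models; the numbers are made up).
models = {"fair": 0.5, "biased": 0.8}      # P(heads | model)
prior = {"fair": 0.5, "biased": 0.5}

def update(belief, heads):
    # One Bayes step: multiply by the likelihood, then normalize.
    post = {m: belief[m] * (p if heads else 1 - p)
            for m, p in models.items()}
    total = sum(post.values())
    return {m: v / total for m, v in post.items()}

data = [True, True, False, True, True]     # observed flips

seq = dict(prior)
for e in data:
    seq = update(seq, e)                   # one observation at a time

# Batch: likelihood of the whole sequence at once.
batch = {}
for m, p in models.items():
    like = 1.0
    for e in data:
        like *= p if e else 1 - p
    batch[m] = prior[m] * like
total = sum(batch.values())
batch = {m: v / total for m, v in batch.items()}

print(seq)     # identical posteriors (up to rounding)
print(batch)
```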
=== Parametric formulation: motivating the formal description ===
By parameterizing the space of models, the belief in all models may be updated in a single step. The distribution of belief over the model space may then be thought of as a distribution of belief over the parameter space. The distributions in this section are expressed as continuous, represented by probability densities, as this is the usual situation. The technique is, however, equally applicable to discrete distributions.
Let the vector {\displaystyle {\boldsymbol {\theta }}} span the parameter space. Let the initial prior distribution over {\displaystyle {\boldsymbol {\theta }}} be {\displaystyle p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})}, where {\displaystyle {\boldsymbol {\alpha }}} is a set of parameters to the prior itself, or hyperparameters. Let {\displaystyle \mathbf {E} =(e_{1},\dots ,e_{n})} be a sequence of independent and identically distributed event observations, where all {\displaystyle e_{i}} are distributed as {\displaystyle p(e\mid {\boldsymbol {\theta }})} for some {\displaystyle {\boldsymbol {\theta }}}. Bayes' theorem is applied to find the posterior distribution over {\displaystyle {\boldsymbol {\theta }}}:
{\displaystyle {\begin{aligned}p({\boldsymbol {\theta }}\mid \mathbf {E} ,{\boldsymbol {\alpha }})&={\frac {p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})}{p(\mathbf {E} \mid {\boldsymbol {\alpha }})}}\cdot p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})\\&={\frac {p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})}{\int p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }})\,d{\boldsymbol {\theta }}}}\cdot p({\boldsymbol {\theta }}\mid {\boldsymbol {\alpha }}),\end{aligned}}}
where
{\displaystyle p(\mathbf {E} \mid {\boldsymbol {\theta }},{\boldsymbol {\alpha }})=\prod _{k}p(e_{k}\mid {\boldsymbol {\theta }}).}
== Formal description of Bayesian inference ==
=== Definitions ===
{\displaystyle x}, a data point in general. This may in fact be a vector of values.
{\displaystyle \theta }, the parameter of the data point's distribution, i.e., {\displaystyle x\sim p(x\mid \theta )}. This may be a vector of parameters.
{\displaystyle \alpha }, the hyperparameter of the parameter distribution, i.e., {\displaystyle \theta \sim p(\theta \mid \alpha )}. This may be a vector of hyperparameters.
{\displaystyle \mathbf {X} } is the sample, a set of {\displaystyle n} observed data points, i.e., {\displaystyle x_{1},\ldots ,x_{n}}.
{\displaystyle {\tilde {x}}}, a new data point whose distribution is to be predicted.
=== Bayesian inference ===
The prior distribution is the distribution of the parameter(s) before any data is observed, i.e. {\displaystyle p(\theta \mid \alpha )}. The prior distribution might not be easily determined; in such a case, one possibility may be to use the Jeffreys prior to obtain a prior distribution before updating it with newer observations.
The sampling distribution is the distribution of the observed data conditional on its parameters, i.e. {\displaystyle p(\mathbf {X} \mid \theta )}. This is also termed the likelihood, especially when viewed as a function of the parameter(s), sometimes written {\displaystyle \operatorname {L} (\theta \mid \mathbf {X} )=p(\mathbf {X} \mid \theta )}.
The marginal likelihood (sometimes also termed the evidence) is the distribution of the observed data marginalized over the parameter(s), i.e. {\displaystyle p(\mathbf {X} \mid \alpha )=\int p(\mathbf {X} \mid \theta )p(\theta \mid \alpha )\,d\theta .} It quantifies the agreement between data and expert opinion, in a geometric sense that can be made precise. If the marginal likelihood is 0 then there is no agreement between the data and expert opinion and Bayes' rule cannot be applied.
The posterior distribution is the distribution of the parameter(s) after taking into account the observed data. This is determined by Bayes' rule, which forms the heart of Bayesian inference:
{\displaystyle p(\theta \mid \mathbf {X} ,\alpha )={\frac {p(\theta ,\mathbf {X} ,\alpha )}{p(\mathbf {X} ,\alpha )}}={\frac {p(\mathbf {X} \mid \theta ,\alpha )p(\theta ,\alpha )}{p(\mathbf {X} \mid \alpha )p(\alpha )}}={\frac {p(\mathbf {X} \mid \theta ,\alpha )p(\theta \mid \alpha )}{p(\mathbf {X} \mid \alpha )}}\propto p(\mathbf {X} \mid \theta ,\alpha )p(\theta \mid \alpha ).}
This is expressed in words as "posterior is proportional to likelihood times prior", or sometimes as "posterior = likelihood times prior, over evidence".
In practice, for almost all complex Bayesian models used in machine learning, the posterior distribution {\displaystyle p(\theta \mid \mathbf {X} ,\alpha )} is not obtained in a closed form distribution, mainly because the parameter space for {\displaystyle \theta } can be very high-dimensional, or the Bayesian model retains certain hierarchical structure formulated from the observations {\displaystyle \mathbf {X} } and parameter {\displaystyle \theta }. In such situations, we need to resort to approximation techniques.
General case: Let {\displaystyle P_{Y}^{x}} be the conditional distribution of {\displaystyle Y} given {\displaystyle X=x} and let {\displaystyle P_{X}} be the distribution of {\displaystyle X}. The joint distribution is then {\displaystyle P_{X,Y}(dx,dy)=P_{Y}^{x}(dy)P_{X}(dx)}. The conditional distribution {\displaystyle P_{X}^{y}} of {\displaystyle X} given {\displaystyle Y=y} is then determined by
{\displaystyle P_{X}^{y}(A)=E(1_{A}(X)|Y=y)}
Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous book from 1933. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface. The Bayes theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem including cases with improper priors.
=== Bayesian prediction ===
The posterior predictive distribution is the distribution of a new data point, marginalized over the posterior:
{\displaystyle p({\tilde {x}}\mid \mathbf {X} ,\alpha )=\int p({\tilde {x}}\mid \theta )p(\theta \mid \mathbf {X} ,\alpha )\,d\theta }
The prior predictive distribution is the distribution of a new data point, marginalized over the prior:
{\displaystyle p({\tilde {x}}\mid \alpha )=\int p({\tilde {x}}\mid \theta )p(\theta \mid \alpha )\,d\theta }
Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used. By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s)—e.g., by maximum likelihood or maximum a posteriori estimation (MAP)—and then plugging this estimate into the formula for the distribution of a data point. This has the disadvantage that it does not account for any uncertainty in the value of the parameter, and hence will underestimate the variance of the predictive distribution.
In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the facts that (1) the average of normally distributed random variables is also normally distributed, and (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly—or at least to an arbitrary level of precision when numerical methods are used.
Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, such that the prior and posterior distributions come from the same family, it can be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution.
== Mathematical properties ==
=== Interpretation of factor ===
{\textstyle {\frac {P(E\mid M)}{P(E)}}>1\Rightarrow P(E\mid M)>P(E)}. That is, if the model were true, the evidence would be more likely than is predicted by the current state of belief. The reverse applies for a decrease in belief. If the belief does not change, {\textstyle {\frac {P(E\mid M)}{P(E)}}=1\Rightarrow P(E\mid M)=P(E)}. That is, the evidence is independent of the model. If the model were true, the evidence would be exactly as likely as predicted by the current state of belief.
=== Cromwell's rule ===
If {\displaystyle P(M)=0} then {\displaystyle P(M\mid E)=0}. If {\displaystyle P(M)=1} and {\displaystyle P(E)>0}, then {\displaystyle P(M\mid E)=1}. This can be interpreted to mean that hard convictions are insensitive to counter-evidence.
The former follows directly from Bayes' theorem. The latter can be derived by applying the first rule to the event "not {\displaystyle M}" in place of "{\displaystyle M}", yielding "if {\displaystyle 1-P(M)=0}, then {\displaystyle 1-P(M\mid E)=0}", from which the result immediately follows.
=== Asymptotic behaviour of posterior ===
Consider the behaviour of a belief distribution as it is updated a large number of times with independent and identically distributed trials. For sufficiently nice prior probabilities, the Bernstein-von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior under some conditions first outlined and rigorously proven by Joseph L. Doob in 1948, namely if the random variable in consideration has a finite probability space. The more general results were obtained later by the statistician David A. Freedman, who in two seminal research papers published in 1963 and 1965 established when and under what circumstances the asymptotic behaviour of the posterior is guaranteed. His 1963 paper treats, like Doob (1949), the finite case and comes to a satisfactory conclusion. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinitely many faces) the 1965 paper demonstrates that for a dense subset of priors the Bernstein-von Mises theorem is not applicable. In this case there is almost surely no asymptotic convergence. Later in the 1980s and 1990s Freedman and Persi Diaconis continued to work on the case of infinite countable probability spaces. To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow.
=== Conjugate priors ===
In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form.
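For example, a Beta prior is conjugate to a binomial likelihood, so the posterior is again a Beta distribution and point estimates follow in closed form (Python; the hyperparameters and data are illustrative values):

```python
# Beta prior + binomial likelihood -> Beta posterior (conjugacy).
a, b = 2.0, 2.0                 # Beta(2, 2) prior
successes, failures = 7, 3      # observed binomial data

a_post, b_post = a + successes, b + failures    # closed-form update

posterior_mean = a_post / (a_post + b_post)
map_estimate = (a_post - 1) / (a_post + b_post - 2)  # valid for a, b > 1
print(posterior_mean)   # 0.642857...
print(map_estimate)     # 0.666666...
```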
=== Estimates of parameters and predictions ===
It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution.
For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator.
If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation.
{\displaystyle {\tilde {\theta }}=\operatorname {E} [\theta ]=\int \theta \,p(\theta \mid \mathbf {X} ,\alpha )\,d\theta }
Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates:
{\displaystyle \{\theta _{\text{MAP}}\}\subset \arg \max _{\theta }p(\theta \mid \mathbf {X} ,\alpha ).}
There are examples where no maximum is attained, in which case the set of MAP estimates is empty.
There are other methods of estimation that minimize the posterior risk (expected-posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics").
The posterior predictive distribution of a new observation {\displaystyle {\tilde {x}}} (that is independent of previous observations) is determined by
{\displaystyle p({\tilde {x}}|\mathbf {X} ,\alpha )=\int p({\tilde {x}},\theta \mid \mathbf {X} ,\alpha )\,d\theta =\int p({\tilde {x}}\mid \theta )p(\theta \mid \mathbf {X} ,\alpha )\,d\theta .}
== Examples ==
=== Probability of a hypothesis ===
Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let {\displaystyle H_{1}} correspond to bowl #1, and {\displaystyle H_{2}} to bowl #2.
It is given that the bowls are identical from Fred's point of view, thus {\displaystyle P(H_{1})=P(H_{2})}, and the two must add up to 1, so both are equal to 0.5.
The event {\displaystyle E} is the observation of a plain cookie. From the contents of the bowls, we know that {\displaystyle P(E\mid H_{1})=30/40=0.75} and {\displaystyle P(E\mid H_{2})=20/40=0.5.}
Bayes' formula then yields
{\displaystyle {\begin{aligned}P(H_{1}\mid E)&={\frac {P(E\mid H_{1})\,P(H_{1})}{P(E\mid H_{1})\,P(H_{1})\;+\;P(E\mid H_{2})\,P(H_{2})}}\\\\\ &={\frac {0.75\times 0.5}{0.75\times 0.5+0.5\times 0.5}}\\\\\ &=0.6\end{aligned}}}
Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, {\displaystyle P(H_{1})}, which was 0.5. After observing the cookie, we must revise the probability to {\displaystyle P(H_{1}\mid E)}, which is 0.6.
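The numbers above can be verified with a few lines of Python:

```python
# The cookie example as a direct application of Bayes' theorem.
prior = {"bowl 1": 0.5, "bowl 2": 0.5}
p_plain = {"bowl 1": 30 / 40, "bowl 2": 20 / 40}   # likelihoods

evidence = sum(prior[h] * p_plain[h] for h in prior)        # P(E) = 0.625
posterior = {h: prior[h] * p_plain[h] / evidence for h in prior}
print(posterior["bowl 1"])   # 0.6
```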
=== Making a prediction ===
An archaeologist is working at a site thought to be from the medieval period, from the 11th century to the 16th century. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed?
The degree of belief in the continuous variable
{\displaystyle C}
(century) is to be calculated, with the discrete set of events
{\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}}
as evidence. Assuming linear variation of glaze and decoration with time, and that these variables are independent,
{\displaystyle P(E=GD\mid C=c)=(0.01+{\frac {0.81-0.01}{16-11}}(c-11))(0.5-{\frac {0.5-0.05}{16-11}}(c-11))}
{\displaystyle P(E=G{\bar {D}}\mid C=c)=(0.01+{\frac {0.81-0.01}{16-11}}(c-11))(0.5+{\frac {0.5-0.05}{16-11}}(c-11))}
{\displaystyle P(E={\bar {G}}D\mid C=c)=((1-0.01)-{\frac {0.81-0.01}{16-11}}(c-11))(0.5-{\frac {0.5-0.05}{16-11}}(c-11))}
{\displaystyle P(E={\bar {G}}{\bar {D}}\mid C=c)=((1-0.01)-{\frac {0.81-0.01}{16-11}}(c-11))(0.5+{\frac {0.5-0.05}{16-11}}(c-11))}
Assume a uniform prior of
{\textstyle f_{C}(c)=0.2}, and that trials are independent and identically distributed. When a new fragment of type {\displaystyle e} is discovered, Bayes' theorem is applied to update the degree of belief for each {\displaystyle c}:
{\displaystyle f_{C}(c\mid E=e)={\frac {P(E=e\mid C=c)}{P(E=e)}}f_{C}(c)={\frac {P(E=e\mid C=c)}{\int _{11}^{16}{P(E=e\mid C=c)f_{C}(c)dc}}}f_{C}(c)}
A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or
{\displaystyle c=15.2}. By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. The Bernstein–von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events
{\displaystyle \{GD,G{\bar {D}},{\bar {G}}D,{\bar {G}}{\bar {D}}\}}
is finite (see above section on asymptotic behaviour of the posterior).
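A simulation along these lines can be written compactly in Python; the sketch below (the random seed and grid resolution are arbitrary assumptions made for the illustration) updates a gridded density over the century variable as fragments are drawn:

import numpy as np

rng = np.random.default_rng(0)
c = np.linspace(11, 16, 501)                 # grid over the century C
dc = c[1] - c[0]
belief = np.full_like(c, 0.2)                # uniform prior f_C(c) = 0.2

glaze = 0.01 + (0.81 - 0.01) / 5 * (c - 11)  # P(glazed | C = c)
decor = 0.50 - (0.50 - 0.05) / 5 * (c - 11)  # P(decorated | C = c)

true_c = 15.2                                # simulated inhabitation date
pg = 0.01 + (0.81 - 0.01) / 5 * (true_c - 11)
pd = 0.50 - (0.50 - 0.05) / 5 * (true_c - 11)

for _ in range(50):                          # unearth 50 fragments
    g = rng.random() < pg                    # glazed?
    d = rng.random() < pd                    # decorated?
    lik = (glaze if g else 1 - glaze) * (decor if d else 1 - decor)
    belief *= lik                            # Bayes' theorem, unnormalized
    belief /= belief.sum() * dc              # renormalize the density

century_14 = (c >= 14) & (c < 15)
print(belief[century_14].sum() * dc)         # posterior mass in the 14th century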
== In frequentist statistics and decision theory ==
A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.
Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals. For example:
"Under some conditions, all admissible procedures are either Bayes procedures or limits of Bayes procedures (in various senses). These remarkable results, at least in their original form, are due essentially to Wald. They are useful because the property of being Bayes is easier to analyze than admissibility."
"In decision theory, a quite general method for proving admissibility consists in exhibiting a procedure as a unique Bayes solution."
"In the first chapters of this work, prior distributions with finite support and the corresponding Bayes procedures were used to establish some of the main theorems relating to the comparison of experiments. Bayes procedures with respect to more general prior distributions have played a very important role in the development of statistics, including its asymptotic theory." "There are many problems where a glance at posterior distributions, for suitable priors, yields immediately interesting information. Also, this technique can hardly be avoided in sequential analysis."
"A useful fact is that any Bayes decision rule obtained by taking a proper prior over the whole parameter space must be admissible"
"An important area of investigation in the development of admissibility ideas has been that of conventional sampling-theory procedures, and many interesting results have been obtained."
=== Model selection ===
Bayesian methodology also plays a role in model selection, where the aim is to select one model from a set of competing models that represents most closely the underlying process that generated the observed data. In Bayesian model comparison, the model with the highest posterior probability given the data is selected. The posterior probability of a model depends on the evidence, or marginal likelihood, which reflects the probability that the data is generated by the model, and on the prior belief of the model. When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed at selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule or the MAP probability rule.
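As a toy illustration of this rule (the two point-hypothesis models and the data here are invented for the example), the Bayes factor can be computed directly from the models' marginal likelihoods:

from scipy.stats import binom

heads, n = 7, 10                 # observed data
m1 = binom.pmf(heads, n, 0.5)    # marginal likelihood of model 1 (fair coin)
m2 = binom.pmf(heads, n, 0.7)    # marginal likelihood of model 2 (biased coin)
bayes_factor = m2 / m1           # equals the posterior odds under equal priors
print(bayes_factor)              # > 1 favours model 2; the MAP rule selects it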
== Probabilistic programming ==
While conceptually simple, Bayesian methods can be mathematically and numerically challenging. Probabilistic programming languages (PPLs) implement functions to easily build Bayesian models together with efficient automatic inference methods. This helps separate the model building from the inference, allowing practitioners to focus on their specific problems and leaving PPLs to handle the computational details for them.
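For instance, a simple beta–binomial model (a hypothetical example; the API shown is that of the PyMC library and may differ between versions) can be specified and fitted in a few lines, with the PPL handling the sampler automatically:

import pymc as pm

with pm.Model():
    theta = pm.Beta("theta", alpha=1, beta=1)      # prior on a coin's bias
    pm.Binomial("y", n=10, p=theta, observed=7)    # likelihood of the data
    idata = pm.sample(1000)                        # automatic MCMC inference
# The samples in `idata` approximate the posterior p(theta | data).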
== Applications ==
=== Statistical data analysis ===
See the separate Wikipedia entry on Bayesian statistics, specifically the statistical modeling section in that page.
=== Computer applications ===
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s. There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like the Gibbs sampling and other Metropolis–Hastings algorithm schemes. Recently Bayesian inference has gained popularity among the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
As applied to statistical classification, Bayesian inference has been used to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, Mozilla, XEAMS, and others. Spam classification is treated in more detail in the article on the naïve Bayes classifier.
Solomonoff's Inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam's Razor. Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
=== Bioinformatics and healthcare applications ===
Bayesian inference has been applied in different Bioinformatics applications, including differential gene expression analysis. Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), where serial measurements are incorporated to update a Bayesian model which is primarily built from prior knowledge.
=== In the courtroom ===
Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for "beyond a reasonable doubt". Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle.
If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population. For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000.
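In odds form, each piece of evidence multiplies the prior odds by its likelihood ratio; the short Python sketch below (with invented numbers) illustrates the arithmetic:

# Posterior odds = prior odds x likelihood ratio (odds form of Bayes' theorem).
prior_odds = (1 / 1000) / (999 / 1000)   # one suspect in a population of 1,000
likelihood_ratio = 100.0                 # evidence 100x more likely if guilty
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)                    # about 0.091, still far from certainty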
The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task."
Gardner-Medwin argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions:
A – the known facts and testimony could have arisen if the defendant is guilty.
B – the known facts and testimony could have arisen if the defendant is innocent.
C – the defendant is guilty.
Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. See also Lindley's paradox.
=== Bayesian epistemology ===
Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic.
Karl Popper and David Miller have rejected the idea of Bayesian rationalism, i.e. using Bayes rule to make epistemological inferences: It is prone to the same vicious circle as any other justificationist epistemology, because it presupposes what it attempts to justify. According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0.
=== Other ===
The scientific method is sometimes interpreted as an application of Bayesian inference. In this view, Bayes' rule guides (or should guide) the updating of probabilities about hypotheses conditional on new observations or experiments. Bayesian inference has also been applied to treat stochastic scheduling problems with incomplete information by Cai et al. (2009).
Bayesian search theory is used to search for lost objects.
Bayesian inference in phylogeny
Bayesian tool for methylation analysis
Bayesian approaches to brain function investigate the brain as a Bayesian mechanism.
Bayesian inference in ecological studies
Bayesian inference is used to estimate parameters in stochastic chemical kinetic models
Bayesian inference in econophysics for currency or prediction of trend changes in financial quotations
Bayesian inference in marketing
Bayesian inference in motor learning
Bayesian inference is used in probabilistic numerics to solve numerical problems
== Bayes and Bayesian inference ==
The problem considered by Bayes in Proposition 9 of his essay, "An Essay Towards Solving a Problem in the Doctrine of Chances", is the posterior distribution for the parameter a (the success rate) of the binomial distribution.
== History ==
The term Bayesian refers to Thomas Bayes (1701–1761), who proved that probabilistic limits could be placed on an unknown event. However, it was Pierre-Simon Laplace (1749–1827) who introduced (as Principle VI) what is now called Bayes' theorem and used it to address problems in celestial mechanics, medical statistics, reliability, and jurisprudence. Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.
In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objective or "non-informative" current, the statistical analysis depends on only the model assumed, the data analyzed, and the method assigning the prior, which differs from one objective Bayesian practitioner to another. In the subjective or "informative" current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc.
In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications. Despite growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics. Nonetheless, Bayesian methods are widely accepted and used, such as for example in the field of machine learning.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
For a full report on the history of Bayesian statistics and the debates with frequentist approaches, read Vallverdu, Jordi (2016). Bayesians Versus Frequentists: A Philosophical Debate on Statistical Reasoning. New York: Springer. ISBN 978-3-662-48638-2.
Clayton, Aubrey (August 2021). Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science. Columbia University Press. ISBN 978-0-231-55335-3.
=== Elementary ===
The following books are listed in ascending order of probabilistic sophistication:
Stone, JV (2013), "Bayes' Rule: A Tutorial Introduction to Bayesian Analysis", Sebtel Press, England.
Dennis V. Lindley (2013). Understanding Uncertainty, Revised Edition (2nd ed.). John Wiley. ISBN 978-1-118-65012-7.
Colin Howson & Peter Urbach (2005). Scientific Reasoning: The Bayesian Approach (3rd ed.). Open Court Publishing Company. ISBN 978-0-8126-9578-6.
Berry, Donald A. (1996). Statistics: A Bayesian Perspective. Duxbury. ISBN 978-0-534-23476-8.
Morris H. DeGroot & Mark J. Schervish (2002). Probability and Statistics (third ed.). Addison-Wesley. ISBN 978-0-201-52488-8.
Bolstad, William M. (2007) Introduction to Bayesian Statistics: Second Edition, John Wiley ISBN 0-471-27020-2
Winkler, Robert L (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 978-0-9647938-4-2. Updated classic textbook. Bayesian theory clearly presented.
Lee, Peter M. Bayesian Statistics: An Introduction. Fourth Edition (2012), John Wiley ISBN 978-1-1183-3257-3
Carlin, Bradley P. & Louis, Thomas A. (2008). Bayesian Methods for Data Analysis, Third Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 978-1-58488-697-6.
Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
=== Intermediate or advanced ===
Berger, James O (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics (Second ed.). Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2.
Bernardo, José M.; Smith, Adrian F. M. (1994). Bayesian Theory. Wiley.
DeGroot, Morris H., Optimal Statistical Decisions. Wiley Classics Library. 2004. (Originally published (1970) by McGraw-Hill.) ISBN 0-471-68029-X.
Schervish, Mark J. (1995). Theory of statistics. Springer-Verlag. ISBN 978-0-387-94546-0.
Jaynes, E. T. (1998). Probability Theory: The Logic of Science.
O'Hagan, A. and Forster, J. (2003). Kendall's Advanced Theory of Statistics, Volume 2B: Bayesian Inference. Arnold, New York. ISBN 0-340-52922-9.
Robert, Christian P (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation (paperback ed.). Springer. ISBN 978-0-387-71598-8.
Pearl, Judea. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann.
Pierre Bessière et al. (2013). "Bayesian Programming". CRC Press. ISBN 9781439880326
Francisco J. Samaniego (2010). "A Comparison of the Bayesian and Frequentist Approaches to Estimation". Springer. New York, ISBN 978-1-4419-5940-9
== External links ==
"Bayesian approach to statistical problems", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Bayesian Statistics from Scholarpedia.
Introduction to Bayesian probability from Queen Mary University of London
Mathematical Notes on Bayesian Statistics and Markov Chain Monte Carlo
Bayesian reading list Archived 2011-06-25 at the Wayback Machine, categorized and annotated by Tom Griffiths
A. Hajek and S. Hartmann: Bayesian Epistemology, in: J. Dancy et al. (eds.), A Companion to Epistemology. Oxford: Blackwell 2010, 93–106.
S. Hartmann and J. Sprenger: Bayesian Epistemology, in: S. Bernecker and D. Pritchard (eds.), Routledge Companion to Epistemology. London: Routledge 2010, 609–620.
Stanford Encyclopedia of Philosophy: "Inductive Logic"
Bayesian Confirmation Theory (PDF)
What is Bayesian Learning?
Data, Uncertainty and Inference — Informal introduction with many examples, ebook (PDF) freely available at causaScientia
The Burrows–Wheeler transform (BWT) rearranges a character string into runs of similar characters, in a manner that can be reversed to recover the original string. Since compression techniques such as move-to-front transform and run-length encoding are more effective when such runs are present, the BWT can be used as a preparatory step to improve the efficiency of a compression algorithm, and is used this way in software such as bzip2. The algorithm can be implemented efficiently using a suffix array thus reaching linear time complexity.
It was invented by David Wheeler in 1983, and later published by him and Michael Burrows in 1994. Their paper included a compression algorithm, called the Block-sorting Lossless Data Compression Algorithm or BSLDCA, that compresses data by using the BWT followed by move-to-front coding and Huffman coding or arithmetic coding.
== Description ==
The transform is done by constructing a matrix (known as the Burrows–Wheeler matrix) whose rows are the circular shifts of the input text, sorted in lexicographic order, then taking the final column of that matrix.
To allow the transform to be reversed, one additional step is necessary: either the index of the original string in the Burrows–Wheeler matrix must be returned along with the transformed string (the approach shown in the original paper by Burrows and Wheeler) or a special end-of-text character must be added at the start or end of the input text before the transform is executed.
=== Example ===
Given an input string S = ^BANANA$ (step 1 in the table below), rotate it N times (step 2), where N = 8 is the length of the S string considering also the red ^ character representing the start of the string and the red $ character representing the 'EOF' pointer; these rotations, or circular shifts, are then sorted lexicographically (step 3). The output of the encoding phase is the last column L = BNN^AA$A after step 3, and the index (0-based) I of the row containing the original string S, in this case I = 6.
It is not necessary to use both $ and ^, but at least one must be used, else we cannot invert the transform, since all circular permutations of a string have the same Burrows–Wheeler transform.
=== Pseudocode ===
The following pseudocode gives a simple (though inefficient) way to calculate the BWT and its inverse. It assumes that the input string s contains a special character 'EOF' which is the last character and occurs nowhere else in the text.
function BWT (string s)
    create a table, where the rows are all possible rotations of s
    sort rows alphabetically
    return (last column of the table)

function inverseBWT (string s)
    create empty table
    repeat length(s) times
        // first insert creates first column
        insert s as a column of table before first column of the table
        sort rows of the table alphabetically
    return (row that ends with the 'EOF' character)
== Explanation ==
If the original string had several substrings that occurred often, then the BWT-transformed string will have several places where a single character is repeated many times in a row, creating more-easily-compressible data. For instance, consider transforming an English text frequently containing the word "the".
Sorting the rotations of this text groups rotations starting with "he " together, and the last character of such a rotation (which is also the character before the "he ") will usually be "t" (though perhaps occasionally not, such as if the text contained "ache "), so the result of the transform will contain a run, or runs, of many consecutive "t" characters. Similarly, rotations beginning with "e " are grouped together, but "e " is often preceded by "h", so we see the output above contains a run of five consecutive "h" characters.
Thus it can be seen that the success of this transform depends upon one value having a high probability of occurring before a sequence, so that in general it needs fairly long samples (a few kilobytes at least) of appropriate data (such as text).
The remarkable thing about the BWT is not that it generates a more easily encoded output—an ordinary sort would do that—but that it does this reversibly, allowing the original document to be re-generated from the last column data.
The inverse can be understood this way. Take the final table in the BWT algorithm, and erase all but the last column. Given only this information, you can easily reconstruct the first column. The last column tells you all the characters in the text, so just sort these characters alphabetically to get the first column. Then, the last and first columns (of each row) together give you all pairs of successive characters in the document, where pairs are taken cyclically so that the last and first character form a pair. Sorting the list of pairs gives the first and second columns. To obtain the third column, the last column is again prepended to the table, and the rows are sorted lexicographically. Continuing in this manner, you can reconstruct the entire list. Then, the row with the "end of file" character at the end is the original text. Reversing the example above is done like this:
== Optimization ==
A number of optimizations can make these algorithms run more efficiently without changing the output. There is no need to represent the table in either the encoder or decoder. In the encoder, each row of the table can be represented by a single pointer into the strings, and the sort performed using the indices. In the decoder, there is also no need to store the table, and the decoded string can be generated one character at a time from left to right. Comparative sorting can even be avoided in favor of linear sorting, with performance proportional to the alphabet size and string length. A "character" in the algorithm can be a byte, or a bit, or any other convenient size.
One may also make the observation that mathematically, the encoded string can be computed as a simple modification of the suffix array, and suffix arrays can be computed with linear time and memory. The BWT can be defined with regards to the suffix array SA of text T as (1-based indexing):
{\displaystyle BWT[i]={\begin{cases}T[SA[i]-1],&{\text{if }}SA[i]>0\\\$,&{\text{otherwise}}\end{cases}}}
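This definition translates directly into Python; the sketch below (illustrative; it builds the suffix array naively by sorting, whereas linear-time constructions exist) assumes the text ends in a unique "$" sentinel:

def bwt_from_suffix_array(text: str) -> str:
    n = len(text)
    sa = sorted(range(n), key=lambda i: text[i:])   # naive suffix array
    # BWT[i] = T[SA[i] - 1] if SA[i] > 0, else the sentinel "$".
    return "".join(text[i - 1] if i > 0 else "$" for i in sa)

print(bwt_from_suffix_array("BANANA$"))  # ANNB$AA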
There is no need to have an actual 'EOF' character. Instead, a pointer can be used that remembers where in a string the 'EOF' would be if it existed. In this approach, the output of the BWT must include both the transformed string, and the final value of the pointer. The inverse transform then shrinks it back down to the original size: it is given a string and a pointer, and returns just a string.
A complete description of the algorithms can be found in Burrows and Wheeler's paper, or in a number of online sources. The algorithms vary somewhat by whether EOF is used, and in which direction the sorting is done. In fact, the original formulation did not use an EOF marker.
== Bijective variant ==
Since any rotation of the input string will lead to the same transformed string, the BWT cannot be inverted without adding an EOF marker to the end of the input or doing something equivalent, making it possible to distinguish the input string from all its rotations. Increasing the size of the alphabet (by appending the EOF character) makes later compression steps awkward.
There is a bijective version of the transform, by which the transformed string uniquely identifies the original, and the two have the same length and contain exactly the same characters, just in a different order.
The bijective transform is computed by factoring the input into a non-increasing sequence of Lyndon words; such a factorization exists and is unique by the Chen–Fox–Lyndon theorem, and may be found in linear time and constant space. The algorithm sorts the rotations of all the words; as in the Burrows–Wheeler transform, this produces a sorted sequence of n strings. The transformed string is then obtained by picking the final character of each string in this sorted list. The one important caveat here is that strings of different lengths are not ordered in the usual way; the two strings are repeated forever, and the infinite repeats are sorted. For example, "ORO" precedes "OR" because "OROORO..." precedes "OROROR...".
For example, the text "^BANANA$" is transformed into "ANNBAA^$" through these steps (the red $ character indicates the EOF pointer in the original string). The EOF character is unneeded in the bijective transform, so it is dropped during the transform and re-added to its proper place in the file.
The string is broken into Lyndon words so the words in the sequence are decreasing using the comparison method above. (Note that we're sorting '^' as succeeding other characters.) "^BANANA" becomes (^) (B) (AN) (AN) (A).
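The factorization step can be computed with Duval's algorithm; the following Python sketch (an illustration, not taken from the original article) returns the non-increasing sequence of Lyndon words in linear time:

def lyndon_factorization(s: str) -> list[str]:
    """Duval's algorithm: factor s into non-increasing Lyndon words."""
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:     # extend while still periodic
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:                     # emit copies of the Lyndon word
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

# In ASCII, '^' already sorts after the uppercase letters, as required here.
print(lyndon_factorization("^BANANA"))   # ['^', 'B', 'AN', 'AN', 'A']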
Up until the last step, the process is identical to the inverse Burrows–Wheeler process, but here it will not necessarily give rotations of a single sequence; it instead gives rotations of Lyndon words (which will start to repeat as the process is continued). Here, we can see (repetitions of) four distinct Lyndon words: (A), (AN) (twice), (B), and (^). (NANA... doesn't represent a distinct word, as it is a cycle of ANAN....)
At this point, these words are sorted into reverse order: (^), (B), (AN), (AN), (A). These are then concatenated to get
^BANANA
The Burrows–Wheeler transform can indeed be viewed as a special case of this bijective transform; instead of the traditional introduction of a new letter from outside our alphabet to denote the end of the string, we can introduce a new letter that compares as preceding all existing letters that is put at the beginning of the string. The whole string is now a Lyndon word, and running it through the bijective process will therefore result in a transformed result that, when inverted, gives back the Lyndon word, with no need for reassembling at the end.
For example, applying the bijective transform gives a result that includes eight runs of identical characters. These runs are, in order: XX, II, XX, PP, .., EE, .., and IIII. In total, 18 characters are used in these runs.
== Dynamic Burrows–Wheeler transform ==
When a text is edited, its Burrows–Wheeler transform will change. Salson et al. propose an algorithm that deduces the Burrows–Wheeler transform of an edited text from that of the original text, doing a limited number of local reorderings in the original Burrows–Wheeler transform, which can be faster than constructing the Burrows–Wheeler transform of the edited text directly.
== Sample implementation ==
This Python implementation sacrifices speed for simplicity: the program is short, but takes more than the linear time that would be desired in a practical implementation. It essentially does what the pseudocode section does.
Using the STX/ETX control codes to mark the start and end of the text, and using s[i:] + s[:i] to construct the ith rotation of s, the forward transform takes the last character of each of the sorted rows:
The inverse transform repeatedly inserts r as the left column of the table and sorts the table. After the whole table is built, it returns the row that ends with ETX, minus the STX and ETX.
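A minimal sketch matching this description (reconstructed here as an assumption from the surrounding text, since the code listing itself is not reproduced above):

def bwt(s: str) -> str:
    """Burrows–Wheeler transform using STX/ETX markers."""
    assert "\002" not in s and "\003" not in s, "input cannot contain STX or ETX"
    s = "\002" + s + "\003"                        # mark start and end of text
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)   # last column of sorted rows

def ibwt(r: str) -> str:
    """Inverse transform: repeatedly prepend r as a column and re-sort."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    row = next(row for row in table if row.endswith("\003"))
    return row.rstrip("\003").strip("\002")        # drop the markers

print(ibwt(bwt("BANANA")) == "BANANA")             # True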
Following implementation notes from Manzini, it is equivalent to use a simple null character suffix instead. The sorting should be done in colexicographic order (string read right-to-left), i.e. sorted(..., key=lambda s: s[::-1]) in Python. (The above control codes actually fail to satisfy EOF being the last character; the two codes are actually the first. The rotation holds nevertheless.)
== BWT applications ==
As a lossless compression algorithm the Burrows–Wheeler transform offers the important quality that its encoding is reversible and hence the original data may be recovered from the resulting compression. Its lossless quality has provided the basis for different algorithms with different purposes in mind. To name a few, the Burrows–Wheeler transform is used in algorithms for sequence alignment, image compression, data compression, etc. The following is a compilation of some uses given to the Burrows–Wheeler transform.
=== BWT for sequence alignment ===
The advent of next-generation sequencing (NGS) techniques at the end of the 2000s decade has led to another application of the Burrows–Wheeler transformation. In NGS, DNA is fragmented into small pieces, of which the first few bases are sequenced, yielding several millions of "reads", each 30 to 500 base pairs ("DNA characters") long. In many experiments, e.g., in ChIP-Seq, the task is now to align these reads to a reference genome, i.e., to the known, nearly complete sequence of the organism in question (which may be up to several billion base pairs long). A number of alignment programs, specialized for this task, were published, which initially relied on hashing (e.g., Eland, SOAP, or Maq). In an effort to reduce the memory requirement for sequence alignment, several alignment programs were developed (Bowtie, BWA, and SOAP2) that use the Burrows–Wheeler transform.
=== BWT for image compression ===
The Burrows–Wheeler transformation has proved to be fundamental for image compression applications. For example, one study showed a compression pipeline based on the application of the Burrows–Wheeler transformation followed by inversion, run-length, and arithmetic encoders. The pipeline developed in this case is known as Burrows–Wheeler transform with an inversion encoder (BWIC). BWIC has been shown to outperform the compression performance of well-known and widely used algorithms like Lossless JPEG and JPEG 2000, in terms of final compression size of radiography medical images, on the order of 5.1% and 4.1% respectively. The improvements are achieved by combining BWIC with a pre-BWIC scan of the image in a vertical snake order fashion. More recently, additional works have shown that the implementation of the Burrows–Wheeler transform in conjunction with the known move-to-front transform (MTF) can achieve near-lossless compression of images.
=== BWT for compression of genomic databases ===
Cox et al. presented a genomic compression scheme that uses BWT as the algorithm applied during the first stage of compression of several genomic datasets including the human genomic information. Their work proposed that BWT compression could be enhanced by including a second stage compression mechanism called same-as-previous encoding ("SAP"), which makes use of the fact that suffixes of two or more prefix letters could be equal. With the compression mechanism BWT-SAP, Cox et al. showed that in the genomic database ERA015743, 135.5 GB in size, the compression scheme BWT-SAP compresses the ERA015743 dataset by around 94%, to 8.2 GB.
=== BWT for sequence prediction ===
BWT has also been proved to be useful for sequence prediction, which is a common area of study in machine learning and natural-language processing. In particular, Ktistakis et al. proposed a sequence prediction scheme called SuBSeq that exploits the lossless compression of data of the Burrows–Wheeler transform. SuBSeq exploits BWT by extracting the FM-index and then performing a series of operations called backwardSearch, forwardSearch, neighbourExpansion, and getConsequents in order to search for predictions given a suffix. The predictions are then classified based on a weight and put into an array from which the element with the highest weight is given as the prediction from the SuBSeq algorithm. SuBSeq has been shown to outperform state-of-the-art algorithms for sequence prediction both in terms of training time and accuracy.
== References ==
== External links ==
Article by Mark Nelson on the BWT Archived 2017-03-25 at the Wayback Machine
A Bijective String-Sorting Transform, by Gil and Scott Archived 2011-10-08 at the Wayback Machine
Yuta's openbwt-v1.5.zip contains source code for various BWT routines including BWTS for bijective version
On Bijective Variants of the Burrows–Wheeler Transform, by Kufleitner
Blog post and project page for an open-source compression program and library based on the Burrows–Wheeler algorithm
MIT open courseware lecture on BWT (Foundations of Computational and Systems Biology)
League Table Sort (LTS) or The Weighting algorithm to BWT by Abderrahim Hechachena
In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted
{\displaystyle D_{\text{KL}}(P\parallel Q)}, is a type of statistical distance: a measure of how much a model probability distribution Q is different from a true probability distribution P. Mathematically, it is defined as
{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}.}
A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P. While it is a measure of how different two distributions are and is thus a distance in some sense, it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information), and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence, a generalization of squared distance, and for certain classes of distributions (notably an exponential family), it satisfies a generalized Pythagorean theorem (which applies to squared distances).
Relative entropy is always a non-negative real number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference; and practical, such as applied statistics, fluid mechanics, neuroscience, bioinformatics, and machine learning.
== Introduction and context ==
Consider two probability distributions P and Q. Usually, P represents the data, the observations, or a measured probability distribution. Distribution Q represents instead a theory, a model, a description or an approximation of P. The Kullback–Leibler divergence
{\displaystyle D_{\text{KL}}(P\parallel Q)} is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P. Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations.
== Etymology ==
The relative entropy was introduced by Solomon Kullback and Richard Leibler in Kullback & Leibler (1951) as "the mean information for discrimination between {\displaystyle H_{1}} and {\displaystyle H_{2}} per observation from {\displaystyle \mu _{1}}", where one is comparing two probability measures {\displaystyle \mu _{1},\mu _{2}}, and {\displaystyle H_{1},H_{2}} are the hypotheses that one is selecting from measure {\displaystyle \mu _{1},\mu _{2}} (respectively). They denoted this by {\displaystyle I(1:2)}, and defined the "'divergence' between {\displaystyle \mu _{1}} and {\displaystyle \mu _{2}}" as the symmetrized quantity {\displaystyle J(1,2)=I(1:2)+I(2:1)}, which had already been defined and used by Harold Jeffreys in 1948. In Kullback (1959), the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as "directed divergences" between two distributions; Kullback preferred the term discrimination information. The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given in Kullback (1959, pp. 6–7, §1.3 Divergence). The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence.
== Definition ==
For discrete probability distributions P and Q defined on the same sample space,
{\displaystyle {\mathcal {X}}}, the relative entropy from Q to P is defined to be
{\displaystyle D_{\text{KL}}(P\parallel Q)=\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {P(x)}{Q(x)}}\,,}
which is equivalent to
{\displaystyle D_{\text{KL}}(P\parallel Q)=-\sum _{x\in {\mathcal {X}}}P(x)\,\log {\frac {Q(x)}{P(x)}}\,.}
In other words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P.
Relative entropy is only defined in this way if, for all x,
{\displaystyle Q(x)=0} implies {\displaystyle P(x)=0} (absolute continuity). Otherwise, it is often defined as {\displaystyle +\infty }, but the value {\displaystyle +\infty } is possible even if {\displaystyle Q(x)\neq 0} everywhere, provided that {\displaystyle {\mathcal {X}}} is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below.
Whenever
{\displaystyle P(x)} is zero, the contribution of the corresponding term is interpreted as zero because
{\displaystyle \lim _{x\to 0^{+}}x\,\log(x)=0\,.}
For distributions P and Q of a continuous random variable, relative entropy is defined to be the integral
{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\,,}
where p and q denote the probability densities of P and Q.
More generally, if P and Q are probability measures on a measurable space
{\displaystyle {\mathcal {X}}\,,}
and P is absolutely continuous with respect to Q, then the relative entropy from Q to P is defined as
{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}\log {\frac {P(dx)}{Q(dx)}}\,P(dx)\,,}
where
{\displaystyle {\frac {P(dx)}{Q(dx)}}} is the Radon–Nikodym derivative of P with respect to Q, i.e. the unique Q-almost everywhere defined function r on {\displaystyle {\mathcal {X}}} such that {\displaystyle P(dx)=r(x)Q(dx)}
which exists because P is absolutely continuous with respect to Q. Also we assume the expression on the right-hand side exists. Equivalently (by the chain rule), this can be written as
{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}{\frac {P(dx)}{Q(dx)}}\ \log {\frac {P(dx)}{Q(dx)}}\ Q(dx)\,,}
which is the entropy of P relative to Q. Continuing in this case, if
{\displaystyle \mu } is any measure on {\displaystyle {\mathcal {X}}} for which densities p and q with {\displaystyle P(dx)=p(x)\mu (dx)} and {\displaystyle Q(dx)=q(x)\mu (dx)}
exist (meaning that P and Q are both absolutely continuous with respect to
{\displaystyle \mu }
), then the relative entropy from Q to P is given as
{\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{x\in {\mathcal {X}}}p(x)\,\log {\frac {p(x)}{q(x)}}\ \mu (dx)\,.}
Note that such a measure
{\displaystyle \mu } for which densities can be defined always exists, since one can take {\textstyle \mu ={\frac {1}{2}}\left(P+Q\right)}, although in practice it will usually be one that applies naturally in the context, such as counting measure for discrete distributions, or Lebesgue measure or a convenient variant thereof such as Gaussian measure or the uniform measure on the sphere, Haar measure on a Lie group, etc. for continuous distributions.
The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving relative entropy hold regardless of the base of the logarithm.
Various conventions exist for referring to
{\displaystyle D_{\text{KL}}(P\parallel Q)}
in words. Often it is referred to as the divergence between P and Q, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P from Q or as the divergence from Q to P. This reflects the asymmetry in Bayesian inference, which starts from a prior Q and updates to the posterior P. Another common way to refer to
{\displaystyle D_{\text{KL}}(P\parallel Q)}
is as the relative entropy of P with respect to Q or the information gain from P over Q.
== Basic example ==
Kullback gives the following example (Table 2.1, Example 2.1). Let P and Q be the distributions shown in the table and figure. P is the distribution on the left side of the figure, a binomial distribution with
{\displaystyle N=2} and {\displaystyle p=0.4}. Q is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes x = 0, 1, 2 (i.e. {\displaystyle {\mathcal {X}}=\{0,1,2\}}), each with probability {\displaystyle p=1/3}.
Relative entropies
{\displaystyle D_{\text{KL}}(P\parallel Q)} and {\displaystyle D_{\text{KL}}(Q\parallel P)}
are calculated as follows. This example uses the natural log with base e, designated ln to get results in nats (see units of information):
{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}P(x)\,\ln {\frac {P(x)}{Q(x)}}\\&={\frac {9}{25}}\ln {\frac {9/25}{1/3}}+{\frac {12}{25}}\ln {\frac {12/25}{1/3}}+{\frac {4}{25}}\ln {\frac {4/25}{1/3}}\\&={\frac {1}{25}}\left(32\ln 2+55\ln 3-50\ln 5\right)\\&\approx 0.0852996,\end{aligned}}}
{\displaystyle {\begin{aligned}D_{\text{KL}}(Q\parallel P)&=\sum _{x\in {\mathcal {X}}}Q(x)\,\ln {\frac {Q(x)}{P(x)}}\\&={\frac {1}{3}}\,\ln {\frac {1/3}{9/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{12/25}}+{\frac {1}{3}}\,\ln {\frac {1/3}{4/25}}\\&={\frac {1}{3}}\left(-4\ln 2-6\ln 3+6\ln 5\right)\\&\approx 0.097455.\end{aligned}}}
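These values are straightforward to verify numerically; the following Python sketch (illustrative) reproduces both divergences in nats:

import numpy as np

p = np.array([9/25, 12/25, 4/25])   # binomial distribution with N = 2, p = 0.4
q = np.array([1/3, 1/3, 1/3])       # discrete uniform distribution on {0, 1, 2}

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))   # natural log gives nats

print(kl(p, q))  # approximately 0.0852996
print(kl(q, p))  # approximately 0.097455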
== Interpretations ==
=== Statistics ===
In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions P and Q based on an observation Y (drawn from one of them) is through the log of the ratio of their likelihoods:
{\displaystyle \log P(Y)-\log Q(Y)}. The KL divergence is the expected value of this statistic if Y is actually drawn from P. Kullback motivated the statistic as an expected log likelihood ratio.
=== Coding ===
In the context of coding theory,
{\displaystyle D_{\text{KL}}(P\parallel Q)} can be constructed by measuring the expected number of extra bits required to code samples from P using a code optimized for Q rather than the code optimized for P.
=== Inference ===
In the context of machine learning,
{\displaystyle D_{\text{KL}}(P\parallel Q)} is often called the information gain achieved if P would be used instead of Q which is currently used. By analogy with information theory, it is called the relative entropy of P with respect to Q.
Expressed in the language of Bayesian inference,
{\displaystyle D_{\text{KL}}(P\parallel Q)} is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P.
=== Information geometry ===
In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P. In order to find a distribution Q that is closest to P, we can minimize the KL divergence and compute an information projection.
While it is a statistical distance, it is not a metric, the most familiar type of distance, but instead it is a divergence. While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general
{\displaystyle D_{\text{KL}}(P\parallel Q)} does not equal {\displaystyle D_{\text{KL}}(Q\parallel P)}, and the asymmetry is an important part of the geometry. The infinitesimal form of relative entropy, specifically its Hessian, gives a metric tensor that equals the Fisher information metric; see § Fisher information metric. The Fisher information metric on a probability distribution determines the natural gradient used in information-geometric optimization algorithms; its quantum version is the Fubini–Study metric. Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation.
The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f-divergence. For probabilities over a finite alphabet, it is unique in being a member of both of these classes of statistical divergences. The application of Bregman divergence can be found in mirror descent.
=== Finance (game theory) ===
Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes (e.g. a "horse race" in which the official odds add up to one). The rate of return expected by such an investor is equal to the relative entropy between the investor's believed probabilities and the official odds. This is a special case of a much more general connection between financial returns and divergence measures.
Financial risks are connected to
{\displaystyle D_{\text{KL}}}
via information geometry. Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk. Extending this concept, relative entropy can be hypothetically utilised to identify the behaviour of informed investors, if one takes this to be represented by the magnitude and deviations away from the prior expectations of fund flows, for example.
== Motivation ==
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value {\displaystyle x_{i}} out of a set of possibilities X can be seen as representing an implicit probability distribution {\displaystyle q(x_{i})=2^{-\ell _{i}}} over X, where {\displaystyle \ell _{i}} is the length of the code for {\displaystyle x_{i}} in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P: it is the excess entropy.
{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{q(x)}}-\sum _{x\in {\mathcal {X}}}p(x)\log {\frac {1}{p(x)}}\\[5pt]&=\mathrm {H} (P,Q)-\mathrm {H} (P)\end{aligned}}}
where
{\displaystyle \mathrm {H} (P,Q)} is the cross entropy of Q relative to P and {\displaystyle \mathrm {H} (P)} is the entropy of P (which is the same as the cross-entropy of P with itself).
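The decomposition can be checked numerically; this Python sketch (illustrative, reusing the example distributions from the section above) confirms that the divergence is the cross-entropy minus the entropy:

import numpy as np

p = np.array([9/25, 12/25, 4/25])
q = np.array([1/3, 1/3, 1/3])

cross_entropy = -np.sum(p * np.log(q))   # H(P, Q)
entropy = -np.sum(p * np.log(p))         # H(P)
kl = np.sum(p * np.log(p / q))           # D_KL(P || Q)
print(np.isclose(kl, cross_entropy - entropy))  # True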
The relative entropy {\displaystyle D_{\text{KL}}(P\parallel Q)} can be thought of geometrically as a statistical distance, a measure of how far the distribution Q is from the distribution P. Geometrically it is a divergence: an asymmetric, generalized form of squared distance. The cross-entropy {\displaystyle H(P,Q)} is itself such a measurement (formally a loss function), but it cannot be thought of as a distance, since {\displaystyle H(P,P)=:H(P)} is not zero. This can be fixed by subtracting {\displaystyle H(P)} to make {\displaystyle D_{\text{KL}}(P\parallel Q)} agree more closely with our notion of distance, as the excess loss. The resulting function is asymmetric, and while this can be symmetrized (see § Symmetrised divergence), the asymmetric form is more useful. See § Interpretations for more on the geometric interpretation.
Relative entropy relates to "rate function" in the theory of large deviations.
Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy. Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence.
== Properties ==
Relative entropy is always non-negative,
{\displaystyle D_{\text{KL}}(P\parallel Q)\geq 0,}
a result known as Gibbs' inequality, with {\displaystyle D_{\text{KL}}(P\parallel Q)} equal to zero if and only if {\displaystyle P=Q} as measures.
In particular, if
{\displaystyle P(dx)=p(x)\mu (dx)} and {\displaystyle Q(dx)=q(x)\mu (dx)}, then {\displaystyle p(x)=q(x)} {\displaystyle \mu }-almost everywhere. The entropy {\displaystyle \mathrm {H} (P)}
thus sets a minimum value for the cross-entropy
{\displaystyle \mathrm {H} (P,Q)}
, the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P.
No upper bound exists for the general case. However, it is shown that if P and Q are two discrete probability distributions built by distributing the same discrete quantity, then the maximum value of {\displaystyle D_{\text{KL}}(P\parallel Q)} can be calculated.
Relative entropy remains well-defined for continuous distributions, and furthermore is invariant under parameter transformations. For example, if a transformation is made from variable x to variable
{\displaystyle y(x)}, then, since
{\displaystyle P(dx)=p(x)\,dx={\tilde {p}}(y)\,dy={\tilde {p}}(y(x))\left|{\tfrac {dy}{dx}}(x)\right|\,dx}
and
{\displaystyle Q(dx)=q(x)\,dx={\tilde {q}}(y)\,dy={\tilde {q}}(y)\left|{\tfrac {dy}{dx}}(x)\right|dx}
where
{\displaystyle \left|{\tfrac {dy}{dx}}(x)\right|}
is the absolute value of the derivative or more generally of the Jacobian, the relative entropy may be rewritten:
{\displaystyle {\begin{aligned}D_{\text{KL}}(P\parallel Q)&=\int _{x_{a}}^{x_{b}}p(x)\,\log {\frac {p(x)}{q(x)}}\,dx\\[6pt]&=\int _{x_{a}}^{x_{b}}{\tilde {p}}(y(x))\left|{\frac {dy}{dx}}\right|\log {\frac {{\tilde {p}}(y(x))\,\left|{\frac {dy}{dx}}\right|}{{\tilde {q}}(y(x))\,\left|{\frac {dy}{dx}}\right|}}\,dx\\&=\int _{y_{a}}^{y_{b}}{\tilde {p}}(y)\,\log {\frac {{\tilde {p}}(y)}{{\tilde {q}}(y)}}\,dy\end{aligned}}}
where
{\displaystyle y_{a}=y(x_{a})} and {\displaystyle y_{b}=y(x_{b})}. Although it was assumed that the transformation was continuous, this need not be the case. This also shows that the relative entropy produces a dimensionally consistent quantity, since if x is a dimensioned variable,
{\displaystyle p(x)} and {\displaystyle q(x)}
are also dimensioned, since e.g.
P
(
d
x
)
=
p
(
x
)
d
x
{\displaystyle P(dx)=p(x)\,dx}
is dimensionless. The argument of the logarithmic term is and remains dimensionless, as it must. It can therefore be seen as in some ways a more fundamental quantity than some other properties in information theory (such as self-information or Shannon entropy), which can become undefined or negative for non-discrete probabilities.
Relative entropy is additive for independent distributions in much the same way as Shannon entropy. If $P_{1},P_{2}$ are independent distributions, and $P(dx,dy)=P_{1}(dx)P_{2}(dy)$, and likewise $Q(dx,dy)=Q_{1}(dx)Q_{2}(dy)$ for independent distributions $Q_{1},Q_{2}$, then
$$D_{\text{KL}}(P\parallel Q)=D_{\text{KL}}(P_{1}\parallel Q_{1})+D_{\text{KL}}(P_{2}\parallel Q_{2}).$$
Relative entropy $D_{\text{KL}}(P\parallel Q)$ is convex in the pair of probability measures $(P,Q)$, i.e. if $(P_{1},Q_{1})$ and $(P_{2},Q_{2})$ are two pairs of probability measures, then
$$D_{\text{KL}}(\lambda P_{1}+(1-\lambda )P_{2}\parallel \lambda Q_{1}+(1-\lambda )Q_{2})\leq \lambda D_{\text{KL}}(P_{1}\parallel Q_{1})+(1-\lambda )D_{\text{KL}}(P_{2}\parallel Q_{2})\quad {\text{for }}0\leq \lambda \leq 1.$$
$D_{\text{KL}}(P\parallel Q)$ may be Taylor expanded about its minimum (i.e. $P=Q$) as
$$D_{\text{KL}}(P\parallel Q)=\sum _{n=2}^{\infty }{\frac {1}{n(n-1)}}\sum _{x\in {\mathcal {X}}}{\frac {(Q(x)-P(x))^{n}}{Q(x)^{n-1}}},$$
which converges if and only if $P\leq 2Q$ almost surely w.r.t. $Q$.
== Duality formula for variational inference ==
The following result, due to Donsker and Varadhan, is known as the Donsker–Varadhan variational formula.
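As usually stated, for probability measures $P$ and $Q$ on a common space, with the supremum taken over bounded measurable functions $f$:
$$D_{\text{KL}}(P\parallel Q)=\sup _{f}\left\{\operatorname {E} _{P}[f(X)]-\log \operatorname {E} _{Q}\left[e^{f(X)}\right]\right\}.$$
The supremum is attained at $f=\log {\tfrac {dP}{dQ}}$ when this density ratio exists; restricting $f$ to a tractable family therefore yields a computable lower bound on the divergence, which is the basis of its use in variational inference.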
== Examples ==
=== Multivariate normal distributions ===
Suppose that we have two multivariate normal distributions, with means $\mu _{0},\mu _{1}$ and with (non-singular) covariance matrices $\Sigma _{0},\Sigma _{1}$. If the two distributions have the same dimension, k, then the relative entropy between the distributions is as follows:
$$D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left[\operatorname {tr} \left(\Sigma _{1}^{-1}\Sigma _{0}\right)-k+\left(\mu _{1}-\mu _{0}\right)^{\mathsf {T}}\Sigma _{1}^{-1}\left(\mu _{1}-\mu _{0}\right)+\ln {\frac {\det \Sigma _{1}}{\det \Sigma _{0}}}\right].$$
The logarithm in the last term must be taken to base e, since all terms apart from the last are base-e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by $\ln(2)$ yields the divergence in bits.
In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions $L_{0},L_{1}$ such that $\Sigma _{0}=L_{0}L_{0}^{T}$ and $\Sigma _{1}=L_{1}L_{1}^{T}$. Then, with M and y solutions to the triangular linear systems $L_{1}M=L_{0}$ and $L_{1}y=\mu _{1}-\mu _{0}$,
$$D_{\text{KL}}\left({\mathcal {N}}_{0}\parallel {\mathcal {N}}_{1}\right)={\frac {1}{2}}\left(\sum _{i,j=1}^{k}{\left(M_{ij}\right)}^{2}-k+|y|^{2}+2\sum _{i=1}^{k}\ln {\frac {(L_{1})_{ii}}{(L_{0})_{ii}}}\right).$$
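A minimal sketch of this computation, assuming NumPy and SciPy are available (the function name `kl_mvn` is illustrative, not from any library):

```python
import numpy as np
from scipy.linalg import solve_triangular

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """D_KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) in nats, via Cholesky factors."""
    k = len(mu0)
    L0 = np.linalg.cholesky(Sigma0)                  # Sigma0 = L0 @ L0.T
    L1 = np.linalg.cholesky(Sigma1)                  # Sigma1 = L1 @ L1.T
    M = solve_triangular(L1, L0, lower=True)         # solves L1 @ M = L0
    y = solve_triangular(L1, mu1 - mu0, lower=True)  # solves L1 @ y = mu1 - mu0
    return 0.5 * (np.sum(M**2) - k + y @ y
                  + 2.0 * np.sum(np.log(np.diag(L1) / np.diag(L0))))
```

Working with the triangular factors avoids forming $\Sigma _{1}^{-1}$ explicitly, which is both cheaper and numerically more stable.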
A special case, and a common quantity in variational inference, is the relative entropy between a diagonal multivariate normal and a standard normal distribution (with zero mean and unit variance):
$$D_{\text{KL}}\left({\mathcal {N}}\left(\left(\mu _{1},\ldots ,\mu _{k}\right)^{\mathsf {T}},\operatorname {diag} \left(\sigma _{1}^{2},\ldots ,\sigma _{k}^{2}\right)\right)\parallel {\mathcal {N}}\left(\mathbf {0} ,\mathbf {I} \right)\right)={\frac {1}{2}}\sum _{i=1}^{k}\left[\sigma _{i}^{2}+\mu _{i}^{2}-1-\ln \left(\sigma _{i}^{2}\right)\right].$$
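This special case reduces to a few vectorized operations, for example (an illustrative sketch, not library code):

```python
import numpy as np

def kl_diag_to_standard(mu, sigma2):
    """D_KL( N(mu, diag(sigma2)) || N(0, I) ) in nats."""
    return 0.5 * np.sum(sigma2 + mu**2 - 1.0 - np.log(sigma2))
```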
For two univariate normal distributions p and q the above simplifies to
$$D_{\text{KL}}\left(p\parallel q\right)=\log {\frac {\sigma _{1}}{\sigma _{0}}}+{\frac {\sigma _{0}^{2}+{\left(\mu _{0}-\mu _{1}\right)}^{2}}{2\sigma _{1}^{2}}}-{\frac {1}{2}}.$$
In the case of co-centered normal distributions with $k=\sigma _{1}/\sigma _{0}$, this simplifies to:
$$D_{\text{KL}}\left(p\parallel q\right)=\log _{2}k+{\frac {k^{-2}-1}{2\ln 2}}\ {\text{bits}}.$$
=== Uniform distributions ===
Consider two uniform distributions, with the support of $p=[A,B]$ enclosed within that of $q=[C,D]$ (so that $C\leq A<B\leq D$). Then the information gain is:
$$D_{\text{KL}}\left(p\parallel q\right)=\log {\frac {D-C}{B-A}}.$$
Intuitively, the information gain from narrowing to a k-times narrower uniform distribution is $\log _{2}k$ bits. This connects with the use of bits in computing, where $\log _{2}k$ bits would be needed to identify one element of a stream of length k.
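A direct numeric check (an illustrative sketch):

```python
import numpy as np

def kl_uniform(A, B, C, D):
    """KL divergence in bits between uniform on [A, B] and uniform on [C, D]."""
    assert C <= A < B <= D
    return np.log2((D - C) / (B - A))

print(kl_uniform(0, 1, 0, 8))  # narrowing the support by a factor of 8 gives 3.0 bits
```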
=== Exponential family ===
The exponential family of distributions is given by
$$p_{X}(x\mid \theta )=h(x)\exp \left(\theta ^{\mathsf {T}}T(x)-A(\theta )\right),$$
where $h(x)$ is the reference measure, $T(x)$ is the sufficient statistic, $\theta $ is the vector of canonical natural parameters, and $A(\theta )$ is the log-partition function.
The KL divergence between two distributions $p(x\mid \theta _{1})$ and $p(x\mid \theta _{2})$ is given by
$$D_{\text{KL}}(\theta _{1}\parallel \theta _{2})={\left(\theta _{1}-\theta _{2}\right)}^{\mathsf {T}}\mu _{1}-A(\theta _{1})+A(\theta _{2}),$$
where $\mu _{1}=\operatorname {E} _{\theta _{1}}[T(X)]=\nabla A(\theta _{1})$ is the mean parameter of $p(x\mid \theta _{1})$.
For example, for the Poisson distribution with mean $\lambda $, the sufficient statistic is $T(x)=x$, the natural parameter is $\theta =\log \lambda $, and the log-partition function is $A(\theta )=e^{\theta }$. As such, the divergence between two Poisson distributions with means $\lambda _{1}$ and $\lambda _{2}$ is
$$D_{\text{KL}}(\lambda _{1}\parallel \lambda _{2})=\lambda _{1}\log {\frac {\lambda _{1}}{\lambda _{2}}}-\lambda _{1}+\lambda _{2}.$$
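The closed form can be verified numerically against a truncated direct sum (an illustrative sketch; the truncation point 200 is arbitrary but ample for these means):

```python
import numpy as np
from scipy.stats import poisson

def kl_poisson(lam1, lam2):
    """D_KL( Poisson(lam1) || Poisson(lam2) ) in nats."""
    return lam1 * np.log(lam1 / lam2) - lam1 + lam2

k = np.arange(200)
p, q = poisson.pmf(k, 3.0), poisson.pmf(k, 5.0)
assert np.isclose(np.sum(p * np.log(p / q)), kl_poisson(3.0, 5.0))
```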
As another example, for a normal distribution with unit variance $N(\mu ,1)$, the sufficient statistic is $T(x)=x$, the natural parameter is $\theta =\mu $, and the log-partition function is $A(\theta )=\mu ^{2}/2$. Thus, the divergence between two normal distributions $N(\mu _{1},1)$ and $N(\mu _{2},1)$ is
$$D_{\text{KL}}(\mu _{1}\parallel \mu _{2})=\left(\mu _{1}-\mu _{2}\right)\mu _{1}-{\frac {\mu _{1}^{2}}{2}}+{\frac {\mu _{2}^{2}}{2}}={\frac {{\left(\mu _{2}-\mu _{1}\right)}^{2}}{2}}.$$
As a final example, the divergence between a normal distribution with unit variance $N(\mu ,1)$ and a Poisson distribution with mean $\lambda $ is
$$D_{\text{KL}}(\mu \parallel \lambda )=(\mu -\log \lambda )\mu -{\frac {\mu ^{2}}{2}}+\lambda .$$
== Relation to metrics ==
While relative entropy is a statistical distance, it is not a metric on the space of probability distributions, but instead it is a divergence. While metrics are symmetric and generalize linear distance, satisfying the triangle inequality, divergences are asymmetric in general and generalize squared distance, in some cases satisfying a generalized Pythagorean theorem. In general $D_{\text{KL}}(P\parallel Q)$ does not equal $D_{\text{KL}}(Q\parallel P)$, and while this can be symmetrized (see § Symmetrised divergence), the asymmetry is an important part of the geometry.
It generates a topology on the space of probability distributions. More concretely, if $\{P_{1},P_{2},\ldots \}$ is a sequence of distributions such that
$$\lim _{n\to \infty }D_{\text{KL}}(P_{n}\parallel Q)=0,$$
then it is said that $P_{n}\xrightarrow {D} Q$. Pinsker's inequality entails that
$$P_{n}\xrightarrow {D} P\Rightarrow P_{n}\xrightarrow {TV} P,$$
where the latter stands for the usual convergence in total variation.
=== Fisher information metric ===
Relative entropy is directly related to the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter $\theta $. Consider then two close-by values $P=P(\theta )$ and $Q=P(\theta _{0})$, so that the parameter $\theta $ differs by only a small amount from the parameter value $\theta _{0}$. Specifically, up to first order one has (using the Einstein summation convention)
$$P(\theta )=P(\theta _{0})+\Delta \theta _{j}\,P_{j}(\theta _{0})+\cdots ,$$
with $\Delta \theta _{j}=(\theta -\theta _{0})_{j}$ a small change of $\theta $ in the j direction, and $P_{j}\left(\theta _{0}\right)={\frac {\partial P}{\partial \theta _{j}}}(\theta _{0})$ the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 for $P=Q$, i.e. $\theta =\theta _{0}$, it changes only to second order in the small parameters $\Delta \theta _{j}$. More formally, as for any minimum, the first derivatives of the divergence vanish:
$$\left.{\frac {\partial }{\partial \theta _{j}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))=0,$$
and by the Taylor expansion one has up to second order
$$D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))={\frac {1}{2}}\,\Delta \theta _{j}\,\Delta \theta _{k}\,g_{jk}(\theta _{0})+\cdots ,$$
where the Hessian matrix of the divergence
$$g_{jk}(\theta _{0})=\left.{\frac {\partial ^{2}}{\partial \theta _{j}\,\partial \theta _{k}}}\right|_{\theta =\theta _{0}}D_{\text{KL}}(P(\theta )\parallel P(\theta _{0}))$$
must be positive semidefinite. Letting $\theta _{0}$ vary (and dropping the subindex 0), the Hessian $g_{jk}(\theta )$ defines a (possibly degenerate) Riemannian metric on the θ parameter space, called the Fisher information metric.
==== Fisher information metric theorem ====
When $p(x,\rho )$ satisfies the following regularity conditions:
$${\frac {\partial \log(p)}{\partial \rho }},\ {\frac {\partial ^{2}\log(p)}{\partial \rho ^{2}}},\ {\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\quad {\text{exist}},$$
$${\begin{aligned}\left|{\frac {\partial p}{\partial \rho }}\right|&<F(x):\int _{x=0}^{\infty }F(x)\,dx<\infty ,\\\left|{\frac {\partial ^{2}p}{\partial \rho ^{2}}}\right|&<G(x):\int _{x=0}^{\infty }G(x)\,dx<\infty ,\\\left|{\frac {\partial ^{3}\log(p)}{\partial \rho ^{3}}}\right|&<H(x):\int _{x=0}^{\infty }p(x,0)H(x)\,dx<\xi <\infty ,\end{aligned}}$$
where ξ is independent of ρ, and
$$\left.\int _{x=0}^{\infty }{\frac {\partial p(x,\rho )}{\partial \rho }}\right|_{\rho =0}\,dx=\left.\int _{x=0}^{\infty }{\frac {\partial ^{2}p(x,\rho )}{\partial \rho ^{2}}}\right|_{\rho =0}\,dx=0,$$
then:
$${\mathcal {D}}(p(x,0)\parallel p(x,\rho ))={\frac {c\rho ^{2}}{2}}+{\mathcal {O}}\left(\rho ^{3}\right)\quad {\text{as }}\rho \to 0.$$
=== Variation of information ===
Another information-theoretic metric is variation of information, which is roughly a symmetrization of conditional entropy. It is a metric on the set of partitions of a discrete probability space.
=== MAUVE metric ===
MAUVE is a measure of the statistical gap between two text distributions, such as the difference between text generated by a model and human-written text. This measure is computed using Kullback–Leibler divergences between the two distributions in a quantized embedding space of a foundation model.
== Relation to other quantities of information theory ==
Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases.
=== Self-information ===
The self-information, also known as the information content of a signal, random variable, or event is defined as the negative logarithm of the probability of the given outcome occurring.
When applied to a discrete random variable, the self-information can be represented as
$$\operatorname {I} (m)=D_{\text{KL}}\left(\delta _{im}\parallel \{p_{i}\}\right),$$
the relative entropy of the probability distribution $P(i)$ from a Kronecker delta representing certainty that $i=m$; i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution $P(i)$ is available to the receiver, not the fact that $i=m$.
=== Mutual information ===
The mutual information,
$${\begin{aligned}\operatorname {I} (X;Y)&=D_{\text{KL}}(P_{X,Y}\parallel P_{X}\cdot P_{Y})\\&=\operatorname {E} _{X}[D_{\text{KL}}^{Y}(P_{Y\mid X}\parallel P_{Y})]\\&=\operatorname {E} _{Y}[D_{\text{KL}}^{X}(P_{X\mid Y}\parallel P_{X})]\end{aligned}}$$
is the relative entropy of the joint probability distribution $P_{X,Y}(x,y)$ from the product $(P_{X}\cdot P_{Y})(x,y)=P_{X}(x)P_{Y}(y)$ of the two marginal probability distributions; i.e. the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution.
=== Shannon entropy ===
The Shannon entropy,
$${\begin{aligned}\mathrm {H} (X)&=\operatorname {E} \left[\operatorname {I} _{X}(x)\right]\\&=\log N-D_{\text{KL}}\left(p_{X}(x)\parallel P_{U}(X)\right)\end{aligned}}$$
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the uniform distribution on the random variates of X, $P_{U}(X)$, from the true distribution $P(X)$; i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution $P_{U}(X)$ rather than the true distribution $P(X)$. This definition of Shannon entropy forms the basis of E.T. Jaynes's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy), which defines the continuous entropy as
$$\lim _{N\to \infty }H_{N}(X)=\log N-\int p(x)\log {\frac {p(x)}{m(x)}}\,dx,$$
which is equivalent to:
$$\log(N)-D_{\text{KL}}(p(x)\parallel m(x)).$$
=== Conditional entropy ===
The conditional entropy,
$${\begin{aligned}\mathrm {H} (X\mid Y)&=\log N-D_{\text{KL}}(P(X,Y)\parallel P_{U}(X)P(Y))\\&=\log N-D_{\text{KL}}(P(X,Y)\parallel P(X)P(Y))-D_{\text{KL}}(P(X)\parallel P_{U}(X))\\&=\mathrm {H} (X)-\operatorname {I} (X;Y)\\&=\log N-\operatorname {E} _{Y}\left[D_{\text{KL}}\left(P\left(X\mid Y\right)\parallel P_{U}(X)\right)\right]\end{aligned}}$$
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the relative entropy of the true joint distribution $P(X,Y)$ from the product distribution $P_{U}(X)P(Y)$; i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution $P_{U}(X)$ rather than the conditional distribution $P(X\mid Y)$ of X given Y.
=== Cross entropy ===
When we have a set of possible events, coming from the distribution p, we can encode them (with a lossless data compression) using entropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g. the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution p in advance, we can devise an encoding that would be optimal (e.g. using Huffman coding), meaning the messages we encode will have the shortest length on average (assuming the encoded events are sampled from p), which will be equal to Shannon's entropy of p (denoted as $\mathrm {H} (p)$). However, if we use a different probability distribution (q) when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between p and q.
The cross entropy between two probability distributions (p and q) measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q, rather than the "true" distribution p. The cross entropy for two distributions p and q over the same probability space is thus defined as follows.
$$\mathrm {H} (p,q)=\operatorname {E} _{p}[-\log q]=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q).$$
For explicit derivation of this, see the Motivation section above.
Under this scenario, relative entropy (KL divergence) can be interpreted as the extra number of bits, on average, that are needed (beyond $\mathrm {H} (p)$) for encoding the events, because of using q for constructing the encoding scheme instead of p.
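A small numeric illustration of the identity $\mathrm {H} (p,q)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel q)$ (a sketch; the example distributions are arbitrary):

```python
import numpy as np

def H(p):                 # Shannon entropy, in bits
    return -np.sum(p * np.log2(p))

def kl(p, q):             # relative entropy, in bits
    return np.sum(p * np.log2(p / q))

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])
cross_entropy = -np.sum(p * np.log2(q))
assert np.isclose(cross_entropy, H(p) + kl(p, q))
```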
== Bayesian updating ==
In Bayesian statistics, relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution: $p(x)\to p(x\mid I)$. If some new fact $Y=y$ is discovered, it can be used to update the posterior distribution for X from $p(x\mid I)$ to a new posterior distribution $p(x\mid y,I)$ using Bayes' theorem:
$$p(x\mid y,I)={\frac {p(y\mid x,I)\,p(x\mid I)}{p(y\mid I)}}.$$
This distribution has a new entropy:
$$\mathrm {H} {\big (}p(x\mid y,I){\big )}=-\sum _{x}p(x\mid y,I)\log p(x\mid y,I),$$
which may be less than or greater than the original entropy $\mathrm {H} (p(x\mid I))$. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on $p(x\mid I)$ instead of a new code based on $p(x\mid y,I)$ would have added an expected number of bits
$$D_{\text{KL}}{\big (}p(x\mid y,I)\parallel p(x\mid I){\big )}=\sum _{x}p(x\mid y,I)\log {\frac {p(x\mid y,I)}{p(x\mid I)}}$$
to the message length. This therefore represents the amount of useful information, or information gain, about X, that has been learned by discovering $Y=y$.
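In code, a single Bayesian update and its information gain might look like this (an illustrative sketch with made-up numbers):

```python
import numpy as np

def info_gain(posterior, prior):
    """D_KL(posterior || prior) in bits."""
    return np.sum(posterior * np.log2(posterior / prior))

prior = np.array([0.5, 0.3, 0.2])       # p(x | I)
likelihood = np.array([0.9, 0.2, 0.1])  # p(y | x, I) for the observed y
posterior = likelihood * prior
posterior /= posterior.sum()            # Bayes' theorem
print(info_gain(posterior, prior))      # bits learned from discovering Y = y
```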
If a further piece of data, $Y_{2}=y_{2}$, subsequently comes in, the probability distribution for x can be updated further, to give a new best guess $p(x\mid y_{1},y_{2},I)$. If one reinvestigates the information gain for using $p(x\mid y_{1},I)$ rather than $p(x\mid I)$, it turns out that it may be either greater or less than previously estimated:
$$\sum _{x}p(x\mid y_{1},y_{2},I)\log {\frac {p(x\mid y_{1},y_{2},I)}{p(x\mid I)}}\quad {\text{may be}}\leq {\text{or}}>{\text{than}}\quad \sum _{x}p(x\mid y_{1},I)\log {\frac {p(x\mid y_{1},I)}{p(x\mid I)}},$$
and so the combined information gain does not obey the triangle inequality:
$$D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid I){\big )}\quad {\text{may be}}<,=,{\text{or}}>\quad D_{\text{KL}}{\big (}p(x\mid y_{1},y_{2},I)\parallel p(x\mid y_{1},I){\big )}+D_{\text{KL}}{\big (}p(x\mid y_{1},I)\parallel p(x\mid I){\big )}.$$
All one can say is that on average, averaging over $p(y_{2}\mid y_{1},x,I)$, the two sides will agree.
=== Bayesian experimental design ===
A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior. When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal.
== Discrimination information ==
Relative entropy $D_{\text{KL}}{\bigl (}p(x\mid H_{1})\parallel p(x\mid H_{0}){\bigr )}$ can also be interpreted as the expected discrimination information for $H_{1}$ over $H_{0}$: the mean information per sample for discriminating in favor of a hypothesis $H_{1}$ against a hypothesis $H_{0}$, when hypothesis $H_{1}$ is true. Another name for this quantity, given to it by I. J. Good, is the expected weight of evidence for $H_{1}$ over $H_{0}$ to be expected from each sample.
The expected weight of evidence for $H_{1}$ over $H_{0}$ is not the same as the information gain expected per sample about the probability distribution $p(H)$ of the hypotheses,
$$D_{\text{KL}}(p(x\mid H_{1})\parallel p(x\mid H_{0}))\neq IG=D_{\text{KL}}(p(H\mid x)\parallel p(H\mid I)).$$
Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies.
On the entropy scale of information gain there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question.
=== Principle of minimum discrimination information ===
The idea of relative entropy as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution $f_{0}$ as possible, so that the new data produce as small an information gain $D_{\text{KL}}(f\parallel f_{0})$ as possible.
For example, if one had a prior distribution $p(x,a)$ over x and a, and subsequently learnt the true distribution of a was $u(a)$, then the relative entropy between the new joint distribution for x and a, $q(x\mid a)u(a)$, and the earlier prior distribution would be:
$$D_{\text{KL}}(q(x\mid a)u(a)\parallel p(x,a))=\operatorname {E} _{u(a)}\left\{D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))\right\}+D_{\text{KL}}(u(a)\parallel p(a)),$$
i.e. the sum of the relative entropy of $p(a)$, the prior distribution for a, from the updated distribution $u(a)$, plus the expected value (using the probability distribution $u(a)$) of the relative entropy of the prior conditional distribution $p(x\mid a)$ from the new conditional distribution $q(x\mid a)$. (Note that the latter expected value is often called the conditional relative entropy (or conditional Kullback–Leibler divergence) and denoted by $D_{\text{KL}}(q(x\mid a)\parallel p(x\mid a))$.) This is minimized if $q(x\mid a)=p(x\mid a)$ over the whole support of $u(a)$; and we note that this result incorporates Bayes' theorem, if the new distribution $u(a)$ is in fact a δ function representing certainty that a has one particular value.
MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the relative entropy continues to be just as relevant.
In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE), or Minxent for short. Minimising the relative entropy from m to p with respect to m is equivalent to minimizing the cross-entropy of p and m, since
$$\mathrm {H} (p,m)=\mathrm {H} (p)+D_{\text{KL}}(p\parallel m),$$
which is appropriate if one is trying to choose an adequate approximation to p. However, this is just as often not the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising $D_{\text{KL}}(p\parallel m)$ subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\text{KL}}(p\parallel m)$, rather than $\mathrm {H} (p,m)$.
== Relationship to available work ==
Surprisals add where probabilities multiply. The surprisal for an event of probability p is defined as $s=-k\ln p$. If k is 1, $1/\ln 2$, or $1.38\times 10^{-23}$, then surprisal is in nats, bits, or J/K respectively, so that, for instance, there are N bits of surprisal for landing all "heads" on a toss of N coins.
Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal S (entropy) for a given set of control parameters (like pressure P or volume V). This constrained entropy maximization, both classically and quantum mechanically, minimizes Gibbs availability in entropy units $A\equiv -k\ln Z$, where Z is a constrained multiplicity or partition function.
When temperature T is fixed, free energy ($T\times A$) is also minimized. Thus if $T,V$ and number of molecules N are constant, the Helmholtz free energy $F\equiv U-TS$ (where U is energy and S is entropy) is minimized as a system "equilibrates." If T and P are held constant (say during processes in your body), the Gibbs free energy $G=U+PV-TS$ is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature $T_{o}$ and pressure $P_{o}$ is
$$W=\Delta G=NkT_{o}\Theta (V/V_{o}),$$
where $V_{o}=NkT_{o}/P_{o}$ and $\Theta (x)=x-1-\ln x\geq 0$ (see also Gibbs inequality).
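The inequality $\Theta (x)=x-1-\ln x\geq 0$ underpinning this expression is easy to check numerically (a one-line sketch):

```python
import numpy as np

Theta = lambda x: x - 1.0 - np.log(x)   # nonnegative, zero only at x == 1
x = np.linspace(0.1, 5.0, 50)
assert np.all(Theta(x) >= 0) and np.isclose(Theta(1.0), 0.0)
```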
More generally, the work available relative to some ambient is obtained by multiplying ambient temperature $T_{o}$ by relative entropy or net surprisal $\Delta I\geq 0$, defined as the average value of $k\ln(p/p_{o})$, where $p_{o}$ is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of $V_{o}$ and $T_{o}$ is thus $W=T_{o}\Delta I$, where the relative entropy is
$$\Delta I=Nk\left[\Theta \left({\frac {V}{V_{o}}}\right)+{\frac {3}{2}}\Theta \left({\frac {T}{T_{o}}}\right)\right].$$
The resulting contours of constant relative entropy (computed, for example, for a mole of argon at standard temperature and pressure) put limits on the conversion of hot to cold, as in flame-powered air-conditioning or in an unpowered device to convert boiling water to ice water. Thus relative entropy measures thermodynamic availability in bits.
== Quantum information theory ==
For density matrices P and Q on a Hilbert space, the quantum relative entropy from Q to P is defined to be
$$D_{\text{KL}}(P\parallel Q)=\operatorname {Tr} (P(\log P-\log Q)).$$
In quantum information science the minimum of $D_{\text{KL}}(P\parallel Q)$ over all separable states Q can also be used as a measure of entanglement in the state P.
== Relationship between models and reality ==
Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn.
Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via the Akaike information criterion is particularly well described in papers and a book by Burnham and Anderson. In a nutshell, the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation). Estimates of such divergence for models that share the same additive term can in turn be used to select among models.
When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators.
== Symmetrised divergence ==
Kullback & Leibler (1951) also considered the symmetrized function
$$D_{\text{KL}}(P\parallel Q)+D_{\text{KL}}(Q\parallel P),$$
which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see § Etymology for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948; it is accordingly called the Jeffreys divergence.
This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes. In the Banking and Finance industries, this quantity is referred to as Population Stability Index (PSI), and is used to assess distributional shifts in model features through time.
An alternative is given via the $\lambda $-divergence,
$$D_{\lambda }(P\parallel Q)=\lambda D_{\text{KL}}(P\parallel \lambda P+(1-\lambda )Q)+(1-\lambda )D_{\text{KL}}(Q\parallel \lambda P+(1-\lambda )Q),$$
which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q, if they currently have probabilities $\lambda $ and $1-\lambda $ respectively.
The value $\lambda =0.5$ gives the Jensen–Shannon divergence, defined by
$$D_{\text{JS}}={\tfrac {1}{2}}D_{\text{KL}}(P\parallel M)+{\tfrac {1}{2}}D_{\text{KL}}(Q\parallel M),$$
where M is the average of the two distributions, $M={\tfrac {1}{2}}\left(P+Q\right)$.
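A direct implementation of the definition for discrete distributions (an illustrative sketch):

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log2(p / q))

def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)   # symmetric, bounded by 1 bit

p = np.array([0.9, 0.1])
q = np.array([0.1, 0.9])
print(jensen_shannon(p, q))
```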
We can also interpret $D_{\text{JS}}$ as the capacity of a noisy information channel with two inputs giving the output distributions P and Q. The Jensen–Shannon divergence, like all f-divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold).
Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.
== Relationship to other probability-distance measures ==
There are many other important measures of probability distance. Some of these are particularly connected with relative entropy. For example:
The total-variation distance, $\delta (p,q)$. This is connected to the divergence through Pinsker's inequality:
$$\delta (P,Q)\leq {\sqrt {{\tfrac {1}{2}}D_{\text{KL}}(P\parallel Q)}}.$$
Pinsker's inequality is vacuous for any distributions where $D_{\mathrm {KL} }(P\parallel Q)>2$, since the total variation distance is at most 1. For such distributions, an alternative bound can be used, due to Bretagnolle and Huber (see also Tsybakov):
$$\delta (P,Q)\leq {\sqrt {1-e^{-D_{\mathrm {KL} }(P\parallel Q)}}}.$$
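Comparing the two bounds numerically shows where each is tighter (a sketch):

```python
import numpy as np

def pinsker(dkl):
    return np.sqrt(0.5 * dkl)

def bretagnolle_huber(dkl):
    return np.sqrt(1.0 - np.exp(-dkl))

for dkl in (0.1, 1.0, 4.0):
    print(dkl, pinsker(dkl), bretagnolle_huber(dkl))
# Pinsker is tighter for small divergences; Bretagnolle-Huber for large ones,
# where Pinsker's bound exceeds 1 and carries no information.
```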
The family of Rényi divergences generalizes relative entropy. Depending on the value of a certain parameter, $\alpha $, various inequalities may be deduced.
Other notable measures of distance include the Hellinger distance, histogram intersection, Chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance.
== Data differencing ==
Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
== See also ==
== References ==
== External links ==
Information Theoretical Estimators Toolbox
Ruby gem for calculating Kullback–Leibler divergence
Jon Shlens' tutorial on Kullback–Leibler divergence and likelihood theory
Matlab code for calculating Kullback–Leibler divergence for discrete distributions
Sergio Verdú, Relative Entropy, NIPS 2009. One-hour video lecture.
A modern summary of info-theoretic divergence measures
Algebraic code-excited linear prediction (ACELP) is a speech coding algorithm in which a limited set of pulses is distributed as excitation to a linear prediction filter. It is a linear predictive coding (LPC) algorithm that is based on the code-excited linear prediction (CELP) method and has an algebraic structure. ACELP was developed in 1989 by the researchers at the Université de Sherbrooke in Canada.
The ACELP method is widely employed in current speech coding standards such as AMR, EFR, AMR-WB (G.722.2), VMR-WB, EVRC, EVRC-B, SMV, TETRA, PCS 1900, MPEG-4 CELP and ITU-T G-series standards G.729, G.729.1 (first coding stage) and G.723.1. The ACELP algorithm is also used in the proprietary ACELP.net codec, and Audible, Inc. uses a modified version for its audiobooks. It is also used in conference-calling software and speech compression tools, and has become one of the 3GPP formats.
The ACELP patent expired in 2018 and is now royalty-free.
== Features ==
The main advantage of ACELP is that the algebraic codebook it uses can be made very large (> 50 bits) without running into storage (RAM/ROM) or complexity (CPU time) problems.
== Technology ==
The ACELP algorithm is based on that used in code-excited linear prediction (CELP), but ACELP codebooks have a specific algebraic structure imposed upon them.
A 16-bit algebraic codebook is used in the innovative codebook search, the aim of which is to find the best innovation and gain parameters. The innovation vector contains, at most, four non-zero pulses.
In ACELP, a block of N speech samples is synthesized by filtering an appropriate innovation sequence from a codebook, scaled by a gain factor $g_{c}$, through two time-varying filters.
The long-term (pitch) synthesis filter is given by:
$${\frac {1}{B(z)}}={\frac {1}{1-g_{p}z^{-T}}}$$
The short-term synthesis filter is given by:
$${\frac {1}{A(z)}}={\frac {1}{1+\sum _{i=1}^{P}a_{i}z^{-i}}}$$
== References ==
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy (a measure of average surprisal) of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not.: 181–218 The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy (described here) is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.
In terms of measure theory, the differential entropy of a probability measure is the negative relative entropy from that measure to the Lebesgue measure, where the latter is treated as if it were a probability measure, despite being unnormalized.
== Definition ==
Let $X$ be a random variable with a probability density function $f$ whose support is a set ${\mathcal {X}}$. The differential entropy $h(X)$ or $h(f)$ is defined as: 243
$$h(X)=-\int _{\mathcal {X}}f(x)\log f(x)\,dx.$$
For probability distributions which do not have an explicit density function expression, but have an explicit quantile function expression, $Q(p)$, then $h(Q)$ can be defined in terms of the derivative of $Q(p)$, i.e. the quantile density function $Q'(p)$, as: 54–59
$$h(Q)=\int _{0}^{1}\log Q'(p)\,dp.$$
As with its discrete analog, the units of differential entropy depend on the base of the logarithm, which is usually 2 (i.e., the units are bits). See logarithmic units for logarithms taken in different bases. Related concepts such as joint, conditional differential entropy, and relative entropy are defined in a similar fashion. Unlike the discrete analog, the differential entropy has an offset that depends on the units used to measure $X$.: 183–184 For example, the differential entropy of a quantity measured in millimeters will be log(1000) more than the same quantity measured in meters; a dimensionless quantity will have differential entropy of log(1000) more than the same quantity divided by 1000.
One must take care in trying to apply properties of discrete entropy to differential entropy, since probability density functions can be greater than 1. For example, the uniform distribution ${\mathcal {U}}(0,1/2)$ has negative differential entropy; i.e., it is better ordered than ${\mathcal {U}}(0,1)$, since
$$\int _{0}^{1/2}-2\log(2)\,dx=-\log(2)$$
is less than the differential entropy of ${\mathcal {U}}(0,1)$, which is zero. Thus, differential entropy does not share all properties of discrete entropy.
The continuous mutual information $I(X;Y)$ has the distinction of retaining its fundamental significance as a measure of discrete information, since it is actually the limit of the discrete mutual information of partitions of $X$ and $Y$ as these partitions become finer and finer. Thus it is invariant under non-linear homeomorphisms (continuous and uniquely invertible maps), including linear transformations of $X$ and $Y$, and still represents the amount of discrete information that can be transmitted over a channel that admits a continuous space of values.
For the direct analogue of discrete entropy extended to the continuous space, see limiting density of discrete points.
== Properties of differential entropy ==
For probability densities $f$ and $g$, the Kullback–Leibler divergence $D_{KL}(f\parallel g)$ is greater than or equal to 0, with equality only if $f=g$ almost everywhere. Similarly, for two random variables $X$ and $Y$, $I(X;Y)\geq 0$ and $h(X\mid Y)\leq h(X)$, with equality if and only if $X$ and $Y$ are independent.
The chain rule for differential entropy holds as in the discrete case: 253
$$h(X_{1},\ldots ,X_{n})=\sum _{i=1}^{n}h(X_{i}\mid X_{1},\ldots ,X_{i-1})\leq \sum _{i=1}^{n}h(X_{i}).$$
Differential entropy is translation invariant, i.e. for a constant $c$: 253
$$h(X+c)=h(X).$$
Differential entropy is in general not invariant under arbitrary invertible maps. In particular, for a constant $a$,
$$h(aX)=h(X)+\log |a|.$$
For a vector-valued random variable $\mathbf {X} $ and an invertible (square) matrix $\mathbf {A} $: 253
$$h(\mathbf {A} \mathbf {X} )=h(\mathbf {X} )+\log \left(\left|\det \mathbf {A} \right|\right).$$
In general, for a transformation from a random vector to another random vector with the same dimension, $\mathbf {Y} =m\left(\mathbf {X} \right)$, the corresponding entropies are related via
$$h(\mathbf {Y} )\leq h(\mathbf {X} )+\int f(x)\log \left\vert {\frac {\partial m}{\partial x}}\right\vert \,dx,$$
where $\left\vert {\frac {\partial m}{\partial x}}\right\vert $ is the Jacobian of the transformation $m$. The above inequality becomes an equality if the transform is a bijection. Furthermore, when $m$ is a rigid rotation, translation, or combination thereof, the Jacobian determinant is always 1, and $h(Y)=h(X)$.
If a random vector $X\in \mathbb {R} ^{n}$ has mean zero and covariance matrix $K$, then
$$h(\mathbf {X} )\leq {\frac {1}{2}}\log(\det 2\pi eK)={\frac {1}{2}}\log \left[(2\pi e)^{n}\det K\right],$$
with equality if and only if $X$ is jointly Gaussian (see below).: 254
However, differential entropy does not have other desirable properties:
It is not invariant under change of variables, and is therefore most useful with dimensionless variables.
It can be negative.
A modification of differential entropy that addresses these drawbacks is the relative information entropy, also known as the Kullback–Leibler divergence, which includes an invariant measure factor (see limiting density of discrete points).
== Maximization in the normal distribution ==
=== Theorem ===
With a normal distribution, differential entropy is maximized for a given variance. A Gaussian random variable has the largest entropy amongst all random variables of equal variance, or, alternatively, the maximum entropy distribution under constraints of mean and variance is the Gaussian.: 255
=== Proof ===
Let $g(x)$ be a Gaussian PDF with mean μ and variance $\sigma ^{2}$, and $f(x)$ an arbitrary PDF with the same variance. Since differential entropy is translation invariant, we can assume that $f(x)$ has the same mean $\mu $ as $g(x)$.
Consider the Kullback–Leibler divergence between the two distributions:
$$0\leq D_{KL}(f\parallel g)=\int _{-\infty }^{\infty }f(x)\log \left({\frac {f(x)}{g(x)}}\right)\,dx=-h(f)-\int _{-\infty }^{\infty }f(x)\log(g(x))\,dx.$$
Now note that
$${\begin{aligned}\int _{-\infty }^{\infty }f(x)\log(g(x))\,dx&=\int _{-\infty }^{\infty }f(x)\log \left({\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\right)\,dx\\&=\int _{-\infty }^{\infty }f(x)\log {\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\,dx+\log(e)\int _{-\infty }^{\infty }f(x)\left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,dx\\&=-{\tfrac {1}{2}}\log(2\pi \sigma ^{2})-\log(e){\frac {\sigma ^{2}}{2\sigma ^{2}}}\\&=-{\tfrac {1}{2}}\left(\log(2\pi \sigma ^{2})+\log(e)\right)\\&=-{\tfrac {1}{2}}\log(2\pi e\sigma ^{2})\\&=-h(g),\end{aligned}}$$
because the result does not depend on $f(x)$ other than through the variance. Combining the two results yields
$$h(g)-h(f)\geq 0,$$
with equality when $f(x)=g(x)$, following from the properties of Kullback–Leibler divergence.
=== Alternative proof ===
This result may also be demonstrated using the calculus of variations. A Lagrangian function with two Lagrange multipliers may be defined as:
$$L=\int _{-\infty }^{\infty }g(x)\log(g(x))\,dx-\lambda _{0}\left(1-\int _{-\infty }^{\infty }g(x)\,dx\right)-\lambda \left(\sigma ^{2}-\int _{-\infty }^{\infty }g(x)(x-\mu )^{2}\,dx\right),$$
where g(x) is some function with mean μ. When the entropy of g(x) is at a maximum and the constraint equations, which consist of the normalization condition
$$1=\int _{-\infty }^{\infty }g(x)\,dx$$
and the requirement of fixed variance
$$\sigma ^{2}=\int _{-\infty }^{\infty }g(x)(x-\mu )^{2}\,dx,$$
are both satisfied, then a small variation δg(x) about g(x) will produce a variation δL about L which is equal to zero:
$$0=\delta L=\int _{-\infty }^{\infty }\delta g(x)\left[\log(g(x))+1+\lambda _{0}+\lambda (x-\mu )^{2}\right]\,dx$$
Since this must hold for any small δg(x), the term in brackets must be zero, and solving for g(x) yields:
$$g(x)=e^{-\lambda _{0}-1-\lambda (x-\mu )^{2}}$$
Using the constraint equations to solve for λ0 and λ yields the normal distribution:
$$g(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$$
== Example: Exponential distribution ==
Let $X$ be an exponentially distributed random variable with parameter $\lambda $, that is, with probability density function
$$f(x)=\lambda e^{-\lambda x}\quad {\text{for }}x\geq 0.$$
Its differential entropy is then
$${\begin{aligned}h_{e}(X)&=-\int _{0}^{\infty }\lambda e^{-\lambda x}\log \left(\lambda e^{-\lambda x}\right)dx\\&=-\left(\int _{0}^{\infty }(\log \lambda )\lambda e^{-\lambda x}\,dx+\int _{0}^{\infty }(-\lambda x)\lambda e^{-\lambda x}\,dx\right)\\&=-\log \lambda \int _{0}^{\infty }f(x)\,dx+\lambda \operatorname {E} [X]\\&=-\log \lambda +1\,.\end{aligned}}$$
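This closed form is easy to verify by numerical integration (an illustrative sketch):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)
# differential entropy in nats: -integral of f log f over the support
h_numeric, _ = quad(lambda x: -f(x) * np.log(f(x)), 0, np.inf)
assert np.isclose(h_numeric, 1.0 - np.log(lam))
```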
Here, $h_{e}(X)$ was used rather than $h(X)$ to make it explicit that the logarithm was taken to base e, to simplify the calculation.
== Relation to estimator error ==
The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable $X$ and estimator ${\widehat {X}}$ the following holds:
$$\operatorname {E} \left[(X-{\widehat {X}})^{2}\right]\geq {\frac {1}{2\pi e}}e^{2h(X)},$$
with equality if and only if $X$ is a Gaussian random variable and ${\widehat {X}}$ is the mean of $X$.
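For a Gaussian the bound is tight and equals the variance, which can be checked directly (a sketch):

```python
import numpy as np

sigma = 1.7
h = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # differential entropy of N(mu, sigma^2), nats
bound = np.exp(2 * h) / (2 * np.pi * np.e)     # lower bound on E[(X - Xhat)^2]
assert np.isclose(bound, sigma**2)
```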
== Differential entropies for various distributions ==
In the table below, $\Gamma (x)=\int _{0}^{\infty }e^{-t}t^{x-1}\,dt$ is the gamma function, $\psi (x)={\frac {d}{dx}}\log \Gamma (x)={\frac {\Gamma '(x)}{\Gamma (x)}}$ is the digamma function, $B(p,q)={\frac {\Gamma (p)\Gamma (q)}{\Gamma (p+q)}}$ is the beta function, and γE is Euler's constant.: 219–230
Many of the differential entropies are from.: 120–122
== Variants ==
As described above, differential entropy does not share all properties of discrete entropy. For example, the differential entropy can be negative; also it is not invariant under continuous coordinate transformations. Edwin Thompson Jaynes showed in fact that the expression above is not the correct limit of the expression for a finite set of probabilities.: 181–218
A modification of differential entropy adds an invariant measure factor to correct this (see limiting density of discrete points). If $m(x)$ is further constrained to be a probability density, the resulting notion is called relative entropy in information theory:
$$D(p\parallel m)=\int p(x)\log {\frac {p(x)}{m(x)}}\,dx.$$
The definition of differential entropy above can be obtained by partitioning the range of $X$ into bins of length $h$ with associated sample points $ih$ within the bins, for $X$ Riemann integrable. This gives a quantized version of $X$, defined by $X_{h}=ih$ if $ih\leq X\leq (i+1)h$. Then the entropy of $X_{h}$ is
$$H_{h}=-\sum _{i}hf(ih)\log(f(ih))-\sum _{i}hf(ih)\log(h).$$
The first term on the right approximates the differential entropy, while the second term is approximately $-\log(h)$. Note that this procedure suggests that the entropy in the discrete sense of a continuous random variable should be $\infty $.
== See also ==
Information entropy
Self-information
Entropy estimation
== References ==
== External links ==
"Differential entropy", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Differential entropy". PlanetMath. | Wikipedia/Differential_entropy |
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With exception of the one-time pad they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.
== Types ==
Symmetric-key encryption can use either stream ciphers or block ciphers.
Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers) of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table.
Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
== Implementations ==
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA.
== Use as a cryptographic primitive ==
Symmetric ciphers are commonly used to build cryptographic primitives other than encryption.
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM).
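For illustration, a minimal sketch of authenticated encryption with an AEAD cipher, assuming the third-party Python "cryptography" package (the key, nonce, and messages are arbitrary example values, not prescribed by the article):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the shared secret key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # never reuse a nonce with the same key

# encrypt() returns ciphertext plus a 16-byte authentication tag;
# decrypt() raises InvalidTag if the ciphertext or associated data was modified.
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", b"header")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"attack at dawn"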
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
== Construction of symmetric ciphers ==
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.
== Security of symmetric ciphers ==
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power and decrease the speed at which the process runs due to the amount of operations the system needs to do.
Most modern symmetric-key algorithms appear to be resistant to attacks by quantum computers. Grover's algorithm would reduce the time required for a brute-force attack to roughly the square root of the time required classically, a vulnerability that can be compensated for by doubling the key length. For example, a 128-bit AES cipher would not be secure against such an attack, as the time required to test all possible keys would fall from over 10 quintillion years to about six months. By contrast, a quantum computer would need about as much time to break a 256-bit AES cipher as a conventional computer needs to break a 128-bit AES cipher. For this reason, AES-256 is believed to be "quantum resistant".
== Key management ==
== Key establishment ==
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
== Key generation ==
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization.
== Reciprocal cipher ==
A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher.
Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
Examples of reciprocal ciphers include:
Atbash
Beaufort cipher
Enigma machine
Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher.
The Porta polyalphabetic cipher is self-reciprocal.
Purple cipher
RC4
ROT13
XOR cipher
Vatsyayana cipher
The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round.
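A minimal sketch of the reciprocal XOR combiner mentioned above: applying the same operation twice with the same keystream returns the plaintext. The hard-coded keystream is a toy value for illustration; a real stream cipher derives it from a key.

def xor_cipher(data: bytes, keystream: bytes) -> bytes:
    """XOR combiner: an involution, so the same call both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = b"\x5a\x13\xf0\x9c\x42"           # toy keystream (illustrative only)
msg = b"HELLO"
ct = xor_cipher(msg, keystream)                # "encrypt"
assert xor_cipher(ct, keystream) == msg        # the same operation "decrypts"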
== Notes ==
== References == | Wikipedia/Symmetric-key_algorithm |
In information theory, the binary entropy function, denoted {\displaystyle \operatorname {H} (p)} or {\displaystyle \operatorname {H} _{\text{b}}(p)}, is defined as the entropy of a Bernoulli process (i.i.d. binary variable) with probability {\displaystyle p} of one of two values, and is given by the formula:
{\displaystyle \operatorname {H} (X)=-p\log p-(1-p)\log(1-p).}
The base of the logarithm corresponds to the choice of units of information; base e corresponds to nats and is mathematically convenient, while base 2 (binary logarithm) corresponds to shannons and is conventional (as shown in the graph); explicitly:
{\displaystyle \operatorname {H} (X)=-p\log _{2}p-(1-p)\log _{2}(1-p).}
Note that the values at 0 and 1 are given by the limit {\displaystyle \textstyle 0\log 0:=\lim _{x\to 0^{+}}x\log x=0} (by L'Hôpital's rule); and that "binary" refers to two possible values for the variable, not the units of information.
When {\displaystyle p=1/2}, the binary entropy function attains its maximum value, 1 shannon (1 binary unit of information); this is the case of an unbiased coin flip. When {\displaystyle p=0} or {\displaystyle p=1}, the binary entropy is 0 (in any units), corresponding to no information, since there is no uncertainty in the variable.
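A small Python sketch of the definition (the function name is ad hoc), with the 0 log 0 = 0 convention handled explicitly:

import math

def binary_entropy(p: float) -> float:
    """H_b(p) in shannons (base 2), using the convention 0*log(0) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0   -> maximum: a fair coin flip
print(binary_entropy(0.25))  # ~0.811 -> less than 1 full bit of uncertainty
print(binary_entropy(0.0))   # 0.0   -> no uncertainty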
== Notation ==
Binary entropy {\displaystyle \operatorname {H} (p)} is a special case of {\displaystyle \mathrm {H} (X)}, the entropy function. {\displaystyle \operatorname {H} (p)} is distinguished from the entropy function {\displaystyle \mathrm {H} (X)} in that the former takes a single real number as a parameter whereas the latter takes a distribution or random variable as a parameter. Thus the binary entropy (of p) is the entropy of the distribution {\displaystyle \operatorname {Ber} (p)}, so {\displaystyle \operatorname {H} (p)=\mathrm {H} (\operatorname {Ber} (p))}.
Writing the probability of each of the two values being p and q, so {\displaystyle p+q=1} and {\displaystyle q=1-p}, this corresponds to
{\displaystyle \operatorname {H} (X)=-p\log p-(1-p)\log(1-p)=-p\log p-q\log q=-\sum _{x\in X}\operatorname {Pr} (X=x)\cdot \log \operatorname {Pr} (X=x)=\mathrm {H} (\operatorname {Ber} (p)).}
Sometimes the binary entropy function is also written as {\displaystyle \operatorname {H} _{2}(p)}. However, it is different from and should not be confused with the Rényi entropy, which is denoted as {\displaystyle \mathrm {H} _{2}(X)}.
== Explanation ==
In terms of information theory, entropy is considered to be a measure of the uncertainty in a message. To put it intuitively, suppose {\displaystyle p=0}. At this probability, the event is certain never to occur, and so there is no uncertainty at all, leading to an entropy of 0. If {\displaystyle p=1}, the result is again certain, so the entropy is 0 here as well. When {\displaystyle p=1/2}, the uncertainty is at a maximum; if one were to place a fair bet on the outcome in this case, there is no advantage to be gained with prior knowledge of the probabilities. In this case, the entropy is maximum at a value of 1 bit. Intermediate values fall between these cases; for instance, if {\displaystyle p=1/4}, there is still a measure of uncertainty on the outcome, but one can still predict the outcome correctly more often than not, so the uncertainty measure, or entropy, is less than 1 full bit.
== Properties ==
=== Derivative ===
The derivative of the binary entropy function may be expressed as the negative of the logit function:
{\displaystyle {d \over dp}\operatorname {H} _{\text{b}}(p)=-\operatorname {logit} _{2}(p)=-\log _{2}\left({\frac {p}{1-p}}\right)},
and the second derivative is
{\displaystyle {d^{2} \over dp^{2}}\operatorname {H} _{\text{b}}(p)=-{\frac {1}{p(1-p)\ln 2}}}.
=== Convex conjugate ===
The convex conjugate (specifically, the Legendre transform) of the binary entropy (with base e) is the negative softplus function. This is because (following the definition of the Legendre transform: the derivatives are inverse functions) the derivative of negative binary entropy is the logit, whose inverse function is the logistic function, which is the derivative of softplus.
Softplus can be interpreted as logistic loss, so by duality, minimizing logistic loss corresponds to maximizing entropy. This justifies the principle of maximum entropy as loss minimization.
=== Taylor series ===
The Taylor series of the binary entropy function at 1/2 is
{\displaystyle \operatorname {H} _{\text{b}}(p)=1-{\frac {1}{2\ln 2}}\sum _{n=1}^{\infty }{\frac {(1-2p)^{2n}}{n(2n-1)}}}
which converges to the binary entropy function for all values {\displaystyle 0\leq p\leq 1}.
=== Bounds ===
The following bounds hold for {\displaystyle 0<p<1}:
{\displaystyle \ln(2)\cdot \log _{2}(p)\cdot \log _{2}(1-p)\leq H_{\text{b}}(p)\leq \log _{2}(p)\cdot \log _{2}(1-p)}
and
{\displaystyle 4p(1-p)\leq H_{\text{b}}(p)\leq (4p(1-p))^{(1/\ln 4)}}
where {\displaystyle \ln } denotes the natural logarithm.
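These bounds can be checked numerically; the following sketch (grid resolution and tolerance are arbitrary choices) verifies both pairs of inequalities on a fine grid of p values:

import numpy as np

p = np.linspace(1e-6, 1 - 1e-6, 100_001)
H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)

lower1 = np.log(2) * np.log2(p) * np.log2(1 - p)
upper1 = np.log2(p) * np.log2(1 - p)
lower2 = 4 * p * (1 - p)
upper2 = (4 * p * (1 - p)) ** (1 / np.log(4))

tol = 1e-12  # allow for floating-point rounding
print(np.all((lower1 <= H + tol) & (H <= upper1 + tol)))  # True
print(np.all((lower2 <= H + tol) & (H <= upper2 + tol)))  # True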
== See also ==
Metric entropy
Information theory
Information entropy
Quantities of information
== References ==
== Further reading ==
MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1 | Wikipedia/Binary_entropy_function |
Transform coding is a type of data compression for "natural" data like audio signals or photographic images. The transformation is typically lossless (perfectly reversible) on its own but is used to enable better (more targeted) quantization, which then results in a lower quality copy of the original input (lossy compression).
In transform coding, knowledge of the application is used to choose information to discard, thereby lowering its bandwidth. The remaining information can then be compressed via a variety of methods. When the output is decoded, the result may not be identical to the original input, but is expected to be close enough for the purpose of the application.
== Colour television ==
=== NTSC ===
One of the most successful transform encoding systems is typically not referred to as such—the example being NTSC color television. After an extensive series of studies in the 1950s, Alda Bedford showed that the human eye has high resolution only for black and white, somewhat less for "mid-range" colors like yellows and greens, and much less for colors on the end of the spectrum, reds and blues.
Using this knowledge allowed RCA to develop a system in which they discarded most of the blue signal after it comes from the camera, keeping most of the green and only some of the red; this is chroma subsampling in the YIQ color space.
The result is a signal with considerably less content, one that would fit within existing 6 MHz black-and-white signals as a phase modulated differential signal. The average TV displays the equivalent of 350 pixels on a line, but the TV signal contains enough information for only about 50 pixels of blue and perhaps 150 of red. This is not apparent to the viewer in most cases, as the eye makes little use of the "missing" information anyway.
=== PAL and SECAM ===
The PAL and SECAM systems use nearly identical or very similar methods to transmit colour; in both systems the colour signal is subsampled.
== Digital ==
The term is much more commonly used in digital media and digital signal processing. The most widely used transform coding technique in this regard is the discrete cosine transform (DCT), proposed by Nasir Ahmed in 1972, and presented by Ahmed with T. Natarajan and K. R. Rao in 1974. This DCT, in the context of the family of discrete cosine transforms, is the DCT-II. It is the basis for the common JPEG image compression standard, which examines small blocks of the image and transforms them to the frequency domain for more efficient quantization (lossy) and data compression. In video coding, the H.26x and MPEG standards modify this DCT image compression technique across frames in a motion image using motion compensation, further reducing the size compared to a series of JPEGs.
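As a sketch of the block-transform idea (assuming SciPy; the threshold of 20 is an arbitrary stand-in for a real quantization matrix, and the random block stands in for image data), an 8×8 block can be transformed with the DCT-II, have its small coefficients discarded, and still be reconstructed with little error:

import numpy as np
from scipy.fft import dctn, idctn  # type-II DCT by default

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in for an 8x8 image block

coeffs = dctn(block, norm="ortho")           # transform to the frequency domain
coeffs[np.abs(coeffs) < 20] = 0              # crude "quantization": drop small coefficients
reconstructed = idctn(coeffs, norm="ortho")  # lossy reconstruction

print(np.max(np.abs(block - reconstructed)))  # small error despite the discarded data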
In audio coding, MPEG audio compression analyzes the transformed data according to a psychoacoustic model that describes the human ear's sensitivity to parts of the signal, similar to the TV model. MP3 uses a hybrid coding algorithm, combining the modified discrete cosine transform (MDCT) and fast Fourier transform (FFT). It was succeeded by Advanced Audio Coding (AAC), which uses a pure MDCT algorithm to significantly improve compression efficiency.
The basic process of digitizing an analog signal is a kind of transform coding that uses sampling in one or more domains as its transform.
== See also ==
Karhunen–Loève theorem
Transformation (function)
Wavelet transform
== References == | Wikipedia/Transform_coding |
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
In estimation theory, two approaches are generally considered:
The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest
The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
== Examples ==
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
== Basics ==
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,
{\displaystyle \mathbf {x} ={\begin{bmatrix}x[0]\\x[1]\\\vdots \\x[N-1]\end{bmatrix}}.}
Secondly, there are M parameters
{\displaystyle {\boldsymbol {\theta }}={\begin{bmatrix}\theta _{1}\\\theta _{2}\\\vdots \\\theta _{M}\end{bmatrix}},}
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
{\displaystyle p(\mathbf {x} |{\boldsymbol {\theta }}).}
It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability
{\displaystyle \pi ({\boldsymbol {\theta }}).}
After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted {\displaystyle {\hat {\boldsymbol {\theta }}}}, where the "hat" indicates the estimate. One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters
{\displaystyle \mathbf {e} ={\hat {\boldsymbol {\theta }}}-{\boldsymbol {\theta }}}
as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator.
== Estimators ==
Commonly used estimators (estimation methods) and topics related to them include:
Maximum likelihood estimators
Bayes estimators
Method of moments estimators
Cramér–Rao bound
Least squares
Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
Maximum a posteriori (MAP)
Minimum variance unbiased estimator (MVUE)
Nonlinear system identification
Best linear unbiased estimator (BLUE)
Unbiased estimators — see estimator bias.
Particle filter
Markov chain Monte Carlo (MCMC)
Kalman filter, and its various derivatives
Wiener filter
== Examples ==
=== Unknown constant in additive white Gaussian noise ===
Consider a received discrete signal, {\displaystyle x[n]}, of {\displaystyle N} independent samples that consists of an unknown constant {\displaystyle A} with additive white Gaussian noise (AWGN) {\displaystyle w[n]} with zero mean and known variance {\displaystyle \sigma ^{2}} (i.e., {\displaystyle {\mathcal {N}}(0,\sigma ^{2})}). Since the variance is known, the only unknown parameter is {\displaystyle A}.
The model for the signal is then
{\displaystyle x[n]=A+w[n]\quad n=0,1,\dots ,N-1}
Two possible (of many) estimators for the parameter {\displaystyle A} are:
{\displaystyle {\hat {A}}_{1}=x[0]}
{\displaystyle {\hat {A}}_{2}={\frac {1}{N}}\sum _{n=0}^{N-1}x[n]}
which is the sample mean.
Both of these estimators have a mean of {\displaystyle A}, which can be shown through taking the expected value of each estimator
{\displaystyle \mathrm {E} \left[{\hat {A}}_{1}\right]=\mathrm {E} \left[x[0]\right]=A}
and
{\displaystyle \mathrm {E} \left[{\hat {A}}_{2}\right]=\mathrm {E} \left[{\frac {1}{N}}\sum _{n=0}^{N-1}x[n]\right]={\frac {1}{N}}\left[\sum _{n=0}^{N-1}\mathrm {E} \left[x[n]\right]\right]={\frac {1}{N}}\left[NA\right]=A}
At this point, these two estimators would appear to perform the same.
However, the difference between them becomes apparent when comparing the variances.
{\displaystyle \mathrm {var} \left({\hat {A}}_{1}\right)=\mathrm {var} \left(x[0]\right)=\sigma ^{2}}
and
{\displaystyle \mathrm {var} \left({\hat {A}}_{2}\right)=\mathrm {var} \left({\frac {1}{N}}\sum _{n=0}^{N-1}x[n]\right){\overset {\text{independence}}{=}}{\frac {1}{N^{2}}}\left[\sum _{n=0}^{N-1}\mathrm {var} (x[n])\right]={\frac {1}{N^{2}}}\left[N\sigma ^{2}\right]={\frac {\sigma ^{2}}{N}}}
It would seem that the sample mean is a better estimator since its variance is lower for every N > 1.
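A Monte Carlo sketch (illustrative; the parameter values are arbitrary) confirms both the unbiasedness and the variance calculations above:

import numpy as np

rng = np.random.default_rng(1)
A, sigma, N, trials = 5.0, 2.0, 25, 100_000

x = A + sigma * rng.standard_normal((trials, N))   # x[n] = A + w[n]
A1 = x[:, 0]                                        # first-sample estimator
A2 = x.mean(axis=1)                                 # sample-mean estimator

print(A1.mean(), A2.mean())        # both ~A: unbiased
print(A1.var(), sigma**2)          # ~sigma^2
print(A2.var(), sigma**2 / N)      # ~sigma^2 / N: much smaller for N > 1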
==== Maximum likelihood ====
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample {\displaystyle w[n]} is
{\displaystyle p(w[n])={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}w[n]^{2}\right)}
and the probability of {\displaystyle x[n]} becomes ({\displaystyle x[n]} can be thought of as {\displaystyle {\mathcal {N}}(A,\sigma ^{2})})
{\displaystyle p(x[n];A)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}(x[n]-A)^{2}\right)}
By independence, the probability of {\displaystyle \mathbf {x} } becomes
{\displaystyle p(\mathbf {x} ;A)=\prod _{n=0}^{N-1}p(x[n];A)={\frac {1}{\left(\sigma {\sqrt {2\pi }}\right)^{N}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}\sum _{n=0}^{N-1}(x[n]-A)^{2}\right)}
Taking the natural logarithm of the pdf
{\displaystyle \ln p(\mathbf {x} ;A)=-N\ln \left(\sigma {\sqrt {2\pi }}\right)-{\frac {1}{2\sigma ^{2}}}\sum _{n=0}^{N-1}(x[n]-A)^{2}}
and the maximum likelihood estimator is
{\displaystyle {\hat {A}}=\arg \max \ln p(\mathbf {x} ;A)}
Taking the first derivative of the log-likelihood function
{\displaystyle {\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}(x[n]-A)\right]={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]}
and setting it to zero
{\displaystyle 0={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]=\sum _{n=0}^{N-1}x[n]-NA}
This results in the maximum likelihood estimator
{\displaystyle {\hat {A}}={\frac {1}{N}}\sum _{n=0}^{N-1}x[n]}
which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for {\displaystyle N} samples of a fixed, unknown parameter corrupted by AWGN.
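This can be checked by numerically maximizing the log-likelihood; in the sketch below (assuming SciPy; all parameter values are arbitrary), a generic scalar minimizer stands in for the closed-form solution and recovers the sample mean:

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
A_true, sigma, N = 3.0, 1.5, 200
x = A_true + sigma * rng.standard_normal(N)   # x[n] = A + w[n]

def neg_log_likelihood(A):
    # negative of ln p(x; A) from above; the constant term does not affect the argmax
    return N * np.log(sigma * np.sqrt(2 * np.pi)) + np.sum((x - A) ** 2) / (2 * sigma**2)

res = minimize_scalar(neg_log_likelihood)
print(res.x, x.mean())   # the numerical MLE agrees with the sample mean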
==== Cramér–Rao lower bound ====
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number
{\displaystyle {\mathcal {I}}(A)=\mathrm {E} \left(\left[{\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)\right]^{2}\right)=-\mathrm {E} \left[{\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)\right]}
and copying from above
{\displaystyle {\frac {\partial }{\partial A}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}\left[\sum _{n=0}^{N-1}x[n]-NA\right]}
Taking the second derivative
{\displaystyle {\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)={\frac {1}{\sigma ^{2}}}(-N)={\frac {-N}{\sigma ^{2}}}}
and finding the negative expected value is trivial since it is now a deterministic constant
{\displaystyle -\mathrm {E} \left[{\frac {\partial ^{2}}{\partial A^{2}}}\ln p(\mathbf {x} ;A)\right]={\frac {N}{\sigma ^{2}}}}
Finally, putting the Fisher information into
{\displaystyle \mathrm {var} \left({\hat {A}}\right)\geq {\frac {1}{\mathcal {I}}}}
results in
{\displaystyle \mathrm {var} \left({\hat {A}}\right)\geq {\frac {\sigma ^{2}}{N}}}
Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of {\displaystyle N} and {\displaystyle A}.
In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
=== Maximum of a uniform distribution ===
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution {\displaystyle 1,2,\dots ,N} with unknown maximum, the UMVU estimator for the maximum is given by
{\displaystyle {\frac {k+1}{k}}m-1=m+{\frac {m}{k}}-1}
where m is the sample maximum and k is the sample size, sampling without replacement. This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as the sample maximum plus the average gap between observations in the sample, the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.
This has a variance of
{\displaystyle {\frac {1}{k}}{\frac {(N-k)(N+1)}{(k+2)}}\approx {\frac {N^{2}}{k^{2}}}{\text{ for small samples }}k\ll N}
so a standard deviation of approximately {\displaystyle N/k}, the (population) average size of a gap between samples; compare {\displaystyle {\frac {m}{k}}} above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
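A simulation sketch (N, k, and the trial count are arbitrary) shows the bias of the sample maximum and the effect of the m + m/k − 1 correction:

import numpy as np

rng = np.random.default_rng(3)
N, k, trials = 1000, 10, 50_000   # true maximum, sample size, number of experiments

# sample k serial numbers from {1, ..., N} without replacement, many times
samples = np.array([rng.choice(N, size=k, replace=False) + 1 for _ in range(trials)])
m = samples.max(axis=1)            # sample maximum (biased low)
umvu = m + m / k - 1               # sample maximum plus the average gap

print(m.mean())      # noticeably below N
print(umvu.mean())   # ~N: the bias is removed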
== Applications ==
Numerous fields require the use of estimation theory.
Some of these fields include:
Interpretation of scientific experiments
Signal processing
Clinical trials
Opinion polls
Quality control
Telecommunications
Project management
Software engineering
Control theory (in particular Adaptive control)
Network intrusion detection system
Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Media related to Estimation theory at Wikimedia Commons | Wikipedia/Estimation_theory |
Information processing theory is the approach to the study of cognitive development that evolved out of the American experimental tradition in psychology. Developmental psychologists who adopt the information processing perspective account for mental development in terms of maturational changes in basic components of a child's mind. The theory is based on the idea that humans process the information they receive, rather than merely responding to stimuli. This perspective uses an analogy to consider how the mind works like a computer. In this way, the mind functions like a biological computer responsible for analyzing information from the environment. According to the standard information-processing model for mental development, the mind's machinery includes attention mechanisms for bringing information in, working memory for actively manipulating information, and long-term memory for passively holding information so that it can be used in the future. This theory addresses how, as children grow, their brains likewise mature, leading to advances in their ability to process and respond to the information they receive through their senses. The theory emphasizes a continuous pattern of development, in contrast with cognitive-developmental theorists such as Jean Piaget, whose theory of cognitive development holds that development occurs in discrete stages.
== Humans as information processing systems ==
The information processing theory, simplified, compares the human brain to a computer or basic processor. It is theorized that the brain works in a set sequence, as does a computer. The sequence goes as follows: "receives input, processes the information, and delivers an output".
This theory suggests that we as humans process information in a similar way. Just as a computer receives input, the mind receives information through the senses. If the information is focused on, it will move to the short-term memory. While in the short-term memory or working memory, the mind is able to use the information to address its surroundings. The information is then encoded to the long-term memory, where the information is then stored. The information can be retrieved when necessary using the central executive. The central executive can be understood as the conscious mind, and it can pull information from the long-term memory back to the working memory for its use. Just as a computer processes information, this is how our minds are thought to process information. The output that a computer would deliver can be likened to the mind's output of information through behavior or action.
== Components ==
Though information processing can be compared to a computer, there is much more that needs to be explained. Information processing has several components. The major components are information stores, cognitive processes, and executive cognition.
Information stores are the different places that information can be stored in the mind. Information is stored briefly in the sensory memory. This information is stored just long enough for us to move the information to the short-term memory. George Armitage Miller discovered that the short-term memory can only hold 7 (plus or minus two) things at once. The information here is also stored for only 15–20 seconds. The information stored in the short-term memory can be committed to the long-term memory store. There is no limit to the information stored in the long-term memory. The information stored here can stay for many years. Long-term memory can be divided between semantic, episodic, and procedural memories. Semantic memory is made up of facts or information learned or obtained throughout life. Episodic memory concerns personal experiences or real events that have happened in a person's life. Lastly, procedural memory is made up of procedures or processes learned, such as riding a bike. Each of these is a subcategory of long-term memory.
Cognitive processes are the way humans transfer information among the different memory stores. Some prominent processes used in transferring information are coding, retrieval, and perception. Coding is the process of transferring information from the short to long-term memory by relating the information of the long-term memory to the item in the short-term memory. This can be done through memorization techniques. Retrieval is used to bring information from the long-term memory back to the short-term memory. This can be achieved through many different recall techniques. Perception is the use of the information processed to interpret the environment. Another useful technique advised by George Miller is recoding. Recoding is the process of regrouping or organizing the information the mind is working with. A successful method of recoding is chunking. Chunking is used to group together pieces of information; each unit of information is considered a chunk, which could be one or several words. This is commonly used when trying to memorize a phone number.
Executive cognition is the idea that someone is aware of the way they process information. They know their strengths and weaknesses. This concept is similar to metacognition. The conscious mind has control over the processes of the information processing theory.
== Emergence ==
Information processing as a model for human thinking and learning is part of the resurgence of cognitive perspectives of learning. The cognitive perspective asserts that complex mental states affect human learning and behavior, and that such mental states can be scientifically investigated. Computers, which process information, include internal states that affect processing. Computers, therefore, provided a model for possible human mental states that provided researchers with clues and direction for understanding human thinking and learning as information processing. Overall, information-processing models helped reestablish mental processes—processes that cannot be directly observed—as a legitimate area of scientific research.
== Major theorists ==
George Armitage Miller was one of the founders of the field of psychology known as cognition. He played a large role when it came to the information processing theory. He researched the capacity of the working memory, discovering that people can only hold up to 7 plus or minus 2 items. He also created the term chunking when explaining how to make the most of our short-term memory.
Two other theorists associated with the cognitive information processing theory are Richard C. Atkinson and Richard Shiffrin. In 1968 these two proposed a multi-stage theory of memory. They explained that from the time information is received by the processing system, it goes through different stages to be fully stored. They broke this down to sensory memory, short-term memory, and long-term memory (Atkinson).
Later in 1974 Alan Baddeley and Graham Hitch would contribute more to the information processing theory through their own discoveries. They deepened the understanding of memory through the central executive, phonological loop, and visuospatial sketch pad. Baddeley later updated his model with the episodic buffer.
== Atkinson and Shiffrin model ==
The Atkinson–Shiffrin memory model was proposed in 1968 by Richard C. Atkinson and Richard Shiffrin. This model illustrates their theory of the human memory. These two theorists used this model to show that the human memory can be broken into three sub-sections: sensory memory, short-term memory and long-term memory.
=== Sensory memory ===
The sensory memory is responsible for holding onto information that the mind receives through the senses such as haptic, auditory and visual information. For example, if someone were to hear a bird chirp, they know that it is a bird because that information is held in the brief sensory memory.
=== Short-term memory ===
Short-term memory lasts for about 30 seconds. Short-term memory retains information that is needed for only a short period of time such as remembering a phone number that needs to be dialed.
=== Long-term memory ===
The long-term memory has an unlimited amount of space. Memories can be stored there from the beginning of an individual's lifetime. The long-term memory is tapped into when there is a need to recall an event that happened in an individual's previous experiences.
== Baddeley and Hitch model of working memory ==
Baddeley and Hitch introduced the model of working memory in 1974. Through their research, they contributed more to help understand how the mind may process information. They added three elements that explain further cognitive processes. These elements are the central executive, phonological loop, and the visuo-spatial working memory. Later Alan Baddeley added a fourth element to the working memory model called the episodic buffer. Together these ideas support the information processing theory and possibly explain how the mind processes information.
=== Central executive ===
=== Phonological loop ===
Working in connection with the central executive is the phonological loop. The phonological loop is used to hold auditory information. There are two sub components of the phonological loop; the phonological store and the articulatory rehearsal process. The phonological store holds auditory information for a short period. The articulatory rehearsal process keeps the information in the store for a longer period of time through rehearsal.
=== Visuospatial sketch pad ===
The visuospatial sketch pad is the other portion of the central executive. This is used to hold visual and spatial information. The visuospatial sketch pad is used to help the conscious imagine objects as well as maneuver through the physical environment.
=== Episodic buffer ===
Baddeley later added a fourth aspect to the model called the episodic buffer. It is proposed that the episodic buffer is able to hold information thereby increasing the amount stored. Due to the ability to hold information the episodic buffer is said to also transfer information between perception, short-term memory and long-term memory. The episodic buffer is a relatively new idea and is still being researched.
== Other cognitive processes ==
Cognitive processes include perception, recognition, imagining, remembering, thinking, judging, reasoning, problem solving, conceptualizing, and planning. These cognitive processes can emerge from human language, thought, imagery, and symbols.
In addition to these specific cognitive processes, many cognitive psychologists study language-acquisition, altered states of mind and consciousness, visual perception, auditory perception, short-term memory, long-term memory, storage, retrieval, perceptions of thought and much more.
Cognitive processes emerge through senses, thoughts, and experiences. The first step is aroused by paying attention, which allows processing of the information given. Cognitive processing cannot occur without learning; they work hand in hand to fully grasp the information.
== Nature versus nurture ==
Nature versus nurture refers to the theory about how people are influenced. The nature view centers on the idea that we are influenced by our genetics. This involves all of our physical characteristics and our personality. On the other hand, nurture revolves around the idea that we are influenced by the environment and our experiences. Some believe that we are the way we are due to how we were raised, in what type of environment we were raised in and our early childhood experiences. This theory views humans as actively inputting, retrieving, processing, and storing information. Context, social content, and social influences on processing are simply viewed as information. Nature provides the hardware of cognitive processing and Information Processing theory explains cognitive functioning based on that hardware. Individuals innately vary in some cognitive abilities, such as memory span, but human cognitive systems function similarly based on a set of memory stores that store information and control processes that determine how information is processed. The “Nurture” component provides information input (stimuli) that is processed, resulting in behavior and learning. Changes in the contents of the long-term memory store (knowledge) are learning. Prior knowledge affects future processing and thus affects future behavior and learning.
== Quantitative versus qualitative ==
Information processing theory combines elements of both quantitative and qualitative development. Qualitative development occurs through the emergence of new strategies for information storage and retrieval, developing representational abilities (such as the utilization of language to represent concepts), or obtaining problem-solving rules (Miller, 2011). Increases in the knowledge base or the ability to remember more items in working memory are examples of quantitative changes, as well as increases in the strength of connected cognitive associations (Miller, 2011). The qualitative and quantitative components often interact together to develop new and more efficient strategies within the processing system.
== Current areas of research ==
Information processing theory is currently being used in the study of computer or artificial intelligence. This theory has also been applied to systems beyond the individual, including families and business organizations. For example, Ariel (1987) applied information processing theory to family systems, with sensing, attending, and encoding of stimuli occurring either within individuals or within the family system itself. Unlike traditional systems theory, where the family system tends to maintain stasis and resists incoming stimuli which would violate the system's rules, the information processing family develops individual and mutual schemes which influence what and how information is attended to and processed. Dysfunctions can occur both at the individual level as well as within the family system itself, creating more targets for therapeutic change. Rogers, P. R. et al. (1999) utilized information processing theory to describe business organizational behavior, as well as to present a model describing how effective and ineffective business strategies are developed. In their study, components of organizations that "sense" market information are identified as well as how organizations attend to this information; which gatekeepers determine what information is relevant/important for the organization, how this is organized into the existing culture (organizational schemas), and whether or not the organization has effective or ineffective processes for their long-term strategy.
Cognitive psychologists Kahneman and Grabe noted that learners have some control over this process. Selective attention is the ability of humans to select and process certain information while simultaneously ignoring others. This is influenced by many things including:
What the information being processed means to the individual
The complexity of the stimuli (based partially on background knowledge)
Ability to control attention (varies based on age, hyperactivity, etc.)
Some research has shown that individuals with a high working memory capacity are better able to filter out irrelevant information. In particular, in one study on dichotic listening, participants were played two audio tracks, one in each ear, and were asked to pay attention only to one. It was shown that there was a significant positive relationship between working memory capacity and the participants' ability to filter out the information from the other audio track.
== Implications for teaching ==
Some examples of classroom implications of the information processing theory include:
== References ==
== Further reading ==
Rogers, Patrick R.; Miller, Alex; Judge, William Q. (1999). "Using information-processing theory to understand planning/Performance relationships in the context of strategy". Strategic Management Journal. 20 (6): 567–577. doi:10.1002/(SICI)1097-0266(199906)20:6<567::AID-SMJ36>3.0.CO;2-K.
Atkinson, R.C.; Shiffrin, R.M. (1968). Human Memory: A Proposed System and its Control Processes. Psychology of Learning and Motivation. Vol. 2. pp. 89–195. doi:10.1016/S0079-7421(08)60422-3. ISBN 9780125433020. S2CID 22958289.
Miller, George A. (2003). "The cognitive revolution: A historical perspective". Trends in Cognitive Sciences. 7 (3): 141–144. doi:10.1016/S1364-6613(03)00029-9. PMID 12639696. S2CID 206129621.
Miller, George A. (1956). "The magical number seven, plus or minus two: Some limits on our capacity for processing information". Psychological Review. 63 (2): 81–97. doi:10.1037/h0043158. hdl:11858/00-001M-0000-002C-4646-B. PMID 13310704. S2CID 15654531.
Shaki, Samuel; Gevers, Wim (2011). "Cultural Characteristics Dissociate Magnitude and Ordinal Information Processing" (PDF). Journal of Cross-Cultural Psychology. 42 (4): 639–650. doi:10.1177/0022022111406100. S2CID 145054174.
Hamamura, Takeshi; Meijer, Zita; Heine, Steven J.; Kamaya, Kengo; Hori, Izumi (2009). "Approach—Avoidance Motivation and Information Processing: A Cross-Cultural Analysis". Personality and Social Psychology Bulletin. 35 (4): 454–462. doi:10.1177/0146167208329512. PMID 19164704. S2CID 6642553.
Proctor, Robert W.; Vu, Kim-Phuong L. (2006). "The Cognitive Revolution at Age 50: Has the Promise of the Human Information-Processing Approach Been Fulfilled?". International Journal of Human–Computer Interaction. 21 (3): 253–284. doi:10.1207/s15327590ijhc2103_1. S2CID 41905965. | Wikipedia/Information_processing_theory |
LZ4 is a lossless data compression algorithm that is focused on compression and decompression speed. It belongs to the LZ77 family of byte-oriented compression schemes.
== Features ==
The LZ4 algorithm aims to provide a good trade-off between speed and compression ratio. Typically, it has a smaller (i.e., worse) compression ratio than the similar LZO algorithm, which in turn is worse than algorithms like DEFLATE. However, LZ4 compression speed is similar to LZO and several times faster than DEFLATE, while decompression speed is significantly faster than LZO.
== Design ==
LZ4 only uses a dictionary-matching stage (LZ77), and unlike other common compression algorithms does not combine it with an entropy coding stage (e.g. Huffman coding in DEFLATE).
The LZ4 algorithm represents the data as a series of sequences. Each sequence begins with a one-byte token that is broken into two 4-bit fields. The first field represents the number of literal bytes that are to be copied to the output. The second field represents the number of bytes to copy from the already decoded output buffer (with 0 representing the minimum match length of 4 bytes). A value of 15 in either of the bitfields indicates that the length is larger and there is an extra byte of data that is to be added to the length. A value of 255 in these extra bytes indicates that yet another byte is to be added. Hence arbitrary lengths are represented by a series of extra bytes containing the value 255. The string of literals comes after the token and any extra bytes needed to indicate string length. This is followed by an offset that indicates how far back in the output buffer to begin copying. The extra bytes (if any) of the match-length come at the end of the sequence.
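The sequence format described above can be made concrete with a toy decoder. The following sketch (not the reference implementation) handles only the raw block layout — token, length-extension bytes, literals, little-endian offset, extra match-length bytes — and omits the frame format, checksums, and the end-of-block rules of the real specification:

def lz4_decompress_block(src: bytes) -> bytes:
    """Decode a raw LZ4 block: a series of (token, literals, offset, match) sequences."""
    out = bytearray()
    i = 0
    while i < len(src):
        token = src[i]; i += 1
        lit_len = token >> 4                   # high nibble: literal length
        if lit_len == 15:                      # 15 means extension bytes follow
            while True:
                b = src[i]; i += 1
                lit_len += b
                if b != 255:                   # a 255 byte means "keep adding"
                    break
        out += src[i:i + lit_len]; i += lit_len
        if i >= len(src):                      # the final sequence carries no match
            break
        offset = src[i] | (src[i + 1] << 8); i += 2   # 2-byte little-endian back-reference
        match_len = (token & 0x0F) + 4         # low nibble, plus minimum match length 4
        if (token & 0x0F) == 15:
            while True:
                b = src[i]; i += 1
                match_len += b
                if b != 255:
                    break
        for _ in range(match_len):             # byte-by-byte copy: matches may overlap
            out.append(out[-offset])
    return bytes(out)

# Hand-built example: token 0x32 = 3 literal bytes and match length 2 + 4 = 6;
# offset 3 copies "abc" twice, yielding "abcabcabc".
assert lz4_decompress_block(b"\x32abc\x03\x00") == b"abcabcabc"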
Compression can be carried out in a stream or in blocks. Higher compression ratios can be achieved by investing more effort in finding the best matches. This results in both a smaller output and faster decompression.
== Implementation ==
The reference implementation in C by Yann Collet is licensed under a BSD license. There are ports and bindings in various languages including Java, C#, Rust, and Python. The Apache Hadoop system uses this algorithm for fast compression. LZ4 was also implemented natively in the Linux kernel 3.11. The FreeBSD, Illumos, ZFS on Linux, and ZFS-OSX implementations of the ZFS filesystem support the LZ4 algorithm for on-the-fly compression. Linux supports LZ4 for SquashFS since 3.19-rc1. LZ4 is also supported by the newer zstd command line utility by Yann Collet, as well as a 7-Zip fork called 7-Zip-zstd.
== References ==
== External links ==
Official website | Wikipedia/LZ4_(compression_algorithm) |
Embedded zerotrees of wavelet transforms (EZW) is a lossy image compression algorithm. At low bit rates, i.e. high compression ratios, most of the coefficients produced by a subband transform (such as the wavelet transform) will be zero, or very close to zero. This occurs because "real world" images tend to contain mostly low frequency information (highly correlated). However where high frequency information does occur (such as edges in the image) this is particularly important in terms of human perception of the image quality, and thus must be represented accurately in any high quality coding scheme.
By considering the transformed coefficients as a tree (or trees) with the lowest frequency coefficients at the root node and with the children of each tree node being the spatially related coefficients in the next higher frequency subband, there is a high probability that one or more subtrees will consist entirely of coefficients which are zero or nearly zero; such subtrees are called zerotrees. Due to this, we use the terms node and coefficient interchangeably, and when we refer to the children of a coefficient, we mean the child coefficients of the node in the tree where that coefficient is located. We use children to refer to directly connected nodes lower in the tree and descendants to refer to all nodes which are below a particular node in the tree, even if not directly connected.
In zerotree-based image compression schemes such as EZW and SPIHT, the intent is to use the statistical properties of the trees in order to efficiently code the locations of the significant coefficients. Since most of the coefficients will be zero or close to zero, the spatial locations of the significant coefficients make up a large portion of the total size of a typical compressed image. A coefficient (likewise a tree) is considered significant if its magnitude (or the magnitudes of a node and all its descendants in the case of a tree) is above a particular threshold. By starting with a threshold which is close to the maximum coefficient magnitudes and iteratively decreasing the threshold, it is possible to create a compressed representation of an image which progressively adds finer detail. Due to the structure of the trees, it is very likely that if a coefficient in a particular frequency band is insignificant, then all its descendants (the spatially related higher frequency band coefficients) will also be insignificant.
EZW uses four symbols to represent (a) a zerotree root, (b) an isolated zero (a coefficient which is insignificant, but which has significant descendants), (c) a significant positive coefficient and (d) a significant negative coefficient. The symbols may thus be represented by two binary bits. The compression algorithm consists of a number of iterations through a dominant pass and a subordinate pass; the threshold is updated (reduced by a factor of two) after each iteration. The dominant pass encodes the significance of the coefficients which have not yet been found significant in earlier iterations, by scanning the trees and emitting one of the four symbols. The children of a coefficient are only scanned if the coefficient was found to be significant, or if the coefficient was an isolated zero. The subordinate pass emits one bit (the most significant bit of each coefficient not so far emitted) for each coefficient which has been found significant in the previous significance passes. The subordinate pass is therefore similar to bit-plane coding.
There are several important features to note. Firstly, it is possible to stop the compression algorithm at any time and obtain an approximation of the original image, the greater the number of bits received, the better the image. Secondly, due to the way in which the compression algorithm is structured as a series of decisions, the same algorithm can be run at the decoder to reconstruct the coefficients, but with the decisions being taken according to the incoming bit stream. In practical implementations, it would be usual to use an entropy code such as arithmetic code to further improve the performance of the dominant pass. Bits from the subordinate pass are usually random enough that entropy coding provides no further coding gain.
The coding performance of EZW has since been exceeded by SPIHT and its many derivatives.
== Introduction ==
The embedded zerotree wavelet (EZW) algorithm, as developed by J. Shapiro in 1993, enables scalable image transmission and decoding. It is based on four key concepts: first, it uses a discrete wavelet transform or hierarchical subband decomposition; second, it predicts the absence of significant information by exploiting the self-similarity inherent in images; third, it performs entropy-coded successive-approximation quantization; and fourth, it achieves universal lossless data compression via adaptive arithmetic coding.
Besides, the EZW algorithm also contains the following features:
(1) A discrete wavelet transform which can use a compact multiresolution representation in the image.
(2) Zerotree coding which provides a compact multiresolution representation of significance maps.
(3) Successive approximation for a compact multiprecision representation of the significant coefficients.
(4) A prioritization protocol which the importance is determined by the precision, magnitude, scale, and spatial location of the wavelet coefficients in order.
(5) Adaptive multilevel arithmetic coding which is a fast and efficient method for entropy coding strings of symbols.
== Embedded zerotree wavelet coding ==
=== A. Encoding a coefficient of the significance map ===
In a significance map, the coefficients can be represented by the following four different symbols. Using these symbols to represent the image information makes the coding less complicated.
==== 1. Zerotree root ====
If the magnitude of a coefficient is less than a threshold T, and all its descendants are less than T, then this coefficient is called a zerotree root. If a coefficient has been labeled as a zerotree root, it means that all of its descendants are insignificant, so there is no need to label its descendants.
==== 2. Isolated zero ====
If the magnitude of a coefficient is less than a threshold T, but it still has some significant descendants, then this coefficient is called an isolated zero.
==== 3. Positive significant coefficient ====
If the magnitude of a coefficient is greater than a threshold T, and the coefficient is positive, then it is a positive significant coefficient.
==== 4. Negative significant coefficient ====
If the magnitude of a coefficient is greater than a threshold T, and the coefficient is negative, then it is a negative significant coefficient.
=== B. Defining threshold ===
The thresholds used above can be defined as follows.
==== 1. Initial threshold T0 (assume Cmax is the coefficient with the largest magnitude) ====
{\displaystyle T_{0}=2^{\lfloor \log _{2}|C_{\max }|\rfloor }}
==== 2. Threshold Ti is reduced to half of the value of the previous threshold. ====
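A small sketch of this threshold schedule (the helper names are ad hoc; T0 = 2^⌊log2 |Cmax|⌋ is the standard choice), applied here to the first row of the example matrix further below:

import math

def initial_threshold(coefficients):
    """T0 = 2**floor(log2(|Cmax|)), with Cmax the largest-magnitude coefficient."""
    c_max = max(abs(c) for c in coefficients)
    return 2 ** math.floor(math.log2(c_max))

def thresholds(coefficients, passes):
    """Each threshold is half the previous one: Ti = T(i-1) / 2."""
    t = initial_threshold(coefficients)
    return [t / 2 ** i for i in range(passes)]

coeffs = [63, -34, 49, 10, 7, 13, -12, 7]   # first row of the example matrix below
print(thresholds(coeffs, 4))                 # [32, 16.0, 8.0, 4.0]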
=== C. Scanning order for coefficients ===
Raster scanning is the rectangular pattern of image capture and reconstruction. In EZW, the coefficients are scanned in such a way that no child node is scanned before its parent node. Also, all positions in a given subband are scanned before the scan moves to the next subband.
=== D. Two-pass bitplane coding ===
==== (1) Refinement pass (or subordinate pass) ====
This pass determines whether the coefficient lies in the interval [Ti, 2Ti), and a refinement bit is coded for each significant coefficient.
In this method, the significant coefficients are visited according to magnitude and raster order within subbands.
==== (2) Significant pass (or dominant pass) ====
This method codes a bit for each coefficient that has not yet been found significant. Once a determination of significance has been made, the significant coefficient is included in a list for further refinement in the refinement pass. Any coefficient already known to be zero is not coded again.
== Example ==
DCT data | ZeroTree scan order (EZW)
63 -34 49 10 7 13 -12 7 | A B BE BF E1 E2 F1 F2
-31 23 14 -13 3 4 6 -1 | C D BG BH E3 E4 F3 F4
15 14 3 -12 5 -7 3 9 | CI CJ DM DN G1 G2 H1 H2
-9 -7 -14 8 4 -2 3 2 | CK CL DO DP G3 G4 H3 H4
-5 9 -1 47 4 6 -2 2 | I1 I2 J1 J2 M1 M2 N1 N2
3 0 -3 2 3 -2 0 4 | I3 I4 J3 J4 M3 M4 N3 N4
2 -3 6 -4 3 6 3 6 | K1 K2 L1 L2 O1 O2 P1 P2
5 11 5 6 0 3 -4 4 | K3 K4 L3 L4 O3 O4 P3 P4
D1: pnzt p ttt tztt tttttptt (20 codes)
PNZT P(t) TTT TZTT TPTT (D1 by M-EZW, 16 codes)
PNZT P(t) Z(t) TZ(p) TPZ(p) (D1 by NM-EZW, 11 codes)
P N (t), P or N above zerotree scan
P N Z(t p), p=pair T, t=triple T, P/N + TT/TTT in D1 code
S1: 1010
D2: ztnp tttttttt
S2: 1001 10 (Shapiro PDF end here)
D3: zzzz zppnppnttnnp tpttnttttttttptttptttttttttptttttttttttt
S3: 1001 11 01111011011000
D4: zzzzzzztztznzzzzpttptpptpnptntttttptpnpppptttttptptttpnp
S4: 1101 11 11011001000001 110110100010010101100
D5: zzzzztzzzzztpzzzttpttttnptppttptttnppnttttpnnpttpttppttt
S5: 1011 11 00110100010111 110101101100100000000 110110110011000111
D6: zzzttztttztttttnnttt
( http://www.polyvalens.com/wavelets/ezw/ )
Detailed: (the newly produced S bit is listed first; the other values are computed in earlier cycles)
s-step 1 21 321
val D1 S1 R1 D2 S2 R2 D3 S3. ... R3 ... D4,S4...
A 63 P 1 >=48 56 Z .1 >=56 60 Z ..1 >=60 62
B -34 N 0 <48 -40 T .0 <40 -36 Z ..0 <36 -36
C -31 IZ <32 0 N 1. >=24 -28 Z .1. >=28 -30
D 23 T <32 0 P 0. <24 20 Z .1. >=20 22
BE 49 P 1 >=48 56 .0 <56 52 Z ..0 <52 50
BF 10 T <32 0 P 0 <12 10
BG 14 T <32 0 P 1 >=12 14
BH -13 T <32 0 N 1 >=12 -14
CI 15 T <32 0 T <16 0 P 1 >=12 14
CJ 14 IZ <32 0 T <16 0 P 1 >=12 14
CK -9 T <32 0 T <16 0 N 0 <12 -10
CL -7 T <32 0 T <16 0 T <8 0
DM 3 T <16 0 T <8 0
DN -12 T <16 0 N 1 >=12 -14
DO -14 T <16 0 N 1 >=12 -14
DP 8 T <16 0 P <12 10
E1 7 T <32 0 .E,F,G,H(1,2,3,4)
E2 13 T <32 0 .I,J,K(1,2,3,4)
E3 3 T <32 0 .N,O,P(1,2,3,4)
E4 4 T <32 0 .
J1 -1 T <32 0 .
J2 47 P 0 >48 40 1 >=40 44 .
J3 -3 T <32 0
J4 2 T <32 0
D = dominant pass (P = positive, N = negative, T = zerotree, IZ = isolated zero)
S = subordinate pass;
(R = back reconstructed value)
== See also ==
Set partitioning in hierarchical trees (SPIHT)
== References ==
J.M. Shapiro (1993). "Embedded image coding using zerotrees of wavelet coefficients". IEEE Transactions on Signal Processing. 41 (12): 3445–3462. CiteSeerX 10.1.1.131.5757. doi:10.1109/78.258085. ISSN 1053-587X. S2CID 18047405. Zbl 0841.94020. Wikidata Q56883112.
== External links ==
Clemens Valens (2003-08-24). "EZW encoding". Archived from the original on 2009-02-03. | Wikipedia/Embedded_zerotrees_of_wavelet_transforms |
In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy.
It is proportional to the expectation of the q-logarithm of a distribution.
== History ==
The concept was introduced in 1988 by Constantino Tsallis as a basis for generalizing the standard statistical mechanics and is identical in form to Havrda–Charvát structural α-entropy, introduced in 1967 within information theory.
== Definition ==
Given a discrete set of probabilities $\{p_i\}$ with the condition $\sum_i p_i = 1$, and $q$ any real number, the Tsallis entropy is defined as
$$S_q(p_i) = k \cdot \frac{1}{q-1}\left(1 - \sum_i p_i^q\right),$$
where $q$ is a real parameter sometimes called the entropic index and $k$ a positive constant.
In the limit as $q \to 1$, the usual Boltzmann–Gibbs entropy is recovered, namely
$$S_{\text{BG}} = S_1(p) = -k\sum_i p_i \ln p_i,$$
where one identifies $k$ with the Boltzmann constant $k_B$.
For continuous probability distributions, we define the entropy as
$$S_q[p] = \frac{1}{q-1}\left(1 - \int (p(x))^q \, dx\right),$$
where $p(x)$ is a probability density function.
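A small numerical sketch of the discrete definition and its $q \to 1$ limit (Python with NumPy; the function name is ours):

```python
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    """S_q = k/(q-1) * (1 - sum_i p_i**q); Boltzmann-Gibbs form at q = 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return -k * np.sum(p * np.log(p))
    return k / (q - 1.0) * (1.0 - np.sum(p ** q))

p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 1.0001))  # ~1.0397, approaching the q -> 1 limit
print(tsallis_entropy(p, 1.0))     # 1.0397..., the Boltzmann-Gibbs value
```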
=== Cross-entropy ===
The cross-entropy pendant is the expectation of the negative q-logarithm with respect to a second distribution, $r$. So
$$\frac{1}{q-1}\left(1 - \sum_i p_i^q \cdot \frac{r_i}{p_i}\right).$$
Using $t = q - 1$, this may be written $(1 - E_r[p^t])/t$. For smaller $t$, the values $p_i^t$ all tend towards $1$.
The limit $q \to 1$ computes the negative of the slope of $E_r[p^t]$ at $t = 0$ and one recovers $-\sum_i r_i \ln p_i$. So for fixed small $t$, raising this expectation relates to log-likelihood maximization.
== Properties ==
=== Identities ===
A logarithm can be expressed in terms of a slope through
$$\frac{d}{dx} p^x = p^x \ln p,$$
resulting in the following formula for the standard entropy:
$$S = -\lim_{x \to 1} \frac{d}{dx} \sum_i p_i^x = -\sum_i p_i \ln p_i.$$
Likewise, the discrete Tsallis entropy satisfies
$$S_q = -\lim_{x \to 1} D_q \sum_i p_i^x,$$
where $D_q$ is the q-derivative with respect to $x$.
=== Non-additivity ===
Given two independent systems A and B, for which the joint probability density satisfies
$$p(A, B) = p(A)\,p(B),$$
the Tsallis entropy of this system satisfies
$$S_q(A, B) = S_q(A) + S_q(B) + (1 - q)\,S_q(A)\,S_q(B).$$
From this result, it is evident that the parameter $|1 - q|$ is a measure of the departure from additivity. In the limit when q = 1,
$$S(A, B) = S(A) + S(B),$$
which is what is expected for an additive system. This property is sometimes referred to as "pseudo-additivity".
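This identity is easy to verify numerically; a minimal sketch with $k = 1$, building the joint distribution of two independent systems as an outer product:

```python
import numpy as np

def S(p, q):
    """Tsallis entropy with k = 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

q = 1.5
pA = np.array([0.7, 0.3])
pB = np.array([0.6, 0.4])
joint = np.outer(pA, pB).ravel()   # independent systems: p(A,B) = p(A) p(B)
lhs = S(joint, q)
rhs = S(pA, q) + S(pB, q) + (1 - q) * S(pA, q) * S(pB, q)
print(np.isclose(lhs, rhs))  # True
```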
=== Exponential families ===
Many common distributions like the normal distribution belong to the statistical exponential families.
Tsallis entropy for an exponential family can be written as
$$H_q^T(p_F(x;\theta)) = \frac{1}{1-q}\left(e^{F(q\theta) - qF(\theta)}\, E_p[e^{(q-1)k(x)}] - 1\right)$$
where $F$ is the log-normalizer and $k$ the term indicating the carrier measure.
For the multivariate normal distribution, the term $k$ is zero, and therefore the Tsallis entropy is available in closed form.
== Applications ==
Tsallis entropy has been used along with the principle of maximum entropy to derive the Tsallis distribution.
In scientific literature, the physical relevance of the Tsallis entropy has been debated. However, from the years 2000 on, an increasingly wide spectrum of natural, artificial and social complex systems have been identified which confirm the predictions and consequences that are derived from this nonadditive entropy, such as nonextensive statistical mechanics, which generalizes the Boltzmann–Gibbs theory.
Among the various experimental verifications and applications presently available in the literature, the following ones deserve a special mention:
The distribution characterizing the motion of cold atoms in dissipative optical lattices predicted in 2003 and observed in 2006.
The fluctuations of the magnetic field in the solar wind enabled the calculation of the q-triplet (or Tsallis triplet).
The velocity distributions in a driven dissipative dusty plasma.
Spin glass relaxation.
Trapped ion interacting with a classical buffer gas.
High energy collisional experiments at LHC/CERN (CMS, ATLAS and ALICE detectors) and RHIC/Brookhaven (STAR and PHENIX detectors).
Among the various available theoretical results which clarify the physical conditions under which Tsallis entropy and associated statistics apply, the following ones can be selected:
Anomalous diffusion.
Uniqueness theorem.
Sensitivity to initial conditions and entropy production at the edge of chaos.
Probability sets that make the nonadditive Tsallis entropy extensive in the thermodynamical sense.
Strongly quantum entangled systems and thermodynamics.
Thermostatistics of overdamped motion of interacting particles.
Nonlinear generalizations of the Schrödinger, Klein–Gordon and Dirac equations.
Blackhole entropy calculation.
For further details a bibliography is available at http://tsallis.cat.cbpf.br/biblio.htm
== Generalized entropies ==
Several interesting physical systems abide by entropic functionals that are more general than the standard Tsallis entropy. Therefore, several physically meaningful generalizations have been introduced. The two most general of these are notably: Superstatistics, introduced by C. Beck and E. G. D. Cohen in 2003 and Spectral Statistics, introduced by G. A. Tsekouras and Constantino Tsallis in 2005. Both these entropic forms have Tsallis and Boltzmann–Gibbs statistics as special cases; Spectral Statistics has been proven to at least contain Superstatistics and it has been conjectured to also cover some additional cases.
== See also ==
Rényi entropy
Tsallis distribution
== References ==
== Further reading ==
Furuichi, Shigeru; Mitroi-Symeonidis, Flavia-Corina; Symeonidis, Eleutherius (2014). "On some properties of Tsallis hypoentropies and hypodivergences". Entropy. 16 (10): 5377–5399. arXiv:1410.4903. Bibcode:2014Entrp..16.5377F. doi:10.3390/e16105377.
Furuichi, Shigeru; Mitroi, Flavia-Corina (2012). "Mathematical inequalities for some divergences". Physica A. 391 (1–2): 388–400. arXiv:1104.5603. Bibcode:2012PhyA..391..388F. doi:10.1016/j.physa.2011.07.052. S2CID 92394.
Furuichi, Shigeru; Minculete, Nicușor; Mitroi, Flavia-Corina (2012). "Some inequalities on generalized entropies". Journal of Inequalities and Applications. 2012: 226. arXiv:1104.0360. doi:10.1186/1029-242X-2012-226.
== External links ==
Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions | Wikipedia/Tsallis_entropy |
In information theory, the cross-entropy between two probability distributions $p$ and $q$, over the same underlying set of events, measures the average number of bits needed to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$.
== Definition ==
The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows:
$$H(p, q) = -\operatorname{E}_p[\log q],$$
where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$.
The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$, the divergence of $p$ from $q$ (also known as the relative entropy of $p$ with respect to $q$):
$$H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q),$$
where $H(p)$ is the entropy of $p$.
For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$, this means
$$H(p, q) = -\sum_{x \in \mathcal{X}} p(x)\,\log q(x).$$
The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then
$$-\int_{\mathcal{X}} P(x)\,\log Q(x)\,\mathrm{d}x = \operatorname{E}_p[-\log Q],$$
and therefore
$$H(p, q) = -\int_{\mathcal{X}} P(x)\,\log Q(x)\,\mathrm{d}x.$$
NB: The notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$.
== Motivation ==
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\frac{1}{2}\right)^{\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed the expected message-length under the true distribution $p$ is
$$\begin{aligned}\operatorname{E}_p[\ell] &= -\operatorname{E}_p\left[\frac{\ln q(x)}{\ln 2}\right] = -\operatorname{E}_p\left[\log_2 q(x)\right] \\ &= -\sum_{x_i} p(x_i)\,\log_2 q(x_i) = -\sum_x p(x)\,\log_2 q(x) = H(p, q).\end{aligned}$$
== Estimation ==
There are many situations where cross-entropy needs to be measured but the distribution of $p$ is unknown. An example is language modeling, where a model is created based on a training set $T$, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, $p$ is the true distribution of words in any corpus, and $q$ is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:
$$H(T, q) = -\sum_{i=1}^{N} \frac{1}{N} \log_2 q(x_i),$$
where $N$ is the size of the test set, and $q(x)$ is the probability of event $x$ estimated from the training set. In other words, $q(x_i)$ is the probability estimate of the model that the i-th word of the text is $x_i$. The sum is averaged over the $N$ words of the test. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from $p(x)$.
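A minimal sketch of this estimator (the toy corpus and unigram model are purely illustrative assumptions):

```python
import math
from collections import Counter

def estimate_cross_entropy(test_words, model_prob):
    """H(T, q) = -(1/N) * sum_i log2 q(x_i) over the N test items."""
    return -sum(math.log2(model_prob(w)) for w in test_words) / len(test_words)

# Toy unigram model q estimated from a tiny training corpus:
train = "the cat sat on the mat the end".split()
counts = Counter(train)
q = lambda w: counts[w] / len(train)

test = "the cat sat on the mat".split()
print(estimate_cross_entropy(test, q))  # bits per word assigned by the model
```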
== Relation to maximum likelihood ==
The cross-entropy arises in classification problems when introducing a logarithm in the guise of the log-likelihood function.
The section is concerned with estimating the probability of different possible discrete outcomes. To this end, denote a parametrized family of distributions by $q_\theta$, with $\theta$ subject to the optimization effort. Consider a given finite sequence of $N$ values $x_i$ from a training set, obtained from conditionally independent sampling. The likelihood assigned to any considered parameter $\theta$ of the model is then given by the product over all probabilities $q_\theta(X = x_i)$.
Repeated occurrences are possible, leading to equal factors in the product. If the count of occurrences of the value equal to $x_i$ (for some index $i$) is denoted by $\#x_i$, then the frequency of that value equals $\#x_i / N$. Denote the latter by $p(X = x_i)$, as it may be understood as an empirical approximation to the probability distribution underlying the scenario. Further denote by $PP := \mathrm{e}^{H(p, q_\theta)}$ the perplexity, which can be seen to equal $\prod_{x_i} q_\theta(X = x_i)^{-p(X = x_i)}$ by the calculation rules for the logarithm, where the product is over the values without double counting. So
$$\mathcal{L}(\theta; \mathbf{x}) = \prod_i q_\theta(X = x_i) = \prod_{x_i} q_\theta(X = x_i)^{\#x_i} = PP^{-N} = \mathrm{e}^{-N \cdot H(p, q_\theta)}$$
or
$$\log \mathcal{L}(\theta; \mathbf{x}) = -N \cdot H(p, q_\theta).$$
Since the logarithm is a monotonically increasing function, it does not affect extremization. So observe that the likelihood maximization amounts to minimization of the cross-entropy.
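The identity $\log \mathcal{L}(\theta; \mathbf{x}) = -N \cdot H(p, q_\theta)$ can be checked numerically; a sketch with an arbitrary toy sample (natural logarithms throughout):

```python
import numpy as np

# Check log L(theta; x) = -N * H(p, q_theta) on a toy sample.
x = np.array([0, 0, 1, 2, 0, 1])            # N = 6 draws from {0, 1, 2}
q = np.array([0.5, 0.3, 0.2])               # model probabilities q_theta
N = len(x)
log_L = np.sum(np.log(q[x]))                # log-likelihood of the sample
p_emp = np.bincount(x, minlength=3) / N     # empirical distribution p
H = -np.sum(p_emp * np.log(q))              # cross-entropy H(p, q_theta)
print(np.isclose(log_L, -N * H))            # True
```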
== Cross-entropy minimization ==
Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distribution $q$ against a fixed reference distribution $p$, cross-entropy and KL divergence are identical up to an additive constant (since $p$ is fixed): according to Gibbs' inequality, both take on their minimal values when $p = q$, which is $0$ for KL divergence, and $\mathrm{H}(p)$ for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.
However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution $q$ is the fixed prior reference distribution, and the distribution $p$ is optimized to be as close to $q$ as possible, subject to some constraint. In this case the two minimizations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by restating cross-entropy to be $D_{\mathrm{KL}}(p \parallel q)$, rather than $H(p, q)$. In fact, cross-entropy is another name for relative entropy; see Cover and Thomas and Good. On the other hand, $H(p, q)$ does not agree with the literature and can be misleading.
== Cross-entropy loss function and logistic regression ==
Cross-entropy can be used to define a loss function in machine learning and optimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions to adversarial learning. The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model. This is also known as the log loss (or logarithmic loss or logistic loss); the terms "log loss" and "cross-entropy loss" are used interchangeably.
More specifically, consider a binary regression model which can be used to classify observations into two possible classes (often simply labelled $0$ and $1$). The output of the model for a given observation, given a vector of input features $x$, can be interpreted as a probability, which serves as the basis for classifying the observation. In logistic regression, the probability is modeled using the logistic function $g(z) = 1/(1 + e^{-z})$ where $z$ is some function of the input vector $x$, commonly just a linear function. The probability of the output $y = 1$ is given by
$$q_{y=1} = \hat{y} \equiv g(\mathbf{w} \cdot \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w} \cdot \mathbf{x}}},$$
where the vector of weights $\mathbf{w}$ is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output $y = 0$ is simply given by
$$q_{y=0} = 1 - \hat{y}.$$
Having set up our notation, $p \in \{y, 1 - y\}$ and $q \in \{\hat{y}, 1 - \hat{y}\}$, we can use cross-entropy to get a measure of dissimilarity between $p$ and $q$:
$$H(p, q) = -\sum_i p_i \log q_i = -y\log \hat{y} - (1 - y)\log(1 - \hat{y}).$$
Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. Other loss functions that penalize errors differently can also be used for training, resulting in models with different final test accuracy. For example, suppose we have $N$ samples with each sample indexed by $n = 1, \dots, N$. The average of the loss function is then given by:
$$J(\mathbf{w}) = \frac{1}{N}\sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N}\sum_{n=1}^{N}\left[y_n \log \hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n)\right],$$
where $\hat{y}_n \equiv g(\mathbf{w} \cdot \mathbf{x}_n) = 1/(1 + e^{-\mathbf{w} \cdot \mathbf{x}_n})$, with $g(z)$ the logistic function as before.
The logistic loss is sometimes called cross-entropy loss. It is also known as log loss. (In this case, the binary label is often denoted by {−1,+1}.)
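A minimal sketch of this average loss (binary cross-entropy) as code; the function name and array shapes are our assumptions:

```python
import numpy as np

def logistic_loss(w, X, y):
    """Average binary cross-entropy J(w): X is (N, d), y holds labels in {0, 1}."""
    y_hat = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities g(w . x)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

X = np.array([[1.0, 2.0], [1.0, -1.0]])
y = np.array([1.0, 0.0])
print(logistic_loss(np.zeros(2), X, y))  # log 2 ~ 0.693 for an uninformed model
```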
Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss for linear regression. That is, define
$$X^{\mathsf{T}} = \begin{pmatrix} 1 & x_{11} & \dots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix} \in \mathbb{R}^{n \times (p+1)},$$
$$\hat{y_i} = \hat{f}(x_{i1}, \dots, x_{ip}) = \frac{1}{1 + \exp(-\beta_0 - \beta_1 x_{i1} - \dots - \beta_p x_{ip})},$$
$$L(\boldsymbol{\beta}) = -\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right].$$
Then we have the result
$$\frac{\partial}{\partial \boldsymbol{\beta}} L(\boldsymbol{\beta}) = X^{T}(\hat{Y} - Y).$$
The proof is as follows. For any $\hat{y}_i$, we have
$$\frac{\partial}{\partial \beta_0} \ln \frac{1}{1 + e^{-\beta_0 + k_0}} = \frac{e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}},$$
$$\frac{\partial}{\partial \beta_0} \ln\left(1 - \frac{1}{1 + e^{-\beta_0 + k_0}}\right) = \frac{-1}{1 + e^{-\beta_0 + k_0}},$$
$$\begin{aligned}\frac{\partial}{\partial \beta_0} L(\boldsymbol{\beta}) &= -\sum_{i=1}^{N}\left[\frac{y_i \cdot e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}} - (1 - y_i)\frac{1}{1 + e^{-\beta_0 + k_0}}\right] \\ &= -\sum_{i=1}^{N}\left[y_i - \hat{y}_i\right] = \sum_{i=1}^{N}(\hat{y}_i - y_i),\end{aligned}$$
$$\frac{\partial}{\partial \beta_1} \ln \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}} = \frac{x_{i1} e^{k_1}}{e^{\beta_1 x_{i1}} + e^{k_1}},$$
$$\frac{\partial}{\partial \beta_1} \ln\left[1 - \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}}\right] = \frac{-x_{i1} e^{\beta_1 x_{i1}}}{e^{\beta_1 x_{i1}} + e^{k_1}},$$
$$\frac{\partial}{\partial \beta_1} L(\boldsymbol{\beta}) = -\sum_{i=1}^{N} x_{i1}(y_i - \hat{y}_i) = \sum_{i=1}^{N} x_{i1}(\hat{y}_i - y_i).$$
In a similar way, we eventually obtain the desired result.
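The claimed gradient can be verified numerically against central finite differences; a sketch with arbitrary toy data:

```python
import numpy as np

# Numerically verify dL/dbeta = X^T (y_hat - y).
rng = np.random.default_rng(0)
X = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])   # column of 1s for beta_0
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
beta = rng.normal(size=3)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
L = lambda b: -np.sum(y * np.log(sigmoid(X @ b))
                      + (1 - y) * np.log(1 - sigmoid(X @ b)))

analytic = X.T @ (sigmoid(X @ beta) - y)
h = 1e-6
numeric = np.array([(L(beta + h * np.eye(3)[j]) - L(beta - h * np.eye(3)[j])) / (2 * h)
                    for j in range(3)])
print(np.allclose(analytic, numeric))  # True
```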
== Amended cross-entropy ==
It may be beneficial to train an ensemble of models that have diversity, such that when they are combined, their predictive accuracy is augmented.
Assuming a simple ensemble of $K$ classifiers is assembled via averaging the outputs, then the amended cross-entropy is given by
$$e^k = H(p, q^k) - \frac{\lambda}{K}\sum_{j \neq k} H(q^j, q^k)$$
where $e^k$ is the cost function of the $k$-th classifier, $q^k$ is the output probability of the $k$-th classifier, $p$ is the true probability to be estimated, and $\lambda$ is a parameter between 0 and 1 that defines the 'diversity' that we would like to establish among the ensemble. When $\lambda = 0$ we want each classifier to do its best regardless of the ensemble and when $\lambda = 1$ we would like the classifier to be as diverse as possible.
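A minimal sketch of this cost function; the argument layout (a list of the $K$ classifiers' output distributions) is our assumption:

```python
import numpy as np

def amended_cost(p, qs, k, lam):
    """e^k = H(p, q_k) - (lam / K) * sum_{j != k} H(q_j, q_k), with H the
    cross-entropy; qs is a list of the K classifiers' output distributions."""
    H = lambda a, b: -np.sum(np.asarray(a) * np.log(np.asarray(b)))
    K = len(qs)
    penalty = sum(H(qs[j], qs[k]) for j in range(K) if j != k)
    return H(p, qs[k]) - lam / K * penalty
```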
== See also ==
Cross-entropy method
Logistic regression
Conditional entropy
Kullback–Leibler distance
Maximum-likelihood estimation
Mutual information
Perplexity
== References ==
== Further reading ==
de Boer, Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research 134 (1), 19–67. | Wikipedia/Cross-entropy |
Communication theory is a proposed description of communication phenomena, the relationships among them, a storyline describing these relationships, and an argument for these three elements. Communication theory provides a way of talking about and analyzing key events, processes, and commitments that together form communication. Theory can be seen as a way to map the world and make it navigable; communication theory gives us tools to answer empirical, conceptual, or practical communication questions.
Communication is defined in both commonsense and specialized ways. Communication theory emphasizes its symbolic and social process aspects as seen from two perspectives—as exchange of information (the transmission perspective), and as work done to connect and thus enable that exchange (the ritual perspective).
Sociolinguistic research in the 1950s and 1960s demonstrated that the degree to which people change the formality of their language depends on the social context they are in. This had been explained in terms of social norms that dictate language use. The way we use language differs from person to person.
Communication theories have emerged from multiple historical points of origin, including classical traditions of oratory and rhetoric, Enlightenment-era conceptions of society and the mind, and post-World War II efforts to understand propaganda and relationships between media and society. Prominent historical and modern foundational communication theorists include Kurt Lewin, Harold Lasswell, Paul Lazarsfeld, Carl Hovland, James Carey, Elihu Katz, Kenneth Burke, John Dewey, Jurgen Habermas, Marshall McLuhan, Theodor Adorno, Antonio Gramsci, Jean-Luc Nancy, Robert E. Park, George Herbert Mead, Joseph Walther, Claude Shannon, Stuart Hall and Harold Innis—although some of these theorists may not explicitly associate themselves with communication as a discipline or field of study.
== Models and elements ==
One key activity in communication theory is the development of models and concepts used to describe communication. In the Linear Model, communication works in one direction: a sender encodes some message and sends it through a channel for a receiver to decode. In comparison, the Interactional Model of communication is bidirectional. People send and receive messages in a cooperative fashion as they continuously encode and decode information. The Transactional Model assumes that information is sent and received simultaneously through a noisy channel, and further considers a frame of reference or experience each person brings to the interaction.
Some of the basic elements of communication studied in communication theory are:
Source: Shannon calls this element the "information source", which "produces a message or sequence of messages to be communicated to the receiving terminal."
Sender: Shannon calls this element the "transmitter", which "operates on the message in some way to produce a signal suitable for transmission over the channel." In Aristotle, this element is the "speaker" (orator).
Channel: For Shannon, the channel is "merely the medium used to transmit the signal from transmitter to receiver."
Receiver: For Shannon, the receiver "performs the inverse operation of that done by the transmitter, reconstructing the message from the signal."
Destination: For Shannon, the destination is "the person (or thing) for whom the message is intended".
Message: from Latin mittere, "to send". The message is a concept, information, communication, or statement that is sent in a verbal, written, recorded, or visual form to the recipient.
Feedback
Entropic elements, positive and negative
== Epistemology ==
Communication theories vary substantially in their epistemology, and articulating this philosophical commitment is part of the theorizing process. Although the various epistemic positions used in communication theories can vary, one categorization scheme distinguishes among interpretive empirical, metric empirical or post-positivist, rhetorical, and critical epistemologies. Communication theories may also fall within or vary by distinct domains of interest, including information theory, rhetoric and speech, interpersonal communication, organizational communication, sociocultural communication, political communication, computer-mediated communication, and critical perspectives on media and communication.
=== Interpretive empirical epistemology ===
Interpretive empirical epistemology or interpretivism seeks to develop subjective insight and understanding of communication phenomena through the grounded study of local interactions. When developing or applying an interpretivist theory, the researcher themself is a vital instrument. Theories characteristic of this epistemology include structuration and symbolic interactionism, and frequently associated methods include discourse analysis and ethnography.
=== Metric empirical or post-positivist epistemology ===
A metric empirical or post-positivist epistemology takes an axiomatic and sometimes causal view of phenomena, developing evidence about association or making predictions, and using methods oriented to measurement of communication phenomena.
Post-positivist theories are generally evaluated by their accuracy, consistency, fruitfulness, and parsimoniousness. Theories characteristic of a post-positivist epistemology may originate from a wide range of perspectives, including pragmatist, behaviorist, cognitivist, structuralist, or functionalist. Although post-positivist work may be qualitative or quantitative, statistical analysis is a common form of evidence and scholars taking this approach often seek to develop results that can be reproduced by others.
=== Rhetorical epistemology ===
A rhetorical epistemology lays out a formal, logical, and global view of phenomena with particular concern for persuasion through speech.
A rhetorical epistemology often draws from Greco-Roman foundations such as the works of Aristotle and Cicero, although recent work also draws from Michel Foucault, Kenneth Burke, Marxism, second-wave feminism, and cultural studies. Rhetoric has changed over time. Fields of rhetoric and composition have grown to become more interested in alternative types of rhetoric.
=== Critical epistemology ===
A critical epistemology is explicitly political and intentional with respect to its standpoint, articulating an ideology and criticizing phenomena with respect to this ideology. A critical epistemology is driven by its values and oriented to social and political change. Communication theories associated with this epistemology include deconstructionism, cultural Marxism, third-wave feminism, and resistance studies.
=== New modes of communication ===
During the mid-1970s, the presiding paradigm in the study of communication for development passed; more specifically, a participatory approach arose to challenge approaches such as diffusionism, which had dominated since the 1950s. There is no valid reason for studying people as an aggregation of specific individuals whose social experience is unified and cancelled out, represented only by the attributes of socio-economic status, age, and sex, except by assuming that the audience is a mass.
== By perspective or subdiscipline ==
Approaches to theory also vary by perspective or subdiscipline. The communication theory as a field model proposed by Robert Craig has been an influential approach to breaking down the field of communication theory into perspectives, each with its own strengths, weaknesses, and trade-offs.
=== Information theory ===
In information theory, communication theories examine the technical process of information exchange while typically using mathematics. This perspective on communication theory originated from the development of information theory in the early 1920s. Limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. The history of information theory as a form of communication theory can be traced through a series of key papers during this time. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system. Ralph Hartley's 1928 paper, Transmission of Information, uses the word "information" as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit, scale, or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. The main landmark event that opened the way to the development of the information theory form of communication theory was the publication of an article by Claude Shannon (1916–2001) in the Bell System Technical Journal in July and October 1948 under the title "A Mathematical Theory of Communication". Shannon focused on the problem of how best to encode the information that a sender wants to transmit. He also used tools in probability theory, developed by Norbert Wiener.
These papers marked the nascent stages of applied communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory. "The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point." In 1949, in a declassified version of Shannon's wartime work on the mathematical theory of cryptography ("Communication Theory of Secrecy Systems"), he proved that all theoretically unbreakable ciphers must have the same requirements as the one-time pad. He is also credited with the introduction of sampling theory, which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later. In 1951, Shannon made his fundamental contribution to natural language processing and computational linguistics with his article "Prediction and Entropy of Printed English" (1951), providing a clear quantifiable link between cultural practice and probabilistic cognition.
=== Interpersonal communication ===
Theories in interpersonal communication are concerned with the ways in which very small groups of people communicate with one another. It also provides the framework in which we view the world around us. Although interpersonal communication theories have their origin in mass communication studies of attitude and response to messages, since the 1970s, interpersonal communication theories have taken on a distinctly personal focus. Interpersonal theories examine relationships and their development, non-verbal communication, how we adapt to one another during conversation, how we develop the messages we seek to convey, and how deception works.
=== Organizational communication ===
Organizational communication theories address not only the ways in which people use communication in organizations, but also how they use communication to constitute that organization, developing structures, relationships, and practices to achieve their goals. Although early organization communication theories were characterized by a so-called container model (the idea that an organization is a clearly bounded object inside which communication happens in a straightforward manner following hierarchical lines), more recent theories have viewed the organization as a more fluid entity with fuzzy boundaries. Studies within the field of organizational communication mention communication as a facilitating act and a precursor to organizational activity as cooperative systems.
Given that its object of study is the organization, it is perhaps not surprising that organization communication scholarship has important connections to theories of management, with Management Communication Quarterly serving as a key venue for disseminating scholarly work. However, theories in organizational communication retain a distinct identity through their critical perspective toward power and attention to the needs and interests of workers, rather than privileging the will of management.
Organizational communication can be distinguished by its orientation to four key problematics: voice (who can speak within an organization), rationality (how decisions are made and whose ends are served), organization (how is the organization itself structured and how does it function), and the organization-society relationship (how the organization may alternately serve, exploit, and reflect society as a whole).
=== Sociocultural communication ===
This line of theory examines how social order is both produced and reproduced through communication. Communication problems in the sociocultural tradition may be theorized in terms of misalignment, conflict, or coordination failure. Theories in this domain explore dynamics such as micro and macro level phenomena, structure versus agency, the local versus the global, and communication problems which emerge due to gaps of space and time, sharing some kinship with sociological and anthropological perspectives but distinguished by keen attention to communication as constructed and constitutive.
=== Political communication ===
Political communication theories are concerned with the public exchange of messages among political actors of all kinds. This scope is in contrast to theories of political science which look inside political institutions to understand decision-making processes.
Early political communication theories examined the roles of mass communication (i.e. television and newspapers) and political parties on political discourse. However, as the conduct of political discourse has expanded, theories of political communication have likewise developed, to now include models of deliberation and sensemaking, and discourses about a wide range of political topics: the role of the media (e.g. as a gatekeeper, framer, and agenda-setter); forms of government (e.g. democracy, populism, and autocracy); social change (e.g. activism and protests); economic order (e.g. capitalism, neoliberalism and socialism); human values (e.g. rights, norms, freedom, and authority); and propaganda, disinformation, and trust.
Two of the important emerging areas for theorizing about political communication are the examination of civic engagement and international comparative work (given that much of political communication has been done in the United States).
=== Computer-mediated communication ===
Theories of computer-mediated communication or CMC emerged as a direct response to the rapid emergence of novel mediating communication technologies in the form of computers. CMC scholars inquire as to what may be lost and what may be gained when we shift many of our formerly unmediated and entrained practices (that is, activities that were necessarily conducted in a synchronized, ordered, dependent fashion) into mediated and disentrained modes. For example, a discussion that once required a meeting can now be an e-mail thread, an appointment confirmation that once involved a live phone call can now be a click on a text message, a collaborative writing project that once required an elaborate plan for drafting, circulating, and annotating can now take place in a shared document.
CMC theories fall into three categories: cues-filtered-out theories, experiential/perceptual theories, and adaptation to/exploitation of media. Cues-filtered-out theories have often treated face-to-face interaction as the gold standard against which mediated communication should be compared, and includes such theories as social presence theory, media richness theory, and the Social Identity model of Deindividuation Effects (SIDE). Experiential/perceptual theories are concerned with how individuals perceive the capacity of technologies, such as whether the technology creates psychological closeness (electronic propinquity theory).
Adaptation/exploitation theories consider how people may creatively expand or make use of the limitations in CMC systems, including social information processing theory (SIP) and the idea of the hyperpersonal (when people make use of the limitations of the mediated channel to create a selective view of themselves with their communication partner, developing an impression that exceeds reality). Theoretical work from Joseph Walther has been highly influential in the development of CMC.
Theories in this area often examine the limitations and capabilities of new technologies, taking up an 'affordances' perspective inquiring what the technology may "request, demand, encourage, discourage, refuse, and allow." Recently the theoretical and empirical focus of CMC has shifted more explicitly away from the 'C' (i.e. Computer) and toward the 'M' (i.e. Mediation).
=== Rhetoric and speech ===
Theories in rhetoric and speech are often concerned with discourse as an art, including practical consideration of the power of words and our ability to improve our skills through practice. Rhetorical theories provide a way of analyzing speeches when read in an exegetical manner (close, repeated reading to extract themes, metaphors, techniques, argument, meaning, etc.); for example with respect to their relationship to power or justice, or their persuasion, emotional appeal, or logic.
=== Critical perspectives on media and communication ===
Critical social theory in communication, while sharing some traditions with rhetoric, is explicitly oriented toward "articulating, questioning, and transcending presuppositions that are judged to be untrue, dishonest, or unjust."(p. 147) Some work bridges this distinction to form critical rhetoric. Critical theories have their roots in the Frankfurt School, which brought together anti-establishment thinkers alarmed by the rise of Nazism and propaganda, including the work of Max Horkheimer and Theodor Adorno. Modern critical perspectives often engage with emergent social movements such as post-colonialism and queer theory, seeking to be reflective and emancipatory. One of the influential bodies of theory in this area comes from the work of Stuart Hall, who questioned traditional assumptions about the monolithic functioning of mass communication with his Encoding/Decoding Model of Communication and offered significant expansions of theories of discourse, semiotics, and power through media criticism and explorations of linguistic codes and cultural identity.
== Axiology ==
Axiology is concerned with how values inform research and theory development. Most communication theory is guided by one of three axiological approaches. The first approach recognizes that values will influence theorists' interests but suggests that those values must be set aside once actual research begins. Outside replication of research findings is particularly important in this approach to prevent individual researchers' values from contaminating their findings and interpretations. The second approach rejects the idea that values can be eliminated from any stage of theory development. Within this approach, theorists do not try to divorce their values from inquiry. Instead, they remain mindful of their values so that they understand how those values contextualize, influence or skew their findings. The third approach not only rejects the idea that values can be separated from research and theory, but rejects the idea that they should be separated. This approach is often adopted by critical theorists who believe that the role of communication theory is to identify oppression and produce social change. In this axiological approach, theorists embrace their values and work to reproduce those values in their research and theory development.
== References ==
== Further reading ==
== External links ==
American Communication Association
Association for Education in Journalism and Mass Communication
Central States Communication Association
Eastern Communication Association
International Communication Association
National Communication Association
Southern States Communication Association
Western States Communication Association | Wikipedia/Communication_theory |
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator).
In the field of electronics, signal recovery is the separation of such patterns from a disguising background.
According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat.
Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954.
Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases.
Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the sciences and confusion matrices used in artificial intelligence. It is also usable in alarm management, where it is important to separate important events from background noise.
== Psychology ==
Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during eyewitness identification. SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see also decision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect.
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories: hits (stimulus present, observer responds "present"), misses (stimulus present, observer responds "absent"), false alarms (stimulus absent, observer responds "present"), and correct rejections (stimulus absent, observer responds "absent").
Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like the sensitivity index d' and A', and response bias can be estimated with statistics like c and β. β is the measure of response bias.
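Under the standard equal-variance Gaussian model, $d' = z(H) - z(F)$ and the criterion $c = -\tfrac{1}{2}(z(H) + z(F))$, where $H$ and $F$ are the hit and false-alarm rates and $z$ is the inverse of the standard normal cumulative distribution. A minimal sketch:

```python
from scipy.stats import norm

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2 from the four trial counts."""
    H = hits / (hits + misses)                               # hit rate
    F = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
    return norm.ppf(H) - norm.ppf(F), -(norm.ppf(H) + norm.ppf(F)) / 2

print(dprime_and_c(40, 10, 15, 35))  # H = 0.8, F = 0.3: d' ~ 1.37, c ~ -0.16
```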
Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm.
== Applications ==
Signal Detection Theory has wide application, both in humans and animals. Topics include memory, stimulus characteristics of schedules of reinforcement, etc.
=== Sensitivity or discriminability ===
Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-called sensitivity index or d'. There are also non-parametric measures, such as the area under the ROC-curve.
=== Bias ===
Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias.
=== Compressed sensing ===
Another field closely related to signal detection theory is compressed sensing (or compressive sensing). The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is the recovery of high-dimensional signals which are known to be sparse (or nearly sparse) from only a few linear measurements. The number of measurements needed in the recovery of signals is far smaller than what the Nyquist sampling theorem requires, provided that the signal is sparse, meaning that it only contains a few non-zero elements. There are different methods of signal recovery in compressed sensing, including basis pursuit, the expander recovery algorithm, CoSaMP, and fast non-iterative algorithms. In all of the recovery methods mentioned above, choosing an appropriate measurement matrix, using probabilistic or deterministic constructions, is of great importance. In other words, measurement matrices must satisfy certain specific conditions such as the RIP (Restricted Isometry Property) or the null-space property in order to achieve robust sparse recovery.
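As a hedged illustration of basis pursuit, the sketch below recovers a sparse vector from random Gaussian measurements by rewriting the $\ell_1$ problem as a linear program; the problem sizes are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: recover a k-sparse x from m < n linear measurements b = A x
# by solving  min ||x||_1  s.t.  A x = b,  as an LP with x = xp - xn, xp, xn >= 0.
rng = np.random.default_rng(1)
n, m, k = 40, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true

c = np.ones(2 * n)                        # minimize sum(xp) + sum(xn) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x_true, atol=1e-4))  # typically True when k << m
```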
== Mathematics ==
=== P(H1|y) > P(H2|y) / MAP testing ===
In the case of making a decision between two hypotheses, H1, absent, and H2, present, in the event of a particular observation, y, a classical approach is to choose H1 when p(H1|y) > p(H2|y) and H2 in the reverse case. In the event that the two a posteriori probabilities are equal, one might choose to default to a single choice (either always choose H1 or always choose H2), or might randomly select either H1 or H2. The a priori probabilities of H1 and H2 can guide this choice, e.g. by always choosing the hypothesis with the higher a priori probability.
When taking this approach, usually what one knows are the conditional probabilities, $p(y|H1)$ and $p(y|H2)$, and the a priori probabilities $p(H1) = \pi_1$ and $p(H2) = \pi_2$. In this case,
$$p(H1|y) = \frac{p(y|H1)\cdot\pi_1}{p(y)}, \qquad p(H2|y) = \frac{p(y|H2)\cdot\pi_2}{p(y)},$$
where $p(y)$ is the total probability of event $y$,
$$p(y|H1)\cdot\pi_1 + p(y|H2)\cdot\pi_2.$$
H2 is chosen in case
$$\frac{p(y|H2)\cdot\pi_2}{p(y|H1)\cdot\pi_1 + p(y|H2)\cdot\pi_2} \geq \frac{p(y|H1)\cdot\pi_1}{p(y|H1)\cdot\pi_1 + p(y|H2)\cdot\pi_2} \;\Rightarrow\; \frac{p(y|H2)}{p(y|H1)} \geq \frac{\pi_1}{\pi_2}$$
and H1 otherwise.
Often, the ratio $\frac{\pi_1}{\pi_2}$ is called $\tau_{MAP}$ and $\frac{p(y|H2)}{p(y|H1)}$ is called $L(y)$, the likelihood ratio.
Using this terminology, H2 is chosen in case $L(y) \geq \tau_{MAP}$. This is called MAP testing, where MAP stands for "maximum a posteriori".
Taking this approach minimizes the expected number of errors one will make.
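A minimal sketch of the MAP rule for two Gaussian hypotheses (the likelihood functions are illustrative assumptions):

```python
from scipy.stats import norm

def map_decide(y, pi1, pi2, like1, like2):
    """Choose H2 when L(y) = p(y|H2)/p(y|H1) >= tau_MAP = pi1/pi2, else H1."""
    return 'H2' if like2(y) / like1(y) >= pi1 / pi2 else 'H1'

# Toy example: H1 noise-only, y ~ N(0, 1); H2 signal-plus-noise, y ~ N(1, 1).
like1 = lambda y: norm.pdf(y, loc=0.0)
like2 = lambda y: norm.pdf(y, loc=1.0)
print(map_decide(0.9, pi1=0.5, pi2=0.5, like1=like1, like2=like2))  # 'H2'
```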
=== Bayes criterion ===
In some cases, it is far more important to respond appropriately to H1 than it is to respond appropriately to H2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a nuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect a false alarm (i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). The Bayes criterion is an approach suitable for such cases.
Here a utility is associated with each of four situations:
$U_{11}$: One responds with behavior appropriate to H1 and H1 is true: fighters destroy bomber, incurring fuel, maintenance, and weapons costs, take risk of some being shot down;
$U_{12}$: One responds with behavior appropriate to H1 and H2 is true: fighters sent out, incurring fuel and maintenance costs, bomber location remains unknown;
$U_{21}$: One responds with behavior appropriate to H2 and H1 is true: city destroyed;
$U_{22}$: One responds with behavior appropriate to H2 and H2 is true: fighters stay home, bomber location remains unknown.
As is shown below, what is important are the differences, $U_{11} - U_{21}$ and $U_{22} - U_{12}$.
Similarly, there are four probabilities, $P_{11}$, $P_{12}$, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility:
$$E\{U\} = P_{11}\cdot U_{11} + P_{21}\cdot U_{21} + P_{12}\cdot U_{12} + P_{22}\cdot U_{22}$$
$$E\{U\} = P_{11}\cdot U_{11} + (1 - P_{11})\cdot U_{21} + P_{12}\cdot U_{12} + (1 - P_{12})\cdot U_{22}$$
$$E\{U\} = U_{21} + U_{22} + P_{11}\cdot (U_{11} - U_{21}) - P_{12}\cdot (U_{22} - U_{12})$$
Effectively, one may maximize the sum,
$$U' = P_{11}\cdot (U_{11} - U_{21}) - P_{12}\cdot (U_{22} - U_{12}),$$
and make the following substitutions:
$$P_{11} = \pi_1 \cdot \int_{R_1} p(y|H_1)\,dy$$
$$P_{12} = \pi_2 \cdot \int_{R_1} p(y|H_2)\,dy$$
where $\pi_1$ and $\pi_2$ are the a priori probabilities, $P(H_1)$ and $P(H_2)$, and $R_1$ is the region of observation events, $y$, that are responded to as though H1 is true.
$$\Rightarrow U' = \int_{R_1} \left\{\pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H_1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H_2)\right\}\,dy$$
$U'$ and thus $U$ are maximized by extending $R_1$ over the region where
$$\pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H_1) - \pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H_2) > 0$$
This is accomplished by deciding H2 in case
$$\pi_2 \cdot (U_{22} - U_{12}) \cdot p(y|H_2) \geq \pi_1 \cdot (U_{11} - U_{21}) \cdot p(y|H_1)$$
$$\Rightarrow L(y) \equiv \frac{p(y|H_2)}{p(y|H_1)} \geq \frac{\pi_1 \cdot (U_{11} - U_{21})}{\pi_2 \cdot (U_{22} - U_{12})} \equiv \tau_B$$
and H1 otherwise, where L(y) is the so-defined likelihood ratio.
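Relative to MAP testing, only the threshold changes, so a sketch needs just a few extra lines; it reuses the assumed likelihoods and priors from the MAP sketch above, and the utility values below are made-up numbers for illustration.

```python
# Bayes-criterion threshold: responding to H1 (send fighters) when H1 is
# true is far better than losing a city, so tau_B comes out large and the
# H1 region is correspondingly wide.
U11, U21 = -1.0, -100.0   # H1 true: sortie costs vs. city destroyed
U22, U12 = 0.0, -1.0      # H2 true: stay home vs. wasted sortie

def bayes_decision(y):
    """Return 'H2' if L(y) >= tau_B, else 'H1' (Bayes criterion)."""
    L = p_y_given_h2(y) / p_y_given_h1(y)
    tau_b = (pi1 * (U11 - U21)) / (pi2 * (U22 - U12))
    return "H2" if L >= tau_b else "H1"
```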
=== Normal distribution models ===
Das and Geisler extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate and confusion matrix for ideal observers and non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
== See also ==
== References ==
=== Bibliography ===
Coren, S., Ward, L.M., Enns, J. T. (1994) Sensation and Perception. (4th Ed.) Toronto: Harcourt Brace.
Kay, SM. Fundamentals of Statistical Signal Processing: Detection Theory (ISBN 0-13-504135-X)
McNichol, D. (1972) A Primer of Signal Detection Theory. London: George Allen & Unwin.
Van Trees HL. Detection, Estimation, and Modulation Theory, Part 1 (ISBN 0-471-09517-6; website)
Wickens, Thomas D., (2002) Elementary Signal Detection Theory. New York: Oxford University Press. (ISBN 0-19-509250-3)
== External links ==
A Description of Signal Detection Theory
An application of SDT to safety
Signal Detection Theory by Garrett Neske, The Wolfram Demonstrations Project
Lecture by Steven Pinker | Wikipedia/Detection_theory |
Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network.
Active network architecture is composed of execution environments (similar to a Unix shell that can execute active packets), a node operating system capable of supporting one or more execution environments, and active hardware capable of routing or switching as well as executing code within active packets.
This differs from the traditional network architecture which seeks robustness and stability by attempting to remove complexity and the ability to change its fundamental operation from underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks.
== What does it offer? ==
Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation.
This enables such ideas as sending code along with packets of information allowing the data to change its form (code) to match the channel characteristics.
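To make the capsule idea concrete, here is a minimal Python sketch of a packet that carries code which a node executes on its own payload; the class and function names are hypothetical, and real active-network systems add sandboxing and resource limits that this toy omits.

```python
from dataclasses import dataclass

@dataclass
class ActivePacket:
    # Hypothetical capsule: the payload travels together with the code
    # that transforms it (e.g., to match channel characteristics).
    code: str        # source of a function transform(payload) -> payload
    payload: bytes

def process_at_node(packet: ActivePacket) -> bytes:
    """Execute the packet's code on its payload (no sandboxing here)."""
    scope: dict = {}
    exec(packet.code, scope)              # defines transform() in scope
    return scope["transform"](packet.payload)

pkt = ActivePacket(
    code="import zlib\ndef transform(p):\n    return zlib.compress(p)",
    payload=b"example data " * 100,
)
print(len(process_at_node(pkt)))  # payload re-encoded in transit
```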
The notion of the smallest program that can generate a given sequence of data is formalized by Kolmogorov complexity.
The use of real-time genetic algorithms within the network to compose network services is also enabled by active networking.
== How it relates to other networking paradigms ==
Active networking relates to other networking paradigms primarily based upon how computing and communication are partitioned in the architecture.
=== Active networking and software-defined networking ===
Active networking is an approach to network architecture with in-network programmability. The name derives from a comparison with network approaches advocating minimization of in-network processing, based on design advice such as the "end-to-end argument". Two major approaches were conceived: programmable network elements ("switches") and capsules, a programmability approach that places computation within packets traveling through the network. Treating packets as programs later became known as "active packets". Software-defined networking decouples the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The concept of a programmable control plane originated at the University of Cambridge in the Systems Research Group, where (using virtual circuit identifiers available in Asynchronous Transfer Mode switches) multiple virtual control planes were made available on a single physical switch. Control Plane Technologies (CPT) was founded to commercialize this concept.
== Fundamental challenges ==
Active network research addresses the nature of how best to incorporate extremely dynamic capability within networks.
In order to do this, active network research must address the problem of optimally allocating computation versus communication within communication networks. A similar problem related to the compression of code as a measure of complexity is addressed via algorithmic information theory.
One of the challenges of active networking has been the inability of information theory to mathematically model the active network paradigm and enable active network engineering. This is due to the active nature of the network in which communication packets contain code that dynamically change the operation of the network. Fundamental advances in information theory are required in order to understand such networks.
== Nanoscale active networks ==
As the limit in reduction of transistor size is reached with current technology, active networking concepts are being explored as a more efficient means of accomplishing computation and communication. More on this can be found in nanoscale networking.
== See also ==
Nanoscale networking
Network processing
Software-defined networking (SDN)
Communication complexity
Kolmogorov complexity
== References ==
== Further reading ==
Towards an Active Network Architecture (1996), David L. Tennenhouse, et al., Computer Communication Review
Active Networks and Active Network Management: A Proactive Management Framework by Stephen F. Bush and Amit Kulkarni, Kluwer Academic/Plenum Publishers, New York, Boston, Dordrecht, London, Moscow, 2001, 196 pp. Hardbound, ISBN 0-306-46560-4.
"Programmable Networks for IP Service Deployment" by Galis, A., Denazis, S., Brou, C., Klein, C. – Artech House Books, London, June 2004, 450 pp. ISBN 1-58053-745-6.
== External links ==
Introduction to Active Networks (video) | Wikipedia/Active_networking |
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.
Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference.
== Introduction ==
Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.
Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following:
a point estimate, i.e. a particular value that best approximates some parameter of interest;
an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with the probability at the stated confidence level;
a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
rejection of a hypothesis;
clustering or classification of data points into groups.
== Models and assumptions ==
Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.
=== Degree of models/assumptions ===
Statisticians distinguish between three levels of modeling assumptions:
Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models.
Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal. For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling.
Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions.
=== Importance of valid models/assumptions ===
Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified.
Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. Incorrect assumptions of Normality in the population also invalidate some forms of regression-based inference. The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.
==== Approximate distributions ====
Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.
With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance.
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
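As the last two paragraphs suggest, the quality of a limiting approximation for finite samples can itself be assessed by simulation. Below is a minimal Python sketch comparing the standardized sample-mean distribution for an exponential population against its normal limit; the population choice, sample sizes, and replication count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def max_cdf_gap(n, reps=2_000):
    """Estimate the Kolmogorov distance between the standardized
    sample-mean distribution and the standard normal limit."""
    # Exponential(1) population: mean 1, standard deviation 1.
    means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
    z = np.sort((means - 1.0) * np.sqrt(n))   # standardized sample means
    ecdf = np.arange(1, reps + 1) / reps      # empirical CDF at sorted z
    return np.max(np.abs(ecdf - norm.cdf(z)))

for n in (10, 100, 1_000):
    print(n, round(max_cdf_gap(n), 4))        # gap shrinks as n grows
```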
=== Randomization-based models ===
For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. Statistical inference from randomized studies is also more straightforward than many other situations. In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information.
Objective randomization allows properly inductive procedures. Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. However, a good observational study may be better than a bad randomized experiment.
The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model.
However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical.
==== Model-based analysis of randomized experiments ====
It is standard practice to refer to a statistical model, e.g., a linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units.
==== Model-free randomization inference ====
Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations.
For example, model-free simple linear regression is based either on:
a random design, where the pairs of observations $(X_1, Y_1), (X_2, Y_2), \cdots, (X_n, Y_n)$ are independent and identically distributed (iid),
or a deterministic design, where the variables $X_1, X_2, \cdots, X_n$ are deterministic, but the corresponding response variables $Y_1, Y_2, \cdots, Y_n$ are random and independent with a common conditional distribution, i.e., $P(Y_j \leq y \mid X_j = x) = D_x(y)$, which is independent of the index $j$.
In either case, the model-free randomization inference for features of the common conditional distribution $D_x(\cdot)$ relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, $\mu(x) = E(Y|X = x)$, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that $\mu(x)$ is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, $\mu(x)$.
== Paradigms for inference ==
Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.
Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm.
=== Frequentist inference ===
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.
==== Examples of frequentist inference ====
p-value
Confidence interval
Null hypothesis significance testing
==== Frequentist inference, objectivity, and decision theory ====
One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss-functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss.
While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.
=== Bayesian inference ===
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach.
==== Examples of Bayesian inference ====
Credible interval for interval estimation
Bayes factors for model comparison
==== Bayesian inference, subjectivity and decision theory ====
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.)
Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.
=== Likelihood-based inference ===
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as $L(x|\theta)$, which quantifies the probability of observing the given data $x$, assuming a specific set of parameter values $\theta$. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.
The process of likelihood-based inference usually involves the following steps:
Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects.
Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters.
Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as $\hat{\theta}$, are the maximum likelihood estimates (MLEs).
Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping.
Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics.
Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.
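As a minimal end-to-end illustration of these steps, the Python sketch below fits a normal model by maximizing the log-likelihood numerically; the data are simulated, and all specifics (model, sample size, starting values) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=500)    # step 1: assumed model N(mu, sigma^2)

def neg_log_likelihood(params):
    # Step 2: negative log-likelihood of the data (additive constant dropped).
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                    # reparameterize so sigma > 0
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * log_sigma

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])  # step 3: maximize likelihood
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)                         # close to x.mean(), x.std()
```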
=== AIC-based inference ===
The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
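A short sketch of how AIC ranks two candidate models for the same data, reusing the simulated sample and fitted normal from the sketch above; the restricted comparison model (mean fixed at 0) is an illustrative assumption.

```python
# AIC = 2k - 2 log(max likelihood); lower is better. The additive constant
# dropped from neg_log_likelihood is the same for both models, so it
# cancels in the comparison.
def aic(k, neg_log_lik):
    return 2 * k + 2 * neg_log_lik

full_nll = neg_log_likelihood(res.x)                        # k = 2: mu, sigma
fixed_sigma = np.sqrt(np.mean(x ** 2))                      # sigma MLE when mu = 0
fixed_nll = neg_log_likelihood([0.0, np.log(fixed_sigma)])  # k = 1
print(aic(2, full_nll), aic(1, fixed_nll))                  # full model wins here
```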
=== Other paradigms for inference ===
==== Minimum description length ====
The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.
However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.
The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining.
The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.
==== Fiducial inference ====
Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. However this argument is the same as that which shows that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities.
==== Structural inference ====
Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference based on group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.
== Inference topics ==
The topics below are usually included in the area of statistical inference.
Statistical assumptions
Statistical decision theory
Estimation theory
Statistical hypothesis testing
Revising opinions in statistics
Design of experiments, the analysis of variance, and regression
Survey sampling
Summarizing statistical data
== Predictive inference ==
Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.
Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser.
== See also ==
Algorithmic inference
Induction (philosophy)
Informal inferential reasoning
Information field theory
Population proportion
Philosophy of statistics
Prediction interval
Predictive analytics
Predictive modelling
Stylometry
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Casella, G., Berger, R. L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6
Freedman, D.A. (1991). "Statistical models and shoe leather". Sociological Methodology. 21: 291–313. doi:10.2307/270939. JSTOR 270939.
Held L., Bové D.S. (2014). Applied Statistical Inference—Likelihood and Bayes (Springer).
Lenhard, Johannes (2006). "Models and Statistical Inference: the controversy between Fisher and Neyman–Pearson" (PDF). British Journal for the Philosophy of Science. 57: 69–91. doi:10.1093/bjps/axi152. S2CID 14136146.
Lindley, D (1958). "Fiducial distribution and Bayes' theorem". Journal of the Royal Statistical Society, Series B. 20: 102–7. doi:10.1111/j.2517-6161.1958.tb00278.x.
Rahlf, Thomas (2014). "Statistical Inference", in Claude Diebolt and Michael Haupert (eds.), Handbook of Cliometrics (Springer Reference Series), Berlin/Heidelberg: Springer.
Reid, N.; Cox, D. R. (2014). "On Some Principles of Statistical Inference". International Statistical Review. 83 (2): 293–308. doi:10.1111/insr.12067. hdl:10.1111/insr.12067. S2CID 17410547.
Sagitov, Serik (2022). "Statistical Inference". Wikibooks. http://upload.wikimedia.org/wikipedia/commons/f/f9/Statistical_Inference.pdf
Young, G.A., Smith, R.L. (2005). Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8
== External links ==
Statistical Inference – lecture on the MIT OpenCourseWare platform
Statistical Inference – lecture by the National Programme on Technology Enhanced Learning
An online, Bayesian (MCMC) demo/calculator is available at causaScientia | Wikipedia/Statistical_inference |
Asymmetric numeral systems (ANS) is a family of entropy encoding methods introduced by Jarosław (Jarek) Duda from Jagiellonian University, used in data compression since 2014 due to improved performance compared to previous methods. ANS combines the compression ratio of arithmetic coding (which uses a nearly accurate probability distribution), with a processing cost similar to that of Huffman coding. In the tabled ANS (tANS) variant, this is achieved by constructing a finite-state machine to operate on a large alphabet without using multiplication.
Among others, ANS is used in the Facebook Zstandard compressor (also used e.g. in the Linux kernel, Google Chrome browser, and Android operating system, and published as RFC 8478 for MIME and HTTP), Apple LZFSE compressor, Google Draco 3D compressor (used e.g. in Pixar Universal Scene Description format) and PIK image compressor, CRAM DNA compressor from SAMtools utilities,
NVIDIA nvCOMP high speed compression library,
Dropbox DivANS compressor, Microsoft DirectStorage BCPack texture compressor, and JPEG XL image compressor.
The basic idea is to encode information into a single natural number $x$. In the standard binary number system, we can add a bit $s \in \{0, 1\}$ of information to $x$ by appending $s$ at the end of $x$, which gives us $x' = 2x + s$. For an entropy coder, this is optimal if $\Pr(0) = \Pr(1) = 1/2$. ANS generalizes this process for arbitrary sets of symbols $s \in S$ with an accompanying probability distribution $(p_s)_{s \in S}$. In ANS, if the information from $s$ is appended to $x$ to result in $x'$, then $x' \approx x \cdot p_s^{-1}$. Equivalently, $\log_2(x') \approx \log_2(x) + \log_2(1/p_s)$, where $\log_2(x)$ is the number of bits of information stored in the number $x$, and $\log_2(1/p_s)$ is the number of bits contained in the symbol $s$.
For the encoding rule, the set of natural numbers is split into disjoint subsets corresponding to different symbols – like into even and odd numbers, but with densities corresponding to the probability distribution of the symbols to encode. Then to add information from symbol $s$ into the information already stored in the current number $x$, we go to number $x' = C(x, s) \approx x/p_s$, being the position of the $x$-th appearance from the $s$-th subset.
There are alternative ways to apply it in practice – direct mathematical formulas for encoding and decoding steps (uABS and rANS variants), or one can put the entire behavior into a table (tANS variant). Renormalization is used to prevent $x$ going to infinity – transferring accumulated bits to or from the bitstream.
== Entropy coding ==
Suppose a sequence of 1,000 zeros and ones would be encoded, which would take 1000 bits to store directly. However, if it is somehow known that it only contains 1 zero and 999 ones, it would be sufficient to encode the zero's position, which requires only $\lceil \log_2(1000) \rceil \approx 10$ bits here instead of the original 1000 bits.
Generally, such sequences of length $n$ containing $pn$ zeros and $(1-p)n$ ones, for some probability $p \in (0,1)$, are called combinations. Using Stirling's approximation we get their asymptotic number being
$${n \choose pn} \approx 2^{n h(p)} \text{ for large } n \text{ and } h(p) = -p\log_2(p) - (1-p)\log_2(1-p),$$
called Shannon entropy.
Hence, to choose one such sequence we need approximately $n h(p)$ bits. It is still $n$ bits if $p = 1/2$, however, it can also be much smaller. For example, we need only $\approx n/2$ bits for $p = 0.11$.
An entropy coder allows the encoding of a sequence of symbols using approximately the Shannon entropy bits per symbol. For example, ANS could be directly used to enumerate combinations: assign a different natural number to every sequence of symbols having fixed proportions in a nearly optimal way.
In contrast to encoding combinations, this probability distribution usually varies in data compressors. For this purpose, Shannon entropy can be seen as a weighted average: a symbol of probability $p$ contains $\log_2(1/p)$ bits of information. ANS encodes information into a single natural number $x$, interpreted as containing $\log_2(x)$ bits of information. Adding information from a symbol of probability $p$ increases this informational content to $\log_2(x) + \log_2(1/p) = \log_2(x/p)$. Hence, the new number containing both information should be $x' \approx x/p$.
=== Motivating examples ===
Consider a source with 3 letters A, B, C, with probabilities 1/2, 1/4, 1/4. It is simple to construct the optimal prefix code in binary: A = 0, B = 10, C = 11. Then, a message is encoded as ABC -> 01011.
We see that an equivalent method for performing the encoding is as follows:
Start with number 1, and perform an operation on the number for each input letter.
A = multiply by 2; B = multiply by 4, add 2; C = multiply by 4, add 3.
Express the number in binary, then remove the first digit 1.
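A minimal Python sketch of this equivalent procedure, checked against the ABC -> 01011 example above:

```python
def encode(message):
    """Encode A/B/C (probabilities 1/2, 1/4, 1/4) into one number."""
    x = 1
    for letter in message:
        if letter == "A":
            x = 2 * x            # append prefix code 0
        elif letter == "B":
            x = 4 * x + 2        # append prefix code 10
        else:                    # "C"
            x = 4 * x + 3        # append prefix code 11
    return bin(x)[3:]            # binary expansion, first digit 1 removed

print(encode("ABC"))  # -> '01011'
```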
Consider a more general source with $k$ letters, with rational probabilities $n_1/N, \ldots, n_k/N$. Then performing arithmetic coding on the source requires only exact arithmetic with integers.
In general, ANS is an approximation of arithmetic coding that approximates the real probabilities $r_1, \ldots, r_k$ by rational numbers $n_1/N, \ldots, n_k/N$ with a small denominator $N$.
== Basic concepts of ANS ==
Imagine there is some information stored in a natural number $x$, for example as the bit sequence of its binary expansion. To add information from a binary variable $s$, we can use the coding function $x' = C(x, s) = 2x + s$, which shifts all bits one position up and places the new bit in the least significant position. Now the decoding function $D(x') = (\lfloor x'/2 \rfloor, \mathrm{mod}(x', 2))$ allows one to retrieve the previous $x$ and this added bit: $D(C(x, s)) = (x, s)$, $C(D(x')) = x'$. We can start with the $x = 1$ initial state, then use the $C$ function on the successive bits of a finite bit sequence to obtain a final $x$ number storing this entire sequence. Then using the $D$ function multiple times until $x = 1$ allows one to retrieve the bit sequence in reversed order.
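A minimal Python sketch of this stack-like behavior for the uniform binary case:

```python
def C(x, s):              # append bit s: x' = 2x + s
    return 2 * x + s

def D(x):                 # retrieve previous state and last bit
    return x // 2, x % 2

x = 1
for s in [1, 0, 1, 1]:    # encode a bit sequence into one number
    x = C(x, s)

bits = []
while x != 1:             # decoding pops bits in reversed order
    x, s = D(x)
    bits.append(s)
print(bits[::-1])         # -> [1, 0, 1, 1]
```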
The above procedure is optimal for the uniform (symmetric) probability distribution of symbols $\Pr(0) = \Pr(1) = 1/2$. ANS generalizes it to make it optimal for any chosen (asymmetric) probability distribution of symbols: $\Pr(s) = p_s$. While $s$ in the above example was choosing between even and odd $C(x, s)$, in ANS this even/odd division of natural numbers is replaced with division into subsets having densities corresponding to the assumed probability distribution $\{p_s\}_s$: up to position $x$, there are approximately $x p_s$ occurrences of symbol $s$.
The coding function $C(x, s)$ returns the $x$-th appearance from such subset corresponding to symbol $s$. The density assumption is equivalent to the condition $x' = C(x, s) \approx x/p_s$. Assuming that a natural number $x$ contains $\log_2(x)$ bits of information, $\log_2(C(x, s)) \approx \log_2(x) + \log_2(1/p_s)$. Hence the symbol of probability $p_s$ is encoded as containing $\approx \log_2(1/p_s)$ bits of information, as is required from entropy coders.
== Variants ==
=== Uniform binary variant (uABS) ===
Let us start with the binary alphabet and a probability distribution $\Pr(1) = p$, $\Pr(0) = 1 - p$. Up to position $x$ we want approximately $p \cdot x$ analogues of odd numbers (for $s = 1$). We can choose this number of appearances as $\lceil x \cdot p \rceil$, getting $s = \lceil (x+1) \cdot p \rceil - \lceil x \cdot p \rceil$. This variant is called uABS and leads to the following decoding and encoding functions:
Decoding: $s = \lceil (x+1)\cdot p \rceil - \lceil x\cdot p \rceil$; then $x \leftarrow x - \lceil x\cdot p \rceil$ if $s = 0$, and $x \leftarrow \lceil x\cdot p \rceil$ if $s = 1$.
Encoding: $x \leftarrow \lceil (x+1)/(1-p) \rceil - 1$ for $s = 0$, and $x \leftarrow \lfloor x/p \rfloor$ for $s = 1$.
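A minimal Python sketch of these rules; the Fraction type avoids floating-point rounding in the ceilings, and the final assertions reproduce the '0100' walkthrough given below.

```python
from fractions import Fraction
from math import ceil, floor

p = Fraction(3, 10)   # Pr(1) = 0.3, as in the table discussed below

def encode_bit(x, s):
    if s == 1:
        return floor(x / p)              # C(x, 1)
    return ceil((x + 1) / (1 - p)) - 1   # C(x, 0)

def decode_bit(x):
    s = ceil((x + 1) * p) - ceil(x * p)  # which subset does x fall in?
    x = ceil(x * p) if s == 1 else x - ceil(x * p)
    return x, s

x = 1
for s in [0, 1, 0, 0]:
    x = encode_bit(x, s)
assert x == 14                            # matches the worked example below

out = []
while x != 1:
    x, s = decode_bit(x)
    out.append(s)
assert out[::-1] == [0, 1, 0, 0]
```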
For $p = 1/2$ it amounts to the standard binary system (with 0 and 1 inverted), for a different $p$ it becomes optimal for this given probability distribution. For example, for $p = 0.3$ these formulas lead to a table for small values of $x$:
The symbol $s = 1$ corresponds to a subset of natural numbers with density $p = 0.3$, which in this case are positions $\{0, 3, 6, 10, 13, 16, 20, 23, 26, \ldots\}$. As $1/4 < 0.3 < 1/3$, these positions increase by 3 or 4. Because $p = 3/10$ here, the pattern of symbols repeats every 10 positions.
The coding $C(x, s)$ can be found by taking the row corresponding to a given symbol $s$, and choosing the given $x$ in this row. Then the top row provides $C(x, s)$. For example, $C(7, 0) = 11$ from the middle to the top row.
Imagine we would like to encode the sequence '0100' starting from $x = 1$. First $s = 0$ takes us to $x = 2$, then $s = 1$ to $x = 6$, then $s = 0$ to $x = 9$, then $s = 0$ to $x = 14$. By using the decoding function $D(x')$ on this final $x$, we can retrieve the symbol sequence. Using the table for this purpose, $x$ in the first row determines the column, then the non-empty row and the written value determine the corresponding $s$ and $x$.
=== Range variants (rANS) and streaming ===
The range variant also uses arithmetic formulas, but allows operation on a large alphabet. Intuitively, it divides the set of natural numbers into ranges of size $2^n$, and splits each of them in an identical way into subranges of proportions given by the assumed probability distribution.
We start with quantization of the probability distribution to a $2^n$ denominator, where $n$ is chosen (usually 8–12 bits): $p_s \approx f[s]/2^n$ for some natural numbers $f[s]$ (sizes of subranges).
Denote $\text{mask} = 2^n - 1$, and a cumulative distribution function:
$$\operatorname{CDF}[s] = \sum_{i<s} f[i] = f[0] + \cdots + f[s-1].$$
Note here that the CDF[s] function is not a true CDF in that the current symbol's probability is not included in the expression's value. Instead, CDF[s] represents the total probability of all previous symbols. Example: instead of the normal definition of CDF[0] = f[0], it is evaluated as CDF[0] = 0, since there are no previous symbols.
For $y \in [0, 2^n - 1]$, denote the function (usually tabled) $\text{symbol}(y) = s$ such that $\operatorname{CDF}[s] \leq y < \operatorname{CDF}[s+1]$.
Now the coding function is $C(x, s) = \lfloor x/f[s] \rfloor \cdot 2^n + (x \bmod f[s]) + \operatorname{CDF}[s]$.
Decoding: $s = \text{symbol}(x \mathbin{\&} \text{mask})$, then $x \leftarrow f[s] \cdot (x \gg n) + (x \mathbin{\&} \text{mask}) - \operatorname{CDF}[s]$.
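A minimal Python sketch of these two steps (without streaming), checking that decoding inverts encoding; the frequencies and the $n = 4$ quantization are illustrative assumptions.

```python
n = 4                        # quantize probabilities to denominator 2^n = 16
f = [8, 4, 4]                # assumed frequencies f[s] for symbols 0, 1, 2
CDF = [0, 8, 12]             # CDF[s] = f[0] + ... + f[s-1]
mask = (1 << n) - 1

def symbol(y):               # inverse of CDF: the s whose subrange holds y
    return max(s for s in range(len(f)) if CDF[s] <= y)

def encode_step(x, s):       # C(x, s)
    return ((x // f[s]) << n) + (x % f[s]) + CDF[s]

def decode_step(x):          # returns (previous x, decoded s)
    s = symbol(x & mask)
    return f[s] * (x >> n) + (x & mask) - CDF[s], s

x0 = 1 << n                  # start above 2^n to avoid degenerate small states
x = x0
for s in [0, 2, 1, 0]:
    x = encode_step(x, s)

out = []
while x != x0:
    x, s = decode_step(x)
    out.append(s)
print(out[::-1])             # -> [0, 2, 1, 0]
```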
This way we can encode a sequence of symbols into a large natural number $x$. To avoid using large number arithmetic, in practice stream variants are used, which enforce $x \in [L, b \cdot L - 1]$ by renormalization: sending the least significant bits of $x$ to or from the bitstream (usually $L$ and $b$ are powers of 2).
In the rANS variant $x$ is, for example, 32 bit. For 16-bit renormalization, $x \in [2^{16}, 2^{32} - 1]$, the decoder refills the least significant bits from the bitstream when needed.
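A sketch of that decoder-side refill step, under the 32-bit state and 16-bit renormalization assumptions above (the bitstream callback is hypothetical):

```python
L = 1 << 16                      # lower bound of the normalized interval

def renorm_decode(x, read16):
    """While the state drops below L, pull 16 bits from the stream.
    read16() is a hypothetical callback returning the next 16-bit word."""
    while x < L:
        x = (x << 16) | read16()
    return x
```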
=== Tabled variant (tANS) ===
The tANS variant puts the entire behavior (including renormalization) for $x \in [L, 2L - 1]$ into a table, which yields a finite-state machine avoiding the need for multiplication.
Finally, the steps of the decoding and encoding loops reduce to a table lookup plus transferring a few bits to or from the bitstream.
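A toy, self-contained Python sketch of the table construction and both loop steps, using the "abdacdac" symbol assignment discussed in the next paragraph; the table layout and renormalization rule follow one common tANS construction and are a simplified illustration, not the only possible design.

```python
L = 8
spread = "abdacdac"                       # symbol assigned to states L..2L-1
f = {s: spread.count(s) for s in spread}  # a:3, b:1, c:2, d:2

# Decoding table: state x in [L, 2L) decodes spread[x - L]; its successor
# "compact" state x' runs through [f[s], 2f[s]) in order of appearance,
# and nbits low bits are refilled to return to [L, 2L).
occ = dict(f)
decode_table = {}
for x in range(L, 2 * L):
    s = spread[x - L]
    xp = occ[s]; occ[s] += 1
    nbits = L.bit_length() - xp.bit_length()
    decode_table[x] = (s, nbits, xp << nbits)

# Encoding table is the inverse map: (symbol, compact state) -> state.
encode_table = {(s, b >> nb): x for x, (s, nb, b) in decode_table.items()}

def encode(message, x=L):
    bits = ""
    for s in reversed(message):           # ANS encodes in reverse order
        while x >= 2 * f[s]:              # renormalize: push low bits out
            bits = str(x & 1) + bits
            x >>= 1
        x = encode_table[(s, x)]
    return x, bits                        # final state starts the decoder

def decode(x, bits, count):
    out = []
    for _ in range(count):
        s, nbits, base = decode_table[x]
        out.append(s)
        refill, bits = bits[:nbits], bits[nbits:]
        x = base + int(refill or "0", 2)
    return "".join(out)

state, bits = encode("abc")
print(decode(state, bits, 3))             # -> 'abc'
```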
A specific tANS coding is determined by assigning a symbol to every position in $[L, 2L - 1]$; the numbers of appearances should be proportional to the assumed probabilities. For example, one could choose the "abdacdac" assignment for the Pr(a) = 3/8, Pr(b) = 1/8, Pr(c) = 2/8, Pr(d) = 2/8 probability distribution. If symbols are assigned in ranges of lengths being powers of 2, we would get Huffman coding. For example, the a->0, b->100, c->101, d->11 prefix code would be obtained for tANS with the "aaaabcdd" symbol assignment.
== Remarks ==
As for Huffman coding, modifying the probability distribution of tANS is relatively costly, hence it is mainly used in static situations, usually with some Lempel–Ziv scheme (e.g. ZSTD, LZFSE). In this case, the file is divided into blocks - for each of them symbol frequencies are independently counted, then after approximation (quantization) written in the block header and used as static probability distribution for tANS.
In contrast, rANS is usually used as a faster replacement for range coding (e.g. CRAM, LZNA, Draco). It requires multiplication, but is more memory efficient and is appropriate for dynamically adapting probability distributions.
Encoding and decoding of ANS are performed in opposite directions, making it a stack for symbols. This inconvenience is usually resolved by encoding in backward direction, after which decoding can be done forward. For context-dependence, like Markov model, the encoder needs to use context from the perspective of later decoding. For adaptivity, the encoder should first go forward to find probabilities which will be used (predicted) by decoder and store them in a buffer, then encode in backward direction using the buffered probabilities.
The final state of encoding is required to start decoding, hence it needs to be stored in the compressed file. This cost can be compensated by storing some information in the initial state of encoder. For example, instead of starting with "10000" state, start with "1****" state, where "*" are some additional stored bits, which can be retrieved at the end of the decoding. Alternatively, this state can be used as a checksum by starting encoding with a fixed state, and testing if the final state of decoding is the expected one.
== Patent controversy ==
The author of the novel ANS algorithm and its variants tANS and rANS specifically intended his work to be available freely in the public domain, for altruistic reasons. He has not sought to profit from them and took steps to ensure they would not become a "legal minefield", or restricted by, or profited from by others. In 2015, Google published a US and then worldwide patent application for "Mixed boolean-token ans coefficient coding". At the time, Google had asked Professor Duda to help it with video compression, so it was intimately aware of this domain, having the original author assisting it.
Duda was not pleased by (accidentally) discovering Google's patent intentions, given he had been clear he wanted it as public domain, and had assisted Google specifically on that basis. Duda subsequently filed a third-party application to the US Patent office seeking a rejection. The USPTO rejected its application in 2018, and Google subsequently abandoned the patent.
In June 2019 Microsoft lodged a patent application called "Features of range asymmetric number system encoding and decoding". The USPTO issued a final rejection of the application on October 27, 2020. Yet on March 2, 2021, Microsoft gave a USPTO explanatory filing stating "The Applicant respectfully disagrees with the rejections.", seeking to overturn the final rejection under the "After Final Consideration Pilot 2.0" program. After reconsideration, the USPTO granted the application on January 25, 2022.
== See also ==
Entropy encoding
Huffman coding
Arithmetic coding
Range encoding
Zstandard Facebook compressor
LZFSE Apple compressor
== References ==
== External links ==
High throughput hardware architectures for asymmetric numeral systems entropy coding S. M. Najmabadi, Z. Wang, Y. Baroud, S. Simon, ISPA 2015
New Generation Entropy coders Finite state entropy (FSE) implementation of tANS by Yann Collet
rygorous/ryg_rans Implementation of rANS by Fabian Giesen
jkbonfield/rans_static Fast implementation of rANS and arithmetic coding by James K. Bonfield
facebook/zstd Facebook Zstandard compressor by Yann Collet (author of LZ4)
LZFSE LZFSE compressor (LZ+FSE) of Apple Inc.
CRAM 3.0 DNA compressor (order 1 rANS) (part of SAMtools) by European Bioinformatics Institute
[1] implementation for Google VP10
[2] implementation for Google WebP
[3] Google Draco 3D compression library
aom_dsp - aom - Git at Google implementation of Alliance for Open Media
Data Compression Using Asymmetric Numeral Systems - Wolfram Demonstrations Project Wolfram Demonstrations Project
GST: GPU-decodable Supercompressed Textures GST: GPU-decodable Supercompressed Textures
Understanding compression book by A. Haecky, C. McAnlis | Wikipedia/Asymmetric_numeral_systems |
In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his Ars Conjectandi (1713).
The mathematical formalization and advanced formulation of the Bernoulli trial is known as the Bernoulli process.
Since a Bernoulli trial has only two possible outcomes, it can be framed as a "yes or no" question. For example:
Is the top card of a shuffled deck an ace?
Was the newborn child a girl? (See human sex ratio.)
Success and failure are in this context labels for the two outcomes, and should not be construed literally or as value judgments. More generally, given any probability space, for any event (set of outcomes), one can define a Bernoulli trial according to whether the event occurred or not (event or complementary event). Examples of Bernoulli trials include:
Flipping a coin. In this context, obverse ("heads") conventionally denotes success and reverse ("tails") denotes failure. A fair coin has the probability of success 0.5 by definition. In this case, there are exactly two possible outcomes.
Rolling a die, where a six is "success" and everything else a "failure". In this case, there are six possible outcomes, and the event is a six; the complementary event "not a six" corresponds to the other five possible outcomes.
In conducting a political opinion poll, choosing a voter at random to ascertain whether that voter will vote "yes" in an upcoming referendum.
== Preliminary ==
Suppose there exists an experiment consisting of independently repeated trials, each of which has only two possible outcomes; these are called experimental Bernoulli trials. The collection of $n$ experimental realizations of success (1) and failure (0) can be described by a Bernoulli random variable $X$ taking the value $x = 1$ with probability $p$ and $x = 0$ with probability $1 - p$, where $p$ may be estimated empirically as the total number of successes divided by $n$.
Let $p$ be the probability of success in a Bernoulli trial, and $q$ be the probability of failure. Then the probability of success and the probability of failure sum to one, since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. Thus, one has the following relations:
$$p = 1 - q, \qquad q = 1 - p, \qquad p + q = 1.$$
Alternatively, these can be stated in terms of odds: given probability $p$ of success and $q$ of failure, the odds for are $p:q$ and the odds against are $q:p$.
These can also be expressed as numbers, by dividing, yielding the odds for, o_f, and the odds against, o_a:
{\displaystyle {\begin{aligned}o_{f}&=p/q=p/(1-p)=(1-q)/q\\o_{a}&=q/p=(1-p)/p=q/(1-q).\end{aligned}}}
These are multiplicative inverses, so they multiply to 1, with the following relations:
{\displaystyle o_{f}=1/o_{a},\quad o_{a}=1/o_{f},\quad o_{f}\cdot o_{a}=1.}
In the case that a Bernoulli trial is representing an event from finitely many equally likely outcomes, where S of the outcomes are success and F of the outcomes are failure, the odds for are S : F and the odds against are F : S.
This yields the following formulas for probability and odds:
{\displaystyle {\begin{aligned}p&=S/(S+F)\\q&=F/(S+F)\\o_{f}&=S/F\\o_{a}&=F/S.\end{aligned}}}
Here the odds are computed by dividing the number of outcomes, not the probabilities, but the proportion is the same, since these ratios only differ by multiplying both terms by the same constant factor.
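These conversions are easy to check numerically. Below is a minimal Python sketch (the helper names odds_for and odds_against are illustrative, not from any standard library), using exact fractions so the identities above hold without rounding:
from fractions import Fraction

def odds_for(p):
    # o_f = p / q = p / (1 - p)
    return p / (1 - p)

def odds_against(p):
    # o_a = q / p = (1 - p) / p
    return (1 - p) / p

p = Fraction(1, 6)          # e.g., rolling a six with a fair die
o_f, o_a = odds_for(p), odds_against(p)
print(o_f, o_a, o_f * o_a)  # 1/5, 5, 1 -- the two odds multiply to 1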
Random variables describing Bernoulli trials are often encoded using the convention that 1 = "success", 0 = "failure".
Closely related to a Bernoulli trial is a binomial experiment, which consists of a fixed number n of statistically independent Bernoulli trials, each with a probability of success p, and counts the number of successes. A random variable corresponding to a binomial experiment is denoted by B(n, p), and is said to have a binomial distribution.
The probability of exactly k successes in the experiment B(n, p) is given by:
{\displaystyle P(k)={n \choose k}p^{k}q^{n-k}}
where {\displaystyle {n \choose k}} is a binomial coefficient.
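This probability mass function can be evaluated directly; the following is a minimal Python sketch (the helper name binomial_pmf is illustrative; math.comb is the standard-library binomial coefficient):
from math import comb

def binomial_pmf(k, n, p):
    # P(k) = C(n, k) * p^k * q^(n-k), with q = 1 - p
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Distribution of successes for B(4, 0.5), e.g., four fair coin tosses:
for k in range(5):
    print(k, binomial_pmf(k, 4, 0.5))
# k = 2 gives 0.375 = 3/8, matching the worked coin-tossing example below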
Bernoulli trials may also lead to negative binomial distributions (which count the number of successes in a series of repeated Bernoulli trials until a specified number of failures are seen), as well as various other distributions.
When multiple Bernoulli trials are performed, each with its own probability of success, these are sometimes referred to as Poisson trials.
== Examples ==
=== Tossing coins ===
Consider the simple experiment where a fair coin is tossed four times. Find the probability that exactly two of the tosses result in heads.
==== Solution ====
For this experiment, let heads be defined as a success and tails as a failure. Because the coin is assumed to be fair, the probability of success is {\displaystyle p={\tfrac {1}{2}}}. Thus, the probability of failure, q, is given by {\displaystyle q=1-p=1-{\tfrac {1}{2}}={\tfrac {1}{2}}}.
Using the equation above, the probability of exactly two tosses out of four total tosses resulting in heads is given by:
{\displaystyle {\begin{aligned}P(2)&={4 \choose 2}p^{2}q^{4-2}\\&=6\times \left({\tfrac {1}{2}}\right)^{2}\times \left({\tfrac {1}{2}}\right)^{2}\\&={\dfrac {3}{8}}.\end{aligned}}}
=== Rolling dice ===
What is the probability that, when three independent fair six-sided dice are rolled, exactly two yield sixes?
==== Solution ====
On one die, the probability of rolling a six is {\displaystyle p={\tfrac {1}{6}}}. Thus, the probability of not rolling a six is {\displaystyle q=1-p={\tfrac {5}{6}}}.
As above, the probability of exactly two sixes out of three is:
{\displaystyle {\begin{aligned}P(2)&={3 \choose 2}p^{2}q^{3-2}\\&=3\times \left({\tfrac {1}{6}}\right)^{2}\times \left({\tfrac {5}{6}}\right)^{1}\\&={\dfrac {5}{72}}\approx 0.069.\end{aligned}}}
== See also ==
Bernoulli scheme
Bernoulli sampling
Bernoulli distribution
Binomial distribution
Binomial coefficient
Binomial proportion confidence interval
Poisson sampling
Sampling design
Coin flipping
Jacob Bernoulli
Fisher's exact test
Boschloo's test
== References ==
== External links ==
"Bernoulli trials", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Simulation of n Bernoulli trials". math.uah.edu. Retrieved 2025-03-16. | Wikipedia/Bernoulli_trial |
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time.
The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second.
In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds to 8 bit/s.
1 byte = 8 bits. However, if stop bits, start bits, and parity bits need to be factored in, a higher number of bits per second will be required to achieve a throughput of the same number of bytes.
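For instance, with the common asynchronous serial framing of one start bit, eight data bits, and one stop bit (8N1; an assumed example, not stated above), each byte costs 10 transmitted bits. A minimal sketch of the overhead calculation (illustrative helper name):
def gross_bits_per_byte(data_bits=8, start_bits=1, stop_bits=1, parity_bits=0):
    # Total line bits needed to carry one byte of payload
    return data_bits + start_bits + stop_bits + parity_bits

# To move 11,520 bytes/s over an 8N1 serial link:
byte_rate = 11_520
required_bit_rate = byte_rate * gross_bits_per_byte()
print(required_bit_rate)  # 115200 bit/s, not 92160 (= 11520 * 8)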
== Prefixes ==
When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used.
Binary prefixes are sometimes used for bit rates.
The International Standard (IEC 80000-13) specifies different symbols for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s).
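A quick sketch of the difference between the decimal and binary prefixes (plain constants, no particular library assumed):
KILO, MEGA = 10**3, 10**6        # SI (decimal) prefixes
KIBI, MEBI = 2**10, 2**20        # IEC binary prefixes

rate_B_per_s = 1 * MEBI          # 1 MiB/s
print(rate_B_per_s)              # 1048576 B/s
print(rate_B_per_s * 8)          # 8388608 bit/s
print(rate_B_per_s / MEGA)       # ~1.049 MB/s in decimal terms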
== In data communications ==
=== Gross bit rate ===
In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead.
In case of serial communications, the gross bit rate is related to the bit transmission time Tb as:
{\displaystyle R_{\text{b}}={1 \over T_{\text{b}}}.}
The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment.
For most line codes and modulation methods:
{\displaystyle {\text{symbol rate}}\leq {\text{gross bit rate}}}
More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with {\displaystyle 2^{N}} different voltage levels can transfer {\displaystyle N} bits per pulse. A digital modulation method (or passband transmission scheme) using {\displaystyle 2^{N}} different symbols, for example {\displaystyle 2^{N}} amplitudes, phases or frequencies, can transfer {\displaystyle N} bits per symbol. This results in:
{\displaystyle {\text{gross bit rate}}={\text{symbol rate}}\times N}
An exception from the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in:
{\displaystyle {\text{gross bit rate}}={\text{symbol rate}}/2}
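The two relations above can be combined in a small sketch (illustrative function; it assumes the number of symbol levels is a power of two, and models two-pulses-per-bit codes like Manchester via a divisor):
from math import log2

def gross_bit_rate(symbol_rate, levels, pulses_per_bit=1):
    # bit/s = symbol_rate * log2(levels), divided by pulses per bit
    # (pulses_per_bit=2 models Manchester/RTZ-style line codes)
    return symbol_rate * log2(levels) / pulses_per_bit

print(gross_bit_rate(2_400, 16))                  # 9600 bit/s: 2400 baud, 16 symbols
print(gross_bit_rate(20e6, 2, pulses_per_bit=2))  # 10 Mbit/s: Manchester at 20 Mbaud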
A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law:
{\displaystyle {\text{symbol rate}}\leq {\text{Nyquist rate}}=2\times {\text{bandwidth}}}
In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation:
{\displaystyle {\text{symbol rate}}\leq {\text{bandwidth}}}
In case of parallel communication, the gross bit rate is given by
{\displaystyle \sum _{i=1}^{n}{\frac {\log _{2}{M_{i}}}{T_{i}}}}
where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the ith channel, and Ti is the symbol duration time, expressed in seconds, for the ith channel.
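This sum transcribes directly into code; a minimal sketch (the channel parameters below are made-up examples, not from any standard):
from math import log2

def parallel_gross_bit_rate(channels):
    # Sum of log2(M_i) / T_i over channels given as (levels, symbol_time_s) pairs
    return sum(log2(m) / t for m, t in channels)

# Two hypothetical channels: 4 levels at 1 ms/symbol, 16 levels at 0.5 ms/symbol
print(parallel_gross_bit_rate([(4, 1e-3), (16, 0.5e-3)]))  # 2000 + 8000 = 10000 bit/s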
=== Information rate ===
The physical layer net bitrate, information rate, useful bit rate, payload rate, net data transfer rate, coded transmission rate, effective data rate or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the data rate measured at a reference point in the interface between the data link layer and physical layer, and may consequently include data link and higher layer overhead.
In modems and wireless systems, link adaptation (automatic adaptation of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance is very short between sender and receiver. Some operating systems and network equipment may detect the "connection speed" (informal language) of a network access technology or communication device, implying the current net bit rate. The term line rate in some textbooks is defined as gross bit rate, in others as net bit rate.
The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following.
net bit rate ≤ gross bit rate × code rate
The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition.
For example, the net bitrate (and thus the "connection speed") of an IEEE 802.11a wireless network is the net bit rate of between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s inclusive of error-correcting codes.
The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel) of 64+64+16 = 144 kbit/s also refers to the payload data rates, while the D channel signalling rate is 16 kbit/s.
The net bit rate of the Ethernet 100BASE-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code.
In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10BASE-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud.
The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation – slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. Due to data compression, the actual data transmission rate or throughput (see below) may be higher.
The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link.
net bit rate ≤ channel capacity
The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s.
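The gross/net relation above can be sketched with the 100BASE-TX figures quoted earlier (4B5B coding corresponds to a code rate of 4/5; the helper name is illustrative):
from fractions import Fraction

def net_bit_rate(gross_bit_rate, code_rate):
    # net <= gross * code_rate; equality when line/FEC coding is the only overhead
    return gross_bit_rate * code_rate

print(net_bit_rate(125_000_000, Fraction(4, 5)))  # 100000000 bit/s for 100BASE-TX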
=== Network throughput ===
The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the data link layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput.
=== Goodput (data transfer rate) ===
Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, data packets retransmissions, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight.
As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions.
If no data compression is provided by the network equipment or protocols, we have the following relation:
goodput ≤ throughput ≤ maximum throughput ≤ net bit rate
for a certain communication path.
=== Progress trends ===
These are examples of physical layer net bit rates in proposed communication standard interfaces and devices.
== Multimedia ==
In digital multimedia, bit rate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors:
The original material may be sampled at different frequencies.
The samples may use different numbers of bits.
The data may be encoded by different schemes.
The information may be digitally compressed by different algorithms or to different degrees.
Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played.
If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment.
The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight.
For real-time streaming multimedia, the encoding bit rate is the goodput that is required to avoid playback interruption.
The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data.
A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate.
The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard.
=== Audio ===
==== CD-DA ====
Compact Disc Digital Audio (CD-DA) uses 44,100 samples per second, each with a bit depth of 16, a format sometimes abbreviated like "16bit / 44.1kHz". CD-DA is also stereo, using a left and right channel, so the amount of audio data per second is double that of mono, where only a single channel is used.
The bit rate of PCM audio data can be calculated with the following formula:
{\displaystyle {\text{bit rate}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}}
For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows:
{\displaystyle 44,100\times 16\times 2=1,411,200\ {\text{bit/s}}=1,411.2\ {\text{kbit/s}}}
The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula:
{\displaystyle {\text{size in bits}}={\text{sample rate}}\times {\text{bit depth}}\times {\text{channels}}\times {\text{time}}.}
The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight:
{\displaystyle {\text{size in bytes}}={\frac {\text{size in bits}}{8}}}
Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage:
{\displaystyle {\frac {44,100\times 16\times 2\times 4,800}{8}}=846,720,000\ {\text{bytes}}\approx 847\ {\text{MB}}\approx 807.5\ {\text{MiB}}}
where MiB is mebibytes with binary prefix Mi, meaning 2^20 = 1,048,576.
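The two formulas above translate directly into code; a minimal sketch reproducing the CD-DA numbers (illustrative function names):
def pcm_bit_rate(sample_rate, bit_depth, channels):
    # bit/s for uncompressed PCM audio
    return sample_rate * bit_depth * channels

def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    # Payload size in bytes, excluding any header or metadata
    return pcm_bit_rate(sample_rate, bit_depth, channels) * seconds // 8

print(pcm_bit_rate(44_100, 16, 2))           # 1411200 bit/s
print(pcm_size_bytes(44_100, 16, 2, 4_800))  # 846720000 bytes for 80 minutes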
==== MP3 ====
The MP3 audio format provides lossy data compression. Audio quality improves with increasing bitrate:
32 kbit/s – generally acceptable only for speech
96 kbit/s – generally used for speech or low-quality streaming
128 or 160 kbit/s – mid-range bitrate quality
192 kbit/s – medium quality bitrate
256 kbit/s – a commonly used high-quality bitrate
320 kbit/s – highest level supported by the MP3 standard
==== Other audio ====
700 bit/s – lowest bitrate open-source speech codec Codec2, but Codec2 sounds much better at 1.2 kbit/s
800 bit/s – minimum necessary for recognizable speech, using the special-purpose FS-1015 speech codecs
2.15 kbit/s – minimum bitrate available through the open-source Speex codec
6 kbit/s – minimum bitrate available through the open-source Opus codec
8 kbit/s – telephone quality using speech codecs
32–500 kbit/s – lossy audio as used in Ogg Vorbis
256 kbit/s – Digital Audio Broadcasting (DAB) MP2 bit rate required to achieve a high quality signal
292 kbit/s – Sony Adaptive Transform Acoustic Coding (ATRAC) for use on the MiniDisc Format
400 kbit/s–1,411 kbit/s – lossless audio as used in formats such as Free Lossless Audio Codec, WavPack, or Monkey's Audio to compress CD audio
1,411.2 kbit/s – Linear PCM sound format of CD-DA
5,644.8 kbit/s – DSD, which is a trademarked implementation of PDM sound format used on Super Audio CD.
6.144 Mbit/s – E-AC-3 (Dolby Digital Plus), an enhanced coding system based on the AC-3 codec
9.6 Mbit/s – DVD-Audio, a digital format for delivering high-fidelity audio content on a DVD. DVD-Audio is not intended to be a video delivery format and is not the same as video DVDs containing concert films or music videos. These discs cannot be played on a standard DVD player without the DVD-Audio logo.
18 Mbit/s – advanced lossless audio codec based on Meridian Lossless Packing (MLP)
=== Video ===
16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes)
128–384 kbit/s – business-oriented videoconferencing quality using video compression
400 kbit/s – YouTube 240p videos (using H.264)
750 kbit/s – YouTube 360p videos (using H.264)
1 Mbit/s – YouTube 480p videos (using H.264)
1.15 Mbit/s max – VCD quality (using MPEG1 compression)
2.5 Mbit/s – YouTube 720p videos (using H.264)
3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression)
3.8 Mbit/s – YouTube 720p60 (60 FPS) videos (using H.264)
4.5 Mbit/s – YouTube 1080p videos (using H.264)
6.8 Mbit/s – YouTube 1080p60 (60 FPS) videos (using H.264)
9.8 Mbit/s max – DVD (using MPEG2 compression)
8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression)
19 Mbit/s approximate – HDV 720p (using MPEG2 compression)
24 Mbit/s max – AVCHD (using MPEG4 AVC compression)
25 Mbit/s approximate – HDV 1080i (using MPEG2 compression)
29.4 Mbit/s max – HD DVD
40 Mbit/s max – 1080p Blu-ray Disc (using MPEG2, MPEG4 AVC or VC-1 compression)
250 Mbit/s max – DCP (using JPEG 2000 compression)
1.4 Gbit/s – 10-bit 4:4:4 uncompressed 1080p at 24 FPS
=== Notes ===
For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.) the actual bit rates used by some of the devices compared above may be significantly higher than listed. For example, telephone circuits using μ-law or A-law companding (pulse-code modulation) yield 64 kbit/s.
== See also ==
== References ==
== External links ==
Live Video Streaming Bitrate Calculator Calculate bitrate for video and live streams
DVD-HQ bit rate calculator Calculate bit rate for various types of digital video media.
Maximum PC - Do Higher MP3 Bit Rates Pay Off?
Valid8 Data Rate Calculator | Wikipedia/Bit_rate |
Integrated information theory (IIT) proposes a mathematical model for the consciousness of a system. It comprises a framework ultimately intended to explain why some physical systems (such as human brains) are conscious, and to be capable of providing a concrete inference about whether any physical system is conscious, to what degree, and what particular experience it has; why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky), and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole universe be?).
According to IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore, it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers.
IIT was proposed by neuroscientist Giulio Tononi in 2004. Despite significant interest, IIT remains controversial and has been widely criticized, including that it is unfalsifiable pseudoscience.
== Overview ==
=== Relationship to the "hard problem of consciousness" ===
David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e., to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called "hard problem". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The limitations on the physical system for consciousness to exist are unknown and consciousness may exist on a spectrum, as implied by studies involving split-brain patients and conscious patients with large amounts of brain matter missing.
IIT aims to explain which physical systems are conscious, to what degree, and in what way. The theory begins from the phenomenological certainty that experience exists, and infers necessary physical postulates that any conscious substrate must satisfy. Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates").
=== Ontological Commitments ===
IIT is grounded in:
Realism – the world exists independently of experience
Operational physicalism – physical existence means the ability to take and make a difference (i.e., to have cause–effect power)
Atomism – causal power can, in principle, be reduced to interactions between minimal units
=== Axioms and Postulates ===
Starting from the zeroth axiom (experience exists), IIT identifies five essential properties of experience:
Intrinsicality – experience exists for itself
Information – experience is specific
Integration – experience is unitary
Exclusion – experience is definite
Composition – experience is structured
Each axiom is mapped onto a physical postulate about a system’s causal structure:
The system must exert intrinsic cause–effect power
It must specify a specific cause and effect state (via intrinsic information)
It must do so as a whole—irreducibly (measured by small phi, φ)
Only the maximally irreducible substrate (the complex) is conscious
Its subsets must specify structured distinctions and relations, forming a Φ-structure (big Phi)
=== Mathematical Formalism ===
A system is described by its transition probability matrix (TPM), denoted {\displaystyle T_{U}=p(\mathbf {u} '\mid \mathbf {u} )}, over all its possible states. From this, IIT defines:
Intrinsic information (ii) for a state s over a possible cause/effect state {\displaystyle {\tilde {s}}}:
{\displaystyle {\text{ii}}(s,{\tilde {s}})=p({\tilde {s}}\mid s)\log _{2}\left({\frac {p({\tilde {s}}\mid s)}{p({\tilde {s}})}}\right)}
Integrated information (φ) as the irreducibility of that cause–effect structure across the minimum information partition (MIP):
{\displaystyle \phi =\min _{\theta }\left[{\text{ii}}(s,{\tilde {s}})-{\text{ii}}_{\theta }(s,{\tilde {s}})\right]}
Complexes are defined as the systems (subsets of units) that locally maximize φ. Their internal distinctions and relations form the Φ-structure of the system:
{\displaystyle \Phi =\sum _{\text{distinctions, relations}}\phi }
{\displaystyle \Phi } corresponds to the quantity of consciousness, while the particular structure of distinctions and relations defines its quality.
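As a rough illustration only, the intrinsic-information formula above can be evaluated for a toy two-state system. The TPM below is invented for the example, NumPy is assumed to be available, and the sketch does not attempt the partition search or Φ-structure computation of the full formalism (for that, see the PyPhi package under External links below):
import numpy as np

# Toy transition probability matrix: tpm[u, u'] = p(u' | u) for a 2-state system
tpm = np.array([[0.9, 0.1],
                [0.2, 0.8]])

def intrinsic_information(tpm, s, s_tilde, prior=None):
    # ii(s, s~) = p(s~|s) * log2( p(s~|s) / p(s~) ); p(s~) is taken under a
    # uniform prior over current states unless another prior is supplied
    if prior is None:
        prior = np.full(tpm.shape[0], 1 / tpm.shape[0])
    p_cond = tpm[s, s_tilde]            # p(s~ | s)
    p_marg = prior @ tpm[:, s_tilde]    # p(s~), averaged over current states
    return p_cond * np.log2(p_cond / p_marg)

print(intrinsic_information(tpm, s=0, s_tilde=0))  # ~0.64 bits for this toy TPM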
=== Explanatory Identity ===
IIT proposes an explanatory identity: an experience is identical to the cause–effect structure (Φ-structure) unfolded from a complex in its current state. This identity is not a correlation but a proposed explanation for how subjective experience arises from physical mechanisms.
== Extensions ==
The calculation of even a modestly-sized system's {\displaystyle \Phi ^{\textrm {Max}}} is often computationally intractable, so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both {\displaystyle \Phi ^{*}} and geometric integrated information or {\displaystyle \Phi ^{G}}, which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett. However, none of these proxy measures have a mathematically proven relationship to the actual {\displaystyle \Phi ^{\textrm {Max}}} value, which complicates the interpretation of analyses that use them. They can give qualitatively different results even for very small systems.
In 2021, Angus Leung and colleagues published a direct application of IIT's mathematical formalism to neural data. To circumvent the computational challenges associated with larger datasets, the authors focused on neuronal population activity in the fly. The study showed that {\displaystyle \Phi ^{\textrm {Max}}} can readily be computed for smaller sets of neural data. Moreover, matching IIT's predictions, {\displaystyle \Phi ^{\textrm {Max}}} was significantly decreased when the animals underwent general anesthesia.
A significant computational challenge in calculating integrated information is finding the minimum information partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the minimum information partition.
== Related experimental work ==
While the algorithm for assessing a system's {\displaystyle \Phi ^{\textrm {Max}}} and conceptual structure is relatively straightforward, its high time complexity makes it computationally intractable for many systems of interest. Heuristics and approximations can sometimes be used to provide ballpark estimates of a complex system's integrated information, but precise calculations are often impossible. These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make testing many of the theory's predictions difficult.
Despite these challenges, researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects. For instance, a recent study using a less computationally-intensive proxy for {\displaystyle \Phi ^{\textrm {Max}}} was able to reliably discriminate between varying levels of consciousness in wakeful, sleeping (dreaming vs. non-dreaming), anesthetized, and comatose (vegetative vs. minimally-conscious vs. locked-in) individuals.
IIT also makes several predictions which fit well with existing experimental evidence, and can be used to explain some counterintuitive findings in consciousness research. For example, IIT can be used to explain why some brain regions, such as the cerebellum, do not appear to contribute to consciousness, despite their size and/or functional importance.
== Reception ==
Integrated information theory has received both broad criticism and support.
=== Support ===
Neuroscientist Christof Koch, who has helped to develop later versions of the theory, has called IIT "the only really promising fundamental theory of consciousness".
Neuroscientist and consciousness researcher Anil Seth is supportive of the theory, with some caveats, claiming that "conscious experiences are highly informative and always integrated", and that "One thing that immediately follows from [IIT] is that you have a nice post hoc explanation for certain things we know about consciousness". But he also claims that "the parts of IIT that I find less promising are where it claims that integrated information actually is consciousness — that there's an identity between the two", and has criticized the panpsychist extrapolations of the theory.
Philosopher David Chalmers, famous for the idea of the hard problem of consciousness, has expressed some enthusiasm about IIT. According to Chalmers, IIT is a development in the right direction, whether or not it is correct.
Max Tegmark has tried to address the problem of the computational complexity behind the calculations. According to Tegmark, "the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system's information content." As a result, Φ can only be approximated in general. However, different ways of approximating Φ provide radically different results. Other works have shown that Φ can be computed in some large mean-field neural network models, although some assumptions of the theory have to be revised to capture phase transitions in these large systems.
In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of IIT and a rival theory (Global Neuronal Workspace Theory, GNWT). The originators of both theories signed off on experimental protocols and data analyses, as well as the exact conditions under which each championed theory would count as having correctly predicted the outcome. Initial results were revealed in June 2023. None of GNWT's predictions passed the agreed-upon pre-registered threshold, while two out of three of IIT's predictions did. The final, peer-reviewed results were published in the 30 April 2025 issue of Nature.
In a March 2025 Nature Neuroscience commentary titled “Consciousness or pseudo-consciousness? A clash of two paradigms,” proponents of IIT listed 16 peer-reviewed studies as empirical tests of the theory’s core claims. A commentary in the same issue by Alex Gomez-Marin and Anil Seth, titled “A science of consciousness beyond pseudo-science and pseudo-consciousness,” argued that, despite current empirical limitations, IIT remains scientifically legitimate.
=== Criticism ===
Influential philosopher John Searle has given a critique of the theory saying "The theory implies panpsychism" and "The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim." Searle's take has itself been criticized by other philosophers for misunderstanding and misrepresenting a theory that may actually be resonant with his own ideas.
Theoretical computer scientist Scott Aaronson has criticized IIT by demonstrating through its own formulation that an inactive series of logic gates, arranged in the correct way, would not only be conscious but be "unboundedly more conscious than humans are." Tononi himself agrees with the assessment and argues that according to IIT, an even simpler arrangement of inactive logic gates, if large enough, would also be conscious. However he further argues that this is a strength of IIT rather than a weakness, because that is exactly the sort of cytoarchitecture followed by large portions of the cerebral cortex, especially at the back of the brain, which is the most likely neuroanatomical correlate of consciousness according to some reviews.
Philosopher Tim Bayne has criticized the axiomatic foundations of the theory. He concludes that "the so-called 'axioms' that Tononi et al. appeal to fail to qualify as genuine axioms".
IIT as a scientific theory of consciousness has been criticized in the scientific literature as only able to be "either false or unscientific" by its own definitions. IIT has also been denounced by other members of the consciousness field as requiring "an unscientific leap of faith". The theory has also been derided for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz says "As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false." Neuroscientist Michael Graziano, proponent of the competing attention schema theory, rejects IIT as pseudoscience. He claims IIT is a "magicalist theory" that has "no chance of scientific success or understanding". Similarly, IIT has been criticized on the grounds that its claims are "not scientifically established or testable at the moment".
Neuroscientists Björn Merker, David Rudrauf and philosopher Kenneth Williford co-authored a paper criticizing IIT on several grounds. Firstly, IIT has not demonstrated that all systems which do in fact combine integration and differentiation in the formal IIT sense are conscious; high levels of integration and differentiation of information might provide necessary conditions for consciousness, but those combinations of attributes do not amount to sufficient conditions for consciousness. Secondly, the measure Φ reflects the efficiency of global information transfer rather than the level of consciousness, and the correlation of Φ with level of consciousness across different states of wakefulness (e.g. awake, dreaming and dreamless sleep, anesthesia, seizures and coma) actually reflects the level of efficient network interactions performed for cortical engagement. Hence Φ reflects network efficiency rather than consciousness, which would be one of the functions served by cortical network efficiency.
A letter published on 15 September 2023 in the preprint repository PsyArXiv and signed by 124 scholars asserted that until IIT is empirically testable, it should be labeled pseudoscience. A number of researchers defended the theory in response. Computer scientist Hector Zenil based his criticism of IIT and what he considers a similarly unscientific theory, Assembly theory (AT), on the lack of correspondence of the methods and theory in some IIT research papers and the media frenzy. He criticized the shallowness and misleading nature of the media coverage, including that which appeared in journals such as Nature and Science. He also criticized the testing methods and evidence used by IIT proponents, noting that one test amounted to simply applying LZW compression to measure entropy rather than to indicate consciousness as proponents claimed. An anonymized public survey invited all authors from peer-reviewed papers published between 2013 and 2023 found by a query of Web of Science using "consciousness AND theor*". Of the 60 respondents, 8% "fully" agreed, and 20% did "not at all" agree with the letter, with the remainder falling in between these poles.
The 10 March 2025 Nature Neuroscience commentary "What Makes a Theory of Consciousness Unscientific?" was signed by many of the same writers as the letter. It asserts that "the core ideas of IIT lack empirical support and are metaphysical, and not scientific" and refers to "the core claims of IIT, which we argue are unscientific".
== See also ==
== References ==
== External links ==
=== Related papers ===
Albantakis, L; Barbosa, L; Findlay, G; Grasso, M; Haun, AM; Marshall, W; Mayner, WGP; Zaeemzadeh, A; Boly, M; Juel, BE; Sasai, S; Fujii, K; David, I; Hendren, J; Lang, JP; Tononi, G (October 2023). "Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms". PLOS Computational Biology. 19 (10): e1011465. arXiv:2212.14787. Bibcode:2023PLSCB..19E1465A. doi:10.1371/journal.pcbi.1011465. PMC 10581496. PMID 37847724.
Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (2016). "Integrated information theory: From consciousness to its physical substrate". Nature Reviews Neuroscience. 17 (7): 450–461. doi:10.1038/nrn.2016.44. PMID 27225071. S2CID 21347087.
Tononi, Giulio (2015). "Integrated information theory". Scholarpedia. 10 (1): 4164. Bibcode:2015SchpJ..10.4164T. doi:10.4249/scholarpedia.4164.
Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0". PLOS Computational Biology. 10 (5): e1003588. Bibcode:2014PLSCB..10E3588O. doi:10.1371/journal.pcbi.1003588. PMC 4014402. PMID 24811198. S2CID 2578087.
Integrated Information Theory: An Updated Account (2012) (First presentation of IIT 3.0) Archived 16 December 2014 at the Wayback Machine
Tononi, Giulio (2008). "Consciousness as Integrated Information: A Provisional Manifesto". The Biological Bulletin. 215 (3): 216–242. doi:10.2307/25470707. JSTOR 25470707. PMID 19098144.
Tononi, Giulio (2004). "An information integration theory of consciousness". BMC Neuroscience. 5: 42. doi:10.1186/1471-2202-5-42. PMC 543470. PMID 15522121.
=== Websites ===
IIT-wiki: An online learning resource aimed at teaching the foundations of IIT; includes texts, slideshows, interactive coding exercises, and sections for discussion and asking questions.
integratedinformationtheory.org: a (somewhat out-of-date) hub for sources about IIT; features a graphical user interface to an old version of PyPhi.
"Integrated Information Theory of Consciousness". Internet Encyclopedia of Philosophy.
=== Software ===
PyPhi: an open-source Python package for calculating integrated information.
Graphical user interface
Documentation
=== Books ===
The Feeling of Life Itself: Why Consciousness is Widespread but Can't Be Computed by Christof Koch (2019)
Phi: A Voyage from the Brain to the Soul by Giulio Tononi (2012)
=== News articles ===
New Scientist (2019): How does consciousness work? A radical theory has mind-blowing answers
Nautilus (2017): Is Matter Conscious?
Aeon (2016): Consciousness creep
MIT Technology Review (2014): What It Will Take for Computers to Be Conscious Archived 27 November 2015 at the Wayback Machine
Wired (2013): A Neuroscientist's Radical Theory of How Networks Become Conscious
The New Yorker (2013): How Much Consciousness Does an iPhone Have?
New York Times (2010): Sizing Up Consciousness by Its Bits
Scientific American (2009): A "Complex" Theory of Consciousness
IEEE Spectrum (2008): A Bit of Theory: Consciousness as Integrated Information Theory
=== Talks ===
Christof Koch (2014): The Integrated Information Theory of Consciousness
David Chalmers (2014): How do you explain consciousness? | Wikipedia/Integrated_information_theory |
Constant bitrate (CBR) is a term used in telecommunications, relating to the quality of service. Compare with variable bitrate.
When referring to codecs, constant bit rate encoding means that the rate at which a codec's output data should be consumed is constant. CBR is useful for streaming multimedia content on limited capacity channels since it is the maximum bit rate that matters, not the average, so CBR would be used to take advantage of all of the capacity.
CBR is not optimal for storing data as it may not allocate enough data for complex sections (resulting in degraded quality); and if it maximizes quality for complex sections, it will waste data on simple sections.
The problem of not allocating enough data for complex sections could be solved by choosing a high bitrate to ensure that there will be enough bits for the entire encoding process, though the size of the file at the end would be proportionally larger.
Most coding schemes such as Huffman coding or run-length encoding produce variable-length codes, making perfect CBR difficult to achieve. This is partly solved by varying the quantization (quality), and fully solved by the use of padding. (However, CBR is implied in a simple scheme like reducing all 16-bit audio samples to 8 bits.)
In the case of streaming video as CBR, the source output could fall under the CBR data rate target. In order to complete the stream, it is necessary to add stuffing packets to reach the target data rate. These packets carry no payload and do not affect the stream content.
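A minimal sketch of this padding idea follows (illustrative only; real transport streams use dedicated null packets with reserved identifiers rather than raw zero bytes, and the frame rate and bit rate below are made-up numbers):
def pad_to_cbr(frame: bytes, target_size: int) -> bytes:
    # Pad an encoded frame with stuffing bytes so every frame slot has the
    # same size, yielding a constant bit rate at a fixed frame rate
    if len(frame) > target_size:
        raise ValueError("frame exceeds CBR budget; lower quality or raise rate")
    return frame + b"\x00" * (target_size - len(frame))

# 25 frames/s at 2 Mbit/s -> 10,000 bytes per frame slot
slot = 2_000_000 // (8 * 25)
print(len(pad_to_cbr(b"x" * 7_000, slot)))  # 10000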
== See also ==
Bitrate
Average bitrate
Variable bitrate
Bit stuffing
== References == | Wikipedia/Constant_bitrate |
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.
The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see section § Chaitin's incompleteness theorem); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts.
== Definition ==
=== Intuition ===
Consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab , and
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex.
The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII).
We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed.
Any string s has at least one description. For example, the second string above is output by the pseudo-code:
function GenerateString2()
return "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"
whereas the first string is output by the (much shorter) pseudo-code:
function GenerateString1()
return "ab" × 16
If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically,
K(s) = |d(s)|.
The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem).
=== Plain Kolmogorov complexity C ===
There are two definitions of Kolmogorov complexity: plain and prefix-free. The plain complexity is the minimal description length of any program, and denoted {\displaystyle C(x)}, while the prefix-free complexity is the minimal description length of any program encoded in a prefix-free code, and denoted {\displaystyle K(x)}. The plain complexity is more intuitive, but the prefix-free complexity is easier to study.
By default, all equations hold only up to an additive constant. For example, {\displaystyle f(x)=g(x)} really means that {\displaystyle f(x)=g(x)+O(1)}, that is, {\displaystyle \exists c,\forall x,|f(x)-g(x)|\leq c}.
Let {\displaystyle U:2^{*}\to 2^{*}} be a computable function mapping finite binary strings to binary strings. It is a universal function if, and only if, for any computable {\displaystyle f:2^{*}\to 2^{*}}, we can encode the function in a "program" {\displaystyle s_{f}}, such that {\displaystyle \forall x\in 2^{*},U(s_{f}x)=f(x)}. We can think of U as a program interpreter, which takes in an initial segment describing the program, followed by data that the program should process.
One problem with plain complexity is that {\displaystyle C(xy)\not <C(x)+C(y)}, because intuitively speaking, there is no general way to tell where to divide an output string just by looking at the concatenated string. We can divide it by specifying the length of {\displaystyle x} or {\displaystyle y}, but that would take {\displaystyle O(\min(\ln x,\ln y))} extra symbols. Indeed, for any {\displaystyle c>0} there exist {\displaystyle x,y} such that {\displaystyle C(xy)\geq C(x)+C(y)+c}.
Typically, inequalities with plain complexity have a term like {\displaystyle O(\min(\ln x,\ln y))} on one side, whereas the same inequalities with prefix-free complexity have only {\displaystyle O(1)}.
The main problem with plain complexity is that something extra is sneaked into a program. A program not only represents something with its code, but also represents its own length. In particular, a program {\displaystyle x} may represent a binary number up to {\displaystyle \log _{2}|x|}, simply by its own length. Stated another way, it is as if we were using a termination symbol to denote where a word ends, and so we are not using 2 symbols, but 3. To fix this defect, we introduce the prefix-free Kolmogorov complexity.
=== Prefix-free Kolmogorov complexity K ===
A prefix-free code is a subset of {\displaystyle 2^{*}} such that given any two different words {\displaystyle x,y} in the set, neither is a prefix of the other. The benefit of a prefix-free code is that we can build a machine that reads words from the code forward in one direction, and as soon as it reads the last symbol of the word, it knows that the word is finished, and does not need to backtrack or use a termination symbol.
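The prefix-free property is easy to test directly; a small Python sketch (the helper name is illustrative, not from the literature):
from itertools import permutations

def is_prefix_free(words):
    # True if no word in the set is a proper prefix of another
    return not any(b.startswith(a) for a, b in permutations(words, 2))

print(is_prefix_free({"0", "10", "110", "111"}))  # True: a complete prefix code
print(is_prefix_free({"0", "01"}))                # False: "0" is a prefix of "01"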
Define a prefix-free Turing machine to be a Turing machine that comes with a prefix-free code, such that the Turing machine can read any string from the code in one direction, and stop reading as soon as it reads the last symbol. Afterwards, it may compute on a work tape and write to a write tape, but it cannot move its read-head anymore.
This gives us the following formal way to describe K.
Fix a prefix-free universal Turing machine, with three tapes: a read tape infinite in one direction, a work tape infinite in two directions, and a write tape infinite in one direction.
The machine can read from the read tape in one direction only (no backtracking), and write to the write tape in one direction only. It can read and write the work tape in both directions.
The work tape and write tape start with all zeros. The read tape starts with an input prefix code, followed by all zeros.
Let S be the prefix-free code on {\displaystyle 2^{*}}, used by the universal Turing machine.
Note that some universal Turing machines may not be programmable with prefix codes. We must pick only a prefix-free universal Turing machine.
The prefix-free complexity of a string {\displaystyle x} is the length of the shortest prefix code word that makes the machine output {\displaystyle x}:
{\displaystyle K(x):=\min\{|c|:c\in S,U(c)=x\}}
== Invariance theorem ==
=== Informal treatment ===
There are some description languages which are optimal, in the following sense: given any description of an object in a description language, said description may be used in the optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object, nor the object being described.
Here is an example of an optimal description language. A description will have two parts:
The first part describes another description language.
The second part is a description of the object in that language.
In more technical terms, the first part of a description is a computer program (specifically: a compiler for the object's language, written in the description language), with the second part being the input to that computer program which produces the object as output.
The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead.
Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length of this new description D′ is (approximately):
|D′| = |P| + |D|
The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant.
=== A more formal treatment ===
Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that
∀s. −c ≤ K1(s) − K2(s) ≤ c.
Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s
K1(s) ≤ K2(s) + c.
Now, suppose there is a program in the language L1 which acts as an interpreter for L2:
function InterpretLanguage(string p)
where p is a program in L2. The interpreter is characterized by the following property:
Running InterpretLanguage on input p returns the result of running p.
Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of
The length of the program InterpretLanguage, which we can take to be the constant c.
The length of P which by definition is K2(s).
This proves the desired upper bound.
== History and context ==
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
The concept and theory of Kolmogorov Complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control.
Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers.
The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "...to everyone who has, more will be given..."
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974).
An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov.
In the late 1990s and early 2000s, methods developed to approximate Kolmogorov complexity relied on popular compression algorithms like LZW, which made it difficult or impossible to provide any estimate for short strings, until a method based on algorithmic probability was introduced, offering the only alternative to compression-based methods.
== Basic results ==
We write K(x, y) to be K((x, y)), where (x, y) means some fixed way to code for a tuple of strings x and y.
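Any self-delimiting code for the first component will do. As a minimal sketch (this particular scheme is an illustrative assumption, not the canonical pairing), one can double each bit of x and use "01" as a terminator:
def encode_pair(x: str, y: str) -> str:
    # One concrete pairing code <x, y>: double each bit of x, append the
    # terminator '01', then write y. The doubled prefix is self-delimiting.
    return ''.join(2 * b for b in x) + '01' + y

def decode_pair(s: str) -> tuple[str, str]:
    # Read doubled bits until the '01' terminator; the rest is y.
    i, bits = 0, []
    while s[i:i + 2] != '01':
        bits.append(s[i])  # '00' decodes to '0', '11' decodes to '1'
        i += 2
    return ''.join(bits), s[i + 2:]

assert decode_pair(encode_pair('101', '0011')) == ('101', '0011')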
=== Inequalities ===
We omit additive factors of
O
(
1
)
{\displaystyle O(1)}
. This section is based on.
Theorem. K(x) ≤ C(x) + 2 log₂ C(x)
Proof. Take any program for the universal Turing machine used to define plain complexity, and convert it to a prefix-free program by first coding the length of the program in binary and then converting that length to a prefix-free code. For example, suppose the program has length 9; then we can convert it as follows:
9 ↦ 1001 ↦ 11-00-00-11-01
where we double each digit, then add a termination code. The prefix-free universal Turing machine can then read in any program for the other machine as follows:
[code for simulating the other machine][coded length of the program][the program]
The first part programs the machine to simulate the other machine, and is a constant overhead O(1). The second part has length ≤ 2 log₂ C(x) + 3. The third part has length C(x).
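The length-coding step of this proof is mechanical enough to write down. A minimal Python sketch, matching the 9 ↦ 1001 ↦ 11-00-00-11-01 example above:
def prefix_free_length(n: int) -> str:
    # Write n in binary, double each bit, then append the terminator '01';
    # the result is self-delimiting and roughly 2*log2(n) bits long.
    return ''.join(2 * b for b in format(n, 'b')) + '01'

# The example from the proof: 9 -> 1001 -> 11-00-00-11-01
assert prefix_free_length(9) == '1100001101'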
Theorem: There exists c such that ∀x, C(x) ≤ |x| + c. More succinctly, C(x) ≤ |x|. Similarly, K(x) ≤ |x| + 2 log₂ |x|, and K(x | |x|) ≤ |x|.
Proof. For the plain complexity, just write a program that simply copies the input to the output. For the prefix-free complexity, we need to first describe the length of the string, before writing out the string itself.
Theorem. (extra information bounds, subadditivity)
K(x|y) ≤ K(x) ≤ K(x, y) ≤ max(K(x|y) + K(y), K(y|x) + K(x)) ≤ K(x) + K(y)
K(xy) ≤ K(x, y)
Note that there is no way to compare K(xy) and K(x|y) or K(x) or K(y|x) or K(y). There are strings such that the whole string xy is easy to describe, but its substrings are very hard to describe.
Theorem. (symmetry of information)
K(x, y) = K(x | y, K(y)) + K(y) = K(y, x).
Proof. One side is simple. For the other side, K(x, y) ≥ K(x | y, K(y)) + K(y), we need to use a counting argument.
Theorem. (information non-increase) For any computable function f, we have K(f(x)) ≤ K(x) + K(f).
Proof. Program the Turing machine to read two subsequent programs, one describing the function and one describing the string. Then run both programs on the work tape to produce f(x), and write it out.
=== Uncomputability of Kolmogorov complexity ===
==== A naive attempt at a program to compute K ====
At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following:
function KolmogorovComplexity(string s)
for i = 1 to infinity:
for each string p of length exactly i
if isValidProgram(p) and evaluate(p) == s
return i
This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned.
However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem.
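The search does terminate if the description language is restricted so that every program halts, at the price of no longer being universal. A hedged sketch in Python (the "repeat a word k times" language here is an invented toy, not a standard construction):
def toy_complexity(s: str) -> int:
    # Shortest description of s in a toy, total language: a program is a
    # pair (w, k) meaning "print the word w repeated k times". Every
    # program halts, so brute-force search works -- but the language is
    # not Turing complete, so this is not the true K(s).
    best = len(s) + 1  # cost of the trivial program (s, 1)
    for m in range(1, len(s) + 1):
        if len(s) % m == 0:
            w, k = s[:m], len(s) // m
            if w * k == s:
                best = min(best, m + max(1, k.bit_length()))
    return best

print(toy_complexity('ab' * 32))    # small: highly regular string
print(toy_complexity('q1w2e3r4'))   # nearly len(s): no repetition to exploit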
What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following.
==== Formal proof of uncomputability of K ====
Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each natural number n, there is a string s with K(s) ≥ n.
Proof: Otherwise all of the infinitely many possible finite strings could be generated by the finitely many programs with a complexity below n bits.
Theorem: K is not a computable function. In other words, there is no program which takes any string s as input and produces the integer K(s) as output.
The following proof by contradiction uses a simple Pascal-like language to denote programs; for sake of proof simplicity assume its description (i.e. an interpreter) to have a length of 1400000 bits.
Assume for contradiction there is a program
function KolmogorovComplexity(string s)
which takes as input a string s and returns K(s). All programs are of finite length so, for sake of proof simplicity, assume it to be 7000000000 bits.
Now, consider the following program of length 1288 bits:
function GenerateComplexString()
for i = 1 to infinity:
for each string s of length exactly i
if KolmogorovComplexity(s) ≥ 8000000000
return s
Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least 8000000000 bits, i.e. a string that cannot be produced by any program shorter than 8000000000 bits. However, the overall length of the above program that produced s is only 7001401288 bits, which is a contradiction. (If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.)
The above proof uses a contradiction similar to that of the Berry paradox: "1The 2smallest 3positive 4integer 5that 6cannot 7be 8defined 9in 10fewer 11than 12twenty 13English 14words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent.
There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler.
=== Chain rule for Kolmogorov complexity ===
The chain rule for Kolmogorov complexity states that
K(X, Y) = K(X) + K(Y|X) + O(log K(X, Y));
that is, there exists a constant c, independent of X and Y, such that the two sides differ by at most c · max(1, log K(X, Y)).
It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity.
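Since K itself is uncomputable, practical analogues of this mutual information replace K with a real compressor. A sketch using zlib as the stand-in (this is the normalized compression distance; the constant offsets of real compressors make it a heuristic, not the true algorithmic quantity):
import os
import zlib

def c(data: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance, the compressor analogue of the
    # mutual-information quantity suggested by the chain rule.
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

a = b'the quick brown fox jumps over the lazy dog' * 20
b = b'the quick brown fox leaps over the lazy cat' * 20
r = os.urandom(len(a))
print(ncd(a, b))  # small: the two texts share most of their information
print(ncd(a, r))  # near 1: a random string shares almost nothing with a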
== Compression ==
It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string – concretely, the size of a self-extracting archive in the given language.
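A minimal sketch of this upper bound, assuming an off-the-shelf compressor (the decompressor-size figure below is an illustrative guess, since the true constant depends entirely on the chosen language):
import zlib

def k_upper_bound(s: bytes, decompressor_bits: int = 80_000) -> int:
    # |compressed s| plus the cost of shipping a decompressor alongside it,
    # i.e. the size of a self-extracting archive. The 80,000-bit figure for
    # the decompressor is an assumption, not a measured value.
    return 8 * len(zlib.compress(s, 9)) + decompressor_bits

print(k_upper_bound(b'abab' * 100_000))  # well below the 3,200,000 bits of s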
A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. A string incompressible by 1 is said to be simply incompressible; by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string, incompressible strings must exist, since there are 2^n bit strings of length n but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. of length 0, 1, ..., n − 1).
For the same reason, most strings are complex in the sense that they cannot be significantly compressed – their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns exactly equal weight 2^(−n) to each string of length n.
Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n).
To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series:
1 + 2 + 2^2 + ... + 2^(n−c) = 2^(n−c+1) − 1.
There remain at least
2^n − 2^(n−c+1) + 1
bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n.
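The count is easy to sanity-check numerically; a quick sketch:
n, c = 20, 3
descriptions = 2 ** (n - c + 1) - 1      # all strings of length <= n - c
incompressible = 2 ** n - descriptions   # strings no such description covers
assert incompressible / 2 ** n == 1 - 2 ** (-c + 1) + 2 ** (-n)
print(incompressible / 2 ** n)           # 0.7500009536743164 for n=20, c=3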
== Chaitin's incompleteness theorem ==
By the above theorem (§ Compression), most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S. This association must have the following property:
If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved based on a Gödel numbering.
Theorem: There exists a constant L (which only depends on S and on the choice of description language) such that there does not exist a string s for which the statement
K(s) ≥ L (as formalized in S)
can be proven within S.
Proof Idea: The proof of this result is modeled on a self-referential construction used in Berry's paradox. We first obtain a program which enumerates the proofs within S, and we specify a procedure P which takes as input an integer L and prints the strings x that occur within proofs in S of the statement K(x) ≥ L. If we then set L greater than the length of procedure P (plus the length of the encoding of L), we obtain a contradiction: the proof asserts that x requires at least L bits to describe, yet x is printed by a program shorter than L bits. So it is not possible for the proof system S to prove K(x) ≥ L for L arbitrarily large; in particular, not for any L larger than the length of procedure P (which is finite).
Proof:
We can find an effective enumeration of all the formal proofs in S by some procedure
function NthProof(int n)
which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n where s and n are constants in the language of S. There is a procedure
function NthProofProvesComplexityFormula(int n)
which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by procedure:
function StringNthProof(int n)
function ComplexityLowerBoundNthProof(int n)
Consider the following procedure:
function GenerateProvablyComplexString(int n)
for i = 1 to infinity:
if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n
return StringNthProof(i)
Given an n, this procedure tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n; if no such proof exists, it loops forever.
Finally, consider the program consisting of all these procedure definitions, and a main call:
GenerateProvablyComplexString(n0)
where the constant n₀ will be determined later on. The overall program length can be expressed as U + log₂(n₀), where U is some constant and log₂(n₀) represents the length of the integer value n₀, under the reasonable assumption that it is encoded in binary digits. We will choose n₀ to be greater than the program length, that is, such that n₀ > U + log₂(n₀). This is clearly true for n₀ sufficiently large, because the left hand side grows linearly in n₀ whilst the right hand side grows logarithmically in n₀ up to the fixed constant U.
Then no proof of the form "K(s) ≥ L" with L ≥ n₀ can be obtained in S, as can be seen by an indirect argument:
If ComplexityLowerBoundNthProof(i) could return a value ≥ n₀, then the loop inside GenerateProvablyComplexString would eventually terminate, and that procedure would return a string s such that K(s) ≥ n₀, by the soundness of S. But s was just produced by the above program, whose length is U + log₂(n₀) < n₀, so in fact K(s) < n₀.
This is a contradiction, Q.E.D.
As a consequence, the above program, with the chosen value of n0, must loop forever.
Similar ideas are used to prove the properties of Chaitin's constant.
== Minimum message length ==
The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to the underlying model) and efficiency (i.e. the MML model will converge to the true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity).
== Kolmogorov randomness ==
Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length. Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself).
This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough — there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity.
== Relation to entropy ==
For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno, that the equality
K(x; T) = h(T)
holds for almost all x.
It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
Theorem. (Theorem 14.2.5) The conditional Kolmogorov complexity of a binary string x_{1:n} satisfies
(1/n) K(x_{1:n} | n) ≤ H_b((1/n) Σ_i x_i) + (log n)/(2n) + O(1/n),
where H_b is the binary entropy function (not to be confused with the entropy rate).
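One can watch this convergence with an off-the-shelf compressor standing in for K; zlib is far from an ideal entropy coder, so the sketch below only tracks H_b(p) loosely and from above:
import math
import random
import zlib

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
p, n = 0.05, 200_000
bits = ''.join('1' if random.random() < p else '0' for _ in range(n))
packed = int(bits, 2).to_bytes((n + 7) // 8, 'big')  # 8 symbols per byte
rate = 8 * len(zlib.compress(packed, 9)) / n          # bits per source symbol
print(rate, binary_entropy(p))  # compressed rate approaches H_b(p) from above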
== Halting problem ==
Computing the Kolmogorov complexity function is Turing-equivalent to deciding the halting problem.
If we have a halting oracle, then the Kolmogorov complexity of a string can be computed by simply trying every halting program, in lexicographic order, until one of them outputs the string.
The other direction is much more involved. It shows that given a Kolmogorov complexity function, we can construct a function p such that p(n) ≥ BB(n) for all large n, where BB is the Busy Beaver shift function (also denoted as S(n)). By modifying the function at lower values of n, we get an upper bound on BB, which solves the halting problem.
Consider this program p_K, which takes input n, and uses K.
List all strings of length ≤ 2n + 1.
For each such string x, enumerate all (prefix-free) programs of length K(x) until one of them does output x. Record its runtime n_x.
Output the largest n_x.
We prove by contradiction that p_K(n) ≥ BB(n) for all large n.
Let p_n be a Busy Beaver of length n. Consider this (prefix-free) program, which takes no input:
Run the program p_n, and record its runtime length BB(n).
Generate all programs with length ≤ 2n. Run every one of them for up to BB(n) steps. Note the outputs of those that have halted.
Output the string with the lowest lexicographic order that has not been output by any of those.
Let the string output by the program be x.
The program has length ≤ n + 2 log₂ n + O(1), where n comes from the length of the Busy Beaver p_n, 2 log₂ n comes from using the (prefix-free) Elias delta code for the number n, and O(1) comes from the rest of the program. Therefore,
K(x) ≤ n + 2 log₂ n + O(1) ≤ 2n
for all big n. Further, since there are only so many possible programs with length ≤ 2n, we have l(x) ≤ 2n + 1 by the pigeonhole principle.
By assumption, p_K(n) < BB(n), so every string of length ≤ 2n + 1 has a minimal program with runtime < BB(n). Thus, the string x has a minimal program with runtime < BB(n). Further, that program has length K(x) ≤ 2n. This contradicts how x was constructed.
== Universal probability ==
Fix a universal Turing machine U, the same one used to define the (prefix-free) Kolmogorov complexity. Define the (prefix-free) universal probability of a string x to be
P(x) = Σ_{U(p)=x} 2^(−l(p))
In other words, it is the probability that, given a uniformly random binary stream as input, the universal Turing machine would halt after reading a certain prefix of the stream, and output
x.
Note. U(p) = x does not mean that the input stream is p000⋯, but that the universal Turing machine would halt at some point after reading the initial segment p, without reading any further input, and that, when it halts, it has written x to the output tape.
Theorem. (Theorem 14.11.1)
log(1/P(x)) = K(x) + O(1)
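The fact that P(x) sums to at most 1 over all x rests on the Kraft inequality for prefix-free sets. A small check with a hypothetical prefix-free code (the codeword set below is an arbitrary example, not tied to any particular machine):
codewords = ['0', '10', '110', '111']  # a sample prefix-free set

def is_prefix_free(codes: list[str]) -> bool:
    # No codeword may be a proper prefix of another.
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

assert is_prefix_free(codewords)
print(sum(2 ** -len(w) for w in codewords))  # 1.0; Kraft bounds this by 1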
== Implications in biology ==
Kolmogorov complexity has been invoked in biology to argue that the symmetries and modular arrangements observed in multiple species emerge from the tendency of evolution to prefer minimal Kolmogorov complexity. Considering the genome as a program that must solve a task or implement a series of functions, shorter programs would be preferred on the basis that they are easier to find by the mechanisms of evolution. An example of this approach is the eight-fold symmetry of the compass circuit found across insect species, which corresponds to the circuit that is both functional and requires the minimum Kolmogorov complexity to be generated from self-replicating units.
== Conditional versions ==
The conditional Kolmogorov complexity of two strings K(x|y) is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure.
There is also a length-conditional complexity K(x | L(x)), which is the complexity of x given the length of x as known/input.
== Time-bounded complexity ==
Time-bounded Kolmogorov complexity is a modified version of Kolmogorov complexity where the space of programs to be searched for a solution is confined to only programs that can run within some pre-defined number of steps. It is hypothesised that the possibility of the existence of an efficient algorithm for determining approximate time-bounded Kolmogorov complexity is related to the question of whether true one-way functions exist.
== See also ==
Berry paradox
Code golf
Data compression
Descriptive complexity theory
Grammar induction
Inductive reasoning
Kolmogorov structure function
Levenshtein distance
Manifold hypothesis
Solomonoff's theory of inductive inference
Sample entropy
== Notes ==
== References ==
== Further reading ==
Blum, M. (1967). "On the size of machines". Information and Control. 11 (3): 257. doi:10.1016/S0019-9958(67)90546-3.
Brudno, A. (1983). "Entropy and the complexity of the trajectories of a dynamical system". Transactions of the Moscow Mathematical Society. 2: 127–151.
Cover, Thomas M.; Thomas, Joy A. (2006). Elements of information theory (2nd ed.). Wiley-Interscience. ISBN 0-471-24195-4.
Lajos, Rónyai; Gábor, Ivanyos; Réka, Szabó (1999). Algoritmusok. TypoTeX. ISBN 963-279-014-6.
Li, Ming; Vitányi, Paul (1997). An Introduction to Kolmogorov Complexity and Its Applications. Springer. ISBN 978-0387339986.
Yu, Manin (1977). A Course in Mathematical Logic. Springer-Verlag. ISBN 978-0-7204-2844-5.
Sipser, Michael (1997). Introduction to the Theory of Computation. PWS. ISBN 0-534-95097-3.
Downey, Rodney G.; Hirschfeldt, Denis R. (2010). "Algorithmic Randomness and Complexity". Theory and Applications of Computability. doi:10.1007/978-0-387-68441-3. ISBN 978-0-387-95567-4. ISSN 2190-619X.
== External links ==
The Legacy of Andrei Nikolaevich Kolmogorov
Chaitin's online publications
Solomonoff's IDSIA page
Generalizations of algorithmic information by J. Schmidhuber
"Review of Li Vitányi 1997".
Tromp, John. "John's Lambda Calculus and Combinatory Logic Playground". Tromp's lambda calculus computer model offers a concrete definition of K()
Universal AI based on Kolmogorov Complexity by M. Hutter: ISBN 3-540-22139-5
David Dowe's Minimum Message Length (MML) and Occam's razor pages.
Grunwald, P.; Pitt, M.A. (2005). Myung, I. J. (ed.). Advances in Minimum Description Length: Theory and Applications. MIT Press. ISBN 0-262-07262-9. | Wikipedia/Algorithmic_complexity_theory |
A remote control, also known colloquially as a remote or clicker, is an electronic device used to operate another device from a distance, usually wirelessly. In consumer electronics, a remote control can be used to operate devices such as a television set, DVD player or other digital home media appliance. A remote control can allow operation of devices that are out of convenient reach for direct operation of controls. They function best when used from a short distance. This is primarily a convenience feature for the user. In some cases, remote controls allow a person to operate a device that they otherwise would not be able to reach, as when a garage door opener is triggered from outside.
Early television remote controls (1956–1977) used ultrasonic tones. Present-day remote controls are commonly consumer infrared devices which send digitally-coded pulses of infrared radiation. They control functions such as power, volume, channels, playback, track change, energy, fan speed, and various other features. Remote controls for these devices are usually small wireless handheld objects with an array of buttons. They are used to adjust various settings such as television channel, track number, and volume. The remote control code, and thus the required remote control device, is usually specific to a product line. However, there are universal remotes, which emulate the remote control made for most major brand devices.
Remote controls in the 2000s include Bluetooth or Wi-Fi connectivity, motion sensor-enabled capabilities and voice control. Remote controls for 2010s onward Smart TVs may feature a standalone keyboard on the rear side to facilitate typing, and be usable as a pointing device.
== History ==
Wired and wireless remote control was developed in the latter half of the 19th century to meet the need to control unmanned vehicles (for the most part military torpedoes). These included a wired version by German engineer Werner von Siemens in 1870, and radio controlled ones by British engineer Ernest Wilson and C. J. Evans (1897) and a prototype that inventor Nikola Tesla demonstrated in New York in 1898. In 1903 Spanish engineer Leonardo Torres Quevedo introduced a radio based control system called the "Telekino" at the Paris Academy of Sciences, which he hoped to use to control a dirigible airship of his own design. Unlike previous "on/off" techniques, the Telekino could execute a range of distinct mechanical actions over a single communication channel. From 1904 to 1906, Torres tested the Telekino on a three-wheeled land vehicle with an effective range of 20 to 30 meters, and by guiding a manned electrically powered boat, demonstrating a standoff range of 2 kilometers. The first remote-controlled model airplane flew in 1932, and the use of remote control technology for military purposes was worked on intensively during the Second World War, one result of this being the German Wasserfall missile.
By the late 1930s, several radio manufacturers offered remote controls for some of their higher-end models. Most of these were connected to the set being controlled by wires, but the Philco Mystery Control (1939) was a battery-operated low-frequency radio transmitter, thus making it the first wireless remote control for a consumer electronics device. Using pulse-count modulation, this also was the first digital wireless remote control.
=== Television remote controls ===
One of the first remotes intended to control a television was developed by Zenith Radio Corporation in 1950. The remote, called Lazy Bones, was connected to the television by a wire. A wireless remote control, the Flash-Matic, was developed in 1955 by Eugene Polley. It worked by shining a beam of light onto one of four photoelectric cells, but the cells did not distinguish between light from the remote and light from other sources. The Flash-Matic also had to be pointed very precisely at one of the sensors in order to work.
In 1956, Robert Adler developed Zenith Space Command, a wireless remote. It was mechanical and used ultrasound to change the channel and volume. When the user pushed a button on the remote control, it struck a bar and clicked, hence they were commonly called "clickers", and the mechanics were similar to a pluck. Each of the four bars emitted a different fundamental frequency with ultrasonic harmonics, and circuits in the television detected these sounds and interpreted them as channel-up, channel-down, sound-on/off, and power-on/off.
Later, the rapid decrease in price of transistors made possible cheaper electronic remotes that contained a piezoelectric crystal that was fed by an oscillating electric current at a frequency near or above the upper threshold of human hearing, though still audible to dogs. The receiver contained a microphone attached to a circuit that was tuned to the same frequency. Some problems with this method were that the receiver could be triggered accidentally by naturally occurring noises or deliberately by metal against glass, for example, and some people could hear the lower ultrasonic harmonics.
In 1970, RCA introduced an all-electronic remote control that uses digital signals and metal–oxide–semiconductor field-effect transistor (MOSFET) memory. This was widely adopted for color television, replacing motor-driven tuning controls.
The impetus for a more complex type of television remote control came in 1973, with the development of the Ceefax teletext service by the BBC. Most commercial remote controls at that time had a limited number of functions, sometimes as few as three: next channel, previous channel, and volume/off. This type of control did not meet the needs of Teletext sets, where pages were identified with three-digit numbers. A remote control that selects Teletext pages would need buttons for each numeral from zero to nine, as well as other control functions, such as switching from text to picture, and the normal television controls of volume, channel, brightness, color intensity, etc. Early Teletext sets used wired remote controls to select pages, but the continuous use of the remote control required for Teletext quickly indicated the need for a wireless device. So BBC engineers began talks with one or two television manufacturers, which led to early prototypes in around 1977–1978 that could control many more functions. ITT was one of the companies and later gave its name to the ITT protocol of infrared communication.
In 1980, the most popular remote control was the Starcom Cable TV Converter (from Jerrold Electronics, a division of General Instrument) which used 40-kHz sound to change channels. Then, a Canadian company, Viewstar, Inc., was formed by engineer Paul Hrivnak and started producing a cable TV converter with an infrared remote control. The product was sold through Philips for approximately $190 CAD. The Viewstar converter was an immediate success, the millionth converter being sold on March 21, 1985, with 1.6 million sold by 1989.
=== Other remote controls ===
The Blab-off was a wired remote control created in 1952 that turned a TV's (television) sound on or off so that viewers could avoid hearing commercials. In the 1980s Steve Wozniak of Apple started a company named CL 9. The purpose of this company was to create a remote control that could operate multiple electronic devices. The CORE unit (Controller Of Remote Equipment) was introduced in the fall of 1987. The advantage to this remote controller was that it could "learn" remote signals from different devices. It had the ability to perform specific or multiple functions at various times with its built-in clock. It was the first remote control that could be linked to a computer and loaded with updated software code as needed. The CORE unit never made a huge impact on the market. It was much too cumbersome for the average user to program, but it received rave reviews from those who could. These obstacles eventually led to the demise of CL 9, but two of its employees continued the business under the name Celadon. This was one of the first computer-controlled learning remote controls on the market.
In the 1990s, cars were increasingly sold with electronic remote control door locks. These remotes transmit a signal to the car which locks or unlocks the door locks or unlocks the trunk. An aftermarket device sold in some countries is the remote starter. This enables a car owner to remotely start their car. This feature is most associated with countries with winter climates, where users may wish to run the car for several minutes before they intend to use it, so that the car heater and defrost systems can remove ice and snow from the windows.
=== Proliferation ===
By the early 2000s, the number of consumer electronic devices in most homes greatly increased, along with the number of remotes to control those devices. According to the Consumer Electronics Association, an average US home has four remotes. To operate a home theater as many as five or six remotes may be required, including one for cable or satellite receiver, VCR or digital video recorder (DVR/PVR), DVD player, TV and audio amplifier. Several of these remotes may need to be used sequentially for some programs or services to work properly. However, as there are no accepted interface guidelines, the process is increasingly cumbersome. One solution used to reduce the number of remotes that have to be used is the universal remote, a remote control that is programmed with the operation codes for most major brands of TVs, DVD players, etc. In the early 2010s, many smartphone manufacturers began incorporating infrared emitters into their devices, thereby enabling their use as universal remotes via an included or downloadable app.
== Technique ==
The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye but can be seen through a digital camera, video camera or phone camera. The transmitter in the remote control handset sends out a stream of pulses of infrared light when the user presses a button on the handset. A transmitter is often a light-emitting diode (LED) which is built into the pointing end of the remote control handset. The infrared light pulses form a pattern unique to that button. The receiver in the device recognizes the pattern and causes the device to respond accordingly.
=== Opto components and circuits ===
Most remote controls for electronic appliances use a near infrared diode to emit a beam of light that reaches the device. A 940 nm wavelength LED is typical. This infrared light is not visible to the human eye but picked up by sensors on the receiving device. Video cameras see the diode as if it produces visible purple light. With a single channel (single-function, one-button) remote control the presence of a carrier signal can be used to trigger a function. For multi-channel (normal multi-function) remote controls more sophisticated procedures are necessary: one consists of modulating the carrier with signals of different frequencies. After the receiver demodulates the received signal, it applies the appropriate frequency filters to separate the respective signals. One can often hear the signals being modulated on the infrared carrier by operating a remote control in very close proximity to an AM radio not tuned to a station. Today, IR remote controls almost always use a pulse width modulated code, encoded and decoded by a digital computer: a command from a remote control consists of a short train of pulses of carrier-present and carrier-not-present of varying widths.
=== Consumer electronics infrared protocols ===
Different manufacturers of infrared remote controls use different protocols to transmit the infrared commands. The RC-5 protocol that has its origins within Philips, uses, for instance, a total of 14 bits for each button press. The bit pattern is modulated onto a carrier frequency that, again, can be different for different manufacturers and standards, in the case of RC-5, the carrier is 36 kHz. Other consumer infrared protocols include the various versions of SIRCS used by Sony, the RC-6 from Philips, the Ruwido R-Step, and the NEC TC101 protocol.
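As a hedged sketch of the bit-level shape of such a protocol (the field layout below follows common descriptions of classic RC-5; the command-table comment is an assumption, and extended RC-5 variants differ):
def rc5_frame(toggle: int, address: int, command: int) -> list[int]:
    # Classic RC-5: two start bits, one toggle bit, a 5-bit address and a
    # 6-bit command, 14 bits in all, most significant bit first.
    assert 0 <= address < 32 and 0 <= command < 64
    bits = [1, 1, toggle & 1]
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]
    return bits

def manchester(bits: list[int]) -> list[int]:
    # RC-5 is Manchester coded: each bit becomes a half-bit pair, and each
    # "on" half-bit is transmitted as a burst of the 36 kHz carrier.
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

# Address 0 is commonly listed as "TV", command 13 as "mute" (assumed here).
print(manchester(rc5_frame(toggle=0, address=0, command=13)))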
=== Infrared, line of sight and operating angle ===
Since infrared (IR) remote controls use light, they require line of sight to operate the destination device. The signal can, however, be reflected by mirrors, just like any other light source. If operation is required where no line of sight is possible, for instance when controlling equipment in another room or installed in a cabinet, many brands of IR extenders are available for this on the market. Most of these have an IR receiver, picking up the IR signal and relaying it via radio waves to the remote part, which has an IR transmitter mimicking the original IR control. Infrared receivers also tend to have a more or less limited operating angle, which mainly depends on the optical characteristics of the phototransistor. However, it is easy to increase the operating angle using a matte transparent object in front of the receiver.
=== Radio remote control systems ===
Radio remote control (RF remote control) is used to control distant objects using a variety of radio signals transmitted by the remote control device. As a complementary method to infrared remote controls, the radio remote control is used with electric garage door or gate openers, automatic barrier systems, burglar alarms and industrial automation systems. Standards used for RF remotes are: Bluetooth AVRCP, Zigbee (RF4CE), Z-Wave. Most remote controls use their own coding, transmitting from 8 to 100 or more pulses, fixed or Rolling code, using OOK or FSK modulation. Also, transmitters or receivers can be universal, meaning they are able to work with many different codings. In this case, the transmitter is normally called a universal remote control duplicator because it is able to copy existing remote controls, while the receiver is called a universal receiver because it works with almost any remote control in the market.
A radio remote control system commonly has two parts: transmit and receive. The transmitter part is divided into two parts, the RF remote control and the transmitter module. This allows the transmitter module to be used as a component in a larger application. The transmitter module is small, but users must have detailed knowledge to use it; combined with the RF remote control it is much simpler to use.
The receiver is generally one of two types: a super-regenerative receiver or a superheterodyne. The super-regenerative receiver works like that of an intermittent oscillation detection circuit. The superheterodyne works like the one in a radio receiver. The superheterodyne receiver is used because of its stability, high sensitivity and it has relatively good anti-interference ability, a small package and lower price.
== Usage ==
=== Industry ===
A remote control is used for controlling substations, pump storage power stations and HVDC-plants. For these systems often PLC-systems working in the longwave range are used.
=== Power line remote control ===
Power line remote control is a subset of power-line communication that sends remote control signals over energized AC power lines. It was used to remotely control home automation before the advent of Wi-Fi-connected smart switches.
=== Garage and gate ===
Garage and gate remote controls, also called clickers or openers, are very common especially in some countries such as the US, Australia, and the UK, where garage doors, gates and barriers are widely used. Such a remote is very simple by design, usually with only one button, and some with more buttons to control several gates from one control. Such remotes can be divided into two categories by the encoder type used: fixed code and rolling code. A remote with DIP switches inside is likely to use fixed code, an older technology that was once widely used. However, fixed codes have been criticized for their lack of security, so rolling code has been used more and more widely in later installations.
=== Military ===
Remotely operated torpedoes were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877 was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy.
Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket. As head of the secret RFC experimental works at Feltham, A. M. Low was the first person to use radio control successfully on an aircraft, an "Aerial Target". It was "piloted" from the ground by future world aerial speed record holder Henry Segrave. Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth under the command of Eric Robinson V.C. used a variant of the Aerial Target's radio control system to control from ‘mother’ aircraft different types of naval vessels including a submarine.
The military also developed several early remote control vehicles. In World War I, the Imperial German Navy employed FL-boats (Fernlenkboote) against coastal shipping. These were driven by internal combustion engines and controlled remotely from a shore station through several miles of wire wound on a spool on the boat. An aircraft was used to signal directions to the shore station. EMBs carried a high explosive charge in the bow and traveled at speeds of thirty knots. The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and the early stages of World War II. A teletank is controlled by radio from a control tank at a distance of 500 to 1,500 meters, the two constituting a telemechanical group. The Red Army fielded at least two teletank battalions at the beginning of the Great Patriotic War. There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.
Remote controls in military usage employ jamming and countermeasures against jamming. Jammers are used to disable or sabotage the enemy's use of remote controls. The distances for military remote controls also tend to be much longer, up to intercontinental distance satellite-linked remote controls used by the U.S. for their unmanned airplanes (drones) in Afghanistan, Iraq, and Pakistan. Remote controls are used by insurgents in Iraq and Afghanistan to attack coalition and government troops with roadside improvised explosive devices, and terrorists in Iraq are reported in the media to use modified TV remote controls to detonate bombs.
=== Space ===
In the winter of 1971, the Soviet Union explored the surface of the Moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another celestial body. Remote control technology is also used in space travel more broadly; the Soviet Lunokhod vehicles, for instance, were remote-controlled from the ground. Many space exploration rovers can be remotely controlled, though the vast distance to a vehicle results in a long time delay between transmission and receipt of a command.
=== PC control ===
Existing infrared remote controls can be used to control PC applications. Any application that supports shortcut keys can be controlled via infrared remote controls from other home devices (TV, VCR, AC). This is widely used with multimedia applications for PC based home theater systems. For this to work, one needs a device that decodes IR remote control data signals and a PC application that communicates to this device connected to PC. A connection can be made via serial port, USB port or motherboard IrDA connector. Such devices are commercially available but can be homemade using low-cost microcontrollers. LIRC (Linux IR Remote control) and WinLIRC (for Windows) are software packages developed for the purpose of controlling PC using TV remote and can be also used for homebrew remote with lesser modification.
=== Photography ===
Remote controls are used in photography, in particular to take long-exposure shots. Many action cameras such as the GoPros as well as standard DSLRs including Sony's Alpha series incorporate Wi-Fi based remote control systems. These can often be accessed and even controlled via cell-phones and other mobile devices.
=== Video games ===
Video game consoles did not use wireless controllers until recently, mainly because of the difficulty involved in playing the game while keeping the infrared transmitter pointed at the console. Early wireless controllers were cumbersome and, when powered by alkaline batteries, lasted only a few hours before they needed replacement. Some wireless controllers were produced by third parties, in most cases using a radio link instead of infrared. Even these were very inconsistent, and in some cases, had transmission delays, making them virtually useless. Some examples include the Double Player for NES, the Master System Remote Control System and the Wireless Dual Shot for the PlayStation.
The first official wireless game controller made by a first party manufacturer was the CX-42 for Atari 2600. The Philips CD-i 400 series also came with a remote control, the WaveBird was also produced for the GameCube. In the seventh generation of gaming consoles, wireless controllers became standard. Some wireless controllers, such as those of the PlayStation 3 and Wii, use Bluetooth. Others, like the Xbox 360, use proprietary wireless protocols.
== Standby power ==
To be turned on by a wireless remote, the controlled appliance must always be partly on, consuming standby power.
== Alternatives ==
Hand-gesture recognition has been researched as an alternative to remote controls for television sets.
== See also ==
Apple Siri Remote
Consumer Electronics Control (CEC)
Kinect
Peel Technologies
Media controls
PlayStation Move
Radio control
Remote control locomotive
Teleoperation
Telecommand
== References ==
== External links ==
Media related to Remote control at Wikimedia Commons | Wikipedia/Remote_control |
Image resolution is the level of detail of an image. The term applies to digital images, film images, and other types of images. "Higher resolution" means more image detail.
Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense. Instead of single lines, line pairs are often used, composed of a dark line and an adjacent light line; for example, a resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter (5 LP/mm). Photographic lens resolution is most often quoted in line pairs per millimeter.
== Types ==
The resolution of digital cameras can be described in many different ways.
=== Pixel count ===
The term resolution is often considered equivalent to pixel count in digital imaging, though international standards in the digital camera field specify it should instead be called "Number of Total Pixels" in relation to image sensors, and as "Number of Recorded Pixels" for what is fully captured. Hence, CIPA DCG-001 calls for notation such as "Number of Recorded Pixels 1000 × 1500". According to the same standards, the "Number of Effective Pixels" that an image sensor or digital camera has is the count of pixel sensors that contribute to the final image (including pixels not in said image but nevertheless support the image filtering process), as opposed to the number of total pixels, which includes unused or light-shielded pixels around the edges.
An image of N pixels height by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. But when the pixel counts are referred to as "resolution", the convention is to describe the pixel resolution with the set of two positive integer numbers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example as 7680 × 6876. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. Other conventions include describing pixels per length unit or pixels per area unit, such as pixels per inch or per square inch. None of these pixel resolutions are true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution.
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction from pixels would be preferred, but for illustration of pixels, the sharp squares make the point better).
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048×1536 = 3,145,728 pixels or 3.1 megapixels. One could refer to it as 2048 by 1536 or a 3.1-megapixel image. The image would be a very low quality image (72ppi) if printed at about 28.5 inches wide, but a very good quality (300ppi) image if printed at about 7 inches wide.
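The arithmetic in this example is worth making explicit; a small sketch:
def megapixels(width: int, height: int) -> float:
    # Total pixel count divided by one million.
    return width * height / 1e6

def print_width_inches(pixel_width: int, ppi: int) -> float:
    # Physical width implied by a pixel width at a pixels-per-inch setting.
    return pixel_width / ppi

print(megapixels(2048, 1536))         # 3.145728 -> quoted as 3.1 megapixels
print(print_width_inches(2048, 72))   # ~28.4 in wide: low print quality
print(print_width_inches(2048, 300))  # ~6.8 in wide: good print quality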
The number of photodiodes in a color digital camera image sensor is often a multiple of the number of pixels in the image it produces, because information from an array of color image sensors is used to reconstruct the color of a single pixel. The image has to be interpolated or demosaiced to produce all three colors for each output pixel.
=== Spatial resolution ===
The terms blurriness and sharpness are used for digital images but other descriptors are used to reference the hardware capturing and displaying the images.
Spatial resolution in radiology is the ability of the imaging modality to differentiate two objects. Low spatial resolution techniques will be unable to differentiate between two objects that are relatively close together.
The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image, not just the pixel resolution in pixels per inch (ppi). For practical purposes the clarity of the image is decided by its spatial resolution, not the number of pixels in an image. In effect, spatial resolution is the number of independent pixel values per unit length.
The spatial resolution of consumer displays ranges from 50 to 800 pixel lines per inch. With scanners, optical resolution is sometimes used to distinguish spatial resolution from the number of pixels per inch.
In remote sensing, spatial resolution is typically limited by diffraction, as well as by aberrations, imperfect focus, and atmospheric distortion. The ground sample distance (GSD) of an image, the pixel spacing on the Earth's surface, is typically considerably smaller than the resolvable spot size.
In astronomy, one often measures spatial resolution in data points per arcsecond subtended at the point of observation, because the physical distance between objects in the image depends on their distance away and this varies widely with the object of interest. On the other hand, in electron microscopy, line or fringe resolution is the minimum separation detectable between adjacent parallel lines (e.g. between planes of atoms), whereas point resolution is instead the minimum separation between adjacent points that can be both detected and interpreted, e.g. as adjacent columns of atoms. The former often helps one detect periodicity in specimens, whereas the latter (although more difficult to achieve) is key to visualizing how individual atoms interact.
In Stereoscopic 3D images, spatial resolution could be defined as the spatial information recorded or captured by two viewpoints of a stereo camera (left and right camera).
=== Spectral resolution ===
Pixel encoding limits the information stored in a digital image, and the term color profile is used for digital images but other descriptors are used to reference the hardware capturing and displaying the images.
Spectral resolution is the ability to resolve spectral features and bands into their separate components. Color images distinguish light of different spectra. Multispectral images can resolve even finer differences of spectrum or wavelength by measuring and storing more than the traditional 3 of common RGB color images.
=== Temporal resolution ===
Temporal resolution (TR) is the precision of a measurement with respect to time.
Movie cameras and high-speed cameras can resolve events at different points in time. The time resolution used for movies is usually 24 to 48 frames per second (frames/s), whereas high-speed cameras may resolve 50 to 300 frames/s, or even more.
The Heisenberg uncertainty principle describes the fundamental limit on the maximum spatial resolution of information about a particle's coordinates imposed by the measurement or existence of information regarding its momentum to any degree of precision.
This fundamental limitation can, in turn, be a factor in the maximum imaging resolution at subatomic scales, as can be encountered using scanning electron microscopes.
=== Radiometric resolution ===
Radiometric resolution determines how finely a system can represent or distinguish differences of intensity, and is usually expressed as a number of levels or a number of bits, for example 8 bits or 256 levels that is typical of computer image files. The higher the radiometric resolution, the better subtle differences of intensity or reflectivity can be represented, at least in theory. In practice, the effective radiometric resolution is typically limited by the noise level, rather than by the number of bits of representation.
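A short sketch of the two limits mentioned here, bit depth versus noise (the numeric noise figure is an invented example):
import math

def levels(bit_depth: int) -> int:
    # Representable intensity levels for a given bit depth.
    return 2 ** bit_depth

def effective_bits(full_scale: float, noise_rms: float) -> float:
    # Rough rule of thumb: noise, not bit depth, caps what is distinguishable.
    return math.log2(full_scale / noise_rms)

print(levels(8))                      # 256 levels, typical of image files
print(effective_bits(1.0, 1 / 1500))  # ~10.6 usable bits even if 16 are stored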
== Resolution in various media ==
This is a list of traditional, analogue horizontal resolutions for various media. The list only includes popular formats, not rare formats, and all values are approximate, because the actual quality can vary machine-to-machine or tape-to-tape. For ease-of-comparison, all values are for the NTSC system. (For PAL systems, replace 480 with 576.) Analog formats usually had less chroma resolution.
Analogue and early digital
Many cameras and displays offset the color components relative to each other or mix up temporal with spatial resolution:
Narrowscreen 4:3 computer display resolutions
320 × 200: MCGA
320 × 240: QVGA
640 × 350: EGA
640 × 480: VGA
800 × 600: Super VGA
1024 × 768: XGA / EVGA
1600 × 1200: UXGA
Analog
320 × 200: CRT monitors
333 × 480: VHS, Video8, Umatic
350 × 480: Betamax
420 × 480: Super Betamax, Betacam
460 × 480: Betacam SP, Umatic SP, NTSC (over-the-air TV)
580 × 480: Super VHS, Hi8, LaserDisc
700 × 480: Enhanced Definition Betamax, Analog broadcast limit (NTSC)
768 × 576: Analog broadcast limit (PAL, SECAM)
Digital
352 × 240: Video CD
500 × 480: Digital8
720 × 480: D-VHS, DVD, miniDV, Digital Betacam (NTSC)
720 × 480: Widescreen DVD (anamorphic) (NTSC)
854 × 480: EDTV (Enhanced Definition Television)
720 × 576: D-VHS, DVD, miniDV, Digital8, Digital Betacam (PAL/SECAM)
720 × 576 or 1024 × 576: Widescreen DVD (anamorphic) (PAL/SECAM)
1280 × 720: D-VHS, HD DVD, Blu-ray, HDV (miniDV)
1440 × 1080: HDV (miniDV)
1920 × 1080: HDV (miniDV), AVCHD, HD DVD, Blu-ray, HDCAM SR
1998 × 1080: 2K Flat (1.85:1)
2048 × 1080: 2K Digital Cinema
2560 × 1440: QHD (Quad HD) i.e. 4x the pixels in HD 1280x720
3840 × 2160: 4K UHDTV, Ultra HD Blu-ray
4096 × 2160: 4K Digital Cinema
7680 × 4320: 8K UHDTV
15360 × 8640: 16K Digital Cinema
30720 × 17280: 32K
Sequences from newer films are scanned at 2,000, 4,000, or even 8,000 columns, called 2K, 4K, and 8K, for quality visual-effects editing on computers.
IMAX, including IMAX HD and OMNIMAX: approximately 10,000 × 7,000 (7,000 lines) resolution, about 70 MP, which exceeded the resolution of any single-sensor digital cinema camera as of January 2012.
Film
35 mm film is scanned for release on DVD at 1080 or 2000 lines as of 2005.
The actual resolution of 35 mm original camera negatives is the subject of much debate. Measured resolutions of negative film have ranged from 25 to 200 lp/mm, which equates to a range of 325 lines for 2-perf to (theoretically) over 2,300 lines for 4-perf shot on T-Max 100. According to a Senior Vice President of IMAX, Kodak states that 35 mm film has the equivalent of 6K resolution horizontally.
Print
Modern digital camera resolutions
Digital medium format camera – a single large digital sensor, not several combined – 80 MP (available from 2011, current as of 2013) – 10320 × 7752 or 10380 × 7816 (81.1 MP).
Mobile phone – Nokia 808 PureView – 41 MP (7728 × 5368), Nokia Lumia 1020 – also 41 MP (7712 × 5360)
Digital still camera – Canon EOS 5DS – 51 MP (8688 × 5792)
== See also ==
Display resolution
Dots per inch
Multi-exposure HDR capture
High-resolution picture transmission
Image scaling
Image scanner
Kell factor, which typically limits the number of visible lines to 0.7x of the device resolution
Pixel density
== References ==
In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
== History ==
=== Ancient ===
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived around 1900 BC in Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded by applying the same fixed shift in reverse.
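As an illustrative sketch (our own, not a historical artifact), the fixed-shift substitution is a few lines of Python:

```python
# A minimal Caesar cipher sketch: shift each letter a fixed number of
# positions along the alphabet; decryption applies the opposite shift.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)   # 'DWWDFN DW GDZQ'
assert caesar(ciphertext, -3) == "ATTACK AT DAWN"
```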
Around 800 AD, Arab mathematician al-Kindi developed the technique of frequency analysis – which was an attempt to crack ciphers systematically, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis.
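Frequency analysis against a Caesar-shifted text can likewise be sketched in a few lines (a heuristic of ours that assumes a reasonably long, typical English sample; short or unusual texts can fool it):

```python
from collections import Counter

def shift_text(text: str, shift: int) -> str:
    # Caesar-shift letters; other characters pass through unchanged.
    return "".join(
        chr((ord(c) - 65 + shift) % 26 + 65) if c.isalpha() else c
        for c in text.upper()
    )

def guess_shift(ciphertext: str) -> int:
    # Assume the most frequent ciphertext letter stands for 'E',
    # the most common letter in English text.
    letters = [c for c in ciphertext if c.isalpha()]
    top = Counter(letters).most_common(1)[0][0]
    return (ord(top) - ord('E')) % 26

ct = shift_text("ENCRYPTION DENIES THE INTELLIGIBLE CONTENT TO THE INTERCEPTOR", 7)
recovered = shift_text(ct, -guess_shift(ct))  # undoes the shift of 7
```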
=== 19th–20th century ===
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
=== Modern ===
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" cipher suites, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 by EFF's brute-force DES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such as AES (256-bit mode), Twofish, ChaCha20-Poly1305, and Serpent (configurable up to a 256-bit key). Cipher suites that use a 128-bit or larger key, like AES, cannot feasibly be brute-forced, because a 128-bit key space alone contains about 3.4 × 10^38 (2^128) possibilities. The most likely option for cracking ciphers with high key size is to find vulnerabilities in the cipher itself, such as inherent biases and backdoors, or to exploit physical side effects through side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
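The scale of these key spaces is easy to check directly; the sketch below is our own back-of-the-envelope arithmetic, extrapolating from the 1999 DES crack:

```python
# Key spaces double with every added key bit.
des_keys = 2 ** 56        # 72,057,594,037,927,936 possible DES keys
aes128_keys = 2 ** 128    # about 3.4e38 possible 128-bit keys

# EFF's DES cracker searched the 56-bit space in ~22.25 hours (1999).
# At that same search rate, a 128-bit space takes 2**72 times longer.
hours = 22.25 * (aes128_keys / des_keys)
print(f"{aes128_keys:.4e} keys, ~{hours / (24 * 365):.2e} years")
```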
== Encryption in cryptography ==
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
=== Types ===
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages.
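A minimal sketch of a symmetric scheme in practice, using the third-party Python `cryptography` package (the message is an arbitrary example of ours):

```python
# pip install cryptography  -- Fernet provides authenticated symmetric
# encryption; sender and receiver must share the same key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the shared secret
cipher = Fernet(key)

token = cipher.encrypt(b"attack at dawn")          # safe to transmit
assert cipher.decrypt(token) == b"attack at dawn"  # needs the same key
```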
In public-key cryptography schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). Although published only subsequently, the work of Diffie and Hellman appeared in a journal with a large readership, and the value of the methodology was explicitly described; the method became known as the Diffie-Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
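The number-theoretic core can be illustrated with deliberately tiny primes (a toy sketch only; real RSA uses primes hundreds of digits long plus padding such as OAEP):

```python
# Toy RSA key generation and round trip (Python 3.8+ for pow(e, -1, phi)).
p, q = 61, 53                  # two secret primes
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

message = 42                   # a message encoded as an integer < n
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # decryption recovers the message
```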
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
== Uses ==
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
=== Data erasure ===
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
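A sketch of the idea (our own illustration, again using the third-party `cryptography` package): once every copy of the key is destroyed, the ciphertext left on disk is useless, so the bulk data never needs to be overwritten:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"customer records ...")

# Crypto-shredding: destroy the key, not the (possibly huge) ciphertext.
del key   # a real system would securely erase its dedicated key storage
# Without the key, recovering the plaintext is computationally infeasible.
```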
== Limitations ==
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the original DES (Data Encryption Standard) used a 56-bit key, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute-force attacks.
Quantum computing uses properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time with modern computers; it would take a supercomputer anywhere from weeks to months to factor such a key. However, quantum computers can use quantum algorithms to factor this semiprime number in the same amount of time it takes normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental computational devices rather than general-purpose computers. Furthermore, quantum computing advancements will be able to be used in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
== Attacks and countermeasures ==
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
== The debate around encryption ==
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to ban cryptography because, according to it, strong encryption would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem, making it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when Big Tech companies like Apple and Google set encryption by default on their devices. This was the start of a series of controversies involving governments, companies and internet users.
=== Integrity protection of Ciphertexts ===
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature, usually produced by a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
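A standard-library sketch of MAC verification (the key and message are illustrative values of ours):

```python
import hashlib
import hmac

shared_key = b"a-secret-mac-key"          # agreed upon out of band
message = b"wire 100 GBP to account 42"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time;
# any alteration of the message (or tag) makes the check fail.
check = hmac.new(shared_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, check)
```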
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
=== Ciphertext length and padding ===
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks asymptotically minimal $O(\log \log M)$ information via its length.
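A sketch of deterministic padding into fixed-size buckets (the bucket size and length-prefix format are our own illustrative choices, not the PURB construction itself):

```python
import os
import struct

BLOCK = 256  # bucket size in bytes; ciphertext length reveals only a bucket

def pad(plaintext: bytes) -> bytes:
    # Prefix the true length, then fill with random bytes up to a
    # multiple of BLOCK.
    body = struct.pack(">I", len(plaintext)) + plaintext
    return body + os.urandom((-len(body)) % BLOCK)

def unpad(padded: bytes) -> bytes:
    (true_len,) = struct.unpack(">I", padded[:4])
    return padded[4:4 + true_len]

msg = b"short secret"
assert unpad(pad(msg)) == msg and len(pad(msg)) % BLOCK == 0
```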
== See also ==
== References ==
== Further reading ==
Fouché Gaines, Helen (1939), Cryptanalysis: A Study of Ciphers and Their Solution, New York: Dover Publications Inc, ISBN 978-0486200972
Kahn, David (1967), The Codebreakers - The Story of Secret Writing (ISBN 0-684-83130-9)
Preneel, Bart (2000), "Advances in Cryptology – EUROCRYPT 2000", Springer Berlin Heidelberg, ISBN 978-3-540-67517-4
Sinkov, Abraham (1966): Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America. ISBN 0-88385-622-0
Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 978-3-755-76117-4.
Lindell, Yehuda; Katz, Jonathan (2014), Introduction to modern cryptography, Hall/CRC, ISBN 978-1466570269
Ermoshina, Ksenia; Musiani, Francesca (2022), Concealing for Freedom: The Making of Encryption, Secure Messaging and Digital Liberties (Foreword by Laura DeNardis)(open access) (PDF), Manchester, UK: matteringpress.org, ISBN 978-1-912729-22-7, archived from the original (PDF) on 2022-06-02
== External links ==
The dictionary definition of encryption at Wiktionary
Media related to Cryptographic algorithms at Wikimedia Commons | Wikipedia/Cryptographic_algorithm |
This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).
== Measures in information theory ==
Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy $\mathrm{H}(X)$ is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written $h(X)$, is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case, and a sum for the discrete case.
These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.
Consider the formula for the differential entropy of a continuous random variable $X$ with range $\mathbb{R}$ and probability density function $f(x)$:

$$h(X) = -\int_{\mathbb{R}} f(x) \log f(x)\, dx.$$
This can usually be interpreted as the following Riemann–Stieltjes integral:
$$h(X) = -\int_{\mathbb{R}} f(x) \log f(x)\, d\mu(x),$$

where $\mu$ is the Lebesgue measure.
If instead, $X$ is discrete, with range $\Omega$ a finite set, $f$ is a probability mass function on $\Omega$, and $\nu$ is the counting measure on $\Omega$, we can write:

$$\mathrm{H}(X) = -\sum_{x \in \Omega} f(x) \log f(x) = -\int_{\Omega} f(x) \log f(x)\, d\nu(x).$$
The integral expression, and the general concept, are identical in the continuous case; the only difference is the measure used. In both cases the probability density function $f$ is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
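As a quick numeric illustration (a NumPy sketch; the mass function is an arbitrary choice of ours), integrating against the counting measure is just the familiar sum:

```python
import numpy as np

# Discrete X on Omega = {0, 1, 2}: H(X) = -sum f(x) log2 f(x).
f = np.array([0.5, 0.25, 0.25])      # a probability mass function
H = -np.sum(f * np.log2(f))          # 1.5 bits

# The continuous analogue integrates against Lebesgue measure, e.g. a
# uniform density on [0, 4] has h(X) = log2(4) = 2 bits.
h_uniform = np.log2(4.0)
print(H, h_uniform)
```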
If $P$ is the probability measure induced by $X$, then the integral can also be taken directly with respect to $P$:

$$h(X) = -\int_{\Omega} \log \frac{\mathrm{d}P}{\mathrm{d}\mu}\, dP.$$
If instead of the underlying measure $\mu$ we take another probability measure $Q$, we are led to the Kullback–Leibler divergence: let $P$ and $Q$ be probability measures over the same space. Then if $P$ is absolutely continuous with respect to $Q$, written $P \ll Q$, the Radon–Nikodym derivative $\frac{\mathrm{d}P}{\mathrm{d}Q}$ exists and the Kullback–Leibler divergence can be expressed in its full generality:
$$D_{\operatorname{KL}}(P \| Q) = \int_{\operatorname{supp} P} \frac{\mathrm{d}P}{\mathrm{d}Q} \log \frac{\mathrm{d}P}{\mathrm{d}Q}\, dQ = \int_{\operatorname{supp} P} \log \frac{\mathrm{d}P}{\mathrm{d}Q}\, dP,$$
where the integral runs over the support of $P$. Note that we have dropped the negative sign: the Kullback–Leibler divergence is always non-negative due to Gibbs' inequality.
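For finite spaces the general formula reduces to a weighted sum over the support of P, as in this small sketch (the two distributions are chosen arbitrarily by us):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])   # P
q = np.array([0.25, 0.25, 0.5])   # Q, with P absolutely continuous w.r.t. Q

supp = p > 0                       # integrate over supp P only
dkl = np.sum(p[supp] * np.log2(p[supp] / q[supp]))
print(dkl)   # 0.25 bits here; always >= 0 by Gibbs' inequality
```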
== Entropy as a "measure" ==
There is an analogy between Shannon's basic "measures" of the information content of random variables and a measure over sets. Namely the joint entropy, conditional entropy, and mutual information can be considered as the measure of a set union, set difference, and set intersection, respectively (Reza pp. 106–108).
If we associate the existence of abstract sets $\tilde{X}$ and $\tilde{Y}$ to arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y, respectively, such that:

$\mu(\tilde{X} \cap \tilde{Y}) = 0$ whenever X and Y are unconditionally independent, and

$\tilde{X} = \tilde{Y}$ whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);

where $\mu$ is a signed measure over these sets, and we set:
$$\begin{aligned}\mathrm{H}(X) &= \mu(\tilde{X}),\\ \mathrm{H}(Y) &= \mu(\tilde{Y}),\\ \mathrm{H}(X,Y) &= \mu(\tilde{X} \cup \tilde{Y}),\\ \mathrm{H}(X \mid Y) &= \mu(\tilde{X} \setminus \tilde{Y}),\\ \operatorname{I}(X;Y) &= \mu(\tilde{X} \cap \tilde{Y});\end{aligned}$$
we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal signed measure over sets, as commonly illustrated in an information diagram. This allows the sum of two measures to be written:
$$\mu(A) + \mu(B) = \mu(A \cup B) + \mu(A \cap B)$$
and the analog of Bayes' theorem ($\mu(A) + \mu(B \setminus A) = \mu(B) + \mu(A \setminus B)$) allows the difference of two measures to be written:
$$\mu(A) - \mu(B) = \mu(A \setminus B) - \mu(B \setminus A)$$
This can be a handy mnemonic device in some situations, e.g.

$$\begin{aligned}\mathrm{H}(X,Y) &= \mathrm{H}(X) + \mathrm{H}(Y \mid X) & \mu(\tilde{X} \cup \tilde{Y}) &= \mu(\tilde{X}) + \mu(\tilde{Y} \setminus \tilde{X})\\ \operatorname{I}(X;Y) &= \mathrm{H}(X) - \mathrm{H}(X \mid Y) & \mu(\tilde{X} \cap \tilde{Y}) &= \mu(\tilde{X}) - \mu(\tilde{X} \setminus \tilde{Y})\end{aligned}$$
Note that measures (expectation values of the logarithm) of true probabilities are called "entropy" and generally represented by the letter H, while other measures are often referred to as "information" or "correlation" and generally represented by the letter I. For notational simplicity, the letter I is sometimes used for all measures.
== Multivariate mutual information ==
Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated to three or more arbitrary random variables. (See Reza pp. 106–108 for an informal but rather complete discussion.) Namely $\mathrm{H}(X,Y,Z,\cdots)$ needs to be defined in the obvious way as the entropy of a joint distribution, and a multivariate mutual information $\operatorname{I}(X;Y;Z;\cdots)$ defined in a suitable manner so that we can set:
$$\begin{aligned}\mathrm{H}(X,Y,Z,\cdots) &= \mu(\tilde{X} \cup \tilde{Y} \cup \tilde{Z} \cup \cdots),\\ \operatorname{I}(X;Y;Z;\cdots) &= \mu(\tilde{X} \cap \tilde{Y} \cap \tilde{Z} \cap \cdots);\end{aligned}$$
in order to define the (signed) measure over the whole σ-algebra. There is no single universally accepted definition for the multivariate mutual information, but the one that corresponds here to the measure of a set intersection is due to Fano (1966: pp. 57–59). The definition is recursive. As a base case the mutual information of a single random variable is defined to be its entropy: $\operatorname{I}(X) = \mathrm{H}(X)$. Then for $n \geq 2$ we set
$$\operatorname{I}(X_1;\cdots;X_n) = \operatorname{I}(X_1;\cdots;X_{n-1}) - \operatorname{I}(X_1;\cdots;X_{n-1} \mid X_n),$$
where the conditional mutual information is defined as
$$\operatorname{I}(X_1;\cdots;X_{n-1} \mid X_n) = \mathbb{E}_{X_n}\big(\operatorname{I}(X_1;\cdots;X_{n-1}) \mid X_n\big).$$
The first step in the recursion yields Shannon's definition $\operatorname{I}(X_1;X_2) = \mathrm{H}(X_1) - \mathrm{H}(X_1 \mid X_2).$
The multivariate mutual information (same as interaction information but for a change in sign) of three or more random variables can be negative as well as positive: Let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then $\operatorname{I}(X;Y;Z) = -1$ bit.
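This value can be verified numerically via inclusion–exclusion over joint entropies, as in the following sketch (our own check of the XOR example):

```python
import itertools
import numpy as np

def H(*axes):
    # Joint entropy (bits) of the chosen coordinates of (X, Y, X xor Y),
    # where (X, Y) is uniform over {0,1}^2.
    outcomes = [(x, y, x ^ y) for x, y in itertools.product((0, 1), repeat=2)]
    marg = {}
    for o in outcomes:
        key = tuple(o[a] for a in axes)
        marg[key] = marg.get(key, 0.0) + 0.25
    return -sum(p * np.log2(p) for p in marg.values())

# I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)
I3 = H(0) + H(1) + H(2) - H(0, 1) - H(0, 2) - H(1, 2) + H(0, 1, 2)
print(I3)   # -1.0 bit
```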
Many other variations are possible for three or more random variables: for example, $\operatorname{I}(X,Y;Z)$ is the mutual information of the joint distribution of X and Y relative to Z, and can be interpreted as $\mu((\tilde{X} \cup \tilde{Y}) \cap \tilde{Z}).$
Many more complicated expressions can be built this way, and still have meaning, e.g. $\operatorname{I}(X,Y;Z \mid W),$ or $\mathrm{H}(X,Z \mid W,Y).$
== References ==
Thomas M. Cover and Joy A. Thomas. Elements of Information Theory, second edition, 2006. New Jersey: Wiley and Sons. ISBN 978-0-471-24195-9.
Fazlollah M. Reza. An Introduction to Information Theory. New York: McGraw–Hill 1961. New York: Dover 1994. ISBN 0-486-68210-2
Fano, R. M. (1966), Transmission of Information: a statistical theory of communications, MIT Press, ISBN 978-0-262-56169-3, OCLC 804123877
R. W. Yeung, "On entropy, information inequalities, and Groups." PS Archived 2016-03-03 at the Wayback Machine
== See also ==
Information theory
Measure theory
Set theory
Thermal physics is the combined study of thermodynamics, statistical mechanics, and kinetic theory of gases. This umbrella-subject is typically designed for physics students and functions to provide a general introduction to each of three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics.
Thermal physics can be seen as the study of systems with a large number of atoms. It unites thermodynamics and statistical mechanics.
== Overview ==
Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory.
A central topic in thermal physics is the canonical probability distribution. The quantum nature of photons and phonons is studied, which shows that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory.
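A small numeric sketch of the canonical distribution (the energy levels are arbitrary values of ours):

```python
import numpy as np

# Boltzmann weights: p_i is proportional to exp(-E_i / (k_B * T)).
E = np.array([0.0, 1.0, 2.0])   # energy levels, in units of k_B*T
w = np.exp(-E)                   # Boltzmann factors
Z = w.sum()                      # partition function
p = w / Z                        # canonical probabilities
print(p, np.sum(p * E))          # distribution and mean energy
```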
Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction.
== See also ==
Heat transfer physics
Information theory
Philosophy of thermal and statistical physics
Thermodynamic instruments
== References ==
== Further reading ==
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). Wiley. ISBN 0-471-86256-8.
Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-716-71088-9.
Schroeder, Daniel V. (1999). An Introduction to Thermal Physics. Addison Wesley. ISBN 0-201-38027-7.
== External links ==
Thermal Physics Links on the Web
In mathematics, the discrete sine transform (DST) is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using a purely real matrix. It is equivalent to the imaginary parts of a DFT of roughly twice the length, operating on real data with odd symmetry (since the Fourier transform of a real and odd function is imaginary and odd), where in some variants the input and/or output data are shifted by half a sample.
The DST is related to the discrete cosine transform (DCT), which is equivalent to a DFT of real and even functions. See the DCT article for a general discussion of how the boundary conditions relate the various DCT and DST types. Generally, the DST is derived from the DCT by replacing the Neumann condition at x=0 with a Dirichlet condition. Both the DCT and the DST were described by Nasir Ahmed, T. Natarajan, and K.R. Rao in 1974. The type-I DST (DST-I) was later described by Anil K. Jain in 1976, and the type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.
== Applications ==
DSTs are widely employed in solving partial differential equations by spectral methods, where the different variants of the DST correspond to slightly different odd/even boundary conditions at the two ends of the array.
== Informal overview ==
Like any Fourier-related transform, discrete sine transforms (DSTs) express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform (DFT), a DST operates on a function at a finite number of discrete data points. The obvious distinction between a DST and a DFT is that the former uses only sine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DST implies different boundary conditions than the DFT or other related transforms.
The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DST or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function $f(x)$ as a sum of sinusoids, you can evaluate that sum at any $x$, even for $x$ where the original $f(x)$ was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DST, like a sine transform, implies an odd extension of the original function.
However, because DSTs operate on finite, discrete sequences, two issues arise that do not apply for the continuous sine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain (i.e. the min-n and max-n boundaries in the definitions below, respectively). Second, one has to specify around what point the function is even or odd. In particular, consider a sequence (a,b,c) of three equally spaced data points, and say that we specify an odd left boundary. There are two sensible possibilities: either the data is odd about the point prior to a, in which case the odd extension is (−c,−b,−a,0,a,b,c), or the data is odd about the point halfway between a and the previous point, in which case the odd extension is (−c,−b,−a,a,b,c).
These choices lead to all the standard variations of DSTs and also discrete cosine transforms (DCTs). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of $2 \times 2 \times 2 \times 2 = 16$ possibilities. Half of these possibilities, those where the left boundary is odd, correspond to the 8 types of DST; the other half are the 8 types of DCT.
These different boundary conditions strongly affect the applications of the transform, and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solve partial differential equations by spectral methods, the boundary conditions are directly specified as a part of the problem being solved.
== Definition ==
Formally, the discrete sine transform is a linear, invertible function $F : \mathbb{R}^N \to \mathbb{R}^N$ (where $\mathbb{R}$ denotes the set of real numbers), or equivalently an N × N square matrix. There are several variants of the DST with slightly modified definitions. The N real numbers $x_0, \ldots, x_{N-1}$ are transformed into the N real numbers $X_0, \ldots, X_{N-1}$ according to one of the formulas:
=== DST-I ===
$$\begin{aligned}X_k &= \sum_{n=0}^{N-1} x_n \sin\left[\frac{\pi}{N+1}(n+1)(k+1)\right] & k &= 0,\dots,N-1\\ X_{k-1} &= \sum_{n=1}^{N} x_{n-1} \sin\left[\frac{\pi n k}{N+1}\right] & k &= 1,\dots,N\end{aligned}$$
The DST-I matrix is orthogonal (up to a scale factor).
A DST-I is exactly equivalent to a DFT of a real sequence that is odd around the zero-th and middle points, scaled by 1/2. For example, a DST-I of N=3 real numbers (a,b,c) is exactly equivalent to a DFT of eight real numbers (0,a,b,c,0,−c,−b,−a) (odd symmetry), scaled by 1/2. (In contrast, DST types II–IV involve a half-sample shift in the equivalent DFT.) This is the reason for the N + 1 in the denominator of the sine function: the equivalent DFT has 2(N+1) points and has 2π/2(N+1) in its sinusoid frequency, so the DST-I has π/(N+1) in its frequency.
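The equivalence can be checked directly; this NumPy sketch (our own illustration) computes the DST-I both from its definition and from an FFT of the odd extension:

```python
import numpy as np

def dst1_direct(x):
    # Definition: X_k = sum_n x_n sin(pi (n+1)(k+1) / (N+1)).
    N = len(x)
    n = np.arange(N)
    return np.array([
        np.sum(x * np.sin(np.pi * (n + 1) * (k + 1) / (N + 1)))
        for k in range(N)
    ])

def dst1_via_fft(x):
    # Odd extension (0, x, 0, -x reversed) of length 2(N+1); the DST-I
    # appears (up to the factor -1/2) in the imaginary part of its DFT.
    N = len(x)
    ext = np.concatenate(([0.0], x, [0.0], -x[::-1]))
    return -np.fft.fft(ext).imag[1:N + 1] / 2

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(dst1_direct(x), dst1_via_fft(x))
```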
Thus, the DST-I corresponds to the boundary conditions: xn is odd around n = −1 and odd around n=N; similarly for Xk.
=== DST-II ===
$$X_k = \sum_{n=0}^{N-1} x_n \sin\left[\frac{\pi}{N}\left(n+\frac{1}{2}\right)(k+1)\right] \qquad k = 0,\dots,N-1$$
Some authors further multiply the XN − 1 term by 1/√2 (see below for the corresponding change in DST-III). This makes the DST-II matrix orthogonal (up to a scale factor), but breaks the direct correspondence with a real-odd DFT of half-shifted input.
The DST-II implies the boundary conditions: xn is odd around n = −1/2 and odd around n = N − 1/2; Xk is odd around k = −1 and even around k = N − 1.
=== DST-III ===
$$X_k = \frac{(-1)^k}{2} x_{N-1} + \sum_{n=0}^{N-2} x_n \sin\left[\frac{\pi}{N}(n+1)\left(k+\frac{1}{2}\right)\right] \qquad k = 0,\dots,N-1$$
Some authors further multiply the xN − 1 term by √2 (see above for the corresponding change in DST-II). This makes the DST-III matrix orthogonal (up to a scale factor), but breaks the direct correspondence with a real-odd DFT of half-shifted output.
The DST-III implies the boundary conditions: xn is odd around n = −1 and even around n = N − 1; Xk is odd around k = −1/2 and odd around k = N − 1/2.
=== DST-IV ===
$$X_k = \sum_{n=0}^{N-1} x_n \sin\left[\frac{\pi}{N}\left(n+\frac{1}{2}\right)\left(k+\frac{1}{2}\right)\right] \qquad k = 0,\dots,N-1$$
The DST-IV matrix is orthogonal (up to a scale factor).
The DST-IV implies the boundary conditions: xn is odd around n = −1/2 and even around n = N − 1/2; similarly for Xk.
=== DST V–VIII ===
DST types I–IV are equivalent to real-odd DFTs of even order. In principle, there are actually four additional types of discrete sine transform (Martucci, 1994), corresponding to real-odd DFTs of logically odd order, which have factors of N+1/2 in the denominators of the sine arguments. However, these variants seem to be rarely used in practice.
=== Inverse transforms ===
The inverse of DST-I is DST-I multiplied by 2/(N + 1). The inverse of DST-IV is DST-IV multiplied by 2/N. The inverse of DST-II is DST-III multiplied by 2/N (and vice versa).
As for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by $\sqrt{2/N}$ so that the inverse does not require any additional multiplicative factor.
== Computation ==
Although the direct application of these formulas would require O(N2) operations, it is possible to compute the same thing with only O(N log N) complexity by factorizing the computation similar to the fast Fourier transform (FFT). (One can also compute DSTs via FFTs combined with O(N) pre- and post-processing steps.)
A DST-III or DST-IV can be computed from a DCT-III or DCT-IV (see discrete cosine transform), respectively, by reversing the order of the inputs and flipping the sign of every other output, and vice versa for DST-II from DCT-II. In this way it follows that types II–IV of the DST require exactly the same number of arithmetic operations (additions and multiplications) as the corresponding DCT types.
== Generalizations ==
A family of transforms composed of sine and sine hyperbolic functions exists; these transforms are made based on the natural vibration of thin square plates with different boundary conditions.
== References ==
== Bibliography ==
S. A. Martucci, "Symmetric convolution and the discrete sine and cosine transforms," IEEE Trans. Signal Process. SP-42, 1038–1051 (1994).
Matteo Frigo and Steven G. Johnson: FFTW, FFTW Home Page. A free (GPL) C library that can compute fast DSTs (types I–IV) in one or more dimensions, of arbitrary size. Also M. Frigo and S. G. Johnson, "The Design and Implementation of FFTW3," Proceedings of the IEEE 93 (2), 216–231 (2005).
Takuya Ooura: General Purpose FFT Package, FFT Package 1-dim / 2-dim. Free C & FORTRAN libraries for computing fast DSTs in one, two or three dimensions, power of 2 sizes.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 12.4.1. Sine Transform", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2011-08-11, retrieved 2011-08-13.
R. Chivukula and Y. Reznik, "Fast Computing of Discrete Cosine and Sine Transforms of Types VI and VII," Proc. SPIE Vol. 8135, 2011.
The free energy principle is a mathematical principle of information physics. Its application to fMRI brain imaging data as a theoretical framework suggests that the brain reduces surprise or uncertainty by making predictions based on internal models and uses sensory input to update its models so as to improve the accuracy of its predictions. This principle approximates an integration of Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. From it, wide-ranging inferences have been made about brain function, perception, and action. Its applicability to living systems has been questioned.
== Overview ==
In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled. It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience.
The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems are known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system).
The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sensations and associated perceptions. This difference is quantified by variational free energy and is minimized by continuous correction of the system's world model, or by making the world more like the system's predictions. By actively changing the world to make it closer to the expected state, systems can also minimize their free energy. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.
The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) In a 2018 interview, Friston explained what it entails for the free energy principle to not be subject to falsification:
I think it is useful to make a fundamental distinction at this point—that we can appeal to later.
The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there's not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence.
There are many examples of these hypotheses being supported by empirical evidence.
== Background ==
The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz's work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples.
=== Relationship to other theories ===
Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology, linguistics and communication, semiotics, and epidemiology among others.
Negative free energy is formally equivalent to the evidence lower bound, which is commonly used in machine learning to train generative models, such as variational autoencoders.
== Action and perception ==
Active inference applies the techniques of approximate Bayesian inference to infer the causes of sensory data from a 'generative' model of how that data is caused and then uses these inferences to guide action.
Bayes' rule characterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods.
In active inference, the leading class of such approximate methods are variational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above.
These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method.
This upper bound is known as the free energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information.
This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way.
In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection of probability density functions which together characterize the causal model.
One such specification is as follows.
The system is modelled as inhabiting a state space $X$, in the sense that its states form the points of this space. The state space is then factorized according to $X = \Psi \times S \times A \times R$, where $\Psi$ is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible), $S$ is the space of sensory states that are directly perceived by the agent, $A$ is the space of the agent's possible actions, and $R$ is a space of 'internal' states that are private to the agent. Note that in the following, $\dot{\psi}, \psi, s, a$ and $\mu$ are functions of (continuous) time $t$. The generative model is the specification of the following density functions:
A sensory model, $p_S : S \times \Psi \times A \to \mathbb{R}$, often written as $p_S(s \mid \psi, a)$, characterizing the likelihood of sensory data given external states and actions;

a stochastic model of the environmental dynamics, $p_\Psi : \Psi \times \Psi \times A \to \mathbb{R}$, often written $p_\Psi(\dot{\psi} \mid \psi, a)$, characterizing how the external states are expected by the agent to evolve over time $t$, given the agent's actions;

an action model, $p_A : A \times R \times S \to \mathbb{R}$, written $p_A(a \mid \mu, s)$, characterizing how the agent's actions depend upon its internal states and sensory data; and

an internal model, $p_R : R \times S \to \mathbb{R}$, written $p_R(\mu \mid s)$, characterizing how the agent's internal states depend upon its sensory data.
These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as

$$p(\dot{\psi}, s, a, \mu \mid \psi) = p_S(s \mid \psi, a)\, p_\Psi(\dot{\psi} \mid \psi, a)\, p_A(a \mid \mu, s)\, p_R(\mu \mid s).$$
Bayes' rule then determines the "posterior density" $p_{\text{Bayes}}(\dot{\psi} \mid s, a, \mu, \psi)$, which expresses a probabilistically optimal belief about the external state $\dot{\psi}$ given the preceding state $\psi$ and the agent's actions, sensory signals, and internal states.
Since computing $p_{\text{Bayes}}$ is computationally intractable, the free energy principle asserts the existence of a "variational density" $q(\dot{\psi} \mid s, a, \mu, \psi)$, where $q$ is an approximation to $p_{\text{Bayes}}$.
One then defines the free energy as

$$\begin{aligned}\underbrace{F(\mu, a;\, s)}_{\text{free energy}} &= \underbrace{\mathbb{E}_{q(\dot{\psi})}[-\log p(\dot{\psi}, s, a, \mu \mid \psi)]}_{\text{expected energy}} - \underbrace{\mathbb{H}[q(\dot{\psi} \mid s, a, \mu, \psi)]}_{\text{entropy}}\\ &= \underbrace{-\log p(s)}_{\text{surprise}} + \underbrace{\mathbb{KL}[q(\dot{\psi} \mid s, a, \mu, \psi) \parallel p_{\text{Bayes}}(\dot{\psi} \mid s, a, \mu, \psi)]}_{\text{divergence}}\\ &\geq \underbrace{-\log p(s)}_{\text{surprise}}\end{aligned}$$
and defines action and perception as the joint optimization problem

$$\begin{aligned}\mu^{*} &= \underset{\mu}{\operatorname{arg\,min}}\,\{F(\mu, a;\, s)\}\\ a^{*} &= \underset{a}{\operatorname{arg\,min}}\,\{F(\mu^{*}, a;\, s)\}\end{aligned}$$
where the internal states $\mu$ are typically taken to encode the parameters of the 'variational' density $q$ and hence the agent's "best guess" about the posterior belief over $\Psi$.
Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensory surprise, and hence free energy minimization is often motivated by the minimization of surprise.
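A toy numeric sketch (entirely our own construction, with one binary hidden state) makes the bound concrete: the free energy of any candidate $q$ upper-bounds the surprise, with equality exactly at the Bayes posterior:

```python
import numpy as np

prior = np.array([0.7, 0.3])   # p(psi): prior over a binary hidden state
lik = np.array([0.2, 0.9])     # p(s=1 | psi) for psi = 0, 1

joint = prior * lik            # p(psi, s=1) for the observed s = 1
evidence = joint.sum()         # p(s=1); surprise is -log p(s)
posterior = joint / evidence   # exact Bayes posterior

def free_energy(q):
    # F(q) = E_q[-log p(psi, s)] - H[q] = KL(q || posterior) - log p(s)
    return np.sum(q * -np.log(joint)) + np.sum(q * np.log(q))

for q0 in (0.1, 0.5, posterior[0]):
    q = np.array([q0, 1.0 - q0])
    print(q0, free_energy(q), -np.log(evidence))
# F(q) >= -log p(s), with equality only at q = posterior.
```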
== Free energy minimisation ==
=== Free energy minimisation and self-organisation ===
Free energy minimisation has been proposed as a hallmark of self-organising systems when cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states:
$$\lim_{T \to \infty} \frac{1}{T} \underbrace{\int_{0}^{T} F(s(t), \mu(t))\, dt}_{\text{free-action}} \;\geq\; \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \underbrace{-\log p(s(t) \mid m)}_{\text{surprise}}\, dt = H[p(s \mid m)]$$
This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical systems, non-equilibrium steady states and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems, at the risk of obscuring the features that make biological systems interesting kinds of self-organizing systems.
=== Free energy minimisation and Bayesian inference ===
All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy:
{\displaystyle {\underset {\text{free-energy}}{\underbrace {F(s,\mu )} }}={\underset {\text{complexity}}{\underbrace {D_{\mathrm {KL} }[q(\psi \mid \mu )\parallel p(\psi \mid m)]} }}-{\underset {\text{accuracy}}{\underbrace {E_{q}[\log p(s\mid \psi ,m)]} }}}
Models with minimum free energy provide an accurate explanation of data, under complexity costs; cf. Occam's razor and more formal treatments of computational costs. Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
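As a minimal numerical sketch of this decomposition (all densities below are assumed toy values), one can verify that complexity minus accuracy agrees with the expected-energy-minus-entropy form of free energy:

```python
import numpy as np

prior = np.array([0.5, 0.3, 0.2])        # p(psi | m): prior over 3 hidden states
likelihood = np.array([0.7, 0.2, 0.1])   # p(s | psi, m) for the observed s
q = np.array([0.8, 0.15, 0.05])          # variational density q(psi | mu)

complexity = (q * np.log(q / prior)).sum()   # KL[q || prior]
accuracy = (q * np.log(likelihood)).sum()    # E_q[log p(s | psi, m)]
free_energy = complexity - accuracy

# Cross-check against the expected-energy-minus-entropy form of free energy.
joint = likelihood * prior                   # p(s, psi | m)
check = -(q * np.log(joint)).sum() - (-(q * np.log(q)).sum())
assert np.isclose(free_energy, check)
print(f"complexity = {complexity:.4f}, accuracy = {accuracy:.4f}, F = {free_energy:.4f}")
```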
=== Free energy minimisation and thermodynamics ===
Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz) free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy.
=== Free energy minimisation and information theory ===
Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). This relates free energy minimization to the principle of minimum redundancy.
== Free energy minimisation in neuroscience ==
Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of hidden states:
{\displaystyle \Psi =X\times \Theta \times \Pi }
that can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising variables, parameters, and precision correspond to inference, learning, and the encoding of uncertainty, respectively.
=== Perceptual inference and categorisation ===
Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering (where ~ denotes a variable in generalised coordinates of motion and
{\displaystyle D}
is a derivative matrix operator):
{\displaystyle {\dot {\tilde {\mu }}}=D{\tilde {\mu }}-\partial _{\mu }F(s,\mu ){\Big |}_{\mu ={\tilde {\mu }}}}
Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems.
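A minimal sketch of this gradient-descent scheme, stripped of generalised coordinates and hierarchies, is a single Gaussian hidden state with a linear generative mapping; all parameter values below are assumptions chosen for illustration. For this linear-Gaussian case the descent converges to the exact posterior mean, which makes the behaviour easy to check:

```python
g = 2.0                     # linear generative mapping: s = g * mu + noise
eta, sigma_p = 1.0, 1.0     # prior mean and standard deviation of mu
sigma_s = 0.5               # sensory noise standard deviation
s = 3.0                     # observed sensory sample

def dF_dmu(mu):
    # F is (up to a constant) a precision-weighted sum of squared prediction
    # errors; its gradient combines the sensory and prior errors.
    eps_s = (s - g * mu) / sigma_s**2   # precision-weighted sensory error
    eps_p = (mu - eta) / sigma_p**2     # precision-weighted prior error
    return -g * eps_s + eps_p

mu = 0.0
for _ in range(1000):
    mu -= 0.01 * dF_dmu(mu)             # gradient descent on F

# For a linear-Gaussian model the fixed point is the exact posterior mean.
post_mean = (g * s / sigma_s**2 + eta / sigma_p**2) / (g**2 / sigma_s**2 + 1 / sigma_p**2)
print(f"mu* = {mu:.4f}, exact posterior mean = {post_mean:.4f}")
```

The two precision-weighted error terms in the gradient play the role of the ascending prediction errors exchanged in predictive coding.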
=== Perceptual learning and memory ===
In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.
=== Perceptual precision, attention and salience ===
Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (cf., Kalman gain). In neuronally plausible implementations of predictive coding, this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.
With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during task performance:
{\displaystyle {\dfrac {\partial E^{total}(Y^{VP},X^{SN},x^{CN},y^{KN})}{\partial y_{mn}^{SN}}}=x_{mn}^{CN}-b^{CN}\varepsilon _{nm}^{CN}+b^{CN}\sum _{k}(\varepsilon _{knm}^{KN})}
where {\displaystyle E^{total}} is the total energy function of the neural networks, and {\displaystyle \varepsilon _{knm}^{KN}} is the prediction error between the generative model (prior) and the posterior, changing over time.
Comparing the two models reveals a notable similarity between their respective results while also highlighting a remarkable discrepancy: in the standard version of the SAIM, the model's focus is mainly upon the excitatory connections, whereas in the PE-SAIM the inhibitory connections are leveraged to make an inference. The model has also proven able to predict EEG and fMRI data drawn from human experiments with high precision. In the same vein, Yahya et al. also applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM. According to this study, the total free energy of the whole state-space is reached by inserting top-down signals into the original neural networks, whereby a dynamical system comprising both feed-forward and backward prediction errors is derived.
== Active inference ==
When gradient descent is applied to action
{\displaystyle {\dot {a}}=-\partial _{a}F(s,{\tilde {\mu }})}
, motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories.
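A minimal sketch of this idea (with an assumed, trivially simple sensory mapping s(a) = a) shows action descending the free energy gradient until sensations match the descending prediction:

```python
g, mu = 2.0, 1.5        # fixed internal state: the agent predicts s = g * mu = 3
sigma_s = 0.5           # sensory noise standard deviation

def s_of_a(a):
    return a            # assumed: action directly determines the sensory sample

def dF_da(a):
    eps_s = (s_of_a(a) - g * mu) / sigma_s**2   # sensory prediction error
    ds_da = 1.0                                 # gradient of sensation w.r.t. action
    return eps_s * ds_da

a = 0.0
for _ in range(1000):
    a -= 0.01 * dF_da(a)                        # a_dot = -dF/da

print(f"a* = {a:.4f} -> s(a*) = {s_of_a(a):.4f} fulfils the prediction {g * mu}")
```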
=== Active inference and optimal control ===
Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow
{\displaystyle f=\Gamma \cdot \nabla V+\nabla \times W}
that are specified with scalar {\displaystyle V(x)} and vector {\displaystyle W(x)} value functions of state space (cf. the Helmholtz decomposition). Here, {\displaystyle \Gamma } is the amplitude of random fluctuations and cost is
{\displaystyle c(x)=f\cdot \nabla V+\nabla \cdot \Gamma \cdot V}. The priors over flow
{\displaystyle p({\tilde {x}}\mid m)} induce a prior over states
{\displaystyle p(x\mid m)=\exp(V(x))}
that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that
{\displaystyle W=0}
(i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations.
=== Active inference and optimal decision (game) theory ===
Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. In this setting, states that have a high utility (low cost) are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states.
Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts.
=== Active inference and cognitive neuroscience ===
Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true.
== See also ==
Action-specific perception – Psychological theory that people perceive their environment and events within it
Affordance – Possibility of an action on an object or environment
Autopoiesis – System capable of producing itself
Bayesian approaches to brain function – Explaining the brain's abilities through statistical principles
Constructal law – Law of design evolution in nature, animate and inanimate
Decision theory – Branch of applied probability theory
Embodied cognition – Interdisciplinary theory
Entropic force – Physical force that originates from thermodynamics instead of fundamental interactions
Principle of minimum energy – Thermodynamic formulation based on the second law
Info-metrics – Interdisciplinary approach to scientific modelling and information processing
Optimal control – Mathematical way of attaining a desired output from a dynamic system
Adaptive system – System that can adapt to the environment
Predictive coding – Theory of brain function
Self-organization – Process of creating order by local interactions
Surprisal – Basic quantity derived from the probability of a particular event occurring from a random variable
Synergetics (Haken) – School of thought on thermodynamics and systems phenomena developed by Hermann Haken
Variational Bayesian methods – Mathematical methods used in Bayesian inference and machine learning
== References ==
== External links ==
Behavioral and Brain Sciences (by Andy Clark)
Cognitive neuroscience is the scientific field that is concerned with the study of the biological processes and aspects that underlie cognition, with a specific focus on the neural connections in the brain which are involved in mental processes. It addresses the questions of how cognitive activities are affected or controlled by neural circuits in the brain. Cognitive neuroscience is a branch of both neuroscience and psychology, overlapping with disciplines such as behavioral neuroscience, cognitive psychology, physiological psychology and affective neuroscience. Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neurobiology, and computational modeling.
Parts of the brain play an important role in this field. Neurons play the most vital role, since the main point is to establish an understanding of cognition from a neural perspective, along with the different lobes of the cerebral cortex.
Methods employed in cognitive neuroscience include experimental procedures from psychophysics and cognitive psychology, functional neuroimaging, electrophysiology, cognitive genomics, and behavioral genetics.
Studies of patients with cognitive deficits due to brain lesions constitute an important aspect of cognitive neuroscience. Lesioned brains provide a comparative starting point with respect to healthy, fully functioning brains: the damage alters neural circuits and causes them to malfunction during basic cognitive processes, such as memory or learning. Such damage can be compared with how healthy neural circuits function, allowing researchers to draw conclusions about the basis of the affected cognitive processes. Brain regions whose damage is associated with learning and language disabilities include Wernicke's area, on the left side of the temporal lobe, and Broca's area, close to the frontal lobe.
Also, cognitive abilities based on brain development are studied and examined under the subfield of developmental cognitive neuroscience, which charts brain development over time, analyzing differences and proposing possible explanations for them.
Theoretical approaches include computational neuroscience and cognitive psychology.
== Historical origins ==
Cognitive neuroscience is an interdisciplinary area of study that has emerged from neuroscience and psychology. There are several stages in these disciplines that have changed the way researchers approached their investigations and that led to the field becoming fully established.
Although the task of cognitive neuroscience is to describe the neural mechanisms associated with the mind, historically it has progressed by investigating how a certain area of the brain supports a given mental faculty. However, early efforts to subdivide the brain proved to be problematic. The phrenologist movement failed to supply a scientific basis for its theories and has since been rejected. The aggregate field view, meaning that all areas of the brain participated in all behavior, was also rejected as a result of brain mapping, which began with Hitzig and Fritsch's experiments and eventually developed through methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Gestalt theory, neuropsychology, and the cognitive revolution were major turning points in the creation of cognitive neuroscience as a field, bringing together ideas and techniques that enabled researchers to make more links between behavior and its neural substrates.
While the Ancient Greeks Alcmaeon, Plato, Aristotle in the 5th and 4th centuries BC, and then the Roman physician Galen in the 2nd century AD already argued that the brain is the source of mental activity, scientific research into the connections between brain areas and cognitive functions began in the second half of the 19th century. The founding insights in the establishment of cognitive neuroscience were:
In 1861, French neurologist Paul Broca discovered that a damaged area of the posterior inferior frontal gyrus (pars triangularis, BA45, also known as Broca's area) in patients caused an inability to speak. His work "Localization of Speech in the Third Left Frontal Convolution" in 1865 inspired others to study brain regions, linking them to sensory and motor functions.
In 1870, German physicians Eduard Hitzig and Gustav Fritsch stimulated the cerebral cortex of a dog with electricity, causing different muscles to contract depending on the areas of the brain involved. This led to the suggestion that individual functions are localized to specific areas of the brain.
Italian neuroanatomist professor Camillo Golgi discovered in the 1870s that nerve cells could be stained using silver nitrate, allowing Golgi to argue that all the nerve cells in the nervous system are a continuous, interconnected network.
In 1874, German neurologist and psychiatrist Carl Wernicke hypothesized an association between the left posterior section of the superior temporal gyrus and the reflexive mimicking of words and their syllables.
In 1878, Italian professor of pharmacology and physiology Angelo Mosso associated blood flow with brain functions. He invented the first neuroimaging technique, known as 'human circulation balance'. Angelo Mosso is a forerunner of more refined techniques like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).
In 1887, Spanish neuroanatomist professor Santiago Ramón y Cajal (1852–1934) improved Golgi's method of visualizing nervous tissue under light microscopy by using a technique he termed "double impregnation". He discovered a number of facts about the organization of the nervous system: the nerve cell as an independent cell, insights into degeneration and regeneration, and ideas on brain plasticity.
In 1894, neurologist and psychiatrist Edward Flatau published a human brain atlas “Atlas of the Human Brain and the Course of the Nerve-Fibres” which consisted of long-exposure photographs of fresh brain sections. It contained an overview of the knowledge of the time on the fibre pathways in the central nervous system.
In 1909, German anatomist Korbinian Brodmann published his original research on brain mapping in the monograph Vergleichende Lokalisationslehre der Großhirnrinde (Localisation in the cerebral cortex), defining 52 distinct regions of the cerebral cortex, known as Brodmann areas now, based on regional variations in structure. These Brodmann areas were associated with diverse functions including sensation, motor control, and cognition.
In 1924, German physiologist and psychiatrist Hans Berger (1873–1941) recorded the first human electroencephalogram EEG, discovering the electrical activity of the brain (called brain waves) and, in particular, the alpha wave rhythm, which is a type of brain wave.
The first clinical positron imaging device, a prototype of the modern positron emission tomography (PET) scanner, was invented in 1953 by Dr. Brownell and Dr. Aronow. American scientists specializing in nuclear medicine David Edmund Kuhl, Luke Chapman and Roy Edwards developed this new method of tomographic imaging and constructed several tomographic instruments in the late 1950s. Michael E. Phelps, a chemist by training, built on these insights to construct the first PET scanner in 1973. PET became a valuable research tool to study brain functioning; the technique indirectly measures a radioactivity signal that indicates increased blood flow associated with increased brain activity.
In 1971, American chemist and physicist Paul Christian Lauterbur invented the idea of MR imaging (MRI), for which he received the Nobel Prize in 2003. MRI provides strong contrast between grey and white matter, which makes it the method of choice for studying many conditions of the central nervous system. This method contributed to the development of functional magnetic resonance imaging (fMRI), which has been used in many cognitive neuroscience studies since the 1990s.
=== Origins in philosophy ===
Philosophers have always been interested in the mind: "the idea that explaining a phenomenon involves understanding the mechanism responsible for it has deep roots in the History of Philosophy from atomic theories in 5th century B.C. to its rebirth in the 17th and 18th century in the works of Galileo, Descartes, and Boyle. Among others, it's Descartes' idea that machines humans build could work as models of scientific explanation."
For example, Aristotle thought the brain was the body's cooling system and the capacity for intelligence was located in the heart. It has been suggested that the first person to believe otherwise was the Roman physician Galen in the second century AD, who declared that the brain was the source of mental activity, although this has also been accredited to Alcmaeon. However, Galen believed that personality and emotion were not generated by the brain, but rather by other organs. Andreas Vesalius, an anatomist and physician, was the first to believe that the brain and the nervous system are the center of the mind and emotion. Psychology, a major contributing field to cognitive neuroscience, emerged from philosophical reasoning about the mind.
=== 19th century ===
==== Phrenology ====
One of the predecessors to cognitive neuroscience was phrenology, a pseudoscientific approach that claimed that behavior could be determined by the shape of the scalp. In the early 19th century, Franz Joseph Gall and J. G. Spurzheim believed that the human brain was localized into approximately 35 different sections. In his book, The Anatomy and Physiology of the Nervous System in General, and of the Brain in Particular, Gall claimed that a larger bump in one of these areas meant that that area of the brain was used more frequently by that person. This theory gained significant public attention, leading to the publication of phrenology journals and the creation of phrenometers, which measured the bumps on a human subject's head. While phrenology remained a fixture at fairs and carnivals, it did not enjoy wide acceptance within the scientific community. The major criticism of phrenology is that researchers were not able to test theories empirically.
==== Localizationist view ====
The localizationist view was concerned with mental abilities being localized to specific areas of the brain rather than on what the characteristics of the abilities were and how to measure them. Studies performed in Europe, such as those of John Hughlings Jackson, supported this view. Jackson studied patients with brain damage, particularly those with epilepsy. He discovered that the epileptic patients often made the same clonic and tonic movements of muscle during their seizures, leading Jackson to believe that they must be caused by activity in the same place in the brain every time. Jackson proposed that specific functions were localized to specific areas of the brain, which was critical to future understanding of the brain lobes.
==== Aggregate field view ====
According to the aggregate field view, all areas of the brain participate in every mental function.
Pierre Flourens, a French experimental psychologist, challenged the localizationist view by using animal experiments. He discovered that removing the cerebellum in rabbits and pigeons affected their sense of muscular coordination, and that all cognitive functions were disrupted in pigeons when the cerebral hemispheres were removed. From this he concluded that the cerebral cortex, cerebellum, and brainstem functioned together as a whole. His approach has been criticised on the basis that the tests were not sensitive enough to notice selective deficits had they been present.
==== Emergence of neuropsychology ====
Perhaps the first serious attempts to localize mental functions to specific locations in the brain were by Broca and Wernicke. This was mostly achieved by studying the effects of injuries to different parts of the brain on psychological functions. In 1861, French neurologist Paul Broca came across a man who was able to understand language but unable to speak. The man could only produce the sound "tan". It was later discovered that the man had damage to an area of his left frontal lobe now known as Broca's area. Carl Wernicke, a German neurologist, found a patient who could speak fluently but non-sensibly. The patient had been the victim of a stroke, and could not understand spoken or written language. This patient had a lesion in the area where the left parietal and temporal lobes meet, now known as Wernicke's area. These cases, which suggested that lesions caused specific behavioral changes, strongly supported the localizationist view. Additionally, aphasia, a language disorder, was also described by Paul Broca. According to the Johns Hopkins School of Medicine, aphasia is caused by damage to a specific area of the brain that controls language expression and comprehension, and can often lead to the person speaking strings of words with no sense, known as "word salad".
==== Mapping the brain ====
In 1870, German physicians Eduard Hitzig and Gustav Fritsch published their findings of the behavior of animals. Hitzig and Fritsch ran an electric current through the cerebral cortex of a dog, causing different muscles to contract depending on which areas of the brain were electrically stimulated. This led to the proposition that individual functions are localized to specific areas of the brain rather than the cerebrum as a whole, as the aggregate field view suggests. Brodmann was also an important figure in brain mapping; his experiments based on Franz Nissl's tissue staining techniques divided the brain into fifty-two areas.
=== 20th century ===
==== Cognitive revolution ====
At the start of the 20th century, attitudes in America were characterized by pragmatism, which led to a preference for behaviorism as the primary approach in psychology. J.B. Watson was a key figure with his stimulus-response approach. By conducting experiments on animals he was aiming to be able to predict and control behavior. Behaviorism eventually failed because it could not provide realistic psychology of human action and thought – it focused primarily on stimulus-response associations at the expense of explaining phenomena like thought and imagination. This led to what is often termed the "cognitive revolution".
==== Neuron doctrine ====
In the early 20th century, Santiago Ramón y Cajal and Camillo Golgi began working on the structure of the neuron. Golgi developed a silver staining method that could entirely stain several cells in a particular area, leading him to believe that neurons were directly connected with each other in one cytoplasm. Cajal challenged this view after staining areas of the brain that had less myelin and discovering that neurons were discrete cells. Cajal also discovered that cells transmit electrical signals down the neuron in one direction only. Both Golgi and Cajal were awarded a Nobel Prize in Physiology or Medicine in 1906 for this work on the neuron doctrine.
=== Mid-late 20th century ===
Several findings in the 20th century continued to advance the field, such as the discovery of ocular dominance columns, recording of single nerve cells in animals, and coordination of eye and head movements. Experimental psychology was also significant in the foundation of cognitive neuroscience. Some particularly important results were the demonstration that some tasks are accomplished via discrete processing stages, the study of attention, and the notion that behavioural data do not provide enough information by themselves to explain mental processes. As a result, some experimental psychologists began to investigate neural bases of behaviour.
Wilder Penfield created maps of primary sensory and motor areas of the brain by stimulating the cortices of patients during surgery. The work of Sperry and Gazzaniga on split brain patients in the 1950s was also instrumental in the progress of the field. The term cognitive neuroscience itself was coined by Gazzaniga and cognitive psychologist George Armitage Miller while sharing a taxi in 1976.
==== Brain mapping ====
New brain mapping technology, particularly fMRI and PET, allowed researchers to investigate experimental strategies of cognitive psychology by observing brain function. Although this is often thought of as a new method (most of the technology is relatively recent), the underlying principle goes back as far as 1878 when blood flow was first associated with brain function. Angelo Mosso, an Italian psychologist of the 19th century, had monitored the pulsations of the adult brain through neurosurgically created bony defects in the skulls of patients. He noted that when the subjects engaged in tasks such as mathematical calculations the pulsations of the brain increased locally. Such observations led Mosso to conclude that blood flow of the brain followed function.
Commonly the cerebrum is divided into 5 sections: the frontal lobe, occipital lobe, temporal lobes, parietal lobe, and the insula. The brain is also divided into fissures and sulci. The lateral sulcus called the Sylvian Fissure separates the frontal and temporal lobes. The insula is described as being deep to this lateral fissure. The longitudinal fissure separates the lobes of the brain length-wise. Lobes are considered to be distinct in their distribution of vessels. The overall surface consists of sulci and gyri which are necessary to identify for neuroimaging purposes.
== Notable experiments ==
Throughout the history of cognitive neuroscience, many notable experiments have been conducted. For example, the mental rotation experiment conducted by Kosslyn et al., 1993, indicated that mentally rotating an object via imagination takes about the same amount of time as actually rotating it; they found that mentally rotating an object activates parts of the brain involved in motor functioning, which may explain this similarity.
Another experiment describes the two mechanisms of processing visual attention: bottom-up attention and top-down attention. The researchers define bottom-up attention as the brain visually processing salient images first and then the surrounding information, while top-down attention involves focusing on task-relevant objects first. They found that the ventral stream focuses on visual recognition, while the dorsal stream is involved in the spatial information concerning the object.
What these cognitive neuroscience experiments have in common is that the researchers measure observable activities or behaviors and then determine the neural basis of the function and which part of the brain is involved.
== Emergence of a new discipline ==
=== Birth of cognitive science ===
On September 11, 1956, a large-scale meeting of cognitivists took place at the Massachusetts Institute of Technology. George A. Miller presented his "The Magical Number Seven, Plus or Minus Two" paper while Noam Chomsky and Newell & Simon presented their findings on computer science. Ulric Neisser commented on many of the findings at this meeting in his 1967 book Cognitive Psychology. The term "psychology" had been waning in the 1950s and 1960s, causing the field to be referred to as "cognitive science". Behaviorists such as Miller began to focus on the representation of language rather than general behavior. David Marr concluded that one should understand any cognitive process at three levels of analysis. These levels include computational, algorithmic/representational, and physical levels of analysis.
=== Combining neuroscience and cognitive science ===
Before the 1980s, interaction between neuroscience and cognitive science was scarce. Cognitive neuroscience began to integrate the newly laid theoretical ground in cognitive science, that emerged between the 1950s and 1960s, with approaches in experimental psychology, neuropsychology and neuroscience. (Neuroscience was not established as a unified discipline until 1971). In the late 1970s, neuroscientist Michael S. Gazzaniga and cognitive psychologist George A. Miller were said to have first coined the term "cognitive neuroscience." In the very late 20th century new technologies evolved that are now the mainstay of the methodology of cognitive neuroscience, including TMS (1985) and fMRI (1991). Earlier methods used in cognitive neuroscience include EEG (human EEG 1920) and MEG (1968). Occasionally cognitive neuroscientists utilize other brain imaging methods such as PET and SPECT. An upcoming technique in neuroscience is NIRS which uses light absorption to calculate changes in oxy- and deoxyhemoglobin in cortical areas. In some animals Single-unit recording can be used. Other methods include microneurography, facial EMG, and eye tracking. Integrative neuroscience attempts to consolidate data in databases, and form unified descriptive models from various fields and scales: biology, psychology, anatomy, and clinical practice.
Adaptive resonance theory (ART) is a cognitive neuroscience theory developed by Gail Carpenter and Stephen Grossberg in the late 1970s on aspects of how the brain processes information. It describes a number of artificial neural network models which use supervised and unsupervised learning methods, and address problems such as pattern recognition and prediction.
In 2014, Stanislas Dehaene, Giacomo Rizzolatti and Trevor Robbins, were awarded the Brain Prize "for their pioneering research on higher brain mechanisms underpinning such complex human functions as literacy, numeracy, motivated behaviour and social cognition, and for their efforts to understand cognitive and behavioural disorders". Brenda Milner, Marcus Raichle and John O'Keefe received the Kavli Prize in Neuroscience "for the discovery of specialized brain networks for memory and cognition" and O'Keefe shared the Nobel Prize in Physiology or Medicine in the same year with May-Britt Moser and Edvard Moser "for their discoveries of cells that constitute a positioning system in the brain".
In 2017, Wolfram Schultz, Peter Dayan and Ray Dolan were awarded the Brain Prize "for their multidisciplinary analysis of brain mechanisms that link learning to reward, which has far-reaching implications for the understanding of human behaviour, including disorders of decision-making in conditions such as gambling, drug addiction, compulsive behaviour and schizophrenia".
== Recent trends ==
Recently the focus of research has expanded from the localization of brain areas for specific functions in the adult brain using a single technology. Studies have been diverging in several different directions: exploring the interactions between different brain areas, using multiple technologies and approaches to understand brain functions, and using computational approaches. Advances in non-invasive functional neuroimaging and associated data analysis methods have also made it possible to use highly naturalistic stimuli and tasks such as feature films depicting social interactions in cognitive neuroscience studies.
In recent years, there have been many new advancements in the field of cognitive neuroscience. One new technique that has emerged is called shadow imaging. This method combines different aspects of various neuroimaging techniques to create one that is more versatile: it melds standard light microscopy with fluorescence labeling of the interstitial fluid in the brain's extracellular space. This technique can give researchers a larger and more detailed view of brain tissue, helping them understand more about anatomy and tissue viability for their experiments. It has been used to see neurons, microglia, tumor cells and blood capillaries more closely. Shadow imaging is a new approach that shows a lot of promise in the field of neuroimaging.
Another very recent trend in cognitive neuroscience is the use of optogenetics to explore circuit function and its behavioral consequences. This technology combines genetic targeting of certain neurons with imaging technology to see targets in living neurons. It allows scientists to observe neurons while they are still intact in animals and to trace the electrical activity in those cells. Optogenetics has been used successfully in many experiments and is helping researchers observe brain activity and understand its role in disease, behavior and function.
Researchers have also modified fMRI to make it more efficient, in a technique called direct imaging of neuronal activity, or DIANA. This group of researchers changed the software to collect data every 5 milliseconds, which is 8 times faster than what the standard technique captures. Afterwards, the software stitches together all of the images taken during the imaging to create a full slice of the brain.
In 2024, Igor Val Danilov, professor of bioengineering at RTU Liepaja Academy, introduced the natural neurostimulation hypothesis, which explains the neuromodulation mechanism during pregnancy. Because natural neurostimulation contributes to developing a healthy nervous system during pregnancy, artificial neurostimulation with the physical characteristics of a mother's care for her fetus, scaled to the parameters of the specific patient, may treat an injured nervous system. Based on this insight, the novel APIN neurostimulation technique was introduced. The APIN technique exerts its neurotherapeutic effect by inducing mitochondrial stress and microvascular vasodilation of specific neuronal circuits during an intensive cognitive load.
=== Cognitive Neuroscience and Artificial Intelligence ===
Cognitive neuroscience has played a major role in shaping artificial intelligence (AI). By studying how the human brain processes information, researchers have developed AI systems that simulate cognitive functions like learning, pattern recognition, and decision-making. A good example of this is neural networks, which are inspired by the connections between neurons in the brain. These networks form the foundation of many AI applications.
Deep learning, a subfield of AI, uses neural networks to replicate processes similar to those in the human brain. For instance, convolutional neural networks (CNNs) are modeled after the visual system and have transformed tasks like image recognition and speech analysis. AI also benefits from advancements in brain imaging technologies, such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). These tools provide valuable insights into neural activity, which help improve AI systems designed to mimic human thought processes.
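To make the analogy concrete, the sketch below (with a toy image and assumed filter weights) implements the basic convolution-plus-rectification operation that CNN layers stack; the oriented filter is loosely analogous to a simple-cell receptive field in visual cortex:

```python
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # a vertical light/dark edge

kernel = np.array([[-1.0, 0.0, 1.0],     # vertical-edge detector (assumed weights)
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

h, w = image.shape
kh, kw = kernel.shape
feature_map = np.zeros((h - kh + 1, w - kw + 1))
for i in range(feature_map.shape[0]):
    for j in range(feature_map.shape[1]):
        patch = image[i:i + kh, j:j + kw]
        feature_map[i, j] = max((patch * kernel).sum(), 0.0)   # convolution + ReLU

print(feature_map)   # strong responses only where the edge crosses the filter
```

In a trained network the kernel weights are learned rather than hand-set, but the localized, weight-shared receptive-field structure is the same.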
Despite the progress, replicating the complexity of human cognition remains a challenge. Researchers are now exploring hybrid models that combine neural networks with symbolic reasoning to better mimic how humans think and solve problems. This approach shows promise for addressing some of the limitations of current AI systems.
=== Cognitive Neuroscience and Neurotherapy ===
Cognitive neuroscience has contributed to the development of novel noninvasive neurostimulation methods and has developed in parallel with neurotherapy, which aims to address symptom control and cure several conditions in medical treatment. Noninvasive neurotherapies have attracted significant attention from the scientific community, since these methods can be personalized and used in treatment independent of underlying conditions. Based on research in cognitive neuroscience, neurostimulation techniques apply different innovations to exert an energy-based impact on the nervous system, using electrical, magnetic, and/or electromagnetic energy to treat mental and physical health disorders in patients. Since neurotherapy aims to heal without harm and implements systemic targeted delivery of an energy stimulus to a specific neurological zone in the body to alter neuronal activity and stimulate neuroplasticity, a recent trend in cognitive neuroscience is research into natural neurostimulation.
== Topics ==
Attention
Cognitive development
Consciousness
Creativity
Decision-making
Emotions
Intelligence
Language
Learning
Memory
Perception
Social cognition
Mind Wandering
== Methods ==
Experimental methods include:
Psychophysics
Eye-tracking
Functional magnetic resonance imaging
Electroencephalography
Magnetoencephalography
Electrocorticography
Transcranial Magnetic Stimulation
Computational Modeling
== Notable people ==
Jesper Mogensen, Danish neuroscientist and former university professor
== See also ==
== References ==
== Sources ==
Bear, Mark F.; Connors, Barry W.; Paradiso, Michael A. (2007). Neuroscience. Lippincott Williams & Wilkins. ISBN 978-0-7817-6003-4.
Kosslyn, Stephen Michael; Andersen, Richard A., eds. (1995). Frontiers in Cognitive Neuroscience. MIT Press. ISBN 978-0-262-61110-7.
== Further reading ==
Baars, Bernard J.; Gage, Nicole M. (2010). Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience. Academic Press. ISBN 978-0-12-381440-1.
Churchland, Patricia Smith; Sejnowski, Terrence Joseph (1992). The Computational Brain. MIT Press. ISBN 978-0-262-33965-0.
Code, Chris (2004). "Classic Cases: Ancient and Modern Milestones in the Development of Neuropsychological Science". In Code, Chris; Joanette, Yves; Lecours, André Roch; Wallesch, Claus-W (eds.). Classic Cases in Neuropsychology. pp. 17–25. doi:10.4324/9780203304112-8. ISBN 978-0-203-30411-2.
Enersen, O. D. (2009). John Hughlings Jackson. In: Who Named It. http://www.whonamedit.com/doctor.cfm/2766.html Retrieved 14 August 2009
Gazzaniga, M. S., Ivry, R. B. & Mangun, G. R. (2002). Cognitive Neuroscience: The biology of the mind (2nd ed.). New York: W.W.Norton.
Gallistel, R. (2009). "Memory and the Computational Brain: Why Cognitive Science will Transform Neuroscience." Wiley-Blackwell ISBN 978-1-4051-2287-0.
Gazzaniga, M. S., The Cognitive Neurosciences III, (2004), The MIT Press, ISBN 0-262-07254-8
Gazzaniga, M. S., Ed. (1999). Conversations in the Cognitive Neurosciences, The MIT Press, ISBN 0-262-57117-X.
Sternberg, Eliezer J. Are You a Machine? The Brain, the Mind and What it Means to be Human. Amherst, NY: Prometheus Books.
Ward, Jamie (2015). The Student's Guide to Cognitive Neuroscience (3rd ed.). Psychology Press. ISBN 978-1848722729.
Handbook of Functional Neuroimaging of Cognition By Roberto Cabeza, Alan Kingstone
Principles of neural science By Eric R. Kandel, James H. Schwartz, Thomas M. Jessell
The Cognitive Neuroscience of Memory By Amanda Parker, Edward L. Wilding, Timothy J. Bussey
Neuronal Theories of the Brain By Christof Koch, Joel L. Davis
Cambridge Handbook of Thinking and Reasoning By Keith James Holyoak, Robert G. Morrison
Handbook of Mathematical Cognition By Jamie I. D. Campbell
Cognitive Psychology By Michael W. Eysenck, Mark T. Keane
Development of Intelligence By Mike Anderson
Development of Mental Processing By Andreas Demetriou, et al.
Memory and Thinking By Robert H. Logie, K. J. Gilhooly
Memory Capacity By Nelson Cowan
Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society
Models of Working Memory By Akira Miyake, Priti Shah
Variation in Working Memory By Andrew R. A. Conway, et al.
Cognition and Intelligence By Robert J. Sternberg, Jean E. Pretz
General Factor of Intelligence By Robert J. Sternberg, Elena Grigorenko
Neurological Basis of Learning, Development and Discovery By Anton E. Lawson
Memory and Human Cognition By John T. E. Richardson
Society for Neuroscience. https://web.archive.org/web/20090805111859/http://www.sfn.org/index.cfm?pagename=about_SfN#timeline Retrieved 14 August 2009
Keiji Tanaka,"Current Opinion in Neurobiology", (2007)
== External links ==
Cognitive Neuroscience Society Homepage
There's Something about Zero
What Is Cognitive Neuroscience?, Jamie Ward/Psychology Press
goCognitive - Educational Tools for Cognitive Neuroscience (including video interviews)
CogNet, The Brain and Cognitive Sciences Community Online, MIT
Cognitive Neuroscience Arena, Psychology Press
Cognitive Neuroscience and Philosophy, CUJCS, Spring 2002
Whole Brain Atlas Top 100 Brain Structures
Cognitive Neuroscience Discussion Group
John Jonides, a big role in Cognitive Neurosciences by Beebrite
Introduction to Cognitive Neuroscience
AgliotiLAB - Social and Cognitive Neuroscience Laboratory founded in 2003 in Rome, Italy
Related Wikibooks
Wikibook on cognitive psychology and cognitive neuroscience
Wikibook on consciousness studies
Cognitive Neuroscience chapter of the Wikibook on neuroscience
Computational Cognitive Neuroscience wikibook Archived 2019-07-24 at the Wayback Machine
Unlocking the Brain: Inside Cognitive Neuroscience – Exam Sage – An in-depth educational article exploring core concepts and emerging research areas in cognitive neuroscience, including memory, perception, attention, neuroimaging techniques, and the neural basis of behavior.
Quantum information science is a field that combines the principles of quantum mechanics with information theory to study the processing, analysis, and transmission of information. It covers both theoretical and experimental aspects of quantum physics, including the limits of what can be achieved with quantum information. The term quantum information theory is sometimes used, but it does not include experimental research and can be confused with a subfield of quantum information science that deals with the processing of quantum information.
== Scientific and engineering studies ==
Quantum teleportation, entanglement and the manufacturing of quantum computers depend on a comprehensive understanding of quantum physics and engineering. Google and IBM have invested significantly in quantum computer hardware research, leading to significant progress in manufacturing quantum computers since the 2010s. Currently, it is possible to create a quantum computer with over 100 qubits, but the error rate is high due to the lack of suitable materials for quantum computer manufacturing. Majorana fermions may be a crucial missing material.
Quantum cryptography devices are now available for commercial use. The one time pad, a cipher used by spies during the Cold War, uses a sequence of random keys for encryption. These keys can be securely exchanged using quantum entangled particle pairs, as the principles of the no-cloning theorem and wave function collapse ensure the secure exchange of the random keys. The development of devices that can transmit quantum entangled particles is a significant scientific and engineering goal.
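The one-time pad itself is simple to illustrate. In the sketch below, the key is drawn from an ordinary cryptographic random generator as a stand-in; in the quantum setting described above, the shared key bytes would instead be derived from measurements on entangled particle pairs:

```python
import secrets

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # stand-in for a QKD-derived shared key

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message               # XOR with the same key undoes the pad
print(ciphertext.hex())
```

The scheme is information-theoretically secure only if the key is truly random, at least as long as the message, and never reused, which is exactly the supply problem quantum key distribution addresses.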
Qiskit, Cirq and Q Sharp are popular quantum programming languages. Additional programming languages for quantum computers are needed, as well as a larger community of competent quantum programmers. To this end, additional learning resources are needed, since there are many fundamental differences in quantum programming which limits the number of skills that can be carried over from traditional programming.
== Related mathematical subjects ==
Quantum algorithms and quantum complexity theory are two of the subjects in algorithms and computational complexity theory. In 1994, mathematician Peter Shor introduced a quantum algorithm for prime factorization that, with a quantum computer containing 4,000 logical qubits, could potentially break widely used ciphers like RSA and ECC, posing a major security threat. This led to increased investment in quantum computing research and the development of post-quantum cryptography to prepare for the fault-tolerant quantum computing (FTQC) era.
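The number-theoretic core of Shor's algorithm can be sketched classically: once the multiplicative order r of a modulo N is known, factors of N follow from greatest common divisors. The sketch below finds r by brute force, which is the exponentially hard step that quantum period finding replaces; N = 15 and a = 7 are illustrative choices:

```python
from math import gcd

def order(a, N):
    # Brute-force order finding: the exponentially hard step that quantum
    # period finding (via the quantum Fourier transform) replaces.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                 # illustrative choice with gcd(a, N) = 1
r = order(a, N)              # r = 4 here
assert r % 2 == 0            # for unlucky (a, r) pairs one retries with another a
x = pow(a, r // 2, N)
p, q = gcd(x - 1, N), gcd(x + 1, N)
print(f"order r = {r}, factors: {p} x {q} = {N}")   # 3 x 5 = 15
```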
== See also ==
== References ==
Nielsen, Michael A.; Chuang, Isaac L. (June 2012). Quantum Computation and Quantum Information (10th anniversary ed.). Cambridge: Cambridge University Press. ISBN 9780511992773. OCLC 700706156.
== External links ==
Quantiki – quantum information science portal and wiki.
ERA-Pilot QIST WP1 European roadmap on Quantum Information Processing and Communication
QIIC – Quantum Information, Imperial College London.
QIP – Quantum Information Group, University of Leeds. The quantum information group at the University of Leeds is engaged in researching a wide spectrum of aspects of quantum information. This ranges from algorithms, quantum computation, to physical implementations of information processing and fundamental issues in quantum mechanics. Also contains some basic tutorials for the lay audience.
mathQI Research Group on Mathematics and Quantum Information.
CQIST Center for Quantum Information Science & Technology at the University of Southern California
CQuIC Center for Quantum Information and Control, including theoretical and experimental groups from University of New Mexico, University of Arizona.
CQT Centre for Quantum Technologies at the National University of Singapore
CQC2T Centre for Quantum Computation and Communication Technology
QST@LSU Quantum Science and Technologies Group at Louisiana State University
In theoretical computer science, a circuit is a model of computation in which input values proceed through a sequence of gates, each of which computes a function. Circuits of this kind provide a generalization of Boolean circuits and a mathematical model for digital logic circuits. Circuits are defined by the gates they contain and the values the gates can produce. For example, the values in a Boolean circuit are Boolean values, and the circuit includes conjunction, disjunction, and negation gates. The values in an integer circuit are sets of integers and the gates compute set union, set intersection, and set complement, as well as the arithmetic operations addition and multiplication.
== Formal definition ==
A circuit is a triplet {\displaystyle (M,L,G)}, where {\displaystyle M} is a set of values, {\displaystyle L} is a set of gate labels, each of which is a function from {\displaystyle M^{i}} to {\displaystyle M} for some non-negative integer {\displaystyle i} (where {\displaystyle i} represents the number of inputs to the gate), and {\displaystyle G} is a labelled directed acyclic graph with labels from {\displaystyle L}.
The vertices of the graph are called gates. For each gate {\displaystyle g} of in-degree {\displaystyle i}, the gate {\displaystyle g} can be labeled by an element {\displaystyle \ell } of {\displaystyle L} if and only if {\displaystyle \ell } is defined on {\displaystyle M^{i}}.
=== Terminology ===
The gates of in-degree 0 are called inputs or leaves. The gates of out-degree 0 are called outputs. If there is an edge from gate {\displaystyle g} to gate {\displaystyle h} in the graph {\displaystyle G}, then {\displaystyle h} is called a child of {\displaystyle g}. We suppose there is an order on the vertices of the graph, so we can speak of the {\displaystyle k}th child of a gate when {\displaystyle k} is less than or equal to the out-degree of the gate.
The size of a circuit is the number of nodes of a circuit. The depth of a gate {\displaystyle g} is the length of the longest path in {\displaystyle G} beginning at {\displaystyle g} up to an output gate. In particular, the gates of out-degree 0 are the only gates of depth 1. The depth of a circuit is the maximum depth of any gate.
Level {\displaystyle i} is the set of all gates of depth {\displaystyle i}. A levelled circuit is a circuit in which the edges to gates of depth {\displaystyle i} come only from gates of depth {\displaystyle i+1} or from the inputs. In other words, edges only exist between adjacent levels of the circuit. The width of a levelled circuit is the maximum size of any level.
=== Evaluation ===
The exact value {\displaystyle V(g)} of a gate {\displaystyle g} with in-degree {\displaystyle i} and label {\displaystyle l} is defined recursively for all gates {\displaystyle g}:
{\displaystyle V(g)={\begin{cases}l&{\text{if }}g{\text{ is an input}}\\l(V(g_{1}),\dotsc ,V(g_{i}))&{\text{otherwise,}}\end{cases}}}
where each {\displaystyle g_{j}} is a parent of {\displaystyle g}.
The value of the circuit is the value of each of the output gates.
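A minimal sketch of these definitions in code: gates hold either a value of M (for inputs) or a gate label, and V(g) is computed recursively. The Boolean example circuit is illustrative, not from the article:

```python
class Gate:
    def __init__(self, label, parents=()):
        # `parents` are the gates feeding into this one (its in-edges);
        # an empty tuple means in-degree 0, i.e. an input whose label is a value.
        self.label = label
        self.parents = list(parents)

    def value(self):
        # V(g) = l if g is an input, else l(V(g_1), ..., V(g_i)).
        if not self.parents:
            return self.label
        return self.label(*(p.value() for p in self.parents))

# A Boolean example: M = {False, True} with NOT and AND as gate labels.
x = Gate(True)
y = Gate(False)
not_y = Gate(lambda a: not a, [y])
out = Gate(lambda a, b: a and b, [x, not_y])
print(out.value())   # True
```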
== Circuits as functions ==
The labels of the leaves can also be variables which take values in {\displaystyle M}. If there are {\displaystyle n} leaves, then the circuit can be seen as a function from {\displaystyle M^{n}} to {\displaystyle M}. It is then usual to consider a family of circuits {\displaystyle (C_{n})_{n\in \mathbb {N} }}, a sequence of circuits indexed by the integers, where the circuit {\displaystyle C_{n}} has {\displaystyle n} variables. Families of circuits can thus be seen as functions from {\displaystyle M^{*}} to {\displaystyle M}.
The notions of size, depth and width can be naturally extended to families of functions, becoming functions from {\displaystyle \mathbb {N} } to {\displaystyle \mathbb {N} }; for example, {\displaystyle size(n)} is the size of the {\displaystyle n}th circuit of the family.
== Complexity and algorithmic problems ==
Computing the output of a given Boolean circuit on a specific input is a P-complete problem. If the input is an integer circuit, however, it is unknown whether this problem is decidable.
Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them.
== See also ==
Arithmetic circuit complexity
Boolean circuit
Circuit complexity
Circuits over sets of natural numbers
The complexity classes NC, AC and TC
Quantum circuit and BQP
== References ==
Vollmer, Heribert (1999). Introduction to Circuit Complexity. Berlin: Springer. ISBN 978-3-540-64310-4.
Yang, Ke (2001). "Integer Circuit Evaluation Is PSPACE-Complete". Journal of Computer and System Sciences. 63 (2, September 2001): 288–303. doi:10.1006/jcss.2001.1768. ISSN 0022-0000.
In probability and statistics, a probability mass function (sometimes called probability function or frequency function) is a function that gives the probability that a discrete random variable is exactly equal to some value. Sometimes it is also known as the discrete probability density function. The probability mass function is often the primary means of defining a discrete probability distribution, and such functions exist for either scalar or multivariate random variables whose domain is discrete.
A probability mass function differs from a continuous probability density function (PDF) in that the latter is associated with continuous rather than discrete random variables. A continuous PDF must be integrated over an interval to yield a probability.
The value of the random variable having the largest probability mass is called the mode.
== Formal definition ==
Probability mass function is the probability distribution of a discrete random variable, and provides the possible values and their associated probabilities. It is the function {\displaystyle p:\mathbb {R} \to [0,1]} defined by {\displaystyle p_{X}(x)=P(X=x)} for {\displaystyle -\infty <x<\infty }, where {\displaystyle P} is a probability measure. {\displaystyle p_{X}(x)} can also be simplified as {\displaystyle p(x)}.
The probabilities associated with all (hypothetical) values must be non-negative and sum up to 1: {\displaystyle \sum _{x}p_{X}(x)=1} and {\displaystyle p_{X}(x)\geq 0.} Thinking of probability as mass helps to avoid mistakes, since the physical mass is conserved, as is the total probability for all hypothetical outcomes {\displaystyle x}.
== Measure theoretic formulation ==
A probability mass function of a discrete random variable {\displaystyle X} can be seen as a special case of two more general measure-theoretic constructions: the distribution of {\displaystyle X} and the probability density function of {\displaystyle X} with respect to the counting measure. We make this more precise below.
Suppose that {\displaystyle (A,{\mathcal {A}},P)} is a probability space and that {\displaystyle (B,{\mathcal {B}})} is a measurable space whose underlying σ-algebra is discrete, so in particular contains singleton sets of {\displaystyle B}. In this setting, a random variable {\displaystyle X\colon A\to B} is discrete provided its image is countable.
The pushforward measure {\displaystyle X_{*}(P)}—called the distribution of {\displaystyle X} in this context—is a probability measure on {\displaystyle B} whose restriction to singleton sets induces the probability mass function (as mentioned in the previous section) {\displaystyle f_{X}\colon B\to \mathbb {R} }, since {\displaystyle f_{X}(b)=P(X^{-1}(b))=P(X=b)} for each {\displaystyle b\in B}.
Now suppose that {\displaystyle (B,{\mathcal {B}},\mu )} is a measure space equipped with the counting measure {\displaystyle \mu }. The probability density function {\displaystyle f} of {\displaystyle X} with respect to the counting measure, if it exists, is the Radon–Nikodym derivative of the pushforward measure of {\displaystyle X} (with respect to the counting measure), so {\displaystyle f=dX_{*}P/d\mu } and {\displaystyle f} is a function from {\displaystyle B} to the non-negative reals. As a consequence, for any {\displaystyle b\in B} we have {\displaystyle P(X=b)=P(X^{-1}(b))=X_{*}(P)(b)=\int _{b}fd\mu =f(b),} demonstrating that {\displaystyle f} is in fact a probability mass function.
When there is a natural order among the potential outcomes {\displaystyle x}, it may be convenient to assign numerical values to them (or n-tuples in case of a discrete multivariate random variable) and to consider also values not in the image of {\displaystyle X}. That is, {\displaystyle f_{X}} may be defined for all real numbers, with {\displaystyle f_{X}(x)=0} for all {\displaystyle x\notin X(S)}, as shown in the figure.
The image of {\displaystyle X} has a countable subset on which the probability mass function {\displaystyle f_{X}(x)} sums to one. Consequently, the probability mass function is zero for all but a countable number of values of {\displaystyle x}.
The discontinuity of probability mass functions is related to the fact that the cumulative distribution function of a discrete random variable is also discontinuous. If {\displaystyle X} is a discrete random variable, then {\displaystyle P(X=x)=1} means that the event {\displaystyle (X=x)} is certain (it is true in 100% of the occurrences); on the contrary, {\displaystyle P(X=x)=0} means that the event {\displaystyle (X=x)} is always impossible. This statement is not true for a continuous random variable {\displaystyle X}, for which {\displaystyle P(X=x)=0} for any possible {\displaystyle x}. Discretization is the process of converting a continuous random variable into a discrete one.
== Examples ==
=== Finite ===
There are three major distributions associated: the Bernoulli distribution, the binomial distribution and the geometric distribution.
Bernoulli distribution: ber(p), is used to model an experiment with only two possible outcomes. The two outcomes are often encoded as 1 and 0. {\displaystyle p_{X}(x)={\begin{cases}p,&{\text{if }}x{\text{ is 1}}\\1-p,&{\text{if }}x{\text{ is 0}}\end{cases}}}
An example of the Bernoulli distribution is tossing a coin. Suppose that {\displaystyle S} is the sample space of all outcomes of a single toss of a fair coin, and {\displaystyle X} is the random variable defined on {\displaystyle S} assigning 0 to the category "tails" and 1 to the category "heads". Since the coin is fair, the probability mass function is {\displaystyle p_{X}(x)={\begin{cases}{\frac {1}{2}},&x=0,\\{\frac {1}{2}},&x=1,\\0,&x\notin \{0,1\}.\end{cases}}}
The binomial distribution models the number of successes when someone draws n times with replacement. Each draw or experiment is independent, with two possible outcomes. The associated probability mass function is {\textstyle {\binom {n}{k}}p^{k}(1-p)^{n-k}}. An example of the binomial distribution is the probability of getting exactly one 6 when someone rolls a fair die three times.
The geometric distribution describes the number of trials needed to get one success. Its probability mass function is {\textstyle p_{X}(k)=(1-p)^{k-1}p}. An example is tossing a coin until the first "heads" appears; {\displaystyle p} denotes the probability of the outcome "heads", and {\displaystyle k} denotes the number of necessary coin tosses.
Other distributions that can be modeled using a probability mass function are the categorical distribution (also known as the generalized Bernoulli distribution) and the multinomial distribution.
If the discrete distribution has two or more categories, one of which may occur in a single trial (draw), whether or not those categories have a natural ordering, this is a categorical distribution.
An example of a multivariate discrete distribution, and of its probability mass function, is provided by the multinomial distribution. Here the multiple random variables are the numbers of successes in each of the categories after a given number of trials, and each non-zero probability mass gives the probability of a certain combination of numbers of successes in the various categories.
=== Infinite ===
The following exponentially declining distribution is an example of a distribution with an infinite number of possible outcomes—all the positive integers:
{\displaystyle {\text{Pr}}(X=i)={\frac {1}{2^{i}}}\qquad {\text{for }}i=1,2,3,\dots }
Despite the infinite number of possible outcomes, the total probability mass is 1/2 + 1/4 + 1/8 + ⋯ = 1, satisfying the unit total probability requirement for a probability distribution.
== Multivariate case ==
Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables.
== References ==
== Further reading ==
Johnson, N. L.; Kotz, S.; Kemp, A. (1993). Univariate Discrete Distributions (2nd ed.). Wiley. p. 36. ISBN 0-471-54897-9.
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).
== Definition ==
=== One level of the transform ===
The DWT of a signal {\displaystyle x} is calculated by passing it through a series of filters. First the samples are passed through a low-pass filter with impulse response {\displaystyle g}, resulting in a convolution of the two: {\displaystyle y[n]=(x*g)[n]=\sum \limits _{k=-\infty }^{\infty }{x[k]g[n-k]}}
The signal is also decomposed simultaneously using a high-pass filter {\displaystyle h}. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; together they are known as a quadrature mirror filter.
However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The output of the low-pass filter {\displaystyle g} in the diagram above is then subsampled by 2 and further processed by passing it again through a new low-pass filter {\displaystyle g} and a high-pass filter {\displaystyle h} with half the cut-off frequency of the previous one, i.e.:
{\displaystyle y_{\mathrm {low} }[n]=\sum \limits _{k=-\infty }^{\infty }{x[k]g[2n-k]}}
{\displaystyle y_{\mathrm {high} }[n]=\sum \limits _{k=-\infty }^{\infty }{x[k]h[2n-k]}}
This decomposition has halved the time resolution since only half of each filter output characterises the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled.
With the subsampling operator {\displaystyle \downarrow }, defined by {\displaystyle (y\downarrow k)[n]=y[kn]}, the above summations can be written more concisely:
{\displaystyle y_{\mathrm {low} }=(x*g)\downarrow 2}
{\displaystyle y_{\mathrm {high} }=(x*h)\downarrow 2}
However, computing a complete convolution {\displaystyle x*g} with subsequent downsampling would waste computation time.
The Lifting scheme is an optimization where these two computations are interleaved.
=== Cascading and filter banks ===
This decomposition is repeated to further increase the frequency resolution, with the approximation coefficients decomposed with high- and low-pass filters and then down-sampled. This is represented as a binary tree with nodes representing a sub-space with a different time-frequency localisation. The tree is known as a filter bank.
At each level in the above diagram the signal is decomposed into low and high frequencies. Due to the decomposition process the input signal must be a multiple of {\displaystyle 2^{n}}, where {\displaystyle n} is the number of levels.
For example, for a signal with 32 samples, a frequency range of 0 to {\displaystyle f_{n}} and 3 levels of decomposition, 4 output scales are produced:
=== Relationship to the mother wavelet ===
The filterbank implementation of wavelets can be interpreted as computing the wavelet coefficients of a discrete set of child wavelets for a given mother wavelet {\displaystyle \psi (t)}. In the case of the discrete wavelet transform, the mother wavelet is shifted and scaled by powers of two: {\displaystyle \psi _{j,k}(t)={\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {t-k2^{j}}{2^{j}}}\right)} where {\displaystyle j} is the scale parameter and {\displaystyle k} is the shift parameter, both of which are integers.
Recall that the wavelet coefficient {\displaystyle \gamma } of a signal {\displaystyle x(t)} is the projection of {\displaystyle x(t)} onto a wavelet, and let {\displaystyle x(t)} be a signal of length {\displaystyle 2^{N}}. In the case of a child wavelet in the discrete family above, {\displaystyle \gamma _{jk}=\int _{-\infty }^{\infty }x(t){\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {t-k2^{j}}{2^{j}}}\right)dt}
Now fix {\displaystyle j} at a particular scale, so that {\displaystyle \gamma _{jk}} is a function of {\displaystyle k} only. In light of the above equation, {\displaystyle \gamma _{jk}} can be viewed as a convolution of {\displaystyle x(t)} with a dilated, reflected, and normalized version of the mother wavelet, {\displaystyle h(t)={\frac {1}{\sqrt {2^{j}}}}\psi \left({\frac {-t}{2^{j}}}\right)}, sampled at the points {\displaystyle 1,2^{j},2\cdot {2^{j}},...,2^{N}}. But this is precisely what the detail coefficients give at level {\displaystyle j} of the discrete wavelet transform. Therefore, for an appropriate choice of {\displaystyle h[n]} and {\displaystyle g[n]}, the detail coefficients of the filter bank correspond exactly to a wavelet coefficient of a discrete set of child wavelets for a given mother wavelet {\displaystyle \psi (t)}.
As an example, consider the discrete Haar wavelet, whose mother wavelet is {\displaystyle \psi =[1,-1]}. Then the dilated, reflected, and normalized version of this wavelet is {\displaystyle h[n]={\frac {1}{\sqrt {2}}}[-1,1]}, which is, indeed, the highpass decomposition filter for the discrete Haar wavelet transform.
=== Time complexity ===
The filterbank implementation of the Discrete Wavelet Transform takes only O(N) in certain cases, as compared to O(N log N) for the fast Fourier transform.
Note that if {\displaystyle g[n]} and {\displaystyle h[n]} are both of constant length (i.e. their length is independent of N), then {\displaystyle x*h} and {\displaystyle x*g} each take O(N) time. The wavelet filterbank does each of these two O(N) convolutions, then splits the signal into two branches of size N/2. But it only recursively splits the upper branch convolved with {\displaystyle g[n]} (as contrasted with the FFT, which recursively splits both the upper branch and the lower branch). This leads to the following recurrence relation: {\displaystyle T(N)=2N+T\left({\frac {N}{2}}\right)} which leads to an O(N) time for the entire operation, as can be shown by a geometric series expansion of the above relation.
As an example, the discrete Haar wavelet transform is linear, since in that case {\displaystyle h[n]} and {\displaystyle g[n]} are of constant length 2: {\displaystyle h[n]=\left[{\frac {-{\sqrt {2}}}{2}},{\frac {\sqrt {2}}{2}}\right]\qquad g[n]=\left[{\frac {\sqrt {2}}{2}},{\frac {\sqrt {2}}{2}}\right]}
The locality of wavelets, coupled with the O(N) complexity, guarantees that the transform can be computed online (on a streaming basis). This property is in sharp contrast to FFT, which requires access to the entire signal at once. It also applies to the multi-scale transform and also to the multi-dimensional transforms (e.g., 2-D DWT).
== Examples ==
=== Haar wavelets ===
The first DWT was invented by Hungarian mathematician Alfréd Haar. For an input represented by a list of {\displaystyle 2^{n}} numbers, the Haar wavelet transform may be considered to pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, which leads to {\displaystyle 2^{n}-1} differences and a final sum.
=== Daubechies wavelets ===
The most commonly used set of discrete wavelet transforms was formulated by the Belgian mathematician Ingrid Daubechies in 1988. This formulation is based on the use of recurrence relations to generate progressively finer discrete samplings of an implicit mother wavelet function; each resolution is twice that of the previous scale. In her seminal paper, Daubechies derives a family of wavelets, the first of which is the Haar wavelet. Interest in this field has exploded since then, and many variations of Daubechies' original wavelets were developed.
=== The dual-tree complex wavelet transform (DCWT) ===
The dual-tree complex wavelet transform ({\displaystyle \mathbb {C} }WT) is a relatively recent enhancement to the discrete wavelet transform (DWT), with important additional properties: it is nearly shift invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only {\displaystyle 2^{d}}, substantially lower than the undecimated DWT. The multidimensional (M-D) dual-tree {\displaystyle \mathbb {C} }WT is nonseparable but is based on a computationally efficient, separable filter bank (FB).
=== Others ===
Other forms of discrete wavelet transform include the Le Gall–Tabatabai (LGT) 5/3 wavelet developed by Didier Le Gall and Ali J. Tabatabai in 1988 (used in JPEG 2000 and JPEG XS), the Binomial QMF developed by Ali Naci Akansu in 1990, the set partitioning in hierarchical trees (SPIHT) algorithm developed by Amir Said with William A. Pearlman in 1996, the non- or undecimated wavelet transform (where downsampling is omitted), and the Newland transform (where an orthonormal basis of wavelets is formed from appropriately constructed top-hat filters in frequency space). Wavelet packet transforms are also related to the discrete wavelet transform; the complex wavelet transform is another form.
=== Coding ===
Complete Java code for a 1-D and 2-D DWT using Haar, Daubechies, Coiflet, and Legendre wavelets is available from the open source project: JWave.
Furthermore, a fast lifting implementation of the discrete biorthogonal CDF 9/7 wavelet transform in C, used in the JPEG 2000 image compression standard can be found here (archived 5 March 2012).
An example implementation of the Haar wavelet transform is given below.
Applying such code to compute the Haar wavelet coefficients on a sound waveform highlights two key properties of the wavelet transform:
Natural signals often have some degree of smoothness, which makes them sparse in the wavelet domain. There are far fewer significant components in the wavelet domain in this example than there are in the time domain, and most of the significant components are towards the coarser coefficients on the left. Hence, natural signals are compressible in the wavelet domain.
The wavelet transform is a multiresolution, bandpass representation of a signal. This can be seen directly from the filterbank definition of the discrete wavelet transform given in this article. For a signal of length {\displaystyle 2^{N}}, the coefficients in the range {\displaystyle [2^{N-j},2^{N-j+1}]} represent a version of the original signal which is in the pass-band {\displaystyle \left[{\frac {\pi }{2^{j}}},{\frac {\pi }{2^{j-1}}}\right]}. This is why zooming in on these ranges of the wavelet coefficients looks so similar in structure to the original signal. Ranges which are closer to the left (larger {\displaystyle j} in the above notation) are coarser representations of the signal, while ranges to the right represent finer details.
== Properties ==
The Haar DWT illustrates the desirable properties of wavelets in general. First, it can be performed in {\displaystyle O(n)} operations; second, it captures not only a notion of the frequency content of the input, by examining it at different scales, but also temporal content, i.e. the times at which these frequencies occur. Combined, these two properties make the fast wavelet transform (FWT) an alternative to the conventional fast Fourier transform (FFT).
=== Time issues ===
Due to the rate-change operators in the filter bank, the discrete WT is not time-invariant but actually very sensitive to the alignment of the signal in time. To address the time-varying problem of wavelet transforms, Mallat and Zhong proposed a new algorithm for wavelet representation of a signal, which is invariant to time shifts. According to this algorithm, which is called a TI-DWT, only the scale parameter is sampled along the dyadic sequence {\displaystyle 2^{j}} ({\displaystyle j\in \mathbb {Z} }) and the wavelet transform is calculated for each point in time.
== Applications ==
The discrete wavelet transform has a huge number of applications in science, engineering, mathematics and computer science. Most notably, it is used for signal coding, to represent a discrete signal in a more redundant form, often as a preconditioning for data compression. Practical applications can also be found in signal processing of accelerations for gait analysis, image processing, in digital communications and many others.
It has been shown that the discrete wavelet transform (discrete in scale and shift, and continuous in time) can be successfully implemented as an analog filter bank in biomedical signal processing, for the design of low-power pacemakers, and also in ultra-wideband (UWB) wireless communications.
=== Image processing ===
Wavelets are often used to denoise two dimensional signals, such as images. The following example provides three steps to remove unwanted white Gaussian noise from the noisy image shown. Matlab was used to import and filter the image.
The first step is to choose a wavelet type, and a level N of decomposition. In this case biorthogonal 3.5 wavelets were chosen with a level N of 10. Biorthogonal wavelets are commonly used in image processing to detect and filter white Gaussian noise, due to their high contrast of neighboring pixel intensity values. Using these wavelets a wavelet transformation is performed on the two dimensional image.
Following the decomposition of the image file, the next step is to determine threshold values for each level from 1 to N. The Birgé-Massart strategy is a fairly common method for selecting these thresholds. Using this process, individual thresholds are made for the N = 10 levels. Applying these thresholds performs the majority of the actual filtering of the signal.
The final step is to reconstruct the image from the modified levels. This is accomplished using an inverse wavelet transform. The resulting image, with white Gaussian noise removed is shown below the original image. When filtering any form of data it is important to quantify the signal-to-noise-ratio of the result. In this case, the SNR of the noisy image in comparison to the original was 30.4958%, and the SNR of the denoised image is 32.5525%. The resulting improvement of the wavelet filtering is a SNR gain of 2.0567%.
Choosing other wavelets, levels, and thresholding strategies can result in different types of filtering. In this example, white Gaussian noise was chosen to be removed; with different thresholding, it could just as easily have been amplified.
To illustrate the differences and similarities between the discrete wavelet transform with the discrete Fourier transform, consider the DWT and DFT of the following sequence: (1,0,0,0), a unit impulse.
The DFT has orthogonal basis (DFT matrix):
{\displaystyle {\begin{bmatrix}1&1&1&1\\1&-i&-1&i\\1&-1&1&-1\\1&i&-1&-i\end{bmatrix}}}
while the DWT with Haar wavelets for length 4 data has orthogonal basis in the rows of:
{\displaystyle {\begin{bmatrix}1&1&1&1\\1&1&-1&-1\\1&-1&0&0\\0&0&1&-1\end{bmatrix}}}
(To simplify notation, whole numbers are used, so the bases are orthogonal but not orthonormal.)
Preliminary observations include:
Sinusoidal waves differ only in their frequency. The first does not complete any cycles, the second completes one full cycle, the third completes two cycles, and the fourth completes three cycles (which is equivalent to completing one cycle in the opposite direction). Differences in phase can be represented by multiplying a given basis vector by a complex constant.
Wavelets, by contrast, have both frequency and location. As before, the first completes zero cycles, and the second completes one cycle. However, the third and fourth both have the same frequency, twice that of the first. Rather than differing in frequency, they differ in location — the third is nonzero over the first two elements, and the fourth is nonzero over the second two elements.
{\displaystyle {\begin{aligned}(1,0,0,0)&={\frac {1}{4}}(1,1,1,1)+{\frac {1}{4}}(1,1,-1,-1)+{\frac {1}{2}}(1,-1,0,0)\qquad {\text{Haar DWT}}\\(1,0,0,0)&={\frac {1}{4}}(1,1,1,1)+{\frac {1}{4}}(1,i,-1,-i)+{\frac {1}{4}}(1,-1,1,-1)+{\frac {1}{4}}(1,-i,-1,i)\qquad {\text{DFT}}\end{aligned}}}
The DWT demonstrates the localization: the (1,1,1,1) term gives the average signal value, the (1,1,–1,–1) term places the signal in the left side of the domain, and the (1,–1,0,0) term places it at the left side of the left side; truncating at any stage yields a downsampled version of the signal:
{\displaystyle {\begin{aligned}&\left({\frac {1}{4}},{\frac {1}{4}},{\frac {1}{4}},{\frac {1}{4}}\right)\\&\left({\frac {1}{2}},{\frac {1}{2}},0,0\right)\qquad {\text{2-term truncation}}\\&\left(1,0,0,0\right)\end{aligned}}}
The DFT, by contrast, expresses the sequence by the interference of waves of various frequencies – thus truncating the series yields a low-pass filtered version of the series:
{\displaystyle {\begin{aligned}&\left({\frac {1}{4}},{\frac {1}{4}},{\frac {1}{4}},{\frac {1}{4}}\right)\\&\left({\frac {3}{4}},{\frac {1}{4}},-{\frac {1}{4}},{\frac {1}{4}}\right)\qquad {\text{2-term truncation}}\\&\left(1,0,0,0\right)\end{aligned}}}
Notably, the middle approximation (2-term) differs. From the frequency domain perspective, this is a better approximation, but from the time domain perspective it has drawbacks – it exhibits undershoot (one of the values is negative, though the original series is non-negative everywhere) and ringing, where the right side is non-zero, unlike in the wavelet transform. On the other hand, the Fourier approximation correctly shows a peak, and all points are within {\displaystyle 1/4} of their correct value, though all points have error. The wavelet approximation, by contrast, places a peak on the left half, but has no peak at the first point, and while it is exactly correct for half the values (reflecting location), it has an error of {\displaystyle 1/2} for the other values.
This illustrates the kinds of trade-offs between these transforms, and how in some respects the DWT provides preferable behavior, particularly for the modeling of transients.
=== Watermarking ===
Watermarking using DCT-DWT alters the wavelet coefficients of middle-frequency coefficient sets of a 5-level DWT-transformed host image, followed by applying DCT transforms on the selected coefficient sets. Prasanalakshmi B proposed a method that uses the HL frequency sub-band in the middle-frequency coefficient sets LHx and HLx of a 5-level discrete wavelet transform (DWT) transformed image. This algorithm chooses a coarser level of DWT, in terms of imperceptibility and robustness, on which to apply 4×4 block-based DCT. Consequently, higher imperceptibility and robustness can be achieved. Also, a pre-filtering operation is used before extraction of the watermark, sharpening and Laplacian of Gaussian (LoG) filtering, which increases the difference between the information of the watermark and the hosted image.
The basic idea of the DWT for a two-dimensional image is described as follows: An image is first decomposed into four parts of high, middle, and low-frequency subcomponents (i.e., LL1, HL1, LH1, HH1) by critically subsampling horizontal and vertical channels using subcomponent filters.
The subcomponents HL1, LH1, and HH1 represent the finest scale wavelet coefficients. The subcomponent LL1 is decomposed and critically subsampled to obtain the following coarser-scaled wavelet components. This process is repeated several times, which is determined by the application at hand.
High-frequency components are considered to embed the watermark since they contain edge information, and the human eye is less sensitive to edge changes. In watermarking algorithms, besides the watermark's invisibility, the primary concern is choosing the frequency components to embed the watermark to survive the possible attacks that the transmitted image may undergo. Transform domain techniques have the advantage of unique properties of alternate domains to address spatial domain limitations and have additional features.
The Host image is made to undergo 5-level DWT watermarking. Embedding the watermark in the middle-level frequency sub-bands LLx gives a high degree of imperceptibility and robustness. Consequently, LLx coefficient sets in level five are chosen to increase the robustness of the watermark against common watermarking attacks, especially adding noise and blurring attacks, at little to no additional impact on image quality. Then, the block base DCT is performed on these selected DWT coefficient sets and embeds pseudorandom sequences in middle frequencies. The watermark embedding procedure is explained below:
1. Read the cover image I, of size N×N.
2. The four non-overlapping multi-resolution coefficient sets LL1, HL1, LH1, and HH1 are obtained initially.
3. Decomposition is performed to 5 levels, and the frequency subcomponents {HH1, HL1, LH1, {HH2, HL2, LH2, {HH3, HL3, LH3, {HH4, HL4, LH4, {HH5, HL5, LH5, LL5}}}}} are obtained by computing the fifth-level DWT of the image I.
4. Divide the final four coefficient sets: HH5, HL5, LH5 and LL5 into 4 x 4 blocks.
5. DCT is performed on each block in the chosen coefficient sets. These coefficient sets are chosen to inquire about the imperceptibility and robustness of algorithms equally.
6. Scramble the fingerprint image to gain the scrambled watermark WS (i, j).
7. Re-formulate the scrambled watermark image into a vector of zeros and ones.
8. Two uncorrelated pseudorandom sequences are generated from the key obtained from the palm vein. The number of elements in the two pseudorandom sequences must equal the number of mid-band elements of the DCT-transformed DWT coefficient sets.
9. Embed the two pseudorandom sequences with a gain factor α in the DCT-transformed 4×4 blocks of the selected DWT coefficient sets of the host image. Instead of embedding in all coefficients of the DCT block, embedding is applied only to the mid-band DCT coefficients. If X denotes the matrix of the mid-band coefficients of the DCT-transformed block, then for watermark bit 0 the block is updated as X' = X + α·PN0, and for watermark bit 1 as X' = X + α·PN1. Inverse DCT (IDCT) is done on each block after its mid-band coefficients have been modified to embed the watermark bits.
10. To produce the watermarked host image, Perform the inverse DWT (IDWT) on the DWT-transformed image, including the modified coefficient sets.
== Similar transforms ==
The Adam7 algorithm, used for interlacing in the Portable Network Graphics (PNG) format, is a multiscale model of the data which is similar to a DWT with Haar wavelets. Unlike the DWT, it has a specific scale – it starts from an 8×8 block, and it downsamples the image, rather than decimating (low-pass filtering, then downsampling). It thus offers worse frequency behavior, showing artifacts (pixelation) at the early stages, in return for simpler implementation.
The multiplicative (or geometric) discrete wavelet transform is a variant that applies to an observation model {\displaystyle {\bf {y}}=f{\bf {X}}} involving interactions of a positive regular function {\displaystyle f} and a multiplicative independent positive noise {\displaystyle {\bf {X}}}, with {\displaystyle \mathbb {E} X=1}.
Denote by {\displaystyle {\cal {W}}} a wavelet transform. Since {\displaystyle f{\bf {X}}=f+{f({\bf {X}}-1)}}, the standard (additive) discrete wavelet transform {\displaystyle {\cal {W^{+}}}} is such that {\displaystyle {\cal {W^{+}}}{\bf {y}}={\cal {W^{+}}}f+{\cal {W^{+}}}{f({\bf {X}}-1)},} where the detail coefficients {\displaystyle {\cal {W^{+}}}{f({\bf {X}}-1)}} cannot be considered sparse in general, due to the contribution of {\displaystyle f} in the latter expression. In the multiplicative framework, the wavelet transform is such that
{\displaystyle {\cal {W^{\times }}}{\bf {y}}=\left({\cal {W^{\times }}}f\right)\times \left({\cal {W^{\times }}}{\bf {X}}\right).}
This 'embedding' of wavelets in a multiplicative algebra involves generalized multiplicative approximation and detail operators: for instance, in the case of the Haar wavelets, up to the normalization coefficient {\displaystyle \alpha }, the standard {\displaystyle {\cal {W^{+}}}} approximations (arithmetic means) {\displaystyle c_{k}=\alpha (y_{k}+y_{k-1})} and details (arithmetic differences) {\displaystyle d_{k}=\alpha (y_{k}-y_{k-1})} become respectively geometric mean approximations {\displaystyle c_{k}^{\ast }=(y_{k}\times y_{k-1})^{\alpha }} and geometric differences (details) {\displaystyle d_{k}^{\ast }=\left({\frac {y_{k}}{y_{k-1}}}\right)^{\alpha }} when using {\displaystyle {\cal {W^{\times }}}}.
== See also ==
Discrete cosine transform (DCT)
Wavelet
Wavelet transform
Wavelet compression
List of wavelet-related transforms
== References ==
== External links ==
Stanford's WaveLab in matlab
libdwt, a cross-platform DWT library written in C
Concise Introduction to Wavelets by René Puschinger
Variable bitrate (VBR) is a term used in telecommunications and computing that relates to the bitrate used in sound or video encoding. As opposed to constant bitrate (CBR), VBR files vary the amount of output data per time segment. VBR allows a higher bitrate (and therefore more storage space) to be allocated to the more complex segments of media files while less space is allocated to less complex segments. The average of these rates can be calculated to produce an average bitrate for the file.
MP3, WMA and AAC audio files can optionally be encoded in VBR, while Opus and Vorbis are encoded in VBR by default. Variable bit rate encoding is also commonly used on MPEG-2 video, MPEG-4 Part 2 video (Xvid, DivX, etc.), MPEG-4 Part 10/H.264 video, Theora, Dirac and other video compression formats. Additionally, variable rate encoding is inherent in lossless compression schemes such as FLAC and Apple Lossless.
== Advantages and disadvantages of VBR ==
The advantages of VBR are that it produces a better quality-to-space ratio compared to a CBR file of the same data. The bits available are used more flexibly to encode the sound or video data more accurately, with fewer bits used in less demanding passages and more bits used in difficult-to-encode passages.
The disadvantages are that it may take more time to encode, as the process is more complex, and that some hardware might not be compatible with VBR files.
== Methods of VBR encoding ==
=== Multi-pass encoding and single-pass encoding ===
VBR is created using so-called single-pass encoding or multi-pass encoding. Single-pass encoding analyzes and encodes the data "on the fly"; it is also used in constant bitrate encoding. Single-pass encoding is used when the encoding speed is most important — e.g. for real-time encoding. Single-pass VBR encoding is usually controlled by a fixed quality setting, by the bitrate range (minimum and maximum allowed bitrate), or by the average bitrate setting.
Multi-pass encoding is used when the encoding quality is most important. Multi-pass encoding cannot be used in real-time encoding, live broadcast or live streaming. It takes much longer than single-pass encoding, because every pass means one pass through the input data (usually through the whole input file). Multi-pass encoding is used only for VBR encoding, because CBR encoding doesn't offer any flexibility to change the bitrate. The most common form is two-pass encoding. In the first pass of two-pass encoding, the input data is analyzed and the result is stored in a log file. In the second pass, the collected data from the first pass is used to achieve the best encoding quality. In video encoding, two-pass encoding is usually controlled by the average bitrate setting, by the bitrate range setting (minimal and maximal allowed bitrate), or by the target video file size setting.
=== Bitrate range ===
This VBR encoding method allows the user to specify a bitrate range — a minimum and/or maximum allowed bitrate. Some encoders extend this method with an average bitrate. The minimum and maximum allowed bitrate set bounds in which the bitrate may vary. The disadvantage of this method is that the average bitrate (and hence file size) will not be known ahead of time. The bitrate range is also used in some fixed quality encoding methods, but usually without permission to change a particular bitrate.
=== Average bitrate ===
The disadvantage of single-pass ABR encoding (with or without constrained VBR) is the opposite of fixed-quantizer VBR: the size of the output is known ahead of time, but the resulting quality is unknown, although still better than CBR.
Multi-pass ABR encoding is more similar to fixed-quantizer VBR, because a higher average bitrate will genuinely increase the quality.
=== File size ===
VBR encoding using the file size setting is usually multi-pass encoding. It allows the user to specify a specific target file size. In the first pass, the encoder analyzes the input file and automatically calculates possible bitrate range and/or average bitrate. In the last pass, the encoder distributes the available bits among the entire video to achieve uniform quality.
== See also ==
Bitrate
Average bitrate
Constant bitrate
Adaptive bitrate streaming
== References ==
In cryptology, a code is a method used to encrypt a message that operates at the level of meaning; that is, words or phrases are converted into something else. A code might transform "change" into "CVGDK" or "cocktail lounge". The U.S. National Security Agency defined a code as "A substitution cryptosystem in which the plaintext elements are primarily words, phrases, or sentences, and the code equivalents (called "code groups") typically consist of letters or digits (or both) in otherwise meaningless combinations of identical length" (Vol. I, p. 12). A codebook is needed to encrypt and decrypt the phrases or words.
By contrast, ciphers encrypt messages at the level of individual letters, or small groups of letters, or even, in modern ciphers, individual bits. Messages can be transformed first by a code, and then by a cipher. Such multiple encryption, or "superencryption" aims to make cryptanalysis more difficult.
Another comparison between codes and ciphers is that a code typically represents a letter or groups of letters directly, without the use of mathematics. For example, the numbers might be configured to represent these three values: 1001 = A, 1002 = B, 1003 = C; the resulting message 1001 1002 1003 would then communicate ABC. Ciphers, however, utilize a mathematical formula to represent letters or groups of letters. For example, with A = 1, B = 2, C = 3 and the rule "multiply each letter's value by 13", the message ABC would be 13 26 39.
Codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing the cumbersome codebooks, so ciphers are now the dominant technique in modern cryptography.
In contrast, because codes are representational, they are not susceptible to mathematical analysis of the individual codebook elements. In the example, the enciphered message 13 26 39 can be cracked by dividing each number by 13 and mapping the results back to letters alphabetically. However, the focus of codebook cryptanalysis is the comparative frequency of the individual code elements matching the same frequency of letters within the plaintext messages using frequency analysis. In the above example, the code group 1001, 1002, 1003 might occur more than once, and that frequency might match the number of times that ABC occurs in plain text messages.
(In the past, or in non-technical contexts, code and cipher are often used to refer to any form of encryption).
== One- and two-part codes ==
Codes are defined by "codebooks" (physical or notional), which are dictionaries of codegroups listed with their corresponding plaintext. Codes originally had the codegroups assigned in 'plaintext order' for the convenience of the code designer, or the encoder. For example, in a code using numeric code groups, a plaintext word starting with "a" would have a low-value group, while one starting with "z" would have a high-value group. The same codebook could be used to "encode" a plaintext message into a coded message or "codetext", and to "decode" a codetext back into a plaintext message.
In order to make life more difficult for codebreakers, codemakers designed codes with no predictable relationship between the codegroups and the ordering of the matching plaintext. In practice, this meant that two codebooks were now required, one to find codegroups for encoding, the other to look up codegroups to find plaintext for decoding. Such "two-part" codes required more effort to develop, and twice as much effort to distribute (and discard safely when replaced), but they were harder to break. The Zimmermann Telegram in January 1917 used the German diplomatic "0075" two-part code system which contained upwards of 10,000 phrases and individual words.
== One-time code ==
A one-time code is a prearranged word, phrase or symbol that is intended to be used only once to convey a simple message, often the signal to execute or abort some plan or confirm that it has succeeded or failed. One-time codes are often designed to be included in what would appear to be an innocent conversation. Done properly they are almost impossible to detect, though a trained analyst monitoring the communications of someone who has already aroused suspicion might be able to recognize a comment like "Aunt Bertha has gone into labor" as having an ominous meaning. Famous examples of one-time codes include:
In the Bible, Jonathan prearranges a code with David, who is going into hiding from Jonathan's father, King Saul. If, during archery practice, Jonathan tells the servant retrieving arrows "the arrows are on this side of you," it is safe for David to return to court; if the command is "the arrows are beyond you," David must flee.
"One if by land; two if by sea" in "Paul Revere's Ride" made famous in the poem by Henry Wadsworth Longfellow
"Climb Mount Niitaka" - the signal to Japanese planes to begin the attack on Pearl Harbor
During World War II the British Broadcasting Corporation's overseas service frequently included "personal messages" as part of its regular broadcast schedule. The seemingly nonsensical stream of messages read out by announcers were actually one time codes intended for Special Operations Executive (SOE) agents operating behind enemy lines. An example might be "The princess wears red shoes" or "Mimi's cat is asleep under the table". Each code message was read out twice. By such means, the French Resistance were instructed to start sabotaging rail and other transport links the night before D-day.
"Over all of Spain, the sky is clear" was a signal (broadcast on radio) to start the nationalist military revolt in Spain on July 17, 1936.
Sometimes messages are not prearranged and rely on shared knowledge hopefully known only to the recipients. An example is the telegram sent to U.S. President Harry Truman, then at the Potsdam Conference to meet with Soviet premier Joseph Stalin, informing Truman of the first successful test of an atomic bomb.
"Operated on this morning. Diagnosis not yet complete but results seem satisfactory and already exceed expectations. Local press release necessary as interest extends great distance. Dr. Groves pleased. He returns tomorrow. I will keep you posted."
See also one-time pad, an unrelated cipher algorithm
== Idiot code ==
An idiot code is a code that is created by the parties using it. This type of communication is akin to the hand signals used by armies in the field.
Example: Any sentence where 'day' and 'night' are used means 'attack'. The location mentioned in the following sentence specifies the location to be attacked.
Plaintext: Attack X.
Codetext: We walked day and night through the streets but couldn't find it! Tomorrow we'll head into X.
An early use of the term appears to be by George Perrault, a character in the science fiction book Friday by Robert A. Heinlein:
The simplest sort [of code] and thereby impossible to break. The first ad told the person or persons concerned to carry out number seven or expect number seven or it said something about something designated as seven. This one says the same with respect to code item number ten. But the meaning of the numbers cannot be deduced through statistical analysis because the code can be changed long before a useful statistical universe can be reached. It's an idiot code... and an idiot code can never be broken if the user has the good sense not to go too often to the well.
Terrorism expert Magnus Ranstorp said that the men who carried out the September 11 attacks on the United States used basic e-mail and what he calls "idiot code" to discuss their plans.
== Cryptanalysis of codes ==
While solving a monoalphabetic substitution cipher is easy, solving even a simple code is difficult. Decrypting a coded message is a little like trying to translate a document written in a foreign language, with the task basically amounting to building up a "dictionary" of the codegroups and the plaintext words they represent.
One fingerhold on a simple code is the fact that some words are more common than others, such as "the" or "a" in English. In telegraphic messages, the codegroup for "STOP" (i.e., end of sentence or paragraph) is usually very common. This helps define the structure of the message in terms of sentences, if not their meaning, and this is cryptanalytically useful.
Further progress can be made against a code by collecting many codetexts encrypted with the same code and then using information from other sources
spies
newspapers
diplomatic cocktail party chat
the location from where a message was sent
where it was being sent to (i.e., traffic analysis)
the time the message was sent,
events occurring before and after the message was sent
the normal habits of the people sending the coded messages
etc.
For example, a particular codegroup found almost exclusively in messages from a particular army and nowhere else might very well indicate the commander of that army. A codegroup that appears in messages preceding an attack on a particular location may very well stand for that location.
Cribs can be an immediate giveaway to the definitions of codegroups. As codegroups are determined, they can gradually build up a critical mass, with more and more codegroups revealed from context and educated guesswork. One-part codes are more vulnerable to such educated guesswork than two-part codes, since if the codenumber "26839" of a one-part code is determined to stand for "bulldozer", then the lower codenumber "17598" will likely stand for a plaintext word that starts with "a" or "b", at least for simple one-part codes.
Various tricks can be used to "plant" or "sow" information into a coded message, for example by executing a raid at a particular time and location against an enemy, and then examining code messages sent after the raid. Coding errors are a particularly useful fingerhold into a code; people reliably make errors, sometimes disastrous ones. Planting data and exploiting errors works against ciphers as well.
The most obvious and, in principle at least, simplest way of cracking a code is to steal the codebook through bribery, burglary, or raiding parties — procedures sometimes glorified by the phrase "practical cryptography" — and this is a weakness for both codes and ciphers, though codebooks are generally larger and used longer than cipher keys. While a good code may be harder to break than a cipher, the need to write and distribute codebooks is seriously troublesome.
Constructing a new code is like building a new language and writing a dictionary for it; it was an especially big job before computers. If a code is compromised, the entire task must be done all over again, and that means a lot of work for both cryptographers and the code users. In practice, when codes were in widespread use, they were usually changed on a periodic basis to frustrate codebreakers, and to limit the useful life of stolen or copied codebooks.
Once codes have been created, codebook distribution is logistically clumsy, and increases chances the code will be compromised. There is a saying that "Three people can keep a secret if two of them are dead," (Benjamin Franklin - Wikiquote) and though it may be something of an exaggeration, a secret becomes harder to keep if it is shared among several people. Codes can be thought reasonably secure if they are only used by a few careful people, but if whole armies use the same codebook, security becomes much more difficult.
In contrast, the security of ciphers is generally dependent on protecting the cipher keys. Cipher keys can be stolen and people can betray them, but they are much easier to change and distribute.
== Superencipherment ==
It was common to encipher a message after first encoding it, to increase the difficulty of cryptanalysis. With a numerical code, this was commonly done with an "additive" - simply a long key number which was digit-by-digit added to the code groups, modulo 10. Unlike the codebooks, additives would be changed frequently. The famous Japanese Navy code, JN-25, was of this design.
== References ==
== Sources ==
Kahn, David (1996). The Codebreakers : The Comprehensive History of Secret Communication from Ancient Times to the Internet. Scribner.
Pickover, Cliff (2000). Cryptorunes: Codes and Secret Writing. Pomegranate Communications. ISBN 978-0-7649-1251-1.
Boak, David G. (July 1973) [1966]. "Codes" (PDF). A History of U.S. Communications Security; the David G. Boak Lectures, Vol. I (2015 declassification review ed.). Ft. George G. Meade, MD: U.S. National Security Agency. pp. 21–32. Retrieved 2017-04-23.
American Army Field Codes In the American Expeditionary Forces During The First World War, William Friedman, U.S. War Department, June 1942. Exhibits many examples in its appendix, including a "Baseball code" (p. 254)
== See also ==
Cipher
Code, its more general communications meaning
Trench code
JN-25
Zimmermann telegram
Code talkers
This article, or an earlier version of it, incorporates material from Greg Goebel's Codes, Ciphers, & Codebreaking.
In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.
It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs.
In the mathematical formalism used, the observations have the form of finite binary strings viewed as outputs of Turing machines, and the universal prior is a probability distribution over the set of finite binary strings calculated from a probability distribution over programs (that is, inputs to a universal Turing machine). The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable, but it can be approximated.
Formally, {\displaystyle P} is not a probability measure and it is not computable. It is only "lower semi-computable" and a "semi-measure". "Semi-measure" means that {\displaystyle 0\leq \sum _{x}P(x)<1}. That is, the "probability" does not actually sum up to one, unlike actual probabilities. This is because some inputs to the Turing machine cause it to never halt, which means the probability mass allocated to those inputs is lost. "Lower semi-computable" means there is a Turing machine that, given an input string {\displaystyle x}, can print out a sequence {\displaystyle y_{1}<y_{2}<\cdots } that converges to {\displaystyle P(x)} from below, but there is no such Turing machine that does the same from above.
== Overview ==
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable.
Four principal inspirations for Solomonoff's algorithmic probability were: Occam's razor, Epicurus' principle of multiple explanations, modern computing theory (e.g. use of a universal Turing machine) and Bayes’ rule for prediction.
Occam's razor and Epicurus' principle are essentially two different non-mathematical approximations of the universal prior.
Occam's razor: among the theories that are consistent with the observed phenomena, one should select the simplest theory.
Epicurus' principle of multiple explanations: if more than one theory is consistent with the observations, keep all such theories.
At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine. Any abstract computer will do, as long as it is Turing-complete, i.e. every computable function has at least one program that will compute its application on the abstract computer.
The abstract computer is used to give precise meaning to the phrase "simple explanation". In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. Each computer program is assigned a weight corresponding to its length. The universal probability distribution is the probability distribution on all possible output strings with random input, assigning for each finite output prefix q the sum of the probabilities of the programs that compute something starting with q. Thus, a simple explanation is a short computer program. A complex explanation is a long computer program. Simple explanations are more likely, so a high-probability observation string is one generated by a short computer program, or perhaps by any of a large number of slightly longer computer programs. A low-probability observation string is one that can only be generated by a long computer program.
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes's rule was invented by Solomonoff with Kolmogorov complexity as a side product. It predicts the most likely continuation of that observation, and provides a measure of how likely this continuation will be.
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this issue is a variant of Leonid Levin's Search Algorithm, which limits the time spent computing the success of possible programs, with shorter programs given more time. When run for longer and longer periods of time, it will generate a sequence of approximations which converge to the universal probability distribution. Other methods of dealing with the issue include limiting the search space by including training sequences.
Solomonoff proved this distribution to be machine-invariant within a constant factor (called the invariance theorem).
== Fundamental Theorems ==
=== I. Kolmogorov's Invariance Theorem ===
Kolmogorov's invariance theorem clarifies that the Kolmogorov complexity, or minimal description length, of a dataset is invariant to the choice of Turing-complete language used to simulate a universal Turing machine: {\displaystyle \forall x\in \{0,1\}^{*},|K_{U}(x)-K_{U'}(x)|\leq {\mathcal {O}}(1)} where {\displaystyle K_{U}(x)=\min _{p}\{|p|:U(p)=x\}}.
=== Interpretation ===
The minimal description {\displaystyle p} such that {\displaystyle U(p)=x} serves as a natural representation of the string {\displaystyle x} relative to the Turing-complete language {\displaystyle U}. Moreover, as {\displaystyle x} can't be compressed further, {\displaystyle p} is an incompressible and hence uncomputable string. This corresponds to a scientist's notion of randomness and clarifies the reason why Kolmogorov complexity is not computable.
It follows that any piece of data has a necessary and sufficient representation in terms of a random string.
=== Proof ===
From the theory of compilers, it is known that for any two Turing-Complete languages {\displaystyle U_{1}} and {\displaystyle U_{2}}, there exists a compiler {\displaystyle \Lambda _{1}}, expressed in {\displaystyle U_{1}}, that translates programs expressed in {\displaystyle U_{2}} into functionally-equivalent programs expressed in {\displaystyle U_{1}}.
It follows that if we let {\displaystyle p} be the shortest program that prints a given string {\displaystyle x}, then:
{\displaystyle K_{U_{1}}(x)\leq |\Lambda _{1}|+|p|\leq K_{U_{2}}(x)+{\mathcal {O}}(1)}
where {\displaystyle |\Lambda _{1}|={\mathcal {O}}(1)}, and by symmetry we obtain the opposite inequality.
=== II. Levin's Universal Distribution ===
Given that any uniquely-decodable code satisfies the Kraft-McMillan inequality, prefix-free Kolmogorov Complexity allows us to derive the Universal Distribution:
{\displaystyle P(x)=\sum _{U(p)=x}P(U(p)=x)=\sum _{U(p)=x}2^{-K_{U}(p)}\leq 1}
where the fact that {\displaystyle U} may simulate a prefix-free UTM implies that for two distinct descriptions {\displaystyle p} and {\displaystyle p'}, {\displaystyle p} isn't a substring of {\displaystyle p'} and {\displaystyle p'} isn't a substring of {\displaystyle p}.
=== Interpretation ===
In a Computable Universe, given a phenomenon with encoding {\displaystyle x\in \{0,1\}^{*}} generated by a physical process, the probability of that phenomenon is well-defined and equal to the sum over the probabilities of distinct and independent causes. The prefix-free criterion is precisely what guarantees causal independence.
=== Proof ===
This is an immediate consequence of the Kraft-McMillan inequality.
Kraft's inequality states that given a sequence of strings {\displaystyle \{x_{i}\}_{i=1}^{n}} there exists a prefix code with codewords {\displaystyle \{\sigma _{i}\}_{i=1}^{n}} where {\displaystyle \forall i,|\sigma _{i}|=k_{i}} if and only if:
{\displaystyle \sum _{i=1}^{n}s^{-k_{i}}\leq 1}
where {\displaystyle s} is the size of the alphabet {\displaystyle S}.
Without loss of generality, let's suppose we may order the {\displaystyle k_{i}} such that:
{\displaystyle k_{1}\leq k_{2}\leq ...\leq k_{n}}
Now, there exists a prefix code if and only if at each step {\displaystyle j} there is at least one codeword to choose that does not contain any of the previous {\displaystyle j-1} codewords as a prefix. Due to the existence of a codeword at a previous step {\displaystyle i<j}, {\displaystyle s^{k_{j}-k_{i}}} codewords are forbidden as they contain {\displaystyle \sigma _{i}} as a prefix. It follows that in general a prefix code exists if and only if:
{\displaystyle \forall j\geq 2,s^{k_{j}}>\sum _{i=1}^{j-1}s^{k_{j}-k_{i}}}
Dividing both sides by {\displaystyle s^{k_{j}}}, we find:
{\displaystyle \sum _{i=1}^{n}s^{-k_{i}}\leq 1}
QED.
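The constructive direction of this argument is short enough to sketch in code. The following is a hedged illustration of ours for the binary case s = 2; the greedy assignment mirrors the step-by-step choice of codewords in the proof above, and the function names are invented.

```python
def kraft_sum(lengths, s=2):
    """Left-hand side of Kraft's inequality for alphabet size s."""
    return sum(s ** (-k) for k in lengths)

def canonical_prefix_code(lengths):
    """Assign binary codewords greedily, shortest lengths first,
    mirroring the step-by-step argument in the proof."""
    assert kraft_sum(lengths) <= 1, "Kraft's inequality violated"
    code, next_val, prev_len = [], 0, 0
    for k in sorted(lengths):
        next_val <<= (k - prev_len)   # extend the current value to length k
        code.append(format(next_val, f"0{k}b"))
        next_val += 1                 # forbid everything with this prefix
        prev_len = k
    return code

print(canonical_prefix_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```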
== History ==
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960, publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference." He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I and Part II.
In terms of practical implications and applications, the study of bias in empirical data related to Algorithmic Probability emerged in the early 2010s. The bias found led to methods that combined algorithmic probability with perturbation analysis in the context of causal analysis and non-differentiable Machine Learning.
== Sequential Decisions Based on Algorithmic Probability ==
Sequential Decisions Based on Algorithmic Probability is a theoretical framework proposed by Marcus Hutter to unify algorithmic probability with decision theory. The framework provides a foundation for creating universally intelligent agents capable of optimal performance in any computable environment. It builds on Solomonoff’s theory of induction and incorporates elements of reinforcement learning, optimization, and sequential decision-making.
=== Background ===
Inductive reasoning, the process of predicting future events based on past observations, is central to intelligent behavior. Hutter formalized this process using Occam’s razor and algorithmic probability. The framework is rooted in Kolmogorov complexity, which measures the simplicity of data by the length of its shortest descriptive program. This concept underpins the universal distribution M, as introduced by Ray Solomonoff, which assigns higher probabilities to simpler hypotheses.
Hutter extended the universal distribution to include actions, creating a framework capable of addressing problems such as prediction, optimization, and reinforcement learning in environments with unknown structures.
=== The AIXI Model ===
The AIXI model is the centerpiece of Hutter’s theory. It describes a universal artificial agent designed to maximize expected rewards in an unknown environment. AIXI operates under the assumption that the environment can be represented by a computable probability distribution. It uses past observations to infer the most likely environmental model, leveraging algorithmic probability.
Mathematically, AIXI evaluates all possible future sequences of actions and observations. It computes their algorithmic probabilities and expected utilities, selecting the sequence of actions that maximizes cumulative rewards. This approach transforms sequential decision-making into an optimization problem. However, the general formulation of AIXI is incomputable, making it impractical for direct implementation.
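The expectimax idea can be made concrete in a toy setting. This sketch is our own illustration, not Hutter's formulation: it uses a finite horizon, a small hand-picked class of deterministic candidate environments, and weights 2^-ℓ for assumed description lengths ℓ, whereas true AIXI mixes over all computable environments and is incomputable.

```python
from itertools import product

def aixi_toy(horizon, actions, envs):
    """A finite-horizon, brute-force stand-in for AIXI's expectimax.

    envs: list of (desc_len, reward_fn) pairs; reward_fn(actions_so_far)
    returns the reward for the last action under that deterministic toy
    environment.  Each environment is weighted by 2**-desc_len, echoing
    the universal prior's preference for simple hypotheses.
    """
    def mixture_return(plan):
        total = 0.0
        for desc_len, reward_fn in envs:
            ret = sum(reward_fn(plan[: t + 1]) for t in range(len(plan)))
            total += 2.0 ** (-desc_len) * ret
        return total

    return max(product(actions, repeat=horizon), key=mixture_return)

# Two toy environments: a simple one rewarding action 1, a complex one rewarding 0.
envs = [(2, lambda p: 1.0 if p[-1] == 1 else 0.0),
        (8, lambda p: 1.0 if p[-1] == 0 else 0.0)]
print(aixi_toy(horizon=3, actions=(0, 1), envs=envs))  # -> (1, 1, 1)
```

With these weights the agent sides with the simpler of the two candidate environments, echoing the role of the universal prior in the full model.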
=== Optimality and Limitations ===
AIXI is universally optimal in the sense that it performs as well as or better than any other agent in all computable environments. This universality makes it a theoretical benchmark for intelligence. However, its reliance on algorithmic probability renders it computationally infeasible, requiring exponential time to evaluate all possibilities.
To address this limitation, Hutter proposed time-bounded approximations, such as AIXItl, which reduce computational demands while retaining many theoretical properties of the original model. These approximations provide a more practical balance between computational feasibility and optimality.
=== Applications and Implications ===
The AIXI framework has significant implications for artificial intelligence and related fields. It provides a formal benchmark for measuring intelligence and a theoretical foundation for solving various problems, including prediction, reinforcement learning, and optimization.
Despite its strengths, the framework has limitations. AIXI assumes that the environment is computable, excluding chaotic or non-computable systems. Additionally, its high computational requirements make real-world applications challenging.
=== Philosophical Considerations ===
Hutter’s theory raises philosophical questions about the nature of intelligence and computation. The reliance on algorithmic probability ties intelligence to the ability to compute and predict, which may exclude certain natural or chaotic phenomena. Nonetheless, the AIXI model offers insights into the theoretical upper bounds of intelligent behavior and serves as a stepping stone toward more practical AI systems.
== Key people ==
Ray Solomonoff
Andrey Kolmogorov
Leonid Levin
== See also ==
Solomonoff's theory of inductive inference
Algorithmic information theory
Bayesian inference
Inductive inference
Inductive probability
Kolmogorov complexity
Universal Turing machine
Information-based complexity
== References ==
== Sources ==
Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008
Hutter, Marcus (2005). Universal artificial intelligence: sequential decisions based on algorithmic probability. Texts in theoretical computer science. Berlin Heidelberg: Springer. ISBN 978-3-540-22139-5.
== Further reading ==
Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
== External links ==
Algorithmic Probability at Scholarpedia
Solomonoff's publications
Information field theory (IFT) is a Bayesian statistical field theory relating to signal reconstruction, cosmography, and other related areas. IFT summarizes the information available on a physical field using Bayesian probabilities. It uses computational techniques developed for quantum field theory and statistical field theory to handle the infinite number of degrees of freedom of a field and to derive algorithms for the calculation of field expectation values. For example, the posterior expectation value of a field generated by a known Gaussian process and measured by a linear device with known Gaussian noise statistics is given by a generalized Wiener filter applied to the measured data. IFT extends such known filter formulas to situations with nonlinear physics, nonlinear devices, non-Gaussian field or noise statistics, dependence of the noise statistics on the field values, and partly unknown parameters of measurement. For this it uses Feynman diagrams, renormalisation flow equations, and other methods from mathematical physics.
== Motivation ==
Fields play an important role in science, technology, and economy. They describe the spatial variations of a quantity, like the air temperature, as a function of position. Knowing the configuration of a field can be of large value. Measurements of fields, however, can never provide the precise field configuration with certainty. Physical fields have an infinite number of degrees of freedom, but the data generated by any measurement device is always finite, providing only a finite number of constraints on the field. Thus, an unambiguous deduction of such a field from measurement data alone is impossible and only probabilistic inference remains as a means to make statements about the field. Fortunately, physical fields exhibit correlations and often follow known physical laws. Such information is best fused into the field inference in order to overcome the mismatch of field degrees of freedom to measurement points. To handle this, an information theory for fields is needed, and that is what information field theory is.
== Concepts ==
=== Bayesian inference ===
{\displaystyle s(x)} is a field value at a location {\displaystyle x\in \Omega } in a space {\displaystyle \Omega }. The prior knowledge about the unknown signal field {\displaystyle s} is encoded in the probability distribution {\displaystyle {\mathcal {P}}(s)}. The data {\displaystyle d} provides additional information on {\displaystyle s} via the likelihood {\displaystyle {\mathcal {P}}(d|s)} that gets incorporated into the posterior probability
{\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d|s)\,{\mathcal {P}}(s)}{{\mathcal {P}}(d)}}}
according to Bayes theorem.
=== Information Hamiltonian ===
In IFT Bayes theorem is usually rewritten in the language of a statistical field theory,
{\displaystyle {\mathcal {P}}(s|d)={\frac {{\mathcal {P}}(d,s)}{{\mathcal {P}}(d)}}\equiv {\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}},}
with the information Hamiltonian defined as
{\displaystyle {\mathcal {H}}(d,s)\equiv -\ln {\mathcal {P}}(d,s)=-\ln {\mathcal {P}}(d|s)-\ln {\mathcal {P}}(s)\equiv {\mathcal {H}}(d|s)+{\mathcal {H}}(s),}
the negative logarithm of the joint probability of data and signal and with the partition function being
{\displaystyle {\mathcal {Z}}(d)\equiv {\mathcal {P}}(d)=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s).}
This reformulation of Bayes theorem permits the usage of methods of mathematical physics developed for the treatment of statistical field theories and quantum field theories.
=== Fields ===
As fields have an infinite number of degrees of freedom, the definition of probabilities over spaces of field configurations has subtleties. Identifying physical fields as elements of function spaces provides the problem that no Lebesgue measure is defined over the latter and therefore probability densities can not be defined there. However, physical fields have much more regularity than most elements of function spaces, as they are continuous and smooth at most of their locations. Therefore, less general, but sufficiently flexible constructions can be used to handle the infinite number of degrees of freedom of a field.
A pragmatic approach is to regard the field to be discretized in terms of pixels. Each pixel carries a single field value that is assumed to be constant within the pixel volume. All statements about the continuous field have then to be cast into its pixel representation. This way, one deals with finite dimensional field spaces, over which probability densities are well definable.
In order for this description to be a proper field theory, it is further required that the pixel resolution {\displaystyle \Delta x} can always be refined, while expectation values of the discretized field {\displaystyle s_{\Delta x}} converge to finite values:
{\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \lim _{\Delta x\rightarrow 0}\int ds_{\Delta x}f(s_{\Delta x})\,{\mathcal {P}}(s_{\Delta x}).}
=== Path integrals ===
If this limit exists, one can talk about the field configuration space integral or path integral
{\displaystyle \langle f(s)\rangle _{(s|d)}\equiv \int {\mathcal {D}}s\,f(s)\,{\mathcal {P}}(s).}
Irrespective of the resolution, it might be evaluated numerically.
=== Gaussian prior ===
The simplest prior for a field is that of a zero mean Gaussian probability distribution
{\displaystyle {\mathcal {P}}(s)={\mathcal {G}}(s,S)\equiv {\frac {1}{\sqrt {|2\pi S|}}}e^{-{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s}.}
The determinant in the denominator might be ill-defined in the continuum limit {\displaystyle \Delta x\rightarrow 0}; however, all that is necessary for IFT to be consistent is that this determinant can be estimated for any finite-resolution field representation with {\displaystyle \Delta x>0} and that this permits the calculation of convergent expectation values.
A Gaussian probability distribution requires the specification of the field two point correlation function {\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}} with coefficients {\displaystyle S_{xy}\equiv \langle s(x)\,{\overline {s(y)}}\rangle _{(s)}} and a scalar product for continuous fields
{\displaystyle a^{\dagger }b\equiv \int _{\Omega }dx\,{\overline {a(x)}}\,b(x),}
with respect to which the inverse signal field covariance {\displaystyle S^{-1}} is constructed, i.e.
{\displaystyle (S^{-1}S)_{xy}\equiv \int _{\Omega }dz\,(S^{-1})_{xz}S_{zy}=\mathbb {1} _{xy}\equiv \delta (x-y).}
The corresponding prior information Hamiltonian reads
{\displaystyle {\mathcal {H}}(s)=-\ln {\mathcal {G}}(s,S)={\frac {1}{2}}\,s^{\dagger }S^{-1}\,s+{\frac {1}{2}}\,\ln |2\pi S|.}
=== Measurement equation ===
The measurement data {\displaystyle d} was generated with the likelihood {\displaystyle {\mathcal {P}}(d|s)}. In case the instrument was linear, a measurement equation of the form
{\displaystyle d=R\,s+n}
can be given, in which {\displaystyle R} is the instrument response, which describes how the data on average reacts to the signal, and {\displaystyle n} is the noise, simply the difference between data {\displaystyle d} and linear signal response {\displaystyle R\,s}. The response translates the infinite dimensional signal vector into the finite dimensional data space. In components this reads
{\displaystyle d_{i}=\int _{\Omega }dx\,R_{ix}\,s_{x}+n_{i},}
where a vector component notation was also introduced for signal and data vectors.
If the noise follows a signal independent zero mean Gaussian statistics with covariance {\displaystyle N},
{\displaystyle {\mathcal {P}}(n|s)={\mathcal {G}}(n,N),}
then the likelihood is Gaussian as well,
{\displaystyle {\mathcal {P}}(d|s)={\mathcal {G}}(d-R\,s,N),}
and the likelihood information Hamiltonian is
{\displaystyle {\mathcal {H}}(d|s)=-\ln {\mathcal {G}}(d-R\,s,N)={\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,\ln |2\pi N|.}
A linear measurement of a Gaussian signal, subject to Gaussian and signal-independent noise leads to a free IFT.
== Free theory ==
=== Free Hamiltonian ===
The joint information Hamiltonian of the Gaussian scenario described above is
{\displaystyle {\begin{aligned}{\mathcal {H}}(d,s)&={\mathcal {H}}(d|s)+{\mathcal {H}}(s)\\&{\widehat {=}}{\frac {1}{2}}\,(d-R\,s)^{\dagger }N^{-1}\,(d-R\,s)+{\frac {1}{2}}\,s^{\dagger }S^{-1}\,s\\&{\widehat {=}}{\frac {1}{2}}\,\left[s^{\dagger }\underbrace {(S^{-1}+R^{\dagger }N^{-1}R)} _{D^{-1}}\,s-s^{\dagger }\underbrace {R^{\dagger }N^{-1}d} _{j}-\underbrace {d^{\dagger }N^{-1}R} _{j^{\dagger }}\,s\right]\\&\equiv {\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }j-j^{\dagger }s\right]\\&={\frac {1}{2}}\,\left[s^{\dagger }D^{-1}s-s^{\dagger }D^{-1}\underbrace {D\,j} _{m}-\underbrace {j^{\dagger }D} _{m^{\dagger }}\,D^{-1}s\right]\\&{\widehat {=}}{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m),\end{aligned}}}
where {\displaystyle {\widehat {=}}} denotes equality up to irrelevant constants, which, in this case, means expressions that are independent of {\displaystyle s}. From this it is clear that the posterior must be a Gaussian with mean {\displaystyle m} and variance {\displaystyle D},
{\displaystyle {\mathcal {P}}(s|d)\propto e^{-{\mathcal {H}}(d,s)}\propto e^{-{\frac {1}{2}}\,(s-m)^{\dagger }D^{-1}(s-m)}\propto {\mathcal {G}}(s-m,D)}
where equality between the right and left hand sides holds as both distributions are normalized,
{\displaystyle \int {\mathcal {D}}s\,{\mathcal {P}}(s|d)=1=\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)}.
=== Generalized Wiener filter ===
The posterior mean
{\displaystyle m=D\,j=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}R^{\dagger }N^{-1}d}
is also known as the generalized Wiener filter solution and the uncertainty covariance
{\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}}
as the Wiener variance.
In IFT, {\displaystyle j=R^{\dagger }N^{-1}d} is called the information source, as it acts as a source term to excite the field (knowledge), and {\displaystyle D} the information propagator, as it propagates information from one location to another in
{\displaystyle m_{x}=\int _{\Omega }dy\,D_{xy}j_{y}.}
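For a discretized field, the filter can be evaluated directly with dense linear algebra. The sketch below is our own toy example (the Gaussian correlation kernel, the pixel-averaging response, and all parameter values are illustrative assumptions, not prescriptions from the theory): it draws a signal from the prior, simulates noisy linear data, and applies m = D j = (S⁻¹ + Rᵀ N⁻¹ R)⁻¹ Rᵀ N⁻¹ d.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, ndata = 100, 30

# Assumed prior covariance S: smooth Gaussian kernel on a 1D grid.
x = np.linspace(0.0, 1.0, npix)
S = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2)
S += 1e-6 * np.eye(npix)               # jitter for numerical invertibility

# Linear response R: each data point averages three neighbouring pixels.
R = np.zeros((ndata, npix))
for i, j in enumerate(rng.choice(npix - 3, ndata, replace=False)):
    R[i, j:j + 3] = 1.0 / 3.0

N = 0.05 * np.eye(ndata)               # noise covariance
s_true = np.linalg.cholesky(S) @ rng.standard_normal(npix)
d = R @ s_true + rng.multivariate_normal(np.zeros(ndata), N)

# Generalized Wiener filter: information source j and propagator D.
j = R.T @ np.linalg.solve(N, d)
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))
m = D @ j                              # posterior mean field

print(float(np.abs(m - s_true).mean()))  # average reconstruction error
```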
== Interacting theory ==
=== Interacting Hamiltonian ===
If any of the assumptions that lead to the free theory is violated, IFT becomes an interacting theory, with terms that are of higher than quadratic order in the signal field. This happens when the signal or the noise do not follow Gaussian statistics, when the response is non-linear, when the noise depends on the signal, or when response or covariances are uncertain.
In this case, the information Hamiltonian might be expandable in a Taylor-Fréchet series,
{\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}
where {\displaystyle {\mathcal {H}}_{\text{free}}(d,\,s)} is the free Hamiltonian, which alone would lead to a Gaussian posterior, and {\displaystyle {\mathcal {H}}_{\text{int}}(d,\,s)} is the interacting Hamiltonian, which encodes non-Gaussian corrections. The first and second order Taylor coefficients are often identified with the (negative) information source {\displaystyle -j} and information propagator {\displaystyle D}, respectively. The higher coefficients {\displaystyle \Lambda _{x_{1}...x_{n}}^{(n)}} are associated with non-linear self-interactions.
=== Classical field ===
The classical field {\displaystyle s_{\text{cl}}} minimizes the information Hamiltonian,
{\displaystyle \left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}=0,}
and therefore maximizes the posterior:
{\displaystyle \left.{\frac {\partial {\mathcal {P}}(s|d)}{\partial s}}\right|_{s=s_{\text{cl}}}=\left.{\frac {\partial }{\partial s}}\,{\frac {e^{-{\mathcal {H}}(d,s)}}{{\mathcal {Z}}(d)}}\right|_{s=s_{\text{cl}}}=-{\mathcal {P}}(d,s)\,\underbrace {\left.{\frac {\partial {\mathcal {H}}(d,s)}{\partial s}}\right|_{s=s_{\text{cl}}}} _{=0}=0}
The classical field {\displaystyle s_{\text{cl}}} is therefore the maximum a posteriori estimator of the field inference problem.
=== Critical filter ===
The Wiener filter problem requires the two point correlation {\displaystyle S\equiv \langle s\,s^{\dagger }\rangle _{(s)}} of a field to be known. If it is unknown, it has to be inferred along with the field itself. This requires the specification of a hyperprior {\displaystyle {\mathcal {P}}(S)}. Often, statistical homogeneity (translation invariance) can be assumed, implying that {\displaystyle S} is diagonal in Fourier space (for {\displaystyle \Omega =\mathbb {R} ^{u}} being a {\displaystyle u}-dimensional Cartesian space). In this case, only the Fourier space power spectrum {\displaystyle P_{s}({\vec {k}})} needs to be inferred. Given a further assumption of statistical isotropy, this spectrum depends only on the length {\displaystyle k=|{\vec {k}}|} of the Fourier vector {\displaystyle {\vec {k}}} and only a one dimensional spectrum {\displaystyle P_{s}(k)} has to be determined. The prior field covariance reads then in Fourier space coordinates
{\displaystyle S_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,P_{s}(k)}.
If the prior on {\displaystyle P_{s}(k)} is flat, the joint probability of data and spectrum is
{\displaystyle {\begin{aligned}{\mathcal {P}}(d,P_{s})&=\int {\mathcal {D}}s\,{\mathcal {P}}(d,s,P_{s})\\&=\int {\mathcal {D}}s\,{\mathcal {P}}(d|s,P_{s})\,{\mathcal {P}}(s|P_{s})\,{\mathcal {P}}(P_{s})\\&\propto \int {\mathcal {D}}s\,{\mathcal {G}}(d-Rs,N)\,{\mathcal {G}}(s,S)\\&\propto {\frac {1}{|S|^{\frac {1}{2}}}}\int {\mathcal {D}}s\,\exp \left[-{\frac {1}{2}}\left(s^{\dagger }D^{-1}s-j^{\dagger }s-s^{\dagger }j\right)\right]\\&\propto {\frac {|D|^{\frac {1}{2}}}{|S|^{\frac {1}{2}}}}\exp \left[{\frac {1}{2}}j^{\dagger }D\,j\right],\end{aligned}}}
where the notation of the information propagator {\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}} and source {\displaystyle j=R^{\dagger }N^{-1}d} of the Wiener filter problem was used again. The corresponding information Hamiltonian is
{\displaystyle {\mathcal {H}}(d,P_{s})\;{\widehat {=}}\;{\frac {1}{2}}\left[\ln |S\,D^{-1}|-j^{\dagger }D\,j\right]={\frac {1}{2}}\mathrm {Tr} \left[\ln \left(S\,D^{-1}\right)-j\,j^{\dagger }D\right],}
where {\displaystyle {\widehat {=}}} denotes equality up to irrelevant constants (here: constant with respect to {\displaystyle P_{s}}). Minimizing this with respect to {\displaystyle P_{s}}, in order to get its maximum a posteriori power spectrum estimator, yields
{\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {H}}(d,P_{s})}{\partial P_{s}(k)}}&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(S\,D^{-1}\right)}{\partial P_{s}(k)}}-j\,j^{\dagger }{\frac {\partial D}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial \left(1+S\,R^{\dagger }N^{-1}R\right)}{\partial P_{s}(k)}}+j\,j^{\dagger }D\,{\frac {\partial D^{-1}}{\partial P_{s}(k)}}\,D\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[D\,S^{-1}\,{\frac {\partial S}{\partial P_{s}(k)}}R^{\dagger }N^{-1}R+m\,m^{\dagger }\,{\frac {\partial S^{-1}}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\mathrm {Tr} \left[\left(R^{\dagger }N^{-1}R\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)\,{\frac {\partial S}{\partial P_{s}(k)}}\right]\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\int \left({\frac {dq'}{2\pi }}\right)^{u}\left(\left(D^{-1}-S^{-1}\right)\,D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}'}\,{\frac {\partial (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,P_{s}(q)}{\partial P_{s}(k)}}\\&={\frac {1}{2}}\int \left({\frac {dq}{2\pi }}\right)^{u}\left(S^{-1}-S^{-1}D\,S^{-1}-S^{-1}m\,m^{\dagger }\,S^{-1}\right)_{{\vec {q}}{\vec {q}}}\,\delta (k-q)\\&={\frac {1}{2}}\mathrm {Tr} \left\{S^{-1}\left[S-\left(D+m\,m^{\dagger }\right)\right]\,S^{-1}\mathbb {P} _{k}\right\}\\&={\frac {\mathrm {Tr} \left[\mathbb {P} _{k}\right]}{2\,P_{s}(k)}}-{\frac {\mathrm {Tr} \left[\left(D+m\,m^{\dagger }\right)\,\mathbb {P} _{k}\right]}{2\,\left[P_{s}(k)\right]^{2}}}=0,\end{aligned}}}
where the Wiener filter mean {\displaystyle m=D\,j} and the spectral band projector {\displaystyle (\mathbb {P} _{k})_{{\vec {q}}{\vec {q}}'}\equiv (2\pi )^{u}\delta ({\vec {q}}-{\vec {q}}')\,\delta (|{\vec {q}}|-k)} were introduced. The latter commutes with {\displaystyle S^{-1}}, since
{\displaystyle (S^{-1})_{{\vec {k}}{\vec {q}}}=(2\pi )^{u}\delta ({\vec {k}}-{\vec {q}})\,[P_{s}(k)]^{-1}}
is diagonal in Fourier space. The maximum a posteriori estimator for the power spectrum is therefore
{\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}.}
It has to be calculated iteratively, as {\displaystyle m=D\,j} and {\displaystyle D=(S^{-1}+R^{\dagger }N^{-1}R)^{-1}} both depend on {\displaystyle P_{s}} themselves. In an empirical Bayes approach, the estimated {\displaystyle P_{s}} would be taken as given. As a consequence, the posterior mean estimate for the signal field is the corresponding {\displaystyle m} and its uncertainty the corresponding {\displaystyle D} in the empirical Bayes approximation.
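The fixed-point character of this estimator can be sketched numerically. In the following toy of ours, the response is set to the identity and the noise to white noise on a periodic one-dimensional grid, so every operator is diagonal in Fourier space; FFT normalization factors are glossed over, so the sketch shows the structure of the alternating iteration rather than a calibrated implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 256, 0.5                      # grid size, noise variance (assumed)

# Synthetic signal with an assumed power-law spectrum on a periodic grid.
k = np.abs(np.fft.fftfreq(n) * n)
P_true = 1.0 / (1.0 + k) ** 3
noise_k = rng.standard_normal(n) + 1j * rng.standard_normal(n)
s = np.fft.ifft(np.sqrt(P_true) * noise_k).real * np.sqrt(n)
d = s + np.sqrt(sigma2) * rng.standard_normal(n)
d_k = np.fft.fft(d) / np.sqrt(n)

# Critical-filter fixed-point iteration with R = identity, N = sigma2 * identity:
P = np.ones(n)                            # flat initial spectrum guess
for _ in range(30):
    D = 1.0 / (1.0 / P + 1.0 / sigma2)    # diagonal information propagator
    m_k = D * d_k / sigma2                # Wiener-filter posterior mean
    P = np.abs(m_k) ** 2 + D              # update: P_s(k) = |m_k|^2 + D_kk

m = np.fft.ifft(m_k * np.sqrt(n)).real    # posterior mean in position space
print(float(np.abs(m - s).mean()))        # reconstruction error
```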
The resulting non-linear filter is called the critical filter. The generalization of the power spectrum estimation formula as
{\displaystyle P_{s}(k)={\frac {\mathrm {Tr} \left[\left(m\,m^{\dagger }+\delta \,D\right)\,\mathbb {P} _{k}\right]}{\mathrm {Tr} \left[\mathbb {P} _{k}\right]}}}
exhibits a perception threshold for {\displaystyle \delta <1}, meaning that the data variance in a Fourier band has to exceed the expected noise level by a certain threshold before the signal reconstruction {\displaystyle m} becomes non-zero for this band. Whenever the data variance exceeds this threshold slightly, the signal reconstruction jumps to a finite excitation level, similar to a first order phase transition in thermodynamic systems. For filters with {\displaystyle \delta =1}, perception of the signal starts continuously as soon as the data variance exceeds the noise level. The disappearance of the discontinuous perception at {\displaystyle \delta =1} is similar to a thermodynamic system going through a critical point. Hence the name critical filter.
The critical filter, extensions thereof to non-linear measurements, and the inclusion of non-flat spectrum priors, permitted the application of IFT to real world signal inference problems, for which the signal covariance is usually unknown a priori.
== IFT application examples ==
The generalized Wiener filter, that emerges in free IFT, is in broad usage in signal processing. Algorithms explicitly based on IFT were derived for a number of applications. Many of them are implemented using the Numerical Information Field Theory (NIFTy) library.
D³PO is a code for Denoising, Deconvolving, and Decomposing Photon Observations. It reconstructs images from individual photon count events taking into account the Poisson statistics of the counts and an instrument response function. It splits the sky emission into an image of diffuse emission and one of point sources, exploiting the different correlation structure and statistics of the two components for their separation. D³PO has been applied to data of the Fermi and the RXTE satellites.
RESOLVE is a Bayesian algorithm for aperture synthesis imaging in radio astronomy. RESOLVE is similar to D³PO, but it assumes a Gaussian likelihood and a Fourier space response function. It has been applied to data of the Very Large Array.
PySESA is a Python framework for Spatially Explicit Spectral Analysis for spatially explicit spectral analysis of point clouds and geospatial data.
== Advanced theory ==
Many techniques from quantum field theory can be used to tackle IFT problems, like Feynman diagrams, effective actions, and the field operator formalism.
=== Feynman diagrams ===
In case the interaction coefficients {\displaystyle \Lambda ^{(n)}} in a Taylor-Fréchet expansion of the information Hamiltonian
{\displaystyle {\mathcal {H}}(d,\,s)=\underbrace {{\frac {1}{2}}s^{\dagger }D^{-1}s-j^{\dagger }s+{\mathcal {H}}_{0}} _{={\mathcal {H}}_{\text{free}}(d,\,s)}+\underbrace {\sum _{n=3}^{\infty }{\frac {1}{n!}}\Lambda _{x_{1}...x_{n}}^{(n)}s_{x_{1}}...s_{x_{n}}} _{={\mathcal {H}}_{\text{int}}(d,\,s)},}
are small, the log partition function, or Helmholtz free energy,
{\displaystyle \ln {\mathcal {Z}}(d)=\ln \int {\mathcal {D}}s\,e^{-{\mathcal {H}}(d,s)}=\sum _{c\in C}c}
can be expanded asymptotically in terms of these coefficients. The free Hamiltonian specifies the mean {\displaystyle m=D\,j} and variance {\displaystyle D} of the Gaussian distribution {\displaystyle {\mathcal {G}}(s-m,D)} over which the expansion is integrated. This leads to a sum over the set {\displaystyle C} of all connected Feynman diagrams. From the Helmholtz free energy, any connected moment of the field can be calculated via
{\displaystyle \langle s_{x_{1}}\ldots s_{x_{n}}\rangle _{(s|d)}^{\text{c}}={\frac {\partial ^{n}\ln {\mathcal {Z}}}{\partial j_{x_{1}}\ldots \partial j_{x_{n}}}}.}
Situations where small expansion parameters exist, as needed for such a diagrammatic expansion to converge, are given by nearly Gaussian signal fields, where the non-Gaussianity of the field statistics leads to small interaction coefficients {\displaystyle \Lambda ^{(n)}}. For example, the statistics of the Cosmic Microwave Background is nearly Gaussian, with small amounts of non-Gaussianities believed to be seeded during the inflationary epoch in the Early Universe.
=== Effective action ===
In order to have stable numerics for IFT problems, a field functional that, if minimized, provides the posterior mean field is needed. Such is given by the effective action or Gibbs free energy of a field. The Gibbs free energy {\displaystyle G} can be constructed from the Helmholtz free energy via a Legendre transformation.
In IFT, it is given by the difference of the internal information energy
{\displaystyle U=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}}
and the Shannon entropy
{\displaystyle {\mathcal {S}}=-\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\mathcal {P}}'(s|d')}
for temperature {\displaystyle T=1},
where a Gaussian posterior approximation {\displaystyle {\mathcal {P}}'(s|d')={\mathcal {G}}(s-m,D)} is used with the approximate data {\displaystyle d'=(m,D)} containing the mean and the dispersion of the field.
The Gibbs free energy is then
{\displaystyle {\begin{aligned}G(m,D)&=U(m,D)-T\,{\mathcal {S}}(m,D)\\&=\langle {\mathcal {H}}(d,s)+\ln {\mathcal {P}}'(s|d')\rangle _{{\mathcal {P}}'(s|d')}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(d,s)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)\,{\mathcal {P}}(d)}}\\&=\int {\mathcal {D}}s\,{\mathcal {P}}'(s|d')\,\ln {\frac {{\mathcal {P}}'(s|d')}{{\mathcal {P}}(s|d)}}-\ln \,{\mathcal {P}}(d)\\&={\text{KL}}({\mathcal {P}}'(s|d')||{\mathcal {P}}(s|d))-\ln {\mathcal {Z}}(d),\end{aligned}}}
the Kullback-Leibler divergence {\displaystyle {\text{KL}}({\mathcal {P}}',{\mathcal {P}})} between approximate and exact posterior plus the Helmholtz free energy. As the latter does not depend on the approximate data {\displaystyle d'=(m,D)}, minimizing the Gibbs free energy is equivalent to minimizing the Kullback-Leibler divergence between approximate and exact posterior. Thus, the effective action approach of IFT is equivalent to the variational Bayesian methods, which also minimize the Kullback-Leibler divergence between approximate and exact posteriors.
Minimizing the Gibbs free energy provides approximately the posterior mean field
{\displaystyle \langle s\rangle _{(s|d)}=\int {\mathcal {D}}s\,s\,{\mathcal {P}}(s|d),}
whereas minimizing the information Hamiltonian provides the maximum a posteriori field. As the latter is known to over-fit noise, the former is usually a better field estimator.
=== Operator formalism ===
The calculation of the Gibbs free energy requires the calculation of Gaussian integrals over an information Hamiltonian, since the internal information energy is
{\displaystyle U(m,D)=\langle {\mathcal {H}}(d,s)\rangle _{{\mathcal {P}}'(s|d')}=\int {\mathcal {D}}s\,{\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).}
Such integrals can be calculated via a field operator formalism, in which
{\displaystyle O_{m}=m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}}}
is the field operator. This generates the field expression {\displaystyle s} within the integral if applied to the Gaussian distribution function,
{\displaystyle {\begin{aligned}O_{m}\,{\mathcal {G}}(s-m,D)&=(m+D\,{\frac {\mathrm {d} }{\mathrm {d} m}})\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=(m+D\,D^{-1}(s-m))\,{\frac {1}{|2\pi D|^{\frac {1}{2}}}}\,\exp \left[-{\frac {1}{2}}(s-m)^{\dagger }D^{-1}(s-m)\right]\\&=s\,{\mathcal {G}}(s-m,D),\end{aligned}}}
and any higher power of the field if applied several times,
{\displaystyle {\begin{aligned}(O_{m})^{n}\,{\mathcal {G}}(s-m,D)&=s^{n}\,{\mathcal {G}}(s-m,D).\end{aligned}}}
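The basic operator identity is easy to verify symbolically in the one-dimensional scalar case; the following check (our own illustration, not part of the general formalism) confirms that O_m G(s − m, D) = s G(s − m, D) for scalar s, m, and D:

```python
import sympy as sp

s, m = sp.symbols('s m', real=True)
D = sp.symbols('D', positive=True)

# One-dimensional Gaussian G(s - m, D)
G = sp.exp(-(s - m)**2 / (2*D)) / sp.sqrt(2*sp.pi*D)

# Apply the field operator O_m = m + D d/dm to the Gaussian
O_m_G = m*G + D*sp.diff(G, m)

print(sp.simplify(O_m_G - s*G))  # prints 0, confirming O_m G = s G
```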
If the information Hamiltonian is analytical, all its terms can be generated via the field operator,
{\displaystyle {\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,s)\,{\mathcal {G}}(s-m,D).}
As the field operator does not depend on the field {\displaystyle s} itself, it can be pulled out of the path integral of the internal information energy construction,
{\displaystyle U(m,D)=\int {\mathcal {D}}s\,{\mathcal {H}}(d,O_{m})\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\int {\mathcal {D}}s\,{\mathcal {G}}(s-m,D)={\mathcal {H}}(d,O_{m})\,1_{m},}
where {\displaystyle 1_{m}=1} should be regarded as a functional that always returns the value {\displaystyle 1} irrespective of the value of its input {\displaystyle m}. The resulting expression can be calculated by commuting the mean field annihilator {\displaystyle D\,{\frac {\mathrm {d} }{\mathrm {d} m}}} to the right of the expression, where it vanishes since {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} m}}\,1_{m}=0}. The mean field annihilator commutes with the mean field as
{\displaystyle \left[D\,{\frac {\mathrm {d} }{\mathrm {d} m}},m\right]=D\,{\frac {\mathrm {d} }{\mathrm {d} m}}\,m-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D+m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}-m\,D\,{\frac {\mathrm {d} }{\mathrm {d} m}}=D.}
By use of the field operator formalism the Gibbs free energy can be calculated, which permits the (approximate) inference of the posterior mean field via a numerically robust functional minimization.
== History ==
The book of Norbert Wiener might be regarded as one of the first works on field inference. The usage of path integrals for field inference was proposed by a number of authors, e.g. Edmund Bertschinger or William Bialek and A. Zee. The connection of field theory and Bayesian reasoning was made explicit by Jörg Lemm. The term information field theory was coined by Torsten Enßlin. See the latter reference for more information on the history of IFT.
== See also ==
Bayesian inference
Bayesian hierarchical modeling
Gaussian process
Statistical Inference
== References ==
An A-law algorithm is a standard companding algorithm, used in European 8-bit PCM digital communications systems to optimize, i.e. modify, the dynamic range of an analog signal for digitizing. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar μ-law, used in North America and Japan.
For a given input {\displaystyle x}, the equation for A-law encoding is as follows:
{\displaystyle F(x)=\operatorname {sgn}(x){\begin{cases}{\dfrac {A|x|}{1+\ln(A)}},&|x|<{\dfrac {1}{A}},\\[1ex]{\dfrac {1+\ln(A|x|)}{1+\ln(A)}},&{\dfrac {1}{A}}\leq |x|\leq 1,\end{cases}}}
where {\displaystyle A} is the compression parameter. In Europe, {\displaystyle A=87.6}.
A-law expansion is given by the inverse function:
{\displaystyle F^{-1}(y)=\operatorname {sgn}(y){\begin{cases}{\dfrac {|y|(1+\ln(A))}{A}},&|y|<{\dfrac {1}{1+\ln(A)}},\\{\dfrac {e^{-1+|y|(1+\ln(A))}}{A}},&{\dfrac {1}{1+\ln(A)}}\leq |y|<1.\end{cases}}}
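A direct transcription of the two formulas into code looks as follows (a minimal sketch of our own; the function names are invented, and the companded value is kept as a float rather than quantized to 8 bits as a real G.711 codec would additionally do):

```python
import math

A = 87.6  # standard European compression parameter

def a_law_compress(x: float) -> float:
    """A-law encoding F(x) for x in [-1, 1]."""
    ax = abs(x)
    if ax < 1 / A:
        y = A * ax / (1 + math.log(A))
    else:
        y = (1 + math.log(A * ax)) / (1 + math.log(A))
    return math.copysign(y, x)

def a_law_expand(y: float) -> float:
    """A-law decoding, the inverse function F^-1(y) for |y| < 1."""
    ay = abs(y)
    if ay < 1 / (1 + math.log(A)):
        x = ay * (1 + math.log(A)) / A
    else:
        x = math.exp(-1 + ay * (1 + math.log(A))) / A
    return math.copysign(x, y)

# Round trip: quiet samples are boosted before digitizing, then restored.
for x in (0.001, 0.1, 0.9):
    y = a_law_compress(x)
    print(f"x={x:6.3f}  F(x)={y:6.3f}  F^-1(F(x))={a_law_expand(y):6.3f}")
```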
The reason for this encoding is that the wide dynamic range of speech does not lend itself well to efficient linear digital encoding. A-law encoding effectively reduces the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-distortion ratio that is superior to that obtained by linear encoding for a given number of bits.
== Comparison to μ-law ==
The μ-law algorithm provides a slightly larger dynamic range than the A-law at the cost of worse proportional distortion for small signals. By convention, A-law is used for an international connection if at least one country uses it.
== See also ==
μ-law algorithm
Dynamic range compression
Signal compression
Companding
G.711
DS0
Tapered floating point
== External links ==
Waveform Coding Techniques - Has details of implementation (but note that the A-law equation is incorrect)
A-law implementation in C-language with example code
The search for extraterrestrial intelligence (usually shortened as SETI) is an expression that refers to the diverse efforts and scientific projects intended to detect extraterrestrial signals, or any evidence of intelligent life beyond Earth.
Researchers use methods such as monitoring electromagnetic radiation, searching for optical signals, and investigating potential extraterrestrial artifacts for any signs of transmission from civilizations present on other planets. Some initiatives have also attempted to send messages to hypothetical alien civilizations, such as NASA's Golden Record.
Modern SETI research began in the early 20th century after the advent of radio, expanding with projects like Project Ozma, the Wow! signal detection, and the Breakthrough Listen initiative, a $100 million, 10-year attempt to detect signals from nearby stars announced in 2015 by Stephen Hawking and Yuri Milner. Since the 1980s, international efforts have been ongoing, with community-led projects such as SETI@home and Project Argus engaging in analyzing data. While SETI remains a respected scientific field, it is often compared to conspiracy theories and UFO research, bringing unwarranted skepticism from the public despite its reliance on rigorous scientific methods and verifiable data. Similar studies on Unidentified Aerial Phenomena (UAP), such as Avi Loeb's Galileo Project, have brought further attention to SETI research.
Despite decades of searching, no confirmed evidence of alien intelligence has been found, bringing criticism onto SETI for being 'overly hopeful'. Critics argue that SETI is speculative and unfalsifiable, while supporters see it as a crucial step in addressing the Fermi Paradox and understanding extraterrestrial technosignatures.
== History ==
=== Early work ===
There have been many earlier searches for extraterrestrial intelligence within the Solar System. In 1896, Nikola Tesla suggested that an extreme version of his wireless electrical transmission system could be used to contact beings on Mars. In 1899, while conducting experiments at his Colorado Springs experimental station, he thought he had detected a signal from Mars since an odd repetitive static signal seemed to cut off when Mars set in the night sky. Analysis of Tesla's research has led to a range of explanations including:
Tesla simply misunderstood the new technology he was working with,
that he may have been observing signals from Marconi's European radio experiments,
and even speculation that he could have picked up naturally occurring radio noise caused by a moon of Jupiter (Io) moving through the magnetosphere of Jupiter.
In the early 1900s, Guglielmo Marconi, Lord Kelvin and David Peck Todd also stated their belief that radio could be used to contact Martians, with Marconi stating that his stations had also picked up potential Martian signals.
On August 21–23, 1924, Mars entered an opposition closer to Earth than at any time in the century before or the next 80 years. In the United States, a "National Radio Silence Day" was promoted during a 36-hour period from August 21–23, with all radios quiet for five minutes on the hour, every hour. At the United States Naval Observatory, a radio receiver, tuned to a wavelength between 8 and 9 km, was lifted 3 kilometres (1.9 miles) above the ground in a dirigible, using a "radio-camera" developed by Amherst College and Charles Francis Jenkins. The program was led by David Peck Todd with the military assistance of Admiral Edward W. Eberle (Chief of Naval Operations), with William F. Friedman (chief cryptographer of the United States Army) assigned to translate any potential Martian messages.
A 1959 paper by Philip Morrison and Giuseppe Cocconi first pointed out the possibility of searching the microwave spectrum. It proposed frequencies and a set of initial targets.
In 1960, Cornell University astronomer Frank Drake performed the first modern SETI experiment, named "Project Ozma" after the Queen of Oz in L. Frank Baum's fantasy books. Drake used a radio telescope 26 metres (85 ft) in diameter at Green Bank, West Virginia, to examine the stars Tau Ceti and Epsilon Eridani near the 1.420 gigahertz marker frequency, a region of the radio spectrum dubbed the "water hole" due to its proximity to the hydrogen and hydroxyl radical spectral lines. A 400 kilohertz band around the marker frequency was scanned using a single-channel receiver with a bandwidth of 100 hertz. He found nothing of interest.
Soviet scientists took a strong interest in SETI during the 1960s and performed a number of searches with omnidirectional antennas in the hope of picking up powerful radio signals. Soviet astronomer Iosif Shklovsky wrote the pioneering book in the field, Universe, Life, Intelligence (1962), which was expanded upon by American astronomer Carl Sagan as the best-selling book Intelligent Life in the Universe (1966).
In the March 1955 issue of Scientific American, John D. Kraus described an idea to scan the cosmos for natural radio signals using a flat-plane radio telescope equipped with a parabolic reflector. Within two years, his concept was approved for construction by Ohio State University. With a total of US$71,000 (equivalent to $794,880 in 2024) in grants from the National Science Foundation, construction began on an 8-hectare (20-acre) plot in Delaware, Ohio. This Ohio State University Radio Observatory telescope was called "Big Ear". Later, it began the world's first continuous SETI program, called the Ohio State University SETI program.
In 1971, NASA funded a SETI study that involved Drake, Barney Oliver of Hewlett-Packard Laboratories, and others. The resulting report proposed the construction of an Earth-based radio telescope array with 1,500 dishes known as "Project Cyclops". The price tag for the Cyclops array was US$10 billion. Cyclops was not built, but the report formed the basis of much SETI work that followed.
The Ohio State SETI program gained fame on August 15, 1977, when Jerry Ehman, a project volunteer, witnessed a startlingly strong signal received by the telescope. He quickly circled the indication on a printout and scribbled the exclamation "Wow!" in the margin. Dubbed the Wow! signal, it is considered by some to be the best candidate for a radio signal from an artificial, extraterrestrial source ever discovered, but it has not been detected again in several additional searches.
On 24 May 2023, a test extraterrestrial signal, in the form of a "coded radio signal from Mars", was transmitted to radio telescopes on Earth, according to a report in The New York Times.
=== Sentinel, META, and BETA ===
In 1980, Carl Sagan, Bruce Murray, and Louis Friedman founded the U.S. Planetary Society, partly as a vehicle for SETI studies.
In the early 1980s, Harvard University physicist Paul Horowitz took the next step and proposed the design of a spectrum analyzer specifically intended to search for SETI transmissions. Traditional desktop spectrum analyzers were of little use for this job, as they sampled frequencies using banks of analog filters and so were restricted in the number of channels they could acquire. However, modern integrated-circuit digital signal processing (DSP) technology could be used to build autocorrelation receivers to check far more channels. This work led in 1981 to a portable spectrum analyzer named "Suitcase SETI" that had a capacity of 131,000 narrow band channels. After field tests that lasted into 1982, Suitcase SETI was put into use in 1983 with the 26-meter (85 ft) Harvard/Smithsonian radio telescope at Oak Ridge Observatory in Harvard, Massachusetts. This project was named "Sentinel" and continued into 1985.
Even 131,000 channels were not enough to search the sky in detail at a fast rate, so Suitcase SETI was followed in 1985 by Project "META", for "Megachannel Extra-Terrestrial Assay". The META spectrum analyzer had a capacity of 8.4 million channels and a channel resolution of 0.05 hertz. An important feature of META was its use of frequency Doppler shift to distinguish between signals of terrestrial and extraterrestrial origin. The project was led by Horowitz with the help of the Planetary Society, and was partly funded by movie maker Steven Spielberg. A second such effort, META II, was begun in Argentina in 1990, to search the southern sky, receiving an equipment upgrade in 1996–1997.
The follow-on to META was named "BETA", for "Billion-channel Extraterrestrial Assay", and it commenced observation on October 30, 1995. The heart of BETA's processing capability consisted of 63 dedicated fast Fourier transform (FFT) engines, each capable of performing a 2²²-point complex FFT in two seconds, and 21 general-purpose personal computers equipped with custom digital signal processing boards. This allowed BETA to receive 250 million simultaneous channels with a resolution of 0.5 hertz per channel. It scanned through the microwave spectrum from 1.400 to 1.720 gigahertz in eight hops, with two seconds of observation per hop. An important capability of the BETA search was rapid and automatic re-observation of candidate signals, achieved by observing the sky with two adjacent beams, one slightly to the east and the other slightly to the west. A successful candidate signal would first transit the east beam, and then the west beam, and do so with a speed consistent with Earth's sidereal rotation rate. A third receiver observed the horizon to veto signals of obvious terrestrial origin. On March 23, 1999, the 26-meter radio telescope on which Sentinel, META and BETA were based was blown over by strong winds and seriously damaged. This forced the BETA project to cease operation.
=== MOP and Project Phoenix ===
In 1978, the NASA SETI program had been heavily criticized by Senator William Proxmire, and funding for SETI research was removed from the NASA budget by Congress in 1981; however, funding was restored in 1982, after Carl Sagan talked with Proxmire and convinced him of the program's value. In 1992, the U.S. government funded an operational SETI program, in the form of the NASA Microwave Observing Program (MOP). MOP was planned as a long-term effort to conduct a general survey of the sky and also carry out targeted searches of 800 specific nearby stars. MOP was to be performed by radio antennas associated with the NASA Deep Space Network, as well as the 140-foot (43 m) radio telescope of the National Radio Astronomy Observatory at Green Bank, West Virginia and the 1,000-foot (300 m) radio telescope at the Arecibo Observatory in Puerto Rico. The signals were to be analyzed by spectrum analyzers, each with a capacity of 15 million channels. These spectrum analyzers could be grouped together to obtain greater capacity. Those used in the targeted search had a bandwidth of 1 hertz per channel, while those used in the sky survey had a bandwidth of 30 hertz per channel.
MOP drew the attention of the United States Congress, where the program met opposition and was canceled one year after its start. SETI advocates continued without government funding, and in 1995 the nonprofit SETI Institute of Mountain View, California resurrected the MOP program under the name of Project "Phoenix", backed by private sources of funding. In 2012 it cost around $2 million per year to maintain SETI research at the SETI Institute and around 10 times that to support different SETI activities globally. Project Phoenix, under the direction of Jill Tarter, was a continuation of the targeted search program from MOP and studied roughly 1,000 nearby Sun-like stars until approximately 2015. From 1995 through March 2004, Phoenix conducted observations at the 64-meter (210 ft) Parkes radio telescope in Australia, the 140-foot (43 m) radio telescope of the National Radio Astronomy Observatory in Green Bank, West Virginia, and the 1,000-foot (300 m) radio telescope at the Arecibo Observatory in Puerto Rico. The project observed the equivalent of 800 stars over the available channels in the frequency range from 1200 to 3000 MHz. The search was sensitive enough to pick up transmitters with 1 GW EIRP to a distance of about 200 light-years.
== Ongoing radio searches ==
Many radio frequencies penetrate Earth's atmosphere quite well, and this led to radio telescopes that investigate the cosmos using large radio antennas. Furthermore, human endeavors emit considerable electromagnetic radiation as a byproduct of communications such as television and radio. These signals would be easy to recognize as artificial due to their repetitive nature and narrow bandwidths. Earth has been sending radio waves from broadcasts into space for over 100 years. These signals have reached over 1,000 stars, most notably Vega, Aldebaran, Barnard's Star, Sirius, and Proxima Centauri. If intelligent alien life exists on any planet orbiting these nearby stars, these signals could be heard and deciphered, even though some of the signal is garbled by the Earth's ionosphere.
Many international radio telescopes are currently being used for radio SETI searches, including the Low Frequency Array (LOFAR) in Europe, the Murchison Widefield Array (MWA) in Australia, and the Lovell Telescope in the United Kingdom.
=== Allen Telescope Array ===
The SETI Institute collaborated with the Radio Astronomy Laboratory at the Berkeley SETI Research Center to develop a specialized radio telescope array for SETI studies, similar to a mini-cyclops array. Formerly known as the One Hectare Telescope (1HT), the concept was renamed the "Allen Telescope Array" (ATA) after the project's benefactor, Paul Allen. Its sensitivity is designed to be equivalent to a single large dish more than 100 meters in diameter, if fully completed. Presently, the array has 42 operational dishes at the Hat Creek Radio Observatory in rural northern California.
The full array (ATA-350) is planned to consist of 350 or more offset-Gregorian radio dishes, each 6.1 meters (20 feet) in diameter. These dishes are the largest producible with commercially available satellite television dish technology. The ATA was planned for a 2007 completion date, at a cost of US$25 million. The SETI Institute provided money for building the ATA while University of California, Berkeley designed the telescope and provided operational funding. The first portion of the array (ATA-42) became operational in October 2007 with 42 antennas. The DSP system planned for ATA-350 is extremely ambitious. Completion of the full 350 element array will depend on funding and the technical results from ATA-42.
ATA-42 (ATA) is designed to allow multiple observers simultaneous access to the interferometer output. Typically, the ATA snapshot imager (used for astronomical surveys and SETI) is run in parallel with a beamforming system (used primarily for SETI). ATA also supports observations in multiple synthesized pencil beams at once, through a technique known as "multibeaming". Multibeaming provides an effective filter for identifying false positives in SETI, since a very distant transmitter must appear at only one point on the sky.
SETI Institute's Center for SETI Research (CSR) uses ATA in the search for extraterrestrial intelligence, observing 12 hours a day, 7 days a week. From 2007 to 2015, ATA identified hundreds of millions of technological signals. So far, all these signals have been assigned the status of noise or radio frequency interference because a) they appear to be generated by satellites or Earth-based transmitters, or b) they disappeared before the threshold time limit of ~1 hour. Researchers in CSR are working on ways to reduce the threshold time limit, and to expand ATA's capabilities for detection of signals that may have embedded messages.
Berkeley astronomers used the ATA to pursue several science topics, some of which might have transient SETI signals, until 2011, when the collaboration between the University of California, Berkeley and the SETI Institute was terminated.
CNET published an article and pictures about the Allen Telescope Array (ATA) on December 12, 2008.
In April 2011, the ATA entered an 8-month "hibernation" due to funding shortfalls. Regular operation of the ATA resumed on December 5, 2011.
In 2012, the ATA was revitalized with a $3.6 million donation by Franklin Antonio, co-founder and Chief Scientist of QUALCOMM Incorporated. This gift supported upgrades of all the receivers on the ATA dishes, giving them 2× to 10× greater sensitivity than before over the range 1–8 GHz and supporting observations over a wider frequency range of 1–18 GHz, though initially the radio frequency electronics only went to 12 GHz. As of July 2013, the first of these receivers was installed and proven, with full installation on all 42 antennas expected by June 2017. ATA is well suited to the search for extraterrestrial intelligence (SETI) and to the discovery of astronomical radio sources, such as heretofore unexplained non-repeating, possibly extragalactic, pulses known as fast radio bursts or FRBs.
=== SERENDIP ===
SERENDIP (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) is a SETI program launched in 1979 by the Berkeley SETI Research Center. SERENDIP takes advantage of ongoing "mainstream" radio telescope observations as a "piggy-back" or "commensal" program, using large radio telescopes including the NRAO 90m telescope at Green Bank and, formerly, the Arecibo 305m telescope. Rather than having its own observation program, SERENDIP analyzes deep space radio telescope data that it obtains while other astronomers are using the telescopes. The most recently deployed SERENDIP spectrometer, SERENDIP VI, was installed at both the Arecibo Telescope and the Green Bank Telescope in 2014–2015.
=== Breakthrough Listen ===
Breakthrough Listen is a ten-year initiative with $100 million funding begun in July 2015 to actively search for intelligent extraterrestrial communications in the universe, in a substantially expanded way, using resources that had not previously been extensively used for the purpose. It has been described as the most comprehensive search for alien communications to date. The science program for Breakthrough Listen is based at Berkeley SETI Research Center, located in the Astronomy Department at the University of California, Berkeley.
Announced in July 2015, the project is observing for thousands of hours every year on two major radio telescopes, the Green Bank Observatory in West Virginia, and the Parkes Observatory in Australia. Previously, only about 24 to 36 hours of telescope time per year were used in the search for alien life. Furthermore, the Automated Planet Finder at Lick Observatory is searching for optical signals coming from laser transmissions. The massive data rates from the radio telescopes (24 GB/s at Green Bank) necessitated the construction of dedicated hardware at the telescopes to perform the bulk of the analysis. Some of the data are also analyzed by volunteers in the SETI@home volunteer computing network. Founder of modern SETI Frank Drake was one of the scientists on the project's advisory committee.
In October 2019, Breakthrough Listen started a collaboration with scientists from the TESS team (Transiting Exoplanet Survey Satellite) to look for signs of advanced extraterrestrial life. Thousands of new planets found by TESS will be scanned for technosignatures by Breakthrough Listen partner facilities across the globe. Data from TESS monitoring of stars will also be searched for anomalies.
=== FAST ===
China's Five-hundred-meter Aperture Spherical Telescope (FAST) lists detecting interstellar communication signals as part of its science mission. It is funded by the National Development and Reform Commission (NDRC) and managed by the National Astronomical Observatories (NAOC) of the Chinese Academy of Sciences (CAS). FAST is the first radio observatory built with SETI as a core scientific goal. It consists of a fixed 500 m (1,600 ft) diameter spherical dish constructed in a natural sinkhole depression formed by karst processes in the region. It is the world's largest filled-aperture radio telescope.
According to its website, FAST can search out to 28 light-years and is able to reach 1,400 stars. If the transmitter's radiated power were increased to 1,000,000 MW, FAST would be able to reach one million stars. By comparison, the former Arecibo 305-meter telescope had a detection distance of 18 light-years.
On 14 June 2022, astronomers working with China's FAST telescope reported the possibility of having detected artificial (presumably alien) signals, but cautioned that further studies were required to determine whether natural radio interference might be the source. More recently, on 18 June 2022, Dan Werthimer, chief scientist for several SETI-related projects, reportedly noted, "These signals are from radio interference; they are due to radio pollution from earthlings, not from E.T.".
=== UCLA ===
Since 2016, University of California Los Angeles (UCLA) undergraduate and graduate students have been participating in radio searches for technosignatures with the Green Bank Telescope. Targets include the Kepler field, TRAPPIST-1, and solar-type stars. The search is sensitive to Arecibo-class transmitters located within 420 light years of Earth and to transmitters that are 1,000 times more powerful than Arecibo located within 13,000 light years of Earth.
== Community SETI projects ==
=== SETI@home ===
The SETI@home project used volunteer computing to analyze signals acquired by the SERENDIP project.
SETI@home was conceived by David Gedye along with Craig Kasnoff and is a popular volunteer computing project that was launched by the Berkeley SETI Research Center at the University of California, Berkeley, in May 1999. It was originally funded by The Planetary Society and Paramount Pictures, and later by the state of California. The project is run by director David P. Anderson and chief scientist Dan Werthimer. Any individual could become involved with SETI research by downloading the Berkeley Open Infrastructure for Network Computing (BOINC) software program, attaching to the SETI@home project, and allowing the program to run as a background process that uses idle computer power. The SETI@home program itself ran signal analysis on a "work unit" of data recorded from the central 2.5 MHz wide band of the SERENDIP IV instrument. After computation on the work unit was complete, the results were then automatically reported back to SETI@home servers at the University of California, Berkeley. By June 28, 2009, the SETI@home project had over 180,000 active participants volunteering a total of over 290,000 computers. These computers gave SETI@home an average computational power of 617 teraFLOPS. In 2004, radio source SHGb02+14a set off speculation in the media that a signal had been detected, but researchers noted that the frequency drifted rapidly and that the detection on three SETI@home computers fell within random chance.
By 2010, after 10 years of data collection, SETI@home had listened to that one frequency at every point of over 67 percent of the sky observable from Arecibo with at least three scans (out of the goal of nine scans), which covers about 20 percent of the full celestial sphere. On March 31, 2020, with 91,454 active users, the project stopped sending out new work to SETI@home users, bringing this particular SETI effort to an indefinite hiatus.
=== SETI Net ===
SETI Network was the only fully operational private search system. The SETI Net station consisted of off-the-shelf, consumer-grade electronics to minimize cost and to allow the design to be replicated as simply as possible. It had a 3-meter parabolic antenna that could be directed in azimuth and elevation, an LNA (low-noise amplifier) that covered 100 MHz of spectrum around 1420 MHz, a receiver to reproduce the wideband audio, and a standard personal computer as the control device and for running the detection algorithms. The antenna could be pointed and locked to one sky location in RA and Dec, enabling the system to integrate on it for long periods. The Wow! signal area was monitored for many long periods. All search data was collected and is available on the Internet Archive.
SETI Net started operation in the early 1980s as a way to learn about the science of the search, and developed several software packages for the amateur SETI community. It provided an astronomical clock, a file manager to keep track of SETI data files, a spectrum analyzer optimized for amateur SETI, remote control of the station from the Internet, and other packages.
SETI Net went dark and was decommissioned on December 4, 2021. The collected data is available on their website.
=== The SETI League and Project Argus ===
Founded in 1994 in response to the United States Congress cancellation of the NASA SETI program, The SETI League, Incorporated is a membership-supported nonprofit organization with 1,500 members in 62 countries. This grass-roots alliance of amateur and professional radio astronomers is headed by executive director emeritus H. Paul Shuch, the engineer credited with developing the world's first commercial home satellite TV receiver. Many SETI League members are licensed radio amateurs and microwave experimenters. Others are digital signal processing experts and computer enthusiasts.
The SETI League pioneered the conversion of backyard satellite TV dishes 3 to 5 m (10–16 ft) in diameter into research-grade radio telescopes of modest sensitivity. The organization concentrates on coordinating a global network of small, amateur-built radio telescopes under Project Argus, an all-sky survey seeking to achieve real-time coverage of the entire sky. Project Argus was conceived as a continuation of the all-sky survey component of the late NASA SETI program (the targeted search having been continued by the SETI Institute's Project Phoenix). There are currently 143 Project Argus radio telescopes operating in 27 countries. Project Argus instruments typically exhibit sensitivity on the order of 10^−23 W/m^2, or roughly equivalent to that achieved by the Ohio State University Big Ear radio telescope in 1977, when it detected the landmark "Wow!" candidate signal.
The name "Argus" derives from the mythical Greek guard-beast who had 100 eyes, and could see in all directions at once. In the SETI context, the name has been used for radio telescopes in fiction (Arthur C. Clarke, "Imperial Earth"; Carl Sagan, "Contact"), was the name initially used for the NASA study ultimately known as "Cyclops," and is the name given to an omnidirectional radio telescope design being developed at the Ohio State University.
== Optical experiments ==
While most SETI sky searches have studied the radio spectrum, some SETI researchers have considered the possibility that alien civilizations might be using powerful lasers for interstellar communications at optical wavelengths. The idea was first suggested by R. N. Schwartz and Charles Hard Townes in a 1961 paper published in the journal Nature titled "Interstellar and Interplanetary Communication by Optical Masers". However, the 1971 Cyclops study discounted the possibility of optical SETI, reasoning that construction of a laser system that could outshine the bright central star of a remote star system would be too difficult. In 1983, Townes published a detailed study of the idea in the United States journal Proceedings of the National Academy of Sciences, which was met with interest by the SETI community.
There are two problems with optical SETI. The first problem is that lasers are highly "monochromatic", that is, they emit light only on one frequency, making it troublesome to figure out what frequency to look for. However, emitting light in narrow pulses results in a broad spectrum of emission; the spread in frequency becomes higher as the pulse width becomes narrower, making it easier to detect an emission.
The other problem is that while radio transmissions can be broadcast in all directions, lasers are highly directional. Interstellar gas and dust is almost transparent to near infrared, so these signals can be seen from greater distances, but the extraterrestrial laser signals would need to be transmitted in the direction of Earth in order to be detected.
Optical SETI supporters have conducted paper studies of the effectiveness of using contemporary high-energy lasers and a ten-meter diameter mirror as an interstellar beacon. The analysis shows that an infrared pulse from a laser, focused into a narrow beam by such a mirror, would appear thousands of times brighter than the Sun to a distant civilization in the beam's line of fire. The Cyclops study proved incorrect in suggesting a laser beam would be inherently hard to see.
Such a system could be made to automatically steer itself through a target list, sending a pulse to each target at a constant rate. This would allow targeting of all Sun-like stars within a distance of 100 light-years. The studies have also described an automatic laser pulse detector system with a low-cost, two-meter mirror made of carbon composite materials, focusing on an array of light detectors. This automatic detector system could perform sky surveys to detect laser flashes from civilizations attempting contact.
Several optical SETI experiments are now in progress. A Harvard-Smithsonian group that includes Paul Horowitz designed a laser detector and mounted it on Harvard's 155-centimeter (61-inch) optical telescope. This telescope is currently being used for a more conventional star survey, and the optical SETI survey is "piggybacking" on that effort. Between October 1998 and November 1999, the survey inspected about 2,500 stars. Nothing that resembled an intentional laser signal was detected, but efforts continue. The Harvard-Smithsonian group is now working with Princeton University to mount a similar detector system on Princeton's 91-centimeter (36-inch) telescope. The Harvard and Princeton telescopes will be "ganged" to track the same targets at the same time, with the intent being to detect the same signal in both locations as a means of reducing errors from detector noise.
The Harvard-Smithsonian SETI group led by Professor Paul Horowitz built a dedicated all-sky optical survey system along the lines of that described above, featuring a 1.8-meter (72-inch) telescope. The new optical SETI survey telescope is being set up at the Oak Ridge Observatory in Harvard, Massachusetts.
The University of California, Berkeley, home of SERENDIP and SETI@home, is also conducting optical SETI searches and collaborates with the NIROSETI program. The optical SETI program at Breakthrough Listen was initially directed by Geoffrey Marcy, an extrasolar planet hunter, and it involves examination of records of spectra taken during extrasolar planet hunts for a continuous, rather than pulsed, laser signal. This survey uses the Automated Planet Finder 2.4-m telescope at the Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California. The other Berkeley optical SETI effort is more like that pursued by the Harvard-Smithsonian group and is directed by Dan Werthimer of Berkeley, who built the laser detector for the Harvard-Smithsonian group. This survey uses a 76-centimeter (30-inch) automated telescope at Leuschner Observatory and an older laser detector built by Werthimer.
The SETI Institute also runs a program called 'Laser SETI' with an instrument composed of several cameras that continuously survey the entire night sky searching for millisecond singleton laser pulses of extraterrestrial origin.
In January 2020, two Pulsed All-sky Near-infrared Optical SETI (PANOSETI) project telescopes were installed in the Lick Observatory Astrograph Dome. The project aims to commence a wide-field optical SETI search and continue prototyping designs for a full observatory. The installation can offer an "all-observable-sky" optical and wide-field near-infrared pulsed technosignature and astrophysical transient search for the northern hemisphere.
In May 2017, astronomers reported studies related to laser light emissions from stars as a way of detecting technology-related signals from an alien civilization. The reported studies included Tabby's Star (designated KIC 8462852 in the Kepler Input Catalog), an oddly dimming star whose unusual starlight fluctuations may be the result of interference by an artificial megastructure, such as a Dyson swarm, made by such a civilization. No evidence was found for technology-related signals from KIC 8462852 in the studies.
== Quantum communications ==
In a 2020 paper, Berera examined sources of decoherence in the interstellar medium and observed that the quantum coherence of photons in certain frequency bands could be sustained over interstellar distances, suggesting that quantum communication would be possible at these distances.
In a 2021 preprint, astronomer Michael Hippke described for the first time how one could search for quantum communication transmissions sent by ETI using existing telescope and receiver technology. He also provided arguments for why future searches for ETI should also target interstellar quantum communication networks.
A 2022 paper by Arjun Berera and Jaime Calderón-Figueroa noted that interstellar quantum communication by other civilizations could be possible and may be advantageous, identifying some potential challenges and factors relevant for detecting such technosignatures. Such civilizations might, for example, use X-ray photons for remotely established quantum communication and quantum teleportation as the communication mode.
== Search for extraterrestrial artifacts ==
The possibility of using interstellar messenger probes in the search for extraterrestrial intelligence was first suggested by Ronald N. Bracewell in 1960 (see Bracewell probe), and the technical feasibility of this approach was demonstrated by the British Interplanetary Society's starship study Project Daedalus in 1978. Starting in 1979, Robert Freitas advanced arguments for the proposition that physical space-probes are a superior mode of interstellar communication to radio signals (see Voyager Golden Record).
In recognition that any sufficiently advanced interstellar probe in the vicinity of Earth could easily monitor the terrestrial Internet, 'Invitation to ETI' was established by Allen Tough in 1996, as a Web-based SETI experiment inviting such spacefaring probes to establish contact with humanity. The project's 100 signatories include prominent physical, biological, and social scientists, as well as artists, educators, entertainers, philosophers and futurists. H. Paul Shuch, executive director emeritus of The SETI League, serves as the project's Principal Investigator.
Inscribing a message in matter and transporting it to an interstellar destination can be enormously more energy efficient than communication using electromagnetic waves if delays larger than light transit time can be tolerated. That said, for simple messages such as "hello," radio SETI could be far more efficient. If energy requirement is used as a proxy for technical difficulty, then a solarcentric Search for Extraterrestrial Artifacts (SETA) may be a useful supplement to traditional radio or optical searches.
Much like the "preferred frequency" concept in SETI radio beacon theory, the Earth-Moon or Sun-Earth libration orbits might therefore constitute the most universally convenient parking places for automated extraterrestrial spacecraft exploring arbitrary stellar systems. A viable long-term SETI program may be founded upon a search for these objects.
In 1979, Freitas and Valdes conducted a photographic search of the vicinity of the Earth-Moon triangular libration points L4 and L5, and of the solar-synchronized positions in the associated halo orbits, seeking possible orbiting extraterrestrial interstellar probes, but found nothing to a detection limit of about 14th magnitude. The authors conducted a second, more comprehensive photographic search for probes in 1982 that examined the five Earth-Moon Lagrangian positions and included the solar-synchronized positions in the stable L4/L5 libration orbits, the potentially stable nonplanar orbits near L1/L2, Earth-Moon L3, and also L2 in the Sun-Earth system. Again no extraterrestrial probes were found to limiting magnitudes of 17–19th magnitude near L3/L4/L5, 10–18th magnitude for L1/L2, and 14–16th magnitude for Sun-Earth L2.
In June 1983, Valdes and Freitas used the 26 m radiotelescope at Hat Creek Radio Observatory to search for the tritium hyperfine line at 1516 MHz from 108 assorted astronomical objects, with emphasis on 53 nearby stars including all visible stars within a 20 light-year radius. The tritium frequency was deemed highly attractive for SETI work because (1) the isotope is cosmically rare, (2) the tritium hyperfine line is centered in the SETI water hole region of the terrestrial microwave window, and (3) in addition to beacon signals, tritium hyperfine emission may occur as a byproduct of extensive nuclear fusion energy production by extraterrestrial civilizations. The wideband- and narrowband-channel observations achieved sensitivities of 5–14×10^−21 W/m^2/channel and 0.7–2×10^−24 W/m^2/channel, respectively, but no detections were made.
Others have speculated that we might find traces of past civilizations in our own Solar System, on planets like Venus or Mars, although such traces would most likely be found underground.
== Technosignatures ==
Technosignatures, including all signs of technology, are a recent avenue in the search for extraterrestrial intelligence. Technosignatures may originate from various sources, from megastructures such as Dyson spheres and space mirrors or space shaders to the atmospheric contamination created by an industrial civilization, or city lights on extrasolar planets, and may be detectable in the future with large hypertelescopes.
Technosignatures can be divided into three broad categories: astroengineering projects, signals of planetary origin, and spacecraft within and outside the Solar System.
An astroengineering installation such as a Dyson sphere, designed to convert all of the incident radiation of its host star into energy, could be detected through the observation of an infrared excess from a solar analog star, or by the star's apparent disappearance in the visible spectrum over several years. After examining some 100,000 nearby large galaxies, a team of researchers has concluded that none of them display any obvious signs of highly advanced technological civilizations.
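A short sketch of why waste heat shows up as an infrared excess (using only Wien's displacement law; the 300 K shell temperature is an assumed illustrative value, not a measured one):

WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins
T_STAR = 5800      # K, photosphere of a Sun-like star
T_SHELL = 300      # K, assumed waste-heat temperature of a Dyson shell

for name, T in [("star", T_STAR), ("Dyson shell", T_SHELL)]:
    peak_um = WIEN_B / T * 1e6  # wavelength of peak emission, in micrometers
    print(f"{name}: peak emission near {peak_um:.1f} um")
# The star peaks near 0.5 um (visible light); re-radiated waste heat peaks
# near 10 um, deep in the infrared, producing the telltale excess.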
Another hypothetical form of astroengineering, the Shkadov thruster, moves its host star by reflecting some of the star's light back on itself, and would be detected by observing if its transits across the star abruptly end with the thruster in front. Asteroid mining within the Solar System is also a detectable technosignature of the first kind.
Individual extrasolar planets can be analyzed for signs of technology. Avi Loeb of the Center for Astrophysics | Harvard & Smithsonian has proposed that persistent light signals on the night side of an exoplanet can be an indication of the presence of cities and an advanced civilization. In addition, the excess infrared radiation and chemicals produced by various industrial processes or terraforming efforts may point to intelligence.
Light and heat detected from planets need to be distinguished from natural sources to conclusively prove the existence of civilization on a planet. However, as argued by the Colossus team, a civilization heat signature should be within a "comfortable" temperature range, like terrestrial urban heat islands, i.e., only a few degrees warmer than the planet itself. In contrast, natural sources such as wildfires, volcanoes, etc. are significantly hotter, so they will be well distinguished by their maximum flux at a different wavelength.
Other than astroengineering, technosignatures such as artificial satellites around exoplanets, particularly those in geostationary orbit, might be detectable even with today's technology and data, and, much like fossils on Earth, would allow traces of extrasolar life from long ago to be found.
Extraterrestrial craft are another target in the search for technosignatures. Magnetic sail interstellar spacecraft should be detectable over thousands of light-years of distance through the synchrotron radiation they would produce through interaction with the interstellar medium; other interstellar spacecraft designs may be detectable at more modest distances. In addition, robotic probes within the Solar System are also being sought with optical and radio searches.
For a sufficiently advanced civilization, hyper-energetic neutrinos from Planck-scale accelerators should be detectable at distances of many megaparsecs (Mpc).
=== Advances for Bio and Technosignature Detection ===
A notable advancement in technosignature detection is the development of an algorithm for signal reconstruction in zero-knowledge one-way communication channels. This algorithm decodes signals from unknown sources without prior knowledge of the encoding scheme, using principles from Algorithmic Information Theory to identify the geometric and topological dimensions of the encoding space. It successfully reconstructed the Arecibo message despite significant noise. The work establishes a connection between syntax and semantics in SETI and technosignature detection, enhancing fields like cryptography and Information Theory.
Based on fractal theory and the Weierstrass function, a known fractal, another method authored by the same group called fractal messaging offers a framework for space-time scale-free communication. This method leverages properties of self-similarity and scale invariance, enabling spatio-temporal scale-independent and parallel infinite-frequency communication. It also embodies the concept of sending a self-encoding/self-decoding signal as a mathematical formula, equivalent to self-executable computer code that unfolds to read a message at all possible time scales and in all possible channels simultaneously.
== Fermi paradox ==
Italian physicist Enrico Fermi suggested in the 1950s that if technologically advanced civilizations are common in the universe, then they should be detectable in one way or another. According to those who were there, Fermi either asked "Where are they?" or "Where is everybody?"
The Fermi paradox is commonly understood as asking why extraterrestrials have not visited Earth, but the same reasoning applies to the question of why signals from extraterrestrials have not been heard. The SETI version of the question is sometimes referred to as "the Great Silence".
The Fermi paradox can be stated more completely as follows:
The size and age of the universe incline us to believe that many technologically advanced civilizations must exist. However, this belief seems logically inconsistent with our lack of observational evidence to support it. Either (1) the initial assumption is incorrect and technologically advanced intelligent life is much rarer than we believe, or (2) our current observations are incomplete, and we simply have not detected them yet, or (3) our search methodologies are flawed and we are not searching for the correct indicators, or (4) it is the nature of intelligent life to destroy itself.
There are multiple explanations proposed for the Fermi paradox, ranging from analyses suggesting that intelligent life is rare (the "Rare Earth hypothesis"), to analyses suggesting that although extraterrestrial civilizations may be common, they would not communicate with us, would communicate in a way we have not discovered yet, could not travel across interstellar distances, or destroy themselves before they master the technology of either interstellar travel or communication.
The German astrophysicist and radio astronomer Sebastian von Hoerner suggested that the average duration of civilization was 6,500 years. After this time, according to him, it disappears for external reasons (the destruction of life on the planet, the destruction of only rational beings) or internal causes (mental or physical degeneration). According to his calculations, on a habitable planet (one in three million stars) there is a sequence of technological species over a time distance of hundreds of millions of years, and each of them "produces" an average of four technological species. With these assumptions, the average distance between civilizations in the Milky Way is 1,000 light years.
Science writer Timothy Ferris has posited that since galactic societies are most likely only transitory, an obvious solution is an interstellar communications network, or a type of library consisting mostly of automated systems. They would store the cumulative knowledge of vanished civilizations and communicate that knowledge through the galaxy. Ferris calls this the "Interstellar Internet", with the various automated systems acting as network "servers". If such an Interstellar Internet exists, the hypothesis states, communications between servers are mostly through narrow-band, highly directional radio or laser links. Intercepting such signals is, as discussed earlier, very difficult. However, the network could maintain some broadcast nodes in hopes of making contact with new civilizations.
Although somewhat dated in terms of "information culture" arguments, not to mention the obvious technological problems of a system that could work effectively for billions of years and requires multiple lifeforms agreeing on certain basics of communications technologies, this hypothesis is actually testable (see below).
=== Difficulty of detection ===
A significant problem is the vastness of space. Despite piggybacking on the world's most sensitive radio telescope, astronomer Charles Stuart Bowyer, the initiator of SERENDIP, noted that even the world's largest instrument of the time could not detect the random radio noise emanating from a civilization like ours, which has been leaking radio and TV signals for less than 100 years. For SERENDIP and most other SETI projects to detect a signal from an extraterrestrial civilization, the civilization would have to be beaming a powerful signal directly at us. It also means that Earth's civilization will only be detectable within a distance of 100 light-years.
== Post-detection disclosure protocol ==
The International Academy of Astronautics (IAA) has a long-standing SETI Permanent Study Group (SPSG, formerly called the IAA SETI Committee), which addresses matters of SETI science, technology, and international policy. The SPSG meets in conjunction with the International Astronautical Congress (IAC), held annually at different locations around the world, and sponsors two SETI Symposia at each IAC. In 2005, the IAA established the SETI: Post-Detection Science and Technology Taskgroup (chairman, Professor Paul Davies) "to act as a Standing Committee to be available to be called on at any time to advise and consult on questions stemming from the discovery of a putative signal of extraterrestrial intelligent (ETI) origin."
However, the protocols mentioned apply only to radio SETI, not to METI (Active SETI). The intention for METI is covered under the SETI charter "Declaration of Principles Concerning Sending Communications with Extraterrestrial Intelligence".
In October 2000 astronomers Iván Almár and Jill Tarter presented a paper to The SETI Permanent Study Group in Rio de Janeiro, Brazil which proposed a scale (modelled after the Torino scale) which is an ordinal scale between zero and ten that quantifies the impact of any public announcement regarding evidence of extraterrestrial intelligence; the Rio scale has since inspired the 2005 San Marino Scale (in regard to the risks of transmissions from Earth) and the 2010 London Scale (in regard to the detection of extraterrestrial life). The Rio scale itself was revised in 2018.
The SETI Institute does not officially recognize the Wow! signal as being of extraterrestrial origin, as it could not be verified, although in a 2020 Twitter post the organization stated that "an astronomer might have pinpointed the host star". The SETI Institute has also publicly denied that the candidate signal radio source SHGb02+14a is of extraterrestrial origin. Although other volunteer computing projects such as Zooniverse credit users for discoveries, there is currently no crediting or early notification by SETI@home following the discovery of a signal.
Some people, including Steven M. Greer, have expressed cynicism that the general public might not be informed in the event of a genuine discovery of extraterrestrial intelligence, due to significant vested interests. Some, such as Bruce Jakosky, have also argued that the official disclosure of extraterrestrial life may have far-reaching and as yet undetermined implications for society, particularly for the world's religions.
== Active SETI ==
Active SETI, also known as messaging to extraterrestrial intelligence (METI), consists of sending signals into space in the hope that they will be detected by an alien intelligence.
=== Realized interstellar radio message projects ===
In November 1974, a largely symbolic attempt was made at the Arecibo Observatory to send a message to other worlds. Known as the Arecibo Message, it was sent towards the globular cluster M13, which is 25,000 light-years from Earth. Further interstellar radio messages (IRMs), namely Cosmic Call, Teen Age Message, Cosmic Call 2, and A Message From Earth, were transmitted in 1999, 2001, 2003 and 2008 from the Evpatoria Planetary Radar.
=== Debate ===
Whether or not to attempt to contact extraterrestrials has attracted significant academic debate in the fields of space ethics and space policy. Physicist Stephen Hawking, in his book A Brief History of Time, suggests that "alerting" extraterrestrial intelligences to our existence is foolhardy, citing humankind's history of treating its own kind harshly in meetings of civilizations with a significant technology gap, e.g., the extermination of the Tasmanian Aborigines. He suggests, in view of this history, that we "lay low". In one response to Hawking, in September 2016, astronomer Seth Shostak sought to allay such concerns. Astronomer Jill Tarter also disagrees with Hawking, arguing that aliens developed and long-lived enough to communicate and travel across interstellar distances would have evolved a cooperative and less violent intelligence. She, however, thinks it is too soon for humans to attempt active SETI and that humans should be more advanced technologically first but keep listening in the meantime.
== Criticism ==
As various SETI projects have progressed, some have criticized early claims by researchers as being too "euphoric". For example, Peter Schenkel, while remaining a supporter of SETI projects, wrote in 2006 that:
[i]n light of new findings and insights, it seems appropriate to put excessive euphoria to rest and to take a more down-to-earth view [...] We should quietly admit that the early estimates—that there may be a million, a hundred thousand, or ten thousand advanced extraterrestrial civilizations in our galaxy—may no longer be tenable.
Critics claim that the existence of extraterrestrial intelligence has no good Popperian criteria for falsifiability, as explained in a 2009 editorial in Nature, which said:
Seti... has always sat at the edge of mainstream astronomy. This is partly because, no matter how scientifically rigorous its practitioners try to be, SETI can't escape an association with UFO believers and other such crackpots. But it is also because SETI is arguably not a falsifiable experiment. Regardless of how exhaustively the Galaxy is searched, the null result of radio silence doesn't rule out the existence of alien civilizations. It means only that those civilizations might not be using radio to communicate.
Nature added that SETI was "marked by a hope, bordering on faith" that aliens were aiming signals at us, that a hypothetical alien SETI project looking at Earth with "similar faith" would be "sorely disappointed", despite our many untargeted radar and TV signals, and our few targeted Active SETI radio signals denounced by those fearing aliens, and that it had difficulties attracting even sympathetic working scientists and government funding because it was "an effort so likely to turn up nothing".
However, Nature also added, "Nonetheless, a small SETI effort is well worth supporting, especially given the enormous implications if it did succeed" and that "happily, a handful of wealthy technologists and other private donors have proved willing to provide that support".
Supporters of the Rare Earth Hypothesis argue that advanced lifeforms are likely to be very rare, and that, if that is so, then SETI efforts will be futile. However, the Rare Earth Hypothesis itself faces many criticisms.
In 1993, Roy Mash stated that "Arguments favoring the existence of extraterrestrial intelligence nearly always contain an overt appeal to big numbers, often combined with a covert reliance on generalization from a single instance" and concluded that "the dispute between believers and skeptics is seen to boil down to a conflict of intuitions which can barely be engaged, let alone resolved, given our present state of knowledge". In response, in 2012, Milan M. Ćirković, then research professor at the Astronomical Observatory of Belgrade and a research associate of the Future of Humanity Institute at the University of Oxford, said that Mash was unrealistically over-reliant on excessive abstraction that ignored the empirical information available to modern SETI researchers.
George Basalla, Emeritus Professor of History at the University of Delaware, is a critic of SETI who argued in 2006 that "extraterrestrials discussed by scientists are as imaginary as the spirits and gods of religion or myth", and was in turn criticized by Milan M. Ćirković for, among other things, being unable to distinguish between "SETI believers" and "scientists engaged in SETI", who are often sceptical (especially about quick detection), such as Freeman Dyson and, at least in their later years, Iosif Shklovsky and Sebastian von Hoerner, and for ignoring the difference between the knowledge underlying the arguments of modern scientists and those of ancient Greek thinkers.
Massimo Pigliucci, Professor of Philosophy at CUNY – City College, asked in 2010 whether SETI is "uncomfortably close to the status of pseudoscience" due to the lack of any clear point at which negative results cause the hypothesis of Extraterrestrial Intelligence to be abandoned, before eventually concluding that SETI is "almost-science", which is described by Milan M. Ćirković as Pigliucci putting SETI in "the illustrious company of string theory, interpretations of quantum mechanics, evolutionary psychology and history (of the 'synthetic' kind done recently by Jared Diamond)", while adding that his justification for doing so with SETI "is weak, outdated, and reflecting particular philosophical prejudices similar to the ones described above in Mash and Basalla".
Richard Carrigan, a particle physicist at the Fermi National Accelerator Laboratory near Chicago, Illinois, suggested that passive SETI could also be dangerous and that a signal released onto the Internet could act as a computer virus. Computer security expert Bruce Schneier dismissed this possibility as a "bizarre movie-plot threat".
=== Ufology ===
Ufologist Stanton Friedman has often criticized SETI researchers for, among other reasons, what he sees as their unscientific criticisms of Ufology, but, unlike SETI, Ufology has generally not been embraced by academia as a scientific field of study, and it is usually characterized as a partial or total pseudoscience. In a 2016 interview, Jill Tarter pointed out that it is still a misconception that SETI and UFOs are related. She states, "SETI uses the tools of the astronomer to attempt to find evidence of somebody else's technology coming from a great distance. If we ever claim detection of a signal, we will provide evidence and data that can be independently confirmed. UFOs—none of the above." The Galileo Project headed by Harvard astronomer Avi Loeb is one of the few scientific efforts to study UFOs or UAPs. Loeb has criticized the way the study of UAP is often dismissed and insufficiently studied by scientists, arguing that it should shift from "occupying the talking points of national security administrators and politicians" to the realm of science. The Galileo Project's position after the publication of the 2021 UFO Report by the U.S. Intelligence community is that the scientific community needs to "systematically, scientifically and transparently look for potential evidence of extraterrestrial technological equipment".
== See also ==
== References ==
== Further reading ==
Campbell, John B. (2006). "Archaeology and direct imaging of exoplanets" (PDF). In C. Aime & F. Vakili (ed.). Proceedings of the International Astronomical Union. Cambridge University Press. pp. 247 ff. ISBN 978-0-521-85607-2. Archived from the original (PDF) on 2009-03-26.
Carlotto, Mark J. (2007). "Detecting Patterns of a Technological Intelligence in Remotely Sensed Imagery" (PDF). Journal of the British Interplanetary Society. 60: 28–39. Bibcode:2007JBIS...60...28C. Archived from the original (PDF) on 2016-09-09. Retrieved 2009-03-03.
Catran, Jack (1980). Is There Intelligent Life on Earth?. Lidiraven Books. ISBN 978-0-9361-6229-4.
Ćirković, Milan M. (2012). The Astrobiological Landscape: Philosophical Foundations of the Study of Cosmic Life. Cambridge Astrobiology. Cambridge University Press. ISBN 9780521197755. ISSN 1759-3247.
Cooper, Keith (2019). The Contact Paradox: Challenging Our Assumptions in the Search for Extraterrestrial Intelligence. Bloomsbury Publishing. ISBN 9781472960443.
DeVito, Carl L. (2013). Science, Seti, and Mathematics. Berghahn Books. ISBN 9781782380702.
Maccone, Claudio (2022). Evo-SETI: Life Evolution Statistics on Earth and Exoplanets. Springer International Publishing. ISBN 9783030519339.
McConnell, Brian; Chuck Toporek (2001). Beyond Contact: A Guide to SETI and Communicating with Alien Civilizations. O'Reilly. ISBN 978-0-596-00037-0.
Morrison, Philip; Billingham, John; Wolfe, John (1977). The Search for Extraterrestrial Intelligence: SETI. NASA SP. Washington, D.C.
Oberhaus, Daniel (2019). Extraterrestrial Languages. The MIT Press. ISBN 9780262043069.
Roush, Wade (2020). Extraterrestrials. MIT Press. ISBN 9780262538435.
Swift, David W. (1993). SETI Pioneers: Scientists Talk about Their Search for Extraterrestrial Intelligence. Tucson, Arizona: University of Arizona Press. ISBN 0-8165-1119-5.
White, Frank (1990). The SETI Factor: How the Search for Extraterrestrial Intelligence Is Changing Our View of the Universe and Ourselves. New York: Walker & Company. ISBN 978-0-8027-1105-2.
Willis, Jon (2016). All These Worlds Are Yours: The Scientific Search for Alien Life. Yale University Press. ISBN 978-0300208696.
== External links ==
SETI Institute official website
Harvard University SETI Program Archived 2011-08-10 at the Wayback Machine
University of California, Berkeley SETI Program
Project Dorothy, a Worldwide Joint SETI Observation to Commemorate the 50th Anniversary of Project OZMA
"SETI: Astronomy as a Contact Sport - A conversation with Jill Tarter". Ideas Roadshow. April 19, 2013.
The Rio Scale Archived 2015-09-14 at the Wayback Machine, a scale for rating SETI announcements
2012 Interview of SETI Pioneer Frank Drake by astronomer Andrew Fraknoi
Now dark SETI Net station archives (www.seti.net) | Wikipedia/Search_for_extraterrestrial_intelligence |
Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.
AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is known to be basically founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.
== Overview ==
Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.
Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program—in some fixed but otherwise irrelevant universal programming language—that, when run, outputs the original string.
From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.
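This intuition can be made concrete with an ordinary compressor (a minimal sketch: true algorithmic information content is incomputable, and a general-purpose compressor such as zlib only provides an upper bound on description length):

import random
import string
import zlib

random.seed(0)

# Highly structured text stands in for encyclopedia prose.
structured = ("the quick brown fox jumps over the lazy dog. " * 70).encode()
# Random letters of exactly the same length stand in for the random pages.
alphabet = string.ascii_lowercase + " "
scrambled = "".join(random.choice(alphabet)
                    for _ in range(len(structured))).encode()

for name, data in [("structured", structured), ("random", scrambled)]:
    print(name, len(data), "bytes ->", len(zlib.compress(data, 9)), "compressed")
# The structured text shrinks to a small fraction of its size; the random
# text barely compresses at all, reflecting its higher information content.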
Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)
Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, in any consistent axiomatizable theory one can only compute finitely many digits of Ω, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).
== History ==
Algorithmic information theory was founded by Ray Solomonoff, who published the basic ideas on which the field is based as part of his invention of algorithmic probability, a way to overcome serious problems associated with the application of Bayes' rules in statistics. He first described his results at a conference at Caltech in 1960, and in a report of February 1960, "A Preliminary Report on a General Theory of Inductive Inference." Algorithmic information theory was later developed independently by Andrey Kolmogorov in 1965 and by Gregory Chaitin around 1966.
There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches in algorithmic information theory. It is possible to treat different measures of algorithmic information as particular cases of axiomatically defined measures of algorithmic information. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to deduce all such results easily from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
== Precise definitions ==
A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
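The counting argument can be spelled out explicitly (a standard pigeonhole sketch, independent of which universal machine is fixed): there are 2^n binary strings of length n, but strictly fewer candidate descriptions of length less than n,

\[
\#\{\,p : |p| < n\,\} \le \sum_{k=0}^{n-1} 2^{k} = 2^{n} - 1 < 2^{n} = \#\{\,x : |x| = n\,\},
\]

so for every n at least one string of length n has no description shorter than itself, i.e., it is random in the sense above.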
An infinite binary sequence is said to be random if, for some constant c, for all n, the Kolmogorov complexity of the initial segment of length n of the sequence is at least n − c. It can be shown that almost every sequence (from the point of view of the standard measure—"fair coin" or Lebesgue measure—on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from other stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness, there are also recursive randomness, Schnorr randomness, Kurtz randomness, and others; Yongge Wang showed that all of these randomness concepts are different.
(Related definitions can be made for alphabets other than the set {\displaystyle \{0,1\}}.)
== Specific sequence ==
Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.
The information content or complexity of an object can be measured by the length of its shortest description. For instance, the string
"0101010101010101010101010101010101010101010101010101010101010101"
has the short description "32 repetitions of '01'", while
"1100100001100001110111101110110011111010010000100101011110010110"
presumably has no simple description other than writing down the string itself.
More formally, the algorithmic complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.
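The definition can be illustrated with a deliberately tiny, made-up description language (a toy sketch: the 'L:'/'R<k>:' syntax and the function name are inventions for illustration, and the result is an upper bound within this toy language only, not the machine-based complexity itself):

def toy_description_length(s: str) -> int:
    """Length of the shortest description of s in a toy language with two
    forms: 'L:<s>' (a literal copy) and 'R<k>:<pat>' (pat repeated k times)."""
    best = len("L:") + len(s)  # the literal description always works
    for plen in range(1, len(s) // 2 + 1):
        if len(s) % plen == 0:
            pat, k = s[:plen], len(s) // plen
            if pat * k == s:  # s really is k repetitions of pat
                best = min(best, len(f"R{k}:") + plen)
    return best

print(toy_description_length("01" * 32))  # 6, via the description "R32:01"
print(toy_description_length(
    "1100100001100001110111101110110011111010010000100101011110010110"))  # 66, literal only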
A closely related notion is the probability that a universal computer outputs some string x when fed with a program chosen at random. This algorithmic "Solomonoff" probability (AP) is key in addressing the old philosophical problem of induction in a formal way.
The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).
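In the usual formulation (a sketch; conventions for the time penalty vary slightly between references), the Levin complexity of a string x relative to a fixed universal machine U is

\[
Kt(x) = \min_{p}\bigl\{\, |p| + \log t(p) \;:\; U(p) \text{ outputs } x \text{ in } t(p) \text{ steps} \,\bigr\}.
\]

Unlike plain algorithmic complexity, this quantity is computable, since only finitely many pairs of program and running time can satisfy any given budget on |p| + log t(p).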
AC and AP also allow a formal and rigorous definition of randomness of individual strings to not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmic "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.
AC, AP, and AR are the core sub-disciplines of AIT, but AIT has spawned many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, and resolves the Maxwell's demon problem, among many other applications.
== See also ==
== References ==
== External links ==
Algorithmic Information Theory at Scholarpedia
Chaitin's account of the history of AIT.
== Further reading ==
The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the most likely outcome. The various Rényi entropies are all equal for a uniform distribution, but measure the unpredictability of a nonuniform distribution in different ways. The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability.
As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy. The conditional quantum min-entropy is a one-shot, or conservative, analog of conditional quantum entropy.
To interpret a conditional information measure, suppose Alice and Bob were to share a bipartite quantum state {\displaystyle \rho _{AB}}. Alice has access to system {\displaystyle A} and Bob to system {\displaystyle B}. The conditional entropy measures the average uncertainty Bob has about Alice's state upon sampling from his own system. The min-entropy can be interpreted as the distance of a state from a maximally entangled state.
This concept is useful in quantum cryptography, in the context of privacy amplification.
== Definition for classical distributions ==
If {\displaystyle P=(p_{1},...,p_{n})} is a classical finite probability distribution, its min-entropy can be defined as
{\displaystyle H_{\rm {min}}({\boldsymbol {P}})=\log {\frac {1}{P_{\rm {max}}}},\qquad P_{\rm {max}}\equiv \max _{i}p_{i}.}
One way to justify the name of the quantity is to compare it with the more standard definition of entropy, which reads {\displaystyle H({\boldsymbol {P}})=\sum _{i}p_{i}\log(1/p_{i})} and can thus be written concisely as the expectation value of {\displaystyle \log(1/p_{i})} over the distribution. If instead of taking the expectation value of this quantity we take its minimum value, we get precisely the above definition of {\displaystyle H_{\rm {min}}({\boldsymbol {P}})}.
From an operational perspective, the min-entropy equals the negative logarithm of the probability of successfully guessing the outcome of a random draw from {\displaystyle P}. This is because it is optimal to guess the element with the largest probability, and the chance of success equals the probability of that element.
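As a concrete illustration, here is a minimal Python sketch (the function names are ours) computing the min-entropy of a distribution and comparing it with the Shannon entropy, which can never be smaller:

import math

def min_entropy(p):
    # -log2 of the largest probability: the optimal guess succeeds
    # with probability max(p).
    return -math.log2(max(p))

def shannon_entropy(p):
    # Average surprisal, for comparison.
    return -sum(q * math.log2(q) for q in p if q > 0)

P = [0.5, 0.25, 0.125, 0.125]
print(min_entropy(P))      # 1.0: the best guess succeeds half the time
print(shannon_entropy(P))  # 1.75: never below the min-entropy

Here logarithms are taken base 2, so both entropies are measured in bits.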
== Definition for quantum states ==
A natural way to generalize "min-entropy" from classical to quantum states is to leverage the simple observation that quantum states define classical probability distributions when measured in some basis. There is however the added difficulty that a single quantum state can result in infinitely many possible probability distributions, depending on how it is measured. A natural path is then, given a quantum state {\displaystyle \rho }, to still define {\displaystyle H_{\rm {min}}(\rho )} as {\displaystyle \log(1/P_{\rm {max}})}, but this time defining {\displaystyle P_{\rm {max}}} as the maximum possible probability that can be obtained measuring {\displaystyle \rho }, maximizing over all possible projective measurements.
Using this, one gets the operational definition that the min-entropy of {\displaystyle \rho } equals the negative logarithm of the probability of successfully guessing the outcome of any measurement of {\displaystyle \rho }.
Formally, this leads to the definition
{\displaystyle H_{\rm {min}}(\rho )=\max _{\Pi }\log {\frac {1}{\max _{i}\operatorname {tr} (\Pi _{i}\rho )}}=-\max _{\Pi }\log \max _{i}\operatorname {tr} (\Pi _{i}\rho ),}
where we are maximizing over the set of all projective measurements {\displaystyle \Pi =(\Pi _{i})_{i}}, the {\displaystyle \Pi _{i}} represent the measurement outcomes in the POVM formalism, and {\displaystyle \operatorname {tr} (\Pi _{i}\rho )} is therefore the probability of observing the {\displaystyle i}-th outcome when the measurement is {\displaystyle \Pi }.
A more concise method to write the double maximization is to observe that any element of any POVM is a Hermitian operator such that {\displaystyle 0\leq \Pi \leq I}, and thus we can equivalently directly maximize over these to get
{\displaystyle H_{\rm {min}}(\rho )=-\max _{0\leq \Pi \leq I}\log \operatorname {tr} (\Pi \rho ).}
In fact, this maximization can be performed explicitly, and the maximum is obtained when {\displaystyle \Pi } is the projection onto (an eigenvector associated with) the largest eigenvalue of {\displaystyle \rho }. We thus get yet another expression for the min-entropy as:
{\displaystyle H_{\rm {min}}(\rho )=-\log \|\rho \|_{\rm {op}},}
remembering that the operator norm of a Hermitian positive semidefinite operator equals its largest eigenvalue.
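Since the min-entropy of a state is just the negative logarithm of its largest eigenvalue, it is straightforward to compute numerically. The following sketch (using base-2 logarithms, consistent with the expressions below; the helper name is ours) does this with NumPy:

import numpy as np

def quantum_min_entropy(rho):
    # For a density matrix, the operator norm equals the largest
    # eigenvalue, so H_min(rho) = -log2(lambda_max).
    return -np.log2(np.linalg.eigvalsh(rho).max())

print(quantum_min_entropy(np.diag([0.75, 0.25])))  # -log2(0.75) ≈ 0.415
print(quantum_min_entropy(np.eye(2) / 2))          # 1.0, maximally mixed qubit

The maximally mixed qubit attains the largest possible min-entropy for its dimension, since no measurement outcome can be predicted with probability above 1/2.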
== Conditional entropies ==
Let {\displaystyle \rho _{AB}} be a bipartite density operator on the space {\displaystyle {\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}}. The min-entropy of {\displaystyle A} conditioned on {\displaystyle B} is defined to be
{\displaystyle H_{\min }(A|B)_{\rho }\equiv -\inf _{\sigma _{B}}D_{\max }(\rho _{AB}\|I_{A}\otimes \sigma _{B})}
where the infimum ranges over all density operators {\displaystyle \sigma _{B}} on the space {\displaystyle {\mathcal {H}}_{B}}. The measure {\displaystyle D_{\max }} is the maximum relative entropy defined as
{\displaystyle D_{\max }(\rho \|\sigma )=\inf _{\lambda }\{\lambda :\rho \leq 2^{\lambda }\sigma \}}
The smooth min-entropy is defined in terms of the min-entropy.
{\displaystyle H_{\min }^{\epsilon }(A|B)_{\rho }=\sup _{\rho '}H_{\min }(A|B)_{\rho '}}
where the sup and inf range over density operators {\displaystyle \rho '_{AB}} which are {\displaystyle \epsilon }-close to {\displaystyle \rho _{AB}}. This measure of {\displaystyle \epsilon }-closeness is defined in terms of the purified distance
{\displaystyle P(\rho ,\sigma )={\sqrt {1-F(\rho ,\sigma )^{2}}}}
where {\displaystyle F(\rho ,\sigma )} is the fidelity measure.
These quantities can be seen as generalizations of the von Neumann entropy. Indeed, the von Neumann entropy can be expressed as
{\displaystyle S(A|B)_{\rho }=\lim _{\epsilon \to 0}\lim _{n\to \infty }{\frac {1}{n}}H_{\min }^{\epsilon }(A^{n}|B^{n})_{\rho ^{\otimes n}}~.}
This is called the fully quantum asymptotic equipartition theorem.
The smoothed entropies share many interesting properties with the von Neumann entropy. For example, the smooth min-entropy satisfies a data-processing inequality:
{\displaystyle H_{\min }^{\epsilon }(A|B)_{\rho }\geq H_{\min }^{\epsilon }(A|BC)_{\rho }~.}
== Operational interpretation of smoothed min-entropy ==
Henceforth, we shall drop the subscript {\displaystyle \rho } from the min-entropy when it is obvious from the context on what state it is evaluated.
=== Min-entropy as uncertainty about classical information ===
Suppose an agent had access to a quantum system {\displaystyle B} whose state {\displaystyle \rho _{B}^{x}} depends on some classical variable {\displaystyle X}. Furthermore, suppose that each of its elements {\displaystyle x} is distributed according to some distribution {\displaystyle P_{X}(x)}. This can be described by the following state over the system {\displaystyle XB}:
{\displaystyle \rho _{XB}=\sum _{x}P_{X}(x)|x\rangle \langle x|\otimes \rho _{B}^{x},}
where {\displaystyle \{|x\rangle \}} form an orthonormal basis. We would like to know what the agent can learn about the classical variable {\displaystyle x}. Let {\displaystyle p_{g}(X|B)} be the probability that the agent guesses {\displaystyle X} when using an optimal measurement strategy:
{\displaystyle p_{g}(X|B)=\sum _{x}P_{X}(x)\operatorname {tr} (E_{x}\rho _{B}^{x}),}
where {\displaystyle E_{x}} is the POVM that maximizes this expression. It can be shown that this optimum can be expressed in terms of the min-entropy as
{\displaystyle p_{g}(X|B)=2^{-H_{\min }(X|B)}~.}
If the state {\displaystyle \rho _{XB}} is a product state, i.e. {\displaystyle \rho _{XB}=\sigma _{X}\otimes \tau _{B}} for some density operators {\displaystyle \sigma _{X}} and {\displaystyle \tau _{B}}, then there is no correlation between the systems {\displaystyle X} and {\displaystyle B}. In this case, it turns out that
{\displaystyle 2^{-H_{\min }(X|B)}=\max _{x}P_{X}(x)~.}
Since the conditional min-entropy is never greater than the conditional von Neumann entropy, it follows that
{\displaystyle p_{g}(X|B)\geq 2^{-S(A|B)_{\rho }}~.}
==== Min-entropy as overlap with the maximally entangled state ====
The maximally entangled state {\displaystyle |\phi ^{+}\rangle } on a bipartite system {\displaystyle {\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}} is defined as
{\displaystyle |\phi ^{+}\rangle _{AB}={\frac {1}{\sqrt {d}}}\sum _{x}|x_{A}\rangle |x_{B}\rangle }
where the sum runs over a single index {\displaystyle x} pairing the basis vectors, and {\displaystyle \{|x_{A}\rangle \}} and {\displaystyle \{|x_{B}\rangle \}} form orthonormal bases for the spaces {\displaystyle A} and {\displaystyle B} respectively.
For a bipartite quantum state {\displaystyle \rho _{AB}}, we define the maximum overlap with the maximally entangled state as
{\displaystyle q_{c}(A|B)=d_{A}\max _{\mathcal {E}}F\left((I_{A}\otimes {\mathcal {E}})\rho _{AB},|\phi ^{+}\rangle \langle \phi ^{+}|\right)^{2}}
where the maximum is over all CPTP operations {\displaystyle {\mathcal {E}}} and {\displaystyle d_{A}} is the dimension of subsystem {\displaystyle A}. This is a measure of how correlated the state {\displaystyle \rho _{AB}} is. It can be shown that {\displaystyle q_{c}(A|B)=2^{-H_{\min }(A|B)}}. If the information contained in {\displaystyle A} is classical, this reduces to the expression above for the guessing probability.
=== Proof of operational characterization of min-entropy ===
The proof is from a 2008 paper by König, Schaffner and Renner. It involves the machinery of semidefinite programs. Suppose we are given some bipartite density operator {\displaystyle \rho _{AB}}. From the definition of the min-entropy, we have
{\displaystyle H_{\min }(A|B)=-\inf _{\sigma _{B}}\inf _{\lambda }\{\lambda |\rho _{AB}\leq 2^{\lambda }(I_{A}\otimes \sigma _{B})\}~.}
This can be re-written as
{\displaystyle -\log \inf _{\sigma _{B}}\operatorname {Tr} (\sigma _{B})}
subject to the conditions
{\displaystyle {\begin{aligned}\sigma _{B}&\geq 0,\\I_{A}\otimes \sigma _{B}&\geq \rho _{AB}~.\end{aligned}}}
We notice that the infimum is taken over compact sets and hence can be replaced by a minimum. This can then be expressed succinctly as a semidefinite program. Consider the primal problem
{\displaystyle {\begin{cases}{\text{min: }}\operatorname {Tr} (\sigma _{B})\\{\text{subject to: }}I_{A}\otimes \sigma _{B}\geq \rho _{AB}\\\sigma _{B}\geq 0~.\end{cases}}}
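Because this primal problem is a small semidefinite program, the conditional min-entropy can be computed numerically with an off-the-shelf SDP modelling tool. The sketch below is illustrative only; the use of the cvxpy library with the SCS solver, and the helper name, are our assumptions, and logarithms are taken base 2 to match the {\displaystyle 2^{\lambda }} in the definition above:

import numpy as np
import cvxpy as cp

def conditional_min_entropy(rho_AB, dA, dB):
    # Primal SDP: minimize Tr(sigma_B) subject to
    # I_A ⊗ sigma_B >= rho_AB and sigma_B >= 0;
    # then H_min(A|B) = -log2 of the optimal value.
    sigma = cp.Variable((dB, dB), hermitian=True)
    constraints = [cp.kron(np.eye(dA), sigma) >> rho_AB, sigma >> 0]
    problem = cp.Problem(cp.Minimize(cp.real(cp.trace(sigma))), constraints)
    problem.solve(solver=cp.SCS)
    return -np.log2(problem.value)

# Maximally entangled two-qubit state: H_min(A|B) = -log2(dA) = -1.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(conditional_min_entropy(np.outer(phi, phi), 2, 2))  # ≈ -1.0

# Two maximally mixed qubits: H_min(A|B) = H_min(A) = 1.
print(conditional_min_entropy(np.eye(4) / 4, 2, 2))       # ≈ 1.0

Negative values, as for the maximally entangled state, have no classical analogue and signal entanglement between the two systems.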
This primal problem can also be fully specified by the matrices {\displaystyle (\rho _{AB},I_{B},\operatorname {Tr} ^{*})} where {\displaystyle \operatorname {Tr} ^{*}} is the adjoint of the partial trace over {\displaystyle A}. The action of {\displaystyle \operatorname {Tr} ^{*}} on operators on {\displaystyle B} can be written as
{\displaystyle \operatorname {Tr} ^{*}(X)=I_{A}\otimes X~.}
We can express the dual problem as a maximization over operators {\displaystyle E_{AB}} on the space {\displaystyle AB} as
{\displaystyle {\begin{cases}{\text{max: }}\operatorname {Tr} (\rho _{AB}E_{AB})\\{\text{subject to: }}\operatorname {Tr} _{A}(E_{AB})=I_{B}\\E_{AB}\geq 0~.\end{cases}}}
Using the Choi–Jamiołkowski isomorphism, we can define the channel {\displaystyle {\mathcal {E}}} such that
{\displaystyle d_{A}I_{A}\otimes {\mathcal {E}}^{\dagger }(|\phi ^{+}\rangle \langle \phi ^{+}|)=E_{AB}}
where the Bell state is defined over the space {\displaystyle AA'}.
This means that we can express the objective function of the dual problem as
{\displaystyle {\begin{aligned}\langle \rho _{AB},E_{AB}\rangle &=d_{A}\langle \rho _{AB},I_{A}\otimes {\mathcal {E}}^{\dagger }(|\phi ^{+}\rangle \langle \phi ^{+}|)\rangle \\&=d_{A}\langle I_{A}\otimes {\mathcal {E}}(\rho _{AB}),|\phi ^{+}\rangle \langle \phi ^{+}|\rangle \end{aligned}}}
as desired.
Notice that in the event that the system {\displaystyle A} is a partly classical state as above, then the quantity that we are after reduces to
{\displaystyle \max _{\mathcal {E}}\sum _{x}P_{X}(x)\langle x|{\mathcal {E}}(\rho _{B}^{x})|x\rangle ~.}
We can interpret {\displaystyle {\mathcal {E}}} as a guessing strategy, and this then reduces to the interpretation given above where an adversary wants to find the string {\displaystyle x} given access to quantum information via system {\displaystyle B}.
== See also ==
von Neumann entropy
Generalized relative entropy
max-entropy
== References ==
Adaptive differential pulse-code modulation (ADPCM) is a variant of differential pulse-code modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the required data bandwidth for a given signal-to-noise ratio.
Typically, the adaptation to signal statistics in ADPCM consists simply of an adaptive scale factor before quantizing the difference in the DPCM encoder.
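The following toy Python round trip is a deliberately simplified sketch of that loop: predict, quantize the difference with an adaptive step, and keep encoder and decoder state in lockstep. The 2-bit code, initial step and adaptation factors here are illustrative inventions, not the G.726 or IMA specifications.

def adpcm_encode(samples):
    predicted, step = 0, 4
    codes = []
    for s in samples:
        # Quantize the prediction error to a 2-bit code in [-2, 1].
        code = max(-2, min(1, round((s - predicted) / step)))
        codes.append(code)
        predicted += code * step  # track the decoder's reconstruction
        # Adapt: widen the step when the code saturates, narrow it otherwise.
        step = max(1, round(step * (1.5 if code in (1, -2) else 0.75)))
    return codes

def adpcm_decode(codes):
    predicted, step = 0, 4
    out = []
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = max(1, round(step * (1.5 if code in (1, -2) else 0.75)))
    return out

signal = [0, 3, 9, 20, 38, 52, 60, 58, 45, 25]
print(adpcm_decode(adpcm_encode(signal)))  # coarse reconstruction of signal

Because the step size is updated from the transmitted code alone, the decoder reproduces the encoder's adaptation without any side information, which is what lets ADPCM spend fewer bits per sample than plain PCM.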
ADPCM was developed for speech coding by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973.
== In telephony ==
In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic compression systems in which a 13- or 14-bit linear PCM sample number is mapped into an 8-bit value. This system is described by the international standard G.711. Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-bit μ-law (or A-law) PCM samples into a series of 4-bit ADPCM samples. In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard.
ADPCM techniques are used in voice over IP communications. In the early 1990s, ADPCM was also used by the Interactive Multimedia Association to develop the legacy audio codecs ADPCM DVI, IMA ADPCM, and DVI4.
== Split-band or subband ADPCM ==
G.722 is an ITU-T standard wideband speech codec operating at 48, 56 and 64 kbit/s, based on subband coding with two channels and ADPCM coding of each. Before digitization, the analog signal is divided into frequency bands with quadrature mirror filters (QMF), yielding two subbands of the signal. When the ADPCM bitstream of each subband is obtained, the results are multiplexed, and the next step is storage or transmission of the data. The decoder has to perform the reverse process: demultiplex and decode each subband of the bitstream, then recombine them.
In some applications, such as voice coding, the subband that contains the voice is coded with more bits than the others; this reduces the overall data size.
== Software ==
The Windows Sound System supported ADPCM in WAV files.
As of 31 October 2024, FFmpeg includes 50 built-in ADPCM decoders and 16 encoders, some catering to niche purposes. For instance, "ADPCM Westwood Studios IMA" (adpcm_ima_ws) encodes and decodes the audio of the old Command & Conquer video games.
The DSP in the GameCube supports ADPCM encoding on 64 simultaneous audio channels.
== See also ==
Audio coding format
Audio data compression
Pulse-code modulation (PCM)
== References ==
Ultra was the designation adopted by British military intelligence in June 1941 for wartime signals intelligence obtained by breaking high-level encrypted enemy radio and teleprinter communications at the Government Code and Cypher School (GC&CS) at Bletchley Park. Ultra eventually became the standard designation among the western Allies for all such intelligence. The name arose because the intelligence obtained was considered more important than that designated by the highest British security classification then used (Most Secret) and so was regarded as being Ultra Secret. Several other cryptonyms had been used for such intelligence.
The code name "Boniface" was used as a cover name for Ultra. In order to ensure that the successful code-breaking did not become apparent to the Germans, British intelligence created a fictional MI6 master spy, Boniface, who controlled a fictional series of agents throughout Germany. Information obtained through code-breaking was often attributed to the human intelligence from the Boniface network. The U.S. used the codename Magic for its decrypts from Japanese sources, including the "Purple" cipher.
Much of the German cipher traffic was encrypted on the Enigma machine. Used properly, the German military Enigma would have been virtually unbreakable; in practice, shortcomings in operation allowed it to be broken. The term "Ultra" has often been used almost synonymously with "Enigma decrypts". However, Ultra also encompassed decrypts of the German Lorenz SZ 40/42 machines that were used by the German High Command, and the Hagelin machine.
Many observers, at the time and later, regarded Ultra as immensely valuable to the Allies. Winston Churchill was reported to have told King George VI, when presenting to him Stewart Menzies (head of the Secret Intelligence Service and the person who controlled distribution of Ultra decrypts to the government): "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at war's end describing Ultra as having been "decisive" to Allied victory. Sir Harry Hinsley, Bletchley Park veteran and official historian of British Intelligence in World War II, made a similar assessment of Ultra, saying that while the Allies would have won the war without it, "the war would have been something like two years longer, perhaps three years longer, possibly four years longer than it was." However, Hinsley and others have emphasized the difficulties of counterfactual history in attempting such conclusions, and some historians, such as John Keegan, have said the shortening might have been as little as the three months it took the United States to deploy the atomic bomb.
== Sources of intelligence ==
Most Ultra intelligence was derived from reading radio messages that had been encrypted with cipher machines, complemented by material from radio communications using traffic analysis and direction finding. In the early phases of the war, particularly during the eight-month Phoney War, the Germans could transmit most of their messages using land lines and so had no need to use radio. This meant that those at Bletchley Park had some time to build up experience of collecting and starting to decrypt messages on the various radio networks. German Enigma messages were the main source, with those of the German air force (the Luftwaffe) predominating, as it used radio more and its operators were particularly ill-disciplined.
=== German ===
==== Enigma ====
"Enigma" refers to a family of electro-mechanical rotor cipher machines. These produced a polyalphabetic substitution cipher and were widely thought to be unbreakable in the 1920s, when a variant of the commercial Model D was first used by the Reichswehr. The German Army (Heer), Navy, Air Force, Nazi party, Gestapo and German diplomats used Enigma machines in several variants. Abwehr (German military intelligence) used a four-rotor machine without a plugboard and Naval Enigma used different key management from that of the army or air force, making its traffic far more difficult to cryptanalyse; each variant required different cryptanalytic treatment. The commercial versions were not as secure and Dilly Knox of GC&CS is said to have broken one before the war.
German military Enigma was first broken in December 1932 by Marian Rejewski and the Polish Cipher Bureau, using a combination of brilliant mathematics, the services of a spy in the German office responsible for administering encrypted communications, and good luck. The Poles read Enigma until the outbreak of World War II and beyond, in France. At the turn of 1939, the Germans made the systems ten times more complex, which would have required a tenfold increase in Polish decryption equipment, a demand that could not be met. On 25 July 1939, the Polish Cipher Bureau handed reconstructed Enigma machines and their techniques for decrypting ciphers to the French and British. Gordon Welchman wrote,
Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military Enigma machine, and of the operating procedures that were in use.
At Bletchley Park, some of the key people responsible for success against Enigma included mathematicians Alan Turing and Hugh Alexander and, at the British Tabulating Machine Company, chief engineer Harold Keen.
After the war, interrogation of German cryptographic personnel led to the conclusion that German cryptanalysts understood that cryptanalytic attacks against Enigma were possible but were thought to require impracticable amounts of effort and investment. The Poles' early start at breaking Enigma and the continuity of their success gave the Allies an advantage when World War II began.
==== Lorenz cipher ====
In June 1941, the Germans started to introduce on-line stream cipher teleprinter systems for strategic point-to-point radio links, to which the British gave the code-name Fish. Several systems were used, principally the Lorenz SZ 40/42 (codenamed "Tunny" by the British) and Geheimfernschreiber ("Sturgeon"). These cipher systems were cryptanalysed, particularly Tunny, which the British thoroughly penetrated. It was eventually attacked using Colossus machines, which were the first digital programme-controlled electronic computers. In many respects the Tunny work was more difficult than that on Enigma, since the British codebreakers had no knowledge of the machine producing it and no head-start such as the one the Poles had given them against Enigma.
Although the volume of intelligence derived from this system was much smaller than that from Enigma, its importance was often far higher, because it produced primarily high-level, strategic intelligence sent between the Wehrmacht High Command (Oberkommando der Wehrmacht, OKW) and its army commands in the field. The eventual bulk decryption of Lorenz-enciphered messages contributed significantly, and perhaps decisively, to the defeat of Nazi Germany. Nevertheless, the Tunny story has become much less well known among the public than the Enigma one. At Bletchley Park, some of the key people responsible for success in the Tunny effort included mathematicians W. T. "Bill" Tutte and Max Newman and electrical engineer Tommy Flowers.
=== Italian ===
In June 1940, the Italians were using book codes for most of their military messages, except for the Italian Navy, which in early 1941 had started using a version of the Hagelin rotor-based cipher machine C-38. This was broken from June 1941 onwards by the Italian subsection of GC&CS at Bletchley Park.
=== Japanese ===
In the Pacific theatre, a Japanese cipher machine, called "Purple" by the Americans, was used for highest-level Japanese diplomatic traffic. It produced a polyalphabetic substitution cipher, but unlike Enigma, was not a rotor machine, being built around electrical stepping switches. It was broken by the US Army Signal Intelligence Service and disseminated as Magic. Detailed reports by the Japanese ambassador to Germany were encrypted on the Purple machine. His reports included reviews of German assessments of the military situation, reviews of strategy and intentions, reports on direct inspections by the ambassador (in one case, of Normandy beach defences), and reports of long interviews with Hitler. The Japanese are said to have obtained an Enigma machine in 1937, although it is debated whether they were given it by the Germans or bought a commercial version, which, apart from the plugboard and internal wiring, was the German Heer/Luftwaffe machine. Having developed a similar machine, the Japanese did not use the Enigma machine for their most secret communications.
The chief fleet communications code system used by the Imperial Japanese Navy was called JN-25 by the Americans, and by early 1942 the US Navy had made considerable progress in decrypting Japanese naval messages. The US Army also made progress on the Japanese Army's codes in 1943, including codes used by supply ships, resulting in heavy losses to their shipping.
== Distribution ==
Army- and Air Force-related intelligence derived from signals intelligence (SIGINT) sources – mainly Enigma decrypts in Hut 6 – was compiled in summaries at GC&CS (Bletchley Park) Hut 3 and distributed initially under the codeword "BONIFACE", implying that it was acquired from a well-placed agent in Berlin. The volume of the intelligence reports going out to commanders in the field built up gradually.
Naval Enigma decrypted in Hut 8 was forwarded from Hut 4 to the Admiralty's Operational Intelligence Centre (OIC), which distributed it initially under the codeword "HYDRO".
The codeword "ULTRA" was adopted in June 1941. This codeword was reportedly suggested by Commander Geoffrey Colpoys, RN, who served in the Royal Navy's OIC.
=== Army and Air Force ===
The distribution of Ultra information to Allied commanders and units in the field involved considerable risk of discovery by the Germans, and great care was taken to control both the information and knowledge of how it was obtained. Liaison officers were appointed for each field command to manage and control dissemination.
Dissemination of Ultra intelligence to field commanders was carried out by MI6, which operated Special Liaison Units (SLU) attached to major army and air force commands. The activity was organized and supervised on behalf of MI6 by Group Captain F. W. Winterbotham. Each SLU included intelligence, communications, and cryptographic elements. It was headed by a British Army or RAF officer, usually a major, known as "Special Liaison Officer". The main function of the liaison officer or his deputy was to pass Ultra intelligence bulletins to the commander of the command he was attached to, or to other indoctrinated staff officers. In order to safeguard Ultra, special precautions were taken. The standard procedure was for the liaison officer to present the intelligence summary to the recipient, stay with him while he studied it, then take it back and destroy it.
By the end of the war, there were about 40 SLUs serving commands around the world. Fixed SLUs existed at the Admiralty, the War Office, the Air Ministry, RAF Fighter Command, the US Strategic Air Forces in Europe (Wycombe Abbey) and other fixed headquarters in the UK. An SLU was operating at the War HQ in Valletta, Malta. These units had permanent teleprinter links to Bletchley Park.
Mobile SLUs were attached to field army and air force headquarters and depended on radio communications to receive intelligence summaries. The first mobile SLUs appeared during the French campaign of 1940. An SLU supported the British Expeditionary Force (BEF) headed by General Lord Gort. The first liaison officers were Robert Gore-Browne and Humphrey Plowden. A second SLU of the 1940 period was attached to the RAF Advanced Air Striking Force at Meaux commanded by Air Vice-Marshal P H Lyon Playfair. This SLU was commanded by Squadron Leader F.W. "Tubby" Long.
=== Intelligence agencies ===
In 1940, special arrangements were made within the British intelligence services for handling BONIFACE and later Ultra intelligence. The Security Service started "Special Research Unit B1(b)" under Herbert Hart. In the SIS this intelligence was handled by "Section V" based at St Albans.
=== Radio and cryptography ===
The communications system was founded by Brigadier Sir Richard Gambier-Parry, who from 1938 to 1946 was head of MI6 Section VIII, based at Whaddon Hall in Buckinghamshire, UK. Ultra summaries from Bletchley Park were sent over landline to the Section VIII radio transmitter at Windy Ridge. From there they were transmitted to the destination SLUs.
The communications element of each SLU was called a "Special Communications Unit" or SCU. Radio transmitters were constructed at Whaddon Hall workshops, while receivers were the National HRO, made in the USA. The SCUs were highly mobile, and the first such units used civilian Packard cars. The SCUs included: SCU1 (Whaddon Hall), SCU2 (France before 1940, India), SCU3 (RSS Hanslope Park), SCU5, SCU6 (possibly Algiers and Italy), SCU7 (training unit in the UK), SCU8 (Europe after D-day), SCU9 (Europe after D-day), SCU11 (Palestine and India), SCU12 (India), SCU13 and SCU14.
The cryptographic element of each SLU was supplied by the RAF and was based on the TYPEX cryptographic machine and one-time pad systems.
RN Ultra messages from the OIC to ships at sea were necessarily transmitted over normal naval radio circuits and were protected by one-time pad encryption.
=== Lucy ===
It is alleged that Ultra information was used by the "Lucy" spy ring, headquartered in Switzerland and apparently operated by one man, Rudolf Roessler. This was an extremely well informed, responsive ring that was able to get information "directly from German General Staff Headquarters" – often on specific request. It has been alleged that "Lucy" was in major part a conduit for the British to feed Ultra intelligence to the Soviets in a way that made it appear to have come from highly placed espionage rather than from cryptanalysis of German radio traffic. The Soviets, however, through an agent at Bletchley, John Cairncross, knew that Britain had broken Enigma. The "Lucy" ring was initially treated with suspicion by the Soviets. The information it provided was accurate and timely, however, and Soviet agents in Switzerland (including their chief, Alexander Radó) eventually learned to take it seriously. However, the theory that the Lucy ring was a cover for Britain to pass Enigma intelligence to the Soviets has not gained traction. Among others who have rejected the theory, Harry Hinsley, the official historian for the British Secret Services in World War II, stated that "there is no truth in the much-publicized claim that the British authorities made use of the ‘Lucy’ ring ... to forward intelligence to Moscow".
== Use of intelligence ==
Most deciphered messages, often about relative trivia, were insufficient as intelligence reports for military strategists or field commanders. The organisation, interpretation and distribution of decrypted Enigma message traffic and other sources into usable intelligence was a subtle task.
At Bletchley Park, extensive indices were kept of the information in the messages decrypted. For each message the traffic analysis recorded the radio frequency, the date and time of intercept, and the preamble – which contained the network-identifying discriminant, the time of origin of the message, the callsign of the originating and receiving stations, and the indicator setting. This allowed cross referencing of a new message with a previous one. The indices included message preambles, every person, every ship, every unit, every weapon, every technical term and of repeated phrases such as forms of address and other German military jargon that might be usable as cribs.
The first decryption of a wartime Enigma message, albeit one that had been transmitted three months earlier, was achieved by the Poles at PC Bruno on 17 January 1940. Little had been achieved by the start of the Allied campaign in Norway in April. At the start of the Battle of France on 10 May 1940, the Germans made a very significant change in the indicator procedures for Enigma messages. However, the Bletchley Park cryptanalysts had anticipated this, and were able – jointly with PC Bruno – to resume breaking messages from 22 May, although often with some delay. The intelligence that these messages yielded was of little operational use in the fast-moving situation of the German advance.
Decryption of Enigma traffic built up gradually during 1940, with the first two prototype bombes being delivered in March and August. The traffic was almost entirely limited to Luftwaffe messages. By the peak of the Battle of the Mediterranean in 1941, however, Bletchley Park was deciphering 2,000 Italian Hagelin messages daily. By the second half of 1941, 30,000 Enigma messages a month were being deciphered, rising later in the war to 90,000 messages a month of Enigma and Fish decrypts combined.
Some of the contributions that Ultra intelligence made to the Allied successes are given below.
In April 1940, Ultra information provided a detailed picture of the disposition of the German forces, and then their movement orders for the attack on the Low Countries prior to the Battle of France in May.
An Ultra decrypt of June 1940 read KNICKEBEIN KLEVE IST AUF PUNKT 53 GRAD 24 MINUTEN NORD UND EIN GRAD WEST EINGERICHTET ("The Cleves Knickebein is directed at position 53 degrees 24 minutes north and 1 degree west"). This was the definitive piece of evidence that Dr R. V. Jones of scientific intelligence in the Air Ministry needed to show that the Germans were developing a radio guidance system for their bombers. Ultra intelligence then continued to play a vital role in the so-called Battle of the Beams.
During the Battle of Britain, Air Chief Marshal Sir Hugh Dowding, Commander-in-Chief of RAF Fighter Command, had a teleprinter link from Bletchley Park to his headquarters at RAF Bentley Priory, for Ultra reports. Ultra intelligence kept him informed of German strategy, and of the strength and location of various Luftwaffe units, and often provided advance warning of bombing raids (but not of their specific targets). These contributed to the British success. Dowding was bitterly and sometimes unfairly criticized by others who did not see Ultra, but he did not disclose his source.
Decryption of traffic from Luftwaffe radio networks provided a great deal of indirect intelligence about the Germans' planned Operation Sea Lion to invade England in 1940.
On 17 September 1940 an Ultra message reported that equipment at German airfields in Belgium for loading planes with paratroops and their gear was to be dismantled. This was taken as a clear signal that Sea Lion had been cancelled.
Ultra revealed that a major German air raid was planned for the night of 14 November 1940, and indicated three possible targets, including London and Coventry. However, the specific target was not determined until late on the afternoon of 14 November, by detection of the German radio guidance signals. Unfortunately, countermeasures failed to prevent the devastating Coventry Blitz. F. W. Winterbotham claimed that Churchill had advance warning, but intentionally did nothing about the raid, to safeguard Ultra. This claim has been comprehensively refuted by R. V. Jones, Sir David Hunt, Ralph Bennett and Peter Calvocoressi. Ultra warned of a raid but did not reveal the target. Churchill, who had been en route to Ditchley Park, was told that London might be bombed and returned to 10 Downing Street so that he could observe the raid from the Air Ministry roof.
Ultra intelligence considerably aided the British Army's Operation Compass victory over the much larger Italian army in Libya between December 1940 and February 1941.
Ultra intelligence greatly aided the Royal Navy's victory over the Italian navy in the Battle of Cape Matapan in March 1941.
Although the Allies lost the Battle of Crete in May 1941, the Ultra intelligence that a parachute landing was planned, and the exact day of the invasion, meant that heavy losses were inflicted on the Germans and that fewer British troops were captured.
Ultra intelligence fully revealed the preparations for Operation Barbarossa, the German invasion of the USSR. Although this information was passed to the Soviet government, Stalin refused to believe it. The information did, however, help British planning, knowing that substantial German forces were to be deployed to the East.
Ultra intelligence made a very significant contribution in the Battle of the Atlantic. Winston Churchill wrote "The only thing that ever really frightened me during the war was the U-boat peril." The decryption of Enigma signals to the U-boats was much more difficult than those of the Luftwaffe. It was not until June 1941 that Bletchley Park was able to read a significant amount of this traffic contemporaneously. Transatlantic convoys were then diverted away from the U-boat "wolfpacks", and the U-boat supply vessels were sunk. On 1 February 1942, Enigma U-boat traffic became unreadable because of the introduction of a different 4-rotor Enigma machine. This situation persisted until December 1942, although other German naval Enigma messages were still being deciphered, such as those of the U-boat training command at Kiel. From December 1942 to the end of the war, Ultra allowed Allied convoys to evade U-boat patrol lines, and guided Allied anti-submarine forces to the location of U-boats at sea.
In the Western Desert Campaign, Ultra intelligence helped Wavell and Auchinleck to prevent Rommel's forces from reaching Cairo in the autumn of 1941.
Ultra intelligence from Hagelin decrypts, and from Luftwaffe and German naval Enigma decrypts, helped sink about half of the ships supplying the Axis forces in North Africa.
Ultra intelligence from Abwehr transmissions confirmed that Britain's Security Service (MI5) had captured all of the German agents in Britain, and that the Abwehr still believed in the many double agents which MI5 controlled under the Double Cross System. This enabled major deception operations.
Deciphered JN-25 messages allowed the U.S. to turn back a Japanese offensive in the Battle of the Coral Sea in April 1942 and set up the decisive American victory at the Battle of Midway in June 1942.
Ultra contributed very significantly to the monitoring of German developments at Peenemünde and the collection of V-1 and V-2 intelligence from 1942 onwards.
Ultra contributed to Montgomery's victory at the Battle of Alam el Halfa by providing warning of Rommel's planned attack.
Ultra also contributed to the success of Montgomery's offensive in the Second Battle of El Alamein, by providing him (before the battle) with a complete picture of Axis forces, and (during the battle) with Rommel's own action reports to Germany.
Ultra provided evidence that the Allied landings in French North Africa (Operation Torch) were not anticipated.
A JN-25 decrypt of 14 April 1943 provided details of Admiral Yamamoto's forthcoming visit to Balalae Island, and on 18 April, a year to the day following the Doolittle Raid, his aircraft was shot down, killing this man who was regarded as irreplaceable.
Ship position reports in the Japanese Army’s "2468" water transport code, decrypted by the SIS starting in July 1943, helped U.S. submarines and aircraft sink two-thirds of the Japanese merchant marine.
The part played by Ultra intelligence in the preparation for the Allied invasion of Sicily was of unprecedented importance. It provided information as to where the enemy's forces were strongest and that the elaborate strategic deceptions had convinced Hitler and the German high command.
The success of the Battle of North Cape, in which HMS Duke of York sank the German battleship Scharnhorst, was entirely built on prompt deciphering of German naval signals.
US Army Lieutenant Arthur J. Levenson, who worked on both Enigma and Tunny at Bletchley Park, said in a 1980 interview of intelligence from Tunny: "Rommel was appointed Inspector General of the West, and he inspected all the defences along the Normandy beaches and sent a very detailed message that I think was 70,000 characters, and we decrypted it as a small pamphlet. It was a report of the whole Western defences. How wide the V-shaped trenches were to stop tanks, and how much barbed wire. Oh, it was everything, and we decrypted it before D-Day."
Both Enigma and Tunny decrypts showed Germany had been taken in by Operation Bodyguard, the deception operation to protect Operation Overlord. They revealed the Germans did not anticipate the Normandy landings and even after D-Day still believed Normandy was only a feint, with the main invasion to be in the Pas de Calais.
Information that there was a German Panzergrenadier division in the planned dropping zone for the US 101st Airborne Division in Operation Overlord led to a change of location.
Ultra assisted greatly in Operation Cobra.
Ultra warned of the major German counterattack at Mortain, and allowed the Allies to surround the forces at Falaise.
During the Allied advance to Germany, Ultra often provided detailed tactical information, and showed how Hitler ignored the advice of his generals and insisted on German troops fighting in place "to the last man".
Arthur "Bomber" Harris, officer commanding RAF Bomber Command, was not cleared for Ultra. After the invasion of France, with the resumption of the strategic bombing campaign over Germany, Harris remained wedded to area bombardment. Historian Frederick Taylor argues that, as Harris was not cleared for access to Ultra, he was given some information gleaned from Enigma but not the information's source. This affected his attitude about post-D-Day directives to target oil installations, since he did not know that senior Allied commanders were using high-level German sources to assess just how much this was hurting the German war effort; thus Harris tended to see the directives to bomb specific oil and munitions targets as a "panacea" (his word) and a distraction from the real task of making the rubble bounce.
== Safeguarding of sources ==
The Allies were seriously concerned with the prospect of the Axis command finding out that they had broken into the Enigma traffic. The British were more disciplined about such measures than the Americans, and this difference was a source of friction between them.
To disguise the source of the intelligence for the Allied attacks on Axis supply ships bound for North Africa, "spotter" submarines and aircraft were sent to search for Axis ships. These searchers or their radio transmissions were observed by the Axis forces, who concluded their ships were being found by conventional reconnaissance. They suspected that there were some 400 Allied submarines in the Mediterranean and a huge fleet of reconnaissance aircraft on Malta. In fact, there were only 25 submarines and at times as few as three aircraft.
This procedure also helped conceal the intelligence source from Allied personnel, who might give away the secret by careless talk, or under interrogation if captured. Along with the search mission that would find the Axis ships, two or three additional search missions would be sent out to other areas, so that crews would not begin to wonder why a single mission found the Axis ships every time.
Other deceptive means were used. On one occasion, a convoy of five ships sailed from Naples to North Africa with essential supplies at a critical moment in the North African fighting. There was no time to have the ships properly spotted beforehand. The decision to attack solely on Ultra intelligence went directly to Churchill. The ships were all sunk by an attack "out of the blue", arousing German suspicions of a security breach. To distract the Germans from the idea of a signals breach (such as Ultra), the Allies sent a radio message to a fictitious spy in Naples, congratulating him for this success. According to some sources the Germans decrypted this message and believed it.
In the Battle of the Atlantic, the precautions were taken to the extreme. In most cases where the Allies knew from intercepts the location of a U-boat in mid-Atlantic, the U-boat was not attacked immediately, until a "cover story" could be arranged. For example, a search plane might be "fortunate enough" to sight the U-boat, thus explaining the Allied attack.
Some Germans had suspicions that all was not right with Enigma. Admiral Karl Dönitz received reports of "impossible" encounters between U-boats and enemy vessels which made him suspect some compromise of his communications. In one instance, three U-boats met at a tiny island in the Caribbean Sea, and a British destroyer promptly showed up. The U-boats escaped and reported what had happened. Dönitz immediately asked for a review of Enigma's security. The analysis suggested that the signals problem, if there was one, was not due to the Enigma itself. Dönitz had the settings book changed anyway, blacking out Bletchley Park for a period. However, the evidence was never enough to truly convince him that Naval Enigma was being read by the Allies, all the more so since B-Dienst, his own codebreaking group, had partially broken Royal Navy traffic (including its convoy codes early in the war) and supplied enough information to support the idea that the Allies were unable to read Naval Enigma.
By 1945, most German Enigma traffic could be decrypted within a day or two, yet the Germans remained confident of its security.
== Role of women in Allied codebreaking ==
After encryption systems were "broken", there was a large volume of cryptologic work needed to recover daily key settings and keep up with changes in enemy security procedures, plus the more mundane work of processing, translating, indexing, analyzing and distributing tens of thousands of intercepted messages daily. The more successful the code breakers were, the more labor was required. Some 8,000 women worked at Bletchley Park, about three quarters of the work force. Before the attack on Pearl Harbor, the US Navy sent letters to top women's colleges seeking introductions to their best seniors; the Army soon followed suit. By the end of the war, some 7000 workers in the Army Signal Intelligence service, out of a total 10,500, were female. By contrast, the Germans and Japanese had strong ideological objections to women engaging in war work. The Nazis even created a Cross of Honour of the German Mother to encourage women to stay at home and have babies.
== Postwar consequences ==
The mystery surrounding the discovery of the sunken German submarine U-869 off the coast of New Jersey by divers Richie Kohler and John Chatterton was unravelled in part through the analysis of Ultra intercepts, which demonstrated that, although U-869 had been ordered by U-boat Command to change course and proceed to North Africa, near Rabat, the submarine had missed the messages changing her assignment and had continued to the eastern coast of the U.S., her original destination.
In 1953, the CIA's Project ARTICHOKE, a series of experiments on human subjects to develop drugs for use in interrogations, was renamed Project MKUltra. MK was the CIA's designation for its Technical Services Division and Ultra was in reference to the Ultra project.
== Postwar secrecy ==
=== Secrecy and initial silence (1945–1960s) ===
Until the mid-1970s, the thirty-year rule meant that there was no official mention of Bletchley Park. As a result, although codes broken at Bletchley Park played an important role in many operations, its part was absent from the histories of those events. Churchill's series The Second World War mentioned Enigma, but not that it had been broken.
While it is obvious why Britain and the U.S. went to considerable pains to keep Ultra a secret until the end of the war, it has been a matter of some conjecture why Ultra was kept officially secret for 29 years thereafter, until 1974. During that period, the important contributions to the war effort of a great many people remained unknown, and they were unable to share in the glory of what is now recognised as one of the chief reasons the Allies won the war – or, at least, as quickly as they did.
At least three explanations exist as to why Ultra was kept secret so long. Each has plausibility, and all may be true. First, as David Kahn pointed out in his 1974 New York Times review of Winterbotham's The Ultra Secret, after the war, surplus Enigmas and Enigma-like machines were sold to Third World countries, which remained convinced of the security of the remarkable cipher machines. Their traffic was not as secure as they believed, however, which is one reason the British made the machines available.
By the 1970s, newer computer-based ciphers were becoming popular as the world increasingly turned to computerised communications, and the usefulness of Enigma copies (and rotor machines generally) rapidly decreased. Switzerland developed its own version of Enigma, known as NEMA, and used it into the late 1970s, while the United States National Security Agency (NSA) retired the last of its rotor-based encryption systems, the KL-7 series, in the 1980s.
A second explanation relates to a misadventure of one of Churchill's predecessors, Stanley Baldwin, between the World Wars, when he publicly disclosed information from decrypted Soviet communications about the General Strike. This had prompted the Soviets to change their ciphers, leading to a blackout.
The third explanation is given by Winterbotham, who recounts that two weeks after V-E Day, on 25 May 1945, Churchill requested former recipients of Ultra intelligence not to divulge the source or the information that they had received from it, in order that there be neither damage to the future operations of the Secret Service nor any cause for the Axis to blame Ultra for their defeat.
=== Partial disclosures ===
In 1967, Polish military historian Władysław Kozaczuk in his book Bitwa o tajemnice ("Battle for Secrets") first revealed Enigma had been broken by Polish cryptologists before World War II.
Also published in 1967, David Kahn's comprehensive chronicle of the history of cryptography, The Codebreakers, does not mention Bletchley Park, although it does make the claim that Soviet forces were reading Enigma messages by 1942. He also described the 1944 capture of a naval Enigma machine from U-505 and gave the first published hint about the scale, mechanisation and operational importance of the Anglo-American Enigma-breaking operation:
The Allies now read U-boat operational traffic. For they had, more than a year before the theft, succeeded in solving the difficult U-boat systems, and – in one of the finest cryptanalytic achievements of the war – managed to read the intercepts on a current basis. For this, the cryptanalysts needed the help of a mass of machinery that filled two buildings.
Ladislas Farago's 1971 best-seller The Game of the Foxes gave an early garbled version of the myth of the purloined Enigma. According to Farago, it was thanks to a "Polish-Swedish ring [that] the British obtained a working model of the 'Enigma' machine, which the Germans used to encipher their top-secret messages." "It was to pick up one of these machines that Commander Denniston went clandestinely to a secluded Polish castle [!] on the eve of the war. Dilly Knox later solved its keying, exposing all Abwehr signals encoded by this system." "In 1941 [t]he brilliant cryptologist Dillwyn Knox, working at the Government Code & Cypher School at the Bletchley centre of British code-cracking, solved the keying of the Abwehr's Enigma machine."
=== 1970s ===
The 1973 public disclosure of Enigma decryption in the book Enigma by French intelligence officer Gustave Bertrand – which dealt mainly with the Polish and then Franco-Polish efforts before the Invasion of France and before the Ultra program – generated pressure to discuss the rest of the Enigma–Ultra story.
Since it was British and, later, American message-breaking which had been the most extensive, the importance of Enigma decrypts to the prosecution of the war remained unknown despite revelations by the Poles and the French of their early work on breaking the Enigma cipher. This work, which was carried out in the 1930s and continued into the early part of the war, was necessarily uninformed regarding further breakthroughs achieved by the Allies during the balance of the war.
The British ban was finally lifted in 1974, the year in which F. W. Winterbotham, a key participant on the distribution side of the Ultra project, published The Ultra Secret. Winterbotham's book was written from memory; although it was officially sanctioned, he had no access to archives. Public discussion of Bletchley Park's work in the English-speaking world finally became accepted, although some former staff considered themselves bound to silence forever.
Other books, such as Anthony Cave Brown's Bodyguard of Lies and William Stevenson's A Man Called Intrepid, were also being written at this time, and the military historian Harold C. Deutsch regarded Winterbotham's revelations as merely anticipating a number of further revelations to come.
=== Public interest ===
A succession of books by former participants and others followed. The official history of British intelligence in World War II was published in five volumes from 1979 to 1988, and included further details from official sources concerning the availability and employment of Ultra intelligence. It was chiefly edited by Harry Hinsley, with one volume by Michael Howard. There is also a one-volume collection of reminiscences by Ultra veterans, Codebreakers (1993), edited by Hinsley and Alan Stripp.
=== Continued selective secrecy ===
In 2012, Alan Turing's last two papers on Enigma decryption were released to Britain's National Archives. The Departmental Historian at GCHQ stated that the seven decades' delay had been due to their "continuing sensitivity... It wouldn't have been safe to release [them earlier]."
== Historical debates on Ultra ==
=== Holocaust intelligence ===
Historians and Holocaust researchers have tried to establish when the Allies realized the full extent of Nazi-era extermination of Jews, and specifically, the extermination-camp system. In 1999, the U.S. Government passed the Nazi War Crimes Disclosure Act (P.L. 105-246), making it policy to declassify all Nazi war crime documents in their files; this was later amended to include the Japanese Imperial Government. As a result, more than 600 decrypts and translations of intercepted messages were disclosed; NSA historian Robert Hanyok would conclude that Allied communications intelligence, "by itself, could not have provided an early warning to Allied leaders regarding the nature and scope of the Holocaust."
Following Operation Barbarossa, decrypts in August 1941 alerted British authorities to the many massacres in occupied zones of the Soviet Union, including those of Jews, but specifics were not made public for security reasons. Revelations about the concentration camps were gleaned from other sources, and were publicly reported by the Polish government-in-exile, Jan Karski and the WJC offices in Switzerland a year or more later. A decrypted message referring to "Einsatz Reinhard" (the Höfle telegram), from 11 January 1943 may have outlined the system and listed the number of Jews and others gassed at four death camps the previous year, but codebreakers did not understand the meaning of the message. In summer 1944, Arthur Schlesinger, an OSS analyst, interpreted the intelligence as an "incremental increase in persecution rather than ... extermination".
=== Overall effect on the War ===
The existence of Ultra was kept secret for many years after the war. Since the Ultra story was widely disseminated by Winterbotham in his 1974 book The Ultra Secret, historians have altered the historiography of World War II. For example, Andrew Roberts, writing in the 21st century, stated of Montgomery's handling of the Second Battle of El Alamein, "Because he had the invaluable advantage of being able to read [Field Marshal Erwin Rommel's] Enigma communications, Montgomery knew how short the Germans were of men, ammunition, food and above all fuel. When he put Rommel's picture up in his caravan he wanted to be seen to be almost reading his opponent's mind. In fact he was reading his mail." Over time, Ultra has become embedded in the public consciousness and Bletchley Park has become a significant visitor attraction. As stated by historian Thomas Haigh, "The British code-breaking effort of the Second World War, formerly secret, is now one of the most celebrated aspects of modern British history, an inspiring story in which a free society mobilized its intellectual resources against a terrible enemy."
=== Effect on the duration of the War ===
There has been controversy about the influence of Allied Enigma decryption on the course of World War II, with three main views: that without Ultra the outcome of the war would have been different; that without Ultra the Allies would still have won, but the war would have lasted up to two years longer; and that, while useful, Ultra decrypts were largely incidental to the fact and timing of the Allied victory.
An oft-repeated assessment is that decryption of German ciphers advanced the end of the European war by no less than two years. Hinsley, who first made this claim, is typically cited as an authority for the two-year estimate.
Winterbotham's quoting of Eisenhower's "decisive" verdict is part of a letter sent by Eisenhower to Menzies after the conclusion of the European war and later found among his papers at the Eisenhower Presidential Library. It allows a contemporary, documentary view of a leader on Ultra's importance:
July 1945
Dear General Menzies:
I had hoped to be able to pay a visit to Bletchley Park in order to thank you, Sir Edward Travis, and the members of the staff personally for the magnificent service which has been rendered to the Allied cause.
I am very well aware of the immense amount of work and effort which has been involved in the production of the material with which you supplied us. I fully realize also the numerous setbacks and difficulties with which you have had to contend and how you have always, by your supreme efforts, overcome them.
The intelligence which has emanated from you before and during this campaign has been of priceless value to me. It has simplified my task as a commander enormously. It has saved thousands of British and American lives and, in no small way, contributed to the speed with which the enemy was routed and eventually forced to surrender.
I should be very grateful, therefore, if you would express to each and every one of those engaged in this work from me personally my heartfelt admiration and sincere thanks for their very decisive contribution to the Allied war effort.
Sincerely,
Dwight D. Eisenhower
There is wide disagreement about the importance of codebreaking in winning the crucial Battle of the Atlantic. To cite just one example, the historian Max Hastings states that "In 1941 alone, Ultra saved between 1.5 and two million tons of Allied ships from destruction." This would represent a 40 percent to 53 percent reduction, though it is not clear how this extrapolation was made.
Another view is from a history based on the German naval archives written after the war for the British Admiralty by a former U-boat commander and son-in-law of his commander, Grand Admiral Karl Dönitz. His book reports that several times during the war they undertook detailed investigations to see whether their operations were being compromised by broken Enigma ciphers. These investigations were spurred because the Germans had broken the British naval code and found the information useful. Their investigations were negative, and the conclusion was that their defeat "was due firstly to outstanding developments in enemy radar..." The great advance was centimetric radar, developed in a joint British-American venture, which became operational in the spring of 1943. Earlier radar was unable to distinguish U-boat conning towers from the surface of the sea, so it could not even locate U-boats attacking convoys on the surface on moonless nights; thus the surfaced U-boats were almost invisible, while having the additional advantage of being swifter than their prey. The new higher-frequency radar could spot conning towers, and periscopes could even be detected from airplanes. Some idea of the relative effect of cipher-breaking and radar improvement can be obtained from graphs showing the tonnage of merchantmen sunk and the number of U-boats sunk in each month of the Battle of the Atlantic. The graphs cannot be interpreted unambiguously, because it is challenging to factor in many variables such as improvements in cipher-breaking and the numerous other advances in equipment and techniques used to combat U-boats. Nonetheless, the data seem to favor the view of the former U-boat commander – that radar was crucial.
While Ultra certainly affected the course of the Western Front during the war, two factors often argued against Ultra having shortened the overall war by a measure of years are the relatively small role it played in the Eastern Front conflict between Germany and the Soviet Union, and the completely independent development of the U.S.-led Manhattan Project to create the atomic bomb. Author Jeffrey T. Richelson mentions Hinsley's estimate of at least two years, and concludes that "It might be more accurate to say that Ultra helped shorten the war by three months – the interval between the actual end of the war in Europe and the time the United States would have been able to drop an atomic bomb on Hamburg or Berlin – and might have shortened the war by as much as two years had the U.S. atomic bomb program been unsuccessful." Military historian Guy Hartcup analyzes aspects of the question but then simply says, "It is impossible to calculate in terms of months or years how much Ultra shortened the war."
F. W. Winterbotham, the first author to outline the influence of Enigma decryption on the course of World War II, likewise made the earliest contribution to an appreciation of Ultra's postwar influence, which now continues into the 21st century – and not only in the postwar establishment of Britain's GCHQ (Government Communication Headquarters) and the United States' NSA. "Let no one be fooled", Winterbotham admonishes in chapter 3, "by the spate of television films and propaganda which has made the war seem like some great triumphant epic. It was, in fact, a very narrow shave, and the reader may like to ponder [...] whether [...] we might have won [without] Ultra."
Iain Standen, Chief Executive of the Bletchley Park Trust, says of the work done there: "It was crucial to the survival of Britain, and indeed of the West." The Departmental Historian at GCHQ (the Government Communications Headquarters), who identifies himself only as "Tony" but seems to speak authoritatively, says that Ultra was a "major force multiplier. It was the first time that quantities of real-time intelligence became available to the British military."
According to the official historian of British Intelligence, Ultra intelligence shortened the war by two to four years, and without it the outcome of the war would have been uncertain.
=== Contribution to the Cold War ===
Phillip Knightley suggests that Ultra may have contributed to the development of the Cold War. The Soviets received disguised Ultra information, but the existence of Ultra itself was not disclosed by the western Allies. The Soviets, who had clues to Ultra's existence, possibly through Kim Philby, John Cairncross and Anthony Blunt, may thus have felt still more distrustful of their wartime partners.
Debate continues on whether, had postwar political and military leaders been aware of Ultra's role in Allied victory in World War II, these leaders might have been less optimistic about post-World War II military involvements. Christopher Kasparek writes: "Had the... postwar governments of major powers realized ... how Allied victory in World War II had hung by a slender thread first spun by three mathematicians [Rejewski, Różycki, Zygalski] working on Enigma decryption for the general staff of a seemingly negligible power [Poland], they might have been more cautious in picking their own wars." A kindred point concerning postwar American triumphalism is made by British historian Max Hastings, author of Inferno: The World at War, 1939–1945.
== See also ==
Hut 6
Hut 8
Magic (cryptography)
Military intelligence
Signals intelligence in modern history
The Imitation Game
== Notes ==
== References ==
== Sources ==
== Further reading == | Wikipedia/Ultra_(cryptography) |
Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionalities based on the prediction of the samples of the signal. The input can be an analog signal or a digital signal.
If the input is a continuous-time analog signal, it needs to be sampled first so that a discrete-time signal is the input to the DPCM encoder.
Option 1: take the values of two consecutive samples; if they are analog samples, quantize them; calculate the difference between the first one and the next; the output is the difference.
Option 2: instead of taking a difference relative to a previous input sample, take the difference relative to the output of a local model of the decoder process; in this option, the difference can be quantized, which allows a good way to incorporate a controlled loss in the encoding.
Applying either of these two processes eliminates the short-term redundancy (positive correlation of nearby values) of the signal; compression ratios on the order of 2 to 4 can be achieved if the differences are subsequently entropy coded, because the entropy of the difference signal is much smaller than that of the original discrete signal treated as independent samples.
DPCM was invented by C. Chapin Cutler at Bell Labs in 1950; his patent includes both methods.
== Option 1: difference between two consecutive quantized samples ==
The encoder performs the function of differentiation; a quantizer precedes the differencing of adjacent quantized samples; the decoder is an accumulator, which if correctly initialized exactly recovers the quantized signal.
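A minimal sketch of Option 1 in Python (not from Cutler's patent; the uniform quantizer step and the function names are illustrative assumptions): the encoder quantizes and then differences, and the decoder's accumulator recovers the quantized signal exactly.

```python
import numpy as np

def dpcm_encode_open_loop(samples, step=1.0):
    # Uniform quantizer (illustrative), then differences of adjacent quantized samples.
    q = np.round(np.asarray(samples, dtype=float) / step) * step
    return np.diff(q, prepend=0.0)  # first output carries the first quantized sample

def dpcm_decode_open_loop(diffs):
    # The decoder is an accumulator, initialized at 0 to match the encoder's prepend.
    return np.cumsum(diffs)

signal = [0.2, 1.1, 1.9, 2.4, 2.0]
assert np.allclose(dpcm_decode_open_loop(dpcm_encode_open_loop(signal)),
                   np.round(signal))  # exact recovery of the quantized signal
```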
== Option 2: analysis by synthesis ==
The incorporation of the decoder inside the encoder allows quantization of the differences, including nonlinear quantization, in the encoder, as long as an approximate inverse quantizer is used appropriately in the receiver. When the quantizer is uniform, the decoder regenerates the differences implicitly, as in a simple diagram that Cutler showed.
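A companion sketch of Option 2 under the same assumptions (hypothetical names, uniform quantizer): the encoder embeds a copy of the decoder and quantizes the prediction error, so quantization noise does not accumulate in the reconstruction.

```python
def dpcm_encode_closed_loop(samples, step=0.5):
    prediction, out = 0.0, []
    for x in samples:
        eq = step * round((x - prediction) / step)  # quantized prediction error
        out.append(eq)
        prediction += eq  # local decoder: the same accumulator the receiver runs
    return out

def dpcm_decode_closed_loop(eqs):
    recon, acc = [], 0.0
    for eq in eqs:
        acc += eq
        recon.append(acc)  # reconstruction stays within step/2 of each input sample
    return recon
```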
== See also ==
Adaptive differential pulse-code modulation
Delta modulation, a special case of DPCM where the differences eQ[n] are represented with 1 bit as ±Δ
Pulse modulation methods
Delta-sigma modulation
== References == | Wikipedia/Differential_pulse-code_modulation |
The display resolution or display modes of a digital television, computer monitor, or other display device is the number of distinct pixels in each dimension that can be displayed. It can be an ambiguous term especially as the displayed resolution is controlled by different factors in cathode-ray tube (CRT) displays, flat-panel displays (including liquid-crystal displays) and projection displays using fixed picture-element (pixel) arrays.
It is usually quoted as width × height, with the units in pixels: for example, 1024 × 768 means the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as "ten twenty-four by seven sixty-eight" or "ten twenty-four by seven six eight".
One use of the term display resolution applies to fixed-pixel-array displays such as plasma display panels (PDP), liquid-crystal displays (LCD), Digital Light Processing (DLP) projectors, OLED displays, and similar technologies, and is simply the physical number of columns and rows of pixels creating the display (e.g. 1920 × 1080). A consequence of having a fixed-grid display is that, for multi-format video inputs, all displays need a "scaling engine" (a digital video processor that includes a memory array) to match the incoming picture format to the display.
For device displays such as phones, tablets, monitors and televisions, the use of the term display resolution as defined above is a misnomer, though common. The term display resolution is usually used to mean pixel dimensions, the maximum number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: resolution properly refers to the pixel density, the number of pixels per unit distance or area, not the total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI). In analog measurement, if the screen is 10 inches high, then the horizontal resolution is measured across a square 10 inches wide. For television standards, this is typically stated as "lines horizontal resolution, per picture height"; for example, analog NTSC TVs can typically display about 340 lines of "per picture height" horizontal resolution from over-the-air sources, which is equivalent to about 440 total lines of actual picture information from left edge to right edge.
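As a quick worked example of the distinction (the helper function below is ours, not from the article), pixel density in PPI follows directly from the pixel dimensions and the diagonal size in inches, as in the 20-inch 1680 × 1050 case cited under "See also":

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixels along the diagonal divided by the diagonal length in inches.
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1680, 1050, 20.0), 2))  # 99.06
```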
== Background ==
Some commentators also use display resolution to indicate a range of input formats that the display's input electronics will accept and often include formats greater than the screen's native grid size even though they have to be down-scaled to match the screen's parameters (e.g. accepting a 1920 × 1080 input on a display with a native 1366 × 768 pixel array). In the case of television inputs, many manufacturers will take the input and zoom it out to "overscan" the display by as much as 5% so input resolution is not necessarily display resolution.
The eye's perception of display resolution can be affected by a number of factors – see image resolution and optical resolution. One factor is the display screen's rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen's physical aspect ratio and the individual pixels' aspect ratio may not necessarily be the same. An array of 1280 × 720 on a 16:9 display has square pixels, but an array of 1024 × 768 on a 16:9 display has oblong pixels.
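A short sketch (a hypothetical helper) making the square-versus-oblong computation explicit: the pixel aspect ratio is the screen's aspect ratio divided by the pixel array's aspect ratio.

```python
from fractions import Fraction

def pixel_aspect(width_px, height_px, screen_w=16, screen_h=9):
    # Pixel aspect ratio = screen aspect ratio / pixel-array aspect ratio.
    return Fraction(screen_w, screen_h) / Fraction(width_px, height_px)

print(pixel_aspect(1280, 720))  # 1   -> square pixels on a 16:9 screen
print(pixel_aspect(1024, 768))  # 4/3 -> oblong pixels
```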
An example of pixel shape affecting "resolution" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or "sharper". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to "fix" the non-native resolution input into the display's native resolution output.
While some CRT-based displays may use digital video processing that involves image scaling using memory arrays, ultimately "display resolution" in CRT-type displays is affected by different parameters such as spot size and focus, astigmatic effects in the display corners, the color phosphor pitch shadow mask (such as Trinitron) in color displays, and the video bandwidth.
== Aspects ==
=== Overscan and underscan ===
Most television display manufacturers "overscan" the pictures on their displays (CRTs and PDPs, LCDs etc.), so that the effective on-screen picture may be reduced from 720 × 576 (480) to 680 × 550 (450), for example. The size of the invisible area somewhat depends on the display device. Some HD televisions do this as well, to a similar extent.
Computer displays including projectors generally do not overscan although many models (particularly CRT displays) allow it. CRT displays tend to be underscanned in stock configurations, to compensate for the increasing distortions at the corners.
=== Interlaced versus progressive scan ===
Interlaced video (also known as interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.
The European Broadcasting Union has argued against interlaced video in production and broadcasting. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames. Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.
Progressive scanning (alternatively referred to as noninterlaced scanning) is a format of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to interlaced video used in traditional analog television systems where only the odd lines, then the even lines of each frame (each image called a video field) are drawn alternately, so that only half the number of actual image frames are used to produce video.
== Televisions ==
=== Current standards ===
Televisions are of the following resolutions:
Standard-definition television (SDTV):
480i (NTSC-compatible digital standard employing two interlaced fields of 240 lines each)
576i (PAL-compatible digital standard employing two interlaced fields of 288 lines each)
Enhanced-definition television (EDTV):
480p (720 × 480 progressive scan)
576p (720 × 576 progressive scan)
High-definition television (HDTV):
720p (1280 × 720 progressive scan)
1080i (1920 × 1080 split into two interlaced fields of 540 lines)
1080p (1920 × 1080 progressive scan)
Ultra-high-definition television (UHDTV):
4K UHD (3840 × 2160 progressive scan)
8K UHD (7680 × 4320 progressive scan)
== Film industry ==
As far as digital cinematography is concerned, video resolution standards depend first on the frames' aspect ratio in the film stock (which is usually scanned for digital intermediate post-production) and then on the actual point count. Although there is no unique set of standardized sizes, it is commonplace within the motion picture industry to refer to "nK" image "quality", where n is a (small, usually even) integer which translates into a set of actual resolutions, depending on the film format. As a reference, consider that for a 4:3 (around 1.33:1) aspect ratio, which a film frame (no matter its format) is expected to fit horizontally, n is the multiplier of 1024 such that the horizontal resolution is exactly 1024·n points. For example, 2K reference resolution is 2048 × 1536 pixels, whereas 4K reference resolution is 4096 × 3072 pixels. Nevertheless, 2K may also refer to resolutions like 2048 × 1556 (full-aperture), 2048 × 1152 (HDTV, 16:9 aspect ratio) or 2048 × 872 pixels (CinemaScope, 2.35:1 aspect ratio). It is also worth noting that while a frame resolution may be, for example, 3:2 (720 × 480 NTSC), that is not what will be seen on-screen (i.e. 4:3 or 16:9, depending on the intended aspect ratio of the original material).
== Computer monitors ==
Computer monitors have traditionally possessed higher resolutions than most televisions.
=== Evolution of standards ===
Many personal computers introduced in the late 1970s and the 1980s were designed to use television receivers as their display devices, making the resolutions dependent on the television standards in use, including PAL and NTSC. Picture sizes were usually limited to ensure the visibility of all the pixels in the major television standards and the broad range of television sets with varying amounts of overscan. The actual drawable picture area was, therefore, somewhat smaller than the whole screen, and was usually surrounded by a static-colored border. Also, interlace scanning was usually omitted in order to provide more stability to the picture, effectively halving the vertical resolution in the process. 160 × 200, 320 × 200 and 640 × 200 on NTSC were relatively common resolutions in the era (224, 240 or 256 scanlines were also common). In the IBM PC world, these resolutions came to be used by 16-color EGA video cards.
One of the drawbacks of using a classic television was that the computer display resolution was higher than the television could decode. Chroma resolution for NTSC/PAL televisions is bandwidth-limited to a maximum of 1.5 MHz, or approximately 160 pixels wide, which led to blurring of the color for 320- or 640-pixel-wide signals and made text difficult to read. Many users upgraded to higher-quality televisions with S-Video or RGBI inputs that helped eliminate chroma blur and produce more legible displays. The earliest, lowest-cost solution to the chroma problem was offered in the Atari 2600 Video Computer System and the Apple II+, both of which offered the option to disable the color and view a legacy black-and-white signal. On the Commodore 64, GEOS mirrored the Mac OS method of using black-and-white to improve readability.
The 640 × 400i resolution (720 × 480i with borders disabled) was first introduced by home computers such as the Commodore Amiga and, later, Atari Falcon. These computers used interlace to boost the maximum vertical resolution. These modes were only suited to graphics or gaming, as the flickering interlace made reading text in word processor, database, or spreadsheet software difficult. (Modern game consoles solve this problem by pre-filtering the 480i video to a lower resolution. For example, Final Fantasy XII suffers from flicker when the filter is turned off, but stabilizes once filtering is restored. The computers of the 1980s lacked sufficient power to run similar filtering software.)
The advantage of a 720 × 480i overscanned computer was an easy interface with interlaced TV production, leading to the development of Newtek's Video Toaster. This device allowed Amigas to be used for CGI creation in various news departments (example: weather overlays), drama programs such as NBC's seaQuest and The WB's Babylon 5.
In the PC world, the IBM PS/2 VGA (multi-color) on-board graphics chips used a non-interlaced (progressive) 640 × 480 × 16 color resolution that was easier to read and thus more useful for office work. It was the standard resolution from 1990 to around 1996. The standard resolution was 800 × 600 until around 2000. Microsoft Windows XP, released in 2001, was designed to run at 800 × 600 minimum, although it is possible to select the original 640 × 480 in the Advanced Settings window.
Programs designed to mimic older hardware such as Atari, Sega, or Nintendo game consoles (emulators) when attached to multiscan CRTs, routinely use much lower resolutions, such as 160 × 200 or 320 × 400 for greater authenticity, though other emulators have taken advantage of pixelation recognition on circle, square, triangle and other geometric features on a lesser resolution for a more scaled vector rendering. Some emulators, at higher resolutions, can even mimic the aperture grille and shadow masks of CRT monitors.
In 2002, 1024 × 768 eXtended Graphics Array was the most common display resolution. Many web sites and multimedia products were re-designed from the previous 800 × 600 format to the layouts optimized for 1024 × 768.
The availability of inexpensive LCD monitors made the 5∶4 aspect ratio resolution of 1280 × 1024 more popular for desktop usage during the first decade of the 21st century. Many computer users including CAD users, graphic artists and video game players ran their computers at 1600 × 1200 resolution (UXGA) or higher such as 2048 × 1536 QXGA if they had the necessary equipment. Other available resolutions included oversize aspects like 1400 × 1050 SXGA+ and wide aspects like 1280 × 800 WXGA, 1440 × 900 WXGA+, 1680 × 1050 WSXGA+, and 1920 × 1200 WUXGA; monitors built to the 720p and 1080p standard were also not unusual among home media and video game players, due to the perfect screen compatibility with movie and video game releases. A new more-than-HD resolution of 2560 × 1600 WQXGA was released in 30-inch LCD monitors in 2007.
In 2010, 27-inch LCD monitors with the 2560 × 1440 resolution were released by multiple manufacturers, and in 2012, Apple introduced a 2880 × 1800 display on the MacBook Pro. Panels for professional environments, such as medical use and air traffic control, support resolutions up to 4096 × 2160 (or, more relevant for control rooms, 1∶1 2048 × 2048 pixels).
=== Common display resolutions ===
In recent years the 16:9 aspect ratio has become more common in notebook displays, and 1366 × 768 (HD) has become popular for most low-cost notebooks, while 1920 × 1080 (FHD) and higher resolutions are available for more premium notebooks.
When a computer display resolution is set higher than the physical screen resolution (native resolution), some video drivers make the virtual screen scrollable over the physical screen, thus realizing a two-dimensional virtual desktop with its viewport. Most LCD manufacturers do make note of the panel's native resolution, as working in a non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to make the image fit (when using DVI) or insufficient sampling of the analog signal (when using a VGA connector). Few CRT manufacturers will quote the true native resolution, because CRTs are analog in nature and can vary their display from as low as 320 × 200 (emulation of older computers or game consoles) to as high as the internal board will allow, or until the image becomes too detailed for the vacuum tube to recreate (i.e., analog blur). Thus, CRTs provide a variability in resolution that fixed-resolution LCDs cannot provide.
== See also ==
Display aspect ratio
Display size
Pixel density of computer displays – PPI (for example, a 20-inch 1680 × 1050 screen has a PPI of 99.06)
Resolution independence
Ultrawide formats
Video scaler
Widescreen
== References == | Wikipedia/Display_resolution |
The term "information algebra" refers to mathematical techniques of information processing. Classical information theory goes back to Claude Shannon. It is a theory of information transmission, looking at communication and storage. However, it has not been considered so far that information comes from different sources and that it is therefore usually combined. It has furthermore been neglected in classical information theory that one wants to extract those parts out of a piece of information that are relevant to specific questions.
A mathematical phrasing of these operations leads to an algebra of information, describing basic modes of information processing. Such an algebra involves several formalisms of computer science, which seem to be different on the surface: relational databases, multiple systems of formal logic or numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing.
Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras (Kohlas 2003) are two-sorted algebras {\displaystyle (\Phi ,D)}, where {\displaystyle \Phi } is a semigroup, representing combination or aggregation of information, {\displaystyle D} is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question, and a mixed operation represents focusing or extraction of information.
== Information and its operations ==
More precisely, in the two-sorted algebra {\displaystyle (\Phi ,D)}, the following operations are defined:

combination: {\displaystyle \otimes :\Phi \times \Phi \rightarrow \Phi ,\ (\phi ,\psi )\mapsto \phi \otimes \psi }, representing aggregation of information;
focusing: {\displaystyle \Rightarrow :\Phi \times D\rightarrow \Phi ,\ (\phi ,x)\mapsto \phi ^{\Rightarrow x}}, representing extraction of the part of the information relevant to a domain.

Additionally, in {\displaystyle D} the usual lattice operations (meet and join) are defined.
== Axioms and definition ==
The axioms of the two-sorted algebra {\displaystyle (\Phi ,D)}, in addition to the axioms of the lattice {\displaystyle D}, are semigroup, transitivity, combination, idempotency, and support conditions; they are stated in labeled form in the worked-out example below. A two-sorted algebra {\displaystyle (\Phi ,D)} satisfying these axioms is called an Information Algebra.
== Order of information ==
A partial order of information can be introduced by defining {\displaystyle \phi \leq \psi } if {\displaystyle \phi \otimes \psi =\psi }. This means that {\displaystyle \phi } is less informative than {\displaystyle \psi } if it adds no new information to {\displaystyle \psi }. The semigroup {\displaystyle \Phi } is a semilattice relative to this order, i.e. {\displaystyle \phi \otimes \psi =\phi \vee \psi }. Relative to any domain (question) {\displaystyle x\in D}, a partial order can be introduced by defining {\displaystyle \phi \leq _{x}\psi } if {\displaystyle \phi ^{\Rightarrow x}\leq \psi ^{\Rightarrow x}}. It represents the order of the information content of {\displaystyle \phi } and {\displaystyle \psi } relative to the domain (question) {\displaystyle x}.
== Labeled information algebra ==
The pairs {\displaystyle (\phi ,x)}, where {\displaystyle \phi \in \Phi } and {\displaystyle x\in D} are such that {\displaystyle \phi ^{\Rightarrow x}=\phi }, form a labeled Information Algebra. More precisely, in the two-sorted algebra {\displaystyle (\Phi ,D)}, the operations of labeling (assigning to each piece of information its domain), combination, and projection are defined; the relational algebra below is the prototypical example.
== Models of information algebras ==
Here follows an incomplete list of instances of information algebras:
Relational algebra: The reduct of a relational algebra with natural join as combination and the usual projection is a labeled information algebra, see Example.
Constraint systems: Constraints form an information algebra (Jaffar & Maher 1994).
Semiring valued algebras: C-semirings induce information algebras (Bistarelli, Montanari & Rossi 1997); (Bistarelli et al. 1999); (Kohlas & Wilson 2006).
Logic: Many logic systems induce information algebras (Wilson & Mengin 1999). Reducts of cylindric algebras (Henkin, Monk & Tarski 1971) or polyadic algebras are information algebras related to predicate logic (Halmos 2000).
Module algebras: (Bergstra, Heering & Klint 1990);(de Lavalette 1992).
Linear systems: Systems of linear equations or linear inequalities induce information algebras (Kohlas 2003).
=== Worked-out example: relational algebra ===
Let {\displaystyle {\mathcal {A}}} be a set of symbols, called attributes (or column names). For each {\displaystyle \alpha \in {\mathcal {A}}} let {\displaystyle U_{\alpha }} be a non-empty set, the set of all possible values of the attribute {\displaystyle \alpha }. For example, if {\displaystyle {\mathcal {A}}=\{{\texttt {name}},{\texttt {age}},{\texttt {income}}\}}, then {\displaystyle U_{\texttt {name}}} could be the set of strings, whereas {\displaystyle U_{\texttt {age}}} and {\displaystyle U_{\texttt {income}}} are both the set of non-negative integers.
Let {\displaystyle x\subseteq {\mathcal {A}}}. An {\displaystyle x}-tuple is a function {\displaystyle f} such that {\displaystyle {\hbox{dom}}(f)=x} and {\displaystyle f(\alpha )\in U_{\alpha }} for each {\displaystyle \alpha \in x}. The set of all {\displaystyle x}-tuples is denoted by {\displaystyle E_{x}}. For an {\displaystyle x}-tuple {\displaystyle f} and a subset {\displaystyle y\subseteq x}, the restriction {\displaystyle f[y]} is defined to be the {\displaystyle y}-tuple {\displaystyle g} such that {\displaystyle g(\alpha )=f(\alpha )} for all {\displaystyle \alpha \in y}.
A relation {\displaystyle R} over {\displaystyle x} is a set of {\displaystyle x}-tuples, i.e. a subset of {\displaystyle E_{x}}. The set of attributes {\displaystyle x} is called the domain of {\displaystyle R} and denoted by {\displaystyle d(R)}. For {\displaystyle y\subseteq d(R)} the projection of {\displaystyle R} onto {\displaystyle y} is defined as follows:

{\displaystyle \pi _{y}(R):=\{f[y]\mid f\in R\}.}
The join of a relation {\displaystyle R} over {\displaystyle x} and a relation {\displaystyle S} over {\displaystyle y} is defined as follows:

{\displaystyle R\bowtie S:=\{f\mid f\ (x\cup y){\hbox{-tuple}},\ f[x]\in R,\;f[y]\in S\}.}
As an example, let {\displaystyle R} and {\displaystyle S} be the following relations:

{\displaystyle R={\begin{matrix}{\texttt {name}}&{\texttt {age}}\\{\texttt {A}}&{\texttt {34}}\\{\texttt {B}}&{\texttt {47}}\\\end{matrix}}\qquad S={\begin{matrix}{\texttt {name}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {32'000}}\\\end{matrix}}}

Then the join of {\displaystyle R} and {\displaystyle S} is:

{\displaystyle R\bowtie S={\begin{matrix}{\texttt {name}}&{\texttt {age}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {34}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {47}}&{\texttt {32'000}}\\\end{matrix}}}
A relational database with natural join {\displaystyle \bowtie } as combination and the usual projection {\displaystyle \pi } is an information algebra. The operations are well defined since

{\displaystyle d(R\bowtie S)=d(R)\cup d(S)}

and, if {\displaystyle x\subseteq d(R)}, then {\displaystyle d(\pi _{x}(R))=x}.
It is easy to see that relational databases satisfy the axioms of a labeled information algebra:

semigroup: {\displaystyle (R_{1}\bowtie R_{2})\bowtie R_{3}=R_{1}\bowtie (R_{2}\bowtie R_{3})} and {\displaystyle R\bowtie S=S\bowtie R}
transitivity: if {\displaystyle x\subseteq y\subseteq d(R)}, then {\displaystyle \pi _{x}(\pi _{y}(R))=\pi _{x}(R)}.
combination: if {\displaystyle d(R)=x} and {\displaystyle d(S)=y}, then {\displaystyle \pi _{x}(R\bowtie S)=R\bowtie \pi _{x\cap y}(S)}.
idempotency: if {\displaystyle x\subseteq d(R)}, then {\displaystyle R\bowtie \pi _{x}(R)=R}.
support: if {\displaystyle x=d(R)}, then {\displaystyle \pi _{x}(R)=R}.
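The example can be made executable. Below is a minimal sketch in Python (our own encoding, not from the cited literature): a relation is a pair of a domain and a set of tuples, a tuple is a frozenset of (attribute, value) pairs, and the combination and projection operations plus two of the properties above are checked directly.

```python
from itertools import product

def join(Rx, Sy):
    """Natural join (combination): all (x ∪ y)-tuples agreeing with both relations."""
    (x, R), (y, S) = Rx, Sy
    out = set()
    for f, g in product(R, S):
        fd, gd = dict(f), dict(g)
        if all(fd[a] == gd[a] for a in x & y):  # tuples agree on shared attributes
            out.add(frozenset({**fd, **gd}.items()))
    return (x | y, out)

def project(Rx, y):
    """Projection (focusing) onto a subdomain y of d(R)."""
    x, R = Rx
    return (y, {frozenset((a, v) for a, v in f if a in y) for f in R})

R = ({'name', 'age'},
     {frozenset({('name', 'A'), ('age', 34)}),
      frozenset({('name', 'B'), ('age', 47)})})
S = ({'name', 'income'},
     {frozenset({('name', 'A'), ('income', 20000)}),
      frozenset({('name', 'B'), ('income', 32000)})})

assert join(R, S)[0] == {'name', 'age', 'income'}  # d(R ⋈ S) = d(R) ∪ d(S)
assert join(R, project(R, {'name'})) == R          # idempotency axiom
```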
== Connections ==
Valuation algebras
Dropping the idempotency axiom leads to valuation algebras. These axioms have been introduced by (Shenoy & Shafer 1990) to generalize local computation schemes (Lauritzen & Spiegelhalter 1988) from Bayesian networks to more general formalisms, including belief function, possibility potentials, etc. (Kohlas & Shenoy 2000). For a book-length exposition on the topic see Pouly & Kohlas (2011).
Domains and information systems
Compact Information Algebras (Kohlas 2003) are related to Scott domains and Scott information systems (Scott 1970);(Scott 1982);(Larsen & Winskel 1984).
Uncertain information
Random variables with values in information algebras represent probabilistic argumentation systems (Haenni, Kohlas & Lehmann 2000).
Semantic information
Information algebras introduce semantics by relating information to questions through focusing and combination (Groenendijk & Stokhof 1984);(Floridi 2004).
Information flow
Information algebras are related to information flow, in particular classifications (Barwise & Seligman 1997).
Tree decomposition
Inference problems over information algebras can be organized into a hierarchical tree structure and thereby decomposed into smaller subproblems.
Semigroup theory
...
Compositional models
Such models may be defined within the framework of information algebras: https://arxiv.org/abs/1612.02587
Extended axiomatic foundations of information and valuation algebras
The concept of conditional independence is basic for information algebras; a new axiomatic foundation of information algebras, based on conditional independence and extending the old one (see above), is available: https://arxiv.org/abs/1701.02658
== Historical Roots ==
The axioms for information algebras are derived from the axiom system proposed in (Shenoy & Shafer, 1990); see also (Shafer, 1991).
== References ==
Barwise, J.; Seligman, J. (1997), Information Flow: The Logic of Distributed Systems, Cambridge U.K.: Number 44 in Cambridge Tracts in Theoretical Computer Science, Cambridge University Press
Bergstra, J.A.; Heering, J.; Klint, P. (1990), "Module algebra", Journal of the ACM, 37 (2): 335–372, doi:10.1145/77600.77621, S2CID 7910431
Bistarelli, S.; Fargier, H.; Montanari, U.; Rossi, F.; Schiex, T.; Verfaillie, G. (1999), "Semiring-based CSPs and valued CSPs: Frameworks, properties, and comparison", Constraints, 4 (3): 199–240, doi:10.1023/A:1026441215081, S2CID 17232456, archived from the original on March 10, 2022
Bistarelli, Stefano; Montanari, Ugo; Rossi, Francesca (1997), "Semiring-based constraint satisfaction and optimization", Journal of the ACM, 44 (2): 201–236, CiteSeerX 10.1.1.45.5110, doi:10.1145/256303.256306, S2CID 4003767
de Lavalette, Gerard R. Renardel (1992), "Logical semantics of modularisation", in Egon Börger; Gerhard Jäger; Hans Kleine Büning; Michael M. Richter (eds.), CSL: 5th Workshop on Computer Science Logic, Volume 626 of Lecture Notes in Computer Science, Springer, pp. 306–315, ISBN 978-3-540-55789-0
Floridi, Luciano (2004), "Outline of a theory of strongly semantic information" (PDF), Minds and Machines, 14 (2): 197–221, doi:10.1023/b:mind.0000021684.50925.c9, S2CID 3058065
Groenendijk, J.; Stokhof, M. (1984), Studies on the Semantics of Questions and the Pragmatics of Answers, PhD thesis, Universiteit van Amsterdam
Haenni, R.; Kohlas, J.; Lehmann, N. (2000), "Probabilistic argumentation systems" (PDF), in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Dordrecht: Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Kluwer, pp. 221–287, archived from the original on January 25, 2005
Halmos, Paul R. (2000), "An autobiography of polyadic algebras", Logic Journal of the IGPL, 8 (4): 383–392, doi:10.1093/jigpal/8.4.383, S2CID 36156234
Henkin, L.; Monk, J. D.; Tarski, A. (1971), Cylindric Algebras, Amsterdam: North-Holland, ISBN 978-0-7204-2043-2
Jaffar, J.; Maher, M. J. (1994), "Constraint logic programming: A survey", Journal of Logic Programming, 19/20: 503–581, doi:10.1016/0743-1066(94)90033-7
Kohlas, J. (2003), Information Algebras: Generic Structures for Inference, Springer-Verlag, ISBN 978-1-85233-689-9
Kohlas, J.; Shenoy, P.P. (2000), "Computation in valuation algebras", in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Dordrecht: Kluwer, pp. 5–39
Kohlas, J.; Wilson, N. (2006), Exact and approximate local computation in semiring-induced valuation algebras (PDF), Technical Report 06-06, Department of Informatics, University of Fribourg, archived from the original on September 24, 2006
Larsen, K. G.; Winskel, G. (1984), "Using information systems to solve recursive domain equations effectively", in Gilles Kahn; David B. MacQueen; Gordon D. Plotkin (eds.), Semantics of Data Types, International Symposium, Sophia-Antipolis, France, June 27–29, 1984, Proceedings, vol. 173 of Lecture Notes in Computer Science, Berlin: Springer, pp. 109–129
Lauritzen, S. L.; Spiegelhalter, D. J. (1988), "Local computations with probabilities on graphical structures and their application to expert systems", Journal of the Royal Statistical Society, Series B, 50 (2): 157–224, doi:10.1111/j.2517-6161.1988.tb01721.x
Pouly, Marc; Kohlas, Jürg (2011), Generic Inference: A Unifying Theory for Automated Reasoning, John Wiley & Sons, ISBN 978-1-118-01086-0
Scott, Dana S. (1970), Outline of a mathematical theory of computation, Technical Monograph PRG–2, Oxford University Computing Laboratory, Programming Research Group
Scott, D.S. (1982), "Domains for denotational semantics", in M. Nielsen; E.M. Schmitt (eds.), Automata, Languages and Programming, Springer, pp. 577–613
Shafer, G. (1991), An axiomatic study of computation in hypertrees, Working Paper 232, School of Business, University of Kansas
Shenoy, P. P.; Shafer, G. (1990). "Axioms for probability and belief-function propagation". In Ross D. Shachter; Tod S. Levitt; Laveen N. Kanal; John F. Lemmer (eds.). Uncertainty in Artificial Intelligence 4. Vol. 9. Amsterdam: Elsevier. pp. 169–198. doi:10.1016/B978-0-444-88650-7.50019-6. hdl:1808/144. ISBN 978-0-444-88650-7.
Wilson, Nic; Mengin, Jérôme (1999), "Logical deduction using the local computation framework", in Anthony Hunter; Simon Parsons (eds.), Symbolic and Quantitative Approaches to Reasoning and Uncertainty, European Conference, ECSQARU'99, London, UK, July 5–9, 1999, Proceedings, volume 1638 of Lecture Notes in Computer Science, Springer, pp. 386–396, ISBN 978-3-540-66131-3 | Wikipedia/Information_algebra |
Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance.
== Kelly Betting ==
Kelly betting or proportional betting is an application of information theory to investing and gambling. Its discoverer was John Larry Kelly, Jr.
Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies."
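A small numerical sketch of this point (a hypothetical even-odds bet with win probability 0.6; the helper function is ours, not Kelly's notation): maximizing expected profit would stake everything, while maximizing the expected logarithm of capital selects a finite fraction of it.

```python
import numpy as np

def expected_log_growth(f, p):
    # E[log2 of capital growth] per bet when staking fraction f at even odds
    return p * np.log2(1 + f) + (1 - p) * np.log2(1 - f)

p = 0.6
fractions = np.linspace(0.0, 0.99, 991)
best = fractions[np.argmax(expected_log_growth(fractions, p))]
print(round(best, 2))  # ≈ 0.20, i.e. f* = 2p − 1: a fraction, not an all-in bet
```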
=== Side information ===
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the bettable event:
{\displaystyle {\begin{aligned}I(X;Y)&=\mathbb {E} _{Y}\{D_{\mathrm {KL} }{\big (}P(X|Y)\|P(X|I){\big )}\}\\&=\mathbb {E} _{Y}\{D_{\mathrm {KL} }{\big (}P(X|{\textrm {side}}\ {\textrm {information}}\ Y)\|P(X|{\textrm {stated}}\ {\textrm {odds}}\ I){\big )}\},\end{aligned}}}
where Y is the side information, X is the outcome of the bettable event, and I is the state of the bookmaker's knowledge. This is the average Kullback–Leibler divergence, or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds, on X. Notice that the expectation is taken over Y rather than X: we need to evaluate how accurate, in the long term, our side information Y is before we start betting real money on X. This is a straightforward application of Bayesian inference. Note that the side information Y might affect not just our knowledge of the event X but also the event itself. For example, Y might be a horse that had too many oats or not enough water. The same mathematics applies in this case, because from the bookmaker's point of view, the occasional race fixing is already taken into account when he makes his odds.
The nature of side information is extremely finicky. We have already seen that it can affect the actual event as well as our knowledge of the outcome. Suppose we have an informer, who tells us that a certain horse is going to win. We certainly do not want to bet all our money on that horse just upon a rumor: that informer may be betting on another horse, and may be spreading rumors just so he can get better odds himself. Instead, as we have indicated, we need to evaluate our side information in the long term to see how it correlates with the outcomes of the races. This way we can determine exactly how reliable our informer is, and place our bets precisely to maximize the expected logarithm of our capital according to the Kelly criterion. Even if our informer is lying to us, we can still profit from his lies if we can find some reverse correlation between his tips and the actual race results.
=== Doubling rate ===
Doubling rate in gambling on a horse race is

{\displaystyle W(b,p)=\mathbb {E} [\log _{2}S(X)]=\sum _{i=1}^{m}p_{i}\log _{2}b_{i}o_{i}}

where there are {\displaystyle m} horses, the probability of the {\displaystyle i}th horse winning being {\displaystyle p_{i}}, the proportion of wealth bet on the horse being {\displaystyle b_{i}}, and the odds (payoff) being {\displaystyle o_{i}} (e.g., {\displaystyle o_{i}=2} if the {\displaystyle i}th horse winning pays double the amount bet). This quantity is maximized by proportional (Kelly) gambling: {\displaystyle b=p}, for which

{\displaystyle \max _{b}W(b,p)=\sum _{i}p_{i}\log _{2}o_{i}-H(p)}

where {\displaystyle H(p)} is the information entropy.
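A quick sketch of the doubling rate (illustrative probabilities and payoffs; the function name is ours). With fair odds {\displaystyle o_{i}=1/p_{i}}, the Kelly bet {\displaystyle b=p} gives {\displaystyle W=0}, matching the closed form above since {\displaystyle \sum _{i}p_{i}\log _{2}o_{i}=H(p)} in that case.

```python
import numpy as np

def doubling_rate(b, p, o):
    # W(b, p) = sum_i p_i * log2(b_i * o_i)
    b, p, o = map(np.asarray, (b, p, o))
    return float(np.sum(p * np.log2(b * o)))

p = np.array([0.5, 0.25, 0.25])  # win probabilities (illustrative)
o = 1 / p                        # fair odds for these probabilities
print(doubling_rate(p, p, o))    # 0.0: the entropy term exactly cancels here
```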
=== Expected gains ===
An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly):
{\displaystyle \mathbb {E} \log K_{t}=\log K_{0}+\sum _{i=1}^{t}H_{i}}

for an optimal betting strategy, where {\displaystyle K_{0}} is the initial capital, {\displaystyle K_{t}} is the capital after the tth bet, and {\displaystyle H_{i}} is the amount of side information obtained concerning the ith bet (in particular, the mutual information relative to the outcome of each bettable event).
This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: in a game with negative expected value, the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin, which is known as the gambler's ruin scenario. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin.
This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce).
== Applications for self-information ==
The logarithmic probability measure self-information or surprisal, whose average is information entropy/uncertainty and whose average difference is KL-divergence, has applications to odds-analysis all by itself. Its two primary strengths are that surprisals: (i) reduce minuscule probabilities to numbers of manageable size, and (ii) add whenever probabilities multiply.
For example, one might say that "the number of states equals two to the number of bits", i.e. #states = 2^#bits. Here the quantity that's measured in bits is the logarithmic information measure mentioned above. Hence there are N bits of surprisal in landing all heads on one's first toss of N coins.
The additive nature of surprisals, and one's ability to get a feel for their meaning with a handful of coins, can help one put improbable events (like winning the lottery, or having an accident) into context. For example if one out of 17 million tickets is a winner, then the surprisal of winning from a single random selection is about 24 bits. Tossing 24 coins a few times might give you a feel for the surprisal of getting all heads on the first try.
The additive nature of this measure also comes in handy when weighing alternatives. For example, imagine that the surprisal of harm from a vaccination is 20 bits. If the surprisal of catching a disease without it is 16 bits, but the surprisal of harm from the disease if you catch it is 2 bits, then the surprisal of harm from NOT getting the vaccination is only 16+2=18 bits. Whether or not you decide to get the vaccination (e.g. the monetary cost of paying for it is not included in this discussion), you can in that way at least take responsibility for a decision informed to the fact that not getting the vaccination involves more than one bit of additional risk.
More generally, one can relate probability p to bits of surprisal sbits as probability = 1/2^sbits. As suggested above, this is mainly useful with small probabilities. However, Jaynes pointed out that with true-false assertions one can also define bits of evidence ebits as the surprisal against minus the surprisal for. This evidence in bits relates simply to the odds ratio = p/(1−p) = 2^ebits, and has advantages similar to those of self-information itself.
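A tiny sketch of these conversions (the helper name is ours), reproducing the lottery figure and the additive vaccination comparison above:

```python
import math

def surprisal_bits(p):
    return -math.log2(p)  # bits of surprisal for an event of probability p

print(round(surprisal_bits(1 / 17_000_000), 1))  # ≈ 24.0 bits: the lottery example

harm_if_vaccinated   = 20       # bits (given above)
harm_if_unvaccinated = 16 + 2   # catch disease + harm given disease: surprisals add
print(harm_if_vaccinated - harm_if_unvaccinated)  # 2 bits: not vaccinating is riskier
```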
== Applications in games of chance ==
Information theory can be thought of as a way of quantifying information so as to make the best decision in the face of imperfect information. That is, how to make the best decision using only the information you have available. The point of betting is to rationally assess all relevant variables of an uncertain game/race/match, compare them to the bookmaker's assessments, which usually come in the form of odds or spreads, and place the proper bet if the assessments differ sufficiently. The area of gambling where this has the most use is sports betting. Sports handicapping lends itself to information theory extremely well because of the availability of statistics. For many years noted economists have tested different mathematical theories using sports as their laboratory, with vastly differing results.
One theory regarding sports betting is that it is a random walk. A random walk is a scenario where new information, prices, and returns fluctuate by chance; this is part of the efficient-market hypothesis. The underlying belief of the efficient-market hypothesis is that the market will always make adjustments for any new information. Therefore, no one can beat the market, because everyone is trading on the same information to which the market has already adjusted. However, according to Fama, for a market to be efficient three qualities need to be met:
There are no transaction costs in trading securities
All available information is costlessly available to all market participants
All agree on the implications of the current information for the current price and distributions of future prices of each security
Statisticians have shown that it is the third condition which allows information theory to be useful in sports handicapping. When not everyone agrees on how information will affect the outcome of an event, differing opinions arise.
== See also ==
Principle of indifference
Statistical association football predictions
Advanced NFL Stats
== References ==
== External links ==
Statistical analysis in sports handicapping models
DVOA as an explanatory variable | Wikipedia/Gambling_and_information_theory |
In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. It is closely related to functionalism, a broader theory that defines mental states by what they do rather than what they are made of.
Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1960 and 1961, and then developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. It was later criticized in the 1990s by Putnam himself, John Searle, and others.
The computational theory of mind holds that the human mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation. For example, the appropriate computation could be implemented either by silicon chips or biological neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. CTM therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system.
Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity. In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics.
Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the computational theory of cognition (CTC) states that neural computations explain cognition. The computational theory of mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. That is to say, CTM entails CTC. While phenomenal consciousness could fulfill some other functional role, computational theory of cognition leaves open the possibility that some aspects of the mind could be non-computational. CTC, therefore, provides an important explanatory framework for understanding neural networks, while avoiding counter-arguments that center around phenomenal consciousness.
== "Computer metaphor" ==
Computational theory of mind is not the same as the computer metaphor, comparing the mind to a modern-day digital computer. Computational theory just uses some of the same principles as those found in digital computing. While the computer metaphor draws an analogy between the mind as software and the brain as hardware, CTM is the claim that the mind is a computational system. More specifically, it states that a computational simulation of a mind is sufficient for the actual presence of a mind, and that a mind truly can be simulated computationally.
'Computational system' is not meant to mean a modern-day electronic computer. Rather, a computational system is a symbol manipulator that follows step-by-step functions to compute input and form output. Alan Turing describes this type of computer in his concept of a Turing machine.
== Criticism ==
A range of arguments have been proposed against physicalist conceptions used in computational theories of mind.
An early, though indirect, criticism of the computational theory of mind comes from philosopher John Searle. In his thought experiment known as the Chinese room, Searle attempts to refute the claims that artificially intelligent agents can be said to have intentionality and understanding and that these systems, because they can be said to be minds themselves, are sufficient for the study of the human mind. Searle asks us to imagine that there is a man in a room with no way of communicating with anyone or anything outside of the room except for a piece of paper with symbols written on it that is passed under the door. With the paper, the man is to use a series of provided rule books to return paper containing different symbols. Unknown to the man in the room, these symbols are of a Chinese language, and this process generates a conversation that a Chinese speaker outside of the room can actually understand. Searle contends that the man in the room does not understand the Chinese conversation. This is essentially what the computational theory of mind presents us—a model in which the mind simply decodes symbols and outputs more symbols. Searle argues that this is not real understanding or intentionality. This was originally written as a repudiation of the idea that computers work like minds.
Searle has further raised questions about what exactly constitutes a computation:
the wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain.
Objections like Searle's might be called insufficiency objections. They claim that computational theories of mind fail because computation is insufficient to account for some capacity of the mind. Arguments from qualia, such as Frank Jackson's knowledge argument, can be understood as objections to computational theories of mind in this way—though they take aim at physicalist conceptions of the mind in general, and not computational theories specifically.
There are also objections which are directly tailored for computational theories of mind.
Jerry Fodor himself argues that the mind is still a very long way from having been explained by the computational theory of mind. The main reason for this shortcoming is that most cognition is abductive and global, hence sensitive to all possibly relevant background beliefs to (dis)confirm a belief. This creates, among other problems, the frame problem for the computational theory, because the relevance of a belief is not one of its local, syntactic properties but context-dependent.
Putnam himself (see in particular Representation and Reality and the first part of Renewing Philosophy) became a prominent critic of computationalism for a variety of reasons, including ones related to Searle's Chinese room arguments, questions of word–world reference relations, and thoughts about the mind-body problem. Regarding functionalism in particular, Putnam has claimed along lines similar to, but more general than Searle's arguments, that the question of whether the human mind can implement computational states is not relevant to the question of the nature of mind, because "every ordinary open system realizes every abstract finite automaton." Computationalists have responded by aiming to develop criteria describing what exactly counts as an implementation.
Roger Penrose has proposed the idea that the human mind does not use a knowably sound calculation procedure to understand and discover mathematical intricacies. This would mean that a normal Turing complete computer would not be able to ascertain certain mathematical truths that human minds can. However, the application of Gödel's theorem by Penrose to demonstrate it was widely criticized, and is considered erroneous.
=== Pancomputationalism ===
CTM raises a question that remains a subject of debate: what does it take for a physical system (such as a mind, or an artificial computer) to perform computations? A very straightforward account is based on a simple mapping between abstract mathematical computations and physical systems: a system performs computation C if and only if there is a mapping between a sequence of states individuated by C and a sequence of states individuated by a physical description of the system.
Putnam (1988) and Searle (1992) argue that this simple mapping account (SMA) trivializes the empirical import of computational descriptions. As Putnam put it, "everything is a Probabilistic Automaton under some Description". Even rocks, walls, and buckets of water—contrary to appearances—are computing systems. Gualtiero Piccinini identifies different versions of Pancomputationalism.
In response to the trivialization criticism, and to restrict SMA, philosophers of mind have offered different accounts of computational systems. These typically include the causal account, the semantic account, the syntactic account, and the mechanistic account. Where the semantic account imposes a semantic restriction, the syntactic account imposes a syntactic one. The mechanistic account was first introduced by Gualtiero Piccinini in 2007.
== Notable theorists ==
Daniel Dennett proposed the multiple drafts model, in which consciousness seems linear but is actually blurry and gappy, distributed over space and time in the brain. Consciousness is the computation; there is no extra step in which one becomes conscious of the computation.
Jerry Fodor argues that mental states, such as beliefs and desires, are relations between individuals and mental representations. He maintains that these representations can only be correctly explained in terms of a language of thought (LOT) in the mind. Further, this language of thought itself is codified in the brain, not just a useful explanatory tool. Fodor adheres to a species of functionalism, maintaining that thinking and other mental processes consist primarily of computations operating on the syntax of the representations that make up the language of thought. In later work (Concepts and The Elm and the Expert), Fodor has refined and even questioned some of his original computationalist views, and adopted LOT2, a highly modified version of LOT.
David Marr proposed that cognitive processes have three levels of description: the computational level, which describes the computational problem solved by the cognitive process; the algorithmic level, which presents the algorithm used for computing the problem postulated at the computational level; and the implementational level, which describes the physical implementation of the algorithm postulated at the algorithmic level in the brain.
Ulric Neisser coined the term cognitive psychology in his book with that title published in 1967. Neisser characterizes people as dynamic information-processing systems whose mental operations might be described in computational terms.
Steven Pinker described the language instinct as an evolved, built-in capacity to learn language (if not writing). His 1997 book How the Mind Works sought to popularize the computational theory of mind for wide audiences.
Hilary Putnam proposed functionalism to describe consciousness, asserting that it is the computation that equates to consciousness, regardless of whether the computation is operating in a brain or in a computer.
== See also ==
=== Alternative theories ===
== References ==
== Further reading ==
Block, Ned, ed. (1983). Readings in Philosophy of Psychology. Vol. 1. Cambridge, Massachusetts: Harvard University Press. ISBN 067474876X. OCLC 810753995.
Chalmers, David (2011). "A Computational Foundation for the Study of Cognition". Journal of Cognitive Science. 12 (4): 323–357.
Crane, Tim (2016). The Mechanical Mind: A Philosophical Introduction to Minds, Machines, and Mental Representation (3rd ed.). London and New York: Routledge. ISBN 9781138858329. OCLC 964575493.
Fodor, Jerry (1975). The Language of Thought. Cambridge, Massachusetts: Harvard University Press. ISBN 0674510305. OCLC 15149586.
Fodor, Jerry (1995). The Elm and the Expert: Mentalese and Its Semantics (eBook ed.). Cambridge, Massachusetts: The MIT Press. doi:10.7551/mitpress/2693.001.0001. ISBN 9780262272889. OCLC 9470770683.
Fodor, Jerry (1998). Concepts: Where Cognitive Science Went Wrong. Oxford Cognitive Science Series. Oxford and New York: Clarendon Press of the Oxford University Press. ISBN 9780198236375. OCLC 38079317.
Fodor, Jerry (2000). The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA and London: The MIT Press. ISBN 9780262062121. OCLC 43109956.
Fodor, Jerry (2010). LOT 2: The Language of Thought Revisited. Oxford and New York: Oxford University Press. ISBN 9780199548774. OCLC 470698989.
Harnad, Stevan (1994). "Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't". Minds and Machines. 4 (4): 379–390. doi:10.1007/bf00974165. S2CID 230344.
Marr, David (2010) [1981]. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Cambridge, MA and London: The MIT Press. ISBN 978-0-262-51462-0. OCLC 472791457.
Piccinini, Gualtiero (2015). Physical Computation: A Mechanistic Account. Oxford: Oxford University Press. ISBN 9780199658855. OCLC 920617851.
Pinker, Steven (1997). How the Mind Works. New York: W. W. Norton. ISBN 978-0393045352. OCLC 36379708.
Putnam, Hilary (1995) [1979]. Mathematics, Matter, and Method. Philosophical Papers, Volume 1 (2nd ed.). Cambridge and New York: Cambridge University Press. ISBN 0-521-29550-5. OCLC 258667059.
Putnam, Hilary (1995). Renewing Philosophy (Reissue ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 9780674760943. OCLC 60298037.
Pylyshyn, Zenon (1984). Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, Massachusetts: The MIT Press. ISBN 0262160986. OCLC 10507591.
Searle, John (1992). The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. ISBN 026269154X. OCLC 760581004.
Zalta, Edward N. (ed.). "The Computational Theory of Mind". Stanford Encyclopedia of Philosophy.
== External links ==
Computational theory of mind at the Indiana Philosophy Ontology Project
Computational theory of mind at PhilPapers
Online papers on consciousness, part 2: Other Philosophy of Mind, compiled by David Chalmers | Wikipedia/Computational_theory_of_mind |
In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process.
For a strongly stationary process, the conditional entropy of the latest random variable eventually tends towards this rate value.
== Definition ==
A process $X$ with a countable index gives rise to the sequence of its joint entropies $H_n(X_1, X_2, \dots, X_n)$. If the limit exists, the entropy rate is defined as
$$H(X) := \lim_{n\to\infty} \tfrac{1}{n} H_n.$$
Note that given any sequence $(a_n)_n$ with $a_0 = 0$ and letting $\Delta a_k := a_k - a_{k-1}$, by telescoping one has $a_n = \sum_{k=1}^{n} \Delta a_k$. The entropy rate thus computes the mean of the first $n$ such entropy changes, with $n$ going to infinity.
The behaviour of joint entropies from one index to the next is also the explicit subject of some characterizations of entropy.
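As a concrete illustration of the definition, the following sketch (our own, not from the article; Python with NumPy) computes $\tfrac{1}{n}H_n$ for an i.i.d. binary source, where the joint entropy factorizes and the ratio is constant in $n$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# An i.i.d. binary source with P(0) = 0.9, P(1) = 0.1.
p1 = np.array([0.9, 0.1])

# The joint distribution of (X_1, ..., X_n) is the n-fold outer product,
# so H_n = n * H(X_1) and the entropy rate equals H(X_1) itself.
joint = p1.copy()
for n in range(1, 6):
    print(n, entropy(joint) / n)         # constant: ~0.469 bits/symbol
    joint = np.outer(joint, p1).ravel()  # extend to n+1 variables
```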
== Discussion ==
While $X$ may be understood as a sequence of random variables, the entropy rate $H(X)$ represents the average entropy change per one random variable, in the long term.
It can be thought of as a general property of stochastic sources; this is the subject of the asymptotic equipartition property.
=== For strongly stationary processes ===
A stochastic process also gives rise to a sequence of conditional entropies, comprising more and more random variables.
For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence:
$$H(X) = \lim_{n\to\infty} H(X_n \mid X_{n-1}, X_{n-2}, \dots, X_1).$$
The quantity given by the limit on the right is also denoted $H'(X)$, a notation motivated by the fact that this is then again a rate associated with the process, in the above sense.
=== For Markov chains ===
Since a stochastic process defined by a Markov chain that is irreducible and aperiodic has a stationary distribution, the entropy rate is independent of the initial distribution.
For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix $P_{ij}$ and an entropy
$$h_i := -\sum_j P_{ij} \log P_{ij}$$
associated with each state, one finds
$$H(X) = \sum_i \mu_i h_i,$$
where $\mu_i$ is the asymptotic distribution of the chain.
In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member in the process.
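The following minimal NumPy sketch (our own illustration; the function name is hypothetical) evaluates this formula for a finite chain, recovering $\mu_i$ as the left eigenvector of $P$ for eigenvalue 1:

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (in bits) of an irreducible, aperiodic Markov chain
    with right stochastic transition matrix P (rows sum to 1)."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    mu = np.real(v[:, np.argmin(np.abs(w - 1))])
    mu = mu / mu.sum()
    # Per-state entropies h_i = -sum_j P_ij log P_ij, with 0 log 0 := 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.sum(np.where(P > 0, P * np.log2(P), 0.0), axis=1)
    return float(mu @ h)

# Two-state chain that flips state with probability 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(markov_entropy_rate(P))  # ~0.469 bits per step
```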
=== For hidden Markov models ===
The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain $X_{1:\infty}$ be stationary, and let $Y_{1:\infty}$ be the observable states; then
$$H(Y_n \mid X_1, Y_{1:n-1}) \;\le\; H(Y) \;\le\; H(Y_n \mid Y_{1:n-1}),$$
and in the limit $n \to \infty$, both sides converge to the middle.
== Applications ==
The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning.
== See also ==
Information source (mathematics)
Markov information source
Asymptotic equipartition property
Maximal entropy random walk - chosen to maximize entropy rate
== References ==
== External links ==
Cover, T. and Thomas, J. Elements of Information Theory. John Wiley and Sons, Inc. Second Edition, 2006. | Wikipedia/Entropy_rate |
The μ-law algorithm (sometimes written mu-law, often abbreviated as u-law) is a companding algorithm, primarily used in 8-bit PCM digital telecommunications systems in North America and Japan. It is one of the two companding algorithms in the G.711 standard from ITU-T, the other being the similar A-law. A-law is used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.
The terms PCMU, G711u or G711MU are used for G711 μ-law.
Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth for equivalent SNR.
At the cost of a reduced peak SNR, it can be mathematically shown that μ-law's non-linear quantization effectively increases dynamic range by 33 dB or 5½ bits over a linearly-quantized signal, hence 13.5 bits (which rounds up to 14 bits) is the most resolution required for an input digital signal to be compressed for 8-bit μ-law.
== Algorithm types ==
The μ-law algorithm may be described in an analog form and in a quantized digital form.
=== Continuous ===
For a given input x, the equation for μ-law encoding is
$$F(x) = \operatorname{sgn}(x)\,\frac{\ln(1+\mu|x|)}{\ln(1+\mu)}, \quad -1 \le x \le 1,$$
where μ = 255 in the North American and Japanese standards, and sgn(x) is the sign function. The range of this function is −1 to 1.
μ-law expansion is then given by the inverse equation:
$$F^{-1}(y) = \operatorname{sgn}(y)\,\frac{(1+\mu)^{|y|} - 1}{\mu}, \quad -1 \le y \le 1.$$
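A minimal sketch of the two continuous equations above (our own, with hypothetical function names; the real G.711 codec uses the quantized, segmented form described below rather than direct evaluation):

```python
import numpy as np

MU = 255.0  # North American / Japanese standard

def mu_compress(x, mu=MU):
    """Continuous mu-law compression F(x) for x in [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=MU):
    """Inverse mapping F^{-1}(y) for y in [-1, 1]."""
    return np.sign(y) * (np.power(1.0 + mu, np.abs(y)) - 1.0) / mu

x = np.linspace(-1, 1, 9)
assert np.allclose(mu_expand(mu_compress(x)), x)  # exact round trip
```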
=== Discrete ===
The discrete form is defined in ITU-T Recommendation G.711.
G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0).
However, G.191 provides example code in the C language for a μ-law encoder. Note the asymmetry between the positive and negative ranges: for example, the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1's complement (simple bit inversion) rather than 2's complement to convert a negative value to a positive value during encoding.
== Implementation ==
The μ-law algorithm may be implemented in several ways:
Analog: use an amplifier with non-linear gain to achieve companding entirely in the analog domain.
Non-linear ADC: use an analog-to-digital converter with quantization levels unequally spaced to match the μ-law algorithm.
Digital: use the quantized digital version of the μ-law algorithm to convert data once it is in the digital domain.
Software/DSP: use the continuous version of the μ-law algorithm to calculate the companded values.
== Usage justification ==
μ-law encoding is used because speech has a wide dynamic range. In analog signal transmission, in the presence of relatively constant background noise, the finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that the perceived acoustic intensity level or loudness is logarithmic by compressing the signal using a logarithmic-response operational amplifier (Weber–Fechner law). In telecommunications circuits, most of the noise is injected on the lines, thus after the compressor, the intended signal is perceived as significantly louder than the static, compared to an uncompressed source. This became a common solution, and thus, prior to common digital usage, the μ-law specification was developed to define an interoperable standard.
This pre-existing algorithm had the effect of significantly lowering the amount of bits required to encode a recognizable human voice in digital systems. A sample could be effectively encoded using μ-law in as little as 8 bits, which conveniently matched the symbol size of the majority of common computers.
μ-law encoding effectively reduced the dynamic range of the signal, thereby increasing the coding efficiency while biasing the signal in a way that results in a signal-to-distortion ratio that is greater than that obtained by linear encoding for a given number of bits.
The μ-law algorithm is also used in the .au format, which dates back at least to the SPARCstation 1 by Sun Microsystems as the native method used by the /dev/audio interface, widely used as a de facto standard for sound on Unix systems. The au format is also used in various common audio APIs such as the classes in the sun.audio Java package in Java 1.1 and in some C# methods.
(Figure: a plot illustrating how μ-law concentrates sampling in the smaller (softer) values; the horizontal axis represents the byte values 0–255 and the vertical axis the 16-bit linear decoded value of μ-law encoding.)
== Comparison with A-law ==
The μ-law algorithm provides a slightly larger dynamic range than the A-law at the cost of worse proportional distortions for small signals. By convention, A-law is used for an international connection if at least one country uses it.
== See also ==
Dynamic range compression
Signal compression (disambiguation)
G.711, a waveform speech coder using either A-law or μ-law encoding
Tapered floating point
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 22 January 2022.
== External links ==
Waveform Coding Techniques – details of implementation
A-Law and mu-Law Companding Implementations Using the TMS320C54x (PDF)
TMS320C6000 μ-Law and A-Law Companding with Software or the McBSP (PDF)
A-law and μ-law realisation (in C)
u-law implementation in C-language with example code | Wikipedia/Μ-law_algorithm |
In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.
The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.
== Definition ==
The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as
$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha} \log\left(\sum_{i=1}^{n} p_i^\alpha\right).$$
It is further defined at $\alpha = 0, 1, \infty$ as
$$\mathrm{H}_\alpha(X) = \lim_{\gamma\to\alpha} \mathrm{H}_\gamma(X).$$
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \dots, x_n\}$ and corresponding probabilities $p_i \doteq \Pr(X = x_i)$ for $i = 1, \dots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e.
If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$.
In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function in $\alpha$.
Applications often exploit the following relation between the Rényi entropy and the $\alpha$-norm of the vector of probabilities:
$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha} \log\left(\left\|P\right\|_\alpha\right).$$
Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$.
The Rényi entropy for any $\alpha \geq 0$ is Schur concave, as can be proven by the Schur–Ostrowski criterion.
== Special cases ==
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of X. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.
=== Hartley or max-entropy ===
$\mathrm{H}_0(X)$ is $\log n$ where $n$ is the number of non-zero probabilities. If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet ($\mathcal{A}$) of $X$, sometimes called the Hartley entropy of $X$:
$$\mathrm{H}_0(X) = \log n = \log|\mathcal{A}|.$$
=== Shannon entropy ===
The limiting value of $\mathrm{H}_\alpha$ as $\alpha \to 1$ is the Shannon entropy:
$$\mathrm{H}_1(X) \equiv \lim_{\alpha\to 1} \mathrm{H}_\alpha(X) = -\sum_{i=1}^{n} p_i \log p_i.$$
=== Collision entropy ===
Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha = 2$:
$$\mathrm{H}_2(X) = -\log \sum_{i=1}^{n} p_i^2 = -\log P(X = Y),$$
where $X$ and $Y$ are independent and identically distributed. The collision entropy is related to the index of coincidence. It is the negative logarithm of the Simpson diversity index.
=== Min-entropy ===
In the limit as $\alpha \to \infty$, the Rényi entropy $\mathrm{H}_\alpha$ converges to the min-entropy $\mathrm{H}_\infty$:
$$\mathrm{H}_\infty(X) \doteq \min_i(-\log p_i) = -\left(\max_i \log p_i\right) = -\log \max_i p_i.$$
Equivalently, the min-entropy $\mathrm{H}_\infty(X)$ is the largest real number b such that all events occur with probability at most $2^{-b}$.
The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies.
In this sense, it is the strongest way to measure the information content of a discrete random variable.
In particular, the min-entropy is never larger than the Shannon entropy.
The min-entropy has important applications for randomness extractors in theoretical computer science:
Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
== Inequalities for different orders α ==
The Rényi entropy $\mathrm{H}_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_i$, which can be proven by differentiation:
$$-\frac{d\mathrm{H}_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^{n} z_i \log(z_i/p_i) = \frac{1}{(1-\alpha)^2} D_{KL}(z\|p),$$
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where $z_i = p_i^\alpha / \sum_{j=1}^{n} p_j^\alpha$. In particular, it is strictly positive except when the distribution is uniform.
At the $\alpha \to 1$ limit, we have $-\frac{d\mathrm{H}_\alpha}{d\alpha} \to \frac{1}{2}\sum_i p_i\left(\ln p_i + H(p)\right)^2$.
In particular cases the inequalities can also be proven by Jensen's inequality:
$$\log n = \mathrm{H}_0 \geq \mathrm{H}_1 \geq \mathrm{H}_2 \geq \mathrm{H}_\infty.$$
For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have
$$\mathrm{H}_2 \leq 2\mathrm{H}_\infty.$$
On the other hand, the Shannon entropy $\mathrm{H}_1$ can be arbitrarily high for a random variable $X$ that has a given min-entropy. An example of this is given by the sequence of random variables $X_n \sim \{0, \ldots, n\}$ for $n \geq 1$ such that $P(X_n = 0) = 1/2$ and $P(X_n = x) = 1/(2n)$ otherwise, since $\mathrm{H}_\infty(X_n) = \log 2$ but $\mathrm{H}_1(X_n) = (\log 2 + \log 2n)/2$.
== Rényi divergence ==
As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.
The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution P from a distribution Q is defined to be
$$D_\alpha(P\,\|\,Q) = \frac{1}{\alpha-1} \log\left(\sum_{i=1}^{n} \frac{p_i^\alpha}{q_i^{\alpha-1}}\right) = \frac{1}{\alpha-1} \log \mathbb{E}_{i\sim p}\!\left[\left(p_i/q_i\right)^{\alpha-1}\right]$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values α = 0, 1, ∞ by taking a limit, and in particular the limit α → 1 gives the Kullback–Leibler divergence.
Some special cases:
$D_0(P\|Q) = -\log Q(\{i : p_i > 0\})$: minus the log probability under Q that $p_i > 0$;
$D_{1/2}(P\|Q) = -2\log\sum_{i=1}^{n} \sqrt{p_i q_i}$: minus twice the logarithm of the Bhattacharyya coefficient (Nielsen & Boltz (2010));
$D_1(P\|Q) = \sum_{i=1}^{n} p_i \log\frac{p_i}{q_i}$: the Kullback–Leibler divergence;
$D_2(P\|Q) = \log\left\langle \frac{p_i}{q_i}\right\rangle$: the log of the expected ratio of the probabilities;
$D_\infty(P\|Q) = \log\sup_i \frac{p_i}{q_i}$: the log of the maximum ratio of the probabilities.
The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P\|Q)$ is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order α, and it is continuous on the set of α for which it is finite. The divergence of order α is also known as the information of order α obtained if the distribution P is replaced by the distribution Q.
== Financial interpretation ==
A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows:
$$\mathrm{ExpectedRate} = \frac{1}{R}\,D_1(b\,\|\,m) + \frac{R-1}{R}\,D_{1/R}(b\,\|\,m),$$
where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).
If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:
$$\mathrm{RealizedRate} = \frac{1}{R}\left(D_1(p\,\|\,m) - D_1(p\,\|\,b)\right) + \frac{R-1}{R}\,D_{1/R}(b\,\|\,m).$$
== Properties specific to α = 1 ==
The value $\alpha = 1$, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly:
$$\mathrm{H}(A,X) = \mathrm{H}(A) + \mathbb{E}_{a\sim A}\big[\mathrm{H}(X|A=a)\big]$$
for the absolute entropies, and
$$D_{\mathrm{KL}}(p(x|a)p(a)\,\|\,m(x,a)) = D_{\mathrm{KL}}(p(a)\,\|\,m(a)) + \mathbb{E}_{p(a)}\{D_{\mathrm{KL}}(p(x|a)\,\|\,m(x|a))\}$$
for the relative entropies.
The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the distribution of p(x|a) remains m(x|a), unchanged.
The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then
$$\mathrm{H}_\alpha(A,X) = \mathrm{H}_\alpha(A) + \mathrm{H}_\alpha(X)$$
and
$$D_\alpha(P(A)P(X)\,\|\,Q(A)Q(X)) = D_\alpha(P(A)\,\|\,Q(A)) + D_\alpha(P(X)\,\|\,Q(X)).$$
The stronger properties of the $\alpha = 1$ quantities allow the definition of conditional information and mutual information from communication theory.
== Exponential families ==
The Rényi entropies and divergences for an exponential family admit simple expressions:
$$\mathrm{H}_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha}\left(F(\alpha\theta) - \alpha F(\theta) + \log E_p\left[e^{(\alpha-1)k(x)}\right]\right)$$
and
$$D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha},$$
where
$$J_{F,\alpha}(\theta:\theta') = \alpha F(\theta) + (1-\alpha)F(\theta') - F(\alpha\theta + (1-\alpha)\theta')$$
is a Jensen difference divergence.
== Physical meaning ==
The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers.
The limit of the quantum mechanical Rényi entropy as $\alpha \to 1$ is the von Neumann entropy.
== See also ==
Diversity indices
Tsallis entropy
Generalized entropy index
== Notes ==
== References == | Wikipedia/Rényi_entropy |
In Euclidean geometry, an affine transformation or affinity (from the Latin, affinis, "connected with") is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles.
More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line.
If X is the point set of an affine space, then every affine transformation on X can be represented as the composition of a linear transformation on X and a translation of X. Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.
Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, hyperbolic rotation, shear mapping, and compositions of them in any combination and sequence.
Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity invariant, restricted to the complement of that hyperplane.
A generalization of an affine transformation is an affine map (or affine homomorphism or affine mapping) between two (potentially different) affine spaces over the same field k. Let (X, V, k) and (Z, W, k) be two affine spaces with X and Z the point sets and V and W the respective associated vector spaces over the field k. A map f : X → Z is an affine map if there exists a linear map mf : V → W such that mf (x − y) = f (x) − f (y) for all x, y in X.
== Definition ==
Let X be an affine space over a field k, and V be its associated vector space. An affine transformation is a bijection f from X onto itself that is an affine map; this means that a linear map g from V to V is well defined by the equation
$$g(y - x) = f(y) - f(x);$$
here, as usual, the subtraction of two points denotes the free vector from the second point to the first one, and "well-defined" means that $y - x = y' - x'$ implies
$$f(y) - f(x) = f(y') - f(x').$$
If the dimension of X is at least two, a semiaffine transformation f of X is a bijection from X onto itself satisfying:
For every d-dimensional affine subspace S of X, then f (S) is also a d-dimensional affine subspace of X.
If S and T are parallel affine subspaces of X, then f (S) and f (T) are parallel.
These two conditions are satisfied by affine transformations, and express what is precisely meant by the expression that "f preserves parallelism".
These conditions are not independent as the second follows from the first. Furthermore, if the field k has at least three elements, the first condition can be simplified to: f is a collineation, that is, it maps lines to lines.
== Structure ==
By the definition of an affine space, V acts on X, so that, for every pair $(x, \mathbf{v})$ in X × V there is associated a point y in X. We can denote this action by $\vec{v}(x) = y$. Here we use the convention that $\vec{v} = \mathbf{v}$ are two interchangeable notations for an element of V. By fixing a point c in X one can define a function $m_c : X \to V$ by $m_c(x) = \overrightarrow{cx}$. For any c, this function is one-to-one, and so has an inverse function $m_c^{-1} : V \to X$ given by $m_c^{-1}(\mathbf{v}) = \vec{v}(c)$. These functions can be used to turn X into a vector space (with respect to the point c) by defining:
$$x + y = m_c^{-1}\left(m_c(x) + m_c(y)\right), \quad \text{for all } x, y \text{ in } X,$$
and
$$rx = m_c^{-1}\left(r\,m_c(x)\right), \quad \text{for all } r \text{ in } k \text{ and } x \text{ in } X.$$
This vector space has origin c and formally needs to be distinguished from the affine space X, but common practice is to denote it by the same symbol and mention that it is a vector space after an origin has been specified. This identification permits points to be viewed as vectors and vice versa.
For any linear transformation λ of V, we can define the function L(c, λ) : X → X by
$$L(c,\lambda)(x) = m_c^{-1}\left(\lambda(m_c(x))\right) = c + \lambda(\overrightarrow{cx}).$$
Then L(c, λ) is an affine transformation of X which leaves the point c fixed. It is a linear transformation of X, viewed as a vector space with origin c.
Let σ be any affine transformation of X. Pick a point c in X and consider the translation of X by the vector $\mathbf{w} = \overrightarrow{c\,\sigma(c)}$, denoted by $T_{\mathbf{w}}$. Translations are affine transformations and the composition of affine transformations is an affine transformation. For this choice of c, there exists a unique linear transformation λ of V such that
$$\sigma(x) = T_{\mathbf{w}}\left(L(c,\lambda)(x)\right).$$
That is, an arbitrary affine transformation of X is the composition of a linear transformation of X (viewed as a vector space) and a translation of X.
This representation of affine transformations is often taken as the definition of an affine transformation (with the choice of origin being implicit).
== Representation ==
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally, in the finite-dimensional case, if the linear map is represented as a multiplication by an invertible matrix $A$ and the translation as the addition of a vector $\mathbf{b}$, an affine map $f$ acting on a vector $\mathbf{x}$ can be represented as
$$\mathbf{y} = f(\mathbf{x}) = A\mathbf{x} + \mathbf{b}.$$
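For instance, a minimal sketch (our own illustration, not from the article) of such a map in the plane, with $A$ a 90° rotation and $\mathbf{b} = (1, 2)$:

```python
import numpy as np

# Hypothetical affine map f(x) = A x + b: rotate 90 degrees, then translate.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
b = np.array([1.0, 2.0])

def f(x):
    return A @ x + b

print(f(np.array([1.0, 0.0])))  # (1, 0) -> (1, 3)
```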
=== Augmented matrix ===
Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map using a single matrix multiplication. The technique requires that all vectors be augmented with a "1" at the end, and all matrices be augmented with an extra row of zeros at the bottom, an extra column—the translation vector—to the right, and a "1" in the lower right corner. If
$A$ is a matrix,
$$\begin{bmatrix}\mathbf{y}\\1\end{bmatrix} = \left[\begin{array}{ccc|c} & A & & \mathbf{b}\\ 0 & \cdots & 0 & 1\end{array}\right] \begin{bmatrix}\mathbf{x}\\1\end{bmatrix}$$
is equivalent to the following
$$\mathbf{y} = A\mathbf{x} + \mathbf{b}.$$
The above-mentioned augmented matrix is called an affine transformation matrix. In the general case, when the last row vector is not restricted to be $\left[\begin{array}{ccc|c}0 & \cdots & 0 & 1\end{array}\right]$, the matrix becomes a projective transformation matrix (as it can also be used to perform projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of $K^n$ and $\operatorname{GL}(n, K)$. This is a group under the operation of composition of functions, called the affine group.
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate "1" to every vector, one essentially considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the origin of the original space can be found at
$(0, 0, \dotsc, 0, 1)$. A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is Euclidean, the higher dimensional space is a real projective space.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision and robotics.
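A short sketch (ours; the helper name is hypothetical) showing how two affine maps compose through a single matrix product in homogeneous coordinates:

```python
import numpy as np

def augmented(A, b):
    """Pack linear part A and translation b into one (n+1)x(n+1)
    affine transformation matrix acting on homogeneous coordinates."""
    n = len(b)
    M = np.eye(n + 1)
    M[:n, :n], M[:n, n] = A, b
    return M

R = augmented(np.array([[0.0, -1.0], [1.0, 0.0]]), np.zeros(2))  # rotation
T = augmented(np.eye(2), np.array([1.0, 2.0]))                   # translation
M = T @ R   # composite map: rotate first, then translate

x = np.array([1.0, 0.0, 1.0])   # point (1, 0) with the appended "1"
print(M @ x)                    # -> [1. 3. 1.]
```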
==== Example augmented matrix ====
Suppose you have three points that define a non-degenerate triangle in a plane, or four points that define a non-degenerate tetrahedron in 3-dimensional space, or generally n + 1 points x1, ..., xn+1 that define a non-degenerate simplex in n-dimensional space. Suppose you have corresponding destination points y1, ..., yn+1, where these new points can lie in a space with any number of dimensions. (Furthermore, the new points need not be distinct from each other and need not form a non-degenerate simplex.) The unique augmented matrix M that achieves the affine transformation
$$\begin{bmatrix}\mathbf{y}_i\\1\end{bmatrix} = M \begin{bmatrix}\mathbf{x}_i\\1\end{bmatrix}$$
for every i is
$$M = \begin{bmatrix}\mathbf{y}_1 & \cdots & \mathbf{y}_{n+1}\\ 1 & \cdots & 1\end{bmatrix} \begin{bmatrix}\mathbf{x}_1 & \cdots & \mathbf{x}_{n+1}\\ 1 & \cdots & 1\end{bmatrix}^{-1}.$$
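A minimal numeric check of this formula (our own example points): a plane triangle mapped under the rotation-plus-translation from the earlier sketch, with the unique $M$ recovered from the point correspondences alone:

```python
import numpy as np

# Columns are the source points x_1, x_2, x_3 with a row of ones appended.
X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
# Columns are the destination points y_1, y_2, y_3 (images under the
# map A = 90-degree rotation, b = (1, 2)), likewise augmented.
Y = np.array([[1.0, 1.0, 0.0],
              [2.0, 3.0, 2.0],
              [1.0, 1.0, 1.0]])

M = Y @ np.linalg.inv(X)      # the unique augmented matrix
assert np.allclose(M @ X, Y)  # maps each x_i to y_i
print(M)                      # [[0 -1 1], [1 0 2], [0 0 1]]
```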
== Properties ==
=== Properties preserved ===
An affine transformation preserves:
collinearity between points: three or more points which lie on the same line (called collinear points) continue to be collinear after the transformation.
parallelism: two or more lines which are parallel, continue to be parallel after the transformation.
convexity of sets: a convex set continues to be convex after the transformation. Moreover, the extreme points of the original set are mapped to the extreme points of the transformed set.
ratios of lengths of parallel line segments: for distinct parallel segments defined by points $p_1$ and $p_2$, $p_3$ and $p_4$, the ratio of $\overrightarrow{p_1 p_2}$ and $\overrightarrow{p_3 p_4}$ is the same as that of $\overrightarrow{f(p_1)\,f(p_2)}$ and $\overrightarrow{f(p_3)\,f(p_4)}$.
barycenters of weighted collections of points.
=== Groups ===
As an affine transformation is invertible, the square matrix $A$ appearing in its matrix representation is invertible. The matrix representation of the inverse transformation is thus
$$\left[\begin{array}{ccc|c} & A^{-1} & & -A^{-1}\vec{b}\\ 0 & \ldots & 0 & 1\end{array}\right].$$
The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear group of degree $n$ as subgroup and is itself a subgroup of the general linear group of degree $n+1$.
The similarity transformations form the subgroup where $A$ is a scalar times an orthogonal matrix. For example, if the affine transformation acts on the plane and if the determinant of $A$ is 1 or −1 then the transformation is an equiareal mapping. Such transformations form a subgroup called the equi-affine group. A transformation that is both equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of orientation-preserving or positive affine transformations: those where the determinant of $A$ is positive. In the last case this is in 3D the group of rigid transformations (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the transformation than describing it as a combination of a translation and a rotation. However, this depends on application and context.
== Affine maps ==
An affine map $f\colon \mathcal{A} \to \mathcal{B}$ between two affine spaces is a map on the points that acts linearly on the vectors (that is, the vectors between points of the space). In symbols, $f$ determines a linear transformation $\varphi$ such that, for any pair of points $P, Q \in \mathcal{A}$:
$$\overrightarrow{f(P)\,f(Q)} = \varphi(\overrightarrow{PQ})$$
or
$$f(Q) - f(P) = \varphi(Q - P).$$
We can interpret this definition in a few other ways, as follows.
If an origin $O \in \mathcal{A}$ is chosen, and $B$ denotes its image $f(O) \in \mathcal{B}$, then this means that for any vector $\vec{x}$:
$$f\colon (O + \vec{x}) \mapsto (B + \varphi(\vec{x})).$$
If an origin $O' \in \mathcal{B}$ is also chosen, this can be decomposed as an affine transformation $g\colon \mathcal{A} \to \mathcal{B}$ that sends $O \mapsto O'$, namely
$$g\colon (O + \vec{x}) \mapsto (O' + \varphi(\vec{x})),$$
followed by the translation by a vector $\vec{b} = \overrightarrow{O'B}$.
The conclusion is that, intuitively, $f$ consists of a translation and a linear map.
=== Alternative definition ===
Given two affine spaces $\mathcal{A}$ and $\mathcal{B}$ over the same field, a function $f\colon \mathcal{A} \to \mathcal{B}$ is an affine map if and only if for every family $\{(a_i, \lambda_i)\}_{i\in I}$ of weighted points in $\mathcal{A}$ such that $\sum_{i\in I} \lambda_i = 1$, we have
$$f\left(\sum_{i\in I} \lambda_i a_i\right) = \sum_{i\in I} \lambda_i f(a_i).$$
In other words, $f$ preserves barycenters.
== History ==
The word "affine" as a mathematical term is defined in connection with tangents to curves in Euler's 1748 Introductio in analysin infinitorum. Felix Klein attributes the term "affine transformation" to Möbius and Gauss.
== Image transformation ==
In their applications to digital image processing, the affine transformations are analogous to printing on a sheet of rubber and stretching the sheet's edges parallel to the plane. This transform relocates pixels, requiring intensity interpolation to approximate the value of moved pixels; bicubic interpolation is the standard for image transformations in image processing applications. Affine transformations scale, rotate, translate, mirror and shear images as shown in the following examples:
The affine transforms are applicable to the registration process where two or more images are aligned (registered). An example of image registration is the generation of panoramic images that are the product of multiple images stitched together.
=== Affine warping ===
The affine transform preserves parallel lines. However, the stretching and shearing transformations warp shapes, as the following example shows:
This is an example of image warping. However, the affine transformations do not facilitate projection onto a curved surface or radial distortions.
== In the plane ==
Every affine transformation in a Euclidean plane is the composition of a translation and an affine transformation that fixes a point; the latter may be
a homothety,
a rotation around the fixed point,
a scaling, with possibly negative scaling factors, in two directions (not necessarily perpendicular); this includes reflections,
a shear mapping, or
a squeeze mapping.
Given two non-degenerate triangles ABC and A′B′C′ in a Euclidean plane, there is a unique affine transformation T that maps A to A′, B to B′ and C to C′. Each of ABC and A′B′C′ defines an affine coordinate system and a barycentric coordinate system. Given a point P, the point T(P) is the point that has the same coordinates on the second system as the coordinates of P on the first system.
Affine transformations do not respect lengths or angles; they multiply areas by the constant factor
area of A′B′C′ / area of ABC.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by comparing the orientations of the triangles.
== Examples ==
=== Over the real numbers ===
The functions $f\colon \mathbb{R} \to \mathbb{R},\ f(x) = mx + c$, with $m$ and $c$ in $\mathbb{R}$ and $m \neq 0$, are precisely the affine transformations of the real line.
=== In plane geometry ===
In $\mathbb{R}^2$, the transformation shown at left is accomplished using the map given by:
$$\begin{bmatrix}x\\y\end{bmatrix} \mapsto \begin{bmatrix}0&1\\2&1\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \begin{bmatrix}-100\\-100\end{bmatrix}$$
Transforming the three corner points of the original triangle (in red) gives three new points which form the new triangle (in blue). This transformation skews and translates the original triangle.
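A brief sketch of this step (our own; the corner coordinates are made up for illustration, since the article's figure is not reproduced here):

```python
import numpy as np

A = np.array([[0, 1],
              [2, 1]])
b = np.array([-100, -100])

# Hypothetical corners of the original (red) triangle, one per row.
corners = np.array([[0, 0], [100, 0], [0, 100]])
print(corners @ A.T + b)  # corners of the transformed (blue) triangle
```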
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but not for all quadrilaterals.
== See also ==
Anamorphosis – artistic applications of affine transformations
Affine geometry
3D projection
Homography
Flat (geometry)
Bent function
Multilinear polynomial
== Notes ==
== References ==
Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1999), Geometry, Cambridge University Press, ISBN 978-0-521-59787-6
Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press, ISBN 978-0-521-44177-3
Klein, Felix (1948) [1939], Elementary Mathematics from an Advanced Standpoint: Geometry, Dover
Samuel, Pierre (1988), Projective Geometry, Springer-Verlag, ISBN 0-387-96752-4
Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
Snapper, Ernst; Troyer, Robert J. (1989) [1971], Metric Affine Geometry, Dover, ISBN 978-0-486-66108-7
Wan, Zhe-xian (1993), Geometry of Classical Groups over Finite Fields, Chartwell-Bratt, ISBN 0-86238-326-9
== External links ==
Media related to Affine transformation at Wikimedia Commons
"Affine transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Geometric Operations: Affine Transform, R. Fisher, S. Perkins, A. Walker and E. Wolfart.
Weisstein, Eric W. "Affine Transformation". MathWorld.
Affine Transform by Bernard Vuilleumier, Wolfram Demonstrations Project.
Affine Transformation with MATLAB | Wikipedia/Affine_function |
In mathematics, a Petersson algebra is a composition algebra over a field constructed from an order-3 automorphism of a Hurwitz algebra. They were first constructed by Petersson (1969).
== Construction ==
Suppose that C is a Hurwitz algebra and φ is an order 3 automorphism. Define the new product of x and y to be φ(x)φ²(y). With this new product the algebra is called a Petersson algebra.
== References ==
Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, vol. 44, Providence, RI: American Mathematical Society, ISBN 0-8218-0904-0, Zbl 0955.16001
Petersson, Holger P. (1969), "Eine Identität fünften Grades, der gewisse Isotope von Kompositions-Algebren genügen", Math. Z. (in German), 109 (3): 217–238, doi:10.1007/BF01111407, MR 0242910, S2CID 122353090
In mathematics, a quaternion algebra over a field F is a central simple algebra A over F that has dimension 4 over F. Every quaternion algebra becomes a matrix algebra by extending scalars (equivalently, tensoring with a field extension), i.e. for a suitable field extension K of F,
A ⊗F K is isomorphic to the 2 × 2 matrix algebra over K.
The notion of a quaternion algebra can be seen as a generalization of Hamilton's quaternions to an arbitrary base field. The Hamilton quaternions are a quaternion algebra (in the above sense) over F = ℝ, and indeed the only one over ℝ apart from the 2 × 2 real matrix algebra, up to isomorphism. When F = ℂ, the biquaternions form the quaternion algebra over F.
== Structure ==
Quaternion algebra here means something more general than the algebra of Hamilton's quaternions. When the coefficient field F does not have characteristic 2, every quaternion algebra over F can be described as a 4-dimensional F-vector space with basis {1, i, j, k}, with the following multiplication rules:
{\displaystyle i^{2}=a}
{\displaystyle j^{2}=b}
{\displaystyle ij=k}
{\displaystyle ji=-k}
where a and b are any given nonzero elements of F. From these rules we get:
{\displaystyle k^{2}=ijij=-iijj=-ab}
The classical instances where F = ℝ are Hamilton's quaternions (a = b = −1) and split-quaternions (a = −1, b = +1). In split-quaternions, k² = +1 and jk = −i, differing from Hamilton's equations.
The algebra defined in this way is denoted (a,b)F or simply (a,b). When F has characteristic 2, a different explicit description in terms of a basis of 4 elements is also possible, but in any event the definition of a quaternion algebra over F as a 4-dimensional central simple algebra over F applies uniformly in all characteristics.
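For illustration, here is a minimal sketch of (a,b)F by structure constants; the general product below is derived from the basis rules above, the helper names are ours, and Hamilton's case a = b = −1 is used for the checks:

```python
import numpy as np

def quat_mul(u, v, a, b):
    # product in (a,b)_F over the basis (1, i, j, k), derived from
    # i² = a, j² = b, ij = k = −ji (hence ik = aj, ki = −aj, jk = −bi, kj = bi, k² = −ab)
    t1, x1, y1, z1 = u
    t2, x2, y2, z2 = v
    return np.array([
        t1*t2 + a*x1*x2 + b*y1*y2 - a*b*z1*z2,
        t1*x2 + x1*t2 - b*y1*z2 + b*z1*y2,
        t1*y2 + y1*t2 + a*x1*z2 - a*z1*x2,
        t1*z2 + z1*t2 + x1*y2 - y1*x2,
    ])

a, b = -1.0, -1.0                       # Hamilton's quaternions
one, i, j, k = np.eye(4)                # coefficient vectors of the basis elements
assert np.allclose(quat_mul(i, j, a, b), k)          # ij = k
assert np.allclose(quat_mul(k, k, a, b), -a*b*one)   # k² = −ab

def norm(q):                            # N(t + xi + yj + zk) = t² − ax² − by² + abz²
    t, x, y, z = q
    return t*t - a*x*x - b*y*y + a*b*z*z

u, v = np.random.randn(4), np.random.randn(4)
assert np.isclose(norm(quat_mul(u, v, a, b)), norm(u) * norm(v))  # N(uv) = N(u)N(v)
```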
A quaternion algebra (a,b)F is either a division algebra or isomorphic to the matrix algebra of 2 × 2 matrices over F; the latter case is termed split. The norm form
{\displaystyle N(t+xi+yj+zk)=t^{2}-ax^{2}-by^{2}+abz^{2}\ }
defines a structure of division algebra if and only if the norm is an anisotropic quadratic form, that is, zero only on the zero element. The conic C(a,b) defined by
{\displaystyle ax^{2}+by^{2}=z^{2}\ }
has a point (x,y,z) with coordinates in F in the split case.
== Application ==
Quaternion algebras are applied in number theory, particularly to quadratic forms. They are concrete structures that generate the elements of order two in the Brauer group of F. For some fields, including algebraic number fields, every element of order 2 in its Brauer group is represented by a quaternion algebra. A theorem of Alexander Merkurjev implies that each element of order 2 in the Brauer group of any field is represented by a tensor product of quaternion algebras. In particular, over p-adic fields the construction of quaternion algebras can be viewed as the quadratic Hilbert symbol of local class field theory.
== Classification ==
It is a theorem of Frobenius that there are only two real quaternion algebras: 2 × 2 matrices over the reals and Hamilton's real quaternions.
In a similar way, over any local field F there are exactly two quaternion algebras: the 2 × 2 matrices over F and a division algebra.
But the quaternion division algebra over a local field is usually not Hamilton's quaternions over the field. For example, over the p-adic numbers Hamilton's quaternions are a division algebra only when p is 2. For odd prime p, the p-adic Hamilton quaternions are isomorphic to the 2 × 2 matrices over the p-adics. To see that the p-adic Hamilton quaternions are not a division algebra for odd prime p, observe that the congruence x² + y² ≡ −1 (mod p) is solvable and therefore by Hensel's lemma — here is where p being odd is needed — the equation
x² + y² = −1
is solvable in the p-adic numbers. Therefore the quaternion
xi + yj + k
has norm 0 and hence doesn't have a multiplicative inverse.
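A brute-force illustration of the solvability of the congruence for small odd primes (the helper name is ours):

```python
# For every odd prime p, x² + y² ≡ −1 (mod p) is solvable; this is the first
# step of the splitting argument above. The search is exhaustive for clarity.
def solve_minus_one(p):
    for x in range(p):
        for y in range(p):
            if (x * x + y * y + 1) % p == 0:
                return x, y

for p in [3, 5, 7, 11, 13, 97]:
    x, y = solve_minus_one(p)
    assert (x * x + y * y + 1) % p == 0
```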
One way to classify the F-algebra isomorphism classes of all quaternion algebras for a given field F is to use the one-to-one correspondence between isomorphism classes of quaternion algebras over F and isomorphism classes of their norm forms.
To every quaternion algebra A, one can associate a quadratic form N (called the norm form) on A such that
{\displaystyle N(xy)=N(x)N(y)}
for all x and y in A. It turns out that the possible norm forms for quaternion F-algebras are exactly the Pfister 2-forms.
== Quaternion algebras over the rational numbers ==
Quaternion algebras over the rational numbers have an arithmetic theory similar to, but more complicated than, that of quadratic extensions of ℚ.
Let B be a quaternion algebra over ℚ and let ν be a place of ℚ, with completion ℚν (so it is either the p-adic numbers ℚp for some prime p or the real numbers ℝ). Define Bν := ℚν ⊗ℚ B, which is a quaternion algebra over ℚν. So there are two choices for Bν: the 2 × 2 matrices over ℚν or a division algebra.
We say that B is split (or unramified) at ν if Bν is isomorphic to the 2 × 2 matrices over ℚν. We say that B is non-split (or ramified) at ν if Bν is the quaternion division algebra over ℚν. For example, the rational Hamilton quaternions are non-split at 2 and at ∞ and split at all odd primes. The rational 2 × 2 matrices are split at all places.
A quaternion algebra over the rationals which splits at ∞ is analogous to a real quadratic field, and one which is non-split at ∞ is analogous to an imaginary quadratic field. The analogy comes from a quadratic field having real embeddings when the minimal polynomial for a generator splits over the reals and having non-real embeddings otherwise. One illustration of the strength of this analogy concerns unit groups in an order of a rational quaternion algebra: it is infinite if the quaternion algebra splits at ∞ and it is finite otherwise, just as the unit group of an order in a quadratic ring is infinite in the real quadratic case and finite otherwise.
The number of places where a quaternion algebra over the rationals ramifies is always even, and this is equivalent to the quadratic reciprocity law over the rationals.
Moreover, the places where B ramifies determine B up to isomorphism as an algebra. (In other words, non-isomorphic quaternion algebras over the rationals do not share the same set of ramified places.) The product of the primes at which B ramifies is called the discriminant of B.
== See also ==
Composition algebra
Cyclic algebra
Octonion algebra
Hurwitz quaternion order
Hurwitz quaternion
== Notes ==
== References ==
Gille, Philippe; Szamuely, Tamás (2006). Central simple algebras and Galois cohomology. Cambridge Studies in Advanced Mathematics. Vol. 101. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511607219. ISBN 0-521-86103-9. Zbl 1137.12001.
Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
== Further reading ==
Voight, John (2021). Quaternion Algebras. Graduate Texts in Mathematics. Vol. 288. Springer Nature. doi:10.1007/978-3-030-56694-4. ISBN 978-3-030-56692-0.
Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by J. Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. MR 1632779. Zbl 0955.16001.
Maclachlan, Colin; Reid, Alan W. (2003). The Arithmetic of Hyperbolic 3-Manifolds. New York: Springer-Verlag. doi:10.1007/978-1-4757-6720-9. ISBN 0-387-98386-4. MR 1937957. See chapter 2 (Quaternion Algebras I) and chapter 7 (Quaternion Algebras II).
Vignéras, Marie-France (1980). Arithmetique Des Algebres De Quaternions. Lecture Notes in Mathematics (in French). Springer-Verlag. ISBN 978-0387099835.
Chisholm, Hugh, ed. (1911). "Algebra" . Encyclopædia Britannica (11th ed.). Cambridge University Press. (See section on quaternions.)
Quaternion algebra at Encyclopedia of Mathematics.
In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 3², which is the number 9.
In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 (caret) or x**2 may be used in place of x².
The adjective which corresponds to squaring is quadratic.
The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials, other expressions, or values in systems of mathematical values other than the numbers. For instance, the square of the linear polynomial x + 1 is the quadratic polynomial (x + 1)² = x² + 2x + 1.
One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x), the square of x is the same as the square of its additive inverse −x. That is, the square function satisfies the identity x² = (−x)². This can also be expressed by saying that the square function is an even function.
== In real numbers ==
The squaring operation defines a real function called the square function or the squaring function. Its domain is the whole real line, and its image is the set of nonnegative real numbers.
The square function preserves the order of positive numbers: larger numbers have larger squares. In other words, the square is a monotonic function on the interval [0, +∞). On the negative numbers, numbers with greater absolute value have greater squares, so the square is a monotonically decreasing function on (−∞,0]. Hence, zero is the (global) minimum of the square function.
The square x² of a number x is less than x (that is x² < x) if and only if 0 < x < 1, that is, if x belongs to the open interval (0,1). This implies that the square of an integer is never less than the original number x.
Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function, which associates with a non-negative real number the non-negative number whose square is the original number.
No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. The lack of real square roots for the negative numbers can be used to expand the real number system to the complex numbers, by postulating the imaginary unit i, which is one of the square roots of −1.
The property "every non-negative real number is a square" has been generalized to the notion of a real closed field, which is an ordered field such that every non-negative element is a square and every polynomial of odd degree has a root. The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers, which may be expressed in first-order logic (that is expressed by a formula in which the variables that are quantified by ∀ or ∃ represent elements, not sets), is true for every real closed field, and conversely every property of the first-order logic, which is true for a specific real closed field is also true for the real numbers.
== In geometry ==
There are several major uses of the square function in geometry.
The name of the square function shows its importance in the definition of the area: it comes from the fact that the area of a square with sides of length l is equal to l2. The area depends quadratically on the size: the area of a shape n times larger is n2 times greater. This holds for areas in three dimensions as well as in the plane: for instance, the surface area of a sphere is proportional to the square of its radius, a fact that is manifested physically by the inverse-square law describing how the strength of physical forces such as gravity varies according to distance.
The square function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance (denoted d² or r²), which has a paraboloid as its graph, is a smooth and analytic function.
The dot product of a Euclidean vector with itself is equal to the square of its length: v⋅v = v². This is further generalised to quadratic forms in linear spaces via the inner product. The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size (length).
There are infinitely many Pythagorean triples, sets of three positive integers such that the sum of the squares of the first two equals the square of the third. Each of these triples gives the integer sides of a right triangle.
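As an illustrative aside, Euclid's classical formula (not mentioned in the text above) generates such triples and can be checked mechanically:

```python
# Euclid's formula: for integers m > n > 0, (m² − n², 2mn, m² + n²) is a
# Pythagorean triple, giving the integer sides of a right triangle.
for m in range(2, 8):
    for n in range(1, m):
        a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
        assert a * a + b * b == c * c
```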
== In abstract algebra and number theory ==
The square function is defined in any field or ring. An element in the image of this function is called a square, and the inverse images of a square are called square roots.
The notion of squaring is particularly important in the finite fields Z/pZ formed by the numbers modulo an odd prime number p. A non-zero element of this field is called a quadratic residue if it is a square in Z/pZ, and otherwise, it is called a quadratic non-residue. Zero, while a square, is not considered to be a quadratic residue. Every finite field of this type has exactly (p − 1)/2 quadratic residues and exactly (p − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory.
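An illustrative count for small odd primes:

```python
# In Z/pZ (p an odd prime) there are exactly (p−1)/2 nonzero quadratic
# residues, and the residues are closed under multiplication (a group).
for p in [3, 5, 7, 11, 13]:
    qr = {x * x % p for x in range(1, p)}          # nonzero squares mod p
    non = set(range(1, p)) - qr                    # quadratic non-residues
    assert len(qr) == len(non) == (p - 1) // 2
    assert all(a * b % p in qr for a in qr for b in qr)
```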
More generally, in rings, the square function may have different properties that are sometimes used to classify rings.
Zero may be the square of some non-zero elements. A commutative ring such that the square of a non-zero element is never zero is called a reduced ring. More generally, in a commutative ring, a radical ideal is an ideal I such that x ∈ I whenever x² ∈ I. Both notions are important in algebraic geometry, because of Hilbert's Nullstellensatz.
An element of a ring that is equal to its own square is called an idempotent. In any ring, 0 and 1 are idempotents. There are no other idempotents in fields and more generally in integral domains. However,
the ring of the integers modulo n has 2ᵏ idempotents, where k is the number of distinct prime factors of n.
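An illustrative check of the count (using SymPy for the prime factorization):

```python
# Z/nZ has 2^k idempotents, where k is the number of distinct prime factors of n.
from sympy import primefactors

for n in [6, 12, 30, 100, 210]:
    idempotents = [x for x in range(n) if (x * x - x) % n == 0]
    assert len(idempotents) == 2 ** len(primefactors(n))
```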
A commutative ring in which every element is equal to its square (every element is idempotent) is called a Boolean ring; an example from computer science is the ring whose elements are binary numbers, with bitwise AND as the multiplication operation and bitwise XOR as the addition operation.
In a totally ordered ring, x² ≥ 0 for any x. Moreover, x² = 0 if and only if x = 0.
In a supercommutative algebra where 2 is invertible, the square of any odd element equals zero.
If A is a commutative semigroup, then one has
{\displaystyle \forall x,y\in A\quad (xy)^{2}=xyxy=xxyy=x^{2}y^{2}.}
In the language of quadratic forms, this equality says that the square function is a "form permitting composition". In fact, the square function is the foundation upon which other quadratic forms are constructed which also permit composition. The procedure was introduced by L. E. Dickson to produce the octonions out of quaternions by doubling. The doubling method was formalized by A. A. Albert who started with the real number field
ℝ and the square function, doubling it to obtain the complex number field with quadratic form x² + y², and then doubling again to obtain quaternions. The doubling procedure is called the Cayley–Dickson construction, and has been generalized to form algebras of dimension 2ⁿ over a field F with involution.
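A sketch of a single doubling step; the pairing convention below is one of several equivalent ones, and the base case here is the reals (conjugation is the identity), so one step reproduces complex multiplication:

```python
# One Cayley–Dickson doubling step: an element of the doubled algebra is a
# pair (a, b); conjugation and multiplication are defined from the base ones.
def cd_mul(x, y, mul, conj):
    (a, b), (c, d) = x, y
    return (mul(a, c) - mul(conj(d), b), mul(d, a) + mul(b, conj(c)))

def cd_conj(x, conj):
    a, b = x
    return (conj(a), -b)

mul_r, conj_r = (lambda a, c: a * c), (lambda a: a)   # the reals as base algebra
z, w = (1.0, 2.0), (3.0, 4.0)                          # 1+2i and 3+4i as pairs
assert cd_mul(z, w, mul_r, conj_r) == (-5.0, 10.0)     # (1+2i)(3+4i) = −5+10i
# iterating the step with pair-valued arithmetic would give quaternions, octonions, ...
```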
The square function z² is the "norm" of the composition algebra ℂ, where the identity function forms a trivial involution to begin the Cayley–Dickson constructions leading to bicomplex, biquaternion, and bioctonion composition algebras.
== In complex numbers ==
On complex numbers, the square function z ↦ z² is a twofold cover in the sense that each non-zero complex number has exactly two square roots.
The square of the absolute value of a complex number is called its absolute square, squared modulus, or squared magnitude. It is the product of the complex number with its complex conjugate, and equals the sum of the squares of the real and imaginary parts of the complex number.
The absolute square of a complex number is always a nonnegative real number, that is zero if and only if the complex number is zero. It is easier to compute than the absolute value (no square root), and is a smooth real-valued function. Because of these two properties, the absolute square is often preferred to the absolute value for explicit computations and when methods of mathematical analysis are involved (for example optimization or integration).
For complex vectors, the dot product can be defined involving the conjugate transpose, leading to the squared norm.
== Other uses ==
Squares are ubiquitous in algebra, more generally, in almost every branch of mathematics, and also in physics where many units are defined using squares and inverse squares: see below.
Least squares is the standard method used with overdetermined systems.
Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or a random variable. The deviation of each value xi from the mean x̄ of the set is defined as the difference xi − x̄. These deviations are squared, then a mean is taken of the new set of numbers (each of which is positive). This mean is the variance, and its square root is the standard deviation.
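A small numeric illustration with an arbitrary sample data set:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # sample values
dev = x - x.mean()              # deviations from the mean
variance = (dev ** 2).mean()    # mean of the squared deviations
std = np.sqrt(variance)         # standard deviation
assert np.isclose(std, 2.0) and np.isclose(std, x.std())
```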
== See also ==
Cube (algebra)
Euclidean distance
Exponentiation by squaring
Hilbert's seventeenth problem, for the representation of positive polynomials as a sum of squares of rational functions
Metric tensor
Polynomial ring
Polynomial SOS, the representation of a non-negative polynomial as the sum of squares of polynomials
Quadratic equation
Square-free polynomial
Sums of squares (disambiguation page with various relevant links)
=== Related identities ===
Algebraic (need a commutative ring)
Brahmagupta–Fibonacci identity, related to complex numbers in the sense discussed above
Degen's eight-square identity, related to octonions in the same way
Difference of two squares
Euler's four-square identity, related to quaternions in the same way
Lagrange's identity
Other
Parseval's identity
Pythagorean trigonometric identity
=== Related physical quantities ===
acceleration, length per square time
coupling constant (has square charge in the denominator, and may be expressed with square distance in the numerator)
cross section (physics), an area-dimensioned quantity
kinetic energy (quadratic dependence on velocity)
specific energy, a (square velocity)-dimensioned quantity
== Footnotes ==
== Further reading ==
Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ISBN 978-0-8218-4402-1, ISBN 0-8218-4402-4
Rajwade, A. R. (1993). Squares. London Mathematical Society Lecture Note Series. Vol. 171. Cambridge University Press. ISBN 0-521-42668-5. Zbl 0785.11022.
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices that are traceless, Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma (σ), they are occasionally denoted by tau (τ) when used in connection with isospin symmetries.
{\displaystyle {\begin{aligned}\sigma _{1}=\sigma _{x}&={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\\\sigma _{2}=\sigma _{y}&={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\\\sigma _{3}=\sigma _{z}&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.\\\end{aligned}}}
These matrices are named after the physicist Wolfgang Pauli. In quantum mechanics, they occur in the Pauli equation, which takes into account the interaction of the spin of a particle with an external electromagnetic field. They also represent the interaction states of two polarization filters for horizontal/vertical polarization, 45 degree polarization (right/left), and circular polarization (right/left).
Each Pauli matrix is Hermitian, and together with the identity matrix I (sometimes considered as the zeroth Pauli matrix σ0 ), the Pauli matrices form a basis of the vector space of 2 × 2 Hermitian matrices over the real numbers, under addition. This means that any 2 × 2 Hermitian matrix can be written in a unique way as a linear combination of Pauli matrices, with all coefficients being real numbers.
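These properties can be checked directly; the following NumPy sketch (variable names ours, sample matrix arbitrary) also expands a Hermitian matrix with real coefficients ck = tr(σk H)/2:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (s1, s2, s3):
    assert np.allclose(s, s.conj().T)          # Hermitian
    assert np.allclose(s @ s.conj().T, I2)     # unitary
    assert np.isclose(np.trace(s), 0)          # traceless
    assert np.allclose(s @ s, I2)              # involutory

# any Hermitian H is a real linear combination of {I, σ1, σ2, σ3}
H = np.array([[1.0, 2 - 1j], [2 + 1j, -3.0]])
coeffs = [np.trace(s @ H) / 2 for s in (I2, s1, s2, s3)]
assert np.allclose(np.imag(coeffs), 0)                         # real coefficients
assert np.allclose(sum(c * s for c, s in zip(coeffs, (I2, s1, s2, s3))), H)
```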
The Pauli matrices satisfy the useful product relation:
{\displaystyle {\begin{aligned}\sigma _{i}\sigma _{j}=\delta _{ij}+i\epsilon _{ijk}\sigma _{k}.\end{aligned}}}
Hermitian operators represent observables in quantum mechanics, so the Pauli matrices span the space of observables of the complex two-dimensional Hilbert space. In the context of Pauli's work, σk represents the observable corresponding to spin along the kth coordinate axis in three-dimensional Euclidean space ℝ³.
The Pauli matrices (after multiplication by i to make them anti-Hermitian) also generate transformations in the sense of Lie algebras: the matrices iσ1, iσ2, iσ3 form a basis for the real Lie algebra 𝔰𝔲(2), which exponentiates to the special unitary group SU(2). The algebra generated by the three matrices σ1, σ2, σ3 is isomorphic to the Clifford algebra of ℝ³, and the (unital) associative algebra generated by iσ1, iσ2, iσ3 functions identically (is isomorphic) to that of quaternions (ℍ).
== Algebraic properties ==
All three of the Pauli matrices can be compacted into a single expression:
{\displaystyle \sigma _{j}={\begin{pmatrix}\delta _{j3}&\delta _{j1}-i\,\delta _{j2}\\\delta _{j1}+i\,\delta _{j2}&-\delta _{j3}\end{pmatrix}},}
where δjk is the Kronecker delta, which equals +1 if j = k and 0 otherwise. This expression is useful for "selecting" any one of the matrices numerically by substituting values of j = 1, 2, 3, in turn useful when any of the matrices (but no particular one) is to be used in algebraic manipulations.
The matrices are involutory:
{\displaystyle \sigma _{1}^{2}=\sigma _{2}^{2}=\sigma _{3}^{2}=-i\,\sigma _{1}\sigma _{2}\sigma _{3}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}=I,}
where I is the identity matrix.
The determinants and traces of the Pauli matrices are
{\displaystyle {\begin{aligned}\det \sigma _{j}&=-1,\\\operatorname {tr} \sigma _{j}&=0,\end{aligned}}}
from which we can deduce that each matrix σj has eigenvalues +1 and −1.
With the inclusion of the identity matrix I (sometimes denoted σ0), the Pauli matrices form an orthogonal basis (in the sense of Hilbert–Schmidt) of the Hilbert space ℋ2 of 2 × 2 Hermitian matrices over ℝ, and of the Hilbert space ℳ2,2(ℂ) of all complex 2 × 2 matrices over ℂ.
=== Commutation and anti-commutation relations ===
==== Commutation relations ====
The Pauli matrices obey the following commutation relations:
{\displaystyle [\sigma _{j},\sigma _{k}]=2i\varepsilon _{jkl}\,\sigma _{l},}
where the Levi-Civita symbol εjkl is used.
These commutation relations make the Pauli matrices the generators of a representation of the Lie algebra
{\displaystyle (\mathbb {R} ^{3},\times )\cong {\mathfrak {su}}(2)\cong {\mathfrak {so}}(3).}
==== Anticommutation relations ====
They also satisfy the anticommutation relations:
{\displaystyle \{\sigma _{j},\sigma _{k}\}=2\delta _{jk}\,I,}
where {σj, σk} is defined as σjσk + σkσj, δjk is the Kronecker delta, and I denotes the 2 × 2 identity matrix.
These anti-commutation relations make the Pauli matrices the generators of a representation of the Clifford algebra for ℝ³, denoted Cl3(ℝ). The usual construction of generators {\displaystyle \sigma _{jk}={\tfrac {1}{4}}[\sigma _{j},\sigma _{k}]} of 𝔰𝔬(3) using the Clifford algebra recovers the commutation relations above, up to unimportant numerical factors.
A few explicit commutators and anti-commutators are given below as examples:
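For instance, in NumPy (an illustrative verification; the σ definitions repeat those given earlier):

```python
import numpy as np
I2 = np.eye(2)
s1, s2, s3 = (np.array(m, dtype=complex) for m in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]))

comm = lambda A, B: A @ B - B @ A       # commutator [A, B]
acomm = lambda A, B: A @ B + B @ A      # anticommutator {A, B}

assert np.allclose(comm(s1, s2), 2j * s3)       # [σ1, σ2] = 2i σ3
assert np.allclose(comm(s2, s3), 2j * s1)       # [σ2, σ3] = 2i σ1
assert np.allclose(comm(s1, s1), 0 * I2)        # [σ1, σ1] = 0
assert np.allclose(acomm(s1, s1), 2 * I2)       # {σ1, σ1} = 2I
assert np.allclose(acomm(s1, s2), 0 * I2)       # {σ1, σ2} = 0
```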
=== Eigenvectors and eigenvalues ===
Each of the (Hermitian) Pauli matrices has two eigenvalues: +1 and −1. The corresponding normalized eigenvectors are
{\displaystyle {\begin{aligned}\psi _{x+}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}},&\psi _{x-}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-1\end{bmatrix}},\\\psi _{y+}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\i\end{bmatrix}},&\psi _{y-}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-i\end{bmatrix}},\\\psi _{z+}&={\begin{bmatrix}1\\0\end{bmatrix}},&\psi _{z-}&={\begin{bmatrix}0\\1\end{bmatrix}}.\end{aligned}}}
== Pauli vectors ==
The Pauli vector is defined by
{\displaystyle {\vec {\sigma }}=\sigma _{1}{\hat {x}}_{1}+\sigma _{2}{\hat {x}}_{2}+\sigma _{3}{\hat {x}}_{3},}
where x̂1, x̂2, and x̂3 are an equivalent notation for the more familiar x̂, ŷ, and ẑ.
The Pauli vector provides a mapping mechanism from a vector basis to a Pauli matrix basis as follows:
{\displaystyle {\begin{aligned}{\vec {a}}\cdot {\vec {\sigma }}&=\sum _{k,l}a_{k}\,\sigma _{\ell }\,{\hat {x}}_{k}\cdot {\hat {x}}_{\ell }\\&=\sum _{k}a_{k}\,\sigma _{k}\\&={\begin{pmatrix}a_{3}&a_{1}-ia_{2}\\a_{1}+ia_{2}&-a_{3}\end{pmatrix}}.\end{aligned}}}
More formally, this defines a map from ℝ³ to the vector space of traceless Hermitian 2 × 2 matrices. This map encodes structures of ℝ³ as a normed vector space and as a Lie algebra (with the cross-product as its Lie bracket) via functions of matrices, making the map an isomorphism of Lie algebras. This makes the Pauli matrices intertwiners from the point of view of representation theory.
Another way to view the Pauli vector is as a 2 × 2 Hermitian traceless matrix-valued dual vector, that is, an element of Mat2×2(ℂ) ⊗ (ℝ³)∗ that maps a⃗ ↦ a⃗ · σ⃗.
=== Completeness relation ===
Each component of a⃗ can be recovered from the matrix (see completeness relation below):
{\displaystyle {\frac {1}{2}}\operatorname {tr} {\Bigl (}{\bigl (}{\vec {a}}\cdot {\vec {\sigma }}{\bigr )}{\vec {\sigma }}{\Bigr )}={\vec {a}}.}
This constitutes an inverse to the map a⃗ ↦ a⃗ · σ⃗, making it manifest that the map is a bijection.
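An illustrative NumPy check of this inversion (the sample vector is arbitrary):

```python
import numpy as np
s1, s2, s3 = (np.array(m, dtype=complex) for m in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]))
sigma = np.stack([s1, s2, s3])

a = np.array([0.3, -1.2, 0.5])
M = np.einsum('k,kij->ij', a, sigma)                 # the matrix a · σ
recovered = 0.5 * np.einsum('kij,ji->k', sigma, M)   # ½ tr(σ_k M) for each k
assert np.allclose(recovered.real, a) and np.allclose(recovered.imag, 0)
```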
=== Determinant ===
The norm is given by the determinant (up to a minus sign):
{\displaystyle \det {\bigl (}{\vec {a}}\cdot {\vec {\sigma }}{\bigr )}=-{\vec {a}}\cdot {\vec {a}}=-|{\vec {a}}|^{2}.}
Then, considering the conjugation action of an SU(2) matrix U on this space of matrices,
{\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}:=U\,{\vec {a}}\cdot {\vec {\sigma }}\,U^{-1},}
we find
{\displaystyle \det(U*{\vec {a}}\cdot {\vec {\sigma }})=\det({\vec {a}}\cdot {\vec {\sigma }}),}
and that U ∗ a⃗ · σ⃗ is Hermitian and traceless. It then makes sense to define
{\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}={\vec {a}}'\cdot {\vec {\sigma }},}
where a⃗′ has the same norm as a⃗, and therefore to interpret U as a rotation of three-dimensional space. In fact, it turns out that the special restriction on U implies that the rotation is orientation preserving. This allows the definition of a map R : SU(2) → SO(3) given by
{\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}={\vec {a}}'\cdot {\vec {\sigma }}=:(R(U)\ {\vec {a}})\cdot {\vec {\sigma }},}
where R(U) ∈ SO(3). This map is the concrete realization of the double cover of SO(3) by SU(2), and therefore shows that SU(2) ≅ Spin(3). The components of R(U) can be recovered using the tracing process above:
{\displaystyle R(U)_{ij}={\frac {1}{2}}\operatorname {tr} \left(\sigma _{i}U\sigma _{j}U^{-1}\right).}
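As a sketch, taking U = exp(−iθ/2 σ3), a rotation about the z-axis (sign conventions vary in the literature), the trace formula recovers the expected SO(3) matrix:

```python
import numpy as np
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

theta = 0.7
U = np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])   # exp(−iθ/2 σ3)

R = np.array([[0.5 * np.trace(s[i] @ U @ s[j] @ U.conj().T)
               for j in range(3)] for i in range(3)]).real

assert np.allclose(R.T @ R, np.eye(3))            # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)          # orientation-preserving
c, sn = np.cos(theta), np.sin(theta)
assert np.allclose(R, [[c, -sn, 0], [sn, c, 0], [0, 0, 1]])  # rotation by θ about z
```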
=== Cross-product ===
The cross-product is given by the matrix commutator (up to a factor of 2i):
{\displaystyle [{\vec {a}}\cdot {\vec {\sigma }},{\vec {b}}\cdot {\vec {\sigma }}]=2i\,({\vec {a}}\times {\vec {b}})\cdot {\vec {\sigma }}.}
In fact, the existence of a norm follows from the fact that ℝ³ is a Lie algebra (see Killing form).
This cross-product can be used to prove the orientation-preserving property of the map above.
=== Eigenvalues and eigenvectors ===
The eigenvalues of a⃗ · σ⃗ are ±|a⃗|. This follows immediately from tracelessness and explicitly computing the determinant.
More abstractly, without computing the determinant, which requires explicit properties of the Pauli matrices, this follows from
{\displaystyle \ ({\vec {a}}\cdot {\vec {\sigma }})^{2}-|{\vec {a}}|^{2}=0\ ,}
since this can be factorised into
{\displaystyle \ ({\vec {a}}\cdot {\vec {\sigma }}-|{\vec {a}}|)({\vec {a}}\cdot {\vec {\sigma }}+|{\vec {a}}|)=0.}
A standard result in linear algebra (a linear map that satisfies a polynomial equation written in distinct linear factors is diagonalizable) then implies that a⃗ · σ⃗ is diagonalizable with possible eigenvalues ±|a⃗|. The tracelessness of a⃗ · σ⃗ means it has exactly one of each eigenvalue.
Its normalized eigenvectors are
{\displaystyle \psi _{+}={\frac {1}{{\sqrt {2\left|{\vec {a}}\right|\ (a_{3}+\left|{\vec {a}}\right|)\ }}\ }}{\begin{bmatrix}a_{3}+\left|{\vec {a}}\right|\\a_{1}+ia_{2}\end{bmatrix}};\qquad \psi _{-}={\frac {1}{\sqrt {2|{\vec {a}}|(a_{3}+|{\vec {a}}|)}}}{\begin{bmatrix}ia_{2}-a_{1}\\a_{3}+|{\vec {a}}|\end{bmatrix}}~.}
These expressions become singular for a3 → −|a⃗|. They can be rescued by letting a⃗ = |a⃗| (ε, 0, −(1 − ε²/2)) and taking the limit ε → 0, which yields the correct eigenvectors (0, 1) and (1, 0) of σz.
Alternatively, one may use spherical coordinates a⃗ = a (sin ϑ cos φ, sin ϑ sin φ, cos ϑ) to obtain the eigenvectors ψ+ = (cos(ϑ/2), sin(ϑ/2) exp(iφ)) and ψ− = (−sin(ϑ/2) exp(−iφ), cos(ϑ/2)).
=== Pauli 4-vector ===
The Pauli 4-vector, used in spinor theory, is written σμ with components
{\displaystyle \sigma ^{\mu }=(I,{\vec {\sigma }}).}
This defines a map from ℝ1,3 to the vector space of Hermitian matrices,
{\displaystyle x_{\mu }\mapsto x_{\mu }\sigma ^{\mu }\ ,}
which also encodes the Minkowski metric (with mostly minus convention) in its determinant:
{\displaystyle \det(x_{\mu }\sigma ^{\mu })=\eta (x,x).}
This 4-vector also has a completeness relation. It is convenient to define a second Pauli 4-vector
{\displaystyle {\bar {\sigma }}^{\mu }=(I,-{\vec {\sigma }}).}
and to allow raising and lowering using the Minkowski metric tensor. The relation can then be written
{\displaystyle x_{\nu }={\tfrac {1}{2}}\operatorname {tr} {\Bigl (}{\bar {\sigma }}_{\nu }{\bigl (}x_{\mu }\sigma ^{\mu }{\bigr )}{\Bigr )}.}
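A numeric illustration of the determinant identity (the sample four-vector is arbitrary):

```python
import numpy as np
I2 = np.eye(2)
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
sigma4 = [I2] + s                           # σ^μ = (I, σ⃗)

x = np.array([1.5, 0.2, -0.7, 0.4])         # (t, x, y, z)
X = sum(xm * sm for xm, sm in zip(x, sigma4))
minkowski = x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2    # η(x, x), mostly minus
assert np.isclose(np.linalg.det(X).real, minkowski)
```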
Similarly to the Pauli 3-vector case, we can find a matrix group that acts as isometries on ℝ1,3; in this case the matrix group is SL(2, ℂ), and this shows SL(2, ℂ) ≅ Spin(1, 3). Similarly to above, this can be explicitly realized for S ∈ SL(2, ℂ) with components
{\displaystyle \Lambda (S)^{\mu }{}_{\nu }={\tfrac {1}{2}}\operatorname {tr} \left({\bar {\sigma }}_{\nu }S\sigma ^{\mu }S^{\dagger }\right).}
In fact, the determinant property follows abstractly from trace properties of the σμ. For 2 × 2 matrices, the following identity holds:
{\displaystyle \det(A+B)=\det(A)+\det(B)+\operatorname {tr} (A)\operatorname {tr} (B)-\operatorname {tr} (AB).}
That is, the "cross-terms" can be written as traces. When A, B are chosen to be different σμ, the cross-terms vanish. It then follows, now showing summation explicitly,
{\textstyle \det \left(\sum _{\mu }x_{\mu }\sigma ^{\mu }\right)=\sum _{\mu }\det \left(x_{\mu }\sigma ^{\mu }\right).}
Since the matrices are 2 × 2, this is equal to
{\textstyle \sum _{\mu }x_{\mu }^{2}\det(\sigma ^{\mu })=\eta (x,x).}
=== Relation to dot and cross product ===
Pauli vectors elegantly map these commutation and anticommutation relations to corresponding vector products. Adding the commutator to the anticommutator gives
{\displaystyle {\begin{aligned}\left[\sigma _{j},\sigma _{k}\right]+\{\sigma _{j},\sigma _{k}\}&=(\sigma _{j}\sigma _{k}-\sigma _{k}\sigma _{j})+(\sigma _{j}\sigma _{k}+\sigma _{k}\sigma _{j})\\2i\varepsilon _{jk\ell }\,\sigma _{\ell }+2\delta _{jk}I&=2\sigma _{j}\sigma _{k}\end{aligned}}}
so that
{\displaystyle \sigma _{j}\sigma _{k}=\delta _{jk}\,I+i\varepsilon _{jk\ell }\,\sigma _{\ell }.}
Contracting each side of the equation with components of two 3-vectors ap and bq (which commute with the Pauli matrices, i.e., apσq = σqap) for each matrix σq and vector component ap (and likewise with bq) yields
{\displaystyle ~~{\begin{aligned}a_{j}b_{k}\sigma _{j}\sigma _{k}&=a_{j}b_{k}\left(i\varepsilon _{jk\ell }\,\sigma _{\ell }+\delta _{jk}I\right)\\a_{j}\sigma _{j}b_{k}\sigma _{k}&=i\varepsilon _{jk\ell }\,a_{j}b_{k}\sigma _{\ell }+a_{j}b_{k}\delta _{jk}I\end{aligned}}.~}
Finally, translating the index notation for the dot product and cross product results in
{\displaystyle ({\vec {a}}\cdot {\vec {\sigma }})({\vec {b}}\cdot {\vec {\sigma }})=({\vec {a}}\cdot {\vec {b}})\,I+i\left({\vec {a}}\times {\vec {b}}\right)\cdot {\vec {\sigma }}.}
If i is identified with the pseudoscalar σxσyσz then the right hand side becomes a · b + a ∧ b, which is also the definition for the product of two vectors in geometric algebra.
If we define the spin operator as J = (ħ/2)σ, then J satisfies the commutation relation
{\displaystyle \mathbf {J} \times \mathbf {J} =i\hbar \mathbf {J} }
or equivalently, the Pauli vector satisfies
{\displaystyle {\frac {\vec {\sigma }}{2}}\times {\frac {\vec {\sigma }}{2}}=i{\frac {\vec {\sigma }}{2}}}
=== Some trace relations ===
The following traces can be derived using the commutation and anticommutation relations.
{\displaystyle {\begin{aligned}\operatorname {tr} \left(\sigma _{j}\right)&=0\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\right)&=2\delta _{jk}\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\sigma _{\ell }\right)&=2i\varepsilon _{jk\ell }\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\sigma _{\ell }\sigma _{m}\right)&=2\left(\delta _{jk}\delta _{\ell m}-\delta _{j\ell }\delta _{km}+\delta _{jm}\delta _{k\ell }\right)\end{aligned}}}
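These can be verified directly; the Levi-Civita helper in the following sketch is our own:

```python
import numpy as np
from itertools import product
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def eps(j, k, l):                       # Levi-Civita symbol on indices {0, 1, 2}
    return (j - k) * (k - l) * (l - j) // 2

for j in range(3):
    assert np.isclose(np.trace(s[j]), 0)                          # tr σj = 0
for j, k in product(range(3), repeat=2):
    assert np.isclose(np.trace(s[j] @ s[k]), 2 * (j == k))        # = 2 δjk
for j, k, l in product(range(3), repeat=3):
    assert np.isclose(np.trace(s[j] @ s[k] @ s[l]),
                      2 * 1j * eps(j, k, l))                      # = 2i εjkl
```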
If the matrix σ0 = I is also considered, these relationships become
{\displaystyle {\begin{aligned}\operatorname {tr} \left(\sigma _{\alpha }\right)&=2\delta _{0\alpha }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\right)&=2\delta _{\alpha \beta }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\sigma _{\gamma }\right)&=2\sum _{(\alpha \beta \gamma )}\delta _{\alpha \beta }\delta _{0\gamma }-4\delta _{0\alpha }\delta _{0\beta }\delta _{0\gamma }+2i\varepsilon _{0\alpha \beta \gamma }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\sigma _{\gamma }\sigma _{\mu }\right)&=2\left(\delta _{\alpha \beta }\delta _{\gamma \mu }-\delta _{\alpha \gamma }\delta _{\beta \mu }+\delta _{\alpha \mu }\delta _{\beta \gamma }\right)+4\left(\delta _{\alpha \gamma }\delta _{0\beta }\delta _{0\mu }+\delta _{\beta \mu }\delta _{0\alpha }\delta _{0\gamma }\right)-8\delta _{0\alpha }\delta _{0\beta }\delta _{0\gamma }\delta _{0\mu }+2i\sum _{(\alpha \beta \gamma \mu )}\varepsilon _{0\alpha \beta \gamma }\delta _{0\mu }\end{aligned}}}
where Greek indices α, β, γ and μ assume values from {0, x, y, z} and the notation {\textstyle \sum _{(\alpha \ldots )}} is used to denote the sum over the cyclic permutation of the included indices.
=== Exponential of a Pauli vector ===
For a⃗ = a n̂, with |n̂| = 1, one has, for even powers 2p, p = 0, 1, 2, 3, ...
{\displaystyle ({\hat {n}}\cdot {\vec {\sigma }})^{2p}=I,}
which can be shown first for the p = 1 case using the anticommutation relations. For convenience, the case p = 0 is taken to be I by convention.
For odd powers, 2q + 1, q = 0, 1, 2, 3, ...
{\displaystyle \left({\hat {n}}\cdot {\vec {\sigma }}\right)^{2q+1}={\hat {n}}\cdot {\vec {\sigma }}\,.}
Matrix exponentiating, and using the Taylor series for sine and cosine,
{\displaystyle {\begin{aligned}e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}&=\sum _{k=0}^{\infty }{\frac {i^{k}\left[a\left({\hat {n}}\cdot {\vec {\sigma }}\right)\right]^{k}}{k!}}\\&=\sum _{p=0}^{\infty }{\frac {(-1)^{p}(a{\hat {n}}\cdot {\vec {\sigma }})^{2p}}{(2p)!}}+i\sum _{q=0}^{\infty }{\frac {(-1)^{q}(a{\hat {n}}\cdot {\vec {\sigma }})^{2q+1}}{(2q+1)!}}\\&=I\sum _{p=0}^{\infty }{\frac {(-1)^{p}a^{2p}}{(2p)!}}+i({\hat {n}}\cdot {\vec {\sigma }})\sum _{q=0}^{\infty }{\frac {(-1)^{q}a^{2q+1}}{(2q+1)!}}\\\end{aligned}}}
In the last line, the first sum is the cosine, while the second sum is the sine; so, finally,
{\displaystyle e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}=I\cos a+i\left({\hat {n}}\cdot {\vec {\sigma }}\right)\sin a}  (2)
which is analogous to Euler's formula, extended to quaternions. In particular,
{\displaystyle e^{ia\sigma _{1}}={\begin{pmatrix}\cos a&i\sin a\\i\sin a&\cos a\end{pmatrix}},\quad e^{ia\sigma _{2}}={\begin{pmatrix}\cos a&\sin a\\-\sin a&\cos a\end{pmatrix}},\quad e^{ia\sigma _{3}}={\begin{pmatrix}e^{ia}&0\\0&e^{-ia}\end{pmatrix}}.}
Note that
{\displaystyle \det[ia({\hat {n}}\cdot {\vec {\sigma }})]=a^{2},}
while the determinant of the exponential itself is just 1, which makes it the generic group element of SU(2).
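An illustrative check of the closed form (2) against a generic matrix exponential, using SciPy's expm; the axis and angle are arbitrary samples:

```python
import numpy as np
from scipy.linalg import expm
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

a = 0.9
n = np.array([1.0, 2.0, 2.0]) / 3.0                  # a unit vector
nsig = sum(nk * sk for nk, sk in zip(n, s))          # n̂ · σ⃗

closed_form = np.cos(a) * np.eye(2) + 1j * np.sin(a) * nsig
assert np.allclose(expm(1j * a * nsig), closed_form)
assert np.isclose(np.linalg.det(closed_form), 1.0)   # lies in SU(2)
```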
A more abstract version of formula (2) for a general 2 × 2 matrix can be found in the article on matrix exponentials. A general version of (2) for an analytic (at a and −a) function is provided by application of Sylvester's formula,
{\displaystyle f(a({\hat {n}}\cdot {\vec {\sigma }}))=I{\frac {f(a)+f(-a)}{2}}+{\hat {n}}\cdot {\vec {\sigma }}{\frac {f(a)-f(-a)}{2}}.}
==== The group composition law of SU(2) ====
A straightforward application of formula (2) provides a parameterization of the composition law of the group SU(2). One may directly solve for c in
{\displaystyle {\begin{aligned}e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}e^{ib\left({\hat {m}}\cdot {\vec {\sigma }}\right)}&=I\left(\cos a\cos b-{\hat {n}}\cdot {\hat {m}}\sin a\sin b\right)+i\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}~\sin a\sin b\right)\cdot {\vec {\sigma }}\\&=I\cos {c}+i\left({\hat {k}}\cdot {\vec {\sigma }}\right)\sin c\\&=e^{ic\left({\hat {k}}\cdot {\vec {\sigma }}\right)},\end{aligned}}}
which specifies the generic group multiplication, where, manifestly,
{\displaystyle \cos c=\cos a\cos b-{\hat {n}}\cdot {\hat {m}}\sin a\sin b~,}
the spherical law of cosines. Given c, then,
{\displaystyle {\hat {k}}={\frac {1}{\sin c}}\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}\sin a\sin b\right).}
Consequently, the composite rotation parameters in this group element (a closed form of the respective BCH expansion in this case) simply amount to
{\displaystyle e^{ic{\hat {k}}\cdot {\vec {\sigma }}}=\exp \left(i{\frac {c}{\sin c}}\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}~\sin a\sin b\right)\cdot {\vec {\sigma }}\right).}
(Of course, when n̂ is parallel to m̂, so is k̂, and c = a + b.)
==== Adjoint action ====
It is also straightforward to likewise work out the adjoint action on the Pauli vector, namely rotation of any angle a along any axis n̂:
{\displaystyle R_{n}(-a)~{\vec {\sigma }}~R_{n}(a)=e^{i{\frac {a}{2}}\left({\hat {n}}\cdot {\vec {\sigma }}\right)}~{\vec {\sigma }}~e^{-i{\frac {a}{2}}\left({\hat {n}}\cdot {\vec {\sigma }}\right)}={\vec {\sigma }}\cos(a)+{\hat {n}}\times {\vec {\sigma }}~\sin(a)+{\hat {n}}~{\hat {n}}\cdot {\vec {\sigma }}~(1-\cos(a))~.}
Taking the dot product of any unit vector with the above formula generates the expression of any single qubit operator under any rotation. For example, it can be shown that
{\textstyle R_{y}{\mathord {\left(-{\frac {\pi }{2}}\right)}}\,\sigma _{x}\,R_{y}{\mathord {\left({\frac {\pi }{2}}\right)}}={\hat {x}}\cdot \left({\hat {y}}\times {\vec {\sigma }}\right)=\sigma _{z}.}
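A numeric sketch of the adjoint-action formula for a sample axis and angle; the cross-product bookkeeping below is our own translation of the n̂ × σ⃗ term:

```python
import numpy as np
from scipy.linalg import expm
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

a = 1.1
n = np.array([0.0, 1.0, 0.0])                        # rotate about the y-axis
nsig = sum(nk * sk for nk, sk in zip(n, s))
U = expm(0.5j * a * nsig)                            # e^{i a/2 (n̂·σ⃗)}

for k in range(3):
    lhs = U @ s[k] @ U.conj().T
    cross = np.cross(np.eye(3)[k], n)                # e_k × n̂ gives (n̂ × σ⃗)_k
    rhs = (np.cos(a) * s[k]
           + np.sin(a) * sum(cross[m] * s[m] for m in range(3))
           + (1 - np.cos(a)) * n[k] * nsig)
    assert np.allclose(lhs, rhs)
```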
=== Completeness relation ===
An alternative notation that is commonly used for the Pauli matrices is to write the vector index k in the superscript, and the matrix indices as subscripts, so that the element in row α and column β of the k-th Pauli matrix is σkαβ.
In this notation, the completeness relation for the Pauli matrices can be written
{\displaystyle {\vec {\sigma }}_{\alpha \beta }\cdot {\vec {\sigma }}_{\gamma \delta }\equiv \sum _{k=1}^{3}\sigma _{\alpha \beta }^{k}\,\sigma _{\gamma \delta }^{k}=2\,\delta _{\alpha \delta }\,\delta _{\beta \gamma }-\delta _{\alpha \beta }\,\delta _{\gamma \delta }.}
As noted above, it is common to denote the 2 × 2 unit matrix by σ0, so σ0αβ = δαβ. The completeness relation can alternatively be expressed as
{\displaystyle \sum _{k=0}^{3}\sigma _{\alpha \beta }^{k}\,\sigma _{\gamma \delta }^{k}=2\,\delta _{\alpha \delta }\,\delta _{\beta \gamma }~.}
The fact that any Hermitian complex 2 × 2 matrix can be expressed in terms of the identity matrix and the Pauli matrices also leads to the Bloch sphere representation of the density matrix of 2 × 2 mixed states (positive semidefinite 2 × 2 matrices with unit trace). This can be seen by first expressing an arbitrary Hermitian matrix as a real linear combination of {σ0, σ1, σ2, σ3} as above, and then imposing the positive-semidefinite and trace 1 conditions.
For a pure state, in polar coordinates,
{\displaystyle {\vec {a}}={\begin{pmatrix}\sin \theta \cos \phi &\sin \theta \sin \phi &\cos \theta \end{pmatrix}},}
the idempotent density matrix
{\displaystyle {\tfrac {1}{2}}\left(\mathbf {1} +{\vec {a}}\cdot {\vec {\sigma }}\right)={\begin{pmatrix}\cos ^{2}\left({\frac {\,\theta \,}{2}}\right)&e^{-i\,\phi }\sin \left({\frac {\,\theta \,}{2}}\right)\cos \left({\frac {\,\theta \,}{2}}\right)\\e^{+i\,\phi }\sin \left({\frac {\,\theta \,}{2}}\right)\cos \left({\frac {\,\theta \,}{2}}\right)&\sin ^{2}\left({\frac {\,\theta \,}{2}}\right)\end{pmatrix}}}
acts on the state eigenvector
{\displaystyle {\begin{pmatrix}\cos \left({\frac {\,\theta \,}{2}}\right)&e^{+i\phi }\,\sin \left({\frac {\,\theta \,}{2}}\right)\end{pmatrix}}}
with eigenvalue +1, hence it acts like a projection operator.
=== Relation with the permutation operator ===
Let Pjk be the transposition (also known as a permutation) between two spins σj and σk living in the tensor product space ℂ² ⊗ ℂ²,
{\displaystyle P_{jk}\left|\sigma _{j}\sigma _{k}\right\rangle =\left|\sigma _{k}\sigma _{j}\right\rangle .}
This operator can also be written more explicitly as Dirac's spin exchange operator,
{\displaystyle P_{jk}={\frac {1}{2}}\,\left({\vec {\sigma }}_{j}\cdot {\vec {\sigma }}_{k}+1\right)~.}
Its eigenvalues are therefore 1 or −1. It may thus be utilized as an interaction term in a Hamiltonian, splitting the energy eigenvalues of its symmetric versus antisymmetric eigenstates.
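In the two-qubit matrix representation, this operator is exactly the SWAP matrix, as a short NumPy sketch shows:

```python
import numpy as np
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# P = ½ (σ⃗_j · σ⃗_k + 1) on C² ⊗ C², built with Kronecker products
P = 0.5 * (sum(np.kron(sk, sk) for sk in s) + np.eye(4))
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
assert np.allclose(P, SWAP)                               # exchanges the factors
assert np.allclose(np.linalg.eigvalsh(P), [-1, 1, 1, 1])  # eigenvalues ±1
```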
== SU(2) ==
The group SU(2) is the Lie group of unitary 2 × 2 matrices with unit determinant; its Lie algebra is the set of all 2 × 2 anti-Hermitian matrices with trace 0. Direct calculation, as above, shows that the Lie algebra 𝔰𝔲(2) is the three-dimensional real algebra spanned by the set {iσk}. In compact notation,
{\displaystyle {\mathfrak {su}}(2)=\operatorname {span} \{\;i\,\sigma _{1}\,,\;i\,\sigma _{2}\,,\;i\,\sigma _{3}\;\}.}
As a result, each iσj can be seen as an infinitesimal generator of SU(2). The elements of SU(2) are exponentials of linear combinations of these three generators, and multiply as indicated above in discussing the Pauli vector. Although this suffices to generate SU(2), it is not a proper representation of su(2), as the Pauli eigenvalues are scaled unconventionally. The conventional normalization is λ = 1/2, so that
{\displaystyle {\mathfrak {su}}(2)=\operatorname {span} \left\{{\frac {\,i\,\sigma _{1}\,}{2}},{\frac {\,i\,\sigma _{2}\,}{2}},{\frac {\,i\,\sigma _{3}\,}{2}}\right\}.}
As SU(2) is a compact group, its Cartan decomposition is trivial.
=== SO(3) ===
The Lie algebra 𝔰𝔲(2) is isomorphic to the Lie algebra 𝔰𝔬(3), which corresponds to the Lie group SO(3), the group of rotations in three-dimensional space. In other words, one can say that the iσj are a realization (and, in fact, the lowest-dimensional realization) of infinitesimal rotations in three-dimensional space. However, even though 𝔰𝔲(2) and 𝔰𝔬(3) are isomorphic as Lie algebras, SU(2) and SO(3) are not isomorphic as Lie groups. SU(2) is actually a double cover of SO(3), meaning that there is a two-to-one group homomorphism from SU(2) to SO(3); see relationship between SO(3) and SU(2).
=== Quaternions ===
The real linear span of {I, iσ1, iσ2, iσ3} is isomorphic to the real algebra of quaternions, ℍ, represented by the span of the basis vectors {1, i, j, k}. The isomorphism from ℍ to this set is given by the following map (notice the reversed signs for the Pauli matrices):
{\displaystyle \mathbf {1} \mapsto I,\quad \mathbf {i} \mapsto -\sigma _{2}\sigma _{3}=-i\,\sigma _{1},\quad \mathbf {j} \mapsto -\sigma _{3}\sigma _{1}=-i\,\sigma _{2},\quad \mathbf {k} \mapsto -\sigma _{1}\sigma _{2}=-i\,\sigma _{3}.}
Alternatively, the isomorphism can be achieved by a map using the Pauli matrices in reversed order,
{\displaystyle \mathbf {1} \mapsto I,\quad \mathbf {i} \mapsto i\,\sigma _{3}\,,\quad \mathbf {j} \mapsto i\,\sigma _{2}\,,\quad \mathbf {k} \mapsto i\,\sigma _{1}~.}
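A quick illustrative check of the first map, verifying that its images satisfy the quaternion relations:

```python
import numpy as np
s1, s2, s3 = (np.array(m, dtype=complex) for m in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]))
I2 = np.eye(2)

qi, qj, qk = -1j * s1, -1j * s2, -1j * s3      # i ↦ −iσ1, j ↦ −iσ2, k ↦ −iσ3
for q in (qi, qj, qk):
    assert np.allclose(q @ q, -I2)             # i² = j² = k² = −1
assert np.allclose(qi @ qj, qk)                # ij = k
assert np.allclose(qi @ qj @ qk, -I2)          # ijk = −1
```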
As the set of versors U ⊂ ℍ forms a group isomorphic to SU(2), U gives yet another way of describing SU(2). The two-to-one homomorphism from SU(2) to SO(3) may be given in terms of the Pauli matrices in this formulation.
== Physics ==
=== Classical mechanics ===
In classical mechanics, Pauli matrices are useful in the context of the Cayley–Klein parameters. The matrix P corresponding to the position x⃗ of a point in space is defined in terms of the above Pauli vector matrix,
{\displaystyle P={\vec {x}}\cdot {\vec {\sigma }}=x\,\sigma _{x}+y\,\sigma _{y}+z\,\sigma _{z}.}
Consequently, the transformation matrix Qθ for rotations about the x-axis through an angle θ may be written in terms of Pauli matrices and the unit matrix as
{\displaystyle Q_{\theta }={\boldsymbol {1}}\,\cos {\frac {\theta }{2}}+i\,\sigma _{x}\sin {\frac {\theta }{2}}.}
Similar expressions follow for general Pauli vector rotations as detailed above.
=== Quantum mechanics ===
In quantum mechanics, each Pauli matrix is related to an angular momentum operator that corresponds to an observable describing the spin of a spin 1⁄2 particle, in each of the three spatial directions. As an immediate consequence of the Cartan decomposition mentioned above, iσj are the generators of a projective representation (spin representation) of the rotation group SO(3) acting on non-relativistic particles with spin 1⁄2. The states of the particles are represented as two-component spinors. In the same way, the Pauli matrices are related to the isospin operator.
An interesting property of spin 1⁄2 particles is that they must be rotated by an angle of 4π in order to return to their original configuration. This is due to the two-to-one correspondence between SU(2) and SO(3) mentioned above, and the fact that, although one visualizes spin up/down as the north–south pole on the 2-sphere S2, they are actually represented by orthogonal vectors in the two-dimensional complex Hilbert space.
For a spin 1⁄2 particle, the spin operator is given by J = ħ/2σ, the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using this spin operator and ladder operators. They can be found in Rotation group SO(3) § A note on Lie algebras. The analog formula to the above generalization of Euler's formula for Pauli matrices, the group element in terms of spin matrices, is tractable, but less simple.
Also useful in the quantum mechanics of multiparticle systems, the general Pauli group Gn is defined to consist of all n-fold tensor products of Pauli matrices.
=== Relativistic quantum mechanics ===
In relativistic quantum mechanics, the spinors in four dimensions are 4 × 1 (or 1 × 4) matrices. Hence the Pauli matrices or the Sigma matrices operating on these spinors have to be 4 × 4 matrices. They are defined in terms of 2 × 2 Pauli matrices as
{\displaystyle {\mathsf {\Sigma }}_{k}={\begin{pmatrix}{\mathsf {\sigma }}_{k}&0\\0&{\mathsf {\sigma }}_{k}\end{pmatrix}}.}
It follows from this definition that the {\displaystyle {\mathsf {\Sigma }}_{k}} matrices have the same algebraic properties as the σk matrices.
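This can be checked directly by building Σk as a Kronecker product with the 2 × 2 identity and verifying the product rule ΣaΣb = δab I + i εabc Σc; a minimal sketch:
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
S = [np.kron(np.eye(2), sk) for sk in s]     # Sigma_k = diag(sigma_k, sigma_k)

eps = np.zeros((3, 3, 3))                    # Levi-Civita symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1, -1

for a in range(3):
    for b in range(3):
        rhs = (a == b) * np.eye(4) + 1j * sum(eps[a, b, c] * S[c] for c in range(3))
        assert np.allclose(S[a] @ S[b], rhs)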
However, relativistic angular momentum is not a three-vector, but a second-order four-tensor. Hence {\displaystyle {\mathsf {\Sigma }}_{k}} needs to be replaced by Σμν, the generator of Lorentz transformations on spinors. By the antisymmetry of angular momentum, the Σμν are also antisymmetric. Hence there are only six independent matrices.
The first three are the
{\displaystyle \Sigma _{k\ell }\equiv \epsilon _{jk\ell }{\mathsf {\Sigma }}_{j}.}
The remaining three,
{\displaystyle -i\,\Sigma _{0k}\equiv {\mathsf {\alpha }}_{k},}
where the Dirac αk matrices are defined as
{\displaystyle {\mathsf {\alpha }}_{k}={\begin{pmatrix}0&{\mathsf {\sigma }}_{k}\\{\mathsf {\sigma }}_{k}&0\end{pmatrix}}.}
The relativistic spin matrices Σμν are written in compact form in terms of commutator of gamma matrices as
{\displaystyle \Sigma _{\mu \nu }={\frac {i}{2}}{\bigl [}\gamma _{\mu },\gamma _{\nu }{\bigr ]}.}
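As a numerical check (a sketch assuming the Dirac basis, with γ0 = diag(I, −I) and the σk placed off the diagonal in γk), this commutator formula reproduces both families of matrices above:
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])                     # gamma_0, Dirac basis
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]        # gamma_k, Dirac basis

Sigma = [np.kron(I2, sk) for sk in s]                    # Sigma_k from above
alpha = [np.block([[Z2, sk], [sk, Z2]]) for sk in s]     # Dirac alpha_k

def Smunu(ga, gb):
    return 0.5j * (ga @ gb - gb @ ga)                    # (i/2)[gamma_mu, gamma_nu]

assert np.allclose(Smunu(g[0], g[1]), Sigma[2])          # Sigma_{12} = Sigma_3, etc.
for k in range(3):
    assert np.allclose(-1j * Smunu(g0, g[k]), alpha[k])  # -i Sigma_{0k} = alpha_k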
=== Quantum information ===
In quantum information, single-qubit quantum gates are 2 × 2 unitary matrices. The Pauli matrices are some of the most important single-qubit operations. In that context, the Cartan decomposition given above is called the "Z–Y decomposition of a single-qubit gate". Choosing a different Cartan pair gives a similar "X–Y decomposition of a single-qubit gate".
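A sketch of one way to compute the Z–Y decomposition numerically follows. The helper names are illustrative, only the generic case is handled (the branches where an entry of the special-unitary part vanishes need separate care), and the convention assumed is U = e^(iα) Rz(β) Ry(γ) Rz(δ).
import numpy as np

def Rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def zy_decompose(U):
    alpha = np.angle(np.linalg.det(U)) / 2
    V = U * np.exp(-1j * alpha)                  # special-unitary part of U
    gamma = 2 * np.arctan2(abs(V[1, 0]), abs(V[0, 0]))
    bpd = -2 * np.angle(V[0, 0])                 # beta + delta (mod 2*pi)
    bmd = 2 * np.angle(V[1, 0])                  # beta - delta (mod 2*pi)
    return alpha, (bpd + bmd) / 2, gamma, (bpd - bmd) / 2

# Round trip on a generic unitary:
rng = np.random.default_rng(1)
a, b, g, d = rng.uniform(0.1, 3.0, size=4)
U = np.exp(1j * a) * Rz(b) @ Ry(g) @ Rz(d)
aa, bb, gg, dd = zy_decompose(U)
assert np.allclose(np.exp(1j * aa) * Rz(bb) @ Ry(gg) @ Rz(dd), U)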
== See also ==
Algebra of physical space
Spinors in three dimensions
Gamma matrices § Dirac basis
Angular momentum
Gell-Mann matrices
Poincaré group
Generalizations of Pauli matrices
Bloch sphere
Euler's four-square identity
For higher spin generalizations of the Pauli matrices, see Spin (physics) § Higher spins
Exchange matrix (the first Pauli matrix is an exchange matrix of order two)
Split-quaternion
== Remarks ==
== Notes ==
== References == | Wikipedia/Pauli_algebra |
In mathematics, given a vector space X with an associated quadratic form q, written (X, q), a null vector or isotropic vector is a non-zero element x of X for which q(x) = 0.
In the theory of real bilinear forms, definite quadratic forms and isotropic quadratic forms are distinct. They are distinguished in that only for the latter does there exist a nonzero null vector.
A quadratic space (X, q) which has a null vector is called a pseudo-Euclidean space. The term isotropic vector v when q(v) = 0 has been used in quadratic spaces, and anisotropic space for a quadratic space without null vectors.
A pseudo-Euclidean vector space may be decomposed (non-uniquely) into orthogonal subspaces A and B, X = A + B, where q is positive-definite on A and negative-definite on B. The null cone, or isotropic cone, of X consists of the union of balanced spheres:
{\displaystyle \bigcup _{r\geq 0}\{x=a+b:q(a)=-q(b)=r,\ a\in A,\ b\in B\}.}
The null cone is also the union of the isotropic lines through the origin.
== Split algebras ==
A composition algebra with a null vector is a split algebra.
In a composition algebra (A, +, ×, *), the quadratic form is q(x) = x x*. When x is a null vector then there is no multiplicative inverse for x, and since x ≠ 0, A is not a division algebra.
In the Cayley–Dickson construction, the split algebras arise in the series bicomplex numbers, biquaternions, and bioctonions, which uses the complex number field
C
{\displaystyle \mathbb {C} }
as the foundation of this doubling construction due to L. E. Dickson (1919). In particular, these algebras have two imaginary units, which commute so their product, when squared, yields +1:
{\displaystyle (hi)^{2}=h^{2}i^{2}=(-1)(-1)=+1.}
Then
{\displaystyle (1+hi)(1+hi)^{*}=(1+hi)(1-hi)=1-(hi)^{2}=0}
so 1 + hi is a null vector.
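This computation is easy to replicate by modeling bicomplex numbers as pairs (a, b) = a + bh of complex numbers; a minimal sketch:
def bc_mul(x, y):
    # (a + bh)(c + dh) = (ac - bd) + (ad + bc)h, since h^2 = -1 and h commutes with i
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

def bc_conj(x):
    # conjugation with respect to h: (a + bh)* = a - bh
    a, b = x
    return (a, -b)

x = (1, 1j)                              # the element 1 + hi
assert bc_mul(x, bc_conj(x)) == (0, 0)   # q(1 + hi) = (1 + hi)(1 + hi)* = 0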
The real subalgebras, split-complex numbers, split quaternions, and split-octonions, with their null cones representing light tracking into and out of 0 ∈ A, suggest spacetime topology.
== Examples ==
The light-like vectors of Minkowski space are null vectors.
The four linearly independent biquaternions l = 1 + hi, n = 1 + hj, m = 1 + hk, and m∗ = 1 – hk are null vectors and { l, n, m, m∗ } can serve as a basis for the subspace used to represent spacetime. Null vectors are also used in the Newman–Penrose formalism approach to spacetime manifolds.
In the Verma module of a Lie algebra there are null vectors.
== References ==
Dubrovin, B. A.; Fomenko, A. T.; Novikov, S. P. (1984). Modern Geometry: Methods and Applications. Translated by Burns, Robert G. Springer. p. 50. ISBN 0-387-90872-2.
Shaw, Ronald (1982). Linear Algebra and Group Representations. Vol. 1. Academic Press. p. 151. ISBN 0-12-639201-3.
Neville, E. H. (Eric Harold) (1922). Prolegomena to Analytical Geometry in Anisotropic Euclidean Space of Three Dimensions. Cambridge University Press. p. 204. | Wikipedia/Split_algebra |
In mathematics, Hurwitz's theorem is a theorem of Adolf Hurwitz (1859–1919), published posthumously in 1923, solving the Hurwitz problem for finite-dimensional unital real non-associative algebras endowed with a nondegenerate positive-definite quadratic form. The theorem states that if the quadratic form defines a homomorphism into the positive real numbers on the non-zero part of the algebra, then the algebra must be isomorphic to the real numbers, the complex numbers, the quaternions, or the octonions, and that there are no other possibilities. Such algebras, sometimes called Hurwitz algebras, are examples of composition algebras.
The theory of composition algebras has subsequently been generalized to arbitrary quadratic forms and arbitrary fields. Hurwitz's theorem implies that multiplicative formulas for sums of squares can only occur in 1, 2, 4 and 8 dimensions, a result originally proved by Hurwitz in 1898. It is a special case of the Hurwitz problem, solved also in Radon (1922). Subsequent proofs of the restrictions on the dimension have been given by Eckmann (1943) using the representation theory of finite groups and by Lee (1948) and Chevalley (1954) using Clifford algebras. Hurwitz's theorem has been applied in algebraic topology to problems on vector fields on spheres and the homotopy groups of the classical groups and in quantum mechanics to the classification of simple Jordan algebras.
== Euclidean Hurwitz algebras ==
=== Definition ===
A Hurwitz algebra or composition algebra is a finite-dimensional not necessarily associative algebra A with identity endowed with a nondegenerate quadratic form q such that q(a b) = q(a) q(b). If the underlying coefficient field is the reals and q is positive-definite, so that (a, b) = 1/2[q(a + b) − q(a) − q(b)] is an inner product, then A is called a Euclidean Hurwitz algebra or (finite-dimensional) normed division algebra.
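For the quaternions, for instance, the composition law q(ab) = q(a)q(b) can be spot-checked numerically with the Hamilton product; a minimal sketch:
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions written as arrays [w, x, y, z]
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

qform = lambda v: float(np.dot(v, v))     # q(a) = squared Euclidean norm

rng = np.random.default_rng(0)
a, b = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(qform(qmul(a, b)), qform(a) * qform(b))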
If A is a Euclidean Hurwitz algebra and a is in A, define the involution and right and left multiplication operators by
{\displaystyle a^{*}=-a+2(a,1)1,\quad L(a)b=ab,\quad R(a)b=ba.}
Evidently the involution has period two and preserves the inner product and norm. These operators have the following properties:
the involution is an antiautomorphism, i.e. (ab)* = b*a*
aa* = ‖a‖2 1 = a*a
L(a*) = L(a)*, R(a*) = R(a)*, so that the involution on the algebra corresponds to taking adjoints
Re (ab) = Re (ba), where Re x = (x + x*)/2 = (x, 1)1
Re (ab)c = Re a(bc)
L(a2) = L(a)2, R(a2) = R(a)2, so that A is an alternative algebra.
These properties are proved starting from the polarized version of the identity (ab, ab) = (a, a)(b, b):
{\displaystyle 2(a,b)(c,d)=(ac,bd)+(ad,bc).}
Setting b = 1 or d = 1 yields L(a*) = L(a)* and R(c*) = R(c)*.
Hence Re(ab) = (ab, 1)1 = (a, b*)1 = (ba, 1)1 = Re(ba).
Similarly Re (ab)c = ((ab)c,1)1 = (ab, c*)1 = (b, a* c*)1 = (bc,a*)1 = (a(bc),1)1 = Re a(bc).
Hence ((ab)*, c) = (ab, c*) = (b, a*c*) = (1, b*(a*c*)) = (1, (b*a*)c*) = (b*a*, c), so that (ab)* = b*a*.
By the polarized identity ‖a‖2 (c, d) = (ac, ad) = (a* (ac), d) so L(a*) L(a) = L(‖a‖2). Applied to 1 this gives a*a = ‖a‖2 1. Replacing a by a* gives the other identity.
Substituting the formula for a* in L(a*) L(a) = L(a*a) gives L(a)2 = L(a2). The formula R(a2) = R(a)2 is proved analogously.
=== Classification ===
It is routine to check that the real numbers R, the complex numbers C and the quaternions H are examples of associative Euclidean Hurwitz algebras with their standard norms and involutions. There are moreover natural inclusions R ⊂ C ⊂ H.
Analysing such an inclusion leads to the Cayley–Dickson construction, formalized by A.A. Albert. Let A be a Euclidean Hurwitz algebra and B a proper unital subalgebra, so a Euclidean Hurwitz algebra in its own right. Pick a unit vector j in A orthogonal to B. Since (j, 1) = 0, it follows that j* = −j and hence j2 = −1. Let C be the subalgebra generated by B and j. It is unital and is again a Euclidean Hurwitz algebra. It satisfies the following Cayley–Dickson multiplication laws:
{\displaystyle C=B\oplus Bj,\quad (a+bj)^{*}=a^{*}-bj,\quad (a+bj)(c+dj)=(ac-d^{*}b)+(bc^{*}+da)j.}
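The doubling formula can be implemented recursively on coordinate vectors of length 2^k, which also illustrates where the process stops: with the sign conventions above, the Euclidean norm is multiplicative in dimensions 1, 2, 4 and 8 but fails generically in dimension 16. A sketch:
import numpy as np

def cd_conj(x):
    if x.size == 1:
        return x
    n = x.size // 2
    return np.concatenate([cd_conj(x[:n]), -x[n:]])

def cd_mul(x, y):
    if x.size == 1:
        return x * y
    n = x.size // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    # (a + bj)(c + dj) = (ac - d*b) + (bc* + da)j
    return np.concatenate([cd_mul(a, c) - cd_mul(cd_conj(d), b),
                           cd_mul(b, cd_conj(c)) + cd_mul(d, a)])

qform = lambda v: float(np.dot(v, v))

rng = np.random.default_rng(0)
for dim in (1, 2, 4, 8):
    x, y = rng.normal(size=dim), rng.normal(size=dim)
    assert np.isclose(qform(cd_mul(x, y)), qform(x) * qform(y))

x, y = rng.normal(size=16), rng.normal(size=16)
print(qform(cd_mul(x, y)) - qform(x) * qform(y))   # generically nonzero: sedenions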
B and Bj are orthogonal, since j is orthogonal to B. If a is in B, then ja = a*j, since by orthogonality 0 = 2(j, a*) = ja − a*j. The formula for the involution follows. To show that B ⊕ Bj is closed under multiplication, note that Bj = jB. Since Bj is orthogonal to 1, (bj)* = −bj.
b(cj) = (cb)j since (b, j) = 0 so that, for x in A, (b(cj), x) = (b(jx), j(cj)) = −(b(jx), c*) = −(cb, (jx)*) = −((cb)j, x*) = ((cb)j, x).
(jc)b = j(bc) taking adjoints above.
(bj)(cj) = −c*b since (b, cj) = 0, so that, for x in A, ((bj)(cj), x) = −((cj)x*, bj) = (bx*, (cj)j) = −(c*b, x).
Imposing the multiplicativity of the norm on C for a + bj and c + dj gives:
{\displaystyle (\|a\|^{2}+\|b\|^{2})(\|c\|^{2}+\|d\|^{2})=\|ac-d^{*}b\|^{2}+\|bc^{*}+da\|^{2},}
which leads to
{\displaystyle (ac,d^{*}b)=(bc^{*},da).}
Hence d(ac) = (da)c, so that B must be associative.
This analysis applies to the inclusion of R in C and C in H. Taking O = H ⊕ H with the product and inner product above gives a noncommutative nonassociative algebra generated by J = (0, 1). This recovers the usual definition of the octonions or Cayley numbers. If A is a Euclidean Hurwitz algebra, it must contain R. If it is strictly larger than R, the argument above shows that it contains C. If it is larger than C, it contains H. If it is larger still, it must contain O. But there the process must stop, because O is not associative. In fact H is not commutative and a(bj) = (ba)j ≠ (ab)j in O.
Theorem. The only Euclidean Hurwitz algebras are the real numbers, the complex numbers, the quaternions and the octonions.
== Other proofs ==
The proofs of Lee (1948) and Chevalley (1954) use Clifford algebras to show that the dimension N of A must be 1, 2, 4 or 8. In fact the operators L(a) with (a, 1) = 0 satisfy L(a)2 = −‖a‖2 and so form a real Clifford algebra. If a is a unit vector, then L(a) is skew-adjoint with square −I. So N must be either even or 1 (in which case A contains no unit vectors orthogonal to 1). The real Clifford algebra and its complexification act on the complexification of A, an N-dimensional complex space. If N is even, N − 1 is odd, so the Clifford algebra has exactly two complex irreducible representations of dimension 2^(N/2 − 1). So this power of 2 must divide N. It is easy to see that this implies N can only be 1, 2, 4 or 8.
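The divisibility constraint at the end of this argument can be scanned directly; a minimal sketch:
# N = 1, or N even with 2**(N//2 - 1) dividing N:
admissible = [N for N in range(1, 100)
              if N == 1 or (N % 2 == 0 and N % 2 ** (N // 2 - 1) == 0)]
print(admissible)   # [1, 2, 4, 8]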
The proof of Eckmann (1943) uses the representation theory of finite groups, or the projective representation theory of elementary abelian 2-groups, known to be equivalent to the representation theory of real Clifford algebras. Indeed, taking an orthonormal basis ei of the orthogonal complement of 1 gives rise to operators Ui = L(ei)
satisfying
{\displaystyle U_{i}^{2}=-I,\quad U_{i}U_{j}=-U_{j}U_{i}\,\,(i\neq j).}
This is a projective representation of a direct product of N − 1 groups of order 2. (N is assumed to be greater than 1.) The operators Ui by construction are skew-symmetric and orthogonal. In fact Eckmann constructed operators of this type in a slightly different but equivalent way. It is in fact the method originally followed in Hurwitz (1923). Assume that there is a composition law for two forms
{\displaystyle (x_{1}^{2}+\cdots +x_{N}^{2})(y_{1}^{2}+\cdots +y_{N}^{2})=z_{1}^{2}+\cdots +z_{N}^{2},}
where zi is bilinear in x and y. Thus
{\displaystyle z_{i}=\sum _{j=1}^{N}a_{ij}(x)y_{j}}
where the matrix T(x) = (aij) is linear in x. The relations above are equivalent to
{\displaystyle T(x)T(x)^{t}=x_{1}^{2}+\cdots +x_{N}^{2}.}
Writing
{\displaystyle T(x)=T_{1}x_{1}+\cdots +T_{N}x_{N},}
the relations become
{\displaystyle T_{i}T_{j}^{t}+T_{j}T_{i}^{t}=2\delta _{ij}I.}
Now set Vi = (TN)t Ti. Thus VN = I and V1, ..., VN − 1 are skew-adjoint and orthogonal, satisfying exactly the same relations as the Ui's:
{\displaystyle V_{i}^{2}=-I,\quad V_{i}V_{j}=-V_{j}V_{i}\,\,(i\neq j).}
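For N = 4 these matrices can be built explicitly from quaternion multiplication, with Ti the matrix of left multiplication by the basis unit ei; a sketch (0-based indices, so T[3] plays the role of TN):
import numpy as np

def qmul(p, q):   # Hamilton product, as in the earlier sketch
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

N = 4
E = np.eye(N)
# Column j of T[i] is the product e_i e_j in coordinates:
T = [np.column_stack([qmul(E[i], E[j]) for j in range(N)]) for i in range(N)]

for i in range(N):
    for j in range(N):   # T_i T_j^t + T_j T_i^t = 2 delta_ij I
        assert np.allclose(T[i] @ T[j].T + T[j] @ T[i].T, 2 * (i == j) * E)

V = [T[3].T @ T[i] for i in range(N)]
assert np.allclose(V[3], E)                       # V_N = I
for i in range(3):
    assert np.allclose(V[i].T, -V[i])             # skew-adjoint
    assert np.allclose(V[i] @ V[i], -E)           # V_i^2 = -I
    for j in range(3):
        if i != j:
            assert np.allclose(V[i] @ V[j], -V[j] @ V[i])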
Since Vi is an orthogonal matrix with square −I on a real vector space, N is even.
Let G be the finite group generated by elements vi such that
{\displaystyle v_{i}^{2}=\varepsilon ,\quad v_{i}v_{j}=\varepsilon v_{j}v_{i}\,\,(i\neq j),}
where ε is central of order 2. The commutator subgroup [G, G] is just formed of 1 and ε. If N is odd this coincides with the center, while if N is even the center has order 4 with extra elements γ = v1...vN − 1 and εγ. If g in G is not in the center its conjugacy class consists exactly of g and εg. Thus there are 2^(N − 1) + 1 conjugacy classes for N odd and 2^(N − 1) + 2 for N even. G has |G / [G, G]| = 2^(N − 1) one-dimensional complex representations, and the total number of irreducible complex representations is the number of conjugacy classes. So since N is even, there are two further irreducible complex representations. Since the sum of the squares of the dimensions equals |G| and the dimensions divide |G|, these two must both have dimension 2^((N − 2)/2). The space on which the Vi's act can be complexified. It will have complex dimension N, and it breaks up into copies of the complex irreducible representations of G, all having dimension 2^((N − 2)/2). In particular this dimension is ≤ N, so N is less than or equal to 8. If N = 6, the dimension is 4, which does not divide 6. So N can only be 1, 2, 4 or 8.
== Applications to Jordan algebras ==
Let A be a Euclidean Hurwitz algebra and let Mn(A) be the algebra of n-by-n matrices over A. It is a unital nonassociative algebra with an involution given by
{\displaystyle (x_{ij})^{*}=(x_{ji}^{*}).}
The trace Tr(X) is defined as the sum of the diagonal elements of X, and the real-valued trace by TrR(X) = Re Tr(X). The real-valued trace satisfies:
{\displaystyle \operatorname {Tr} _{\mathbf {R} }XY=\operatorname {Tr} _{\mathbf {R} }YX,\qquad \operatorname {Tr} _{\mathbf {R} }(XY)Z=\operatorname {Tr} _{\mathbf {R} }X(YZ).}
These are immediate consequences of the known identities for n = 1.
In A define the associator by
{\displaystyle [a,b,c]=a(bc)-(ab)c.}
It is trilinear and vanishes identically if A is associative. Since A is an alternative algebra, [a, a, b] = 0 and [b, a, a] = 0. Polarizing, it follows that the associator is antisymmetric in its three entries. Furthermore, if a, b or c lie in R then [a, b, c] = 0. These facts imply that M3(A) has certain commutation properties. In fact if X is a matrix in M3(A) with real entries on the diagonal then
{\displaystyle [X,X^{2}]=aI,}
with a in A. In fact if Y = [X, X 2], then
{\displaystyle y_{ij}=\sum _{k,\ell }[x_{ik},x_{k\ell },x_{\ell j}].}
Since the diagonal entries of X are real, the off-diagonal entries of Y vanish. Each diagonal entry of Y is a sum of two associators involving only off-diagonal terms of X. Since the associators are invariant under cyclic permutations, the diagonal entries of Y are all equal.
Let Hn(A) be the space of self-adjoint elements in Mn(A) with product X ∘Y = 1/2(X Y + Y X) and inner product (X, Y ) = TrR(X Y ).
Theorem. Hn(A) is a Euclidean Jordan algebra if A is associative (the real numbers, complex numbers or quaternions) and n ≥ 3 or if A is nonassociative (the octonions) and n = 3.
The exceptional Jordan algebra H3(O) is called the Albert algebra after A.A. Albert.
To check that Hn(A) satisfies the axioms for a Euclidean Jordan algebra, the real trace defines a symmetric bilinear form with (X, X) = Σ ‖xij‖2. So it is an inner product. It satisfies the associativity property (Z∘X, Y ) = (X, Z∘Y ) because of the properties of the real trace. The main axiom to check is the Jordan condition for the operators L(X) defined by L(X)Y = X ∘Y:
{\displaystyle [L(X),L(X^{2})]=0.}
This is easy to check when A is associative, since Mn(A) is an associative algebra so a Jordan algebra with X ∘Y = 1/2(X Y + Y X). When A = O and n = 3 a special argument is required, one of the shortest being due to Freudenthal (1951).
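For instance, with A = C and n = 3 the condition can be spot-checked on random Hermitian matrices; a minimal sketch:
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(n=3):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def jordan(X, Y):
    return (X @ Y + Y @ X) / 2

X, Y = rand_herm(), rand_herm()
X2 = jordan(X, X)     # equals the ordinary matrix square here
# [L(X), L(X^2)] = 0 acting on Y, i.e. X o (X^2 o Y) = X^2 o (X o Y):
assert np.allclose(jordan(X, jordan(X2, Y)), jordan(X2, jordan(X, Y)))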
In fact if T is in H3(O) with Tr T = 0, then
{\displaystyle D(X)=TX-XT}
defines a skew-adjoint derivation of H3(O). Indeed,
{\displaystyle \operatorname {Tr} (T(X(X^{2}))-T(X^{2}(X)))=\operatorname {Tr} T(aI)=\operatorname {Tr} (T)a=0,}
so that
{\displaystyle (D(X),X^{2})=0.}
Polarizing yields:
{\displaystyle (D(X),Y\circ Z)+(D(Y),Z\circ X)+(D(Z),X\circ Y)=0.}
Setting Z = 1 shows that D is skew-adjoint. The derivation property D(X∘Y) = D(X)∘Y + X∘D(Y) follows from this and the associativity property of the inner product in the identity above.
With A and n as in the statement of the theorem, let K be the group of automorphisms of E = Hn(A) leaving invariant the inner product. It is a closed subgroup of O(E) so a compact Lie group. Its Lie algebra consists of skew-adjoint derivations. Freudenthal (1951) showed that given X in E there is an automorphism k in K such that k(X) is a diagonal matrix. (By self-adjointness the diagonal entries will be real.) Freudenthal's diagonalization theorem immediately implies the Jordan condition, since Jordan products by real diagonal matrices commute on Mn(A) for any non-associative algebra A.
To prove the diagonalization theorem, take X in E. By compactness k can be chosen in K minimizing the sums of the squares of the norms of the off-diagonal terms of k(X ). Since K preserves the sums of all the squares, this is equivalent to maximizing the sums of the squares of the norms of the diagonal terms of k(X ). Replacing X by k X, it can be assumed that the maximum is attained at X. Since the symmetric group Sn, acting by permuting the coordinates, lies in K, if X is not diagonal, it can be supposed that x12 and its adjoint x21 are non-zero. Let T be the skew-adjoint matrix with (2, 1) entry a, (1, 2) entry −a* and 0 elsewhere and let D be the derivation ad T of E. Let kt = exp tD in K. Then only the first two diagonal entries in X(t) = ktX differ from those of X. The diagonal entries are real. The derivative of x11(t) at t = 0 is the (1, 1) coordinate of [T, X], i.e. a* x21 + x12 a = 2(x21, a). This derivative is non-zero if a = x21. On the other hand, the group kt preserves the real-valued trace. Since it can only change x11 and x22, it preserves their sum. However, on the line x + y = constant, x2 + y2 has no local maximum (only a global minimum), a contradiction. Hence X must be diagonal.
== See also ==
Multiplicative quadratic form
Radon–Hurwitz number
Frobenius Theorem
== Notes ==
== References ==
Albert, A. A. (1934), "On a certain algebra of quantum mechanics", Ann. of Math., 35 (1): 65–73, doi:10.2307/1968118, JSTOR 1968118
Chevalley, C. (1954), The algebraic theory of spinors and Clifford algebras, Columbia University Press
Eckmann, Beno (1943), "Gruppentheoretischer Beweis des Satzes von Hurwitz–Radon über die Komposition quadratischer Formen", Comment. Math. Helv., 15: 358–366, doi:10.1007/bf02565652, S2CID 123322808
Eckmann, Beno (1989), "Hurwitz–Radon matrices and periodicity modulo 8", Enseign. Math., 35: 77–91, archived from the original on 2013-06-16
Eckmann, Beno (1999), "Topology, algebra, analysis—relations and missing links", Notices Amer. Math. Soc., 46: 520–527
Faraut, J.; Koranyi, A. (1994), Analysis on symmetric cones, Oxford Mathematical Monographs, Oxford University Press, ISBN 978-0198534778
Freudenthal, Hans (1951), Oktaven, Ausnahmegruppen und Oktavengeometrie, Mathematisch Instituut der Rijksuniversiteit te Utrecht
Freudenthal, Hans (1985), "Oktaven, Ausnahmegruppen und Oktavengeometrie", Geom. Dedicata, 19: 7–63, doi:10.1007/bf00233101, S2CID 121496094 (reprint of 1951 article)
Herstein, I. N. (1968), Noncommutative rings, Carus Mathematical Monographs, vol. 15, Mathematical Association of America, ISBN 978-0883850152
Hurwitz, A. (1898), "Über die Composition der quadratischen Formen von beliebig vielen Variabeln", Goett. Nachr.: 309–316
Hurwitz, A. (1923), "Über die Komposition der quadratischen Formen", Math. Ann., 88 (1–2): 1–25, doi:10.1007/bf01448439, S2CID 122147399
Jacobson, N. (1968), Structure and representations of Jordan algebras, American Mathematical Society Colloquium Publications, vol. 39, American Mathematical Society
Jordan, P.; von Neumann, J.; Wigner, E. (1934), "On an algebraic generalization of the quantum mechanical formalism", Ann. of Math., 35 (1): 29–64, doi:10.2307/1968117, JSTOR 1968117
Lam, Tsit-Yuen (2005), Introduction to Quadratic Forms over Fields, Graduate Studies in Mathematics, vol. 67, American Mathematical Society, ISBN 978-0-8218-1095-8, MR 2104929, Zbl 1068.11023
Lee, H. C. (1948), "Sur le théorème de Hurwitz-Radon pour la composition des formes quadratiques", Comment. Math. Helv., 21: 261–269, doi:10.1007/bf02568038, S2CID 121079375, archived from the original on 2014-05-03
Porteous, I.R. (1969), Topological Geometry, Van Nostrand Reinhold, ISBN 978-0-442-06606-2, Zbl 0186.06304
Postnikov, M. (1986), Lie groups and Lie algebras. Lectures in geometry. Semester V, Mir
Radon, J. (1922), "Lineare scharen orthogonaler matrizen", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 1: 1–14, doi:10.1007/bf02940576, S2CID 120583389
Rajwade, A. R. (1993), Squares, London Mathematical Society Lecture Note Series, vol. 171, Cambridge University Press, ISBN 978-0-521-42668-8, Zbl 0785.11022
Schafer, Richard D. (1995) [1966], An introduction to non-associative algebras, Dover Publications, ISBN 978-0-486-68813-8, Zbl 0145.25601
Shapiro, Daniel B. (2000), Compositions of quadratic forms, De Gruyter Expositions in Mathematics, vol. 33, Walter de Gruyter, ISBN 978-3-11-012629-7, Zbl 0954.11011
== Further reading ==
Baez, John C. (2002), "The octonions", Bull. Amer. Math. Soc., 39 (2): 145–205, arXiv:math/0105155, doi:10.1090/S0273-0979-01-00934-X, S2CID 586512
Conway, John H.; Smith, Derek A. (2003), On quaternions and octonions: their geometry, arithmetic, and symmetry, A K Peters, ISBN 978-1568811345
Kantor, I.L.; Solodovnikov, A.S. (1989), "Normed algebras with an identity. Hurwitz's theorem.", Hypercomplex numbers. An elementary introduction to algebras, Trans. A. Shenitzer (2nd ed.), Springer-Verlag, p. 121, ISBN 978-0-387-96980-0, Zbl 0669.17001
Max Koecher & Reinhold Remmert (1990) "Composition Algebras. Hurwitz's Theorem — Vector-Product Algebras", chapter 10 of Numbers by Heinz-Dieter Ebbinghaus et al., Springer, ISBN 0-387-97202-1
Springer, T. A.; F. D. Veldkamp (2000), Octonions, Jordan Algebras and Exceptional Groups, Springer-Verlag, ISBN 978-3-540-66337-9 | Wikipedia/Hurwitz's_theorem_(composition_algebras) |