A two-dimensional Minkowski space, i.e. a flat space with one time and one spatial dimension, has a two-dimensional Poincaré group IO(1,1) as its symmetry group. The respective Lie algebra is called the Poincaré algebra. It is possible to extend this algebra to a supersymmetry algebra, which is a \mathbb{Z}_2-graded Lie superalgebra. The most common ways to do this are discussed below.
== N=(2,2) algebra ==
Let the Lie algebra of IO(1,1) be generated by the following generators:
H = P_0 is the generator of the time translation,
P = P_1 is the generator of the space translation,
M = M_{01} is the generator of Lorentz boosts.
For the commutators between these generators, see Poincaré algebra.
The N = (2,2) supersymmetry algebra over this space is a supersymmetric extension of this Lie algebra with the four additional generators (supercharges) Q_+, Q_−, Q̄_+, Q̄_−, which are odd elements of the Lie superalgebra. Under Lorentz transformations the generators Q_+ and Q̄_+ transform as left-handed Weyl spinors, while Q_− and Q̄_− transform as right-handed Weyl spinors. The algebra is given by the Poincaré algebra plus:
{\displaystyle {\begin{aligned}&{\begin{aligned}&Q_{+}^{2}=Q_{-}^{2}={\overline {Q}}_{+}^{2}={\overline {Q}}_{-}^{2}=0,\\&\{Q_{\pm },{\overline {Q}}_{\pm }\}=H\pm P,\\\end{aligned}}\\&{\begin{aligned}&\{{\overline {Q}}_{+},{\overline {Q}}_{-}\}=Z,&&\{Q_{+},Q_{-}\}=Z^{*},\\&\{Q_{-},{\overline {Q}}_{+}\}={\tilde {Z}},&&\{Q_{+},{\overline {Q}}_{-}\}={\tilde {Z}}^{*},\\&{[iM,Q_{\pm }]}=\mp Q_{\pm },&&{[iM,{\overline {Q}}_{\pm }]}=\mp {\overline {Q}}_{\pm },\end{aligned}}\end{aligned}}}
where all remaining commutators vanish, and Z and Z̃ are complex central charges. The supercharges are related via Q_±^† = Q̄_±. H, P, and M are Hermitian.
== Subalgebras of the N=(2,2) algebra ==
=== The N=(0,2) and N=(2,0) subalgebras ===
The N = (0,2) subalgebra is obtained from the N = (2,2) algebra by removing the generators Q_− and Q̄_−. Thus its anti-commutation relations are given by:
{\displaystyle {\begin{aligned}&Q_{+}^{2}={\overline {Q}}_{+}^{2}=0,\\&\{Q_{+},{\overline {Q}}_{+}\}=H+P\\\end{aligned}}}
plus the commutation relations above that do not involve Q_− or Q̄_−. Both generators are left-handed Weyl spinors.
Similarly, the N = (2,0) subalgebra is obtained by removing Q_+ and Q̄_+ and fulfills
{\displaystyle {\begin{aligned}&Q_{-}^{2}={\overline {Q}}_{-}^{2}=0,\\&\{Q_{-},{\overline {Q}}_{-}\}=H-P.\\\end{aligned}}}
Both supercharge generators are right-handed.
=== The N=(1,1) subalgebra ===
The N = (1,1) subalgebra is generated by two generators Q_+^1 and Q_−^1 given by
{\displaystyle {\begin{aligned}Q_{\pm }^{1}=e^{i\nu _{\pm }}Q_{\pm }+e^{-i\nu _{\pm }}{\overline {Q}}_{\pm }\end{aligned}}}
for two real numbers ν_+ and ν_−.
By definition, both supercharges are real, i.e. (Q_±^1)^† = Q_±^1. They transform as Majorana–Weyl spinors under Lorentz transformations. Their anti-commutation relations are given by:
{\displaystyle {\begin{aligned}&\{Q_{\pm }^{1},Q_{\pm }^{1}\}=2(H\pm P),\\&\{Q_{+}^{1},Q_{-}^{1}\}=Z^{1},\end{aligned}}}
where Z^1 is a real central charge.
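The first of these relations can be checked directly from the N = (2,2) relations listed above; the short computation below is only a sketch, using nothing beyond the nilpotency of the supercharges and {Q_±, Q̄_±} = H ± P:
{\displaystyle {\begin{aligned}\{Q_{\pm }^{1},Q_{\pm }^{1}\}&=e^{2i\nu _{\pm }}\{Q_{\pm },Q_{\pm }\}+e^{-2i\nu _{\pm }}\{{\overline {Q}}_{\pm },{\overline {Q}}_{\pm }\}+2\{Q_{\pm },{\overline {Q}}_{\pm }\}\\&=0+0+2(H\pm P).\end{aligned}}}
In particular the phases ν_± drop out of the diagonal anticommutators, so any choice of ν_± yields an N = (1,1) subalgebra.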
=== The N=(0,1) and N=(1,0) subalgebras ===
These algebras can be obtained from the N = (1,1) subalgebra by removing Q_−^1 or Q_+^1, respectively, from the generators.
== See also ==
Supersymmetry
Super-Poincaré algebra (in 1+3 dimensions)
== References ==
K. Schoutens, Supersymmetry and factorized scattering, Nucl.Phys. B344, 665–695, 1990
T.J. Hollowood, E. Mavrikis, The N = 1 supersymmetric bootstrap and Lie algebras, Nucl. Phys. B484, 631–652, 1997, arXiv:hep-th/9606116
The Poincaré group, named after Henri Poincaré (1905), was first defined by Hermann Minkowski (1908) as the isometry group of Minkowski spacetime. It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics.
== Overview ==
The Poincaré group consists of all coordinate transformations of Minkowski space that do not change the spacetime interval between events. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stopwatch that you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift.
In total, there are ten degrees of freedom for such transformations. They may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a "boost" in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with rotations being produced as the composition of an even number of reflections.
In classical physics, the Galilean group is a comparable ten-parameter group that acts on absolute time and space. Instead of boosts, it features shear mappings to relate co-moving frames of reference.
In general relativity, i.e. under the effects of gravity, Poincaré symmetry applies only locally. A treatment of symmetries in general relativity is not in the scope of this article.
== Poincaré symmetry ==
Poincaré symmetry is the full symmetry of special relativity. It includes:
translations (displacements) in time and space, forming the abelian Lie group of spacetime translations (P);
rotations in space, forming the non-abelian Lie group of three-dimensional rotations (J);
boosts, transformations connecting two uniformly moving bodies (K).
The last two symmetries, J and K, together make the Lorentz group (see also Lorentz invariance); the semi-direct product of the spacetime translations group and the Lorentz group then produce the Poincaré group. Objects that are invariant under this group are then said to possess Poincaré invariance or relativistic invariance.
10 generators (in four spacetime dimensions) associated with the Poincaré symmetry, by Noether's theorem, imply 10 conservation laws:
1 for the energy – associated with translations through time
3 for the momentum – associated with translations through spatial dimensions
3 for the angular momentum – associated with rotations between spatial dimensions
3 for a quantity involving the velocity of the center of mass – associated with hyperbolic rotations between each spatial dimension and time
== Poincaré group ==
The Poincaré group is the group of Minkowski spacetime isometries. It is a ten-dimensional noncompact Lie group. The four-dimensional abelian group of spacetime translations is a normal subgroup, while the six-dimensional Lorentz group is also a subgroup, the stabilizer of the origin. The Poincaré group itself is the minimal subgroup of the affine group which includes all translations and Lorentz transformations. More precisely, it is a semidirect product of the spacetime translations group and the Lorentz group,
{\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {O} (1,3)\,,}
with group multiplication
{\displaystyle (\alpha ,f)\cdot (\beta ,g)=(\alpha +f\cdot \beta ,\;f\cdot g)}.
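As an illustration of this multiplication law, the sketch below represents a Poincaré transformation as a pair (a, Λ) of a translation four-vector and a Lorentz matrix and composes two such pairs; the function names and the particular boost chosen are illustrative only, not part of any standard library:

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

def boost_x(rapidity):
    # Lorentz boost along the x-axis as a 4x4 matrix.
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = sh
    return L

def compose(alpha, f, beta, g):
    # (alpha, f) . (beta, g) = (alpha + f.beta, f.g)
    return alpha + f @ beta, f @ g

def apply(alpha, f, x):
    # Action of the group element (alpha, f) on an event x.
    return alpha + f @ x

a1, L1 = np.array([1.0, 0.0, 0.0, 0.0]), boost_x(0.3)   # boost plus a time translation
a2, L2 = np.array([0.0, 2.0, 0.0, 0.0]), np.eye(4)      # pure spatial translation
a12, L12 = compose(a1, L1, a2, L2)

x = np.array([0.5, 1.0, 0.0, 0.0])
# Acting with the two elements in turn agrees with acting with their composite.
assert np.allclose(apply(a1, L1, apply(a2, L2, x)), apply(a12, L12, x))
# The Lorentz part preserves the metric: L^T eta L = eta.
assert np.allclose(L12.T @ eta @ L12, eta)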
Another way of putting this is that the Poincaré group is a group extension of the Lorentz group by a vector representation of it; it is sometimes dubbed, informally, as the inhomogeneous Lorentz group. In turn, it can also be obtained as a group contraction of the de Sitter group SO(4, 1) ~ Sp(2, 2), as the de Sitter radius goes to infinity.
Its positive energy unitary irreducible representations are indexed by mass (nonnegative number) and spin (integer or half integer) and are associated with particles in quantum mechanics (see Wigner's classification).
In accordance with the Erlangen program, the geometry of Minkowski space is defined by the Poincaré group: Minkowski space is considered as a homogeneous space for the group.
In quantum field theory, the universal cover of the Poincaré group
{\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {SL} (2,\mathbf {C} ),}
which may be identified with the double cover
{\displaystyle \mathbf {R} ^{1,3}\rtimes \operatorname {Spin} (1,3),}
is more important, because representations of SO(1,3) are not able to describe fields with spin 1/2; i.e. fermions. Here SL(2,C) is the group of complex 2 × 2 matrices with unit determinant, isomorphic to the Lorentz-signature spin group Spin(1,3).
== Poincaré algebra ==
The Poincaré algebra is the Lie algebra of the Poincaré group. It is a Lie algebra extension of the Lie algebra of the Lorentz group. More specifically, the proper (det Λ = 1), orthochronous (Λ^0_0 ≥ 1) part of the Lorentz subgroup (its identity component), SO(1,3)_+^↑, is connected to the identity and is thus provided by the exponentiation exp(i a_μ P^μ) exp((i/2) ω_{μν} M^{μν}) of this Lie algebra. In component form, the Poincaré algebra is given by the commutation relations
{\displaystyle [P_{\mu },P_{\nu }]=0,\qquad {\frac {1}{i}}[M_{\mu \nu },P_{\rho }]=\eta _{\mu \rho }P_{\nu }-\eta _{\nu \rho }P_{\mu },\qquad {\frac {1}{i}}[M_{\mu \nu },M_{\rho \sigma }]=\eta _{\mu \rho }M_{\nu \sigma }-\eta _{\mu \sigma }M_{\nu \rho }-\eta _{\nu \rho }M_{\mu \sigma }+\eta _{\nu \sigma }M_{\mu \rho },}
where P is the generator of translations, M is the generator of Lorentz transformations, and η is the (+, −, −, −) Minkowski metric (see Sign convention).
The bottom commutation relation is the ("homogeneous") Lorentz group, consisting of rotations, J_i = (1/2)ε_{imn}M^{mn}, and boosts, K_i = M_{i0}. In this notation, the entire Poincaré algebra is expressible in noncovariant (but more practical) language as
{\displaystyle {\begin{aligned}[][J_{m},P_{n}]&=i\epsilon _{mnk}P_{k}~,\\[][J_{i},P_{0}]&=0~,\\[][K_{i},P_{k}]&=i\eta _{ik}P_{0}~,\\[][K_{i},P_{0}]&=-iP_{i}~,\\[][J_{m},J_{n}]&=i\epsilon _{mnk}J_{k}~,\\[][J_{m},K_{n}]&=i\epsilon _{mnk}K_{k}~,\\[][K_{m},K_{n}]&=-i\epsilon _{mnk}J_{k}~,\end{aligned}}}
where the bottom line commutator of two boosts is often referred to as a "Wigner rotation". The simplification
[J_m + iK_m, J_n − iK_n] = 0 permits reduction of the Lorentz subalgebra to su(2) ⊕ su(2) and efficient treatment of its associated representations. In terms of the physical parameters, we have
{\displaystyle {\begin{aligned}\left[{\mathcal {H}},p_{i}\right]&=0\\\left[{\mathcal {H}},L_{i}\right]&=0\\\left[{\mathcal {H}},K_{i}\right]&=i\hbar cp_{i}\\\left[p_{i},p_{j}\right]&=0\\\left[p_{i},L_{j}\right]&=i\hbar \epsilon _{ijk}p_{k}\\\left[p_{i},K_{j}\right]&={\frac {i\hbar }{c}}{\mathcal {H}}\delta _{ij}\\\left[L_{i},L_{j}\right]&=i\hbar \epsilon _{ijk}L_{k}\\\left[L_{i},K_{j}\right]&=i\hbar \epsilon _{ijk}K_{k}\\\left[K_{i},K_{j}\right]&=-i\hbar \epsilon _{ijk}L_{k}\end{aligned}}}
The Casimir invariants of this algebra are P_μP^μ and W_μW^μ, where W_μ is the Pauli–Lubanski pseudovector; they serve as labels for the representations of the group.
The Poincaré group is the full symmetry group of any relativistic field theory. As a result, all elementary particles fall in representations of this group. These are usually specified by the four-momentum squared of each particle (i.e. its mass squared) and the intrinsic quantum numbers J^{PC}, where J is the spin quantum number, P is the parity and C is the charge-conjugation quantum number. In practice, charge conjugation and parity are violated by many quantum field theories; where this occurs, P and C are forfeited. Since CPT symmetry holds in quantum field theory, a time-reversal quantum number may be constructed from those given.
As a topological space, the group has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time-reversed and spatially inverted.
== Other dimensions ==
The definitions above can be generalized to arbitrary dimensions in a straightforward manner. The d-dimensional Poincaré group is analogously defined by the semi-direct product
{\displaystyle \operatorname {IO} (1,d-1):=\mathbf {R} ^{1,d-1}\rtimes \operatorname {O} (1,d-1)}
with the analogous multiplication
{\displaystyle (\alpha ,f)\cdot (\beta ,g)=(\alpha +f\cdot \beta ,\;f\cdot g)}.
The Lie algebra retains its form, with indices µ and ν now taking values between 0 and d − 1. The alternative representation in terms of J_i and K_i has no analogue in higher dimensions.
== See also ==
Euclidean group
Galilean group
Representation theory of the Poincaré group
Wigner's classification
Symmetry in quantum mechanics
Pauli–Lubanski pseudovector
Particle physics and representation theory
Continuous spin particle
super-Poincaré algebra
== Notes ==
== References ==
Wu-Ki Tung (1985). Group Theory in Physics. World Scientific Publishing. ISBN 9971-966-57-3.
Weinberg, Steven (1995). The Quantum Theory of Fields. Vol. 1. Cambridge: Cambridge University press. ISBN 978-0-521-55001-7.
L.H. Ryder (1996). Quantum Field Theory (2nd ed.). Cambridge University Press. p. 62. ISBN 0-521-47814-6.
In mathematical physics, the 2D N = 2 superconformal algebra is an infinite-dimensional Lie superalgebra, related to supersymmetry, that occurs in string theory and two-dimensional conformal field theory. It has important applications in mirror symmetry. It was introduced by M. Ademollo, L. Brink, and A. D'Adda et al. (1976) as a gauge algebra of the U(1) fermionic string.
== Definition ==
There are two slightly different ways to describe the N = 2 superconformal algebra, called the N = 2 Ramond algebra and the N = 2 Neveu–Schwarz algebra, which are isomorphic (see below) but differ in the choice of standard basis.
The N = 2 superconformal algebra is the Lie superalgebra with basis of even elements c, L_n, J_n, for n an integer, and odd elements G^+_r, G^−_r, where r ∈ ℤ (for the Ramond basis) or r ∈ 1/2 + ℤ (for the Neveu–Schwarz basis) defined by the following relations:
c is in the center
{\displaystyle [L_{m},L_{n}]=\left(m-n\right)L_{m+n}+{c \over 12}\left(m^{3}-m\right)\delta _{m+n,0}}
{\displaystyle [L_{m},\,J_{n}]=-nJ_{m+n}}
{\displaystyle [J_{m},J_{n}]={c \over 3}m\delta _{m+n,0}}
{\displaystyle \{G_{r}^{+},G_{s}^{-}\}=L_{r+s}+{1 \over 2}\left(r-s\right)J_{r+s}+{c \over 6}\left(r^{2}-{1 \over 4}\right)\delta _{r+s,0}}
{\displaystyle \{G_{r}^{+},G_{s}^{+}\}=0=\{G_{r}^{-},G_{s}^{-}\}}
{\displaystyle [L_{m},G_{r}^{\pm }]=\left({m \over 2}-r\right)G_{r+m}^{\pm }}
{\displaystyle [J_{m},G_{r}^{\pm }]=\pm G_{m+r}^{\pm }}
If r, s ∈ ℤ in these relations, this yields the N = 2 Ramond algebra; while if r, s ∈ 1/2 + ℤ are half-integers, it gives the N = 2 Neveu–Schwarz algebra. The operators L_n generate a Lie subalgebra isomorphic to the Virasoro algebra. Together with the operators G_r = G_r^+ + G_r^−, they generate a Lie superalgebra isomorphic to the super Virasoro algebra, giving the Ramond algebra if r, s are integers and the Neveu–Schwarz algebra otherwise. When represented as operators on a complex inner product space, c is taken to act as multiplication by a real scalar, denoted by the same letter and called the central charge, and the adjoint structure is as follows:
{\displaystyle {L_{n}^{*}=L_{-n},\,\,J_{m}^{*}=J_{-m},\,\,(G_{r}^{\pm })^{*}=G_{-r}^{\mp },\,\,c^{*}=c}}
== Properties ==
The N = 2 Ramond and Neveu–Schwarz algebras are isomorphic by the spectral shift isomorphism
α of Schwimmer & Seiberg (1987):
{\displaystyle \alpha (L_{n})=L_{n}+{1 \over 2}J_{n}+{c \over 24}\delta _{n,0}}
{\displaystyle \alpha (J_{n})=J_{n}+{c \over 6}\delta _{n,0}}
{\displaystyle \alpha (G_{r}^{\pm })=G_{r\pm {1 \over 2}}^{\pm }}
with inverse:
{\displaystyle \alpha ^{-1}(L_{n})=L_{n}-{1 \over 2}J_{n}+{c \over 24}\delta _{n,0}}
{\displaystyle \alpha ^{-1}(J_{n})=J_{n}-{c \over 6}\delta _{n,0}}
{\displaystyle \alpha ^{-1}(G_{r}^{\pm })=G_{r\mp {1 \over 2}}^{\pm }}
In the N = 2 Ramond algebra, the zero mode operators L_0, J_0, G_0^± and the constants form a five-dimensional Lie superalgebra. They satisfy the same relations as the fundamental operators in Kähler geometry, with L_0 corresponding to the Laplacian, J_0 the degree operator, and G_0^± the ∂ and ∂̄ operators.
Even integer powers of the spectral shift give automorphisms of the N = 2 superconformal algebras, called spectral shift automorphisms. Another automorphism
β, of period two, is given by
{\displaystyle \beta (L_{m})=L_{m},}
{\displaystyle \beta (J_{m})=-J_{m}-{c \over 3}\delta _{m,0},}
{\displaystyle \beta (G_{r}^{\pm })=G_{r}^{\mp }}
In terms of Kähler operators, β corresponds to conjugating the complex structure. Since βαβ^{−1} = α^{−1}, the automorphisms α² and β generate a group of automorphisms of the N = 2 superconformal algebra isomorphic to the infinite dihedral group ℤ ⋊ ℤ₂.
Twisted operators \mathcal{L}_n = L_n + (1/2)(n + 1)J_n were introduced by Eguchi & Yang (1990) and satisfy:
{\displaystyle [{\mathcal {L}}_{m},{\mathcal {L}}_{n}]=(m-n){\mathcal {L}}_{m+n}}
so that these operators satisfy the Virasoro relation with central charge 0. The constant c still appears in the relations for J_m and the modified relations
{\displaystyle [{\mathcal {L}}_{m},J_{n}]=-nJ_{m+n}+{c \over 6}\left(m^{2}+m\right)\delta _{m+n,0}}
{\displaystyle \{G_{r}^{+},G_{s}^{-}\}=2{\mathcal {L}}_{r+s}-2sJ_{r+s}+{c \over 3}\left(m^{2}+m\right)\delta _{m+n,0}}
== Constructions ==
=== Free field construction ===
Green, Schwarz, and Witten (1988a, 1988b) give a construction using two commuting real bosonic fields
(a_n), (b_n)
{\displaystyle {[a_{m},a_{n}]={m \over 2}\delta _{m+n,0},\,\,\,\,[b_{m},b_{n}]={m \over 2}\delta _{m+n,0}},\,\,\,\,a_{n}^{*}=a_{-n},\,\,\,\,b_{n}^{*}=b_{-n}}
and a complex fermionic field (e_r)
{\displaystyle \{e_{r},e_{s}^{*}\}=\delta _{r,s},\,\,\,\,\{e_{r},e_{s}\}=0.}
L_n is defined as the sum of the Virasoro operators naturally associated with each of the three systems
{\displaystyle L_{n}=\sum _{m}:a_{-m+n}a_{m}:+\sum _{m}:b_{-m+n}b_{m}:+\sum _{r}\left(r+{n \over 2}\right):e_{r}^{*}e_{n+r}:}
where normal ordering has been used for bosons and fermions.
The current operator J_n is defined by the standard construction from fermions
{\displaystyle J_{n}=\sum _{r}:e_{r}^{*}e_{n+r}:}
and the two supersymmetric operators G_r^± by
{\displaystyle G_{r}^{+}=\sum (a_{-m}+ib_{-m})\cdot e_{r+m},\,\,\,\,G_{r}^{-}=\sum (a_{r+m}-ib_{r+m})\cdot e_{m}^{*}}
This yields an N = 2 Neveu–Schwarz algebra with c = 3.
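This value of the central charge is consistent with the usual free-field counting, which can serve as a quick check (each real boson contributes 1 and each real fermion 1/2, so the complex fermion contributes 1):
{\displaystyle c=(1+1)+\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\right)=3,}
with the first bracket coming from the bosons a_n, b_n and the second from the complex fermion e_r.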
=== SU(2) supersymmetric coset construction ===
Di Vecchia et al. (1986) gave a coset construction of the N = 2 superconformal algebras, generalizing the coset constructions of Goddard, Kent & Olive (1986) for the discrete series representations of the Virasoro and super Virasoro algebra. Given a representation of the affine Kac–Moody algebra of SU(2) at level
ℓ with basis E_n, F_n, H_n satisfying
{\displaystyle [H_{m},H_{n}]=2m\ell \delta _{n+m,0},}
{\displaystyle [E_{m},F_{n}]=H_{m+n}+m\ell \delta _{m+n,0},}
{\displaystyle [H_{m},E_{n}]=2E_{m+n},}
{\displaystyle [H_{m},F_{n}]=-2F_{m+n},}
the supersymmetric generators are defined by
{\displaystyle G_{r}^{+}=(\ell /2+1)^{-1/2}\sum E_{-m}\cdot e_{m+r},\,\,\,G_{r}^{-}=(\ell /2+1)^{-1/2}\sum F_{r+m}\cdot e_{m}^{*}.}
This yields the N = 2 superconformal algebra with
{\displaystyle c=3\ell /(\ell +2).}
The algebra commutes with the bosonic operators
{\displaystyle X_{n}=H_{n}-2\sum _{r}:e_{r}^{*}e_{n+r}:.}
The space of physical states consists of eigenvectors of X_0 simultaneously annihilated by the X_n's for positive n and the supercharge operator
{\displaystyle Q=G_{1/2}^{+}+G_{-1/2}^{-}} (Neveu–Schwarz)
{\displaystyle Q=G_{0}^{+}+G_{0}^{-}.} (Ramond)
The supercharge operator commutes with the action of the affine Weyl group and the physical states lie in a single orbit of this group, a fact which implies the Weyl-Kac character formula.
=== Kazama–Suzuki supersymmetric coset construction ===
Kazama & Suzuki (1989) generalized the SU(2) coset construction to any pair consisting of a simple compact Lie group
G and a closed subgroup H of maximal rank, i.e. containing a maximal torus T of G, with the additional condition that the dimension of the centre of H is non-zero. In this case the compact Hermitian symmetric space G/H is a Kähler manifold, for example when H = T. The physical states lie in a single orbit of the affine Weyl group, which again implies the Weyl–Kac character formula for the affine Kac–Moody algebra of G.
== See also ==
Virasoro algebra
Super Virasoro algebra
Coset construction
Type IIB string theory
== Notes ==
== References ==
Ademollo, M.; Brink, L.; D'Adda, A.; D'Auria, R.; Napolitano, E.; Sciuto, S.; Giudice, E. Del; Vecchia, P. Di; Ferrara, S.; Gliozzi, F.; Musto, R.; Pettorino, R. (1976), "Supersymmetric strings and colour confinement", Physics Letters B, 62 (1): 105–110, Bibcode:1976PhLB...62..105A, doi:10.1016/0370-2693(76)90061-7
Boucher, W.; Friedan, D; Kent, A. (1986), "Determinant formulae and unitarity for the N = 2 superconformal algebras in two dimensions or exact results on string compactification", Phys. Lett. B, 172 (3–4): 316–322, Bibcode:1986PhLB..172..316B, doi:10.1016/0370-2693(86)90260-1
Di Vecchia, P.; Petersen, J. L.; Yu, M.; Zheng, H. B. (1986), "Explicit construction of unitary representations of the N = 2 superconformal algebra", Phys. Lett. B, 174 (3): 280–284, Bibcode:1986PhLB..174..280D, doi:10.1016/0370-2693(86)91099-3
Eguchi, Tohru; Yang, Sung-Kil (1990), "N = 2 superconformal models as topological field theories", Mod. Phys. Lett. A, 5 (21): 1693–1701, Bibcode:1990MPLA....5.1693E, doi:10.1142/S0217732390001943
Goddard, P.; Kent, A.; Olive, D. (1986), "Unitary representations of the Virasoro and super-Virasoro algebras", Comm. Math. Phys., 103 (1): 105–119, Bibcode:1986CMaPh.103..105G, doi:10.1007/bf01464283, S2CID 91181508
Green, Michael B.; Schwarz, John H.; Witten, Edward (1988a), Superstring theory, Volume 1: Introduction, Cambridge University Press, ISBN 0-521-35752-7
Green, Michael B.; Schwarz, John H.; Witten, Edward (1988b), Superstring theory, Volume 2: Loop amplitudes, anomalies and phenomenology, Cambridge University Press, Bibcode:1987cup..bookR....G, ISBN 0-521-35753-5
Kazama, Yoichi; Suzuki, Hisao (1989), "New N = 2 superconformal field theories and superstring compactification", Nuclear Physics B, 321 (1): 232–268, Bibcode:1989NuPhB.321..232K, doi:10.1016/0550-3213(89)90250-2
Schwimmer, A.; Seiberg, N. (1987), "Comments on the N = 2, 3, 4 superconformal algebras in two dimensions", Phys. Lett. B, 184 (2–3): 191–196, Bibcode:1987PhLB..184..191S, doi:10.1016/0370-2693(87)90566-1
Voisin, Claire (1999), Mirror symmetry, SMF/AMS texts and monographs, vol. 1, American Mathematical Society, ISBN 0-8218-1947-X
Wassermann, A. J. (2010) [1998]. "Lecture notes on Kac-Moody and Virasoro algebras". arXiv:1004.1287.
West, Peter C. (1990), Introduction to supersymmetry and supergravity (2nd ed.), World Scientific, pp. 337–8, ISBN 981-02-0099-4
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms the choice of adversary makes no difference, since the adversary can predict the algorithm's moves exactly; every adversary is then as powerful as the adaptive offline adversary. For randomized online algorithms competitiveness can depend upon the adversary model used.
== Common adversaries ==
The three common adversaries are the oblivious adversary, the adaptive online adversary, and the adaptive offline adversary.
The oblivious adversary is sometimes referred to as the weak adversary. This adversary knows the algorithm's code, but does not get to know the randomized results of the algorithm.
The adaptive online adversary is sometimes called the medium adversary. This adversary must make its own decision before it is allowed to know the decision of the algorithm.
The adaptive offline adversary is sometimes called the strong adversary. This adversary knows everything, even the random number generator. This adversary is so strong that randomization does not help against it.
== Important results ==
From S. Ben-David, A. Borodin, R. Karp, G. Tardos, A. Wigderson we have:
If there is a randomized algorithm that is α-competitive against any adaptive offline adversary then there also exists an α-competitive deterministic algorithm.
If G is a c-competitive randomized algorithm against any adaptive online adversary, and there is a randomized d-competitive algorithm against any oblivious adversary, then G is a randomized (c * d)-competitive algorithm against any adaptive offline adversary.
== See also ==
Competitive analysis (online algorithm)
K-server problem
Online algorithm
== References ==
Borodin, A.; El-Yaniv, R. (1998). Online Computation and Competitive Analysis. Cambridge University Press. ISBN 978-0-521-56392-5.
S. Ben-David; A. Borodin; R. Karp; G. Tardos; A. Wigderson. (1994). "On the Power of Randomization in On-line Algorithms" (PDF). Algorithmica. 11: 2–14. doi:10.1007/BF01294260.
== External links ==
Bibliography of papers on online algorithms
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
== Naïve algorithm ==
A formula for calculating the variance of an entire population of size N is:
{\displaystyle \sigma ^{2}={\overline {(x^{2})}}-{\bar {x}}^{2}={\frac {\sum _{i=1}^{N}x_{i}^{2}}{N}}-\left({\frac {\sum _{i=1}^{N}x_{i}}{N}}\right)^{2}}
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:
{\displaystyle s^{2}=\left({\frac {\sum _{i=1}^{n}x_{i}^{2}}{n}}-\left({\frac {\sum _{i=1}^{n}x_{i}}{n}}\right)^{2}\right)\cdot {\frac {n}{n-1}}.}
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
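A minimal Python sketch of that computation (the function name is illustrative, not taken from any particular library) is:

def naive_variance(data):
    # Naive single-pass estimate of the sample variance, using Bessel's correction.
    n = 0
    total = 0.0      # running sum of the values (Sum)
    total_sq = 0.0   # running sum of the squared values (SumSq)
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    return (total_sq - total * total / n) / (n - 1)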
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
Because SumSq and (Sum×Sum)/n can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice, and several alternate, numerically stable algorithms have been proposed. This is particularly bad if the standard deviation is small relative to the mean.
=== Computing shifted data ===
The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula.
{\displaystyle \operatorname {Var} (X-K)=\operatorname {Var} (X),}
with K any constant, which leads to the new formula
{\displaystyle \sigma ^{2}={\frac {\sum _{i=1}^{n}(x_{i}-K)^{2}-(\sum _{i=1}^{n}(x_{i}-K))^{2}/n}{n-1}}.}
The closer K is to the mean value the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values (x_i − K) are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, so no cancellation can occur.
If just the first sample is taken as K the algorithm can be written in Python programming language as
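A possible sketch of that shifted-data computation, with K taken to be the first sample, is:

def shifted_data_variance(data):
    # Variance of the data shifted by K = data[0]; the shift avoids catastrophic cancellation.
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = 0
    ex = 0.0    # running sum of (x - K)
    ex2 = 0.0   # running sum of (x - K)**2
    for x in data:
        n += 1
        ex += x - K
        ex2 += (x - K) ** 2
    # The variance is unaffected by the constant shift K.
    return (ex2 - ex * ex / n) / (n - 1)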
This formula also facilitates the incremental computation that can be expressed as
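One way to express that incremental form in Python is sketched below; the class and method names (add_variable, remove_variable) are illustrative only:

class ShiftedVariance:
    # Incremental variance with a fixed shift K (the first value seen).
    def __init__(self):
        self.K = None
        self.n = 0
        self.ex = 0.0    # running sum of (x - K)
        self.ex2 = 0.0   # running sum of (x - K)**2

    def add_variable(self, x):
        if self.n == 0:
            self.K = x
        self.n += 1
        self.ex += x - self.K
        self.ex2 += (x - self.K) ** 2

    def remove_variable(self, x):
        self.n -= 1
        self.ex -= x - self.K
        self.ex2 -= (x - self.K) ** 2

    def get_mean(self):
        return self.K + self.ex / self.n

    def get_variance(self):
        return (self.ex2 - self.ex * self.ex / self.n) / (self.n - 1)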
== Two-pass algorithm ==
An alternative approach, using a different formula for the variance, first computes the sample mean,
{\displaystyle {\bar {x}}={\frac {\sum _{j=1}^{n}x_{j}}{n}},}
and then computes the sum of the squares of the differences from the mean,
{\displaystyle {\text{sample variance}}=s^{2}={\dfrac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}{n-1}},}
where s is the standard deviation. This is given by the following code:
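For example, as a short Python sketch:

def two_pass_variance(data):
    # First pass: the mean; second pass: the sum of squared deviations.
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)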
This algorithm is numerically stable if n is small. However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
== Welford's online algorithm ==
It is often useful to be able to compute the variance in a single pass, inspecting each value
x
i
{\displaystyle x_{i}}
only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.
The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element xn. Here,
{\textstyle {\overline {x}}_{n}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} denotes the sample mean of the first n samples (x_1, …, x_n), {\textstyle \sigma _{n}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}_{n}\right)^{2}} their biased sample variance, and {\textstyle s_{n}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}_{n}\right)^{2}}
their unbiased sample variance.
{\displaystyle {\bar {x}}_{n}={\frac {(n-1)\,{\bar {x}}_{n-1}+x_{n}}{n}}={\bar {x}}_{n-1}+{\frac {x_{n}-{\bar {x}}_{n-1}}{n}}}
{\displaystyle \sigma _{n}^{2}={\frac {(n-1)\,\sigma _{n-1}^{2}+(x_{n}-{\bar {x}}_{n-1})(x_{n}-{\bar {x}}_{n})}{n}}=\sigma _{n-1}^{2}+{\frac {(x_{n}-{\bar {x}}_{n-1})(x_{n}-{\bar {x}}_{n})-\sigma _{n-1}^{2}}{n}}.}
{\displaystyle s_{n}^{2}={\frac {n-2}{n-1}}\,s_{n-1}^{2}+{\frac {(x_{n}-{\bar {x}}_{n-1})^{2}}{n}}=s_{n-1}^{2}+{\frac {(x_{n}-{\bar {x}}_{n-1})^{2}}{n}}-{\frac {s_{n-1}^{2}}{n-1}},\quad n>1}
These formulas suffer from numerical instability, as they repeatedly subtract a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, {\textstyle \sum _{i=1}^{n}(x_{i}-{\bar {x}}_{n})^{2}}, here denoted M_{2,n}:
{\displaystyle {\begin{aligned}M_{2,n}&=M_{2,n-1}+(x_{n}-{\bar {x}}_{n-1})(x_{n}-{\bar {x}}_{n})\\[4pt]\sigma _{n}^{2}&={\frac {M_{2,n}}{n}}\\[4pt]s_{n}^{2}&={\frac {M_{2,n}}{n-1}}\end{aligned}}}
This algorithm was found by Welford, and it has been thoroughly analyzed. It is also common to denote
M_k = x̄_k and S_k = M_{2,k}.
An example Python implementation for Welford's algorithm is given below.
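The listing below is a straightforward Python rendering of the update rules above; it returns the mean together with the biased and unbiased variance estimates:

def welford_variance(data):
    # Welford's online algorithm.
    n = 0
    mean = 0.0
    M2 = 0.0   # sum of squares of differences from the current mean
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        delta2 = x - mean          # uses the updated mean
        M2 += delta * delta2
    if n < 2:
        return mean, float("nan"), float("nan")
    return mean, M2 / n, M2 / (n - 1)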
This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.
== Weighted incremental algorithm ==
The algorithm can be extended to handle unequal sample weights, replacing the simple counter n with the sum of weights seen so far. West (1979) suggests this incremental algorithm:
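The following Python sketch is written in the spirit of that weighted update (it is an illustration of the idea, not West's original listing); the Bessel-style correction shown is the one appropriate for frequency weights:

def weighted_incremental_variance(data_weight_pairs):
    # One-pass weighted mean and variance, with the counter n replaced by the weight sum.
    w_sum = 0.0
    mean = 0.0
    S = 0.0   # weighted sum of squares of differences from the current mean
    for x, w in data_weight_pairs:
        w_sum += w
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S += w * (x - mean_old) * (x - mean)
    population_variance = S / w_sum
    sample_variance = S / (w_sum - 1)   # frequency-weight correction
    return mean, population_variance, sample_variance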
== Parallel algorithm ==
Chan et al. note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets
A and B:
{\displaystyle {\begin{aligned}n_{AB}&=n_{A}+n_{B}\\\delta &={\bar {x}}_{B}-{\bar {x}}_{A}\\{\bar {x}}_{AB}&={\bar {x}}_{A}+\delta \cdot {\frac {n_{B}}{n_{AB}}}\\M_{2,AB}&=M_{2,A}+M_{2,B}+\delta ^{2}\cdot {\frac {n_{A}n_{B}}{n_{AB}}}\\\end{aligned}}}
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
Chan's method for estimating the mean is numerically unstable when n_A ≈ n_B and both are large, because the numerical error in δ = x̄_B − x̄_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer {\textstyle {\bar {x}}_{AB}={\frac {n_{A}{\bar {x}}_{A}+n_{B}{\bar {x}}_{B}}{n_{AB}}}}.
This can be generalized to allow parallelization with AVX, with GPUs, and computer clusters, and to covariance.
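A small Python sketch of the merge step is given below; it represents a partial result as a (count, mean, M2) tuple, which is an assumption made here for illustration, and it uses the weighted form of the mean update noted above:

def merge_aggregates(agg_a, agg_b):
    # Combine (count, mean, M2) aggregates computed on disjoint parts of the data.
    n_a, mean_a, M2_a = agg_a
    n_b, mean_b, M2_b = agg_b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = (n_a * mean_a + n_b * mean_b) / n          # stable when n_a and n_b are both large
    M2 = M2_a + M2_b + delta * delta * n_a * n_b / n
    return n, mean, M2

def aggregate(chunk):
    n = len(chunk)
    mean = sum(chunk) / n
    M2 = sum((x - mean) ** 2 for x in chunk)
    return n, mean, M2

# Merging the statistics of two chunks reproduces those of the full data set.
n, mean, M2 = merge_aggregates(aggregate([4.0, 7.0]), aggregate([13.0, 16.0]))
assert abs(mean - 10.0) < 1e-12 and abs(M2 / (n - 1) - 30.0) < 1e-12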
== Example ==
Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.
Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.
While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
== Higher-order statistics ==
Terriberry extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:
{\displaystyle {\begin{aligned}M_{3,X}=M_{3,A}+M_{3,B}&{}+\delta ^{3}{\frac {n_{A}n_{B}(n_{A}-n_{B})}{n_{X}^{2}}}+3\delta {\frac {n_{A}M_{2,B}-n_{B}M_{2,A}}{n_{X}}}\\[6pt]M_{4,X}=M_{4,A}+M_{4,B}&{}+\delta ^{4}{\frac {n_{A}n_{B}\left(n_{A}^{2}-n_{A}n_{B}+n_{B}^{2}\right)}{n_{X}^{3}}}\\[6pt]&{}+6\delta ^{2}{\frac {n_{A}^{2}M_{2,B}+n_{B}^{2}M_{2,A}}{n_{X}^{2}}}+4\delta {\frac {n_{A}M_{3,B}-n_{B}M_{3,A}}{n_{X}}}\end{aligned}}}
Here the M_k are again the sums of powers of differences from the mean {\textstyle \sum (x-{\overline {x}})^{k}}, giving
{\displaystyle {\begin{aligned}&{\text{skewness}}=g_{1}={\frac {{\sqrt {n}}M_{3}}{M_{2}^{3/2}}},\\[4pt]&{\text{kurtosis}}=g_{2}={\frac {nM_{4}}{M_{2}^{2}}}-3.\end{aligned}}}
For the incremental case (i.e., B = {x}), this simplifies to:
{\displaystyle {\begin{aligned}\delta &=x-m\\[5pt]m'&=m+{\frac {\delta }{n}}\\[5pt]M_{2}'&=M_{2}+\delta ^{2}{\frac {n-1}{n}}\\[5pt]M_{3}'&=M_{3}+\delta ^{3}{\frac {(n-1)(n-2)}{n^{2}}}-{\frac {3\delta M_{2}}{n}}\\[5pt]M_{4}'&=M_{4}+{\frac {\delta ^{4}(n-1)(n^{2}-3n+3)}{n^{3}}}+{\frac {6\delta ^{2}M_{2}}{n^{2}}}-{\frac {4\delta M_{3}}{n}}\end{aligned}}}
By preserving the value δ/n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.
An example of the online algorithm for kurtosis implemented as described is:
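One possible Python version of that online update, keeping the running mean and the central-moment accumulators M2, M3 and M4, is:

def online_kurtosis(data):
    # One-pass excess kurtosis via running central moments (update order M4, M3, M2 matters).
    n = 0
    mean = M2 = M3 = M4 = 0.0
    for x in data:
        n1 = n
        n += 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n * delta_n
        term1 = delta * delta_n * n1
        mean += delta_n
        M4 += term1 * delta_n2 * (n * n - 3 * n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 += term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 += term1
    return n * M4 / (M2 * M2) - 3   # excess kurtosis g2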
Pébaÿ further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and subsequently Pébaÿ et al. for weighted and compound moments. One can also find there similar formulas for covariance.
Choi and Sweetman offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin are counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:
{\displaystyle H(x_{k})={\frac {h(x_{k})}{A}}}
where h(x_k) and H(x_k) represent the frequency and the relative frequency at bin x_k and {\textstyle A=\sum _{k=1}^{K}h(x_{k})\,\Delta x_{k}} is the total area of the histogram. After this normalization, the n raw moments and central moments of x(t) can be calculated from the relative histogram:
{\displaystyle m_{n}^{(h)}=\sum _{k=1}^{K}x_{k}^{n}H(x_{k})\,\Delta x_{k}={\frac {1}{A}}\sum _{k=1}^{K}x_{k}^{n}h(x_{k})\,\Delta x_{k}}
{\displaystyle \theta _{n}^{(h)}=\sum _{k=1}^{K}{\Big (}x_{k}-m_{1}^{(h)}{\Big )}^{n}\,H(x_{k})\,\Delta x_{k}={\frac {1}{A}}\sum _{k=1}^{K}{\Big (}x_{k}-m_{1}^{(h)}{\Big )}^{n}h(x_{k})\,\Delta x_{k}}
where the superscript (h) indicates the moments are calculated from the histogram. For constant bin width Δx_k = Δx these two expressions can be simplified using I = A/Δx:
{\displaystyle m_{n}^{(h)}={\frac {1}{I}}\sum _{k=1}^{K}x_{k}^{n}\,h(x_{k})}
{\displaystyle \theta _{n}^{(h)}={\frac {1}{I}}\sum _{k=1}^{K}{\Big (}x_{k}-m_{1}^{(h)}{\Big )}^{n}h(x_{k})}
The second approach from Choi and Sweetman is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.
If Q sets of statistical moments are known: (γ_{0,q}, μ_q, σ_q², α_{3,q}, α_{4,q}) for q = 1, 2, …, Q, then each γ_n can be expressed in terms of the equivalent n raw moments:
{\displaystyle \gamma _{n,q}=m_{n,q}\gamma _{0,q}\qquad \quad {\textrm {for}}\quad n=1,2,3,4\quad {\text{ and }}\quad q=1,2,\dots ,Q}
where γ_{0,q} is generally taken to be the duration of the q-th time-history, or the number of points if Δt is constant.
The benefit of expressing the statistical moments in terms of γ is that the Q sets can be combined by addition, and there is no upper limit on the value of Q.
{\displaystyle \gamma _{n,c}=\sum _{q=1}^{Q}\gamma _{n,q}\quad \quad {\text{for }}n=0,1,2,3,4}
where the subscript c represents the concatenated time-history or combined γ. These combined values of γ can then be inversely transformed into raw moments representing the complete concatenated time-history
{\displaystyle m_{n,c}={\frac {\gamma _{n,c}}{\gamma _{0,c}}}\quad {\text{for }}n=1,2,3,4}
Known relationships between the raw moments (m_n) and the central moments (θ_n = E[(x − μ)^n]) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:
{\displaystyle \mu _{c}=m_{1,c}\qquad \sigma _{c}^{2}=\theta _{2,c}\qquad \alpha _{3,c}={\frac {\theta _{3,c}}{\sigma _{c}^{3}}}\qquad \alpha _{4,c}={\frac {\theta _{4,c}}{\sigma _{c}^{4}}}-3}
== Covariance ==
Very similar algorithms can be used to compute the covariance.
=== Naïve algorithm ===
The naïve algorithm is
{\displaystyle \operatorname {Cov} (X,Y)={\frac {\sum _{i=1}^{n}x_{i}y_{i}-(\sum _{i=1}^{n}x_{i})(\sum _{i=1}^{n}y_{i})/n}{n}}.}
For the algorithm above, one could use the following Python code:
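A direct Python sketch of this formula is:

def naive_covariance(data1, data2):
    # Population covariance from the sums and the sum of products.
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum(x * y for x, y in zip(data1, data2))
    return (sum12 - sum1 * sum2 / n) / n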
=== With estimate of the mean ===
As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values
k_x and k_y, it can be written:
{\displaystyle \operatorname {Cov} (X,Y)=\operatorname {Cov} (X-k_{x},Y-k_{y})={\dfrac {\sum _{i=1}^{n}(x_{i}-k_{x})(y_{i}-k_{y})-(\sum _{i=1}^{n}(x_{i}-k_{x}))(\sum _{i=1}^{n}(y_{i}-k_{y}))/n}{n}}.}
and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:
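Written out as a Python sketch, with the first value of each data set as the shift:

def shifted_data_covariance(data_x, data_y):
    # Population covariance with both variables shifted by their first values.
    n = len(data_x)
    if n < 2:
        return 0.0
    kx, ky = data_x[0], data_y[0]
    ex = ey = exy = 0.0
    for x, y in zip(data_x, data_y):
        ex += x - kx
        ey += y - ky
        exy += (x - kx) * (y - ky)
    return (exy - ex * ey / n) / n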
=== Two-pass ===
The two-pass algorithm first computes the sample means, and then the covariance:
{\displaystyle {\bar {x}}=\sum _{i=1}^{n}x_{i}/n}
{\displaystyle {\bar {y}}=\sum _{i=1}^{n}y_{i}/n}
{\displaystyle \operatorname {Cov} (X,Y)={\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{n}}.}
The two-pass algorithm may be written as:
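For instance, as a Python sketch:

def two_pass_covariance(data1, data2):
    # First pass: the two means; second pass: the averaged cross products.
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    covariance = 0.0
    for x, y in zip(data1, data2):
        covariance += (x - mean1) * (y - mean2) / n
    return covariance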
A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums
∑_i x_i and ∑_i y_i should be zero, but the second pass compensates for any small error.
=== Online ===
A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes co-moment
{\textstyle C_{n}=\sum _{i=1}^{n}(x_{i}-{\bar {x}}_{n})(y_{i}-{\bar {y}}_{n})}:
{\displaystyle {\begin{alignedat}{2}{\bar {x}}_{n}&={\bar {x}}_{n-1}&\,+\,&{\frac {x_{n}-{\bar {x}}_{n-1}}{n}}\\[5pt]{\bar {y}}_{n}&={\bar {y}}_{n-1}&\,+\,&{\frac {y_{n}-{\bar {y}}_{n-1}}{n}}\\[5pt]C_{n}&=C_{n-1}&\,+\,&(x_{n}-{\bar {x}}_{n})(y_{n}-{\bar {y}}_{n-1})\\[5pt]&=C_{n-1}&\,+\,&(x_{n}-{\bar {x}}_{n-1})(y_{n}-{\bar {y}}_{n})\end{alignedat}}}
The apparent asymmetry in that last equation is due to the fact that
{\textstyle (x_{n}-{\bar {x}}_{n})={\frac {n-1}{n}}(x_{n}-{\bar {x}}_{n-1})}, so both update terms are equal to {\textstyle {\frac {n-1}{n}}(x_{n}-{\bar {x}}_{n-1})(y_{n}-{\bar {y}}_{n-1})}. Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.
Thus the covariance can be computed as
{\displaystyle {\begin{aligned}\operatorname {Cov} _{N}(X,Y)={\frac {C_{N}}{N}}&={\frac {\operatorname {Cov} _{N-1}(X,Y)\cdot (N-1)+(x_{n}-{\bar {x}}_{n})(y_{n}-{\bar {y}}_{n-1})}{N}}\\&={\frac {\operatorname {Cov} _{N-1}(X,Y)\cdot (N-1)+(x_{n}-{\bar {x}}_{n-1})(y_{n}-{\bar {y}}_{n})}{N}}\\&={\frac {\operatorname {Cov} _{N-1}(X,Y)\cdot (N-1)+{\frac {N-1}{N}}(x_{n}-{\bar {x}}_{n-1})(y_{n}-{\bar {y}}_{n-1})}{N}}\\&={\frac {\operatorname {Cov} _{N-1}(X,Y)\cdot (N-1)+{\frac {N}{N-1}}(x_{n}-{\bar {x}}_{n})(y_{n}-{\bar {y}}_{n})}{N}}.\end{aligned}}}
A small modification can also be made to compute the weighted covariance:
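A Python sketch of such a weighted one-pass update, mirroring Welford's algorithm with the counter replaced by the running weight sum, might read:

def weighted_online_covariance(data1, data2, weights):
    # One-pass weighted (population) covariance via the weighted co-moment C.
    mean_x = mean_y = 0.0
    w_sum = 0.0
    C = 0.0
    for x, y, w in zip(data1, data2, weights):
        w_sum += w
        dx = x - mean_x                      # deviation from the old x-mean
        mean_x += (w / w_sum) * dx
        mean_y += (w / w_sum) * (y - mean_y)
        C += w * dx * (y - mean_y)           # old x-deviation times new y-deviation
    return C / w_sum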
Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:
{\displaystyle C_{X}=C_{A}+C_{B}+({\bar {x}}_{A}-{\bar {x}}_{B})({\bar {y}}_{A}-{\bar {y}}_{B})\cdot {\frac {n_{A}n_{B}}{n_{X}}}.}
=== Weighted batched version ===
A version of the weighted online algorithm that does batched updates also exists: let w_1, …, w_N
denote the weights, and write
{\displaystyle {\begin{alignedat}{2}{\bar {x}}_{n+k}&={\bar {x}}_{n}&\,+\,&{\frac {\sum _{i=n+1}^{n+k}w_{i}(x_{i}-{\bar {x}}_{n})}{\sum _{i=1}^{n+k}w_{i}}}\\{\bar {y}}_{n+k}&={\bar {y}}_{n}&\,+\,&{\frac {\sum _{i=n+1}^{n+k}w_{i}(y_{i}-{\bar {y}}_{n})}{\sum _{i=1}^{n+k}w_{i}}}\\C_{n+k}&=C_{n}&\,+\,&\sum _{i=n+1}^{n+k}w_{i}(x_{i}-{\bar {x}}_{n+k})(y_{i}-{\bar {y}}_{n})\\&=C_{n}&\,+\,&\sum _{i=n+1}^{n+k}w_{i}(x_{i}-{\bar {x}}_{n})(y_{i}-{\bar {y}}_{n+k})\\\end{alignedat}}}
The covariance can then be computed as
{\displaystyle \operatorname {Cov} _{N}(X,Y)={\frac {C_{N}}{\sum _{i=1}^{N}w_{i}}}}
== See also ==
Kahan summation algorithm
Squared deviations from the mean
Yamartino method
== References ==
== External links ==
Weisstein, Eric W. "Sample Variance Computation". MathWorld.
In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
The page replacement problem is a typical online problem from the competitive analysis perspective in the sense that the optimal deterministic algorithm is known.
== History ==
Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s.
That mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:
Size of primary storage has increased by multiple orders of magnitude. With several gigabytes of primary memory, algorithms that require a periodic check of each and every memory frame are becoming less and less practical.
Memory hierarchies have grown taller. The cost of a CPU cache miss is far more expensive. This exacerbates the previous problem.
Locality of reference of user software has weakened. This is mostly attributed to the spread of object-oriented programming techniques that favor large numbers of small functions, use of sophisticated data structures like trees and hash tables that tend to result in chaotic memory reference patterns, and the advent of garbage collection that drastically changed memory access behavior of applications.
Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.
== Local vs. global replacement ==
Replacement algorithms can be local or global.
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition).
A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. The most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.
== Detecting which pages are referenced and modified ==
Modern general purpose computers and some embedded processors have support for virtual memory. Each process has its own virtual address space. A page table maps a subset of the process virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page in the page table. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means:
By clearing the access bit in pages present in the process' page table. After some time, the OS scans the page table looking for pages that had the access bit set by the CPU. This is fast, because the access bit is set automatically by the CPU, but inaccurate, because the OS does not immediately receive notice of the access, nor does it have information about the order in which the process accessed these pages.
By removing pages from the process' page table without necessarily removing them from physical memory. The next access to that page is detected immediately because it causes a page fault. This is slow, because a page fault involves a context switch to the OS, a software lookup for the corresponding physical address, modification of the page table and a context switch back to the process, but accurate, because the access is detected immediately after it occurs.
Directly when the process makes system calls that potentially access the page cache like read and write in POSIX.
== Precleaning ==
Most replacement algorithms simply return the target page as their result. This means that if the target page is dirty (that is, contains data that have to be written to the stable storage before the page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to clean the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full duplex channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.
To deal with this situation, various precleaning policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are (likely) to be replaced soon. The idea is that by the time the precleaned page is actually selected for the replacement, the I/O will complete and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced next. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.
== The (h,k)-paging problem ==
The (h,k)-paging problem is a generalization of the model of the paging problem: let h and k be positive integers such that h ≤ k. We measure the performance of an algorithm with a cache of size h ≤ k relative to the theoretically optimal page replacement algorithm. If h < k, we provide the optimal page replacement algorithm with strictly fewer resources.
The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically, separately parameterizing the cache size of the online algorithm and optimal algorithm.
== Marking algorithms ==
Marking algorithms form a general class of paging algorithms. Each page is associated with a bit called its mark. Initially, we set all pages as unmarked. During a stage (a period of operation or a sequence of requests) of page requests, we mark a page when it is first requested in this stage. A marking algorithm is an algorithm that never pages out a marked page.
If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h, where h ≤ k, then ALG is k/(k − h + 1)-competitive. So every marking algorithm attains the k/(k − h + 1)-competitive ratio.
LRU is a marking algorithm while FIFO is not a marking algorithm.
== Conservative algorithms ==
An algorithm is conservative if, on any consecutive request sequence containing k or fewer distinct page references, it incurs k or fewer page faults.
If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h ≤ k, then ALG is k/(k − h + 1)-competitive. So every conservative algorithm attains the k/(k − h + 1)-competitive ratio.
LRU, FIFO and CLOCK are conservative algorithms.
== Page replacement algorithms ==
There are a variety of page replacement algorithms:
=== The theoretically optimal page replacement algorithm ===
The theoretically optimal page replacement algorithm (also known as OPT, clairvoyant replacement algorithm, or Bélády's optimal page replacement policy) is an algorithm that works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.
This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is either known beforehand and is amenable to static analysis of its memory reference patterns, or only a class of applications allowing run-time analysis. Despite this limitation, algorithms exist that can offer near-optimal performance — the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. This algorithm can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
Analysis of the paging problem has also been done in the field of online algorithms. Efficiency of randomized online algorithms for the paging problem is measured using amortized analysis.
=== Not recently used ===
The not recently used (NRU) page replacement algorithm is an algorithm that favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.
At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the operating system divides the pages into four classes:
3. referenced, modified
2. referenced, not modified
1. not referenced, modified
0. not referenced, not modified
Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest category for removal. So out of the above four page categories, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced.
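A minimal sketch of the selection step in Python, assuming hypothetical page objects that expose referenced and modified flags (not any particular kernel's data structures):

```python
import random

def nru_select_victim(pages):
    """Pick a page to evict according to NRU.

    pages: iterable of objects with boolean attributes `referenced`
    and `modified` (hypothetical page descriptors).
    """
    # Class number: referenced contributes 2, modified contributes 1,
    # giving the four classes 0..3 described above.
    classes = {0: [], 1: [], 2: [], 3: []}
    for page in pages:
        classes[2 * int(page.referenced) + int(page.modified)].append(page)
    # Evict a random page from the lowest non-empty class.
    for cls in range(4):
        if classes[cls]:
            return random.choice(classes[cls])
    return None  # no pages given
```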
NRU is a marking algorithm, so it is k/(k − h + 1)-competitive.
=== First-in, first-out ===
The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name – the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly.
In simple words, on a page fault, the frame that has been in memory the longest is replaced.
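As a concrete illustration, here is a minimal Python sketch that counts FIFO page faults for a reference string; the example calls use the classic reference string that exhibits Bélády's anomaly (adding a frame increases the fault count):

```python
from collections import deque

def fifo_page_faults(references, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()      # oldest resident page on the left
    resident = set()
    faults = 0
    for page in references:
        if page in resident:
            continue      # hit: FIFO does not reorder on access
        faults += 1
        if len(frames) == frame_count:
            resident.remove(frames.popleft())   # evict the oldest page
        frames.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults, despite the extra frame
```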
FIFO page replacement algorithm is used by the OpenVMS operating system, with some modifications. Partial second chance is provided by skipping a limited number of entries with valid translation table references, and additionally, pages are displaced from process working set to a systemwide pool from which they can be recovered if not already re-used.
FIFO is a conservative algorithm, so it is k/(k − h + 1)-competitive.
=== Second-chance ===
A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their reference bit cleared, then second chance algorithm degenerates into pure FIFO.
As its name suggests, Second-chance gives every page a "second-chance" – an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced.
=== Clock ===
Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly pushed to the back of the list, but it performs the same general function as Second-Chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and the process is repeated until a page is replaced. This algorithm was first described in 1969 by Fernando J. Corbató.
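A rough Python sketch of the clock scan; the frame contents and R bits are kept in plain lists here, whereas a real kernel reads the referenced bit from the page-table entry:

```python
def clock_page_faults(references, frame_count):
    """Count page faults under the clock algorithm."""
    frames = [None] * frame_count   # circular buffer of resident pages
    r_bits = [0] * frame_count      # referenced bit per frame
    hand = 0
    faults = 0
    for page in references:
        if page in frames:
            r_bits[frames.index(page)] = 1   # hardware would set R on access
            continue
        faults += 1
        # Advance the hand, clearing R bits, until a frame with R == 0 is found.
        while r_bits[hand] == 1:
            r_bits[hand] = 0
            hand = (hand + 1) % frame_count
        frames[hand] = page
        r_bits[hand] = 1
        hand = (hand + 1) % frame_count
    return faults
```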
==== Variants of clock ====
GCLOCK: Generalized clock page replacement algorithm.
Clock-Pro keeps a circular list of information about recently referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by ARC, helps it work better than LRU on large loops and one-time scans.
WSclock. By combining the Clock algorithm with the concept of a working set (i.e., the set of pages expected to be used by that process during some time interval), the performance of the algorithm can be improved. In practice, the "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms.
Clock with Adaptive Replacement (CAR) is a page replacement algorithm that has performance comparable to ARC, and substantially outperforms both LRU and CLOCK. The algorithm CAR is self-tuning and requires no user-specified magic parameters.
CLOCK is a conservative algorithm, so it is k/(k − h + 1)-competitive.
=== Least recently used ===
The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.
The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference, which is a very time-consuming process.
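In software, the same bookkeeping can be approximated with an ordered dictionary standing in for the linked list; the following sketch counts faults for exact LRU on an illustrative reference string:

```python
from collections import OrderedDict

def lru_page_faults(references, frame_count):
    """Count page faults under exact LRU replacement."""
    frames = OrderedDict()   # keys ordered from least to most recently used
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
            continue
        faults += 1
        if len(frames) == frame_count:
            frames.popitem(last=False)      # evict the least recently used page
        frames[page] = True
    return faults

print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```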
Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires the value equal to the counter at the time of page access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter and swaps it out.
Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.
One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N-times more page faults than OPT algorithm, where N is proportional to the number of pages in the managed pool.
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch into suitable replacement algorithm, like Most Recently Used (MRU).
==== Variants on LRU ====
LRU-K evicts the page whose K-th most recent access is furthest in the past. For example, LRU-1 is simply LRU whereas LRU-2 evicts pages according to the time of their penultimate access. LRU-K improves greatly on LRU with regards to locality in time.
The ARC algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access. It is particularly resistant to sequential scans.
The 2Q algorithm improves upon the LRU and LRU/2 algorithms. It maintains two queues, one for hot-path items and one for slow-path items; items are first placed in the slow-path queue and are moved to the hot-path queue only after a second access. Because references to newly added items are held longer than in LRU and LRU/2, the hot-path queue better reflects genuinely reused items, which improves the hit rate of the cache.
A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004.
LRU is a marking algorithm, so it is k/(k − h + 1)-competitive.
=== Random ===
Random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy (Rhodehamel 1989).
=== Not frequently used (NFU) ===
The not frequently used (NFU) page replacement algorithm requires a counter, and every page has one counter of its own which is initially set to 0. At each clock interval, all pages that have been referenced within that interval will have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.
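A small sketch of the counter bookkeeping, assuming the OS samples one referenced bit per page per clock tick (the input matrix is illustrative):

```python
def nfu_counters(r_bits_per_tick):
    """Accumulate NFU counters from per-tick referenced-bit samples.

    r_bits_per_tick[t][i] is the R bit of page i during clock tick t.
    """
    counters = [0] * len(r_bits_per_tick[0])
    for tick in r_bits_per_tick:
        for i, r in enumerate(tick):
            counters[i] += r     # pure frequency: no notion of recency
    return counters

# The page with the smallest counter is the eviction candidate.
print(nfu_counters([[1, 0, 1], [1, 1, 0], [1, 0, 0]]))  # [3, 1, 1]
```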
The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages that were heavily used during the first pass but are not needed in the second pass will be favoured over pages that are comparatively lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU will perform similarly, such as an OS boot-up. Thankfully, a similar and better algorithm exists, and its description follows.
The not frequently used page-replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values.
=== Aging ===
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this in chronological order: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.
The following Python code simulates the aging algorithm.
Counters V_i are initialized with 0 and updated as described above via V_i ← (R_i ≪ (k − 1)) | (V_i ≫ 1), using arithmetic shift operators.
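The original listing is not reproduced here; the following is a minimal reconstruction of such a simulation, assuming 8-bit counters, the update rule above, and an illustrative (not the original) matrix of R-bits for 6 pages over 5 clock ticks:

```python
def simulate_aging(r_bits_per_tick, k=8):
    """Simulate the aging algorithm with k-bit counters.

    r_bits_per_tick[t][i] is the R bit of page i during clock tick t.
    """
    v = [0] * len(r_bits_per_tick[0])
    for t, r_bits in enumerate(r_bits_per_tick):
        for i, r in enumerate(r_bits):
            # Shift the counter right and place the new R bit at the top.
            v[i] = (r << (k - 1)) | (v[i] >> 1)
        print(f"t={t}: R={r_bits} -> " + " ".join(format(x, f"0{k}b") for x in v))
    return v

# Illustrative R bits for 6 pages over 5 clock ticks.
sample = [
    [1, 0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 0],
]
counters = simulate_aging(sample)
# The page with the smallest counter value is the one to swap out.
```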
In the given example of R-bits for 6 pages over 5 clock ticks, the function prints, for each clock tick t, the R-bits and the individual counter values V_i for each page in binary representation.
Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
=== Longest distance first (LDF) page replacement algorithm ===
The basic idea behind this algorithm is locality of reference, as in LRU, but the difference is that LDF bases locality on distance rather than on recency of use. LDF replaces the page that is at the longest distance from the current page. If two pages are at the same distance, the page that follows the current page in anti-clockwise rotation is replaced.
== Implementation details ==
=== Techniques for hardware with no reference bit ===
Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one.
One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store will be placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.
Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the results of a LRU or NRU mechanism and the overall effect has similarities to the Second-Chance algorithm described earlier.
Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables – the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.
=== Page cache in Linux ===
Linux uses a unified page cache for
brk and anonymous mmapped regions. This includes the heap and stack of user-space programs. It is written to swap when paged out.
Non-anonymous (file-backed) mmapped regions. If present in memory and not privately modified, the physical page is shared with the file cache or buffer.
Shared memory acquired through shm_open.
The tmpfs in-memory filesystem; written to swap when paged out.
The file cache; written to the underlying block storage (possibly going through the buffer, see below) when paged out.
The cache of block devices, called the "buffer" by Linux (not to be confused with other structures also called buffers, like those used for pipes and buffers used internally in Linux); written to the underlying storage when paged out.
The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB in ARMv8, x86 and x86-64), with some pages of the next larger size (2 MiB in x86-64) called "huge pages" by Linux. The pages in the page cache are divided into an "active" set and an "inactive" set. Both sets keep an LRU list of pages. In the basic case, when a page is accessed by a user-space program it is put at the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves pages from the active set to the inactive set as needed so that the active set is smaller than the inactive set. When a page is moved to the inactive set it is removed from the page table of any process address space, without being paged out of physical memory. When a page is removed from the inactive set, it is paged out of physical memory. The sizes of the "active" and "inactive" lists can be queried from /proc/meminfo in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)".
== Working set ==
The working set of a process is the set of pages expected to be used by that process during some time interval.
The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler)
== References ==
== Further reading == | Wikipedia/Page_replacement_algorithm |
In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
Formally, the output of any sorting algorithm must satisfy two conditions:
The output is in monotonic order (each element is no smaller/larger than the previous element, according to the required order).
The output is a permutation (a reordering, yet retaining all of the original elements) of the input.
Although some algorithms are designed for sequential access, the highest-performing algorithms assume data is stored in a data structure which allows random access.
== History and concepts ==
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton, who worked on ENIAC and UNIVAC. Bubble sort was analyzed as early as 1956. Asymptotically optimal algorithms have been known since the mid-20th century – new algorithms are still being invented, with the widely used Timsort dating to 2002, and the library sort being first published in 2006.
Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons (some input sequences will require a multiple of n log n comparisons, where n is the number of elements in the array to be sorted). Algorithms not based on comparisons, such as counting sort, can have better performance.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in the fewest comparisons and swaps) or fast (i.e. taking into account machine-specific details) is still an open research problem, with solutions only known for very small arrays (<20 elements). Similarly optimal (by various definitions) sorting on a parallel machine is an open research topic.
== Classification ==
Sorting algorithms can be classified by:
Computational complexity
Best, worst and average case behavior in terms of the size of the list. For typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in O(log2 n), and bad behavior is O(n2). Ideal behavior for a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting is O(log n).
Swaps for "in-place" algorithms.
Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log n) additional memory is considered "in-place".
Recursion: Some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include cycle sort and heapsort.
Whether the algorithm is serial or parallel. The remainder of this discussion almost exclusively concentrates on serial algorithms and assumes serial operation.
Adaptability: Whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
Online: An algorithm such as Insertion Sort that is online can sort a constant stream of input.
=== Stability ===
Stable sort algorithms sort equal elements in the same order that they appear in the input. For example, in the card sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal (like the two 5 cards), then their relative order will be preserved, i.e. if one comes before the other in the input, it will come before the other in the output.
Stability is important to preserve order over multiple sorts on the same data set. For example, say that student records consisting of name and class section are sorted dynamically, first by name, then by class section. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order, resulting in a nonalphabetical list of students.
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
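For instance, in Python, whose built-in sort is stable, the two-pass idiom and the equivalent single-pass lexicographic key look like this (the hand of cards and the suit ordering are illustrative):

```python
suit_order = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
hand = [(5, "hearts"), (2, "spades"), (5, "clubs"), (9, "diamonds"), (2, "hearts")]

# Pass 1: sort by the secondary key (rank), with any sort.
hand.sort(key=lambda card: card[0])
# Pass 2: stable sort by the primary key (suit); equal suits keep their rank order.
hand.sort(key=lambda card: suit_order[card[1]])
print(hand)

# Equivalent result in a single pass with a lexicographic key.
print(sorted(hand, key=lambda card: (suit_order[card[1]], card[0])))
```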
== Comparison of algorithms ==
This analysis assumes that the length of each key is constant and that all comparisons, swaps and other operations can proceed in constant time.
Legend:
n is the number of records to be sorted.
Comparison column has the following ranking classifications: "Best", "Average" and "Worst" if the time complexity is given for each case.
"Memory" denotes the amount of additional storage required by the algorithm.
The run times and the memory requirements listed are inside big O notation, hence the base of the logarithms does not matter.
The notation log2 n means (log n)2.
=== Comparison sorts ===
Below is a table of comparison sorts. Mathematical analysis demonstrates a comparison sort cannot perform better than O(n log n) on average.
=== Non-comparison sorts ===
The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. These algorithms are not limited to Ω(n log n) unless they are analyzed in the unit-cost random-access machine model described below.
Complexities below assume n items to be sorted, with keys of size k, digit size d, and r the range of numbers to be sorted.
Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2k, where ≪ means "much less than".
In the unit-cost random-access machine model, algorithms with running time of n·(k/d), such as radix sort, still take time proportional to Θ(n log n), because n is limited to be not more than 2^(k/d), and a larger number of elements to sort would require a bigger k in order to store them in the memory.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
=== Others ===
Some algorithms are slow compared to those discussed above, such as the bogosort with unbounded run time and the stooge sort which has O(n2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following table describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
Theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity assuming additional constraints, including:
Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.
A randomized integer sorting algorithm taking O(n √(log log n)) expected time and O(n) space.
One of the authors of the previously mentioned algorithm also claims to have discovered an algorithm taking O(n √(log n)) time and O(n) space, sorting real numbers, and further claims that, without any added assumptions on the input, it can be modified to achieve O(n log n / √(log log n)) time and O(n) space.
== Popular sorting algorithms ==
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heapsort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heapsort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heapsort or quicksort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
=== Simple sorts ===
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
==== Insertion sort ====
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how one puts money in their wallet. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
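A minimal in-place version in Python (a library routine would add refinements such as a binary search for the insertion point):

```python
def insertion_sort(a):
    """Sort the list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```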
==== Selection sort ====
Selection sort is an in-place comparison sort. It has O(n2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps and thus is useful where swapping is very expensive.
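A short Python sketch, which makes the bound of at most n − 1 swaps explicit:

```python
def selection_sort(a):
    """Sort the list a in place using at most len(a) - 1 swaps."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            a[i], a[smallest] = a[smallest], a[i]   # one swap per position
    return a
```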
=== Efficient sorts ===
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).
==== Merge sort ====
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity and involves a large number of copies in simple implementations.
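A straightforward top-down sketch in Python; it is written for clarity rather than memory efficiency, allocating new lists at each level of recursion:

```python
def merge_sort(a):
    """Return a new sorted list; O(n log n) time, O(n) extra space."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []
    i = j = 0
    # Merge the two sorted halves; using <= keeps the sort stable.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```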
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python and Java (as of JDK7). Merge sort itself is the standard routine in Perl, among others, and has been used in Java at least since 2000 in JDK1.3.
==== Heapsort ====
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst-case complexity.
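Using Python's heapq module as the heap, the idea can be sketched as follows; a production heapsort would build the heap and place extracted elements in the same array rather than copying:

```python
import heapq

def heapsort(a):
    """Return a sorted copy of a by repeatedly extracting the heap minimum."""
    heap = list(a)
    heapq.heapify(heap)                  # O(n) heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]
```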
==== Recombinant sort ====
Recombinant sort is a non-comparison-based sorting algorithm developed by Peeyush Kumar et al in 2020. The algorithm combines bucket sort, counting sort, radix sort, hashing, and dynamic programming techniques. It employs an n-dimensional Cartesian space mapping approach consisting of two primary phases: a Hashing cycle that maps elements to a multidimensional array using a special hash function, and an Extraction cycle that retrieves elements in sorted order. Recombinant Sort achieves O(n) time complexity for best, average, and worst cases, and can process both numerical and string data types, including mixed decimal and non-decimal numbers.
==== Quicksort ====
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields an average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
The important caveat about quicksort is that its worst-case performance is O(n2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n2) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm is however an O(n) operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
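A compact sketch with a random pivot in Python; it builds new lists for clarity instead of partitioning in place, so it is not the O(log n)-space variant discussed above:

```python
import random

def quicksort(a):
    """Return a sorted copy of a, partitioning around a randomly chosen pivot."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```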
If a guarantee of O(n log n) performance is important, there is a simple modification to achieve that. The idea, due to Musser, is to set a limit on the maximum depth of recursion. If that limit is exceeded, then sorting is continued using the heapsort algorithm. Musser proposed that the limit should be 1 + 2⌊log₂(n)⌋, which is approximately twice the maximum recursion depth one would expect on average with a randomly ordered array.
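A sketch of this idea in Python, with the depth limit computed as above and a heap-based fallback; the middle-element pivot and the list-building partition are simplifications rather than a production introsort:

```python
import heapq
import math

def introsort(a, maxdepth=None):
    """Quicksort that switches to heapsort once the recursion depth limit is hit."""
    if maxdepth is None:
        maxdepth = 1 + 2 * int(math.log2(len(a))) if a else 0
    if len(a) <= 1:
        return list(a)
    if maxdepth == 0:
        # Depth limit exceeded: finish this sublist with heapsort.
        heap = list(a)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]
    pivot = a[len(a) // 2]
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return introsort(smaller, maxdepth - 1) + equal + introsort(larger, maxdepth - 1)
```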
==== Shellsort ====
Shellsort was invented by Donald Shell in 1959. It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is the greatest distance between two out-of-place elements. This means that generally insertion sort performs in O(n2), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from O(n2) to O(n4/3) and Θ(n log2 n). This, combined with the fact that Shellsort is in-place, only needs a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
=== Bubble sort and variants ===
Bubble sort, and variants such as the Comb sort and cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
==== Bubble sort ====
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average time and worst-case performance is O(n2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2n time.
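A Python sketch that stops as soon as a full pass makes no swaps, which is what gives the cheap behavior on nearly sorted input described above:

```python
def bubble_sort(a):
    """Sort the list a in place; terminates early once a pass makes no swaps."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1      # the largest element of the unsorted part is now in place
    return a
```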
==== Comb sort ====
Comb sort is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980. It was later rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine article published in April 1991. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
==== Exchange sort ====
Exchange sort is sometimes confused with bubble sort, although the algorithms are in fact distinct. Exchange sort works by comparing the first element with all elements above it, swapping where needed, thereby guaranteeing that the first element is correct for the final sort order; it then proceeds to do the same for the second element, and so on. It lacks the advantage that bubble sort has of detecting in one pass if the list is already sorted, but it can be faster than bubble sort by a constant factor (one less pass over the data to be sorted; half as many total comparisons) in worst-case situations. Like any simple O(n2) sort it can be reasonably fast over very small data sets, though in general insertion sort will be faster.
=== Distribution sorts ===
Distribution sort refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution-based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
==== Counting sort ====
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
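A sketch in Python for the common special case where S is the set of non-negative integers below some bound; this simple form outputs bare keys, while the stable variant that carries whole records uses prefix sums over the same count array:

```python
def counting_sort(a, size):
    """Sort integers drawn from range(size) in O(size + n) time."""
    counts = [0] * size
    for x in a:
        counts[x] += 1               # count occurrences of each key
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)  # emit each key as many times as it occurred
    return out

print(counting_sort([3, 1, 4, 1, 5, 2, 0], size=6))
```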
==== Bucket sort ====
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
==== Radix sort ====
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
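An LSD sketch in Python using stable bucketing by digit; the base and the restriction to non-negative integers are illustrative choices:

```python
def radix_sort_lsd(a, base=10):
    """Sort non-negative integers digit by digit, least significant digit first."""
    if not a:
        return []
    # Number of digits of the largest key in the chosen base.
    digits, m = 1, max(a)
    while m >= base:
        m //= base
        digits += 1
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x // base ** d) % base].append(x)   # stable per-digit pass
        a = [x for bucket in buckets for x in bucket]
    return a

print(radix_sort_lsd([170, 45, 75, 90, 802, 24, 2, 66]))
```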
== Memory usage patterns and index sorting ==
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".
Another technique for overcoming the memory-size problem is using external sorting, for example, one of the ways is to combine two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in merge sort. This is faster than performing either merge sort or quicksort over the entire list.
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
== Related algorithms ==
Related problems include approximate sorting (sorting a sequence to within a certain amount of the correct order), partial sorting (sorting only the k smallest elements of a list, or finding the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide-and-conquer) or one side (quickselect, decrease-and-conquer).
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
Sorting algorithms are ineffective for finding an order in many situations. Usually, when elements have no reliable comparison function (crowdsourced preferences like voting systems), comparisons are very costly (sports), or when it would be impossible to pairwise compare all elements for all criteria (search engines). In these cases, the problem is usually referred to as ranking and the goal is to find the "best" result for some criteria according to probabilities inferred from comparisons or rankings. A common example is in chess, where players are ranked with the Elo rating system, and rankings are determined by a tournament system instead of a sorting algorithm.
== See also ==
Collation – Assembly of written information into a standard order
K-sorted sequence
Schwartzian transform – Programming idiom for efficiently sorting a list by a computed key
Search algorithm – Any algorithm which solves the search problem
Quantum sort – Sorting algorithms for quantum computers
== References ==
== Further reading ==
Knuth, Donald E. (1998), Sorting and Searching, The Art of Computer Programming, vol. 3 (2nd ed.), Boston: Addison-Wesley, ISBN 0-201-89685-0
Sedgewick, Robert (1980), "Efficient Sorting by Computer: An Introduction", Computational Probability, New York: Academic Press, pp. 101–130, ISBN 0-12-394680-8
== External links ==
Sorting Algorithm Animations at the Wayback Machine (archived 3 March 2015).
Sequential and parallel sorting algorithms – Explanations and analyses of many sorting algorithms.
Dictionary of Algorithms, Data Structures, and Problems – Dictionary of algorithms, techniques, common functions, and problems.
Slightly Skeptical View on Sorting Algorithms – Discusses several classic algorithms and promotes alternatives to the quicksort algorithm.
15 Sorting Algorithms in 6 Minutes (Youtube) – Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
A036604 sequence in OEIS database titled "Sorting numbers: minimal number of comparisons needed to sort n elements" – Performed by Ford–Johnson algorithm.
XiSort – External merge sort with symbolic key transformation – A variant of merge sort applied to large datasets using symbolic techniques.
Sorting Algorithms Used on Famous Paintings (Youtube) – Visualization of Sorting Algorithms on Many Famous Paintings.
A Comparison of Sorting Algorithms – Runs a series of tests of 9 of the main sorting algorithms using Python timeit and Google Colab. | Wikipedia/Sorting_algorithms |
Task systems are mathematical objects used to model the set of possible configurations of online algorithms. They were introduced by Borodin, Linial and Saks (1992) to model a variety of online problems. A task system determines a set of states and costs to change states. Task systems receive as input a sequence of requests such that each request assigns processing costs to the states. The objective of an online algorithm for task systems is to create a schedule that minimizes the overall cost incurred due to processing the tasks with respect to the states and due to the cost to change states.
If the cost function to change states is a metric, the task system is a metrical task system (MTS). This is the most common type of task systems.
Metrical task systems generalize online problems such as paging, list accessing, and the k-server problem (in finite spaces).
== Formal definition ==
A task system is a pair $(S,d)$ where $S=\{s_{1},s_{2},\dotsc ,s_{n}\}$ is a set of states and $d\colon S\times S\to \mathbb{R}$ is a distance function. If $d$ is a metric, $(S,d)$ is a metrical task system. An input to the task system is a sequence $\sigma =T_{1},T_{2},\dotsc ,T_{l}$ such that for each $i$, $T_{i}$ is a vector of $n$ non-negative entries that determine the processing costs for the $n$ states when processing the $i$-th task.
An algorithm for the task system produces a schedule $\pi$ that determines the sequence of states. For instance, $\pi(i)=s_{j}$ means that the $i$-th task $T_{i}$ is run in state $s_{j}$. The processing cost of a schedule is
$\mathrm{cost}(\pi,\sigma)=\sum_{i=1}^{l} d(\pi(i-1),\pi(i))+T_{i}(\pi(i)).$
The objective of the algorithm is to find a schedule such that the cost is minimized.
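As a concrete reading of this definition, the following sketch evaluates the cost of a given schedule for a small task system; the state-transition costs are given as a matrix, states are numbered from 0, and π(0) is taken to be a fixed start state (these representation choices are assumptions made for the example, not part of the definition).

def schedule_cost(d, tasks, schedule, start_state=0):
    # d[a][b]    : cost of moving from state a to state b
    # tasks[i][s]: processing cost of task i in state s
    # schedule[i]: state in which task i is processed (pi(i)); pi(0) = start_state
    cost, prev = 0, start_state
    for task, state in zip(tasks, schedule):
        cost += d[prev][state] + task[state]   # transition cost + processing cost
        prev = state
    return cost

d = [[0, 1], [1, 0]]                 # two states, symmetric switching cost 1 (a metric)
tasks = [[3, 0], [0, 3], [2, 2]]     # three tasks
print(schedule_cost(d, tasks, schedule=[1, 0, 0]))   # 1+0 + 1+0 + 0+2 = 4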
== Known results ==
As usual for online problems, the most common measure to analyze algorithms for metrical task systems is the competitive analysis, where the performance of an online algorithm is compared to the performance of an optimal offline algorithm. For deterministic online algorithms, there is a tight bound
of $2n-1$ on the competitive ratio due to Borodin et al. (1992).
For randomized online algorithms, the competitive ratio is lower bounded by $\Omega(\log n/\log\log n)$ and upper bounded by $O((\log n)^{2})$. The lower bound is due to Bartal et al. (2006, 2005). The upper bound is due to Bubeck, Cohen, Lee and Lee (2018), who improved upon a result of Fiat and Mendel (2003).
There are many results for various types of restricted metrics.
== See also ==
Adversary model
Competitive analysis
K-server problem
Online algorithm
Page replacement algorithm
Real-time computing
== References ==
Yair Bartal; Avrim Blum; Carl Burch & Andrew Tomkins (1997). "A polylog(n)-Competitive Algorithm for Metrical Task Systems". Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing. pp. 711–719. doi:10.1145/258533.258667.
Yair Bartal, Béla Bollobás, Manor Mendel (2006). "Ramsey-type theorems for metric spaces with applications to online problems". Journal of Computer and System Sciences. 72 (5): 890–921. arXiv:cs/0406028. doi:10.1016/j.jcss.2005.05.008. S2CID 1450455.{{cite journal}}: CS1 maint: multiple names: authors list (link)
Yair Bartal, Nathan Linial, Manor Mendel, Assaf Naor (2005). "On metric Ramsey-type phenomena". Annals of Mathematics. 162 (2): 643–709. arXiv:math/0406353. doi:10.4007/annals.2005.162.643.{{cite journal}}: CS1 maint: multiple names: authors list (link)
Allan Borodin and Ran El-Yaniv (1998). Online Computation and Competitive Analysis. Cambridge University Press. pp. 123–149.
Allan Borodin, Nati Linial, and Michael Saks (1992). "An optimal online algorithm for metrical task systems". Journal of the ACM. 39 (4): 745–763. doi:10.1145/146585.146588. S2CID 18783826.{{cite journal}}: CS1 maint: multiple names: authors list (link)
Amos Fiat & Manor Mendel (2003). "Better Algorithms for Unfair Metrical Task Systems and Applications". SIAM J. Comput. 32 (6): 1403–1422. arXiv:cs/0406034. doi:10.1137/S0097539700376159.
Bubeck, Sébastien; Cohen, Michael B.; R. Lee, James & Lee, Yin Tat (2019). "Metrical task systems on trees via mirror descent and unfair gluing". Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. arXiv:1807.04404. | Wikipedia/Metrical_task_systems |
In computer science, Ukkonen's algorithm is a linear-time, online algorithm for constructing suffix trees, proposed by Esko Ukkonen in 1995. The algorithm begins with an implicit suffix tree containing the first character of the string. Then it steps through the string, adding successive characters until the tree is complete. This ordered addition of characters gives Ukkonen's algorithm its "on-line" property. The original algorithm, presented by Peter Weiner, proceeded backward from the last character to the first, building from the shortest suffix to the longest. A simpler algorithm was found by Edward M. McCreight, going from the longest to the shortest suffix.
== Implicit suffix tree ==
While generating a suffix tree using Ukkonen's algorithm, we see implicit suffix trees in the intermediate steps, depending on the characters in the string S. In an implicit suffix tree, there is no edge labelled with $ (or any other termination character) and no internal node with only one edge going out of it.
== High level description of Ukkonen's algorithm ==
Ukkonen's algorithm constructs an implicit suffix tree Ti for each prefix S[1...i] of S (S being the string of length n). It first builds T1 using the 1st character, then T2 using the 2nd character, then T3 using the 3rd character, ..., Tn using the nth character. You can find the following characteristics in a suffix tree that uses Ukkonen's algorithm:
Implicit suffix tree Ti+1 is built on top of implicit suffix tree Ti.
At any given time, Ukkonen's algorithm builds the suffix tree for the characters seen so far, and so it has the on-line property, allowing the algorithm to have an execution time of O(n).
Ukkonen's algorithm is divided into n phases (one phase for each character in the string with length n).
Each phase i+1 is further divided into i+1 extensions, one for each of the i+1 suffixes of S[1...i+1].
Suffix extension is all about adding the next character into the suffix tree built so far. In extension j of phase i+1, the algorithm finds the end of S[j...i] (which is already in the tree due to the previous phase i) and then extends S[j...i] to be sure the suffix S[j...i+1] is in the tree. There are three extension rules:
If the path from the root labelled S[j...i] ends at a leaf edge (i.e., S[i] is last character on leaf edge), then character S[i+1] is just added to the end of the label on that leaf edge.
If the path from the root labelled S[j...i] ends at a non-leaf edge (i.e., there are more characters after S[i] on the path) and the next character is not S[i+1], then a new leaf edge with label S[i+1] and number j is created starting from character S[i+1]. A new internal node will also be created if S[1...i] ends inside (in between) a non-leaf edge.
If the path from the root labelled S[j..i] ends at a non-leaf edge (i.e., there are more characters after S[i] on path) and next character is S[i+1] (already in tree), do nothing.
One important point to note is that from a given node (root or internal), there will be one and only one edge starting from one character. There will not be more than one edge going out of any node starting with the same character.
== Run time ==
The naive implementation for generating a suffix tree going forward requires O(n2) or even O(n3) time complexity in big O notation, where n is the length of the string. By exploiting a number of algorithmic techniques, Ukkonen reduced this to O(n) (linear) time, for constant-size alphabets, and O(n log n) in general, matching the runtime performance of the earlier two algorithms.
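For contrast with Ukkonen's linear-time construction, the naive quadratic approach can be sketched as inserting every suffix into a trie one character at a time. This is only an illustrative sketch: it builds an uncompressed suffix trie with a $ terminator, so it can use on the order of n² nodes, whereas a real suffix tree compresses edge labels.

def naive_suffix_trie(s):
    # Insert each of the n suffixes separately: O(n^2) work and nodes.
    s += "$"                         # termination character
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

trie = naive_suffix_trie("xabxac")
print(sorted(trie))                  # ['$', 'a', 'b', 'c', 'x'] -- first characters of the suffixes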
== Ukkonen's algorithm example ==
To better illustrate how a suffix tree is constructed using Ukkonen's algorithm, we can consider the string S = xabxac.
Start with an empty root node.
Construct T1 for S[1] by adding the first character of the string. Rule 2 applies, which creates a new leaf node.
Construct T2 for S[1..2] by adding suffixes of xa (xa and a). Rule 1 applies, which extends the path label on an existing leaf edge. Rule 2 applies, which creates a new leaf node.
Construct T3 for S[1..3] by adding suffixes of xab (xab, ab and b). Rule 1 applies, which extends the path label on an existing leaf edge. Rule 2 applies, which creates a new leaf node.
Construct T4 for S[1..4] by adding suffixes of xabx (xabx, abx, bx and x). Rule 1 applies, which extends the path label on an existing leaf edge. Rule 3 applies, so nothing is done.
Construct T5 for S[1..5] by adding suffixes of xabxa (xabxa, abxa, bxa, xa and a). Rule 1 applies, which extends the path label on an existing leaf edge. Rule 3 applies, so nothing is done.
Construct T6 for S[1..6] by adding suffixes of xabxac (xabxac, abxac, bxac, xac, ac and c). Rule 1 applies, which extends the path label on an existing leaf edge. Rule 2 applies, which creates new leaf nodes (in this case, three new leaf edges and two new internal nodes are created).
== References ==
== External links ==
Detailed explanation in plain English
Fast String Searching With Suffix Trees Mark Nelson's tutorial. Has an implementation example written with C++.
Implementation in C with detailed explanation
Lecture slides by Guy Blelloch
Ukkonen's homepage
Text-Indexing project (Ukkonen's linear-time construction of suffix trees)
Implementation in C Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 | Wikipedia/Ukkonen's_algorithm |
Dynamic problems in computational complexity theory are problems stated in terms of changing input data. In its most general form, a problem in this category is usually stated as follows:
Given a class of input objects, find efficient algorithms and data structures to answer a certain query about a set of input objects each time the input data is modified, i.e., objects are inserted or deleted.
Problems in this class have the following measures of complexity:
Space – the amount of memory space required to store the data structure;
Initialization time – time required for the initial construction of the data structure;
Insertion time – time required for the update of the data structure when one more input element is added;
Deletion time – time required for the update of the data structure when an input element is deleted;
Query time – time required to answer a query;
Other operations specific to the problem in question
The overall set of computations for a dynamic problem is called a dynamic algorithm.
Many algorithmic problems stated in terms of fixed input data (called static problems in this context and solved by static algorithms) have meaningful dynamic versions.
== Special cases ==
Incremental algorithms, or online algorithms, are algorithms in which only additions of elements are allowed, possibly starting from empty/trivial input data.
Decremental algorithms are algorithms in which only deletions of elements are allowed, starting with the initialization of a full data structure.
If both additions and deletions are allowed, the algorithm is sometimes called fully dynamic.
== Examples ==
=== Maximal element ===
Static problem
For a set of N numbers find the maximal one.
The problem may be solved in O(N) time.
Dynamic problem
For an initial set of N numbers, dynamically maintain the maximal one when insertions and deletions are allowed.
A well-known solution for this problem is using a self-balancing binary search tree. It takes space O(N), may be initially constructed in time O(N log N) and provides insertion, deletion and query times in O(log N).
The priority queue maintenance problem
It is a simplified version of this dynamic problem, in which one is only required to delete the maximal element. This version can make do with simpler data structures.
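As an illustrative sketch of the fully dynamic maximal-element problem, the following uses a max-heap with lazy deletions instead of the self-balancing search tree mentioned above; insertions and deletions are amortized O(log N). The class name and interface are invented for the example.

import heapq

class DynamicMax:
    def __init__(self, items=()):
        self._heap = [-x for x in items]       # negate values to simulate a max-heap
        heapq.heapify(self._heap)
        self._pending = {}                     # value -> number of lazy deletions

    def insert(self, x):
        heapq.heappush(self._heap, -x)

    def delete(self, x):
        self._pending[x] = self._pending.get(x, 0) + 1

    def maximum(self):
        while self._heap:
            top = -self._heap[0]
            if self._pending.get(top, 0):      # value was deleted earlier: discard it
                self._pending[top] -= 1
                heapq.heappop(self._heap)
            else:
                return top
        raise ValueError("empty set")

s = DynamicMax([3, 1, 4, 1, 5])
s.delete(5); s.insert(2)
print(s.maximum())                              # 4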
=== Graphs ===
Given a graph, maintain its parameters, such as connectivity, maximal degree, shortest paths, etc., when insertion and deletion of its edges are allowed.
== See also ==
Dynamization
Dynamic connectivity
Kinetic data structure
== References == | Wikipedia/Dynamic_algorithm |
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules:
x: A variable is a character or string representing a parameter.
(λx.M): A lambda abstraction is a function definition, taking as input the bound variable x (between the λ and the punctum/dot .) and returning the body M.
(M N): An application, applying a function M to an argument N. Both M and N are lambda terms.
The reduction operations include:
(λx.M[x]) → (λy.M[y]): α-conversion, renaming the bound variables in the expression. Used to avoid name collisions.
((λx.M) N) → (M[x := N]): β-reduction, replacing the bound variables with the argument expression in the body of the abstraction.
If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form.
Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations.
== Explanation and applications ==
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.
== History ==
Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus. In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.
Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics and computer science.
=== Origin of the λ symbol ===
There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation "λ"? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation "x̂" used for class-abstraction by Whitehead and Russell, by first modifying "x̂" to "∧x" to distinguish function-abstraction from class-abstraction, and then changing "∧" to "λ" for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scott has also addressed this question in various public lectures.
Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
== Informal description ==
=== Motivation ===
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple.
The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function
square_sum(x, y) = x² + y² can be rewritten in anonymous form as (x, y) ↦ x² + y² (which is read as "a tuple of x and y is mapped to x² + y²"). Similarly, the function id(x) = x can be rewritten in anonymous form as x ↦ x, where the input is simply mapped to itself.
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the
square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example, (x, y) ↦ x² + y² can be reworked into x ↦ (y ↦ x² + y²).
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
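In a language with first-class functions the same reworking can be written directly; a small Python sketch of square_sum and its curried form:

square_sum = lambda x, y: x**2 + y**2                    # two arguments at once
square_sum_curried = lambda x: (lambda y: x**2 + y**2)   # one argument at a time

print(square_sum(5, 2))             # 29
print(square_sum_curried(5)(2))     # 29, via the intermediate function y ↦ 5² + y²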
Function application of the square_sum function to the arguments (5, 2) yields at once
((x, y) ↦ x² + y²)(5, 2) = 5² + 2² = 29,
whereas evaluation of the curried version requires one more step
((x ↦ (y ↦ x² + y²))(5))(2)
= (y ↦ 5² + y²)(2) // the definition of x has been used with 5 in the inner expression. This is like β-reduction.
= 5² + 2² // the definition of y has been used with 2. Again, similar to β-reduction.
= 29
to arrive at the same result.
=== The lambda calculus ===
The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition.
As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables.
==== Lambda terms ====
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term".
The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms:
A variable x is itself a valid lambda term.
If t is a lambda term, and x is a variable, then (λx.t) is a lambda term (called an abstraction);
if t and s are lambda terms, then (t s) is a lambda term (called an application).
Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See § Notation, below, for an explicit description of which parentheses are optional. It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such as
λx.x².
The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes.
An abstraction λx.t denotes an anonymous function that takes a single input x and returns t. For example, λx.(x² + 2) is an abstraction representing the function f defined by f(x) = x² + 2, using the term x² + 2 for t. The name f is superfluous when using abstraction. The syntax (λx.t) binds the variable x in the term t. The definition of a function with an abstraction merely "sets up" the function but does not invoke it.
An application t s represents the application of a function t to an input s, that is, it represents the act of calling function t on input s to produce t(s).
A lambda term may refer to a variable that has not been bound, such as the term λx.(x + y) (which represents the function definition f(x) = x + y). In this term, the variable y has not been defined and is considered an unknown. The abstraction λx.(x + y) is a syntactically valid term and represents a function that adds its input to the yet-unknown y.
Parentheses may be used and might be needed to disambiguate terms. For example,
1. λx.((λx.x) x) is of form λx.B and is therefore an abstraction, while
2. (λx.(λx.x)) x is of form M N and is therefore an application.
The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variable x is a placeholder in both examples.
Here, example 1 defines a function λx.B, where B is (λx.x) x, an anonymous function (λx.x), with input x; while example 2, M N, is M applied to N, where M is the lambda term (λx.(λx.x)) being applied to the input N which is x. Both examples 1 and 2 would evaluate to the identity function λx.x.
==== Functions that operate on functions ====
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions.
For example, the lambda term λx.x represents the identity function, x ↦ x. Further, λx.y represents the constant function x ↦ y, the function that always returns y, no matter the input. As an example of a function operating on functions, the function composition can be defined as λf.λg.λx.(f (g x)).
There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms.
==== Alpha equivalence ====
A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter.
For instance, λx.x and λy.y are alpha-equivalent lambda terms, and they both represent the same function (the identity function).
The terms x and y are not alpha-equivalent, because they are not bound in an abstraction.
In many presentations, it is usual to identify alpha-equivalent lambda terms.
The following definitions are necessary in order to be able to define β-reduction:
==== Free variables ====
The free variables of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively:
The free variables of x are just x.
The set of free variables of λx.t is the set of free variables of t, but with x removed.
The set of free variables of t s is the union of the set of free variables of t and the set of free variables of s.
For example, the lambda term representing the identity λx.x has no free variables, but the function λx.y x has a single free variable, y.
==== Capture-avoiding substitutions ====
Suppose t, s and r are lambda terms, and x and y are variables. The notation t[x := r] indicates substitution of r for x in t in a capture-avoiding manner. This is defined so that:
x[x := r] = r; with r substituted for x, x becomes r.
y[x := r] = y if x ≠ y; with r substituted for x, y (which is not x) remains y.
(t s)[x := r] = (t[x := r])(s[x := r]); substitution distributes to both sides of an application.
(λx.t)[x := r] = λx.t; a variable bound by an abstraction is not subject to substitution; substituting such a variable leaves the abstraction unchanged.
(λy.t)[x := r] = λy.(t[x := r]) if x ≠ y and y does not appear among the free variables of r (y is said to be "fresh" for r); substituting a variable which is not bound by an abstraction proceeds in the abstraction's body, provided that the abstracted variable y is "fresh" for the substitution term r.
For example, (λx.x)[y := y] = λx.(x[y := y]) = λx.x, and ((λx.y) x)[x := y] = ((λx.y)[x := y])(x[x := y]) = (λx.y) y.
The freshness condition (requiring that y is not in the free variables of r) is crucial in order to ensure that substitution does not change the meaning of functions.
For example, a substitution that ignores the freshness condition could lead to errors:
(λx.y)[y := x] = λx.(y[y := x]) = λx.x. This erroneous substitution would turn the constant function λx.y into the identity λx.x.
In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable.
For example, switching back to our correct notion of substitution, in (λx.y)[y := x] the abstraction can be renamed with a fresh variable z, to obtain (λz.y)[y := x] = λz.(y[y := x]) = λz.x, and the meaning of the function is preserved by substitution.
==== β-reduction ====
The β-reduction rule states that an application of the form (λx.t) s reduces to the term t[x := s]. The notation (λx.t) s → t[x := s] is used to indicate that (λx.t) s β-reduces to t[x := s].
For example, for every s, (λx.x) s → x[x := s] = s. This demonstrates that λx.x really is the identity.
Similarly, (λx.y) s → y[x := s] = y, which demonstrates that λx.y is a constant function.
The lambda calculus may be seen as an idealized version of a functional programming language, like Haskell or Standard ML. Under this view, β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the term
Ω = (λx.x x)(λx.x x).
Here (λx.x x)(λx.x x) → (x x)[x := λx.x x] = (x[x := λx.x x])(x[x := λx.x x]) = (λx.x x)(λx.x x).
That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate.
Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects.
== Formal definition ==
=== Definition ===
Lambda expressions are composed of:
variables v1, v2, ...;
the abstraction symbols λ (lambda) and . (dot);
parentheses ().
The set of lambda expressions, Λ, can be defined inductively:
If x is a variable, then x ∈ Λ.
If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ.
If M, N ∈ Λ, then (M N) ∈ Λ.
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications (see the notion of a reducible expression under Reduction below).
This set of rules may be written in Backus–Naur form as:
⟨term⟩ ::= ⟨variable⟩ | (λ⟨variable⟩.⟨term⟩) | (⟨term⟩ ⟨term⟩)
=== Notation ===
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied:
Outermost parentheses are dropped: M N instead of (M N).
Applications are assumed to be left associative: M N P may be written instead of ((M N) P).
When all variables are single-letter, the space in applications may be omitted: MNP instead of M N P.
The body of an abstraction extends as far right as possible: λx.M N means λx.(M N) and not (λx.M) N.
A sequence of abstractions is contracted: λx.λy.λz.N is abbreviated as λxyz.N.
=== Free and bound variables ===
The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. In an expression λx.M, the part λx is often called binder, as a hint that the variable x is getting bound by prepending λx to M. All other variables are called free. For example, in the expression λy.x x y, y is a bound variable and x is a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: λx.y (λx.z x).
The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows:
FV(x) = {x}, where x is a variable.
FV(λx.M) = FV(M) \ {x}.
FV(M N) = FV(M) ∪ FV(N).
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
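The recursive definition of FV translates directly into code. The sketch below assumes a made-up term representation (a variable is a string, an abstraction is the tuple ("lam", x, body), an application is ("app", t, s)); it is an illustration, not a standard library.

def free_vars(term):
    if isinstance(term, str):                 # FV(x) = {x}
        return {term}
    if term[0] == "lam":                      # FV(λx.M) = FV(M) \ {x}
        _, x, body = term
        return free_vars(body) - {x}
    _, t, s = term                            # FV(M N) = FV(M) ∪ FV(N)
    return free_vars(t) | free_vars(s)

# λy.x x y has the single free variable x:
print(free_vars(("lam", "y", ("app", ("app", "x", "x"), "y"))))   # {'x'}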
== Reduction ==
The meaning of lambda expressions is defined by how expressions can be reduced.
There are three kinds of reduction:
α-conversion: changing bound variables;
β-reduction: applying functions to their arguments;
η-conversion: expressing extensionality.
We also speak of the resulting equivalences: two expressions are α-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M) N is a β-redex in expressing the substitution of N for x in M. The expression to which a redex reduces is called its reduct; the reduct of (λx.M) N is M[x := N].
If x is not free in M, λx.M x is also an η-redex, with a reduct of M.
=== α-conversion ===
α-conversion (alpha-conversion), sometimes known as α-renaming, allows bound variable names to be changed. For example, α-conversion of λx.x might yield λy.y. Terms that differ only by α-conversion are called α-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.x could result in λy.λx.x, but it could not result in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing.
Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace x with y in λx.λy.x, we get λy.λy.y, which is not at all the same.
In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial).
In the De Bruijn index notation, any two α-equivalent terms are syntactically identical.
==== Substitution ====
Substitution, written M[x := N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression):
x[x := N] = N
y[x := N] = y, if x ≠ y
(M1 M2)[x := N] = M1[x := N] M2[x := N]
(λx.M)[x := N] = λx.M
(λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N) See above for the FV
To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y := x] to result in λx.x, because the substituted x was supposed to be free but ended up being bound. The correct substitution in this case is λz.x, up to α-equivalence. Substitution is defined uniquely up to α-equivalence. See Capture-avoiding substitutions above.
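Using the same made-up term representation as in the free-variable sketch above, capture-avoiding substitution can be written as follows; the helper that picks a fresh variable name is an assumption of the example.

import itertools

def fv(t):
    if isinstance(t, str):
        return {t}
    return fv(t[2]) - {t[1]} if t[0] == "lam" else fv(t[1]) | fv(t[2])

def fresh(avoid):
    # Pick some variable name not occurring in `avoid`.
    return next(v for v in (f"v{i}" for i in itertools.count()) if v not in avoid)

def subst(t, x, n):                          # t[x := n]
    if isinstance(t, str):
        return n if t == x else t            # rules 1 and 2
    if t[0] == "app":
        return ("app", subst(t[1], x, n), subst(t[2], x, n))   # rule 3
    _, y, body = t
    if y == x:
        return t                             # rule 4: x is re-bound here
    if y in fv(n):                           # rule 5 needs y fresh for n:
        z = fresh(fv(body) | fv(n) | {x})    # α-convert the binder first
        body, y = subst(body, y, z), z
    return ("lam", y, subst(body, x, n))     # rule 5

# (λx.y)[y := x] α-converts the binder instead of capturing x:
print(subst(("lam", "x", "y"), "y", "x"))    # ('lam', 'v0', 'x'), i.e. λz.x up to renaming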
=== β-reduction ===
β-reduction (beta reduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M) N is M[x := N].
For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n × 2) 7 → 7 × 2.
β-reduction can be seen to be the same as the concept of local reducibility in natural deduction, via the Curry–Howard isomorphism.
=== η-conversion ===
η-conversion (eta conversion) expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-conversion converts between λx.f x and f whenever x does not appear free in f.
η-reduction changes λx.f x to f, and η-expansion changes f to λx.f x, under the same requirement that x does not appear free in f.
η-conversion can be seen to be the same as the concept of local completeness in natural deduction, via the Curry–Howard isomorphism.
== Normal forms and confluence ==
For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising.
However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other).
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
== Encoding datatypes ==
The basic lambda calculus may be used to model arithmetic, Booleans, data structures, and recursion, as illustrated in the following subsections.
=== Arithmetic in lambda calculus ===
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
3 := λf.λx.f (f (f x))
and so on. Or using the alternative syntax presented above in Notation:
0 := λfx.x
1 := λfx.f x
2 := λfx.f (f x)
3 := λfx.f (f (f x))
A Church numeral is a higher-order function—it takes a single-argument function f, and returns another single-argument function. The Church numeral n is a function that takes a function f as argument and returns the n-th composition of f, i.e. the function f composed with itself n times. This is denoted f(n) and is in fact the n-th power of f (considered as an operator); f(0) is defined to be the identity function. Such repeated compositions (of a single function f) obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of 0 impossible.)
One way of thinking about the Church numeral n, which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is
λn.λx.n (PAIR x) NIL
By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved.
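A hedged Python rendering of these numerals makes the "repeat n times" reading concrete (the decoding helper to_int is an assumption of the example, not part of the encoding):

ZERO  = lambda f: lambda x: x
ONE   = lambda f: lambda x: f(x)
TWO   = lambda f: lambda x: f(f(x))
THREE = lambda f: lambda x: f(f(f(x)))

to_int = lambda n: n(lambda k: k + 1)(0)     # count how many times f is applied
print(to_int(THREE))                         # 3
print(THREE(lambda s: s + "*")(""))          # '***' -- "repeat 3 times"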
We can define a successor function, which takes a Church numeral n and returns n + 1 by adding another application of f, where '(mf)x' means the function 'f' is applied 'm' times on 'x':
SUCC := λn.λf.λx.f (n f x)
Because the m-th composition of f composed with the n-th composition of f gives the m+n-th composition of f, addition can be defined as follows:
PLUS := λm.λn.λf.λx.m f (n f x)
PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
PLUS 2 3
and
5
are β-equivalent lambda expressions. Since adding m to a number n can be accomplished by adding 1 m times, an alternative definition is:
PLUS := λm.λn.m SUCC n
Similarly, multiplication can be defined as
MULT := λm.λn.λf.m (n f)
Alternatively
MULT := λm.λn.m (PLUS n) 0
since multiplying m and n is the same as repeating the add n function m times and then applying it to zero.
Exponentiation has a rather simple rendering in Church numerals, namely
POW := λb.λe.e b
The predecessor function defined by PRED n = n − 1 for a positive integer n and PRED 0 = 0 is considerably more difficult. The formula
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
can be validated by showing inductively that if T denotes (λg.λh.h (g f)), then T(n)(λu.x) = (λh.h(f(n−1)(x))) for n > 0. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
SUB := λm.λn.n PRED m,
SUB m n yields m − n when m > n and 0 otherwise.
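These arithmetic definitions can be checked by transliterating them into Python lambdas and decoding the results back to ordinary integers (a small sketch; the decoder is again an assumption of the example):

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))

to_int = lambda n: n(lambda k: k + 1)(0)
TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
print(to_int(PLUS(TWO)(THREE)))    # 5
print(to_int(MULT(TWO)(THREE)))    # 6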
=== Logic and predicates ===
By convention, the following two definitions (known as Church Booleans) are used for the Boolean values TRUE and FALSE:
TRUE := λx.λy.x
FALSE := λx.λy.y
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct):
AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b
We are now able to compute some logic functions, for example:
AND TRUE FALSE
≡ (λp.λq.p q p) TRUE FALSE →β TRUE FALSE TRUE
≡ (λx.λy.x) FALSE TRUE →β FALSE
and we see that AND TRUE FALSE is equivalent to FALSE.
A predicate is a function that returns a Boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, but FALSE if its argument were any other Church numeral:
ISZERO := λn.n (λx.FALSE) TRUE
The following predicate tests whether the first argument is less-than-or-equal-to the second:
LEQ := λm.λn.ISZERO (SUB m n),
and since m = n exactly when LEQ m n and LEQ n m, it is straightforward to build a predicate for numerical equality.
The availability of predicates and the above definition of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) 0
which can be verified by showing inductively that n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) is the add n − 1 function for n > 0.
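The Church Booleans and the ISZERO predicate can be exercised the same way; a minimal sketch (to_bool is a decoding helper invented for the example):

TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
AND   = lambda p: lambda q: p(q)(p)
NOT   = lambda p: p(FALSE)(TRUE)

ZERO   = lambda f: lambda x: x
ONE    = lambda f: lambda x: f(x)
ISZERO = lambda n: n(lambda _: FALSE)(TRUE)

to_bool = lambda b: b(True)(False)
print(to_bool(AND(TRUE)(FALSE)))                      # False, matching the reduction above
print(to_bool(ISZERO(ZERO)), to_bool(ISZERO(ONE)))    # True False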
=== Pairs ===
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair (x,y), FIRST returns the first element of the pair, and SECOND returns the second.
PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE
NIL := λx.TRUE
NULL := λp.p (λx.λy.FALSE)
A linked list can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list. The predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the construct l (λh.λt.λz.deal_with_head_h_and_tail_t) (deal_with_nil) obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0)).
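The pair encoding and the Φ-based predecessor also transliterate directly; a short sketch, reusing the Church numerals and Booleans from the previous examples:

TRUE, FALSE = (lambda x: lambda y: x), (lambda x: lambda y: y)
PAIR   = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PHI  = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))        # (m, n) -> (n, n + 1)
PRED = lambda n: FIRST(n(PHI)(PAIR(ZERO)(ZERO)))

to_int = lambda n: n(lambda k: k + 1)(0)
THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(PRED(THREE)))                                # 2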
== Additional programming techniques ==
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
=== Named constants ===
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to use f to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say
(λf.M) N
Authors often introduce syntactic sugar, such as let, to permit writing the above in the more intuitive order
let f = N in M
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of this let is that the name f may not be referenced in N, for N is outside the scope of the abstraction binding f, which is M; this means a recursive function definition cannot be written with let. The letrec construction would allow writing recursive function definitions, where the scope of the abstraction binding f includes N as well as M. Alternatively, self-application, in the style that leads to the Y combinator, could be used.
=== Recursion and fixed points ===
Recursion is when a function invokes itself. What would a value be which were to represent such a function? It has to refer to itself somehow inside itself, just as the definition refers to itself inside itself. If this value were to contain itself by value, it would have to be of infinite size, which is impossible. Other notations, which support recursion natively, overcome this by referring to the function by name inside its definition. Lambda calculus cannot express this, since in it there simply are no names for terms to begin with, only arguments' names, i.e. parameters in abstractions. Thus, a lambda expression can receive itself as its argument and refer to (a copy of) itself via the corresponding parameter's name. This will work fine in case it was indeed called with itself as an argument. For example, (λx.x x) E = (E E) will express recursion when E is an abstraction which is applying its parameter to itself inside its body to express a recursive call. Since this parameter receives E as its value, its self-application will be the same (E E) again.
As a concrete example, consider the factorial function F(n), recursively defined by
F(n) = 1, if n = 0; else n × F(n − 1).
In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it with itself as its first argument will amount to the recursive call. Thus to achieve recursion, the intended-as-self-referencing argument (called s here, reminiscent of "self", or "self-applying") must always be passed to itself within the function body at a recursive call point:
E := λs. λn.(1, if n = 0; else n × (s s (n−1)))
with s s n = F n = E E n to hold, so s = E and
F := (λx.x x) E = E E
and we have
F = E E = λn.(1, if n = 0; else n × (E E (n−1)))
Here s s becomes the same (E E) inside the result of the application (E E), and using the same function for a call is the definition of what recursion is. The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced there by the parameter name s to be called via the self-application s s, again and again as needed, each time re-creating the lambda-term F = E E.
The application is an additional step just as the name lookup would be. It has the same delaying effect. Instead of having F inside itself as a whole up-front, delaying its re-creation until the next call makes its existence possible by having two finite lambda-terms E inside it re-create it on the fly later as needed.
This self-applicational approach solves it, but requires re-writing each recursive call as a self-application. We would like to have a generic solution, without the need for any re-writes:
G := λr. λn.(1, if n = 0; else n × (r (n−1)))
with r x = F x = G r x to hold, so r = G r =: FIX G and
F := FIX G where FIX g = (r where r = g r) = g (FIX g)
so that FIX G = G (FIX G) = (λn.(1, if n = 0; else n × ((FIX G) (n−1))))
Given a lambda term with first argument representing recursive call (e.g. G here), the fixed-point combinator FIX will return a self-replicating lambda expression representing the recursive function (here, F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression (FIX G) is re-created inside itself, at call-point, achieving self-reference.
In fact, there are many possible definitions for this FIX operator, the simplest of them being:
Y := λg.(λx.g (x x)) (λx.g (x x))
In the lambda calculus, Y g is a fixed-point of g, as it expands to:
Y g
~> (λh.(λx.h (x x)) (λx.h (x x))) g
~> (λx.g (x x)) (λx.g (x x))
~> g ((λx.g (x x)) (λx.g (x x)))
<~ g (Y g)
Now, to perform the recursive call to the factorial function for an argument n, we would simply call (Y G) n. Given n = 4, for example, this gives:
Every recursively defined function can be seen as a fixed point of some suitably defined higher order function (also known as functional) closing over the recursive call with an extra argument. Therefore, using Y, every recursive function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication, and comparison predicates of natural numbers, using recursion.
When Y combinator is coded directly in a strict programming language, the applicative order of evaluation used in such languages will cause an attempt to fully expand the internal self-application
(x x) prematurely, causing stack overflow or, in case of tail call optimization, indefinite looping. A delayed variant of Y, the Z combinator, can be used in such languages. It has the internal self-application hidden behind an extra abstraction through eta-expansion, as (λv.x x v), thus preventing its premature expansion:
Z = λf.(λx.f (λv.x x v)) (λx.f (λv.x x v)).
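Python is such a strict language, so the Z combinator (rather than Y) is the form that terminates there; a minimal sketch, using the factorial generator G from above:

Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
G = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)   # one recursion step, recursing via r

factorial = Z(G)
print(factorial(5))    # 120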
=== Standard terms ===
Certain terms have commonly accepted names:
I := λx.x
S := λx.λy.λz.x z (y z)
K := λx.λy.x
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
ω or Δ or U := λx.x x
Ω := ω ω
I is the identity function. SK and BCKW form complete combinator calculus systems that can express any lambda term - see the next section. Ω is UU, the smallest term that has no normal form. YI is another such term.
Y is standard and defined above, and can also be defined as Y=BU(CBU), so that Yg=g(Yg). TRUE and FALSE defined above are commonly abbreviated as T and F.
=== Abstraction elimination ===
If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(x,N) which is equivalent to λx.N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(x,N) removes all occurrences of x from N, while still allowing argument values to be substituted into the positions where N contains an x. The conversion function T can be defined by:
T(x, x) := I
T(x, N) := K N if x is not free in N.
T(x, M N) := S T(x, M) T(x, N)
In either case, a term of the form T(x,N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of (λx.N) P would do. I returns that argument. K throws the argument away, just like (λx.N) would do if x has no free occurrence in N. S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of x in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus.
== Typed lambda calculus ==
A typed lambda calculus is a typed formalism that uses the lambda-symbol ({\displaystyle \lambda }) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation.
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g., the simply typed lambda calculus is the language of a Cartesian closed category (CCC).
== Reduction strategies ==
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include:
Normal order
The leftmost outermost redex is reduced first. That is, whenever possible, arguments are substituted into the body of an abstraction before the arguments are reduced. If a term has a beta-normal form, normal order reduction will always reach that normal form.
Applicative order
The leftmost innermost redex is reduced first. As a consequence, a function's arguments are always reduced before they are substituted into the function. Unlike normal order reduction, applicative order reduction may fail to find the beta-normal form of an expression, even if such a normal form exists. For example, the term
{\displaystyle (\;\lambda x.y\;\;(\lambda z.(zz)\;\lambda z.(zz))\;)} is reduced to itself by applicative order, while normal order reduces it to its beta-normal form {\displaystyle y}.
Full β-reductions
Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off".
Weak reduction strategies do not reduce under lambda abstractions:
Call by value
Like applicative order, but no reductions are performed inside abstractions. This is similar to the evaluation order of strict languages like C: the arguments to a function are evaluated before calling the function, and function bodies are not even partially evaluated until the arguments are substituted in.
Call by name
Like normal order, but no reductions are performed inside abstractions. For example, λx.(λy.y)x is in normal form according to this strategy, although it contains the redex (λy.y)x.
Strategies with sharing reduce computations that are "the same" in parallel:
Optimal reduction
As normal order, but computations that have the same label are reduced simultaneously.
Call by need
As call by name (hence weak), but function applications that would duplicate terms instead name the argument. The argument may be evaluated "when needed", at which point the name binding is updated with the reduced value. This can save time compared to normal order evaluation.
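The practical difference between the weak strategies above can be illustrated in an ordinary strict language. In the following Python sketch (an illustration, not part of the original text), wrapping the argument in a thunk mimics call by name, while Python's usual evaluation corresponds to call by value.

def loops():
    # Models a non-terminating argument such as (λz.z z)(λz.z z).
    while True:
        pass

def const_y(arg_thunk):
    # Behaves like (λx.y): the argument is never used.
    return "y"

# Call by name: pass an unevaluated thunk; the argument is never forced.
print(const_y(lambda: loops()))   # prints "y"

# Call by value would evaluate the argument first and never return:
# const_y(loops())                # does not terminate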
== Computability ==
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other. More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F: N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x)=y if and only if f x =β y, where x and y are the Church numerals corresponding to x and y, respectively and =β meaning equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence.
Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
== Complexity ==
The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.
To be precise, one must somehow find the location of all of the occurrences of the bound variable V in the expression E, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of V in E is O(n) in the length n of E. Director strings were an early approach that traded this time cost for a quadratic space usage. More generally this has led to the study of systems that use explicit substitution.
In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps. This was a long-standing open problem, due to size explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.
An unreasonable model does not necessarily mean inefficient. Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction. It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost. In addition the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms.
== Lambda calculus and programming languages ==
As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation", sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
=== Anonymous functions ===
For example, in Python the "square" function can be expressed as a lambda expression as follows:
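lambda x: x**2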
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, x – just a single argument in this case, and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.
For example, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are an insufficient condition for functions to be first class datatypes, because a function is a first class datatype if and only if new instances of the function can be created at runtime. Such runtime creation of functions is supported in Smalltalk, JavaScript, Wolfram Language, and more recently in Scala, Eiffel (as agents), C# (as delegates) and C++11, among others.
=== Parallelism and concurrency ===
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.
== Semantics ==
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D, of functions on itself. However, no nontrivial such D can exist, by cardinality constraints because the set of all functions from D to D has greater cardinality than D, unless D is a singleton set.
In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus.
This work also formed the basis for the denotational semantics of programming languages.
== Variations and extensions ==
These extensions are in the lambda cube:
Typed lambda calculus – Lambda calculus with typed variables (and functions)
System F – A typed lambda calculus with type-variables
Calculus of constructions – A typed lambda calculus with types as first-class values
These formal systems are extensions of lambda calculus that are not in the lambda cube:
Binary lambda calculus – A version of lambda calculus with binary input/output (I/O), a binary encoding of terms, and a designated universal machine.
Lambda-mu calculus – An extension of the lambda calculus for treating classical logic
These formal systems are variations of lambda calculus:
Kappa calculus – A first-order analogue of lambda calculus
These formal systems are related to lambda calculus:
Combinatory logic – A notation for mathematical logic without variables
SKI combinator calculus – A computational system based on the S, K and I combinators, equivalent to lambda calculus, but reducible without variable substitutions
== See also ==
== Further reading ==
Abelson, Harold & Gerald Jay Sussman. Structure and Interpretation of Computer Programs. The MIT Press. ISBN 0-262-51087-1.
Barendregt, Hendrik Pieter Introduction to Lambda Calculus.
Barendregt, Hendrik Pieter, The Impact of the Lambda Calculus in Logic and Computer Science. The Bulletin of Symbolic Logic, Volume 3, Number 2, June 1997.
Barendregt, Hendrik Pieter, The Type Free Lambda Calculus pp1091–1132 of Handbook of Mathematical Logic, North-Holland (1977) ISBN 0-7204-2285-X
Cardone, Felice and Hindley, J. Roger, 2006. History of Lambda-calculus and Combinatory Logic Archived 2021-05-06 at the Wayback Machine. In Gabbay and Woods (eds.), Handbook of the History of Logic, vol. 5. Elsevier.
Church, Alonzo, An unsolvable problem of elementary number theory, American Journal of Mathematics, 58 (1936), pp. 345–363. This paper contains the proof that the equivalence of lambda expressions is in general not decidable.
Church, Alonzo (1941). The Calculi of Lambda-Conversion. Princeton: Princeton University Press. Retrieved 2020-04-14. (ISBN 978-0-691-08394-0)
Frink Jr., Orrin (1944). "Review: The Calculi of Lambda-Conversion by Alonzo Church" (PDF). Bulletin of the American Mathematical Society. 50 (3): 169–172. doi:10.1090/s0002-9904-1944-08090-7.
Kleene, Stephen, A theory of positive integers in formal logic, American Journal of Mathematics, 57 (1935), pp. 153–173 and 219–244. Contains the lambda calculus definitions of several familiar functions.
Landin, Peter, A Correspondence Between ALGOL 60 and Church's Lambda-Notation, Communications of the ACM, vol. 8, no. 2 (1965), pages 89–101. Available from the ACM site. A classic paper highlighting the importance of lambda calculus as a basis for programming languages.
Larson, Jim, An Introduction to Lambda Calculus and Scheme. A gentle introduction for programmers.
Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5.
Schalk, A. and Simmons, H. (2005) An introduction to λ-calculi and arithmetic with a decent selection of exercises. Notes for a course in the Mathematical Logic MSc at Manchester University.
de Queiroz, Ruy J.G.B. (2008). "On Reduction Rules, Meaning-as-Use and Proof-Theoretic Semantics". Studia Logica. 90 (2): 211–247. doi:10.1007/s11225-008-9150-5. S2CID 11321602. A paper giving a formal underpinning to the idea of 'meaning-is-use' which, even if based on proofs, it is different from proof-theoretic semantics as in the Dummett–Prawitz tradition since it takes reduction as the rules giving meaning.
Hankin, Chris, An Introduction to Lambda Calculi for Computer Scientists, ISBN 0954300653
Monographs/textbooks for graduate students
Sørensen, Morten Heine and Urzyczyn, Paweł (2006), Lectures on the Curry–Howard isomorphism, Elsevier, ISBN 0-444-52077-5 is a recent monograph that covers the main topics of lambda calculus from the type-free variety, to most typed lambda calculi, including more recent developments like pure type systems and the lambda cube. It does not cover subtyping extensions.
Pierce, Benjamin (2002), Types and Programming Languages, MIT Press, ISBN 0-262-16209-1 covers lambda calculi from a practical type system perspective; some topics like dependent types are only mentioned, but subtyping is an important topic.
Documents
A Short Introduction to the Lambda Calculus-(PDF) by Achim Jung
A timeline of lambda calculus-(PDF) by Dana Scott
A Tutorial Introduction to the Lambda Calculus-(PDF) by Raúl Rojas
Lecture Notes on the Lambda Calculus-(PDF) by Peter Selinger
Graphic lambda calculus by Marius Buliga
Lambda Calculus as a Workflow Model by Peter Kelly, Paul Coddington, and Andrew Wendelborn; mentions graph reduction as a common means of evaluating lambda expressions and discusses the applicability of lambda calculus for distributed computing (due to the Church–Rosser property, which enables parallel graph reduction for lambda expressions).
== Notes ==
== References ==
Some parts of this article are based on material from FOLDOC, used with permission.
== External links ==
Graham Hutton, Lambda Calculus, a short (12 minutes) Computerphile video on the Lambda Calculus
Helmut Brandl, Step by Step Introduction to Lambda Calculus
"Lambda-calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
David C. Keenan, To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction
L. Allison, Some executable λ-calculus examples
Georg P. Loczewski, The Lambda Calculus and A++
Bret Victor, Alligator Eggs: A Puzzle Game Based on Lambda Calculus
Lambda Calculus Archived 2012-10-14 at the Wayback Machine on Safalra's Website Archived 2021-05-02 at the Wayback Machine
LCI Lambda Interpreter a simple yet powerful pure calculus interpreter
Lambda Calculus links on Lambda-the-Ultimate
Mike Thyer, Lambda Animator, a graphical Java applet demonstrating alternative reduction strategies.
Implementing the Lambda calculus using C++ Templates
Shane Steinert-Threlkeld, "Lambda Calculi", Internet Encyclopedia of Philosophy
Anton Salikhmetov, Macro Lambda Calculus
In number theory, an arithmetic, arithmetical, or number-theoretic function is generally any function whose domain is the set of positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". There is a larger class of number-theoretic functions that do not fit this definition, for example, the prime-counting functions. This article provides links to functions of both classes.
An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n.
Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum.
== Multiplicative and additive functions ==
An arithmetic function a is
completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n;
completely multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all natural numbers m and n;
Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them.
Then an arithmetic function a is
additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n;
multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all coprime natural numbers m and n.
== Notation ==
In this article, {\textstyle \sum _{p}f(p)} and {\textstyle \prod _{p}f(p)} mean that the sum or product is over all prime numbers:
{\displaystyle \sum _{p}f(p)=f(2)+f(3)+f(5)+\cdots }
and
{\displaystyle \prod _{p}f(p)=f(2)f(3)f(5)\cdots .}
Similarly, {\textstyle \sum _{p^{k}}f(p^{k})} and {\textstyle \prod _{p^{k}}f(p^{k})} mean that the sum or product is over all prime powers with strictly positive exponent (so k = 0 is not included):
{\displaystyle \sum _{p^{k}}f(p^{k})=\sum _{p}\sum _{k>0}f(p^{k})=f(2)+f(3)+f(4)+f(5)+f(7)+f(8)+f(9)+\cdots .}
The notations {\textstyle \sum _{d\mid n}f(d)} and {\textstyle \prod _{d\mid n}f(d)} mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if n = 12, then
{\displaystyle \prod _{d\mid 12}f(d)=f(1)f(2)f(3)f(4)f(6)f(12).}
The notations can be combined: {\textstyle \sum _{p\mid n}f(p)} and {\textstyle \prod _{p\mid n}f(p)} mean that the sum or product is over all prime divisors of n. For example, if n = 18, then
{\displaystyle \sum _{p\mid 18}f(p)=f(2)+f(3),}
and similarly {\textstyle \sum _{p^{k}\mid n}f(p^{k})} and {\textstyle \prod _{p^{k}\mid n}f(p^{k})} mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then
{\displaystyle \prod _{p^{k}\mid 24}f(p^{k})=f(2)f(3)f(4)f(8).}
== Ω(n), ω(n), νp(n) – prime power decomposition ==
The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes:
{\displaystyle n=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}}
where p1 < p2 < ... < pk are primes and the aj are positive integers. (1 is given by the empty product.)
It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation νp(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the pi then νp(n) = ai, otherwise it is zero. Then
{\displaystyle n=\prod _{p}p^{\nu _{p}(n)}.}
In terms of the above decomposition, the prime omega functions ω and Ω are defined by ω(n) = k, the number of distinct primes dividing n, and Ω(n) = a1 + a2 + ⋯ + ak, the number of prime factors of n counted with multiplicity.
To avoid repetition, formulas for the functions listed in this article are, whenever possible, given in terms of n and the corresponding pi, ai, ω, and Ω.
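As a running illustration (not part of the original article), the following Python helpers compute the decomposition and the functions ω, Ω and νp by trial division. They assume n is a positive integer and are not intended to be efficient for large n; later examples reuse prime_power_decomposition.

def prime_power_decomposition(n):
    """Return the list of (p, a) pairs with n equal to the product of p**a."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            factors.append((p, a))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def small_omega(n):          # ω(n): number of distinct prime divisors
    return len(prime_power_decomposition(n))

def big_omega(n):            # Ω(n): number of prime factors counted with multiplicity
    return sum(a for _, a in prime_power_decomposition(n))

def nu(p, n):                # νp(n): exponent of the highest power of p dividing n
    a = 0
    while n % p == 0:
        n //= p
        a += 1
    return a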
== Multiplicative functions ==
=== σk(n), τ(n), d(n) – divisor sums ===
σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number.
σ1(n), the sum of the (positive) divisors of n, is usually denoted by σ(n).
Since a positive number to the zero power is one, σ0(n) is therefore the number of (positive) divisors of n; it is usually denoted by d(n) or τ(n) (for the German Teiler = divisors).
{\displaystyle \sigma _{k}(n)=\prod _{i=1}^{\omega (n)}{\frac {p_{i}^{(a_{i}+1)k}-1}{p_{i}^{k}-1}}=\prod _{i=1}^{\omega (n)}\left(1+p_{i}^{k}+p_{i}^{2k}+\cdots +p_{i}^{a_{i}k}\right).}
Setting k = 0 in the second product gives
{\displaystyle \tau (n)=d(n)=(1+a_{1})(1+a_{2})\cdots (1+a_{\omega (n)}).}
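For illustration, σk(n) and d(n) can be computed directly from the product formula above; this sketch assumes the prime_power_decomposition helper introduced earlier and a non-negative integer k.

def sigma(k, n):
    """Sum of the k-th powers of the positive divisors of n."""
    result = 1
    for p, a in prime_power_decomposition(n):
        result *= sum(p ** (k * j) for j in range(a + 1))   # 1 + p^k + ... + p^(a k)
    return result

def d(n):
    return sigma(0, n)       # number of divisors

# sigma(1, 12) == 28 and d(12) == 6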
=== φ(n) – Euler totient function ===
φ(n), the Euler totient function, is the number of positive integers not greater than n that are coprime to n.
{\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right)=n\left({\frac {p_{1}-1}{p_{1}}}\right)\left({\frac {p_{2}-1}{p_{2}}}\right)\cdots \left({\frac {p_{\omega (n)}-1}{p_{\omega (n)}}}\right).}
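A corresponding sketch for φ(n), again assuming the prime_power_decomposition helper from earlier:

def totient(n):
    """Euler's φ(n), using the product formula above."""
    result = n
    for p, _ in prime_power_decomposition(n):
        result = result // p * (p - 1)   # exact: result stays divisible by the remaining primes
    return result

# totient(12) == 4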
=== Jk(n) – Jordan totient function ===
Jk(n), the Jordan totient function, is the number of k-tuples of positive integers all less than or equal to n that form a coprime (k + 1)-tuple together with n. It is a generalization of Euler's totient, φ(n) = J1(n).
{\displaystyle J_{k}(n)=n^{k}\prod _{p\mid n}\left(1-{\frac {1}{p^{k}}}\right)=n^{k}\left({\frac {p_{1}^{k}-1}{p_{1}^{k}}}\right)\left({\frac {p_{2}^{k}-1}{p_{2}^{k}}}\right)\cdots \left({\frac {p_{\omega (n)}^{k}-1}{p_{\omega (n)}^{k}}}\right).}
=== μ(n) – Möbius function ===
μ(n), the Möbius function, is important because of the Möbius inversion formula. See § Dirichlet convolution, below.
{\displaystyle \mu (n)={\begin{cases}(-1)^{\omega (n)}=(-1)^{\Omega (n)}&{\text{if }}\;\omega (n)=\Omega (n)\\0&{\text{if }}\;\omega (n)\neq \Omega (n).\end{cases}}}
This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.)
=== τ(n) – Ramanujan tau function ===
τ(n), the Ramanujan tau function, is defined by its generating function identity:
{\displaystyle \sum _{n\geq 1}\tau (n)q^{n}=q\prod _{n\geq 1}(1-q^{n})^{24}.}
Although it is hard to say exactly what "arithmetical property of n" it "expresses", (τ(n) is (2π)−12 times the nth Fourier coefficient in the q-expansion of the modular discriminant function) it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σk(n) and rk(n) functions (because these are also coefficients in the expansion of modular forms).
=== cq(n) – Ramanujan's sum ===
cq(n), Ramanujan's sum, is the sum of the nth powers of the primitive qth roots of unity:
{\displaystyle c_{q}(n)=\sum _{\stackrel {1\leq a\leq q}{\gcd(a,q)=1}}e^{2\pi i{\tfrac {a}{q}}n}.}
Even though it is defined as a sum of complex numbers (irrational for most values of q), it is an integer. For a fixed value of n it is multiplicative in q:
If q and r are coprime, then
{\displaystyle c_{q}(n)c_{r}(n)=c_{qr}(n).}
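For illustration, cq(n) can be evaluated numerically straight from the definition. The following sketch (not part of the original article) sums the primitive q-th roots of unity and rounds the real part, which is adequate for small q.

import cmath
from math import gcd

def ramanujan_sum(q, n):
    """c_q(n) as the sum over primitive q-th roots of unity, returned as an integer."""
    total = sum(cmath.exp(2j * cmath.pi * a * n / q)
                for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(total.real)

# ramanujan_sum(5, 1) == -1 (equals μ(5)); ramanujan_sum(5, 5) == 4 (equals φ(5))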
=== ψ(n) – Dedekind psi function ===
The Dedekind psi function, used in the theory of modular functions, is defined by the formula
{\displaystyle \psi (n)=n\prod _{p|n}\left(1+{\frac {1}{p}}\right).}
== Completely multiplicative functions ==
=== λ(n) – Liouville function ===
λ(n), the Liouville function, is defined by
{\displaystyle \lambda (n)=(-1)^{\Omega (n)}.}
=== χ(n) – characters ===
All Dirichlet characters χ(n) are completely multiplicative. Two characters have special notations:
The principal character (mod n) is denoted by χ0(a) (or χ1(a)). It is defined as
{\displaystyle \chi _{0}(a)={\begin{cases}1&{\text{if }}\gcd(a,n)=1,\\0&{\text{if }}\gcd(a,n)\neq 1.\end{cases}}}
The quadratic character (mod n) is denoted by the Jacobi symbol for odd n (it is not defined for even n):
{\displaystyle \left({\frac {a}{n}}\right)=\left({\frac {a}{p_{1}}}\right)^{a_{1}}\left({\frac {a}{p_{2}}}\right)^{a_{2}}\cdots \left({\frac {a}{p_{\omega (n)}}}\right)^{a_{\omega (n)}}.}
In this formula {\displaystyle ({\tfrac {a}{p}})} is the Legendre symbol, defined for all integers a and all odd primes p by
{\displaystyle \left({\frac {a}{p}}\right)={\begin{cases}\;\;\,0&{\text{if }}a\equiv 0{\pmod {p}},\\+1&{\text{if }}a\not \equiv 0{\pmod {p}}{\text{ and for some integer }}x,\;a\equiv x^{2}{\pmod {p}}\\-1&{\text{if there is no such }}x.\end{cases}}}
Following the normal convention for the empty product, {\displaystyle \left({\frac {a}{1}}\right)=1.}
== Additive functions ==
=== ω(n) – distinct prime divisors ===
ω(n), defined above as the number of distinct primes dividing n, is additive (see Prime omega function).
== Completely additive functions ==
=== Ω(n) – prime divisors ===
Ω(n), defined above as the number of prime factors of n counted with multiplicities, is completely additive (see Prime omega function).
=== νp(n) – p-adic valuation of an integer n ===
For a fixed prime p, νp(n), defined above as the exponent of the largest power of p dividing n, is completely additive.
=== Logarithmic derivative ===
{\displaystyle \operatorname {ld} (n)={\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}}, where {\displaystyle D(n)} is the arithmetic derivative.
== Neither multiplicative nor additive ==
=== π(x), Π(x), ϑ(x), ψ(x) – prime-counting functions ===
These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive.
π(x), the prime-counting function, is the number of primes not exceeding x. It is the summation function of the characteristic function of the prime numbers.
{\displaystyle \pi (x)=\sum _{p\leq x}1}
A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, etc. It is the summation function of the arithmetic function which takes the value 1/k on integers which are the kth power of some prime number, and the value 0 on other integers.
{\displaystyle \Pi (x)=\sum _{p^{k}\leq x}{\frac {1}{k}}.}
ϑ(x) and ψ(x), the Chebyshev functions, are defined as sums of the natural logarithms of the primes not exceeding x.
{\displaystyle \vartheta (x)=\sum _{p\leq x}\log p,}
{\displaystyle \psi (x)=\sum _{p^{k}\leq x}\log p.}
The second Chebyshev function ψ(x) is the summation function of the von Mangoldt function just below.
=== Λ(n) – von Mangoldt function ===
Λ(n), the von Mangoldt function, is 0 unless the argument n is a prime power pk, in which case it is the natural logarithm of the prime p:
{\displaystyle \Lambda (n)={\begin{cases}\log p&{\text{if }}n=2,3,4,5,7,8,9,11,13,16,\ldots =p^{k}{\text{ is a prime power}}\\0&{\text{if }}n=1,6,10,12,14,15,18,20,21,\dots \;\;\;\;{\text{ is not a prime power}}.\end{cases}}}
=== p(n) – partition function ===
p(n), the partition function, is the number of ways of representing n as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different:
{\displaystyle p(n)=\left|\left\{(a_{1},a_{2},\dots a_{k}):0<a_{1}\leq a_{2}\leq \cdots \leq a_{k}\;\land \;n=a_{1}+a_{2}+\cdots +a_{k}\right\}\right|.}
=== λ(n) – Carmichael function ===
λ(n), the Carmichael function, is the smallest positive number such that
{\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}}
for all a coprime to n. Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo n.
For powers of odd primes and for 2 and 4, λ(n) is equal to the Euler totient function of n; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of n:
{\displaystyle \lambda (n)={\begin{cases}\;\;\phi (n)&{\text{if }}n=2,3,4,5,7,9,11,13,17,19,23,25,27,\dots \\{\tfrac {1}{2}}\phi (n)&{\text{if }}n=8,16,32,64,\dots \end{cases}}}
and for general n it is the least common multiple of λ of each of the prime power factors of n:
{\displaystyle \lambda (p_{1}^{a_{1}}p_{2}^{a_{2}}\dots p_{\omega (n)}^{a_{\omega (n)}})=\operatorname {lcm} [\lambda (p_{1}^{a_{1}}),\;\lambda (p_{2}^{a_{2}}),\dots ,\lambda (p_{\omega (n)}^{a_{\omega (n)}})].}
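The prime-power rules above translate directly into a short computation. The following Python sketch assumes the prime_power_decomposition helper from earlier and Python 3.9 or later for math.lcm.

from math import lcm

def carmichael(n):
    """λ(n) via the prime-power rules above; n is assumed to be a positive integer."""
    if n == 1:
        return 1
    values = []
    for p, a in prime_power_decomposition(n):
        if p == 2 and a >= 3:
            values.append(2 ** (a - 2))              # half of φ(2^a) for a >= 3
        else:
            values.append((p - 1) * p ** (a - 1))    # φ(p^a)
    return lcm(*values)

# carmichael(8) == 2 and carmichael(15) == 4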
=== h(n) – class number ===
h(n), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant n. The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples.
=== rk(n) – sum of k squares ===
rk(n) is the number of ways n can be represented as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different.
{\displaystyle r_{k}(n)=\left|\left\{(a_{1},a_{2},\dots ,a_{k}):n=a_{1}^{2}+a_{2}^{2}+\cdots +a_{k}^{2}\right\}\right|}
=== D(n) – Arithmetic derivative ===
Using the Heaviside notation for the derivative, the arithmetic derivative D(n) is a function such that
{\displaystyle D(n)=1} if n is prime, and {\displaystyle D(mn)=mD(n)+D(m)n} (the product rule).
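Equivalently, by the logarithmic-derivative formula given earlier, D(n) = n·Σp|n νp(n)/p, which gives a direct computation. The following sketch assumes the prime_power_decomposition helper from earlier.

def arithmetic_derivative(n):
    """D(n) = n * sum over p|n of νp(n)/p, for a positive integer n."""
    return sum(n // p * a for p, a in prime_power_decomposition(n))

# arithmetic_derivative(12) == 16, since D(12) = 12*(2/2 + 1/3); D(p) == 1 for any prime p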
== Summation functions ==
Given an arithmetic function a(n), its summation function A(x) is defined by
{\displaystyle A(x):=\sum _{n\leq x}a(n).}
A can be regarded as a function of a real variable. Given a positive integer m, A is constant along open intervals m < x < m + 1, and has a jump discontinuity at each integer for which a(m) ≠ 0.
Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right:
{\displaystyle A_{0}(m):={\frac {1}{2}}\left(\sum _{n<m}a(n)+\sum _{n\leq m}a(n)\right)=A(m)-{\frac {1}{2}}a(m).}
Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large x.
A classical example of this phenomenon is given by the divisor summatory function, the summation function of d(n), the number of divisors of n:
{\displaystyle \liminf _{n\to \infty }d(n)=2}
{\displaystyle \limsup _{n\to \infty }{\frac {\log d(n)\log \log n}{\log n}}=\log 2}
{\displaystyle \lim _{n\to \infty }{\frac {d(1)+d(2)+\cdots +d(n)}{\log(1)+\log(2)+\cdots +\log(n)}}=1.}
An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that g is an average order of f if
{\displaystyle \sum _{n\leq x}f(n)\sim \sum _{n\leq x}g(n)}
as x tends to infinity. The example above shows that d(n) has the average order log(n).
== Dirichlet convolution ==
Given an arithmetic function a(n), let Fa(s), for complex s, be the function defined by the corresponding Dirichlet series (where it converges):
{\displaystyle F_{a}(s):=\sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}.}
Fa(s) is called a generating function of a(n). The simplest such series, corresponding to the constant function a(n) = 1 for all n, is ζ(s) the Riemann zeta function.
The generating function of the Möbius function is the inverse of the zeta function:
{\displaystyle \zeta (s)\,\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}=1,\;\;\Re s>1.}
Consider two arithmetic functions a and b and their respective generating functions Fa(s) and Fb(s). The product Fa(s)Fb(s) can be computed as follows:
{\displaystyle F_{a}(s)F_{b}(s)=\left(\sum _{m=1}^{\infty }{\frac {a(m)}{m^{s}}}\right)\left(\sum _{n=1}^{\infty }{\frac {b(n)}{n^{s}}}\right).}
It is a straightforward exercise to show that if c(n) is defined by
{\displaystyle c(n):=\sum _{ij=n}a(i)b(j)=\sum _{i\mid n}a(i)b\left({\frac {n}{i}}\right),}
then
{\displaystyle F_{c}(s)=F_{a}(s)F_{b}(s).}
This function c is called the Dirichlet convolution of a and b, and is denoted by {\displaystyle a*b}.
A particularly important case is convolution with the constant function a(n) = 1 for all n, corresponding to multiplying the generating function by the zeta function:
{\displaystyle g(n)=\sum _{d\mid n}f(d).}
Multiplying by the inverse of the zeta function gives the Möbius inversion formula:
{\displaystyle f(n)=\sum _{d\mid n}\mu \left({\frac {n}{d}}\right)g(d).}
If f is multiplicative, then so is g. If f is completely multiplicative, then g is multiplicative, but may or may not be completely multiplicative.
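For illustration, Dirichlet convolution and Möbius inversion can be computed naively for small n. The following Python sketch is not part of the original article and assumes the prime_power_decomposition helper from earlier for the Möbius function.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet_convolution(a, b):
    """Return the function (a * b)(n) = sum over d|n of a(d) b(n/d)."""
    return lambda n: sum(a(d) * b(n // d) for d in divisors(n))

def mobius(n):
    factors = prime_power_decomposition(n)
    if any(a > 1 for _, a in factors):
        return 0
    return (-1) ** len(factors)

def mobius_inversion(g):
    """Given g(n) = sum over d|n of f(d), recover f as μ * g."""
    return dirichlet_convolution(mobius, g)

# Example: with g(n) = n (the sum of φ(d) over d|n), mobius_inversion(g)(12) == 4 == φ(12)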
== Relations among the functions ==
There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions.
Here are a few examples:
=== Dirichlet convolutions ===
{\displaystyle \sum _{\delta \mid n}\mu (\delta )=\sum _{\delta \mid n}\lambda \left({\frac {n}{\delta }}\right)|\mu (\delta )|={\begin{cases}1&{\text{if }}n=1\\0&{\text{if }}n\neq 1\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )=n.}
{\displaystyle \varphi (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta =n\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta }}.}
Möbius inversion
{\displaystyle \sum _{d\mid n}J_{k}(d)=n^{k}.}
{\displaystyle J_{k}(n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta ^{k}=n^{k}\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta ^{k}}}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}\delta ^{s}J_{r}(\delta )J_{s}\left({\frac {n}{\delta }}\right)=J_{r+s}(n)}
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )d\left({\frac {n}{\delta }}\right)=\sigma (n).}
{\displaystyle \sum _{\delta \mid n}|\mu (\delta )|=2^{\omega (n)}.}
{\displaystyle |\mu (n)|=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)2^{\omega (\delta )}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}2^{\omega (\delta )}=d(n^{2}).}
{\displaystyle 2^{\omega (n)}=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d(\delta ^{2}).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d(\delta ^{2})=d^{2}(n).}
{\displaystyle d(n^{2})=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d^{2}(\delta ).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d\left({\frac {n}{\delta }}\right)2^{\omega (\delta )}=d^{2}(n).}
{\displaystyle \sum _{\delta \mid n}\lambda (\delta )={\begin{cases}&1{\text{ if }}n{\text{ is a square }}\\&0{\text{ if }}n{\text{ is not square.}}\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\Lambda (\delta )=\log n.}
{\displaystyle \Lambda (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\log(\delta ).}
Möbius inversion
=== Sums of squares ===
For all {\displaystyle k\geq 4,\;\;\;r_{k}(n)>0.} (Lagrange's four-square theorem).
{\displaystyle r_{2}(n)=4\sum _{d\mid n}\left({\frac {-4}{d}}\right),}
where the Kronecker symbol has the values
{\displaystyle \left({\frac {-4}{n}}\right)={\begin{cases}+1&{\text{if }}n\equiv 1{\pmod {4}}\\-1&{\text{if }}n\equiv 3{\pmod {4}}\\\;\;\;0&{\text{if }}n{\text{ is even}}.\\\end{cases}}}
There is a formula for r3 in the section on class numbers below.
{\displaystyle r_{4}(n)=8\sum _{\stackrel {d\mid n}{4\,\nmid \,d}}d=8(2+(-1)^{n})\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d={\begin{cases}8\sigma (n)&{\text{if }}n{\text{ is odd }}\\24\sigma \left({\frac {n}{2^{\nu }}}\right)&{\text{if }}n{\text{ is even }}\end{cases}},}
where ν = ν2(n).
{\displaystyle r_{6}(n)=16\sum _{d\mid n}\chi \left({\frac {n}{d}}\right)d^{2}-4\sum _{d\mid n}\chi (d)d^{2},}
where {\displaystyle \chi (n)=\left({\frac {-4}{n}}\right).}
Define the function σk*(n) as
{\displaystyle \sigma _{k}^{*}(n)=(-1)^{n}\sum _{d\mid n}(-1)^{d}d^{k}={\begin{cases}\sum _{d\mid n}d^{k}=\sigma _{k}(n)&{\text{if }}n{\text{ is odd }}\\\sum _{\stackrel {d\mid n}{2\,\mid \,d}}d^{k}-\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d^{k}&{\text{if }}n{\text{ is even}}.\end{cases}}}
That is, if n is odd, σk*(n) is the sum of the kth powers of the divisors of n, that is, σk(n), and if n is even it is the sum of the kth powers of the even divisors of n minus the sum of the kth powers of the odd divisors of n.
{\displaystyle r_{8}(n)=16\sigma _{3}^{*}(n).}
Adopt the convention that Ramanujan's τ(x) = 0 if x is not an integer.
{\displaystyle r_{24}(n)={\frac {16}{691}}\sigma _{11}^{*}(n)+{\frac {128}{691}}\left\{(-1)^{n-1}259\tau (n)-512\tau \left({\frac {n}{2}}\right)\right\}}
=== Divisor sum convolutions ===
Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series:
{\displaystyle \left(\sum _{n=0}^{\infty }a_{n}x^{n}\right)\left(\sum _{n=0}^{\infty }b_{n}x^{n}\right)=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }a_{i}b_{j}x^{i+j}=\sum _{n=0}^{\infty }\left(\sum _{i=0}^{n}a_{i}b_{n-i}\right)x^{n}=\sum _{n=0}^{\infty }c_{n}x^{n}.}
The sequence {\displaystyle c_{n}=\sum _{i=0}^{n}a_{i}b_{n-i}} is called the convolution or the Cauchy product of the sequences an and bn.
These formulas may be proved analytically (see Eisenstein series) or by elementary methods.
{\displaystyle \sigma _{3}(n)={\frac {1}{5}}\left\{6n\sigma _{1}(n)-\sigma _{1}(n)+12\sum _{0<k<n}\sigma _{1}(k)\sigma _{1}(n-k)\right\}.}
{\displaystyle \sigma _{5}(n)={\frac {1}{21}}\left\{10(3n-1)\sigma _{3}(n)+\sigma _{1}(n)+240\sum _{0<k<n}\sigma _{1}(k)\sigma _{3}(n-k)\right\}.}
{\displaystyle {\begin{aligned}\sigma _{7}(n)&={\frac {1}{20}}\left\{21(2n-1)\sigma _{5}(n)-\sigma _{1}(n)+504\sum _{0<k<n}\sigma _{1}(k)\sigma _{5}(n-k)\right\}\\&=\sigma _{3}(n)+120\sum _{0<k<n}\sigma _{3}(k)\sigma _{3}(n-k).\end{aligned}}}
{\displaystyle {\begin{aligned}\sigma _{9}(n)&={\frac {1}{11}}\left\{10(3n-2)\sigma _{7}(n)+\sigma _{1}(n)+480\sum _{0<k<n}\sigma _{1}(k)\sigma _{7}(n-k)\right\}\\&={\frac {1}{11}}\left\{21\sigma _{5}(n)-10\sigma _{3}(n)+5040\sum _{0<k<n}\sigma _{3}(k)\sigma _{5}(n-k)\right\}.\end{aligned}}}
{\displaystyle \tau (n)={\frac {65}{756}}\sigma _{11}(n)+{\frac {691}{756}}\sigma _{5}(n)-{\frac {691}{3}}\sum _{0<k<n}\sigma _{5}(k)\sigma _{5}(n-k),}
where τ(n) is Ramanujan's function.
Since σk(n) (for natural number k) and τ(n) are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples.
Extend the domain of the partition function by setting p(0) = 1.
{\displaystyle p(n)={\frac {1}{n}}\sum _{1\leq k\leq n}\sigma (k)p(n-k).}
This recurrence can be used to compute p(n).
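For illustration, the recurrence translates directly into code. The following sketch assumes the sigma helper from earlier (so σ(k) is sigma(1, k)) and returns the list p(0), ..., p(limit).

def partition_numbers(limit):
    """p(0..limit) via the recurrence n*p(n) = sum over k=1..n of σ(k) p(n-k)."""
    p = [1]                       # p(0) = 1 by convention
    for n in range(1, limit + 1):
        total = sum(sigma(1, k) * p[n - k] for k in range(1, n + 1))
        p.append(total // n)      # the division is exact
    return p

# partition_numbers(6) == [1, 1, 2, 3, 5, 7, 11]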
=== Class number related ===
Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number h of quadratic number fields to the Jacobi symbol.
An integer D is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to D ≠ 1 and either a) D is squarefree and D ≡ 1 (mod 4) or b) D ≡ 0 (mod 4), D/4 is squarefree, and D/4 ≡ 2 or 3 (mod 4).
Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol:
{\displaystyle \left({\frac {a}{2}}\right)={\begin{cases}\;\;\,0&{\text{ if }}a{\text{ is even}}\\(-1)^{\frac {a^{2}-1}{8}}&{\text{ if }}a{\text{ is odd. }}\end{cases}}}
Then if D < −4 is a fundamental discriminant
{\displaystyle {\begin{aligned}h(D)&={\frac {1}{D}}\sum _{r=1}^{|D|}r\left({\frac {D}{r}}\right)\\&={\frac {1}{2-\left({\tfrac {D}{2}}\right)}}\sum _{r=1}^{|D|/2}\left({\frac {D}{r}}\right).\end{aligned}}}
There is also a formula relating r3 and h. Again, let D be a fundamental discriminant, D < −4. Then
{\displaystyle r_{3}(|D|)=12\left(1-\left({\frac {D}{2}}\right)\right)h(D).}
=== Prime-count related ===
Let {\displaystyle H_{n}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{n}}} be the nth harmonic number. Then
{\displaystyle \sigma (n)\leq H_{n}+e^{H_{n}}\log H_{n}}
is true for every natural number n if and only if the Riemann hypothesis is true.
The Riemann hypothesis is also equivalent to the statement that, for all n > 5040,
{\displaystyle \sigma (n)<e^{\gamma }n\log \log n}
(where γ is the Euler–Mascheroni constant). This is Robin's theorem.
{\displaystyle \sum _{p}\nu _{p}(n)=\Omega (n).}
{\displaystyle \psi (x)=\sum _{n\leq x}\Lambda (n).}
{\displaystyle \Pi (x)=\sum _{n\leq x}{\frac {\Lambda (n)}{\log n}}.}
{\displaystyle e^{\theta (x)}=\prod _{p\leq x}p.}
{\displaystyle e^{\psi (x)}=\operatorname {lcm} [1,2,\dots ,\lfloor x\rfloor ].}
=== Menon's identity ===
In 1965 P Kesava Menon proved
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\gcd(k-1,n)=\varphi (n)d(n).}
This has been generalized by a number of mathematicians. For example,
B. Sury
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},n)=1}}\gcd(k_{1}-1,k_{2},\dots ,k_{s},n)=\varphi (n)\sigma _{s-1}(n).}
N. Rao
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},k_{2},\dots ,k_{s},n)=1}}\gcd(k_{1}-a_{1},k_{2}-a_{2},\dots ,k_{s}-a_{s},n)^{s}=J_{s}(n)d(n),}
where a1, a2, ..., as are integers, gcd(a1, a2, ..., as, n) = 1.
László Fejes Tóth
{\displaystyle \sum _{\stackrel {1\leq k\leq m}{\gcd(k,m)=1}}\gcd(k^{2}-1,m_{1})\gcd(k^{2}-1,m_{2})=\varphi (n)\sum _{\stackrel {d_{1}\mid m_{1}}{d_{2}\mid m_{2}}}\varphi (\gcd(d_{1},d_{2}))2^{\omega (\operatorname {lcm} (d_{1},d_{2}))},}
where m1 and m2 are odd, m = lcm(m1, m2).
In fact, if f is any arithmetical function
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}f(\gcd(k-1,n))=\varphi (n)\sum _{d\mid n}{\frac {(\mu *f)(d)}{\varphi (d)}},}
where {\displaystyle *} stands for Dirichlet convolution.
=== Miscellaneous ===
Let m and n be distinct, odd, and positive. Then the Jacobi symbol satisfies the law of quadratic reciprocity:
{\displaystyle \left({\frac {m}{n}}\right)\left({\frac {n}{m}}\right)=(-1)^{(m-1)(n-1)/4}.}
Let D(n) be the arithmetic derivative. Then the logarithmic derivative
{\displaystyle {\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}.}
See Arithmetic derivative for details.
Let λ(n) be Liouville's function. Then
{\displaystyle |\lambda (n)|\mu (n)=\lambda (n)|\mu (n)|=\mu (n),}
and
{\displaystyle \lambda (n)\mu (n)=|\mu (n)|=\mu ^{2}(n).}
Let λ(n) be Carmichael's function. Then
{\displaystyle \lambda (n)\mid \phi (n).}
Further,
{\displaystyle \lambda (n)=\phi (n){\text{ if and only if }}n={\begin{cases}1,2,4;\\3,5,7,9,11,\ldots {\text{ (that is, }}p^{k}{\text{, where }}p{\text{ is an odd prime)}};\\6,10,14,18,\ldots {\text{ (that is, }}2p^{k}{\text{, where }}p{\text{ is an odd prime)}}.\end{cases}}}
See Multiplicative group of integers modulo n and Primitive root modulo n.
{\displaystyle 2^{\omega (n)}\leq d(n)\leq 2^{\Omega (n)}.}
{\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\phi (n)\sigma (n)}{n^{2}}}<1.}
{\displaystyle {\begin{aligned}c_{q}(n)&={\frac {\mu \left({\frac {q}{\gcd(q,n)}}\right)}{\phi \left({\frac {q}{\gcd(q,n)}}\right)}}\phi (q)\\&=\sum _{\delta \mid \gcd(q,n)}\mu \left({\frac {q}{\delta }}\right)\delta .\end{aligned}}}
Note that {\displaystyle \phi (q)=\sum _{\delta \mid q}\mu \left({\frac {q}{\delta }}\right)\delta .}
{\displaystyle c_{q}(1)=\mu (q).}
{\displaystyle c_{q}(q)=\phi (q).}
{\displaystyle \sum _{\delta \mid n}d^{3}(\delta )=\left(\sum _{\delta \mid n}d(\delta )\right)^{2}.}
Compare this with 1³ + 2³ + 3³ + ⋯ + n³ = (1 + 2 + 3 + ⋯ + n)².
{\displaystyle d(uv)=\sum _{\delta \mid \gcd(u,v)}\mu (\delta )d\left({\frac {u}{\delta }}\right)d\left({\frac {v}{\delta }}\right).}
{\displaystyle \sigma _{k}(u)\sigma _{k}(v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{k}\sigma _{k}\left({\frac {uv}{\delta ^{2}}}\right).}
{\displaystyle \tau (u)\tau (v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{11}\tau \left({\frac {uv}{\delta ^{2}}}\right),}
where τ(n) is Ramanujan's function.
== First 100 values of some arithmetic functions ==
== Notes ==
== References ==
Tom M. Apostol (1976), Introduction to Analytic Number Theory, Springer Undergraduate Texts in Mathematics, ISBN 0-387-90163-9
Apostol, Tom M. (1989), Modular Functions and Dirichlet Series in Number Theory (2nd Edition), New York: Springer, ISBN 0-387-97127-0
Bateman, Paul T.; Diamond, Harold G. (2004), Analytic number theory, an introduction, World Scientific, ISBN 978-981-238-938-1
Cohen, Henri (1993), A Course in Computational Algebraic Number Theory, Berlin: Springer, ISBN 3-540-55640-0
Edwards, Harold (1977). Fermat's Last Theorem. New York: Springer. ISBN 0-387-90230-9.
Hardy, G. H. (1999), Ramanujan: Twelve Lectures on Subjects Suggested by his Life and work, Providence RI: AMS / Chelsea, hdl:10115/1436, ISBN 978-0-8218-2023-0
Hardy, G. H.; Wright, E. M. (1979) [1938]. An Introduction to the Theory of Numbers (5th ed.). Oxford: Clarendon Press. ISBN 0-19-853171-0. MR 0568909. Zbl 0423.10001.
Jameson, G. J. O. (2003), The Prime Number Theorem, Cambridge University Press, ISBN 0-521-89110-8
Koblitz, Neal (1984), Introduction to Elliptic Curves and Modular Forms, New York: Springer, ISBN 0-387-97966-2
Landau, Edmund (1966), Elementary Number Theory, New York: Chelsea
William J. LeVeque (1996), Fundamentals of Number Theory, Courier Dover Publications, ISBN 0-486-68906-9
Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77-171950
Elliott Mendelson (1987), Introduction to Mathematical Logic, CRC Press, ISBN 0-412-80830-7
Nagell, Trygve (1964), Introduction to number theory (2nd Edition), Chelsea, ISBN 978-0-8218-2833-5
Niven, Ivan M.; Zuckerman, Herbert S. (1972), An introduction to the theory of numbers (3rd Edition), John Wiley & Sons, ISBN 0-471-64154-5
Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 77-81766
Ramanujan, Srinivasa (2000), Collected Papers, Providence RI: AMS / Chelsea, ISBN 978-0-8218-2076-6
Williams, Kenneth S. (2011), Number theory in the spirit of Liouville, London Mathematical Society Student Texts, vol. 76, Cambridge: Cambridge University Press, ISBN 978-0-521-17562-3, Zbl 1227.11002
== Further reading ==
Schwarz, Wolfgang; Spilker, Jürgen (1994), Arithmetical Functions. An introduction to elementary and analytic properties of arithmetic functions and to some of their almost-periodic properties, London Mathematical Society Lecture Note Series, vol. 184, Cambridge University Press, ISBN 0-521-42725-8, Zbl 0807.11001
== External links ==
"Arithmetic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Matthew Holden, Michael Orrison, Michael Varble Yet another Generalization of Euler's Totient Function
Huard, Ou, Spearman, and Williams. Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine
László Tóth, Menon's Identity and arithmetical sums representing functions of several variables
The Schreier–Sims algorithm is an algorithm in computational group theory, named after the mathematicians Otto Schreier and Charles Sims. This algorithm can find the order of a finite permutation group, determine whether a given permutation is a member of the group, and other tasks in polynomial time. It was introduced by Sims in 1970, based on Schreier's subgroup lemma. The running time was subsequently improved by Donald Knuth in 1991. Later, an even faster randomized version of the algorithm was developed.
== Background and timing ==
The algorithm is an efficient method of computing a base and strong generating set (BSGS) of a permutation group. In particular, an SGS determines the order of a group and makes it easy to test membership in the group. Since the SGS is critical for many algorithms in computational group theory, computer algebra systems typically rely on the Schreier–Sims algorithm for efficient calculations in groups.
The running time of Schreier–Sims varies with the implementation. Let {\displaystyle G\leq S_{n}} be given by {\displaystyle t} generators. For the deterministic version of the algorithm, possible running times are:
{\displaystyle O(n^{2}\log ^{3}|G|+tn\log |G|)} requiring memory {\displaystyle O(n^{2}\log |G|+tn)}
{\displaystyle O(n^{3}\log ^{3}|G|+tn^{2}\log |G|)} requiring memory {\displaystyle O(n\log ^{2}|G|+tn)}
The use of Schreier vectors can have a significant influence on the performance of implementations of the Schreier–Sims algorithm.
The Monte Carlo variations of the Schreier–Sims algorithm have the estimated complexity:
{\displaystyle O(n\log n\log ^{4}|G|+tn\log |G|)} requiring memory {\displaystyle O(n\log |G|+tn)}.
Modern computer algebra systems, such as GAP and Magma, typically use an optimized Monte Carlo algorithm.
== Outline of basic algorithm ==
Following is C++-style pseudo-code for the basic idea of the Schreier-Sims algorithm. It is meant to leave out all finer details, such as memory management or any kind of low-level optimization, so as not to obfuscate the most important ideas of the algorithm. Its goal is not to compile.
Notable details left out here include the growing of the orbit tree and the calculation of each new Schreier generator. In place of the orbit tree, a Schreier vector can be used, but the idea is essentially the same. The tree is rooted at the identity element, which fixes the point stabilized by the subgroup. Each node of the tree can represent a permutation that, when combined with all permutations in the path from the root to it, takes that point to some new point not visited by any other node of the tree. By the orbit-stabilizer theorem, these form a transversal of the subgroup of our group that stabilizes the point whose entire orbit is maintained by the tree. Calculating a Schreier generator is a simple application of the Schreier's subgroup lemma.
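As a concrete illustration of the orbit and transversal bookkeeping described above, here is a minimal Python sketch (not the C++-style pseudo-code referred to earlier). It assumes permutations are represented as tuples p in which p[i] is the image of the point i.

def compose(p, q):
    """Return the permutation 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def orbit_transversal(generators, point, degree):
    """Breadth-first orbit of `point`; transversal[b] is a permutation taking `point` to b."""
    identity = tuple(range(degree))
    transversal = {point: identity}
    queue = [point]
    while queue:
        b = queue.pop(0)
        for g in generators:
            image = g[b]
            if image not in transversal:
                transversal[image] = compose(g, transversal[b])
                queue.append(image)
    return transversal

def schreier_generators(generators, transversal):
    """Schreier's lemma: generators of the subgroup stabilizing the base point."""
    gens = set()
    for b, t_b in transversal.items():
        for g in generators:
            t_gb = transversal[g[b]]
            gens.add(compose(inverse(t_gb), compose(g, t_b)))
    return gens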
Another detail left out is the membership test. This test is based upon the sifting process. A permutation is sifted down the chain at each step by finding the containing coset, then using that coset's representative to find a permutation in the subgroup, and the process is repeated in the subgroup with that found permutation. If the end of the chain is reached (i.e., we reach the trivial subgroup), then the sifted permutation was a member of the group at the top of the chain.
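To make the outline concrete, the following is a compact, unoptimized Python sketch of the deterministic algorithm (an illustration only; the function names and data structures are our own, not those of any particular computer algebra system). It represents permutations as tuples of images, grows an orbit with explicit coset representatives instead of an orbit tree, forms Schreier generators, and uses sifting both to complete the stabilizer chain and for the membership test described above.

from math import prod

def compose(p, q):
    # permutation product "apply q first, then p"; permutations are tuples of images
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def orbit_transversal(point, gens, identity):
    # breadth-first orbit of `point` under `gens`, with a coset representative
    # (a permutation sending `point` to the orbit element) for each orbit element
    transversal = {point: identity}
    queue = [point]
    while queue:
        current = queue.pop()
        for g in gens:
            image = g[current]
            if image not in transversal:
                transversal[image] = compose(g, transversal[current])
                queue.append(image)
    return transversal

def schreier_sims(generators, n):
    # base and strong generating set for the subgroup of S_n generated by `generators`
    identity = tuple(range(n))
    gens = [tuple(g) for g in generators if tuple(g) != identity]
    base = []
    for g in gens:                         # every generator must move some base point
        if all(g[b] == b for b in base):
            base.append(next(x for x in range(n) if g[x] != x))
    S = [[g for g in gens if all(g[b] == b for b in base[:i])]
         for i in range(len(base))]        # strong generators per level
    T = [None] * len(base)                 # transversals per level

    def sift(g, start):
        # sift g through levels >= start; return (residue, level where sifting stopped)
        for l in range(start, len(base)):
            image = g[base[l]]
            if T[l] is None or image not in T[l]:
                return g, l
            g = compose(inverse(T[l][image]), g)
        return g, len(base)

    level = len(base) - 1
    while level >= 0:
        T[level] = orbit_transversal(base[level], S[level], identity)
        new_gen = None
        for beta, rep in T[level].items():
            for s in S[level]:
                # Schreier generator: an element of the stabilizer of base[level]
                u = compose(inverse(T[level][s[beta]]), compose(s, rep))
                residue, j = sift(u, level + 1)
                if residue != identity:
                    new_gen = (residue, j)
                    break
            if new_gen:
                break
        if new_gen is None:
            level -= 1                     # level verified; move up the chain
        else:
            residue, j = new_gen
            if j == len(base):             # residue fixes all base points: extend the base
                base.append(next(x for x in range(n) if residue[x] != x))
                S.append([])
                T.append(None)
            for l in range(level + 1, j + 1):
                S[l].append(residue)       # record the new strong generator
            level = j                      # re-verify the deeper levels first
    return base, S, T

def group_order(T):
    return prod(len(t) for t in T)

def is_member(perm, base, T):
    # membership test by sifting through the completed stabilizer chain
    g = tuple(perm)
    for b, t in zip(base, T):
        if g[b] not in t:
            return False
        g = compose(inverse(t[g[b]]), g)
    return g == tuple(range(len(g)))

# example: (0 1) and (0 1 2) generate the symmetric group on 3 points, of order 6
base, S, T = schreier_sims([(1, 0, 2), (1, 2, 0)], 3)
assert group_order(T) == 6 and is_member((2, 0, 1), base, T)

The order of the group is then the product of the orbit sizes along the chain, and a permutation belongs to the group exactly when it sifts to the identity, as described above.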
== References ==
Knuth, Donald E. "Efficient representation of perm groups". Combinatorica 11 (1991), no. 1, 33–43.
Seress, A., Permutation Group Algorithms, Cambridge U Press, 2002.
Sims, Charles C. "Computational methods in the study of permutation groups", in Computational Problems in Abstract Algebra, pp. 169–183, Pergamon, Oxford, 1970. | Wikipedia/Schreier–Sims_algorithm |
In Boolean logic, the majority function (also called the median operator) is the Boolean function that evaluates to false when half or more arguments are false and true otherwise, i.e. the value of the function equals the value of the majority of the inputs.
== Boolean circuits ==
A majority gate is a logical gate used in circuit complexity and other applications of Boolean circuits. A majority gate returns true if and only if more than 50% of its inputs are true.
For instance, in a full adder, the carry output is found by applying a majority function to the three inputs, although frequently this part of the adder is broken down into several simpler logical gates.
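As a small illustration (our own, with illustrative names), the carry output of a one-bit full adder is exactly the three-input majority of its inputs:

def majority3(a, b, c):
    # majority of three bits: 1 when at least two inputs are 1
    return (a & b) | (b & c) | (c & a)

def full_adder(a, b, carry_in):
    # sum bit is the XOR of the inputs; carry-out is their majority
    return a ^ b ^ carry_in, majority3(a, b, carry_in)

assert full_adder(1, 1, 0) == (0, 1)   # 1 + 1 = 0, carry 1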
Many systems have triple modular redundancy; they use the majority function for majority logic decoding to implement error correction.
A major result in circuit complexity asserts that the majority function cannot be computed by AC0 circuits of subexponential size.
== Properties ==
For any x, y, and z, the ternary median operator ⟨x, y, z⟩ satisfies the following equations.
⟨x, y, y⟩ = y
⟨x, y, z⟩ = ⟨z, x, y⟩
⟨x, y, z⟩ = ⟨x, z, y⟩
⟨⟨x, w, y⟩, w, z⟩ = ⟨x, w, ⟨y, w, z⟩⟩
An abstract system satisfying these as axioms is a median algebra.
Other useful properties of the ternary median operator function include:
⟨x, y, ⟨x, y, z⟩⟩ = ⟨x, y, z⟩
⟨¬x, ¬y, ¬z⟩ = ¬⟨x, y, z⟩
⟨x, y, x ⊕ y ⊕ z⟩ = ⟨x, y, ¬z⟩
⟨¬x, y, x ⊕ y ⊕ z⟩ = ⟨¬x, y, z⟩
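All of these identities can be confirmed by exhaustive checking over Boolean assignments; a short Python sketch of such a check (illustrative only):

from itertools import product

def med(x, y, z):
    # ternary Boolean median (majority)
    return (x and y) or (y and z) or (z and x)

for x, y, z, w in product([False, True], repeat=4):
    m = med(x, y, z)
    assert med(x, y, y) == y
    assert m == med(z, x, y) == med(x, z, y)
    assert med(med(x, w, y), w, z) == med(x, w, med(y, w, z))
    assert med(x, y, m) == m
    assert med(not x, not y, not z) == (not m)
    assert med(x, y, x ^ y ^ z) == med(x, y, not z)
    assert med(not x, y, x ^ y ^ z) == med(not x, y, z)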
== Ties ==
Most applications deliberately force an odd number of inputs so they don't have to deal with the question of what happens when exactly half the inputs are 0 and exactly half the inputs are 1. The few systems that calculate the majority function on an even number of inputs are often biased towards "0" – they produce "0" when exactly half the inputs are 0 – for example, a 4-input majority gate has a 0 output only when two or more 0's appear at its inputs. In a few systems, the tie can be broken randomly.
== Monotone formulae for majority ==
For n = 1 the median operator is just the unary identity operation x. For n = 3 the ternary median operator can be expressed using conjunction and disjunction as xy + yz + zx.
For an arbitrary n there exists a monotone formula for majority of size O(n^5.3). This is proved using the probabilistic method; thus, the formula is non-constructive.
Approaches exist for an explicit formula for majority of polynomial size:
Take the median from a sorting network, where each compare-and-swap "wire" is simply an OR gate and an AND gate. The Ajtai–Komlós–Szemerédi (AKS) construction is an example.
Combine the outputs of smaller majority circuits.
Derandomize the Valiant proof of a monotone formula.
== See also ==
Boolean algebra (structure)
Boolean algebras canonically defined
Boyer–Moore majority vote algorithm
Majority problem (cellular automaton)
== Notes ==
== References ==
Knuth, Donald E. (2008). Introduction to combinatorial algorithms and Boolean functions. The Art of Computer Programming. Vol. 4a. Upper Saddle River, NJ: Addison-Wesley. pp. 64–74. ISBN 978-0-321-53496-5.
== External links ==
Media related to Majority functions at Wikimedia Commons | Wikipedia/Majority_function |
In operator theory, an area of mathematics, Douglas' lemma relates factorization, range inclusion, and majorization of Hilbert space operators. It is generally attributed to Ronald G. Douglas, although Douglas acknowledges that aspects of the result may already have been known. The statement of the result is as follows:
Theorem: If {\displaystyle A} and {\displaystyle B} are bounded operators on a Hilbert space {\displaystyle H}, the following are equivalent:
{\displaystyle \operatorname {range} A\subseteq \operatorname {range} B};
{\displaystyle AA^{*}\leq \lambda ^{2}BB^{*}} for some {\displaystyle \lambda \geq 0};
there exists a bounded operator {\displaystyle C} on {\displaystyle H} such that {\displaystyle A=BC}.
Moreover, if these equivalent conditions hold, then there is a unique operator {\displaystyle C} such that
{\displaystyle \Vert C\Vert ^{2}=\inf\{\mu :\,AA^{*}\leq \mu BB^{*}\}};
{\displaystyle \ker A=\ker C};
{\displaystyle \operatorname {range} C\subseteq {\overline {\operatorname {range} B^{*}}}}.
A generalization of Douglas' lemma for unbounded operators on a Banach space was proved by Forough (2014).
== See also ==
Positive operator
== References == | Wikipedia/Douglas'_lemma |
Brute Force or brute force may refer to:
== Techniques ==
Brute force method or proof by exhaustion, a method of mathematical proof
Brute-force attack, a cryptanalytic attack
Brute-force search, a computer problem-solving technique
== People ==
Brute Force (musician) (born 1940), American singer and songwriter
== Arts and entertainment ==
=== Film ===
Brute Force (1914 film), a short silent drama directed by D. W. Griffith
Brute Force (1947 film), a film noir directed by Jules Dassin
=== Literature ===
Brute Force, a 2008 Nick Stone Missions novel by Andy McNab
Brute Force (Ellis book), a 1990 book by the historian John Ellis
Brute Force: Cracking the Data Encryption Standard, a 2005 book by Matt Curtin
=== Other media ===
Brute Force (album), a 2016 record by the Algorithm
Brute Force (comics), a comic by Simon Furman
Brute Force (video game), a 2003 third-person shooter
== See also ==
All pages with titles beginning with Brute Force
All pages with titles beginning with Brute force
All pages with titles containing Brute force
Force (disambiguation)
Brute (disambiguation)
Fôrça Bruta, a 1970 album by Jorge Ben | Wikipedia/Brute_force_(disambiguation) |
In the study of graph algorithms, an implicit graph representation (or more simply implicit graph) is a graph whose vertices or edges are not represented as explicit objects in a computer's memory, but rather are determined algorithmically from some other input, for example a computable function.
== Neighborhood representations ==
The notion of an implicit graph is common in various search algorithms which are described in terms of graphs. In this context, an implicit graph may be defined as a set of rules to define all neighbors for any specified vertex. This type of implicit graph representation is analogous to an adjacency list, in that it provides easy access to the neighbors of each vertex. For instance, in searching for a solution to a puzzle such as Rubik's Cube, one may define an implicit graph in which each vertex represents one of the possible states of the cube, and each edge represents a move from one state to another. It is straightforward to generate the neighbors of any vertex by trying all possible moves in the puzzle and determining the states reached by each of these moves; however, an implicit representation is necessary, as the state space of Rubik's Cube is too large to allow an algorithm to list all of its states.
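As a much smaller toy example of a neighborhood representation (our own illustration, not from the puzzle setting above), the hypercube graph on n-bit strings can be searched without ever materializing its 2^n vertices; the neighbor rule is simply "flip one bit":

from collections import deque

def neighbors(vertex, n_bits):
    # implicit neighbor rule: two vertices are adjacent when their labels differ in one bit
    return [vertex ^ (1 << i) for i in range(n_bits)]

def bfs_distance(start, goal, n_bits):
    # breadth-first search touching only the vertices it actually visits
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            return dist[v]
        for w in neighbors(v, n_bits):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return -1

assert bfs_distance(0b0000, 0b1011, 4) == 3   # equals the Hamming distance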
In computational complexity theory, several complexity classes have been defined in connection with implicit graphs, defined as above by a rule or algorithm for listing the neighbors of a vertex. For instance, PPA is the class of problems in which one is given as input an undirected implicit graph (in which vertices are n-bit binary strings, with a polynomial time algorithm for listing the neighbors of any vertex) and a vertex of odd degree in the graph, and must find a second vertex of odd degree. By the handshaking lemma, such a vertex exists; finding one is a problem in NP, but the problems that can be defined in this way may not necessarily be NP-complete, as it is unknown whether PPA = NP. PPAD is an analogous class defined on implicit directed graphs that has attracted attention in algorithmic game theory because it contains the problem of computing a Nash equilibrium. The problem of testing reachability of one vertex to another in an implicit graph may also be used to characterize space-bounded nondeterministic complexity classes including NL (the class of problems that may be characterized by reachability in implicit directed graphs whose vertices are O(log n)-bit bitstrings), SL (the analogous class for undirected graphs), and PSPACE (the class of problems that may be characterized by reachability in implicit graphs with polynomial-length bitstrings). In this complexity-theoretic context, the vertices of an implicit graph may represent the states of a nondeterministic Turing machine, and the edges may represent possible state transitions, but implicit graphs may also be used to represent many other types of combinatorial structure. PLS, another complexity class, captures the complexity of finding local optima in an implicit graph.
Implicit graph models have also been used as a form of relativization in order to prove separations between complexity classes that are stronger than the known separations for non-relativized models. For instance, Childs et al. used neighborhood representations of implicit graphs to define a graph traversal problem that can be solved in polynomial time on a quantum computer but that requires exponential time to solve on any classical computer.
== Adjacency labeling schemes ==
In the context of efficient representations of graphs, J. H. Muller defined a local structure or adjacency labeling scheme for a graph G in a given family F of graphs to be an assignment of an O(log n)-bit identifier to each vertex of G, together with an algorithm (that may depend on F but is independent of the individual graph G) that takes as input two vertex identifiers and determines whether or not they are the endpoints of an edge in G. That is, this type of implicit representation is analogous to an adjacency matrix: it is straightforward to check whether two vertices are adjacent but finding the neighbors of any vertex may involve looping through all vertices and testing which ones are neighbors.
Graph families with adjacency labeling schemes include:
Bounded degree graphs
If every vertex in G has at most d neighbors, one may number the vertices of G from 1 to n and let the identifier for a vertex be the (d + 1)-tuple of its own number and the numbers of its neighbors. Two vertices are adjacent when the first numbers in their identifiers appear later in the other vertex's identifier. More generally, the same approach can be used to provide an implicit representation for graphs with bounded arboricity or bounded degeneracy, including the planar graphs and the graphs in any minor-closed graph family.
Intersection graphs
An interval graph is the intersection graph of a set of line segments in the real line. It may be given an adjacency labeling scheme in which the points that are endpoints of line segments are numbered from 1 to 2n and each vertex of the graph is represented by the numbers of the two endpoints of its corresponding interval. With this representation, one may check whether two vertices are adjacent by comparing the numbers that represent them and verifying that these numbers define overlapping intervals (a minimal sketch of this labeling appears after this list). The same approach works for other geometric intersection graphs including the graphs of bounded boxicity and the circle graphs, and subfamilies of these families such as the distance-hereditary graphs and cographs. However, a geometric intersection graph representation does not always imply the existence of an adjacency labeling scheme, because it may require more than a logarithmic number of bits to specify each geometric object. For instance, representing a graph as a unit disk graph may require exponentially many bits for the coordinates of the disk centers.
Low-dimensional comparability graphs
The comparability graph for a partially ordered set has a vertex for each set element and an edge between two set elements that are related by the partial order. The order dimension of a partial order is the minimum number of linear orders whose intersection is the given partial order. If a partial order has bounded order dimension, then an adjacency labeling scheme for the vertices in its comparability graph may be defined by labeling each vertex with its position in each of the defining linear orders, and determining that two vertices are adjacent if each corresponding pair of numbers in their labels has the same order relation as each other pair. In particular, this allows for an adjacency labeling scheme for the chordal comparability graphs, which come from partial orders of dimension at most four.
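As a minimal sketch of the interval-graph labeling scheme described in the list above (function names are illustrative), each vertex is labeled with its interval's two endpoint numbers, and adjacency is decided from the two labels alone:

def interval_label(left_end, right_end):
    # label a vertex by the endpoint numbers of its interval
    return (left_end, right_end)

def adjacent(u, v):
    # two vertices are adjacent exactly when their labeled intervals overlap
    (ul, ur), (vl, vr) = u, v
    return ul <= vr and vl <= ur

a, b, c = interval_label(1, 4), interval_label(3, 6), interval_label(5, 8)
assert adjacent(a, b) and adjacent(b, c) and not adjacent(a, c)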
=== The implicit graph conjecture ===
Not all graph families have local structures. For some families, a simple counting argument proves that adjacency labeling schemes do not exist: only O(n log n) bits may be used to represent an entire graph, so a representation of this type can only exist when the number of n-vertex graphs in the given family F is at most 2^(O(n log n)). Graph families that have larger numbers of graphs than this, such as the bipartite graphs or the triangle-free graphs, do not have adjacency labeling schemes. However, even families of graphs in which the number of graphs in the family is small might not have an adjacency labeling scheme; for instance, the family of graphs with fewer edges than vertices has 2^(O(n log n)) n-vertex graphs but does not have an adjacency labeling scheme, because one could transform any given graph into a larger graph in this family by adding a new isolated vertex for each edge, without changing its labelability. Kannan et al. asked whether having a forbidden subgraph characterization and having at most 2^(O(n log n)) n-vertex graphs are together enough to guarantee the existence of an adjacency labeling scheme; Spinrad restated this question as a conjecture, which became known as the implicit graph conjecture. Recent work has refuted the conjecture by providing a family of graphs with a forbidden subgraph characterization and a slow-enough growth rate but with no adjacency labeling scheme.
Among the families of graphs which satisfy the conditions of the conjecture and for which there is no known adjacency labeling scheme are the family of disk graphs and line segment intersection graphs.
=== Labeling schemes and induced universal graphs ===
If a graph family F has an adjacency labeling scheme, then the n-vertex graphs in F may be represented as induced subgraphs of a common induced universal graph of polynomial size, the graph consisting of all possible vertex identifiers. Conversely, if an induced universal graph of this type can be constructed, then the identities of its vertices may be used as labels in an adjacency labeling scheme. For this application of implicit graph representations, it is important that the labels use as few bits as possible, because the number of bits in the labels translates directly into the number of vertices in the induced universal graph. Alstrup and Rauhe showed that any tree has an adjacency labeling scheme with log₂ n + O(log* n) bits per label, from which it follows that any graph with arboricity k has a scheme with k log₂ n + O(log* n) bits per label and a universal graph with n^k 2^(O(log* n)) vertices. In particular, planar graphs have arboricity at most three, so they have universal graphs with a nearly-cubic number of vertices.
This bound was improved by Gavoille and Labourel, who showed that planar graphs and minor-closed graph families have a labeling scheme with 2 log₂ n + O(log log n) bits per label, and that graphs of bounded treewidth have a labeling scheme with log₂ n + O(log log n) bits per label.
The bound for planar graphs was improved again by Bonamy, Gavoille, and Pilipczuk, who showed that planar graphs have a labelling scheme with (4/3 + o(1)) log₂ n bits per label.
Finally, Dujmović et al. showed that planar graphs have a labelling scheme with (1 + o(1)) log₂ n bits per label, giving a universal graph with n^(1+o(1)) vertices.
== Evasiveness ==
The Aanderaa–Karp–Rosenberg conjecture concerns implicit graphs given as a set of labeled vertices with a black-box rule for determining whether any two vertices are adjacent. This definition differs from an adjacency labeling scheme in that the rule may be specific to a particular graph rather than being a generic rule that applies to all graphs in a family. Because of this difference, every graph has an implicit representation. For instance, the rule could be to look up the pair of vertices in a separate adjacency matrix. However, an algorithm that is given as input an implicit graph of this type must operate on it only through the implicit adjacency test, without reference to how the test is implemented.
A graph property is the question of whether a graph belongs to a given family of graphs; the answer must remain invariant under any relabeling of the vertices. In this context, the question to be determined is how many pairs of vertices must be tested for adjacency, in the worst case, before the property of interest can be determined to be true or false for a given implicit graph. Rivest and Vuillemin proved that any deterministic algorithm for any nontrivial graph property must test a quadratic number of pairs of vertices. The full Aanderaa–Karp–Rosenberg conjecture is that any deterministic algorithm for a monotonic graph property (one that remains true if more edges are added to a graph with the property) must in some cases test every possible pair of vertices. Several cases of the conjecture have been proven to be true—for instance, it is known to be true for graphs with a prime number of vertices—but the full conjecture remains open. Variants of the problem for randomized algorithms and quantum algorithms have also been studied.
Bender and Ron have shown that, in the same model used for the evasiveness conjecture, it is possible in only constant time to distinguish directed acyclic graphs from graphs that are very far from being acyclic. In contrast, such a fast time is not possible in neighborhood-based implicit graph models.
== See also ==
Black box group, an implicit model for group-theoretic algorithms
Matroid oracle, an implicit model for matroid algorithms
== References == | Wikipedia/Implicit_graph |
In computer science, the Floyd–Rivest algorithm is a selection algorithm developed by Robert W. Floyd and Ronald L. Rivest that has an optimal expected number of comparisons within lower-order terms. It is functionally equivalent to quickselect, but runs faster in practice on average. It has an expected running time of O(n) and an expected number of comparisons of n + min(k, n − k) + O(n^(1/2) log^(1/2) n).
The algorithm was originally presented in a Stanford University technical report containing two papers, where it was referred to as SELECT and paired with PICK, or median of medians. It was subsequently published in Communications of the ACM, Volume 18: Issue 3.
== Algorithm ==
The Floyd–Rivest algorithm is a divide and conquer algorithm, sharing many similarities with quickselect. It uses sampling to help partition the list into three sets. It then recursively selects the kth smallest element from the appropriate set.
The general steps are:
Select a small random sample S from the list L.
From S, recursively select two elements, u and v, such that u < v. These two elements will be the pivots for the partition and are expected to contain the kth smallest element of the entire list between them (in a sorted list).
Using u and v, partition S into three sets: A, B, and C. A will contain the elements with values less than u, B will contain the elements with values between u and v, and C will contain the elements with values greater than v.
Partition the remaining elements in L (that is, the elements in L - S) by comparing them to u or v and placing them into the appropriate set. If k is smaller than half the number of the elements in L rounded up, then the remaining elements should be compared to v first and then only to u if they are smaller than v. Otherwise, the remaining elements should be compared to u first and only to v if they are greater than u.
Based on the value of k, apply the algorithm recursively to the appropriate set to select the kth smallest element in L.
By using |S| = Θ(n^(2/3) log^(1/3) n), we can get n + min(k, n − k) + O(n^(2/3) log^(1/3) n) expected comparisons. We can get n + min(k, n − k) + O(n^(1/2) log^(1/2) n) expected comparisons by starting with a small S and repeatedly updating u and v to keep the size of B small enough (O(n^(1/2) log^(1/2) n) at Θ(n) processed elements) without unacceptable risk of the desired element being outside of B.
=== Pseudocode version ===
The following pseudocode rearranges the elements between left and right, such that for some value k, where left ≤ k ≤ right, the kth element in the list will contain the (k − left + 1)th smallest value, with the ith element being less than or equal to the kth for all left ≤ i ≤ k and the jth element being larger than or equal to it for k ≤ j ≤ right:
// left is the left index for the interval
// right is the right index for the interval
// k is the desired index value, where array[k] is the (k+1)th smallest element when left = 0
function select(array, left, right, k) is
    while right > left do
        // Use select recursively to sample a smaller set of size s
        // the arbitrary constants 600 and 0.5 are used in the original
        // version to minimize execution time.
        if right − left > 600 then
            n := right − left + 1
            i := k − left + 1
            z := ln(n)
            s := 0.5 × exp(2 × z/3)
            sd := 0.5 × sqrt(z × s × (n − s)/n) × sign(i − n/2)
            newLeft := max(left, k − i × s/n + sd)
            newRight := min(right, k + (n − i) × s/n + sd)
            select(array, newLeft, newRight, k)
        // partition the elements between left and right around t
        t := array[k]
        i := left
        j := right
        swap array[left] and array[k]
        if array[right] > t then
            swap array[right] and array[left]
        while i < j do
            swap array[i] and array[j]
            i := i + 1
            j := j − 1
            while array[i] < t do
                i := i + 1
            while array[j] > t do
                j := j − 1
        if array[left] = t then
            swap array[left] and array[j]
        else
            j := j + 1
            swap array[j] and array[right]
        // Adjust left and right towards the boundaries of the subset
        // containing the (k − left + 1)th smallest element.
        if j ≤ k then
            left := j + 1
        if k ≤ j then
            right := j − 1
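For readers who prefer runnable code, the following is a direct Python translation of the pseudocode above (illustrative; the integer truncation of newLeft and newRight and the explicit sign computation are our own adaptations):

import math

def floyd_rivest_select(array, left, right, k):
    # after the call, array[k] holds the (k - left + 1)th smallest value of array[left..right]
    while right > left:
        if right - left > 600:
            n = right - left + 1
            i = k - left + 1
            z = math.log(n)
            s = 0.5 * math.exp(2 * z / 3)
            sign = (i > n / 2) - (i < n / 2)
            sd = 0.5 * math.sqrt(z * s * (n - s) / n) * sign
            new_left = max(left, int(k - i * s / n + sd))
            new_right = min(right, int(k + (n - i) * s / n + sd))
            floyd_rivest_select(array, new_left, new_right, k)
        # partition the elements between left and right around t
        t = array[k]
        i, j = left, right
        array[left], array[k] = array[k], array[left]
        if array[right] > t:
            array[right], array[left] = array[left], array[right]
        while i < j:
            array[i], array[j] = array[j], array[i]
            i += 1
            j -= 1
            while array[i] < t:
                i += 1
            while array[j] > t:
                j -= 1
        if array[left] == t:
            array[left], array[j] = array[j], array[left]
        else:
            j += 1
            array[j], array[right] = array[right], array[j]
        # adjust left and right towards the boundaries of the subset
        # containing the (k - left + 1)th smallest element
        if j <= k:
            left = j + 1
        if k <= j:
            right = j - 1

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
floyd_rivest_select(data, 0, len(data) - 1, 4)
assert data[4] == 5   # the median of 1..9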
== See also ==
Quickselect
Introselect
Median of medians
== References == | Wikipedia/Floyd–Rivest_algorithm |
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms, the adversary is the same as the adaptive offline adversary. For randomized online algorithms competitiveness can depend upon the adversary model used.
== Common adversaries ==
The three common adversaries are the oblivious adversary, the adaptive online adversary, and the adaptive offline adversary.
The oblivious adversary is sometimes referred to as the weak adversary. This adversary knows the algorithm's code, but does not get to know the randomized results of the algorithm.
The adaptive online adversary is sometimes called the medium adversary. This adversary must make its own decision before it is allowed to know the decision of the algorithm.
The adaptive offline adversary is sometimes called the strong adversary. This adversary knows everything, even the random number generator. This adversary is so strong that randomization does not help against it.
== Important results ==
From S. Ben-David, A. Borodin, R. Karp, G. Tardos, A. Wigderson we have:
If there is a randomized algorithm that is α-competitive against any adaptive offline adversary then there also exists an α-competitive deterministic algorithm.
If G is a c-competitive randomized algorithm against any adaptive online adversary, and there is a randomized d-competitive algorithm against any oblivious adversary, then G is a randomized (c * d)-competitive algorithm against any adaptive offline adversary.
== See also ==
Competitive analysis (online algorithm)
K-server problem
Online algorithm
== References ==
Borodin, A.; El-Yaniv, R. (1998). Online Computation and Competitive Analysis. Cambridge University Press. ISBN 978-0-521-56392-5.
S. Ben-David; A. Borodin; R. Karp; G. Tardos; A. Wigderson. (1994). "On the Power of Randomization in On-line Algorithms" (PDF). Algorithmica. 11: 2–14. doi:10.1007/BF01294260.
== External links ==
Bibliography of papers on online algorithms | Wikipedia/Adversary_model |
Elsevier ( EL-sə-veer) is a Dutch academic publishing company specializing in scientific, technical, and medical content. Its products include journals such as The Lancet, Cell, the ScienceDirect collection of electronic journals, Trends, the Current Opinion series, the online citation database Scopus, the SciVal tool for measuring research performance, the ClinicalKey search engine for clinicians, and the ClinicalPath evidence-based cancer care service. Elsevier's products and services include digital tools for data management, instruction, research analytics, and assessment. Elsevier is part of the RELX Group, known until 2015 as Reed Elsevier, a publicly traded company. According to RELX reports, in 2022 Elsevier published more than 600,000 articles annually in over 2,800 journals. As of 2018, its archives contained over 17 million documents and 40,000 e-books, with over one billion annual downloads.
Researchers have criticized Elsevier for its high profit margins and copyright practices. The company had a reported profit before tax of £2.295 billion with an adjusted operating margin of 33.1% in 2023. Much of the research that Elsevier publishes is publicly funded; its high costs have led to accusations of rent-seeking, boycotts against them, and the rise of alternate avenues for publication and access, such as preprint servers and shadow libraries.
== History ==
Elsevier was founded in 1880 and adopted the name and logo from the Dutch publishing house Elzevir that was an inspiration but has no connection to the contemporary Elsevier. The Elzevir family operated as booksellers and publishers in the Netherlands; the founder, Lodewijk Elzevir (1542–1617), lived in Leiden and established that business in 1580. As a company logo, Elsevier used the Elzevir family's printer's mark, a tree entwined with a vine and the words Non Solus, which is Latin for "not alone". According to Elsevier, this logo represents "the symbiotic relationship between publisher and scholar".
The expansion of Elsevier in the scientific field after 1945 was funded with the profits of the newsweekly Elsevier, which published its first issue on 27 October 1945. The weekly was an instant success and very profitable. The weekly was a continuation, as is stated in its first issue, of the monthly Elsevier, which was founded in 1891 to promote the name of the publishing house and had to stop publication in December 1940 because of the German occupation of the Netherlands.
In May 1939, Elsevier's director J. P. Klautz established the Elsevier Publishing Company Ltd. in London to distribute its academic titles in the British Commonwealth (except Canada). When the Nazis occupied the Netherlands for the duration of five years from May 1940, he had just founded a second international office, the Elsevier Publishing Company Inc. in New York.
In 1947, Elsevier began publishing its first English-language journal, Biochimica et Biophysica Acta.
In 1970, Elsevier acquired competing firm North-Holland. In 1971 the firm acquired Excerpta Medica, a small medical abstract publisher based in Amsterdam. As the first and only company in the world that employed a database for the production of journals, it introduced computer technology to Elsevier. In 1978 Elsevier merged with Dutch newspaper publisher NDU, and devised a strategy to broadcast textual news to people's television sets through Viewdata and Teletext technology.
In 1979 Elsevier Science Publishers launched the Article Delivery Over Network Information System (ADONIS) project in conjunction with four business partners. The project aimed to find a way to deliver scientific articles to libraries electronically, and would continue for over a decade. In 1991, in conjunction with nine American universities, Elsevier's The University Licensing Project (TULIP) was the first step in making published, copyrighted material available over the Internet. It formed the basis for ScienceDirect, launched six years later. In 1997, after almost two decades of experiments, ScienceDirect was launched as the first online repository of electronic (scientific) books and articles. Though librarians and researchers were initially hesitant regarding the new technology, more and more of them switched to e-only subscriptions.
In 2004 Elsevier launched Scopus, a multidisciplinary metadata database of scholarly publications, only the second of its kind (after the Web of Science, although the free Google Scholar was also launched in 2004). Scopus covers journals, some conference papers and books from various publishers, and measures performance on both author and publication levels. In 2009 SciVal Spotlight was released. This tool enabled research administrators to measure their institution's relative standing in terms of productivity, grants, and publications.
In 2013, Elsevier acquired Mendeley, a UK company making software for managing and sharing research papers. Mendeley, previously an open platform for sharing of research, was greatly criticized for the sale, which users saw as acceding to the "paywall" approach to research literature. Mendeley's previously open-sharing system now allows exchange of paywalled resources only within private groups. The New Yorker described Elsevier's reasons for buying Mendeley as two-fold: to acquire its user data, and to "destroy or coöpt an open-science icon that threatens its business model".
== Company statistics ==
During 2018, researchers submitted over 1.8 million research papers to Elsevier-based publications. Over 20,000 editors managed the peer review and selection of these papers, resulting in the publication of more than 470,000 articles in over 2,500 journals. Editors are generally unpaid volunteers who perform their duties alongside a full-time job in academic institutions, although exceptions have been reported. In 2013, the five editorial groups Elsevier, Springer, Wiley-Blackwell, Taylor & Francis, and SAGE Publications published more than half of all academic papers in the peer-reviewed literature. At that time, Elsevier accounted for 16% of the world market in science, technology, and medical publishing. In 2019, Elsevier accounted for the review, editing and dissemination of 18% of the world's scientific articles. About 45% of revenue by geography in 2019 derived from North America, 24% from Europe, and the remaining 31% from the rest of the world. Around 84% of revenue by format came from electronic usage and 16% came from print.
The firm employs 8,100 people. The CEO is Kumsal Bayazit, who was appointed on 15 February 2019. In 2018, it reported a mean 2017 gender pay gap of 29.1% for its UK workforce, while the median was 40.4%, the highest yet reported by a publisher in UK. Elsevier attributed the result to the under-representation of women in its senior ranks and the prevalence of men in its technical workforce. The UK workforce consists of 1,200 people in the UK, and represents 16% of Elsevier's global employee population. Elsevier's parent company, RELX, has a global workforce that is 51% female to 49% male, with 43% female and 57% male managers, and 29% female and 71% male senior operational managers.
In 2018, Elsevier accounted for 34% of the revenues of RELX group (£2.538 billion of £7.492 billion). In operating profits, it represented 40% (£942 million of £2,346 million). Adjusted operating profits (with constant currency) rose by 2% from 2017 to 2018. Profits grew further from 2018 to 2019, to a total of £982 million. In the first half of 2019, RELX reported the first slowdown in revenue growth for Elsevier in several years: 1% vs. an expectation of 2% and a typical growth of at least 4% in the previous 5 years. Overall for 2019, Elsevier reported revenue growth of 3.9% from 2018, with the underlying growth at constant currency at 2%. In 2019, Elsevier accounted for 34% of the revenues of RELX (£2.637 billion of £7.874 billion). In adjusted operating profits, it represented 39% (£982 million of £2.491 billion). Adjusted operating profits (with constant currency) rose by 2% from 2018 to 2019.
In 2019, researchers submitted over two million research papers to Elsevier-based publications. Over 22,000 editors managed the peer review and selection of these papers, resulting in the publication of about 500,000 articles in over 2,500 journals.
In 2020 Elsevier was the largest academic publisher, with approximately 16% of the academic publishing market and more than 3000 journals.
== Market model ==
=== Products and services ===
Products and services include electronic and print versions of journals, textbooks and reference works, and cover the health, life, physical, and social sciences.
The target markets are academic and government research institutions, corporate research labs, booksellers, librarians, scientific researchers, authors, editors, physicians, nurses, allied health professionals, medical and nursing students and schools, medical researchers, pharmaceutical companies, hospitals, and research establishments. It publishes in 13 languages including English, German, French, Spanish, Italian, Portuguese, Polish, Japanese, Hindi, and Chinese.
Flagship products and services include VirtualE, ScienceDirect, Scopus, Scirus, EMBASE, Engineering Village, Compendex, Cell, Knovel, SciVal, Pure, and Analytical Services, The Consult series (FirstCONSULT, PathCONSULT, NursingCONSULT, MDConsult, StudentCONSULT), Virtual Clinical Excursions, and major reference works such as Gray's Anatomy, Nelson Pediatrics, Dorland's Illustrated Medical Dictionary, Netter's Atlas of Human Anatomy, and online versions of many journals including The Lancet.
ScienceDirect is Elsevier's platform for online electronic access to its journals and over 40,000 e-books, reference works, book series, and handbooks. The articles are grouped in four main sections: Physical Sciences and Engineering, Life Sciences, Health Sciences, and Social Sciences and Humanities. For most articles on the website, abstracts are freely available; access to the full text of the article (in PDF, and also HTML for newer publications) often requires a subscription or pay-per-view purchase.
In 2019, Elsevier published 49,000 free open access articles and 370 full open access journals. Moreover, 1,900 of its journals sold hybrid open access options.
=== Pricing ===
The subscription rates charged by the company for its journals have been criticized; some very large journals (with more than 5,000 articles) charge subscription prices as high as £9,634, far above average, and many British universities pay more than a million pounds to Elsevier annually. The company has been criticized not only by advocates of a switch to the open-access publication model, but also by universities whose library budgets make it difficult for them to afford current journal prices.
For example, in 2004, a resolution by Stanford University's senate singled out Elsevier's journals as being "disproportionately expensive compared to their educational and research value", which librarians should consider dropping, and encouraged its faculty "not to contribute articles or editorial or review efforts to publishers and journals that engage in exploitive or exorbitant pricing". Similar guidelines and criticism of Elsevier's pricing policies have been passed by the University of California, Harvard University, and Duke University.
In July 2015, the Association of Universities in the Netherlands threatened to boycott Elsevier, which refused to negotiate on any open access policy for Dutch universities. After a year of negotiation, Elsevier pledged to make 30% of research published by Dutch researchers in Elsevier journals open access by 2018.
In October 2018, a complaint against Elsevier was filed with the European Commission, alleging anticompetitive practices stemming from Elsevier's confidential subscription agreements and market dominance. The European Commission decided not to investigate.
The elevated pricing of field journals in economics, most of which are published by Elsevier, was one of the motivations that moved the American Economic Association to launch the American Economic Journal in 2009.
=== Mergers and acquisitions ===
RELX Group has been active in mergers and acquisitions. Elsevier has incorporated other businesses that were either complementing or competing in the field of research and publishing and that reinforce its market power, such as Mendeley (after the closure of 2collab), SSRN, bepress/Digital Commons, PlumX, Hivebench, Newsflo, Science-Metrix, and Interfolio.
=== Conferences ===
Elsevier also conducts conferences, exhibitions, and workshops around the world, with over 50 conferences a year covering life sciences, physical sciences and engineering, social sciences, and health sciences.
=== Shill review offer ===
According to the BBC, in 2009, the firm [Elsevier] offered a £17.25 Amazon voucher to academics who contributed to the textbook Clinical Psychology if they would go on Amazon.com and Barnes & Noble (a large US books retailer) and give it five stars. Elsevier responded by stating "Encouraging interested parties to post book reviews isn't outside the norm in scholarly publishing, nor is it wrong to offer to nominally compensate people for their time. But in all instances the request should be unbiased, with no incentives for a positive review, and that's where this particular e-mail went too far", and that it was a mistake by a marketing employee.
=== Blocking text mining research ===
Elsevier seeks to regulate text and data mining with private licenses, claiming that reading requires extra permission if automated and that the publisher holds copyright on output of automated processes. The conflict on research and copyright policy has often resulted in researchers being blocked from their work. In November 2015, Elsevier blocked a scientist from performing text mining research at scale on Elsevier papers, even though his institution already pays for access to Elsevier journal content. The data was collected using the R package "statcheck".
=== Fossil fuel company consulting and advocacy ===
Elsevier is one of the most prolific publishers of books aimed at expanding the production of fossil fuels. Since at least 2010 the company has worked with the fossil fuel industry to optimise fossil fuel extraction. It commissions authors, journal advisory board members and editors who are employees of the largest oil firms. In addition it markets data services and research portals directly to the fossil fuel industry to help "increase the odds of exploration success".
== Relationship with academic institutions ==
=== Finland ===
In 2015, Finnish research organizations paid a total of 27 million euros in subscription fees. Over one-third of the total costs went to Elsevier. The information was revealed after a successful court appeal, following a request for the subscription fee data that had initially been denied due to confidentiality clauses in contracts with the publishers. Establishing this fact led to the creation of the tiedonhinta.fi petition, demanding more reasonable pricing and open access to content, which was signed by more than 2,800 members of the research community. While deals were reached with other publishers, this was not the case for Elsevier, leading to the nodealnoreview.org boycott of the publisher, signed more than 600 times.
In January 2018, it was confirmed that a deal had been reached between those concerned.
=== France ===
The French Couperin consortium agreed in 2019 to a 4-year contract with Elsevier, despite criticism from the scientific community.
The French École Normale Supérieure has stopped having Elsevier publish the journal Annales Scientifiques de l'École Normale Supérieure (as of 2008).
Effective on 1 January 2020, the French Academy of Sciences stopped publishing its 7 journals Comptes rendus de l'Académie des Sciences with Elsevier and switched to Centre Mersenne.
=== Germany ===
Since 2018 and as of 2023, almost no academic institution in Germany is subscribed to Elsevier.
Germany's DEAL project (Projekt DEAL), which includes over 60 major research institutions, has announced that all of its members are cancelling their contracts with Elsevier, effective 1 January 2017. The boycott is in response to Elsevier's refusal to adopt "transparent business models" to "make publications more openly accessible". Horst Hippler, spokesperson for the DEAL consortium, stated that "taxpayers have a right to read what they are paying for" and that "publishers must understand that the route to open-access publishing at an affordable price is irreversible". In July 2017, another 13 institutions announced that they would also be cancelling their subscriptions to Elsevier journals. By August 2017, at least 185 German institutions had cancelled their contracts with Elsevier. In 2018, whilst negotiations were ongoing, around 200 German universities that had cancelled their subscriptions to Elsevier journals were granted complimentary access to them until this arrangement ended in July of that year.
On 19 December 2018, the Max Planck Society (MPS) announced that the existing subscription agreement with Elsevier would not be renewed after the expiration date of 31 December 2018. MPS counts 14,000 scientists in 84 research institutes, publishing 12,000 articles each year.
In 2023 Elsevier and DEAL reached a tentative agreement on a publish and read model, which would take effect until 2028 if at least 70% of the eligible institutions opt into it.
=== Hungary ===
In March 2018, the Hungarian Electronic Information Service National Programme entered negotiations on its 2019 Elsevier subscriptions, asking for a read-and-publish deal. Negotiations were ended by the Hungarian consortium in December 2018, and the subscription was not renewed.
=== Iran ===
In 2013, Elsevier changed its policies in response to sanctions announced by the US Office of Foreign Assets Control that year. This included a request that all Elsevier journals avoid publishing papers by Iranian nationals who are employed by the Iranian government. Elsevier executive Mark Seeley expressed regret on behalf of the company, but did not announce an intention to challenge this interpretation of the law.
=== Italy ===
CRUI (an association of Italian universities) sealed a 5-year-long deal for 2018–2022, despite protests from the scientific community focused on aspects such as the failure to prevent cost increases through double dipping.
=== Netherlands ===
In 2015, a consortium of all of Netherlands' 14 universities threatened to boycott Elsevier if it could not agree that articles by Dutch authors would be made open access and settled with the compromise of 30% of its Dutch papers becoming open access by 2018. Gerard Meijer, president of Radboud University in Nijmegen and lead negotiator on the Dutch side noted, "it's not the 100% that I hoped for".
=== Norway ===
In March 2019, the Norwegian government on behalf of 44 institutions — universities, university colleges, research institutes, and hospitals — decided to break negotiations on renewal of their subscription deal with Elsevier, because of disagreement regarding open-access policy and Elsevier's unwillingness to reduce the cost of reading access.
=== South Korea ===
In 2017, over 70 university libraries confirmed a "contract boycott" movement involving three publishers including Elsevier. As of January 2018, whilst negotiations remained underway, a decision was still to be made as to whether or not the participating libraries would continue the boycott. It was subsequently confirmed that an agreement had been reached.
=== Sweden ===
In May 2018, the Bibsam Consortium, which negotiates license agreements on behalf of all Swedish universities and research institutes, decided not to renew their contract with Elsevier, alleging that the publisher does not meet the demands of transition towards a more open-access model, and referring to the rapidly increasing costs for publishing. Swedish universities will still have access to articles published before 30 June 2018. Astrid Söderbergh Widding, chairman of the Bibsam Consortium, said, "the current system for scholarly communication must change and our only option is to cancel deals when they don't meet our demands for a sustainable transition to open access". Sweden has a goal of open access by 2026. In November 2019 the negotiations concluded, with Sweden paying for reading access to Elsevier journals and open access publishing for all its researchers' articles.
=== Taiwan ===
In Taiwan, more than 75% of universities, including the country's top 11 institutions, have joined a collective boycott against Elsevier. On 7 December 2016, the Taiwanese consortium, CONCERT, which represents more than 140 institutions, announced it would not renew its contract with Elsevier.
=== United States ===
In March 2018, Florida State University's faculty elected to cancel its $2 million subscription to a bundle of several journals. Starting in 2019, it will instead buy access to titles à la carte.
In February 2019, the University of California said it would terminate subscriptions "in [a] push for open access to publicly funded research". After months of negotiations over open access to research by UC researchers and prices for subscriptions to Elsevier journals, a press release by the UC Office of the President issued Thursday, 28 February 2019 stated "Under Elsevier's proposed terms, the publisher would have charged UC authors large publishing fees on top of the university's multimillion dollar subscription, resulting in much greater cost to the university and much higher profits for Elsevier." On 10 July 2019, Elsevier began restricting access to all new paywalled articles and approximately 5% of paywalled articles published before 2019.
In April 2020, the University of North Carolina elected not to renew its bundled Elsevier package, citing a failure "to provide an affordable path". Rather than extend the license, which was stated to cost $2.6 million annually, the university decided to continue subscribing to a smaller set of individual journals. The State University of New York Libraries Consortium also announced a similar outcome, with the help of estimates from Unpaywall Journals. Similarly, MIT announced in June 2020 that it would no longer pay for access to new Elsevier articles.
In 2022 Elsevier and the University of Michigan established an agreement to support authors who wish to publish open access.
=== Ukraine ===
In June 2020 the Ukrainian government cancelled subscriptions for all universities in the country after failed negotiations. The Ministry of Education claimed that Elsevier indexes journals in its register that call themselves Russian but are from "occupied territories".
== Criticism of academic practices ==
=== Lacking dissemination of its research ===
==== Lobbying efforts against open access ====
Elsevier have been known to be involved in lobbying against open access. These have included the likes of:
The Federal Research Public Access Act (FRPAA)
The Research Works Act
PRISM. In the case of PRISM, the Association of American Publishers hired Eric Dezenhall, the so-called "Pit Bull Of Public Relations"
Horizon 2020
Office of Science and Technology Policy (OSTP)
The European Union's Open Science Monitor was criticised after Elsevier were confirmed as a subcontractor
UK Research and Innovation.
===== Selling open-access articles =====
In 2014, 2015, 2016, and 2017, Elsevier was found to be selling some articles that should have been open access, but had been put behind a paywall. A related case occurred in 2015, when Elsevier charged for downloading an open-access article from a journal published by John Wiley & Sons. However, whether Elsevier was in violation of the license under which the article was made available on their website was not clear.
===== Action against academics posting their own articles online =====
In 2013, Digimarc, a company representing Elsevier, told the University of Calgary to remove articles published by faculty authors on university web pages; although such self-archiving of academic articles may be legal under the fair dealing provisions in Canadian copyright law, the university complied. Harvard University and the University of California, Irvine also received takedown notices for self-archived academic articles, a first for Harvard, according to Peter Suber.
Months after its acquisition of Academia.edu rival Mendeley, Elsevier sent thousands of takedown notices to Academia.edu, a practice that has since ceased following widespread complaint by academics, according to Academia.edu founder and chief executive Richard Price.
After Elsevier acquired the repository SSRN in May 2016, academics started complaining that some of their work has been removed without notice. The action was explained as a technical error.
===== Sci-Hub and LibGen lawsuit =====
In 2015, Elsevier filed a lawsuit against the sites Sci-Hub and LibGen, which make copyright-protected articles available for free. Elsevier also claimed illegal access to institutional accounts.
===== Initial rejection of the Initiative for Open Citations =====
Among the major academic publishers, Elsevier alone declined to join the Initiative for Open Citations. In the context of the resignation of the Journal of Informetrics' editorial board, the firm stated: "Elsevier invests significantly in citation extraction technology. While these are made available to those who wish to license this data, Elsevier cannot make such a large corpus of data, to which it has added significant value, available for free."
Elsevier finally joined the initiative in January 2021 after the data was already available with an Open Data Commons license in Microsoft Academic.
===== ResearchGate take down =====
A chamber of the Munich Regional Court has ruled that the research networking site ResearchGate must take down articles uploaded without the consent of their original publishers, including articles published by Elsevier. A case was brought forward in 2017 by the Coalition for Responsible Sharing, a group of publishers that includes Elsevier and the American Chemical Society.
===== Resignation of editorial boards =====
The editorial boards of a number of journals have resigned because of disputes with Elsevier over pricing:
In 1999, the entire editorial board of the Journal of Logic Programming resigned after 16 months of unsuccessful negotiations with Elsevier about the price of library subscriptions. The personnel created a new journal, Theory and Practice of Logic Programming, with Cambridge University Press at a much lower price, while Elsevier continued publication with a new editorial board and a slightly different name (the Journal of Logic and Algebraic Programming).
In 2002, dissatisfaction at Elsevier's pricing policies caused the European Economic Association to terminate an agreement with Elsevier designating Elsevier's European Economic Review as the official journal of the association. The EEA launched a new journal, the Journal of the European Economic Association.
In 2003, the entire editorial board of the Journal of Algorithms resigned to start ACM Transactions on Algorithms with a different, lower-priced, not-for-profit publisher, at the suggestion of Journal of Algorithms founder Donald Knuth. The Journal of Algorithms continued under Elsevier with a new editorial board until October 2009, when it was discontinued.
In 2005, the editors of the International Journal of Solids and Structures resigned to start the Journal of Mechanics of Materials and Structures. However, a new editorial board was quickly established and the journal continues in apparently unaltered form.
In 2006, the entire editorial board of the distinguished mathematical journal Topology resigned because of stalled negotiations with Elsevier to lower the subscription price. This board then launched the new Journal of Topology at a far lower price, under the auspices of the London Mathematical Society. Topology then remained in circulation under a new editorial board until 2009.
In 2023, the editorial board of the open access journal NeuroImage resigned and started a new journal, because of Elsevier's unwillingness to reduce article-processing charges. The editors called Elsevier's $3,450 per article processing charge "unethical and unsustainable".
Editorial boards have also resigned over open access policies or other issues:
In 2015, Stephen Leeder was removed from his role as editor of the Medical Journal of Australia when its publisher decided to outsource the journal's production to Elsevier. As a consequence, all but one of the journal's editorial advisory committee members co-signed a letter of resignation.
In 2015, the entire editorial staff of the general linguistics journal Lingua resigned in protest of Elsevier's unwillingness to agree to their terms of Fair Open Access. Editor-in-chief Johan Rooryck also announced that the Lingua staff would establish a new journal, Glossa.
In 2019, the entire editorial board of Elsevier's Journal of Informetrics resigned over the open-access policies of its publisher and founded open-access journal called Quantitative Science Studies.
In 2020, Elsevier effectively severed the tie between the Journal of Asian Economics and the academic society that founded it, the American Committee on Asian Economic Studies (ACAES), by offering the ACAES-appointed editor, Calla Wiemer, a terminal contract for 2020. As a result, a majority of the editorial board eventually resigned.
In 2023, the editorial board of the journal Design Studies resigned over: 1) Elsevier's plans to increase publications seven-fold; 2) the appointment of an external Editor-in-Chief who had not previously published in the journal; and 3) a change to the scope of the journal made without consulting the editorial team or the journal's parent society.
In December 2024, the editorial board of Journal of Human Evolution, including emeritus editors and all but one associate editor, resigned, citing actions by Elsevier that they said "are fundamentally incompatible with the ethos of the journal and preclude maintaining the quality and integrity fundamental to JHE's success". In addition to pricing, specific complaints also included interference in the editorial board, lack of necessary support from the company, and the disruptive use of generative artificial intelligence by the company to alter submissions without informing editors or contributors.
===== "The Cost of Knowledge" boycott =====
In 2003, various university librarians began coordinating with each other to complain about Elsevier's "big deal" journal bundling packages, in which the company offered a group of journal subscriptions to libraries at a certain rate, but in which librarians claimed no economical option was available to subscribe to only the popular journals at a rate comparable to the bundled rate. Librarians continued to discuss the implications of the pricing schemes, many feeling pressured into buying the Elsevier packages without other options.
On 21 January 2012, mathematician Timothy Gowers publicly announced he would boycott Elsevier, noting that others in the field have been doing so privately. The reasons for the boycott are high subscription prices for individual journals, bundling subscriptions to journals of different value and importance, and Elsevier's support for SOPA, PIPA, and the Research Works Act, which would have prohibited open-access mandates for U.S. federally-funded research and severely restricted the sharing of scientific data.
Following this, a petition advocating noncooperation with Elsevier (that is, not submitting papers to Elsevier journals, not refereeing articles in Elsevier journals, and not participating in journal editorial boards), appeared on the site "The Cost of Knowledge". By February 2012, this petition had been signed by over 5,000 academics, growing to over 17,000 by November 2018. The firm disputed the claims, claiming that their prices are below the industry average, and stating that bundling is only one of several different options available to buy access to Elsevier journals. The company also claimed that its profit margins are "simply a consequence of the firm's efficient operation". The academics replied that their work was funded by public money, thus should be freely available.
On 27 February 2012, Elsevier issued a statement on its website that declared that it has withdrawn support from the Research Works Act. Although the Cost of Knowledge movement was not mentioned, the statement indicated the hope that the move would "help create a less heated and more productive climate" for ongoing discussions with research funders. Hours after Elsevier's statement, the sponsors of the bill, US House Representatives Darrell Issa and Carolyn Maloney, issued a joint statement saying that they would not push the bill in Congress.
===== Plan S open-access initiative =====
The Plan S open-access initiative, which began in Europe and has since spread to some US research funding agencies, would require researchers receiving some grants to publish in open-access journals by 2020. A spokesman for Elsevier said "If you think that information should be free of charge, go to Wikipedia". In September 2018, UBS advised to sell Elsevier (RELX) stocks, noting that Plan S could affect 5-10% of scientific funding and may force Elsevier to reduce pricing.
=== "Who's Afraid of Peer Review" ===
In 2013, one of Elsevier's journals was caught in the sting set up by John Bohannon, published in Science, called "Who's Afraid of Peer Review?" The journal Drug Invention Today accepted an obviously bogus paper made up by Bohannon that should have been rejected by any good peer-review system. Instead, Drug Invention Today was among many open-access journals that accepted the fake paper for publication. As of 2014, this journal had been transferred to a different publisher.
=== Fake journals ===
At a 2009 court case in Australia where Merck & Co. was being sued by a user of Vioxx, the plaintiff alleged that Merck had paid Elsevier to publish the Australasian Journal of Bone and Joint Medicine, which had the appearance of being a peer-reviewed academic journal but in fact contained only articles favourable to Merck drugs. Merck described the journal as a "complimentary publication", denied claims that articles within it were ghost written by Merck, and stated that the articles were all reprinted from peer-reviewed medical journals. In May 2009, Elsevier Health Sciences CEO Hansen released a statement regarding Australia-based sponsored journals, conceding that they were "sponsored article compilation publications, on behalf of pharmaceutical clients, that were made to look like journals and lacked the proper disclosures". The statement acknowledged that it "was an unacceptable practice". The Scientist reported that, according to an Elsevier spokesperson, six sponsored publications "were put out by their Australia office and bore the Excerpta Medica imprint from 2000 to 2005", namely the Australasian Journal of Bone and Joint Medicine (Australas. J. Bone Joint Med.), the Australasian Journal of General Practice (Australas. J. Gen. Pract.), the Australasian Journal of Neurology (Australas. J. Neurol.), the Australasian Journal of Cardiology (Australas. J. Cardiol.), the Australasian Journal of Clinical Pharmacy (Australas. J. Clin. Pharm.), and the Australasian Journal of Cardiovascular Medicine (Australas. J. Cardiovasc. Med.). Excerpta Medica was a "strategic medical communications agency" run by Elsevier, according to the imprint's web page. In October 2010, Excerpta Medica was acquired by Adelphi Worldwide.
==== Chaos, Solitons & Fractals ====
There was speculation that the editor-in-chief of Elsevier journal Chaos, Solitons & Fractals, Mohamed El Naschie, misused his power to publish his own work without appropriate peer review. The journal had published 322 papers with El Naschie as author since 1993. The last issue of December 2008 featured five of his papers. The controversy was covered extensively in blogs. The publisher announced in January 2009 that El Naschie had retired as editor-in-chief. As of November 2011 the co-Editors-in-Chief of the journal were Maurice Courbage and Paolo Grigolini. In June 2011, El Naschie sued the journal Nature for libel, claiming that his reputation had been damaged by their November 2008 article about his retirement, which included statements that Nature had been unable to verify his claimed affiliations with certain international institutions. The suit came to trial in November 2011 and was dismissed in July 2012, with the judge ruling that the article was "substantially true", contained "honest comment", and was "the product of responsible journalism". The judgement noted that El Naschie, who represented himself in court, had failed to provide any documentary evidence that his papers had been peer-reviewed. Judge Victoria Sharp also found "reasonable and serious grounds" for suspecting that El Naschie used a range of false names to defend his editorial practice in communications with Nature, and described this behavior as "curious" and "bizarre".
=== Plagiarism ===
Elsevier's 'Duties of Authors' states that authors should ensure they have written entirely original works, and that proper acknowledgement of others' work must always be given. Elsevier claims plagiarism in all its forms constitutes unethical behaviour. Some Elsevier journals automatically screen submissions for plagiarism, but not all.
Albanian politician Taulant Muka claimed that the Elsevier journal Procedia had published plagiarized material in the abstract of one of its articles. It is unclear whether or not Muka had access to the entirety of the article.
=== Scientific racism ===
Angela Saini has criticized the two Elsevier journals Intelligence and Personality and Individual Differences for having included on their editorial boards such well-known proponents of scientific racism as Richard Lynn and Gerhard Meisenberg; in response to her inquiries, Elsevier defended their presence as editors. The journal Intelligence has been criticized for having "occasionally included papers with pseudoscientific findings about intelligence differences between races". It is the official journal of the International Society for Intelligence Research, which organizes the controversial series of conferences London Conference on Intelligence, described by the New Statesman as a forum for scientific racism.
In response to a 2019 open letter, efforts by Retraction Watch and a petition, on 17 June 2020 Elsevier announced it was retracting an article that J. Philippe Rushton and Donald Templer published in 2012 in the Elsevier journal Personality and Individual Differences. The article had claimed that there was scientific evidence that skin color was related to aggression and sexuality in humans.
=== Manipulation of bibliometrics ===
According to the signatories of the San Francisco Declaration on Research Assessment (see also Goodhart's law), commercial academic publishers benefit from manipulation of bibliometrics and scientometrics, such as the journal impact factor. The impact factor, which is often used as a proxy of prestige, can influence revenues, subscriptions, and academics' willingness to contribute unpaid work. However, there is evidence suggesting that the reliability of published research in several fields may decrease with increasing journal rank.
Nine Elsevier journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction that hit 34 journals in total.
In 2023, the International Journal of Hydrogen Energy, which is published by Elsevier, was criticized for desk-rejecting a submitted article for the main reason that it did not cite enough articles from the same journal.
One of its journals, the Journal of Analytical and Applied Pyrolysis, was involved in the manipulation of peer review reports.
=== Conflict of interest ===
Elsevier is a publisher of climate change research, but it has also partnered with the fossil fuel industry. Climate scientists are concerned that this conflict of interest could undermine the credibility of climate science, because they believe that fossil fuel extraction and climate action are incompatible.
== Antitrust lawsuit ==
In September 2024, Lucina Uddin, a neuroscience professor at UCLA, sued Elsevier along with five other academic journal publishers in a proposed class-action lawsuit, alleging that the publishers violated antitrust law by agreeing not to compete against each other for manuscripts and by denying scholars payment for peer review services.
== Awards ==
Elsevier has partnered with a number of organisations and lent its name to several awards.
Since 1987, Elsevier has partnered with the academic journal Spectrochimica Acta Part B to award the Elsevier / Spectrochimica Acta Atomic Spectroscopy Award. This award is given each year for a jury-selected best paper of the year. The award is worth $1000.
Starting in 1987, the IBMS Elsevier Award was awarded in 1992, 1995, 1998, 2001, 2003, 2005, 2007 by the International Bone and Mineral Society in partnership with Elsevier, "for outstanding research and teaching throughout their career by an IBMS member in the fields of bone and mineral metabolism".
From 2007, the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) in Brazil partnered with Elsevier to award the CAPES Elsevier Award, the award being restricted to women from 2013 to encourage more women to pursue scientific careers. Several awards were awarded each year, as of 2014.
From 2011, the OWSD-Elsevier Foundation Awards for Early-Career Women Scientists in the Developing World (OWSD-Elsevier Foundation Awards) have been awarded annually to early-career women scientists in selected developing countries in four regions: Latin America and the Caribbean, East and Southeast Asia and the Pacific, Central and South Asia, and Sub-Saharan Africa. The Organization for Women in Science for the Developing World (OWSD), the Elsevier Foundation, and The World Academy of Sciences first partnered to recognize achievements of early-career women scientists in developing countries in 2011.
In 2016, the Elsevier Foundation awarded the Elsevier Foundation-ISC3 Green and Sustainable Chemistry Challenge. From 2021 and as of 2024, the annual award is known as the Elsevier Foundation Chemistry for Climate Action Challenge. Two prizes have been awarded each year; until 2020, the first prizewinner was awarded €50,000, and the second prize was €25,000. Since then, €25,000 has been awarded to each winner, usually an entrepreneur who has created a project or proposal that aids the fight against climate change.
== Imprints ==
Elsevier uses its imprints (that is, brand names used in publishing) to market to different consumer segments. Many of the imprints have previously been the names of publishing companies that were purchased by Reed Elsevier.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
Official website
Campaign success: Reed Elsevier sells international arms fairs Archived 6 August 2018 at the Wayback Machine
Mary H. Munroe (2004). "Reed Elsevier Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 20 October 2014 – via Northern Illinois University.
In computer science, a state space is a discrete space representing the set of all possible configurations of a system. It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory.
For instance, the toy problem Vacuum World has a discrete finite state space in which there are a limited set of configurations that the vacuum and dirt can be in. A "counter" system, where states are the natural numbers starting at 1 and are incremented over time, has an infinite discrete state space. The angular position of an undamped pendulum is a continuous (and therefore infinite) state space.
== Definition ==
State spaces are useful in computer science as a simple model of machines. Formally, a state space can be defined as a tuple [N, A, S, G] where:
N is a set of states
A is a set of arcs connecting the states
S is a nonempty subset of N that contains start states
G is a nonempty subset of N that contains the goal states.
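As a concrete illustration, the sketch below encodes the four-tuple for a tiny two-square Vacuum World as plain Python data structures (the encoding of states and actions is an illustrative choice, not part of the formal definition):
from itertools import combinations

positions = ("left", "right")

# N: every state is (vacuum position, frozenset of dirty squares)
dirt_configs = [frozenset(c) for r in range(len(positions) + 1)
                for c in combinations(positions, r)]
N = {(pos, dirt) for pos in positions for dirt in dirt_configs}

def successors(state):
    pos, dirt = state
    yield ("left", dirt)         # move left
    yield ("right", dirt)        # move right
    yield (pos, dirt - {pos})    # suck up dirt in the current square

# A: arcs connecting states, induced by the three actions above
A = {(s, t) for s in N for t in successors(s) if t != s}

# S: start states (vacuum on the left, both squares dirty)
S = {("left", frozenset(positions))}

# G: goal states (no dirt anywhere, vacuum position irrelevant)
G = {s for s in N if not s[1]}

assert S <= N and G <= N and S and G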
== Properties ==
A state space has some common properties:
complexity, where branching factor is important
structure of the space, see also graph theory:
directionality of arcs
tree
rooted graph
For example, the Vacuum World has a branching factor of 4, as the vacuum cleaner can end up in 1 of 4 adjacent squares after moving (assuming it cannot stay in the same square nor move diagonally). The arcs of Vacuum World are bidirectional, since any square can be reached from any adjacent square, and the state space is not a tree since it is possible to enter a loop by moving between any 4 adjacent squares.
State spaces can be either infinite or finite, and discrete or continuous.
=== Size ===
The size of the state space for a given system is the number of possible configurations of the space.
==== Finite ====
If the size of the state space is finite, calculating the size of the state space is a combinatorial problem. For example, in the eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or
{\displaystyle {\binom {64}{8}}=4,426,165,368}
This is significantly greater than the number of legal configurations of the queens, 92. In many games the effective state space is small compared to all reachable/legal states. This property is also observed in chess, where the effective state space is the set of positions that can be reached by game-legal moves. This is far smaller than the set of positions that can be achieved by placing combinations of the available chess pieces directly on the board.
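This count can be reproduced directly; for example, with Python's standard library:
import math

assert math.comb(64, 8) == 4_426_165_368   # ways to choose 8 of the 64 squares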
==== Infinite ====
All continuous state spaces can be described by a corresponding continuous function and are therefore infinite. Discrete state spaces can also have (countably) infinite size, such as the state space of the time-dependent "counter" system, similar to the system in queueing theory defining the number of customers in a line, which would have state space {0, 1, 2, 3, ...}.
== Exploration ==
Exploring a state space is the process of enumerating possible states in search of a goal state. The state space of Pacman, for example, contains a goal state whenever all food pellets have been eaten, and is explored by moving Pacman around the board.
=== Search states ===
A search state is a compressed representation of a world state in a state space, and is used for exploration. Search states are used because a state space often encodes more information than is necessary to explore the space. Compressing each world state to only information needed for exploration improves efficiency by reducing the number of states in the search. For example, a state in the Pacman space includes information about the direction Pacman is facing (up, down, left, or right). Since it does not cost anything to change directions in Pacman, search states for Pacman would not include this information and reduce the size of the search space by a factor of 4, one for each direction Pacman could be facing.
=== Methods ===
Standard search algorithms are effective in exploring discrete state spaces. The following algorithms exhibit both completeness and optimality in searching a state space:
Breadth-first search
A* search
Uniform cost search
These methods do not extend naturally to exploring continuous state spaces. Exploring a continuous state space in search of a given goal state is equivalent to optimizing an arbitrary continuous function which is not always possible; see mathematical optimization.
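As an illustration of how the discrete search algorithms listed above traverse a state space, here is a minimal breadth-first search sketch over an explicit successor function (the successor function and goal test are assumed to come from a state-space definition like the one given earlier; this is an illustrative sketch, not a reference implementation):
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Return a path of states from start to a goal, or None if unreachable."""
    frontier = deque([start])
    parents = {start: None}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []
            while state is not None:       # walk back through the parent links
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        for nxt in successors(state):
            if nxt not in parents:         # visit each state at most once
                parents[nxt] = state
                frontier.append(nxt)
    return None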
== See also ==
== References ==
In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. These algorithms are designed to operate with limited memory, generally logarithmic in the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item.
As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream.
== History ==
Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as Philippe Flajolet and G. Nigel Martin in 1982/83, the field of streaming algorithms was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy. For this paper, the authors later won the Gödel Prize in 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing.
Semi-streaming algorithms were introduced in 2005 as a relaxation of streaming algorithms for graphs, in which the space allowed is linear in the number of vertices n, but only logarithmic in the number of edges m. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble in {\displaystyle o(n)} space.
== Models ==
=== Data stream model ===
In the data stream model, some or all of the input is represented as a finite sequence of integers (from some finite domain) which is generally not available for random access, but instead arrives one at a time in a "stream". If the stream has length n and the domain has size m, algorithms are generally constrained to use space that is logarithmic in m and n. They can generally make only some small constant number of passes over the stream, sometimes just one.
=== Turnstile and cash register models ===
Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector {\displaystyle \mathbf {a} =(a_{1},\dots ,a_{n})} (initialized to the zero vector {\displaystyle \mathbf {0} }) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of {\displaystyle \mathbf {a} } using considerably less space than it would take to represent {\displaystyle \mathbf {a} } precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models.
In the cash register model, each update is of the form {\displaystyle \langle i,c\rangle }, so that {\displaystyle a_{i}} is incremented by some positive integer {\displaystyle c}. A notable special case is when {\displaystyle c=1} (only unit insertions are permitted).
In the turnstile model, each update is of the form {\displaystyle \langle i,c\rangle }, so that {\displaystyle a_{i}} is incremented by some (possibly negative) integer {\displaystyle c}. In the "strict turnstile" model, no {\displaystyle a_{i}} at any time may be less than zero.
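The difference between the two update models can be made concrete with a short sketch (the function and variable names are illustrative; a real streaming algorithm would summarise the vector rather than store it in full):
def apply_update(a, i, c, model="turnstile"):
    """Apply the stream update <i, c> to the frequency vector a."""
    if model == "cash register" and c <= 0:
        raise ValueError("the cash register model only allows positive increments")
    a[i] += c
    if model == "strict turnstile" and a[i] < 0:
        raise ValueError("the strict turnstile model forbids negative entries")
    return a

a = [0] * 5
apply_update(a, 2, 3, model="cash register")      # a becomes [0, 0, 3, 0, 0]
apply_update(a, 2, -1, model="strict turnstile")  # a becomes [0, 0, 2, 0, 0]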
=== Sliding window model ===
Several papers also consider the "sliding window" model. In this model, the function of interest is computed over a fixed-size window in the stream. As the stream progresses, items from the end of the window are removed from consideration while new items from the stream take their place.
Besides the above frequency-based problems, some other types of problems have also been studied. Many graph problems are solved in the setting where the adjacency matrix or the adjacency list of the graph is streamed in some unknown order. There are also some problems that are very dependent on the order of the stream (i.e., asymmetric functions), such as counting the number of inversions in a stream and finding the longest increasing subsequence.
== Evaluation ==
The performance of an algorithm that operates on data streams is measured by three basic factors:
The number of passes the algorithm must make over the stream.
The available memory.
The running time of the algorithm.
These algorithms have many similarities with online algorithms since they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives.
If the algorithm is an approximation algorithm then the accuracy of the answer is another key factor. The accuracy is often stated as an {\displaystyle (\epsilon ,\delta )} approximation, meaning that the algorithm achieves an error of less than {\displaystyle \epsilon } with probability {\displaystyle 1-\delta }.
== Applications ==
Streaming algorithms have several applications in networking, such as monitoring network links for elephant flows, counting the number of distinct flows, estimating the distribution of flow sizes, and so on. They also have applications in databases, such as estimating the size of a join.
== Some streaming problems ==
=== Frequency moments ===
The kth frequency moment of a set of frequencies {\displaystyle \mathbf {a} } is defined as {\displaystyle F_{k}(\mathbf {a} )=\sum _{i=1}^{n}a_{i}^{k}}.
The first moment {\displaystyle F_{1}} is simply the sum of the frequencies (i.e., the total count). The second moment {\displaystyle F_{2}} is useful for computing statistical properties of the data, such as the Gini coefficient of variation. {\displaystyle F_{\infty }} is defined as the frequency of the most frequent items.
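For a small stream the moments can be computed exactly, which is useful as a reference when checking the approximation algorithms below (illustrative only; the point of a streaming algorithm is precisely to avoid materialising all the counts):
from collections import Counter

def frequency_moment(stream, k):
    counts = Counter(stream)                  # a_i for every distinct item
    if k == float("inf"):
        return max(counts.values())           # F_infinity: the largest frequency
    return sum(a ** k for a in counts.values())

stream = "abracadabra"
print(frequency_moment(stream, 0))            # F0 = 5 distinct letters
print(frequency_moment(stream, 1))            # F1 = 11, the length of the stream
print(frequency_moment(stream, 2))            # F2 = 25 + 4 + 4 + 1 + 1 = 35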
The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments.
==== Calculating frequency moments ====
A direct approach to finding the frequency moments requires maintaining a register mi for all distinct elements ai ∈ (1,2,3,4,...,N), which requires at least memory of order {\displaystyle \Omega (N)}. Since the available space is limited, an algorithm is needed that computes the moments in much lower memory. This can be achieved by using approximations instead of exact values: an (ε,δ)-approximation algorithm outputs a value F'k that is within a factor (1 ± ε) of Fk with probability at least 1 − δ, where ε is the approximation parameter and δ is the confidence parameter.
===== Calculating F0 (distinct elements in a data stream) =====
====== FM-Sketch algorithm ======
Flajolet et al. introduced a probabilistic method of counting which was inspired by a paper by Robert Morris. Morris showed that if the requirement of exact accuracy is dropped, a counter n can be replaced by a counter log n, which can be stored in log log n bits. Flajolet et al. improved this method by using a hash function h which is assumed to distribute the elements uniformly in the hash space (a binary string of length L):
{\displaystyle h:[m]\rightarrow [0,2^{L}-1]}
Let bit(y,k) represent the kth bit in the binary representation of y:
{\displaystyle y=\sum _{k\geq 0}\mathrm {bit} (y,k)*2^{k}}
Let {\displaystyle \rho (y)} represent the position of the least significant 1-bit in the binary representation of y, with a suitable convention for {\displaystyle \rho (0)}:
{\displaystyle \rho (y)={\begin{cases}\mathrm {Min} (k:\mathrm {bit} (y,k)==1)&{\text{if }}y>0\\L&{\text{if }}y=0\end{cases}}}
Let A be the sequence of the data stream, of length M, whose cardinality needs to be determined. Let BITMAP[0...L − 1] be the hash space where the ρ(hashed values) are recorded. The algorithm below then determines the approximate cardinality of A.
Procedure FM-Sketch:
for i in 0 to L − 1 do
    BITMAP[i] := 0
end for
for x in A do
    index := ρ(hash(x))
    if BITMAP[index] = 0 then
        BITMAP[index] := 1
    end if
end for
B := position of left-most 0 bit of BITMAP[]
return 2 ^ B
If there are N distinct elements in the data stream, then:
For {\displaystyle i\gg \log(N)}, BITMAP[i] is certainly 0.
For {\displaystyle i\ll \log(N)}, BITMAP[i] is certainly 1.
For {\displaystyle i\approx \log(N)}, BITMAP[i] is a fringe of 0's and 1's.
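A minimal executable rendering of this procedure might look as follows (the hash function and the bit length L are illustrative choices rather than the ones used by Flajolet and Martin, and the classical correction factor of roughly 0.77 is omitted, as it is in the pseudocode above):
import hashlib

L = 32   # number of bits in the hash space

def h(x):
    """Hash an item to an L-bit integer (illustrative choice of hash function)."""
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def rho(y):
    """Position of the least significant 1-bit of y, or L if y == 0."""
    return (y & -y).bit_length() - 1 if y else L

def fm_sketch(stream):
    bitmap = [0] * (L + 1)      # extra slot guards against the (unlikely) all-zero hash
    for x in stream:
        bitmap[rho(h(x))] = 1
    b = bitmap.index(0)         # left-most 0 bit of BITMAP
    return 2 ** b               # rough estimate of the cardinality

print(fm_sketch(["a", "b", "c", "a", "b", "d"]))   # rough estimate of 4 distinct items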
====== K-minimum value algorithm ======
The previous algorithm describes the first attempt to approximate F0 in the data stream by Flajolet and Martin. Their algorithm picks a random hash function which they assume to uniformly distribute the hash values in hash space.
Bar-Yossef et al. introduced the k-minimum value algorithm for determining the number of distinct elements in a data stream. They used a similar hash function h, which can be normalized to [0,1] as {\displaystyle h:[m]\rightarrow [0,1]}. But they fixed a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order {\displaystyle O\left({\dfrac {1}{\varepsilon ^{2}}}\right)} (i.e. a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values seen in the hash space. After all the m values of the stream have arrived, the maximum of the kept hash values, {\displaystyle \upsilon =\mathrm {Max} (h(a_{i}))}, is used to calculate {\displaystyle F'_{0}={\dfrac {t}{\upsilon }}}. That is, in a close-to-uniform hash space, they expect at least t elements to be smaller than {\displaystyle O\left({\dfrac {t}{F_{0}}}\right)}.
Procedure 2: K-Minimum Value
Initialize first t values of KMV
for a in a1 to an do
if h(a) < Max(KMV) then
Remove Max(KMV) from KMV set
Insert h(a) to KMV
end if
end for
return t/Max(KMV)
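A direct Python rendering of the procedure, keeping the t smallest normalized hash values in a sorted list (the hash function is again an illustrative choice):
import hashlib

def h01(x):
    """Hash an item to a value in [0, 1) (illustrative choice of hash function)."""
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2 ** 64

def kmv_estimate(stream, t=256):
    kmv = []                          # the t smallest distinct hash values seen so far
    for x in stream:
        v = h01(x)
        if v in kmv:
            continue                  # a duplicate item never changes the set
        if len(kmv) < t:
            kmv.append(v)
            kmv.sort()
        elif v < kmv[-1]:
            kmv[-1] = v               # replace the current maximum
            kmv.sort()
    if len(kmv) < t:
        return len(kmv)               # fewer than t distinct items: count is exact
    return t / kmv[-1]                # F0' = t / Max(KMV)

print(round(kmv_estimate(range(10000))))   # typically within a few percent of 10000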
====== Complexity analysis of KMV ======
The KMV algorithm can be implemented in {\displaystyle O\left(\left({\dfrac {1}{\varepsilon ^{2}}}\right)\cdot \log(m)\right)} memory bits. Each hash value requires space of order {\displaystyle O(\log(m))} memory bits, and there are hash values of the order {\displaystyle O\left({\dfrac {1}{\varepsilon ^{2}}}\right)}. The access time can be reduced if the t hash values are stored in a binary tree, in which case the time complexity is reduced to {\displaystyle O\left(\log \left({\dfrac {1}{\varepsilon }}\right)\cdot \log(m)\right)}.
===== Calculating Fk =====
Alon et al. estimate Fk by defining random variables that can be computed within the given space and time. The expected value of these random variables gives the approximate value of Fk.
Assume the length of the sequence, m, is known in advance. Then construct a random variable X as follows:
Select ap, a random member of the sequence A at index p, with {\displaystyle a_{p}=l\in (1,2,3,\ldots ,n)}.
Let {\displaystyle r=|\{q:q\geq p,a_{q}=l\}|} represent the number of occurrences of l among the members of the sequence A from position p onwards.
Define the random variable {\displaystyle X=m(r^{k}-(r-1)^{k})}.
Assume S1 is of the order {\displaystyle O(n^{1-1/k}/\lambda ^{2})} and S2 is of the order {\displaystyle O(\log(1/\varepsilon ))}. The algorithm takes S2 random variables {\displaystyle Y_{1},Y_{2},...,Y_{S2}} and outputs the median {\displaystyle Y}, where Yi is the average of Xij with 1 ≤ j ≤ S1.
Now calculate the expected value of the random variable, E(X):
{\displaystyle {\begin{array}{lll}E(X)&=&\sum _{i=1}^{n}\sum _{j=1}^{m_{i}}(j^{k}-(j-1)^{k})\\&=&{\frac {m}{m}}[(1^{k}+(2^{k}-1^{k})+\ldots +(m_{1}^{k}-(m_{1}-1)^{k}))\\&&\;+\;(1^{k}+(2^{k}-1^{k})+\ldots +(m_{2}^{k}-(m_{2}-1)^{k}))+\ldots \\&&\;+\;(1^{k}+(2^{k}-1^{k})+\ldots +(m_{n}^{k}-(m_{n}-1)^{k}))]\\&=&\sum _{i=1}^{n}m_{i}^{k}=F_{k}\end{array}}}
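Since E(X) = Fk, the construction can be sketched in Python for orientation. For readability the stream is held in a list and the position p is drawn directly, whereas a genuine one-pass implementation would pick p by reservoir sampling; the parameters s1 and s2 below are illustrative stand-ins for S1 and S2:
import random

def ams_sample(stream_list, k):
    """One sample of the random variable X = m * (r^k - (r-1)^k)."""
    m = len(stream_list)
    p = random.randrange(m)                    # uniform position in the stream
    l = stream_list[p]
    r = sum(1 for q in range(p, m) if stream_list[q] == l)
    return m * (r ** k - (r - 1) ** k)

def ams_fk(stream_list, k, s1=200, s2=9):
    """Median of s2 means of s1 samples, following Alon, Matias and Szegedy."""
    means = [sum(ams_sample(stream_list, k) for _ in range(s1)) / s1
             for _ in range(s2)]
    return sorted(means)[s2 // 2]

stream = list("abracadabra") * 50
print(ams_fk(stream, 2))    # close to the exact F2 of this stream (87,500)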
====== Complexity of Fk ======
From the algorithm to calculate Fk discussed above, we can see that each random variable X stores the value of ap and r. So, to compute X we need to maintain only log(n) bits for storing ap and log(m) bits for storing r. The total number of random variables X is {\displaystyle S_{1}*S_{2}}.
Hence the total space complexity of the algorithm is of the order of {\displaystyle O\left({\dfrac {k\log {1 \over \varepsilon }}{\lambda ^{2}}}n^{1-{1 \over k}}\left(\log n+\log m\right)\right)}
====== Simpler approach to calculate F2 ======
The previous algorithm calculates {\displaystyle F_{2}} in order of {\displaystyle O({\sqrt {n}}(\log m+\log n))} memory bits. Alon et al. simplified this algorithm using a four-wise independent random variable with values mapped to {\displaystyle \{-1,1\}}. This further reduces the complexity of calculating {\displaystyle F_{2}} to {\displaystyle O\left({\dfrac {\log {1 \over \varepsilon }}{\lambda ^{2}}}\left(\log n+\log m\right)\right)}
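The structure of this estimator can be sketched as follows; for readability the four-wise independent hash family is replaced by signs that are drawn once per item and remembered, which weakens the formal space guarantee but keeps the shape of the algorithm (the function name and parameters are illustrative):
import random

def ams_f2(stream, s1=100, s2=9, seed=0):
    """Estimate F2 as the median of s2 means of s1 estimators (sum_i sign_j(i) * a_i)^2."""
    rng = random.Random(seed)
    counters = s1 * s2
    signs = [dict() for _ in range(counters)]   # lazily drawn +/-1 sign per item
    z = [0] * counters                          # running signed sums
    for x in stream:
        for j in range(counters):
            s = signs[j].setdefault(x, rng.choice((-1, 1)))
            z[j] += s
    means = [sum(v * v for v in z[g * s1:(g + 1) * s1]) / s1 for g in range(s2)]
    return sorted(means)[s2 // 2]

stream = list("abracadabra") * 50
print(ams_f2(stream))    # close to the exact F2 of this stream (87,500)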
=== Frequent elements ===
In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream.
More formally, fix some positive constant c > 1, let the length of the stream be m, and let fi denote the frequency of value i in the stream. The frequent elements problem is to output the set { i | fi > m/c }.
Some notable algorithms are:
Boyer–Moore majority vote algorithm
Count-Min sketch
Lossy counting
Multi-stage Bloom filters
Misra–Gries heavy hitters algorithm
Misra–Gries summary
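To give the flavour of these methods, a short Misra–Gries summary is sketched below; it keeps at most k − 1 counters and guarantees that every value with frequency above m/k survives, although the stored counts are underestimates:
def misra_gries(stream, k):
    """Misra-Gries summary: returns candidate items with frequency above m/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement every counter and drop the ones that reach zero
            counters = {y: c - 1 for y, c in counters.items() if c > 1}
    return counters

print(misra_gries("aabbbbcaabd", k=3))   # 'a' and 'b' survive as candidates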
=== Event detection ===
Detecting events in data streams is often done using a heavy hitters algorithm as listed above: the most frequent items and their frequency are determined using one of these algorithms, then the largest increase over the previous time point is reported as trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization.
=== Counting distinct elements ===
Counting the number of distinct elements in a stream (sometimes called the
F0 moment) is another problem that has been well studied.
The first algorithm for it was proposed by Flajolet and Martin. In 2010, Daniel Kane, Jelani Nelson and David Woodruff found an asymptotically optimal algorithm for this problem. It uses O(ε−2 + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)).
=== Entropy ===
The (empirical) entropy of a set of frequencies {\displaystyle \mathbf {a} } is defined as {\displaystyle F_{k}(\mathbf {a} )=\sum _{i=1}^{n}{\frac {a_{i}}{m}}\log {\frac {a_{i}}{m}}}, where {\displaystyle m=\sum _{i=1}^{n}a_{i}}.
=== Online learning ===
Learn a model (e.g. a classifier) by a single pass over a training set.
Feature hashing
Stochastic gradient descent
== Lower bounds ==
Lower bounds have been computed for many of the data streaming problems
that have been studied. By far, the most common technique for computing
these lower bounds has been using communication complexity.
== See also ==
Data stream mining
Data stream clustering
Online algorithm
Stream processing
Sequential algorithm
== Notes ==
== References ==
Alon, Noga; Matias, Yossi; Szegedy, Mario (1999), "The space complexity of approximating the frequency moments", Journal of Computer and System Sciences, 58 (1): 137–147, doi:10.1006/jcss.1997.1545, ISSN 0022-0000. First published as Alon, Noga; Matias, Yossi; Szegedy, Mario (1996), "The space complexity of approximating the frequency moments", Proceedings of the 28th ACM Symposium on Theory of Computing (STOC 1996), pp. 20–29, CiteSeerX 10.1.1.131.4984, doi:10.1145/237814.237823, ISBN 978-0-89791-785-8, S2CID 1627911.
Babcock, Brian; Babu, Shivnath; Datar, Mayur; Motwani, Rajeev; Widom, Jennifer (2002), "Models and issues in data stream systems", Proceedings of the 21st ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS 2002) (PDF), pp. 1–16, CiteSeerX 10.1.1.138.190, doi:10.1145/543613.543615, ISBN 978-1-58113-507-7, S2CID 2071130, archived from the original (PDF) on 2017-07-09, retrieved 2013-07-15.
Flajolet, Philippe; Martin, G. Nigel (1985). "Probabilistic counting algorithms for data base applications" (PDF). Journal of Computer and System Sciences. 31 (2): 182–209. doi:10.1016/0022-0000(85)90041-8. Retrieved 2016-12-11.
Gilbert, A. C.; Kotidis, Y.; Muthukrishnan, S.; Strauss, M. J. (2001), "Surfing Wavelets on Streams: One-Pass Summaries for Approximate Aggregate Queries" (PDF), Proceedings of the International Conference on Very Large Data Bases: 79–88.
Kane, Daniel M.; Nelson, Jelani; Woodruff, David P. (2010). "An optimal algorithm for the distinct elements problem". Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. PODS '10. New York, NY, USA: ACM. pp. 41–52. CiteSeerX 10.1.1.164.142. doi:10.1145/1807085.1807094. ISBN 978-1-4503-0033-9. S2CID 10006932..
Karp, R. M.; Papadimitriou, C. H.; Shenker, S. (2003), "A simple algorithm for finding frequent elements in streams and bags", ACM Transactions on Database Systems, 28 (1): 51–55, CiteSeerX 10.1.1.116.8530, doi:10.1145/762471.762473, S2CID 952840.
Lall, Ashwin; Sekar, Vyas; Ogihara, Mitsunori; Xu, Jun; Zhang, Hui (2006). "Data streaming algorithms for estimating entropy of network traffic". Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems (ACM SIGMETRICS 2006). p. 145. doi:10.1145/1140277.1140295. hdl:1802/2537. ISBN 978-1-59593-319-5. S2CID 240982..
Xu, Jun (Jim) (2007), A Tutorial on Network Data Streaming (PDF).
Heath, D., Kasif, S., Kosaraju, R., Salzberg, S., Sullivan, G., "Learning Nested Concepts With Limited Storage", Proceeding IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 2, Pages 777–782, Morgan Kaufmann Publishers Inc. San Francisco, CA, USA ©1991
Morris, Robert (1978), "Counting large numbers of events in small registers", Communications of the ACM, 21 (10): 840–842, doi:10.1145/359619.359627, S2CID 36226357.
ACM Transactions on Algorithms (TALG) is a quarterly peer-reviewed scientific journal covering the field of algorithms. It was established in 2005 and is published by the Association for Computing Machinery. The editor-in-chief is Edith Cohen. The journal was created when the editorial board of the Journal of Algorithms resigned in protest at the pricing policies of the publisher, Elsevier. Apart from regular submissions, the journal also invites selected papers from the ACM-SIAM Symposium on Discrete Algorithms (SODA).
== Abstracting and indexing ==
The journal is abstracted and indexed in the Science Citation Index Expanded, Current Contents/Engineering, Computing & Technology, and Scopus. According to the Journal Citation Reports, the journal has a 2023 impact factor of 0.9.
== Past editors ==
The following persons have been editors-in-chief of the journal:
Harold N. Gabow (2005-2008)
Susanne Albers (2008-2014)
Aravind Srinivasan (2014-2021)
== See also ==
Algorithmica
Algorithms (journal)
== References ==
== External links ==
Official website
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors.
Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures.
Consensus protocols are the basis for the state machine replication approach to distributed computing, as suggested by Leslie Lamport and surveyed by Fred Schneider. State machine replication is a technique for converting an algorithm into a fault-tolerant, distributed implementation. Ad-hoc techniques may leave important cases of failures unresolved. The principled approach proposed by Lamport et al. ensures all cases are handled safely.
The Paxos protocol was first submitted in 1989 and named after a fictional legislative consensus system used on the Paxos island in Greece, where Lamport wrote that the parliament had to function "even though legislators continually wandered in and out of the parliamentary Chamber". It was later published as a journal article in 1998.
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although no deterministic fault-tolerant consensus protocol can guarantee progress in an asynchronous network (a result proved in a paper by Fischer, Lynch and Paterson), Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke.
Paxos is usually used where durability is required (for example, to replicate a file or a database), in which the amount of durable state could be large. The protocol attempts to make progress even during periods when some bounded number of replicas are unresponsive. There is also a mechanism to drop a permanently failed replica or to add a new replica.
== History ==
The topic predates the protocol. In 1988, Lynch, Dwork and Stockmeyer had demonstrated the solvability of consensus in a broad family of "partially synchronous" systems. Paxos has strong similarities to a protocol used for agreement in "viewstamped replication", first published by Oki and Liskov in 1988, in the context of distributed transactions. Notwithstanding this prior work, Paxos offered a particularly elegant formalism, and included one of the earliest proofs of safety for a fault-tolerant distributed consensus protocol.
Reconfigurable state machines have strong ties to prior work on reliable group multicast protocols that support dynamic group membership, for example Birman's work in 1985 and 1987 on the virtually synchronous gbcast protocol. However, gbcast is unusual in supporting durability and addressing partitioning failures.
Most reliable multicast protocols lack these properties, which are required for implementations of the state machine replication model.
This point is elaborated in a paper by Lamport, Malkhi and Zhou.
Paxos protocols are members of a theoretical class of solutions to a problem formalized as uniform agreement with crash failures.
Lower bounds for this problem have been proved by Keidar and Shraer. Derecho, a C++ software library for cloud-scale state machine replication, offers a Paxos protocol that has been integrated with self-managed virtually synchronous membership. This protocol matches the Keidar and Shraer optimality bounds, and maps efficiently to modern remote DMA (RDMA) datacenter hardware (but uses TCP if RDMA is not available).
== Assumptions ==
In order to simplify the presentation of Paxos, the following assumptions and definitions are made explicit. Techniques to broaden the applicability are known in the literature, and are not covered in this article.
=== Processors ===
Processors operate at arbitrary speed.
Processors may experience failures.
Processors with stable storage may re-join the protocol after failures (following a crash-recovery failure model).
Processors do not collude, lie, or otherwise attempt to subvert the protocol. (That is, Byzantine failures don't occur. See Byzantine Paxos for a solution that tolerates failures that arise from arbitrary/malicious behavior of the processes.)
=== Network ===
Processors can send messages to any other processor.
Messages are sent asynchronously and may take arbitrarily long to deliver.
Messages may be lost, reordered, or duplicated.
Messages are delivered without corruption. (That is, Byzantine failures don't occur. See Byzantine Paxos for a solution which tolerates corrupted messages that arise from arbitrary/malicious behavior of the messaging channels.)
=== Number of processors ===
In general, a consensus algorithm can make progress using {\displaystyle n=2F+1} processors, despite the simultaneous failure of any {\displaystyle F} processors: in other words, the number of non-faulty processes must be strictly greater than the number of faulty processes. However, using reconfiguration, a protocol may be employed which survives any number of total failures as long as no more than F fail simultaneously. For Paxos protocols, these reconfigurations can be handled as separate configurations.
== Safety and liveness properties ==
In order to guarantee safety (also called "consistency"), Paxos defines three properties and ensures the first two are always held, regardless of the pattern of failures:
Validity (or non-triviality)
Only proposed values can be chosen and learned.
Agreement (or consistency, or safety)
No two distinct learners can learn different values (or there can't be more than one decided value).
Termination (or liveness)
If value C has been proposed, then eventually learner L will learn some value (if sufficient processors remain non-faulty).
Note that Paxos is not guaranteed to terminate, and thus does not have the liveness property. This is supported by the Fischer Lynch Paterson impossibility result (FLP) which states that a consistency protocol can only have two of safety, liveness, and fault tolerance. As Paxos's point is to ensure fault tolerance and it guarantees safety, it cannot also guarantee liveness.
== Typical deployment ==
In most deployments of Paxos, each participating process acts in three roles: Proposer, Acceptor and Learner. This reduces the message complexity significantly, without sacrificing correctness:
In Paxos, clients send commands to a leader. During normal operation, the leader receives a client's command, assigns it a new command number {\displaystyle i}, and then begins the {\displaystyle i}th instance of the consensus algorithm by sending messages to a set of acceptor processes.
By merging roles, the protocol "collapses" into an efficient client-master-replica style deployment, typical of the database community. The benefit of the Paxos protocols (including implementations with merged roles) is the guarantee of its safety properties.
A typical implementation's message flow is covered in the section Multi-Paxos.
== Basic Paxos ==
This protocol is the most basic of the Paxos family. Each "instance" (or "execution") of the basic Paxos protocol decides on a single output value. The protocol proceeds over several rounds. A successful round has 2 phases: phase 1 (which is divided into parts a and b) and phase 2 (which is divided into parts a and b). See below the description of the phases. Remember that we assume an asynchronous model, so e.g. a processor may be in one phase while another processor may be in another.
=== Phase 1 ===
==== Phase 1a: Prepare ====
A Proposer creates a message, which we call a Prepare. The message is identified with a unique number, n, which must be greater than any number previously used in a Prepare message by this Proposer. Note that n is not the value to be proposed; it is simply a unique identifier of this initial message by the Proposer. In fact, the Prepare message needn't contain the proposed value (often denoted by v).
The Proposer chooses at least a Quorum of Acceptors and sends the Prepare message containing n to them. A Proposer should not initiate Paxos if it cannot communicate with enough Acceptors to constitute a Quorum.
==== Phase 1b: Promise ====
The Acceptors wait for a Prepare message from any of the Proposers. When an Acceptor receives a Prepare message, the Acceptor must examine the identifier number, n, of that message. There are two cases:
If n is higher than every previous proposal number received by the Acceptor (from any Proposer), then the Acceptor must return a message (called a Promise) to the Proposer, indicating that the Acceptor will ignore all future proposals numbered less than or equal to n. The Promise must include the highest number among the Proposals that the Acceptor previously accepted, along with the corresponding accepted value.
If n is less than or equal to any previous proposal number received by the Acceptor, the Acceptor needn't respond and can ignore the proposal. However, for the sake of optimization, sending a denial, or negative acknowledgement (NAK), response would tell the Proposer that it can stop its attempt to create consensus with proposal n.
=== Phase 2 ===
==== Phase 2a: Accept ====
If a Proposer receives Promises from a Quorum of Acceptors, it needs to set a value v to its proposal. If any Acceptors had previously accepted any proposal, then they'll have sent their values to the Proposer, who now must set the value of its proposal, v, to the value associated with the highest proposal number reported by the Acceptors, let's call it z. If none of the Acceptors had accepted a proposal up to this point, then the Proposer may choose the value it originally wanted to propose, say x.
The Proposer sends an Accept message, (n, v), to a Quorum of Acceptors with the chosen value for its proposal, v, and the proposal number n (which is the same as the number contained in the Prepare message previously sent to the Acceptors). So, the Accept message is either (n, v=z) or, in case none of the Acceptors previously accepted a value, (n, v=x).
This Accept message should be interpreted as a "request", as in "Accept this proposal, please!".
==== Phase 2b: Accepted ====
If an Acceptor receives an Accept message, (n, v), from a Proposer, it must accept it if and only if it has not already promised (in Phase 1b of the Paxos protocol) to only consider proposals having an identifier greater than n.
If the Acceptor has not already promised (in Phase 1b) to only consider proposals having an identifier greater than n, it should register the value v (of the just received Accept message) as the accepted value (of the Protocol), and send an Accepted message to the Proposer and every Learner (which can typically be the Proposers themselves). Learners will learn the decided value only after receiving Accepted messages from a majority of acceptors, i.e. not after receiving just the first Accept message.
Else, it can ignore the Accept message or request.
Note that consensus is achieved when a majority of Acceptors accept the same identifier number (rather than the same value). Because each identifier is unique to a Proposer and only one value may be proposed per identifier, all Acceptors that accept the same identifier thereby accept the same value. These facts result in a few counter-intuitive scenarios that do not impact correctness: Acceptors can accept multiple values, a value may achieve a majority across Acceptors (with different identifiers) only to later be changed, and Acceptors may continue to accept proposals after an identifier has achieved a majority. However, the Paxos protocol guarantees that consensus is permanent and the chosen value is immutable.
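The acceptor side of these rules is compact enough to sketch directly. The following single-decree, in-memory illustration uses hypothetical message tuples; a real deployment would add stable storage, networking, retries, and the proposer logic described above:
class Acceptor:
    """Single-decree Paxos acceptor: the Phase 1b and Phase 2b rules."""

    def __init__(self):
        self.promised_n = None    # highest proposal number promised so far
        self.accepted_n = None    # number of the last accepted proposal
        self.accepted_v = None    # value of the last accepted proposal

    def on_prepare(self, n):
        """Phase 1b: promise only if n is higher than any number seen so far."""
        if self.promised_n is None or n > self.promised_n:
            self.promised_n = n
            return ("promise", n, self.accepted_n, self.accepted_v)
        return ("nack", n)        # optional negative acknowledgement

    def on_accept(self, n, v):
        """Phase 2b: accept unless a promise was made to a higher number."""
        if self.promised_n is None or n >= self.promised_n:
            self.promised_n = n
            self.accepted_n, self.accepted_v = n, v
            return ("accepted", n, v)
        return ("nack", n)
A proposer that collects Promises from a quorum must adopt the value attached to the highest accepted proposal number it sees, exactly as described in Phase 2a; only if every Promise reports no accepted value may it propose its own.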
=== When rounds fail ===
Rounds fail when multiple Proposers send conflicting Prepare messages, or when the Proposer does not receive a Quorum of responses (Promise or Accepted). In these cases, another round must be started with a higher proposal number.
=== Paxos can be used to select a leader ===
Notice that a Proposer in Paxos could propose "I am the leader" (or, for example, "Proposer X is the leader"). Because of the agreement and validity guarantees of Paxos, if accepted by a Quorum, then the Proposer is now known to be the leader to all other nodes. This satisfies the needs of leader election because there is a single node believing it is the leader and a single node known to be the leader at all times.
=== Graphic representation of the flow of messages in the basic Paxos ===
The following diagrams represent several cases/situations of the application of the Basic Paxos protocol. Some cases show how the Basic Paxos protocol copes with the failure of certain (redundant) components of the distributed system.
Note that the values returned in the Promise message are "null" the first time a proposal is made (since no Acceptor has accepted a value before in this round).
==== Basic Paxos without failures ====
In the diagram below, there is 1 Client, 1 Proposer, 3 Acceptors (i.e. the Quorum size is 3) and 2 Learners (represented by the 2 vertical lines). This diagram represents the case of a first round, which is successful (i.e. no process in the network fails).
Here, V is the last of (Va, Vb, Vc).
==== Error cases in basic Paxos ====
The simplest error cases are the failure of an Acceptor (when a Quorum of Acceptors remains alive) and failure of a redundant Learner. In these cases, the protocol requires no "recovery" (i.e. it still succeeds): no additional rounds or messages are required, as shown below (in the next two diagrams/cases).
==== Basic Paxos when an Acceptor fails ====
In the following diagram, one of the Acceptors in the Quorum fails, so the Quorum size becomes 2. In this case, the Basic Paxos protocol still succeeds.
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(1)
| | | | ! | | !! FAIL !!
| |<---------X--X | | Promise(1,{Va, Vb, null})
| X--------->|->| | | Accept!(1,V)
| |<---------X--X--------->|->| Accepted(1,V)
|<---------------------------------X--X Response
| | | | | |
==== Basic Paxos when a redundant learner fails ====
In the following case, one of the (redundant) Learners fails, but the Basic Paxos protocol still succeeds.
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(1)
| |<---------X--X--X | | Promise(1,{Va,Vb,Vc})
| X--------->|->|->| | | Accept!(1,V)
| |<---------X--X--X------>|->| Accepted(1,V)
| | | | | | ! !! FAIL !!
|<---------------------------------X Response
| | | | | |
==== Basic Paxos when a Proposer fails ====
In this case, a Proposer fails after proposing a value, but before the agreement is reached. Specifically, it fails in the middle of the Accept message, so only one Acceptor of the Quorum receives the value. Meanwhile, a new Leader (a Proposer) is elected (but this is not shown in detail). Note that there are 2 rounds in this case (rounds proceed vertically, from the top to the bottom).
Client Proposer Acceptor Learner
| | | | | | |
X----->| | | | | | Request
| X------------>|->|->| | | Prepare(1)
| |<------------X--X--X | | Promise(1,{Va, Vb, Vc})
| | | | | | |
| | | | | | | !! Leader fails during broadcast !!
| X------------>| | | | | Accept!(1,V)
| ! | | | | |
| | | | | | | !! NEW LEADER !!
| X--------->|->|->| | | Prepare(2)
| |<---------X--X--X | | Promise(2,{V, null, null})
| X--------->|->|->| | | Accept!(2,V)
| |<---------X--X--X------>|->| Accepted(2,V)
|<---------------------------------X--X Response
| | | | | | |
==== Basic Paxos when multiple Proposers conflict ====
The most complex case is when multiple Proposers believe themselves to be Leaders. For instance, the current leader may fail and later recover, but the other Proposers have already re-selected a new leader. The recovered leader has not learned this yet and attempts to begin one round in conflict with the current leader. In the diagram below, 4 unsuccessful rounds are shown, but there could be more (as suggested at the bottom of the diagram).
Client Proposer Acceptor Learner
| | | | | | |
X----->| | | | | | Request
| X------------>|->|->| | | Prepare(1)
| |<------------X--X--X | | Promise(1,{null,null,null})
| ! | | | | | !! LEADER FAILS
| | | | | | | !! NEW LEADER (knows last number was 1)
| X--------->|->|->| | | Prepare(2)
| |<---------X--X--X | | Promise(2,{null,null,null})
| | | | | | | | !! OLD LEADER recovers
| | | | | | | | !! OLD LEADER tries 2, denied
| X------------>|->|->| | | Prepare(2)
| |<------------X--X--X | | Nack(2)
| | | | | | | | !! OLD LEADER tries 3
| X------------>|->|->| | | Prepare(3)
| |<------------X--X--X | | Promise(3,{null,null,null})
| | | | | | | | !! NEW LEADER proposes, denied
| | X--------->|->|->| | | Accept!(2,Va)
| | |<---------X--X--X | | Nack(3)
| | | | | | | | !! NEW LEADER tries 4
| | X--------->|->|->| | | Prepare(4)
| | |<---------X--X--X | | Promise(4,{null,null,null})
| | | | | | | | !! OLD LEADER proposes, denied
| X------------>|->|->| | | Accept!(3,Vb)
| |<------------X--X--X | | Nack(4)
| | | | | | | | ... and so on ...
==== Basic Paxos where an Acceptor accepts Two Different Values ====
In the following case, one Proposer achieves acceptance of value V1 by one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. Then V2 is accepted by all Acceptors, including the one that initially accepted V1.
Proposer Acceptor Learner
| | | | | | |
X--------->|->|->| | | Prepare(1)
|<---------X--X--X | | Promise(1,{null,null,null})
x--------->| | | | | Accept!(1,V1)
| | X------------>|->| Accepted(1,V1)
! | | | | | | !! FAIL !!
| | | | | |
X--------->|->| | | Prepare(2)
|<---------X--X | | Promise(2,{null,null})
X------>|->|->| | | Accept!(2,V2)
|<------X--X--X------>|->| Accepted(2,V2)
| | | | | |
==== Basic Paxos where a multi-identifier majority is insufficient ====
In the following case, one Proposer achieves acceptance of value V1 of one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. This Proposer is able to get one Acceptor to accept V2 before failing. A new Proposer finds a majority that includes the Acceptor that has accepted V1, and must propose it. The Proposer manages to get two Acceptors to accept it before failing. At this point, three Acceptors have accepted V1, but not for the same identifier. Finally, a new Proposer prepares the majority that has not seen the largest accepted identifier. The value associated with the largest identifier in that majority is V2, so it must propose it. This Proposer then gets all Acceptors to accept V2, achieving consensus.
Proposer Acceptor Learner
| | | | | | | | | | |
X--------------->|->|->|->|->| | | Prepare(1)
|<---------------X--X--X--X--X | | Promise(1,{null,null,null,null,null})
x--------------->| | | | | | | Accept!(1,V1)
| | | | X------------------>|->| Accepted(1,V1)
! | | | | | | | | | | !! FAIL !!
| | | | | | | | | |
X--------------->|->|->|->| | | Prepare(2)
|<---------------X--X--X--X | | Promise(2,{null,null,null,null})
X--------------->| | | | | | Accept!(2,V2)
| | | | X--------------->|->| Accepted(2,V2)
! | | | | | | | | | !! FAIL !!
| | | | | | | | |
X--------->|---->|->|->| | | Prepare(3)
|<---------X-----X--X--X | | Promise(3,{V1,null,null,null})
X--------------->|->| | | | Accept!(3,V1)
| | | | X--X--------->|->| Accepted(3,V1)
! | | | | | | | | !! FAIL !!
| | | | | | | |
X------>|->|------->| | | Prepare(4)
|<------X--X--|--|--X | | Promise(4,{V1(1),V2(2),null})
X------>|->|->|->|->| | | Accept!(4,V2)
| X--X--X--X--X------>|->| Accepted(4,V2)
==== Basic Paxos where new Proposers cannot change an existing consensus ====
In the following case, one Proposer achieves acceptance of value V1 of two Acceptors before failing. A new Proposer may start another round, but it is now impossible for that proposer to prepare a majority that doesn't include at least one Acceptor that has accepted V1. As such, even though the Proposer doesn't see the existing consensus, the Proposer's only option is to propose the value already agreed upon. New Proposers can continually increase the identifier to restart the process, but the consensus can never be changed.
Proposer Acceptor Learner
| | | | | | |
X--------->|->|->| | | Prepare(1)
|<---------X--X--X | | Promise(1,{null,null,null})
x--------->|->| | | | Accept!(1,V1)
| | X--X--------->|->| Accepted(1,V1)
! | | | | | | !! FAIL !!
| | | | | |
X--------->|->| | | Prepare(2)
|<---------X--X | | Promise(2,{V1,null})
X------>|->|->| | | Accept!(2,V1)
|<------X--X--X------>|->| Accepted(2,V1)
| | | | | |
== Multi-Paxos ==
A typical deployment of Paxos requires a continuous stream of agreed values acting as commands to a distributed state machine. If each command is the result of a single instance of the Basic Paxos protocol, a significant amount of overhead would result.
If the leader is relatively stable, phase 1 becomes unnecessary. Thus, it is possible to skip phase 1 for future instances of the protocol with the same leader.
To achieve this, the round number I is included along with each value and is incremented in each round by the same Leader. Multi-Paxos reduces the failure-free message delay (proposal to learning) from 4 delays to 2 delays.
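As an illustration only, and with failure handling and the value-recovery part of Promise omitted, a stable leader can run phase 1 once and then reuse its round number N, issuing only Accept! messages with increasing instance numbers I. The Acceptor class and the on_prepare/on_accept method names below are hypothetical stand-ins for real message exchanges:

class Acceptor:
    """Toy acceptor: promises and accepts anything with a high-enough round number."""
    def __init__(self):
        self.promised_n = 0
    def on_prepare(self, n):
        if n >= self.promised_n:
            self.promised_n = n
            return ("promise", n)
        return None
    def on_accept(self, n, instance, value):
        return n >= self.promised_n

class MultiPaxosLeader:
    """Sketch of a stable Multi-Paxos leader: phase 1 runs once, phase 2 runs per value."""
    def __init__(self, acceptors, n):
        self.acceptors = acceptors
        self.n = n              # round number N, kept for the leader's whole tenure
        self.instance = 0       # instance number I, incremented for each agreed value
        self.prepared = False

    def propose(self, value):
        if not self.prepared:   # Prepare/Promise happens only for the first value
            promises = [a.on_prepare(self.n) for a in self.acceptors]
            self.prepared = sum(p is not None for p in promises) > len(self.acceptors) // 2
        self.instance += 1      # later values go straight to Accept!(N, I+1, ...)
        acks = [a.on_accept(self.n, self.instance, value) for a in self.acceptors]
        return sum(acks) > len(self.acceptors) // 2

leader = MultiPaxosLeader([Acceptor() for _ in range(3)], n=1)
assert leader.propose("V") and leader.propose("W")   # the second value skips phase 1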
=== Graphic representation of the flow of messages in the Multi-Paxos ===
==== Multi-Paxos without failures ====
In the following diagram, only one instance (or "execution") of the basic Paxos protocol, with an initial Leader (a Proposer), is shown. Note that a Multi-Paxos consists of several instances of the basic Paxos protocol.
Client Proposer Acceptor Learner
| | | | | | | --- First Request ---
X-------->| | | | | | Request
| X--------->|->|->| | | Prepare(N)
| |<---------X--X--X | | Promise(N,I,{Va,Vb,Vc})
| X--------->|->|->| | | Accept!(N,I,V)
| |<---------X--X--X------>|->| Accepted(N,I,V)
|<---------------------------------X--X Response
| | | | | | |
where V = last of (Va, Vb, Vc).
==== Multi-Paxos when phase 1 can be skipped ====
In this case, subsequent instances of the basic Paxos protocol (represented by I+1) use the same leader, so phase 1 of these subsequent instances, which consists of the Prepare and Promise sub-phases, is skipped. Note that the Leader should be stable, i.e. it should not crash or change.
Client Proposer Acceptor Learner
| | | | | | | --- Following Requests ---
X-------->| | | | | | Request
| X--------->|->|->| | | Accept!(N,I+1,W)
| |<---------X--X--X------>|->| Accepted(N,I+1,W)
|<---------------------------------X--X Response
| | | | | | |
==== Multi-Paxos when roles are collapsed ====
A common deployment of the Multi-Paxos consists in collapsing the role of the Proposers, Acceptors and Learners to "Servers". So, in the end, there are only "Clients" and "Servers".
The following diagram represents the first "instance" of a basic Paxos protocol, when the roles of the Proposer, Acceptor and Learner are collapsed to a single role, called the "Server".
Client Servers
| | | | --- First Request ---
X-------->| | | Request
| X->|->| Prepare(N)
| |<-X--X Promise(N, I, {Va, Vb})
| X->|->| Accept!(N, I, Vn)
| X<>X<>X Accepted(N, I)
|<--------X | | Response
| | | |
==== Multi-Paxos when roles are collapsed and the leader is steady ====
In the subsequent instances of the basic Paxos protocol, with the same leader as in the previous instances of the basic Paxos protocol, the phase 1 can be skipped.
Client Servers
X-------->| | | Request
| X->|->| Accept!(N,I+1,W)
| X<>X<>X Accepted(N,I+1)
|<--------X | | Response
| | | |
== Optimisations ==
A number of optimisations can be performed to reduce the number of exchanged messages, to improve the performance of the protocol, etc. A few of these optimisations are reported below.
"We can save messages at the cost of an extra message delay by having a single distinguished learner that informs the other learners when it finds out that a value has been chosen. Acceptors then send Accepted messages only to the distinguished learner. In most applications, the roles of leader and distinguished learner are performed by the same processor.
"A leader can send its Prepare and Accept! messages just to a quorum of acceptors. As long as all acceptors in that quorum are working and can communicate with the leader and the learners, there is no need for acceptors not in the quorum to do anything.
"Acceptors do not care what value is chosen. They simply respond to Prepare and Accept! messages to ensure that, despite failures, only a single value can be chosen. However, if an acceptor does learn what value has been chosen, it can store the value in stable storage and erase any other information it has saved there. If the acceptor later receives a Prepare or Accept! message, instead of performing its Phase1b or Phase2b action, it can simply inform the leader of the chosen value.
"Instead of sending the value v, the leader can send a hash of v to some acceptors in its Accept! messages. A learner will learn that v is chosen if it receives Accepted messages for either v or its hash from a quorum of acceptors, and at least one of those messages contains v rather than its hash. However, a leader could receive Promise messages that tell it the hash of a value v that it must use in its Phase2a action without telling it the actual value of v. If that happens, the leader cannot execute its Phase2a action until it communicates with some process that knows v."
"A proposer can send its proposal only to the leader rather than to all coordinators. However, this requires that the result of the leader-selection algorithm be broadcast to the proposers, which might be expensive. So, it might be better to let the proposer send its proposal to all coordinators. (In that case, only the coordinators themselves need to know who the leader is.)
"Instead of each acceptor sending Accepted messages to each learner, acceptors can send their Accepted messages to the leader and the leader can inform the learners when a value has been chosen. However, this adds an extra message delay.
"Finally, observe that phase 1 is unnecessary for round 1 .. The leader of round 1 can begin the round by sending an Accept! message with any proposed value."
== Cheap Paxos ==
Cheap Paxos extends Basic Paxos to tolerate F failures with F+1 main processors and F auxiliary processors by dynamically reconfiguring after each failure.
This reduction in processor requirements comes at the expense of liveness; if too many main processors fail in a short time, the system must halt until the auxiliary processors can reconfigure the system. During stable periods, the auxiliary processors take no part in the protocol.
"With only two processors p and q, one processor cannot distinguish failure of the other processor from failure of the communication medium. A third processor is needed. However, that third processor does not have to participate in choosing the sequence of commands. It must take action only in case p or q fails, after which it does nothing while either p or q continues to operate the system by itself. The third processor can therefore be a small/slow/cheap one, or a processor primarily devoted to other tasks."
=== Message flow: Cheap Multi-Paxos ===
An example involving three main acceptors, one auxiliary acceptor and quorum size of three, showing failure of one main processor and subsequent reconfiguration:
{ Acceptors }
Proposer Main Aux Learner
| | | | | | -- Phase 2 --
X----------->|->|->| | | Accept!(N,I,V)
| | | ! | | --- FAIL! ---
|<-----------X--X--------------->| Accepted(N,I,V)
| | | | | -- Failure detected (only 2 accepted) --
X----------->|->|------->| | Accept!(N,I,V) (re-transmit, include Aux)
|<-----------X--X--------X------>| Accepted(N,I,V)
| | | | | -- Reconfigure : Quorum = 2 --
X----------->|->| | | Accept!(N,I+1,W) (Aux not participating)
|<-----------X--X--------------->| Accepted(N,I+1,W)
| | | | |
== Fast Paxos ==
Fast Paxos generalizes Basic Paxos to reduce end-to-end message delays. In Basic Paxos, the message delay from client request to learning is 3 message delays. Fast Paxos allows 2 message delays, but requires that (1) the system be composed of 3f + 1 acceptors to tolerate up to f faults (instead of the classic 2f + 1), and (2) the Client send its request to multiple destinations.
Intuitively, if the leader has no value to propose, then a client could send an Accept! message to the Acceptors directly. The Acceptors would respond as in Basic Paxos, sending Accepted messages to the leader and every Learner, achieving two message delays from Client to Learner.
If the leader detects a collision, it resolves the collision by sending Accept! messages for a new round which are Accepted as usual. This coordinated recovery technique requires four message delays from Client to Learner.
The final optimization occurs when the leader specifies a recovery technique in advance, allowing the Acceptors to perform the collision recovery themselves. Thus, uncoordinated collision recovery can occur in three message delays (and only two message delays if all Learners are also Acceptors).
=== Message flow: Fast Paxos, non-conflicting ===
Client Leader Acceptor Learner
| | | | | | | |
| X--------->|->|->|->| | | Any(N,I,Recovery)
| | | | | | | |
X------------------->|->|->|->| | | Accept!(N,I,W)
| |<---------X--X--X--X------>|->| Accepted(N,I,W)
|<------------------------------------X--X Response(W)
| | | | | | | |
=== Message flow: Fast Paxos, conflicting proposals ===
Conflicting proposals with coordinated recovery. Note: the protocol does not specify how to handle the dropped client request.
Client Leader Acceptor Learner
| | | | | | | | |
| | | | | | | | |
| | | | | | | | | !! Concurrent conflicting proposals
| | | | | | | | | !! received in different order
| | | | | | | | | !! by the Acceptors
| X--------------?|-?|-?|-?| | | Accept!(N,I,V)
X-----------------?|-?|-?|-?| | | Accept!(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Acceptors disagree on value
| | |<-------X--X->|->|----->|->| Accepted(N,I,V)
| | |<-------|<-|<-X--X----->|->| Accepted(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Detect collision & recover
| | X------->|->|->|->| | | Accept!(N+1,I,W)
| | |<-------X--X--X--X----->|->| Accepted(N+1,I,W)
|<---------------------------------X--X Response(W)
| | | | | | | | |
Conflicting proposals with uncoordinated recovery.
Client Leader Acceptor Learner
| | | | | | | | |
| | X------->|->|->|->| | | Any(N,I,Recovery)
| | | | | | | | |
| | | | | | | | | !! Concurrent conflicting proposals
| | | | | | | | | !! received in different order
| | | | | | | | | !! by the Acceptors
| X--------------?|-?|-?|-?| | | Accept!(N,I,V)
X-----------------?|-?|-?|-?| | | Accept!(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Acceptors disagree on value
| | |<-------X--X->|->|----->|->| Accepted(N,I,V)
| | |<-------|<-|<-X--X----->|->| Accepted(N,I,W)
| | | | | | | | |
| | | | | | | | | !! Detect collision & recover
| | |<-------X--X--X--X----->|->| Accepted(N+1,I,W)
|<---------------------------------X--X Response(W)
| | | | | | | | |
=== Message flow: Fast Paxos with uncoordinated recovery, collapsed roles ===
(merged Acceptor/Learner roles)
Client Servers
| | | | | |
| | X->|->|->| Any(N,I,Recovery)
| | | | | |
| | | | | | !! Concurrent conflicting proposals
| | | | | | !! received in different order
| | | | | | !! by the Servers
| X--------?|-?|-?|-?| Accept!(N,I,V)
X-----------?|-?|-?|-?| Accept!(N,I,W)
| | | | | |
| | | | | | !! Servers disagree on value
| | X<>X->|->| Accepted(N,I,V)
| | |<-|<-X<>X Accepted(N,I,W)
| | | | | |
| | | | | | !! Detect collision & recover
| | X<>X<>X<>X Accepted(N+1,I,W)
|<-----------X--X--X--X Response(W)
| | | | | |
== Generalized Paxos ==
Generalized consensus explores the relationship between the operations of the replicated state machine and the consensus protocol that implements it. The main discovery involves optimizations of Paxos when conflicting proposals could be applied in any order, i.e., when the proposed operations are commutative operations for the state machine. In such cases, the conflicting operations can both be accepted, avoiding the delays required for resolving conflicts and re-proposing the rejected operations.
This concept is further generalized into ever-growing sequences of commutative operations, some of which are known to be stable (and thus may be executed). The protocol tracks these sequences ensuring that all proposed operations of one sequence are stabilized before allowing any operation non-commuting with them to become stable.
=== Example ===
In order to illustrate Generalized Paxos, the example below shows a message flow between two concurrently executing clients and a replicated state machine implementing read/write operations over two distinct registers A and B.
Note that operations on the same register do not commute when at least one of them is a write, while operations on distinct registers always commute.
A possible sequence of operations :
<1:Read(A), 2:Read(B), 3:Write(B), 4:Read(B), 5:Read(A), 6:Write(A)>
Since 5:Read(A) commutes with both 3:Write(B) and 4:Read(B), one possible permutation equivalent to the previous order is the following:
<1:Read(A), 2:Read(B), 5:Read(A), 3:Write(B), 4:Read(B), 6:Write(A)>
In practice, such reordering matters only when operations are proposed concurrently.
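A minimal sketch of the commutativity test underlying this example; the representation of an operation as a (kind, register) pair is an assumption made purely for illustration:

def commute(op1, op2):
    """Return True if two register operations can be reordered freely.

    Each operation is a (kind, register) pair, e.g. ("Read", "A") or ("Write", "B").
    Operations on different registers always commute; two reads of the same
    register commute; anything involving a write to a shared register does not.
    """
    (kind1, reg1), (kind2, reg2) = op1, op2
    if reg1 != reg2:
        return True
    return kind1 == "Read" and kind2 == "Read"

# 5:Read(A) commutes with 3:Write(B) and 4:Read(B), as in the permutation above:
assert commute(("Read", "A"), ("Write", "B"))
assert not commute(("Write", "B"), ("Read", "B"))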
=== Message flow: Generalized Paxos (example) ===
Responses not shown. Note: message abbreviations differ from previous message flows due to specifics of the protocol; see the referenced description of Generalized Paxos for a full discussion.
Client Leader Acceptor Learner
| | | | | | | | !! New Leader Begins Round
| | X----->|->|->| | | Prepare(N)
| | |<-----X- X- X | | Promise(N,null)
| | X----->|->|->| | | Phase2Start(N,null)
| | | | | | | |
| | | | | | | | !! Concurrent commuting proposals
| X------- ?|-----?|-?|-?| | | Propose(ReadA)
X-----------?|-----?|-?|-?| | | Propose(ReadB)
| | X------X-------------->|->| Accepted(N,<ReadA,ReadB>)
| | |<--------X--X-------->|->| Accepted(N,<ReadB,ReadA>)
| | | | | | | |
| | | | | | | | !! No Conflict, both accepted
| | | | | | | | Stable = <ReadA, ReadB>
| | | | | | | |
| | | | | | | | !! Concurrent conflicting proposals
X-----------?|-----?|-?|-?| | | Propose(<WriteB,ReadA>)
| X--------?|-----?|-?|-?| | | Propose(ReadB)
| | | | | | | |
| | X------X-------------->|->| Accepted(N,<WriteB,ReadA> . <ReadB>)
| | |<--------X--X-------->|->| Accepted(N,<ReadB> . <WriteB,ReadA>)
| | | | | | | |
| | | | | | | | !! Conflict detected, leader chooses
| | | | | | | | commutative order:
| | | | | | | | V = <ReadA, WriteB, ReadB>
| | | | | | | |
| | X----->|->|->| | | Phase2Start(N+1,V)
| | |<-----X- X- X-------->|->| Accepted(N+1,V)
| | | | | | | | Stable = <ReadA, ReadB> .
| | | | | | | | <ReadA, WriteB, ReadB>
| | | | | | | |
| | | | | | | | !! More conflicting proposals
X-----------?|-----?|-?|-?| | | Propose(WriteA)
| X--------?|-----?|-?|-?| | | Propose(ReadA)
| | | | | | | |
| | X------X-------------->|->| Accepted(N+1,<WriteA> . <ReadA>)
| | |<--------X- X-------->|->| Accepted(N+1,<ReadA> . <WriteA>)
| | | | | | | |
| | | | | | | | !! Leader chooses order:
| | | | | | | | W = <WriteA, ReadA>
| | | | | | | |
| | X----->|->|->| | | Phase2Start(N+2,W)
| | |<-----X- X- X-------->|->| Accepted(N+2,W)
| | | | | | | | Stable = <ReadA, ReadB> .
| | | | | | | | <ReadA, WriteB, ReadB> .
| | | | | | | | <WriteA, ReadA>
| | | | | | | |
=== Performance ===
The above message flow shows that Generalized Paxos can leverage operation semantics to avoid collisions when the spontaneous ordering of the network fails. This allows the protocol to be quicker in practice than Fast Paxos. However, when a collision occurs, Generalized Paxos needs two additional round trips to recover. This situation is illustrated with operations WriteB and ReadB in the above schema.
In the general case, such round trips are unavoidable and come from the fact that multiple commands can be accepted during a round. This makes the protocol more expensive than Paxos when conflicts are frequent. Two possible refinements of Generalized Paxos can improve recovery time.
First, if the coordinator is part of every quorum of acceptors (round N is said to be centered), then to recover at round N+1 from a collision at round N, the coordinator skips phase 1 and proposes at phase 2 the sequence it accepted last during round N. This reduces the cost of recovery to a single round trip.
Second, if both rounds N and N+1 use a unique and identical centered quorum, when an acceptor detects a collision at round N, it spontaneously proposes at round N+1 a sequence suffixing both (i) the sequence accepted at round N by the coordinator and (ii) the greatest non-conflicting prefix it accepted at round N. For instance, if the coordinator and the acceptor accepted respectively at round N <WriteB, ReadB> and <ReadB, ReadA> , the acceptor will spontaneously accept <WriteB, ReadB, ReadA> at round N+1. With this variation, the cost of recovery is a single message delay which is obviously optimal. Notice here that the use of a unique quorum at a round does not harm liveness. This comes from the fact that any process in this quorum is a read quorum for the prepare phase of the next rounds.
== Byzantine Paxos ==
Paxos may also be extended to support arbitrary failures of the participants, including lying, fabrication of messages, collusion with other participants, selective non-participation, etc. These types of failures are called Byzantine failures, after the Byzantine generals problem popularized by Lamport.
Byzantine Paxos, introduced by Castro and Liskov, adds an extra message (Verify) which acts to distribute knowledge and verify the actions of the other processors:
=== Message flow: Byzantine Multi-Paxos, steady state ===
Client Proposer Acceptor Learner
| | | | | | |
X-------->| | | | | | Request
| X--------->|->|->| | | Accept!(N,I,V)
| | X<>X<>X | | Verify(N,I,V) - BROADCAST
| |<---------X--X--X------>|->| Accepted(N,V)
|<---------------------------------X--X Response(V)
| | | | | | |
Fast Byzantine Paxos, introduced by Martin and Alvisi, removes this extra delay, since the client sends commands directly to the Acceptors.
Note that the Accepted message in Fast Byzantine Paxos is sent to all Acceptors and all Learners, while Fast Paxos sends Accepted messages only to Learners:
=== Message flow: Fast Byzantine Multi-Paxos, steady state ===
Client Acceptor Learner
| | | | | |
X----->|->|->| | | Accept!(N,I,V)
| X<>X<>X------>|->| Accepted(N,I,V) - BROADCAST
|<-------------------X--X Response(V)
| | | | | |
The failure scenario is the same for both protocols; each Learner waits to receive F+1 identical messages from different Acceptors. If this does not occur, the Acceptors themselves will also be aware of it (since they exchanged each other's messages in the broadcast round), and correct Acceptors will re-broadcast the agreed value:
=== Message flow: Fast Byzantine Multi-Paxos, failure ===
Client Acceptor Learner
| | | ! | | !! One Acceptor is faulty
X----->|->|->! | | Accept!(N,I,V)
| X<>X<>X------>|->| Accepted(N,I,{V,W}) - BROADCAST
| | | ! | | !! Learners receive 2 different commands
| | | ! | | !! Correct Acceptors notice error and choose
| X<>X<>X------>|->| Accepted(N,I,V) - BROADCAST
|<-------------------X--X Response(V)
| | | ! | |
== Adapting Paxos for RDMA networks ==
With the emergence of very high speed reliable datacenter networks that support remote direct memory access (RDMA), there has been substantial interest in optimizing Paxos to leverage hardware offloading, in which the network interface card and network routers provide reliability and network-layer congestion control, freeing the host CPU for other tasks. The Derecho C++ Paxos library is an open-source Paxos implementation that explores this option.
Derecho offers both a classic Paxos, with data durability across full shutdown/restart sequences, and vertical Paxos (atomic multicast), for in-memory replication and state-machine synchronization. The Paxos protocols employed by Derecho needed to be adapted to maximize asynchronous data streaming and remove other sources of delay on the leader's critical path. Doing so enables Derecho to sustain the full bidirectional RDMA data rate. In contrast, although traditional Paxos protocols can be migrated to an RDMA network by simply mapping the message send operations to native RDMA operations, doing so leaves round-trip delays on the critical path. In high-speed RDMA networks, even small delays can be large enough to prevent utilization of the full potential bandwidth.
== Production use of Paxos ==
Google uses the Paxos algorithm in their Chubby distributed lock service in order to keep replicas consistent in case of failure. Chubby is used by Bigtable, which is now in production in Google Analytics and other products.
Google Spanner and Megastore use the Paxos algorithm internally.
The OpenReplica replication service uses Paxos to maintain replicas for an open access system that enables users to create fault-tolerant objects. It provides high performance through concurrent rounds and flexibility through dynamic membership changes.
IBM supposedly uses the Paxos algorithm in their IBM SAN Volume Controller product to implement a general purpose fault-tolerant virtual machine used to run the configuration and control components of the storage virtualization services offered by the cluster.
Microsoft uses Paxos in the Autopilot cluster management service from Bing, and in Windows Server Failover Clustering.
WANdisco have implemented Paxos within their DConE active-active replication technology.
XtreemFS uses a Paxos-based lease negotiation algorithm for fault-tolerant and consistent replication of file data and metadata.
Heroku uses Doozerd which implements Paxos for its consistent distributed data store.
Ceph uses Paxos as part of the monitor processes to agree which OSDs are up and in the cluster.
The MariaDB Xpand distributed SQL database uses Paxos for distributed transaction resolution.
Neo4j HA graph database implements Paxos, replacing Apache ZooKeeper from v1.9
Apache Cassandra NoSQL database uses Paxos for Light Weight Transaction feature only.
ScyllaDB NoSQL database uses Paxos for Light Weight Transactions.
Amazon Elastic Container Services uses Paxos to maintain a consistent view of cluster state.
Amazon DynamoDB uses the Paxos algorithm for leader election and consensus.
== See also ==
Two generals problem
Chandra–Toueg consensus algorithm
State machine
Raft
== References == | Wikipedia/Paxos_algorithm |
Raft is a consensus algorithm designed as an alternative to the Paxos family of algorithms. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features. Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions. It has a number of open-source reference implementations, with full-specification implementations in Go, C++, Java, and Scala. It is named after Reliable, Replicated, Redundant, And Fault-Tolerant.
Raft is not a Byzantine fault tolerant (BFT) algorithm; the nodes trust the elected leader.
== Basics ==
Raft achieves consensus via an elected leader. A server in a raft cluster is either a leader or a follower, and can be a candidate in the precise case of an election (leader unavailable). The leader is responsible for log replication to the followers. It regularly informs the followers of its existence by sending a heartbeat message. Each follower has a timeout (typically between 150 and 300 ms) in which it expects the heartbeat from the leader. The timeout is reset on receiving the heartbeat. If no heartbeat is received the follower changes its status to candidate and starts a leader election.
=== Approach of the consensus problem in Raft ===
Raft implements consensus by a leader approach. The cluster has one and only one elected leader which is fully responsible for managing log replication on the other servers of the cluster. It means that the leader can decide on new entries' placement and establishment of data flow between it and the other servers without consulting other servers. A leader leads until it fails or disconnects, in which case surviving servers elect a new leader.
The consensus problem is decomposed in Raft into two relatively independent subproblems, listed below.
==== Leader election ====
When the existing leader fails or when the algorithm initializes, a new leader needs to be elected.
In this case, a new term starts in the cluster. A term is an arbitrary period of time during which at most one leader can be in charge. Each term starts with a leader election. If the election completes successfully (i.e. a single leader is elected), the term continues with normal operations orchestrated by the new leader. If the election fails, a new term starts with a new election.
A leader election is started by a candidate server. A server becomes a candidate if it receives no communication by the leader over a period called the election timeout, so it assumes there is no acting leader anymore. It starts the election by increasing the term counter, voting for itself as new leader, and sending a message to all other servers requesting their vote. A server will vote only once per term, on a first-come-first-served basis. If a candidate receives a message from another server with a term number larger than the candidate's current term, then the candidate's election is defeated and the candidate changes into a follower and recognizes the leader as legitimate. If a candidate receives a majority of votes, then it becomes the new leader. If neither happens, e.g., because of a split vote, then a new term starts, and a new election begins.
Raft uses a randomized election timeout to ensure that split vote problems are resolved quickly. This should reduce the chance of a split vote because servers won't become candidates at the same time: a single server will time out, win the election, then become leader and send heartbeat messages to other servers before any of the followers can become candidates.
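As a sketch of the follower-side timing just described, the 150–300 ms range being the typical values mentioned above (the surrounding server loop is assumed and not shown, and the class name is illustrative):

import random
import time

ELECTION_TIMEOUT_RANGE = (0.150, 0.300)  # seconds, typical values from above

class FollowerTimer:
    """Tracks heartbeats and decides when a follower should become a candidate."""

    def __init__(self):
        self.reset()

    def reset(self):
        # A fresh, randomized timeout reduces the chance of split votes.
        self.timeout = random.uniform(*ELECTION_TIMEOUT_RANGE)
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        self.reset()

    def should_start_election(self):
        return time.monotonic() - self.last_heartbeat > self.timeout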
==== Log replication ====
The leader is responsible for the log replication. It accepts client requests. Each client request consists of a command to be executed by the replicated state machines in the cluster. After being appended to the leader's log as a new entry, each of the requests is forwarded to the followers as AppendEntries messages. In case of unavailability of the followers, the leader retries AppendEntries messages indefinitely, until the log entry is eventually stored by all of the followers.
Once the leader receives confirmation from half or more of its followers that the entry has been replicated, the leader applies the entry to its local state machine, and the request is considered committed. This event also commits all previous entries in the leader's log. Once a follower learns that a log entry is committed, it applies the entry to its local state machine. This ensures consistency of the logs between all the servers through the cluster, ensuring that the safety rule of Log Matching is respected.
In the case of a leader crash, the logs can be left inconsistent, with some logs from the old leader not being fully replicated through the cluster. The new leader will then handle inconsistency by forcing the followers to duplicate its own log. To do so, for each of its followers, the leader will compare its log with the log from the follower, find the last entry where they agree, then delete all the entries coming after this critical entry in the follower log and replace it with its own log entries. This mechanism will restore log consistency in a cluster subject to failures.
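As a sketch of that repair rule, with log entries modeled as (term, command) pairs (an illustrative simplification, not a full AppendEntries implementation):

def repair_follower_log(leader_log, follower_log):
    """Return the follower's log forced to match the leader's.

    Find the last index where both logs agree (same term and command), drop
    everything after it in the follower's log, and append the leader's
    remaining entries.
    """
    match = 0
    for i, (l_entry, f_entry) in enumerate(zip(leader_log, follower_log)):
        if l_entry != f_entry:
            break
        match = i + 1
    return follower_log[:match] + leader_log[match:]

leader = [(1, "a"), (1, "b"), (2, "c")]
follower = [(1, "a"), (1, "x"), (1, "y")]
assert repair_follower_log(leader, follower) == leader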
=== Safety ===
==== Safety rules in Raft ====
Raft guarantees each of these safety properties:
Election safety: at most one leader can be elected in a given term.
Leader append-only: a leader can only append new entries to its logs (it can neither overwrite nor delete entries).
Log matching: if two logs contain an entry with the same index and term, then the logs are identical in all entries up through the given index.
Leader completeness: if a log entry is committed in a given term then it will be present in the logs of the leaders since this term.
State machine safety: if a server has applied a particular log entry to its state machine, then no other server may apply a different command for the same log index.
The first four rules are guaranteed by the details of the algorithm described in the previous section. The State Machine Safety is guaranteed by a restriction on the election process.
==== State machine safety ====
This rule is ensured by a simple restriction: a candidate can't win an election unless its log contains all committed entries. In order to be elected, a candidate has to contact a majority of the cluster, and given the rules for logs to be committed, it means that every committed entry is going to be present on at least one of the servers the candidate contacts.
Raft determines which of two logs (carried by two distinct servers) is more up-to-date by comparing the index and term of the last entries in the logs. If the logs have last entries with different terms, then the log with the later term is more up-to-date. If the logs end with the same term, then whichever log is longer is more up-to-date.
In Raft, the request from a candidate to a voter includes information about the candidate's log. If its own log is more up-to-date than the candidate's log, the voter denies its vote to the candidate. This implementation ensures the State Machine Safety rule.
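The comparison reduces to a small function; here, too, log entries are assumed to be (term, command) pairs, which is an illustrative simplification:

def candidate_is_up_to_date(candidate_log, voter_log):
    """Voter grants its vote only if the candidate's log is at least as up-to-date."""
    def last_term(log):
        return log[-1][0] if log else 0

    if last_term(candidate_log) != last_term(voter_log):
        return last_term(candidate_log) > last_term(voter_log)
    return len(candidate_log) >= len(voter_log)

# A later last term beats a longer log with an older last term:
assert candidate_is_up_to_date([(1, "a"), (2, "b")], [(1, "a"), (1, "b"), (1, "c")])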
==== Follower crashes ====
If a follower crashes, AppendEntries and vote requests sent by other servers will fail. Such failures are handled by the servers trying indefinitely to reach the downed follower. If the follower restarts, the pending requests will complete. If the request has already been taken into account before the failure, the restarted follower will just ignore it.
==== Timing and availability ====
Timing is critical in Raft to elect and maintain a steady leader over time, in order to have a perfect availability of the cluster. Stability is ensured by respecting the timing requirement of the algorithm:
broadcastTime << electionTimeout << MTBF
broadcastTime is the average time it takes a server to send a request to every server in the cluster and receive responses. It is relative to the infrastructure used.
MTBF (Mean Time Between Failures) is the average time between failures for a server. It is also relative to the infrastructure.
electionTimeout is the same as described in the Leader Election section. It is something the programmer must choose.
Typical numbers for these values can be 0.5 ms to 20 ms for broadcastTime, which implies that the programmer sets the electionTimeout somewhere between 10 ms and 500 ms. It can take several weeks or months between single server failures, which means the values are sufficient for a stable cluster.
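Read as a sanity check on deployment parameters, the requirement can be sketched as follows; the factor of 10 standing in for "much less than" is an arbitrary choice for illustration, not part of Raft:

def timing_is_safe(broadcast_time, election_timeout, mtbf, factor=10):
    """Check broadcastTime << electionTimeout << MTBF, reading '<<' as 'factor times smaller'."""
    return (broadcast_time * factor <= election_timeout and
            election_timeout * factor <= mtbf)

# 20 ms broadcast time, 300 ms election timeout, two weeks between failures:
print(timing_is_safe(0.020, 0.300, 14 * 24 * 3600))  # True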
== Extensions ==
The dissertation “Consensus: Bridging Theory and Practice” by one of the co-authors of the original paper describes extensions to the original algorithm:
Pre-Vote: when a member rejoins the cluster, it can, depending on timing, trigger an election even though there is already a leader. To avoid this, pre-vote first checks in with the other members. Avoiding the unnecessary election improves the availability of the cluster, so this extension is usually present in production implementations.
Leadership transfer: a leader that is shutting down in an orderly fashion can explicitly transfer the leadership to another member. This can be faster than waiting for a timeout. Also, a leader can step down when another member would be a better leader, for example when that member is on a faster machine.
== Production use of Raft ==
CockroachDB uses Raft in the Replication Layer.
Etcd uses Raft to manage a highly-available replicated log
Hazelcast uses Raft to provide its CP Subsystem, a strongly consistent layer for distributed data structures.
MongoDB uses a variant of Raft in the replication set.
Neo4j uses Raft to ensure consistency and safety.
RabbitMQ uses Raft to implement durable, replicated FIFO queues.
ScyllaDB uses Raft for metadata (schema and topology changes)
Splunk Enterprise uses Raft in a Search Head Cluster (SHC)
TiDB uses Raft with the storage engine TiKV.
YugabyteDB uses Raft in the DocDB Replication
ClickHouse uses Raft for in-house implementation of ZooKeeper-like service
Redpanda uses the Raft consensus algorithm for data replication
Apache Kafka Raft (KRaft) uses Raft for metadata management.
NATS Messaging uses the Raft consensus algorithm for Jetstream cluster management and data replication
Camunda uses the Raft consensus algorithm for data replication
== References ==
== External links ==
Official website | Wikipedia/Raft_(computer_science) |
Replication in computing refers to maintaining multiple copies of data, processes, or resources to ensure consistency across redundant components. This fundamental technique spans databases, file systems, and distributed systems, serving to improve availability, fault-tolerance, accessibility, and performance. Through replication, systems can continue operating when components fail (failover), serve requests from geographically distributed locations, and balance load across multiple machines. The challenge lies in maintaining consistency between replicas while managing the fundamental tradeoffs between data consistency, system availability, and network partition tolerance – constraints known as the CAP theorem.
== Terminology ==
Replication in computing can refer to:
Data replication, where the same data is stored on multiple storage devices
Computation replication, where the same computing task is executed many times. Computational tasks may be:
Replicated in space, where tasks are executed on separate devices
Replicated in time, where tasks are executed repeatedly on a single device
Replication in space or in time is often linked to scheduling algorithms.
Access to a replicated entity should typically be indistinguishable from access to a single non-replicated entity. The replication itself should be transparent to an external user. In a failure scenario, a failover of replicas should be hidden as much as possible with respect to quality of service.
Computer scientists further describe replication as being either:
Active replication, which is performed by processing the same request at every replica
Passive replication, which involves processing every request on a single replica and transferring the result to the other replicas
When one leader replica is designated via leader election to process all the requests, the system is using a primary-backup or primary-replica scheme, which is predominant in high-availability clusters. In comparison, if any replica can process a request and distribute a new state, the system is using a multi-primary or multi-master scheme. In the latter case, some form of distributed concurrency control must be used, such as a distributed lock manager.
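A minimal sketch of the passive, primary-backup scheme just described, with illustrative class names; a real system would also handle failover, ordering of updates, and acknowledgements:

class Replica:
    def __init__(self):
        self.state = {}

class Primary(Replica):
    def __init__(self, backups):
        super().__init__()
        self.backups = backups

    def handle(self, key, value):
        # Passive replication: only the primary processes the request...
        self.state[key] = value
        # ...and then transfers the resulting state to every backup replica.
        for backup in self.backups:
            backup.state[key] = value
        return "ok"

backups = [Replica(), Replica()]
primary = Primary(backups)
primary.handle("x", 1)
assert all(b.state == {"x": 1} for b in backups)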
Load balancing differs from task replication, since it distributes a load of different computations across machines, and allows a single computation to be dropped in case of failure. Load balancing, however, sometimes uses data replication (especially multi-master replication) internally, to distribute its data among machines.
Backup differs from replication in that the saved copy of data remains unchanged for a long period of time. Replicas, on the other hand, undergo frequent updates and quickly lose any historical state. Replication is one of the oldest and most important topics in the overall area of distributed systems.
Data replication and computation replication both require processes to handle incoming events. Processes for data replication are passive and operate only to maintain the stored data, reply to read requests and apply updates. Computation replication is usually performed to provide fault-tolerance, and take over an operation if one component fails. In both cases, the underlying needs are to ensure that the replicas see the same events in equivalent orders, so that they stay in consistent states and any replica can respond to queries.
=== Replication models in distributed systems ===
Three widely cited models exist for data replication, each having its own properties and performance:
Transactional replication: used for replicating transactional data, such as a database. The one-copy serializability model is employed, which defines valid outcomes of a transaction on replicated data in accordance with the overall ACID (atomicity, consistency, isolation, durability) properties that transactional systems seek to guarantee.
State machine replication: assumes that the replicated process is a deterministic finite automaton and that atomic broadcast of every event is possible. It is based on distributed consensus and has a great deal in common with the transactional replication model. This is sometimes mistakenly used as a synonym of active replication. State machine replication is usually implemented by a replicated log consisting of multiple subsequent rounds of the Paxos algorithm. This was popularized by Google's Chubby system, and is the core behind the open-source Keyspace data store.
Virtual synchrony: involves a group of processes which cooperate to replicate in-memory data or to coordinate actions. The model defines a distributed entity called a process group. A process can join a group and is provided with a checkpoint containing the current state of the data replicated by group members. Processes can then send multicasts to the group and will see incoming multicasts in the identical order. Membership changes are handled as a special multicast that delivers a new "membership view" to the processes in the group.
== Database replication ==
Database replication involves maintaining copies of the same data on multiple machines, typically implemented through three main approaches: single-leader, multi-leader, and leaderless replication.
In single-leader (also called primary/replica) replication, one database instance is designated as the leader (primary), which handles all write operations. The leader logs these updates, which then propagate to replica nodes. Each replica acknowledges receipt of updates, enabling subsequent write operations. Replicas primarily serve read requests, though they may serve stale data due to replication lag – the delay in propagating changes from the leader.
In multi-master replication (also called multi-leader), updates can be submitted to any database node, which then propagate to other servers. This approach is particularly beneficial in multi-data center deployments, where it enables local write processing while masking inter-data center network latency. However, it introduces substantially increased costs and complexity which may make it impractical in some situations. The most common challenge that exists in multi-master replication is transactional conflict prevention or resolution when concurrent modifications occur on different leader nodes.
Most synchronous (or eager) replication solutions perform conflict prevention, while asynchronous (or lazy) solutions have to perform conflict resolution. For instance, if the same record is changed on two nodes simultaneously, an eager replication system would detect the conflict before confirming the commit and abort one of the transactions. A lazy replication system would allow both transactions to commit and run a conflict resolution during re-synchronization. Conflict resolution methods can include techniques like last-write-wins, application-specific logic, or merging concurrent updates.
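For illustration, a minimal last-write-wins resolver for a lazily replicated record, assuming each replica tags its version with a timestamp and a node id (names and tuple layout are illustrative):

def last_write_wins(versions):
    """Pick the surviving version among concurrently written replica copies.

    `versions` is a list of (timestamp, node_id, value) tuples; the latest
    timestamp wins, with node_id as an arbitrary but deterministic tie-breaker.
    """
    return max(versions, key=lambda v: (v[0], v[1]))[2]

# Two nodes wrote the same record concurrently; the later write survives.
print(last_write_wins([(1700000000.1, "node-a", "blue"),
                       (1700000000.7, "node-b", "green")]))  # -> "green"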
However, replication transparency cannot always be achieved. When data is replicated in a database, it is constrained by the CAP theorem or the PACELC theorem. In the NoSQL movement, data consistency is usually sacrificed in exchange for other more desired properties, such as availability (A), partition tolerance (P), etc. Various data consistency models have also been developed to serve as Service Level Agreements (SLAs) between service providers and the users.
There are several techniques for replicating data changes between nodes:
Statement-based replication: Write requests (such as SQL statements) are logged and transmitted to replicas for execution. This can be problematic with non-deterministic functions or statements having side effects.
Write-ahead log (WAL) shipping: The storage engine's low-level write-ahead log is replicated, ensuring identical data structures across nodes.
Logical (row-based) replication: Changes are described at the row level using a dedicated log format, providing greater flexibility and independence from storage engine internals.
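As a rough illustration of the logical (row-based) approach listed above, a change can be described as a small, storage-engine-independent record and replayed on each replica; the field names are hypothetical:

from dataclasses import dataclass

@dataclass
class RowChange:
    table: str
    key: int
    columns: dict   # new column values; None means delete the row

def apply_change(replica_tables, change):
    """Replay one logical change on a replica's in-memory tables."""
    table = replica_tables.setdefault(change.table, {})
    if change.columns is None:
        table.pop(change.key, None)
    else:
        table.setdefault(change.key, {}).update(change.columns)

replica = {}
apply_change(replica, RowChange("users", 42, {"name": "Ada", "active": True}))
apply_change(replica, RowChange("users", 42, {"active": False}))
assert replica["users"][42] == {"name": "Ada", "active": False}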
== Disk storage replication ==
Active (real-time) storage replication is usually implemented by distributing updates of a block device to several physical hard disks. This way, any file system supported by the operating system can be replicated without modification, as the file system code works on a level above the block device driver layer. It is implemented either in hardware (in a disk array controller) or in software (in a device driver).
The most basic method is disk mirroring, which is typical for locally connected disks. The storage industry narrows the definitions, so mirroring is a local (short-distance) operation. Replication is extendable across a computer network, so that the disks can be located in physically distant locations, and the primary/replica database replication model is usually applied. The purpose of replication is to prevent damage from failures or disasters that may occur in one location – or in case such events do occur, to improve the ability to recover data. For replication, latency is the key factor because it determines either how far apart the sites can be or the type of replication that can be employed.
The main characteristic of such cross-site replication is how write operations are handled, through either asynchronous or synchronous replication; synchronous replication needs to wait for the destination server's response in any write operation whereas asynchronous replication does not.
Synchronous replication guarantees "zero data loss" by the means of atomic write operations, where the write operation is not considered complete until acknowledged by both the local and remote storage. Most applications wait for a write transaction to complete before proceeding with further work, hence overall performance decreases considerably. Inherently, performance drops proportionally to distance, as minimum latency is dictated by the speed of light. For 10 km distance, the fastest possible roundtrip takes 67 μs, whereas an entire local cached write completes in about 10–20 μs.
In asynchronous replication, the write operation is considered complete as soon as local storage acknowledges it. Remote storage is updated with a small lag. Performance is greatly increased, but in case of a local storage failure, the remote storage is not guaranteed to have the current copy of data (the most recent data may be lost).
Semi-synchronous replication typically considers a write operation complete when acknowledged by local storage and received or logged by the remote server. The actual remote write is performed asynchronously, resulting in better performance, but the remote storage will lag behind the local storage, so there is no guarantee of durability (i.e., seamless transparency) in the case of local storage failure.
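The three write-handling modes can be contrasted in a toy sketch; the Storage class and the threading below merely stand in for real storage targets and network transfer, and are not drawn from any particular product:

import queue
import threading

class Storage:
    """Toy storage target that just records the blocks written to it."""
    def __init__(self):
        self.blocks = []
    def write(self, block):
        self.blocks.append(block)

def replicate_write(block, local, remote, mode="synchronous"):
    """Illustrative write path for the three replication modes described above."""
    local.write(block)                        # the local write always happens first
    if mode == "synchronous":
        remote.write(block)                   # wait for the remote write: zero data loss
    elif mode == "semi-synchronous":
        received = queue.Queue()
        def ship():
            received.put(block)               # block "received/logged" by the remote side
            remote.write(block)               # the actual remote write finishes later
        threading.Thread(target=ship).start()
        received.get()                        # complete once the remote has the block
    elif mode == "asynchronous":
        threading.Thread(target=remote.write, args=(block,)).start()  # fire and forget
    return "complete"

local, remote = Storage(), Storage()
replicate_write(b"data", local, remote, mode="asynchronous")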
Point-in-time replication produces periodic snapshots which are replicated instead of primary storage. This is intended to replicate only the changed data instead of the entire volume. As less information is replicated using this method, replication can occur over less-expensive bandwidth links such as iSCSI or T1 instead of fiberoptic lines.
=== Implementations ===
Many distributed filesystems use replication to ensure fault tolerance and avoid a single point of failure.
Many commercial synchronous replication systems do not freeze when the remote replica fails or loses connection (behaviour which would guarantee zero data loss), but proceed to operate locally, losing the desired zero recovery point objective.
Techniques of wide-area network (WAN) optimization can be applied to address the limits imposed by latency.
== File-based replication ==
File-based replication conducts data replication at the logical level (i.e., individual data files) rather than at the storage block level. There are many different ways of performing this, which almost exclusively rely on software.
=== Capture with a kernel driver ===
A kernel driver (specifically a filter driver) can be used to intercept calls to the filesystem functions, capturing any activity as it occurs. This uses the same type of technology that real-time active virus checkers employ. At this level, logical file operations are captured like file open, write, delete, etc. The kernel driver transmits these commands to another process, generally over a network to a different machine, which will mimic the operations of the source machine. Like block-level storage replication, the file-level replication allows both synchronous and asynchronous modes. In synchronous mode, write operations on the source machine are held and not allowed to occur until the destination machine has acknowledged the successful replication. Synchronous mode is less common with file replication products although a few solutions exist.
File-level replication solutions allow for informed decisions about replication based on the location and type of the file. For example, temporary files or parts of a filesystem that hold no business value could be excluded. The data transmitted can also be more granular; if an application writes 100 bytes, only the 100 bytes are transmitted instead of a complete disk block (generally 4,096 bytes). This substantially reduces the amount of data sent from the source machine and the storage burden on the destination machine.
Drawbacks of this software-only solution include the requirement for implementation and maintenance on the operating system level, and an increased burden on the machine's processing power.
==== File system journal replication ====
Similarly to database transaction logs, many file systems have the ability to journal their activity. The journal can be sent to another machine, either periodically or in real time by streaming. On the replica side, the journal can be used to play back file system modifications.
One of the notable implementations is Microsoft's System Center Data Protection Manager (DPM), released in 2005, which performs periodic updates but does not offer real-time replication.
=== Batch replication ===
This is the process of comparing the source and destination file systems and ensuring that the destination matches the source. The key benefit is that such solutions are generally free or inexpensive. The downside is that the process of synchronizing them is quite system-intensive, and consequently this process generally runs infrequently.
One of the notable implementations is rsync.
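A toy version of such a comparison pass, hashing files on both sides and copying only the ones that differ; this is only a sketch of the idea, not rsync's actual delta-transfer algorithm:

import hashlib
import shutil
from pathlib import Path

def file_digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def batch_replicate(source_dir, dest_dir):
    """Copy every file whose content is missing or differs on the destination."""
    source, dest = Path(source_dir), Path(dest_dir)
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dest / src_file.relative_to(source)
        if not dst_file.exists() or file_digest(src_file) != file_digest(dst_file):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)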
== Replication within file ==
In a paging operating system, pages in a paging file are sometimes replicated within a track to reduce rotational latency.
In IBM's VSAM, index data are sometimes replicated within a track to reduce rotational latency.
== Distributed shared memory replication ==
Another example of using replication appears in distributed shared memory systems, where many nodes of the system share the same page of memory. This usually means that each node has a separate copy (replica) of this page.
== Primary-backup and multi-primary replication ==
Many classical approaches to replication are based on a primary-backup model where one device or process has unilateral control over one or more other processes or devices. For example, the primary might perform some computation, streaming a log of updates to a backup (standby) process, which can then take over if the primary fails. This approach is common for replicating databases, despite the risk that if a portion of the log is lost during a failure, the backup might not be in a state identical to the primary, and transactions could then be lost.
A weakness of primary-backup schemes is that only one is actually performing operations. Fault-tolerance is gained, but the identical backup system doubles the costs. For this reason, starting c. 1985, the distributed systems research community began to explore alternative methods of replicating data. An outgrowth of this work was the emergence of schemes in which a group of replicas could cooperate, with each process acting as a backup while also handling a share of the workload.
Computer scientist Jim Gray analyzed multi-primary replication schemes under the transactional model and published a widely cited paper skeptical of the approach "The Dangers of Replication and a Solution". He argued that unless the data splits in some natural way so that the database can be treated as n disjoint sub-databases, concurrency control conflicts will result in seriously degraded performance and the group of replicas will probably slow as a function of n. Gray suggested that the most common approaches are likely to result in degradation that scales as O(n³). His solution, which is to partition the data, is only viable in situations where data actually has a natural partitioning key.
In the period 1985–1987, the virtual synchrony model was proposed and emerged as a widely adopted standard (it was used in the Isis Toolkit, Horus, Transis, Ensemble, Totem, Spread, C-Ensemble, Phoenix and Quicksilver systems, and is the basis for the CORBA fault-tolerant computing standard). Virtual synchrony permits a multi-primary approach in which a group of processes cooperates to parallelize some aspects of request processing. The scheme can only be used for some forms of in-memory data, but can provide linear speedups in the size of the group.
A number of modern products support similar schemes. For example, the Spread Toolkit supports this same virtual synchrony model and can be used to implement a multi-primary replication scheme; it would also be possible to use C-Ensemble or Quicksilver in this manner. WANdisco permits active replication where every node on a network is an exact copy or replica and hence every node on the network is active at one time; this scheme is optimized for use in a wide area network (WAN).
Modern multi-primary replication protocols optimize for the common failure-free operation. Chain replication is a popular family of such protocols. State-of-the-art protocol variants of chain replication offer high throughput and strong consistency by arranging replicas in a chain for writes. This approach enables local reads on all replica nodes but has high latency for writes that must traverse multiple nodes sequentially.
A more recent multi-primary protocol, Hermes, combines cache-coherent-inspired invalidations and logical timestamps to achieve strong consistency with local reads and high-performance writes from all replicas. During fault-free operation, its broadcast-based writes are non-conflicting and commit after just one multicast round-trip to replica nodes. This design results in high throughput and low latency for both reads and writes.
== See also ==
Change data capture
Fault-tolerant computer system
Log shipping
Multi-master replication
Optimistic replication
Shard (data)
State machine replication
Virtual synchrony
== References == | Wikipedia/Replication_(computer_science) |
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.
The integral to be estimated is often of the form
{\displaystyle \int _{C}f(z)e^{\lambda g(z)}\,dz,}
where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path integration C′ so that the following conditions hold:
C′ passes through one or more zeros of the derivative g′(z),
the imaginary part of g(z) is constant on C′.
The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property, see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
== Basic idea ==
The method of steepest descent is a method to approximate a complex integral of the form
{\displaystyle I(\lambda )=\int _{C}f(z)e^{\lambda g(z)}\,\mathrm {d} z}
for large λ → ∞, where f(z) and g(z) are analytic functions of z. Because the integrand is analytic, the contour C can be deformed into a new contour C′ without changing the integral. In particular, one seeks a new contour on which the imaginary part, denoted ℑ(⋅), of g(z) = ℜ[g(z)] + i ℑ[g(z)] is constant (ℜ(⋅) denotes the real part). Then
{\displaystyle I(\lambda )=e^{i\lambda \Im \{g(z)\}}\int _{C'}f(z)e^{\lambda \Re \{g(z)\}}\,\mathrm {d} z,}
and the remaining integral can be approximated with other methods like Laplace's method.
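For orientation, in the simplest one-dimensional case with a single simple saddle point z0 (g′(z0) = 0, g″(z0) ≠ 0), this reduction combined with Laplace's method gives the familiar leading-order estimate, with the branch of the square root fixed by the direction of steepest descent through z0:
{\displaystyle I(\lambda )\sim f(z_{0})\,e^{\lambda g(z_{0})}{\sqrt {\frac {2\pi }{-\lambda g''(z_{0})}}},\qquad \lambda \to \infty .}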
== Etymology ==
The method is called the method of steepest descent because, for analytic g(z), constant-phase contours are equivalent to steepest-descent contours.
If g(z) = X(z) + iY(z) is an analytic function of z = x + iy, it satisfies the Cauchy–Riemann equations
{\displaystyle {\frac {\partial X}{\partial x}}={\frac {\partial Y}{\partial y}}\qquad {\text{and}}\qquad {\frac {\partial X}{\partial y}}=-{\frac {\partial Y}{\partial x}}.}
Then
{\displaystyle {\frac {\partial X}{\partial x}}{\frac {\partial Y}{\partial x}}+{\frac {\partial X}{\partial y}}{\frac {\partial Y}{\partial y}}=\nabla X\cdot \nabla Y=0,}
so contours of constant phase are also contours of steepest descent.
== A simple estimate ==
Let f, S : Cn → C and C ⊂ Cn. If
{\displaystyle M=\sup _{x\in C}\Re (S(x))<\infty ,}
where ℜ(⋅) denotes the real part, and there exists a positive real number λ0 such that
{\displaystyle \int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx<\infty ,}
then the following estimate holds:
{\displaystyle \left|\int _{C}f(x)e^{\lambda S(x)}dx\right|\leqslant {\text{const}}\cdot e^{\lambda M},\qquad \forall \lambda \in \mathbb {R} ,\quad \lambda \geqslant \lambda _{0}.}
Proof of the simple estimate:
{\displaystyle {\begin{aligned}\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|&\leqslant \int _{C}|f(x)|\left|e^{\lambda S(x)}\right|dx\\&\equiv \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}e^{(\lambda -\lambda _{0})(S(x)-M)}\right|dx\\&\leqslant \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}\right|dx&&\left|e^{(\lambda -\lambda _{0})(S(x)-M)}\right|\leqslant 1\\&=\underbrace {e^{-\lambda _{0}M}\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx} _{\text{const}}\cdot e^{\lambda M}.\end{aligned}}}
== The case of a single non-degenerate saddle point ==
=== Basic notions and notation ===
Let x be a complex n-dimensional vector, and
{\displaystyle S''_{xx}(x)\equiv \left({\frac {\partial ^{2}S(x)}{\partial x_{i}\partial x_{j}}}\right),\qquad 1\leqslant i,\,j\leqslant n,}
denote the Hessian matrix for a function S(x). If
{\displaystyle {\boldsymbol {\varphi }}(x)=(\varphi _{1}(x),\varphi _{2}(x),\ldots ,\varphi _{k}(x))}
is a vector function, then its Jacobian matrix is defined as
{\displaystyle {\boldsymbol {\varphi }}_{x}'(x)\equiv \left({\frac {\partial \varphi _{i}(x)}{\partial x_{j}}}\right),\qquad 1\leqslant i\leqslant k,\quad 1\leqslant j\leqslant n.}
A non-degenerate saddle point, z0 ∈ Cn, of a holomorphic function S(z) is a critical point of the function (i.e., ∇S(z0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e.,
{\displaystyle \det S''_{zz}(z^{0})\neq 0}
).
The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point:
=== Complex Morse lemma ===
The Morse lemma for real-valued functions generalizes as follows for holomorphic functions: near a non-degenerate saddle point z0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) − S(z0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W ⊂ Cn, and let z0 in W be a non-degenerate saddle point of S, that is, ∇S(z0) = 0 and
{\displaystyle \det S''_{zz}(z^{0})\neq 0}
. Then there exist neighborhoods U ⊂ W of z0 and V ⊂ Cn of w = 0, and a bijective holomorphic function φ : V → U with φ(0) = z0 such that
{\displaystyle \forall w\in V:\qquad S({\boldsymbol {\varphi }}(w))=S(z^{0})+{\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2},\quad \det {\boldsymbol {\varphi }}_{w}'(0)=1,}
Here, the μj are the eigenvalues of the matrix
{\displaystyle S_{zz}''(z^{0})}
.
=== The asymptotic expansion in the case of a single non-degenerate saddle point ===
Assume
f (z) and S(z) are holomorphic functions in an open, bounded, and simply connected set Ωx ⊂ Cn such that the Ix = Ωx ∩ Rn is connected;
{\displaystyle \Re (S(z))}
has a single maximum:
{\displaystyle \max _{z\in I_{x}}\Re (S(z))=\Re (S(x^{0}))}
for exactly one point x0 ∈ Ix;
x0 is a non-degenerate saddle point (i.e., ∇S(x0) = 0 and
{\displaystyle \det S''_{xx}(x^{0})\neq 0}
).
Then, the following asymptotic holds (equation (8)):
{\displaystyle \int _{I_{x}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right)\prod _{j=1}^{n}(-\mu _{j})^{-{\frac {1}{2}}},\qquad \lambda \to \infty ,}
where μj are eigenvalues of the Hessian
{\displaystyle S''_{xx}(x^{0})}
and
{\displaystyle (-\mu _{j})^{-{\frac {1}{2}}}}
are defined with arguments
{\displaystyle \left|\arg {\sqrt {-\mu _{j}}}\right|<{\tfrac {\pi }{4}}.}
This statement is a special case of more general results presented in Fedoryuk (1987).
Equation (8) can also be written as
{\displaystyle \int _{I_{x}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right)\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}},}
where the branch of
{\displaystyle {\sqrt {\det \left(-S_{xx}''(x^{0})\right)}}}
is selected as follows
{\displaystyle {\begin{aligned}\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}}&=\exp \left(-i{\text{ Ind}}\left(-S_{xx}''(x^{0})\right)\right)\prod _{j=1}^{n}\left|\mu _{j}\right|^{-{\frac {1}{2}}},\\{\text{Ind}}\left(-S_{xx}''(x^{0})\right)&={\tfrac {1}{2}}\sum _{j=1}^{n}\arg(-\mu _{j}),&&|\arg(-\mu _{j})|<{\tfrac {\pi }{2}}.\end{aligned}}}
Consider important special cases:
If S(x) is real valued for real x and x0 in Rn (aka, the multidimensional Laplace method), then
{\displaystyle {\text{Ind}}\left(-S_{xx}''(x^{0})\right)=0.}
If S(x) is purely imaginary for real x (i.e.,
{\displaystyle \Re (S(x))=0}
for all x in Rn) and x0 in Rn (aka, the multidimensional stationary phase method), then
{\displaystyle {\text{Ind}}\left(-S_{xx}''(x^{0})\right)={\frac {\pi }{4}}{\text{sign }}S_{xx}''(x_{0}),}
where
{\displaystyle {\text{sign }}S_{xx}''(x_{0})}
denotes the signature of matrix
{\displaystyle S_{xx}''(x_{0})}
, which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), Ind is related to the Maslov index; see, e.g., Chaichian & Demichev (2001) and Schulman (2005).
== The case of multiple non-degenerate saddle points ==
If the function S(x) has multiple isolated non-degenerate saddle points, i.e.,
{\displaystyle \nabla S\left(x^{(k)}\right)=0,\quad \det S''_{xx}\left(x^{(k)}\right)\neq 0,\quad x^{(k)}\in \Omega _{x}^{(k)},}
where
{\displaystyle \left\{\Omega _{x}^{(k)}\right\}_{k=1}^{K}}
is an open cover of Ωx, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions ρk(x) : Ωx → [0, 1], 1 ≤ k ≤ K, such that
{\displaystyle {\begin{aligned}\sum _{k=1}^{K}\rho _{k}(x)&=1,&&\forall x\in \Omega _{x},\\\rho _{k}(x)&=0&&\forall x\in \Omega _{x}\setminus \Omega _{x}^{(k)}.\end{aligned}}}
Whence,
{\displaystyle \int _{I_{x}\subset \Omega _{x}}f(x)e^{\lambda S(x)}dx\equiv \sum _{k=1}^{K}\int _{I_{x}\subset \Omega _{x}}\rho _{k}(x)f(x)e^{\lambda S(x)}dx.}
Therefore as λ → ∞ we have:
{\displaystyle \sum _{k=1}^{K}\int _{{\text{a neighborhood of }}x^{(k)}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}\sum _{k=1}^{K}e^{\lambda S\left(x^{(k)}\right)}\left(\det \left(-S_{xx}''\left(x^{(k)}\right)\right)\right)^{-{\frac {1}{2}}}f\left(x^{(k)}\right),}
where equation (13) was utilized at the last stage, and the pre-exponential function f (x) at least must be continuous.
== The other cases ==
When ∇S(z0) = 0 and
{\displaystyle \det S''_{zz}(z^{0})=0}
, the point z0 ∈ Cn is called a degenerate saddle point of a function S(z).
Calculating the asymptotic of
{\displaystyle \int f(x)e^{\lambda S(x)}dx,}
when λ → ∞, f (x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function S(z) into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987).
Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics.
Other cases, such as when f (x) and/or S(x) are discontinuous or when an extremum of S(x) lies at the integration region's boundary, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)).
== Extensions and generalizations ==
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Another extension is the Method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic expansions.
== See also ==
Pearcey integral
Stationary phase approximation
Laplace's method
== Notes ==
== References ==
Chaichian, M.; Demichev, A. (2001), Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics, Taylor & Francis, p. 174, ISBN 075030801X
Debye, P. (1909), "Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index", Mathematische Annalen, 67 (4): 535–558, doi:10.1007/BF01450097, S2CID 122219667. English translation in Debye, Peter J. W. (1954), The collected papers of Peter J. W. Debye, Interscience Publishers, Inc., New York, ISBN 978-0-918024-58-9, MR 0063975
Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation", Ann. of Math., vol. 137, no. 2, The Annals of Mathematics, Vol. 137, No. 2, pp. 295–368, arXiv:math/9201261, doi:10.2307/2946540, JSTOR 2946540, S2CID 12699956.
Erdelyi, A. (1956), Asymptotic Expansions, Dover.
Fedoryuk, M. V. (2001) [1994], "Saddle point method", Encyclopedia of Mathematics, EMS Press.
Fedoryuk, M. V. (1987), Asymptotic: Integrals and Series, Nauka, Moscow [in Russian].
Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, vol. 154, Princeton University Press.
Riemann, B. (1863), Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita (Unpublished note, reproduced in Riemann's collected papers.)
Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik, 2: 45–80 Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
Translated in Barkan, Eric; Sklar, David (2018), "On Riemanns Nachlass for Analytic Number Theory: A translation of Siegel's Uber", arXiv:1810.05198 [math.HO].
Poston, T.; Stewart, I. (1978), Catastrophe Theory and Its Applications, Pitman.
Schulman, L. S. (2005), "Ch. 17: The Phase of the Semiclassical Amplitude", Techniques and Applications of Path Integration, Dover, ISBN 0486445283
Wong, R. (1989), Asymptotic approximations of integrals, Academic Press. | Wikipedia/Saddle_point_approximation |
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive.
After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. This function is defined from the recurrence relation
{\displaystyle \operatorname {A} (m+1,n+1)=\operatorname {A} (m,\operatorname {A} (m+1,n))}
with appropriate base cases. Its value grows very rapidly; for example,
{\displaystyle \operatorname {A} (4,2)}
results in
{\displaystyle 2^{65536}-3}
, an integer with 19,729 decimal digits.
== History ==
In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function
{\displaystyle \varphi }
(from Greek, the letter phi). Ackermann's three-argument function,
{\displaystyle \varphi (m,n,p)}
, is defined such that for
{\displaystyle p=0,1,2}
, it reproduces the basic operations of addition, multiplication, and exponentiation as
{\displaystyle {\begin{aligned}\varphi (m,n,0)&=m+n\\\varphi (m,n,1)&=m\times n\\\varphi (m,n,2)&=m^{n}\end{aligned}}}
and for
{\displaystyle p>2}
it extends these basic operations in a way that can be compared to the hyperoperations:
{\displaystyle {\begin{aligned}\varphi (m,n,3)&=m[4](n+1)\\\varphi (m,n,p)&\gtrapprox m[p+1](n+1)&&{\text{for }}p>3\end{aligned}}}
(Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose—such as Goodstein's hyperoperation sequence.)
In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers.
Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors.
The generalized hyperoperation sequence, e.g.
{\displaystyle G(m,a,b)=a[m]b}
, is a version of the Ackermann function as well.
In 1963 R.C. Buck based an intuitive two-variable variant
{\displaystyle \operatorname {F} }
on the hyperoperation sequence:
{\displaystyle \operatorname {F} (m,n)=2[m]n.}
Compared to most other versions, Buck's function has no unessential offsets:
{\displaystyle {\begin{aligned}\operatorname {F} (0,n)&=2[0]n=n+1\\\operatorname {F} (1,n)&=2[1]n=2+n\\\operatorname {F} (2,n)&=2[2]n=2\times n\\\operatorname {F} (3,n)&=2[3]n=2^{n}\\\operatorname {F} (4,n)&=2[4]n=2^{2^{2^{{}^{.^{.^{{}_{.}2}}}}}}\\&\quad \vdots \end{aligned}}}
Many other versions of Ackermann function have been investigated.
== Definition ==
=== Definition: as m-ary function ===
Ackermann's original three-argument function
{\displaystyle \varphi (m,n,p)}
is defined recursively as follows for nonnegative integers
{\displaystyle m,n,}
and
{\displaystyle p}
:
{\displaystyle {\begin{aligned}\varphi (m,n,0)&=m+n\\\varphi (m,0,1)&=0\\\varphi (m,0,2)&=1\\\varphi (m,0,p)&=m&&{\text{for }}p>2\\\varphi (m,n,p)&=\varphi (m,\varphi (m,n-1,p),p-1)&&{\text{for }}n,p>0\end{aligned}}}
Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers
{\displaystyle m}
and
{\displaystyle n}
as follows:
{\displaystyle {\begin{array}{lcl}\operatorname {A} (0,n)&=&n+1\\\operatorname {A} (m+1,0)&=&\operatorname {A} (m,1)\\\operatorname {A} (m+1,n+1)&=&\operatorname {A} (m,\operatorname {A} (m+1,n))\end{array}}}
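A direct transcription of this recurrence into Python (a sketch only; the name ack is arbitrary) is short, although it becomes impractical for m ≥ 4 because of the explosive growth and recursion depth:
def ack(m: int, n: int) -> int:
    """Two-argument Ackermann-Peter function, straight from the recurrence."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Small sanity checks: A(1, 2) = 4, A(2, 1) = 5, A(3, 3) = 61.
assert ack(1, 2) == 4 and ack(2, 1) == 5 and ack(3, 3) == 61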
The Ackermann function has also been expressed in relation to the hyperoperation sequence:
{\displaystyle A(m,n)={\begin{cases}n+1&m=0\\2[m](n+3)-3&m>0\\\end{cases}}}
or, written in Knuth's up-arrow notation (extended to integer indices
{\displaystyle \geq -2}
):
{\displaystyle A(m,n)={\begin{cases}n+1&m=0\\2\uparrow ^{m-2}(n+3)-3&m>0\\\end{cases}}}
or, equivalently, in terms of Buck's function F:
{\displaystyle A(m,n)={\begin{cases}n+1&m=0\\F(m,n+3)-3&m>0\\\end{cases}}}
=== Definition: as iterated 1-ary function ===
Define
{\displaystyle f^{n}}
as the n-th iterate of
{\displaystyle f}
:
{\displaystyle {\begin{array}{rll}f^{0}(x)&=&x\\f^{n+1}(x)&=&f(f^{n}(x))\end{array}}}
Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so
{\displaystyle f(f^{n}(x))=f^{n}(f(x))}
.
Conceiving the Ackermann function as a sequence of unary functions, one can set
{\displaystyle \operatorname {A} _{m}(n)=\operatorname {A} (m,n)}
.
The function then becomes a sequence
{\displaystyle \operatorname {A} _{0},\operatorname {A} _{1},\operatorname {A} _{2},...}
of unary functions, defined from iteration:
{\displaystyle {\begin{array}{lcl}\operatorname {A} _{0}(n)&=&n+1\\\operatorname {A} _{m+1}(n)&=&\operatorname {A} _{m}^{n+1}(1)\\\end{array}}}
== Computation ==
The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS).
=== TRS, based on 2-ary function ===
The definition of the 2-ary Ackermann function leads to the obvious reduction rules
{\displaystyle {\begin{array}{lll}{\text{(r1)}}&A(0,n)&\rightarrow &S(n)\\{\text{(r2)}}&A(S(m),0)&\rightarrow &A(m,S(0))\\{\text{(r3)}}&A(S(m),S(n))&\rightarrow &A(m,A(S(m),n))\end{array}}}
Example
Compute
{\displaystyle A(1,2)\rightarrow _{*}4}
The reduction sequence is
A(1,2) →r3 A(0,A(1,1)) →r3 A(0,A(0,A(1,0))) →r2 A(0,A(0,A(0,1))) →r1 A(0,A(0,2)) →r1 A(0,3) →r1 4
To compute
{\displaystyle \operatorname {A} (m,n)}
one can use a stack, which initially contains the elements
{\displaystyle \langle m,n\rangle }
.
Then repeatedly the two top elements are replaced according to the rules
{\displaystyle {\begin{array}{lllllllll}{\text{(r1)}}&0&,&n&\rightarrow &(n+1)\\{\text{(r2)}}&(m+1)&,&0&\rightarrow &m&,&1\\{\text{(r3)}}&(m+1)&,&(n+1)&\rightarrow &m&,&(m+1)&,&n\end{array}}}
Schematically, starting from
{\displaystyle \langle m,n\rangle }
:
WHILE stackLength <> 1
{
POP 2 elements;
PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3
}
The pseudocode is published in Grossman & Zeitman (1988).
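A Python sketch of the same stack discipline (illustrative, not the published pseudocode itself): the top one or two entries are repeatedly replaced according to rules r1, r2 and r3 until a single number remains.
def ack_stack(m: int, n: int) -> int:
    """Iterative Ackermann computation using an explicit stack and rules r1-r3."""
    stack = [m, n]                      # initial contents <m, n>, with n on top
    while len(stack) > 1:
        n = stack.pop()                 # second argument (top of stack)
        m = stack.pop()                 # first argument
        if m == 0:                      # (r1)  A(0, n) = n + 1
            stack.append(n + 1)
        elif n == 0:                    # (r2)  A(m, 0) = A(m - 1, 1)
            stack += [m - 1, 1]
        else:                           # (r3)  A(m, n) = A(m - 1, A(m, n - 1))
            stack += [m - 1, m, n - 1]
    return stack[0]

print(ack_stack(2, 1))                  # prints 5, matching the input <2, 1> discussed below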
For example, on input
{\displaystyle \langle 2,1\rangle }
,
Remarks
The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code.
For all
{\displaystyle m,n}
the computation of
{\displaystyle A(m,n)}
takes no more than
{\displaystyle (A(m,n)+1)^{m}}
steps.
Grossman & Zeitman (1988) pointed out that in the computation of
{\displaystyle \operatorname {A} (m,n)}
the maximum length of the stack is
{\displaystyle \operatorname {A} (m,n)}
, as long as
{\displaystyle m>0}
. Their own algorithm, inherently iterative, computes
{\displaystyle \operatorname {A} (m,n)}
within
{\displaystyle {\mathcal {O}}(m\operatorname {A} (m,n))}
time and within
{\displaystyle {\mathcal {O}}(m)}
space.
=== TRS, based on iterated 1-ary function ===
The definition of the iterated 1-ary Ackermann functions leads to different reduction rules
{\displaystyle {\begin{array}{lll}{\text{(r4)}}&A(S(0),0,n)&\rightarrow &S(n)\\{\text{(r5)}}&A(S(0),S(m),n)&\rightarrow &A(S(n),m,S(0))\\{\text{(r6)}}&A(S(S(x)),m,n)&\rightarrow &A(S(0),m,A(S(x),m,n))\end{array}}}
As function composition is associative, instead of rule r6 one can define
{\displaystyle {\begin{array}{lll}{\text{(r7)}}&A(S(S(x)),m,n)&\rightarrow &A(S(x),m,A(S(0),m,n))\end{array}}}
Like in the previous section the computation of
{\displaystyle \operatorname {A} _{m}^{1}(n)}
can be implemented with a stack.
Initially the stack contains the three elements
{\displaystyle \langle 1,m,n\rangle }
.
Then repeatedly the three top elements are replaced according to the rules
{\displaystyle {\begin{array}{lllllllll}{\text{(r4)}}&1&,0&,n&\rightarrow &(n+1)\\{\text{(r5)}}&1&,(m+1)&,n&\rightarrow &(n+1)&,m&,1\\{\text{(r6)}}&(x+2)&,m&,n&\rightarrow &1&,m&,(x+1)&,m&,n\\\end{array}}}
Schematically, starting from
{\displaystyle \langle 1,m,n\rangle }
:
WHILE stackLength <> 1
{
POP 3 elements;
PUSH 1 or 3 or 5 elements, applying the rules r4, r5, r6;
}
Example
On input
{\displaystyle \langle 1,2,1\rangle }
the successive stack configurations are
{\displaystyle {\begin{aligned}&{\underline {1,2,1}}\rightarrow _{r5}{\underline {2,1,1}}\rightarrow _{r6}1,1,{\underline {1,1,1}}\rightarrow _{r5}1,1,{\underline {2,0,1}}\rightarrow _{r6}1,1,1,0,{\underline {1,0,1}}\\&\rightarrow _{r4}1,1,{\underline {1,0,2}}\rightarrow _{r4}{\underline {1,1,3}}\rightarrow _{r5}{\underline {4,0,1}}\rightarrow _{r6}1,0,{\underline {3,0,1}}\rightarrow _{r6}1,0,1,0,{\underline {2,0,1}}\\&\rightarrow _{r6}1,0,1,0,1,0,{\underline {1,0,1}}\rightarrow _{r4}1,0,1,0,{\underline {1,0,2}}\rightarrow _{r4}1,0,{\underline {1,0,3}}\rightarrow _{r4}{\underline {1,0,4}}\rightarrow _{r4}5\end{aligned}}}
The corresponding equalities are
{\displaystyle {\begin{aligned}&A_{2}(1)=A_{1}^{2}(1)=A_{1}(A_{1}(1))=A_{1}(A_{0}^{2}(1))=A_{1}(A_{0}(A_{0}(1)))\\&=A_{1}(A_{0}(2))=A_{1}(3)=A_{0}^{4}(1)=A_{0}(A_{0}^{3}(1))=A_{0}(A_{0}(A_{0}^{2}(1)))\\&=A_{0}(A_{0}(A_{0}(A_{0}(1))))=A_{0}(A_{0}(A_{0}(2)))=A_{0}(A_{0}(3))=A_{0}(4)=5\end{aligned}}}
When reduction rule r7 is used instead of rule r6, the replacements in the stack will follow
{\displaystyle {\begin{array}{lllllllll}{\text{(r7)}}&(x+2)&,m&,n&\rightarrow &(x+1)&,m&,1&,m&,n\end{array}}}
The successive stack configurations will then be
{\displaystyle {\begin{aligned}&{\underline {1,2,1}}\rightarrow _{r5}{\underline {2,1,1}}\rightarrow _{r7}1,1,{\underline {1,1,1}}\rightarrow _{r5}1,1,{\underline {2,0,1}}\rightarrow _{r7}1,1,1,0,{\underline {1,0,1}}\\&\rightarrow _{r4}1,1,{\underline {1,0,2}}\rightarrow _{r4}{\underline {1,1,3}}\rightarrow _{r5}{\underline {4,0,1}}\rightarrow _{r7}3,0,{\underline {1,0,1}}\rightarrow _{r4}{\underline {3,0,2}}\\&\rightarrow _{r7}2,0,{\underline {1,0,2}}\rightarrow _{r4}{\underline {2,0,3}}\rightarrow _{r7}1,0,{\underline {1,0,3}}\rightarrow _{r4}{\underline {1,0,4}}\rightarrow _{r4}5\end{aligned}}}
The corresponding equalities are
{\displaystyle {\begin{aligned}&A_{2}(1)=A_{1}^{2}(1)=A_{1}(A_{1}(1))=A_{1}(A_{0}^{2}(1))=A_{1}(A_{0}(A_{0}(1)))\\&=A_{1}(A_{0}(2))=A_{1}(3)=A_{0}^{4}(1)=A_{0}^{3}(A_{0}(1))=A_{0}^{3}(2)\\&=A_{0}^{2}(A_{0}(2))=A_{0}^{2}(3)=A_{0}(A_{0}(3))=A_{0}(4)=5\end{aligned}}}
Remarks
On any given input the TRSs presented so far converge in the same number of steps. They also use the same reduction rules (in this comparison the rules r1, r2, r3 are considered "the same as" the rules r4, r5, r6/r7 respectively). For example, the reduction of
{\displaystyle A(2,1)}
converges in 14 steps: 6 × r1, 3 × r2, 5 × r3. The reduction of
{\displaystyle A_{2}(1)}
converges in the same 14 steps: 6 × r4, 3 × r5, 5 × r6/r7. The TRSs differ in the order in which the reduction rules are applied.
When
{\displaystyle A_{i}(n)}
is computed following the rules {r4, r5, r6}, the maximum length of the stack stays below
{\displaystyle 2\times A(i,n)}
. When reduction rule r7 is used instead of rule r6, the maximum length of the stack is only
{\displaystyle 2(i+2)}
. The length of the stack reflects the recursion depth. As the reduction according to the rules {r4, r5, r7} involves a smaller maximum depth of recursion, this computation is more efficient in that respect.
=== TRS, based on hyperoperators ===
As Sundblad (1971) — or Porto & Matos (1980) — showed explicitly, the Ackermann function can be expressed in terms of the hyperoperation sequence:
{\displaystyle A(m,n)={\begin{cases}n+1&m=0\\2[m](n+3)-3&m>0\\\end{cases}}}
or, after removal of the constant 2 from the parameter list, in terms of Buck's function
{\displaystyle A(m,n)={\begin{cases}n+1&m=0\\F(m,n+3)-3&m>0\\\end{cases}}}
Buck's function
{\displaystyle \operatorname {F} (m,n)=2[m]n}
, a variant of Ackermann function by itself, can be computed with the following reduction rules:
{\displaystyle {\begin{array}{lll}{\text{(b1)}}&F(S(0),0,n)&\rightarrow &S(n)\\{\text{(b2)}}&F(S(0),S(0),0)&\rightarrow &S(S(0))\\{\text{(b3)}}&F(S(0),S(S(0)),0)&\rightarrow &0\\{\text{(b4)}}&F(S(0),S(S(S(m))),0)&\rightarrow &S(0)\\{\text{(b5)}}&F(S(0),S(m),S(n))&\rightarrow &F(S(n),m,F(S(0),S(m),0))\\{\text{(b6)}}&F(S(S(x)),m,n)&\rightarrow &F(S(0),m,F(S(x),m,n))\end{array}}}
Instead of rule b6 one can define the rule
{\displaystyle {\begin{array}{lll}{\text{(b7)}}&F(S(S(x)),m,n)&\rightarrow &F(S(x),m,F(S(0),m,n))\end{array}}}
To compute the Ackermann function it suffices to add three reduction rules
{\displaystyle {\begin{array}{lll}{\text{(r8)}}&A(0,n)&\rightarrow &S(n)\\{\text{(r9)}}&A(S(m),n)&\rightarrow &P(F(S(0),S(m),S(S(S(n)))))\\{\text{(r10)}}&P(S(S(S(m))))&\rightarrow &m\\\end{array}}}
These rules take care of the base case A(0,n), the alignment (n+3) and the fudge (-3).
Example
Compute
{\displaystyle A(2,1)\rightarrow _{*}5}
The matching equalities are
when the TRS with the reduction rule
{\displaystyle {\text{b6}}}
is applied:
{\displaystyle {\begin{aligned}&A(2,1)+3=F(2,4)=\dots =F^{6}(0,2)=F(0,F^{5}(0,2))=F(0,F(0,F^{4}(0,2)))\\&=F(0,F(0,F(0,F^{3}(0,2))))=F(0,F(0,F(0,F(0,F^{2}(0,2)))))=F(0,F(0,F(0,F(0,F(0,F(0,2))))))\\&=F(0,F(0,F(0,F(0,F(0,3)))))=F(0,F(0,F(0,F(0,4))))=F(0,F(0,F(0,5)))=F(0,F(0,6))=F(0,7)=8\end{aligned}}}
when the TRS with the reduction rule
{\displaystyle {\text{b7}}}
is applied:
{\displaystyle {\begin{aligned}&A(2,1)+3=F(2,4)=\dots =F^{6}(0,2)=F^{5}(0,F(0,2))=F^{5}(0,3)=F^{4}(0,F(0,3))=F^{4}(0,4)\\&=F^{3}(0,F(0,4))=F^{3}(0,5)=F^{2}(0,F(0,5))=F^{2}(0,6)=F(0,F(0,6))=F(0,7)=8\end{aligned}}}
Remarks
The computation of
{\displaystyle \operatorname {A} _{i}(n)}
according to the rules {b1 - b5, b6, r8 - r10} is deeply recursive. The maximum depth of nested
s is
A
(
i
,
n
)
+
1
{\displaystyle A(i,n)+1}
. The culprit is the order in which iteration is executed:
{\displaystyle F^{n+1}(x)=F(F^{n}(x))}
. The first
{\displaystyle F}
disappears only after the whole sequence is unfolded.
The computation according to the rules {b1 - b5, b7, r8 - r10} is more efficient in that respect. The iteration
{\displaystyle F^{n+1}(x)=F^{n}(F(x))}
simulates the repeated loop over a block of code. The nesting is limited to
{\displaystyle (i+1)}
, one recursion level per iterated function. Meyer & Ritchie (1967) showed this correspondence.
These considerations concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules b6 and b7 are considered "the same"). The reduction of
{\displaystyle A(2,1)}
for instance converges in 35 steps: 12 × b1, 4 × b2, 1 × b3, 4 × b5, 12 × b6/b7, 1 × r9, 1 × r10. The modus iterandi only affects the order in which the reduction rules are applied.
A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance Ward (1993). Grossman & Zeitman (1988) published a cunning algorithm which computes
{\displaystyle A(i,n)}
within
{\displaystyle {\mathcal {O}}(iA(i,n))}
time and within
{\displaystyle {\mathcal {O}}(i)}
space.
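As a concrete illustration of the memoization remark above (a sketch only, not the Grossman & Zeitman algorithm), Python's functools.lru_cache caches each subresult so it is computed at most once; the recursion limit is raised because the call chain is still deep the first time through.
from functools import lru_cache
import sys

sys.setrecursionlimit(20_000)           # the naive recursion is deep even with caching

@lru_cache(maxsize=None)
def ack(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 8))                        # 2045; cached subresults avoid recomputing A(2, k)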
=== Huge numbers ===
To demonstrate how the computation of
{\displaystyle A(4,3)}
results in many steps and in a large number:
{\displaystyle {\begin{aligned}A(4,3)&\rightarrow A(3,A(4,2))\\&\rightarrow A(3,A(3,A(4,1)))\\&\rightarrow A(3,A(3,A(3,A(4,0))))\\&\rightarrow A(3,A(3,A(3,A(3,1))))\\&\rightarrow A(3,A(3,A(3,A(2,A(3,0)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(2,1)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(2,0))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(1,1))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,A(1,0)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,A(0,1)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,2))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,3)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(1,2))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(1,1)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,A(1,0))))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,A(0,1))))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,2)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,3))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,4)))))\\&\rightarrow A(3,A(3,A(3,A(2,5))))\\&\qquad \vdots \\&\rightarrow A(3,A(3,A(3,13)))\\&\qquad \vdots \\&\rightarrow A(3,A(3,65533))\\&\qquad \vdots \\&\rightarrow A(3,2^{65536}-3)\\&\qquad \vdots \\&\rightarrow 2^{2^{65536}}-3.\\\end{aligned}}}
== Table of values ==
Computing the Ackermann function can be restated in terms of an infinite table. First, place the natural numbers along the top row. To determine a number in the table, take the number immediately to the left. Then use that number to look up the required number in the column given by that number and one row up. If there is no number to its left, simply look at the column headed "1" in the previous row. Here is a small upper-left portion of the table:
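A small sketch that regenerates the upper-left corner of such a table by direct evaluation (the row for m = 3 already follows the pattern 2^(n+3) − 3):
def ack(m, n):
    # two-argument Ackermann-Peter function, naive recursion (small arguments only)
    return n + 1 if m == 0 else ack(m - 1, 1) if n == 0 else ack(m - 1, ack(m, n - 1))

for m in range(4):
    print([ack(m, n) for n in range(5)])
# [1, 2, 3, 4, 5]
# [2, 3, 4, 5, 6]
# [3, 5, 7, 9, 11]
# [5, 13, 29, 61, 125]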
The numbers here which are only expressed with recursive exponentiation or Knuth arrows are very large and would take up too much space to notate in plain decimal digits.
Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively.
This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly:
== Properties ==
=== General remarks ===
It may not be immediately obvious that the evaluation of
{\displaystyle A(m,n)}
always terminates. However, the recursion is bounded because in each recursive application either m decreases, or m remains the same and n decreases. Each time that n reaches zero, m decreases, so m eventually reaches zero as well. (Expressed more technically, in each case the pair (m, n) decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when m decreases there is no upper bound on how much n can increase — and it will often increase greatly.
For small values of m like 1, 2, or 3, the Ackermann function grows relatively slowly with respect to n (at most exponentially). For
{\displaystyle m\geq 4}
, however, it grows much more quickly; even
{\displaystyle A(4,2)}
is about 2.00353×10^19728, and the decimal expansion of
{\displaystyle A(4,3)}
is very large by any typical measure, about 2.12004×10^(6.03123×10^19727).
An interesting aspect is that the only arithmetic operation it ever uses is addition of 1. Its fast growing power is based solely on nested recursion. This also implies that its running time is at least proportional to its output, and so is also extremely huge. In actuality, for most cases the running time is far larger than the output; see above.
A single-argument version
{\displaystyle f(n)=A(n,n)}
that increases both
{\displaystyle m}
and
{\displaystyle n}
at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that
{\displaystyle f(n)}
is roughly comparable to
{\displaystyle f_{\omega }(n)}
in the fast-growing hierarchy. This extreme growth can be exploited to show that
{\displaystyle f}
which is obviously computable on a machine with infinite memory such as a Turing machine and so is a computable function, grows faster than any primitive recursive function and is therefore not primitive recursive.
=== Not primitive recursive ===
The Ackermann function grows faster than any primitive recursive function and therefore is not itself primitive recursive. Proof sketch: a primitive recursive function defined using up to k recursions must grow slower than
{\displaystyle f_{k+1}(n)}
, the (k+1)-th function in the fast-growing hierarchy, but the Ackermann function grows at least as fast as
{\displaystyle f_{\omega }(n)}
.
Specifically, one shows that, for every primitive recursive function
{\displaystyle f(x_{1},\ldots ,x_{n})}
, there exists a non-negative integer
{\displaystyle t}
, such that for all non-negative integers
{\displaystyle x_{1},\ldots ,x_{n}}
,
{\displaystyle f(x_{1},\ldots ,x_{n})<A(t,\max _{i}x_{i}).}
Once this is established, it follows that
{\displaystyle A}
itself is not primitive recursive, since otherwise putting
{\displaystyle x_{1}=x_{2}=t}
would lead to the contradiction
{\displaystyle A(t,t)<A(t,t).}
The proof proceeds as follows: define the class
{\displaystyle {\mathcal {A}}}
of all functions that grow slower than the Ackermann function
{\displaystyle {\mathcal {A}}=\left\{f\,{\bigg |}\,\exists t\ \forall x_{1}\cdots \forall x_{n}:\ f(x_{1},\ldots ,x_{n})<A(t,\max _{i}x_{i})\right\}}
and show that
{\displaystyle {\mathcal {A}}}
contains all primitive recursive functions. The latter is achieved by showing that
{\displaystyle {\mathcal {A}}}
contains the constant functions, the successor function, the projection functions and that it is closed under the operations of function composition and primitive recursion.
== Inverse ==
Since the function f(n) = A(n, n) considered above grows very rapidly, its inverse function, f−1, grows very slowly. This inverse Ackermann function f−1 is usually denoted by α. In fact, α(n) is less than 5 for any practical input size n, since A(4, 4) is on the order of
{\displaystyle 2^{2^{2^{2^{16}}}}}
.
This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees. Sometimes Ackermann's original function or other variations are used in these settings, but they all grow at similarly high rates. In particular, some modified functions simplify the expression by eliminating the −3 and similar terms.
A two-parameter variation of the inverse Ackermann function can be defined as follows, where
{\displaystyle \lfloor x\rfloor }
is the floor function:
{\displaystyle \alpha (m,n)=\min\{i\geq 1:A(i,\lfloor m/n\rfloor )\geq \log _{2}n\}.}
This function arises in more precise analyses of the algorithms mentioned above, and gives a more refined time bound. In the disjoint-set data structure, m represents the number of operations while n represents the number of elements; in the minimum spanning tree algorithm, m represents the number of edges while n represents the number of vertices. Several slightly different definitions of α(m, n) exist; for example, log2 n is sometimes replaced by n, and the floor function is sometimes replaced by a ceiling.
Other studies might define an inverse function of one where m is set to a constant, such that the inverse applies to a particular row.
The inverse of the Ackermann function is primitive recursive, since it is graph primitive recursive, and it is upper bounded by a primitive recursive function.
== Usage ==
=== In computational complexity ===
The Ackermann function appears in the time complexity of some algorithms, such as vector addition systems and Petri net reachability, thus showing they are computationally infeasible for large instances.
The inverse of the Ackermann function appears in some time complexity results. For instance, the disjoint-set data structure takes amortized time per operation proportional to the inverse Ackermann function, and cannot be made faster within the cell-probe model of computational complexity.
=== In discrete geometry ===
Certain problems in discrete geometry related to Davenport–Schinzel sequences have complexity bounds in which the inverse Ackermann function
α
(
n
)
{\displaystyle \alpha (n)}
appears. For instance, for
n
{\displaystyle n}
line segments in the plane, the unbounded face of the arrangement of the segments has complexity
O
(
n
α
(
n
)
)
{\displaystyle O(n\alpha (n))}
, and some systems of
n
{\displaystyle n}
line segments have an unbounded face of complexity
Ω
(
n
α
(
n
)
)
{\displaystyle \Omega (n\alpha (n))}
.
=== As a benchmark ===
The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first published use of Ackermann's function in this way was in 1970 by Dragoș Vaida and, almost simultaneously, in 1971, by Yngve Sundblad.
Sundblad's seminal paper was taken up by Brian Wichmann (co-author of the Whetstone benchmark) in a trilogy of papers written between 1975 and 1982.
== See also ==
Computability theory
Double recursion
Fast-growing hierarchy
Goodstein function
Primitive recursive function
Recursion (computer science)
== Notes ==
== References ==
== Bibliography ==
== External links ==
"Ackermann function". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Weisstein, Eric W. "Ackermann function". MathWorld.
This article incorporates public domain material from Paul E. Black. "Ackermann's function". Dictionary of Algorithms and Data Structures. NIST.
An animated Ackermann function calculator
Aaronson, Scott (1999). "Who Can Name the Bigger Number?".
Ackermann functions. Includes a table of some values.
Brubaker, Ben (4 December 2023). "An Easy-Sounding Problem Yields Numbers Too Big for Our Universe".
Munafo, Robert. "Large Numbers". describes several variations on the definition of A.
Nivasch, Gabriel (October 2021). "Inverse Ackermann without pain". Archived from the original on 21 August 2007. Retrieved 18 June 2023.
Seidel, Raimund. "Understanding the inverse Ackermann function" (PDF).
The Ackermann function written in different programming languages, (on Rosetta Code)
Smith, Harry J. "Ackermann's Function". Archived from the original on 26 October 2009.) Some study and programming.
Wiernik, Ady; Sharir, Micha (1988). "Planar realizations of nonlinear Davenport–Schinzel sequences by segments". Discrete & Computational Geometry. 3 (1): 15–47. doi:10.1007/BF02187894. MR 0918177. | Wikipedia/Inverse_Ackermann_function |
In computational complexity theory, and more specifically in the analysis of algorithms with integer data, the transdichotomous model is a variation of the random-access machine in which the machine word size is assumed to match the problem size. The model was proposed by Michael Fredman and Dan Willard, who chose its name "because the dichotomy between the machine model and the problem size is crossed in a reasonable manner."
In a problem such as integer sorting in which there are n integers to be sorted, the transdichotomous model assumes that each integer may be stored in a single word of computer memory, that operations on single words take constant time per operation, and that the number of bits that can be stored in a single word is at least log2n. The goal of complexity analysis in this model is to find time bounds that depend only on n and not on the actual size of the input values or the machine words. In modeling integer computation, it is necessary to assume that machine words are limited in size, because models with unlimited precision are unreasonably powerful (able to solve PSPACE-complete problems in polynomial time). The transdichotomous model makes a minimal assumption of this type: that there is some limit, and that the limit is large enough to allow random-access indexing into the input data.
As well as its application to integer sorting, the transdichotomous model has also been applied to the design of priority queues and to problems in computational geometry and graph algorithms.
== See also ==
Word RAM
Cell-probe model
== References == | Wikipedia/Transdichotomous_model |
Merge algorithms are a family of algorithms that take multiple sorted lists as input and produce a single list as output, containing all the elements of the inputs lists in sorted order. These algorithms are used as subroutines in various sorting algorithms, most famously merge sort.
== Application ==
The merge algorithm plays a critical role in the merge sort algorithm, a comparison-based sorting algorithm. Conceptually, the merge sort algorithm consists of two steps:
Recursively divide the list into sublists of (roughly) equal length, until each sublist contains only one element, or in the case of iterative (bottom up) merge sort, consider a list of n elements as n sub-lists of size 1. A list containing a single element is, by definition, sorted.
Repeatedly merge sublists to create a new sorted sublist until the single list contains all elements. The single list is the sorted list.
The merge algorithm is used repeatedly in the merge sort algorithm.
An example merge sort is given in the illustration. It starts with an unsorted array of 7 integers. The array is divided into 7 partitions; each partition contains 1 element and is sorted. The sorted partitions are then merged to produce larger, sorted, partitions, until 1 partition, the sorted array, is left.
== Merging two lists ==
Merging two sorted lists into one can be done in linear time and linear or constant space (depending on the data access model). The following pseudocode demonstrates an algorithm that merges input lists (either linked lists or arrays) A and B into a new list C.: 104 The function head yields the first element of a list; "dropping" an element means removing it from its list, typically by incrementing a pointer or index.
algorithm merge(A, B) is
inputs A, B : list
returns list
C := new empty list
while A is not empty and B is not empty do
if head(A) ≤ head(B) then
append head(A) to C
drop the head of A
else
append head(B) to C
drop the head of B
// By now, either A or B is empty. It remains to empty the other input list.
while A is not empty do
append head(A) to C
drop the head of A
while B is not empty do
append head(B) to C
drop the head of B
return C
When the inputs are linked lists, this algorithm can be implemented to use only a constant amount of working space; the pointers in the lists' nodes can be reused for bookkeeping and for constructing the final merged list.
In the merge sort algorithm, this subroutine is typically used to merge two sub-arrays A[lo..mid], A[mid+1..hi] of a single array A. This can be done by copying the sub-arrays into a temporary array, then applying the merge algorithm above. The allocation of a temporary array can be avoided, but at the expense of speed and programming ease. Various in-place merge algorithms have been devised, sometimes sacrificing the linear-time bound to produce an O(n log n) algorithm; see Merge sort § Variants for discussion.
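A Python transcription of the pseudocode above might look as follows (a sketch; indices are used instead of destructively "dropping" elements):
def merge(a, b):
    """Merge two sorted lists a and b into a new sorted list."""
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:              # taking from a on ties keeps the merge stable
            c.append(a[i])
            i += 1
        else:
            c.append(b[j])
            j += 1
    c.extend(a[i:])                   # at most one of these two extends copies anything
    c.extend(b[j:])
    return c

print(merge([1, 3, 5, 7], [2, 3, 6]))   # [1, 2, 3, 3, 5, 6, 7]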
== K-way merging ==
k-way merging generalizes binary merging to an arbitrary number k of sorted input lists. Applications of k-way merging arise in various sorting algorithms, including patience sorting and an external sorting algorithm that divides its input into k = 1/M − 1 blocks that fit in memory, sorts these one by one, then merges these blocks.: 119–120
Several solutions to this problem exist. A naive solution is to do a loop over the k lists to pick off the minimum element each time, and repeat this loop until all lists are empty:
In the worst case, this algorithm performs (k−1)(n−k/2) element comparisons to perform its work if there are a total of n elements in the lists.
It can be improved by storing the lists in a priority queue (min-heap) keyed by their first element:
Searching for the next smallest element to be output (find-min) and restoring heap order can now be done in O(log k) time (more specifically, 2⌊log k⌋ comparisons), and the full problem can be solved in O(n log k) time (approximately 2n⌊log k⌋ comparisons).: 119–120
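A sketch of this heap-based k-way merge in Python, using the standard heapq module (the list index is included in each heap entry only to break ties between equal keys):
import heapq

def k_way_merge(lists):
    """Merge k sorted lists using a min-heap keyed by each list's next element."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)          # find-min in O(log k)
        out.append(value)
        if j + 1 < len(lists[i]):                  # push that list's next element
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(k_way_merge([[1, 4, 7], [2, 5], [3, 6, 8, 9]]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]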
A third algorithm for the problem is a divide and conquer solution that builds on the binary merge algorithm:
When the input lists to this algorithm are ordered by length, shortest first, it requires fewer than n⌈log k⌉ comparisons, i.e., less than half the number used by the heap-based algorithm; in practice, it may be about as fast or slow as the heap-based algorithm.
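A sketch of that divide-and-conquer variant, assuming the two-list merge function from the sketch earlier in this section:
def merge_k(lists):
    """k-way merge by recursive halving, built on the binary merge() defined above."""
    if not lists:
        return []
    if len(lists) == 1:
        return list(lists[0])
    mid = len(lists) // 2
    return merge(merge_k(lists[:mid]), merge_k(lists[mid:]))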
== Parallel merge ==
A parallel version of the binary merge algorithm can serve as a building block of a parallel merge sort. The following pseudocode demonstrates this algorithm in a parallel divide-and-conquer style (adapted from Cormen et al.: 800 ). It operates on two sorted arrays A and B and writes the sorted output to array C. The notation A[i...j] denotes the part of A from index i through j, exclusive.
algorithm merge(A[i...j], B[k...ℓ], C[p...q]) is
inputs A, B, C : array
i, j, k, ℓ, p, q : indices
let m = j - i,
n = ℓ - k
if m < n then
swap A and B // ensure that A is the larger array: i, j still belong to A; k, ℓ to B
swap m and n
if m ≤ 0 then
return // base case, nothing to merge
let r = ⌊(i + j)/2⌋
let s = binary-search(A[r], B[k...ℓ])
let t = p + (r - i) + (s - k)
C[t] = A[r]
in parallel do
merge(A[i...r], B[k...s], C[p...t])
merge(A[r+1...j], B[s...ℓ], C[t+1...q])
The algorithm operates by splitting either A or B, whichever is larger, into (nearly) equal halves. It then splits the other array into a part with values smaller than the midpoint of the first, and a part with larger or equal values. (The binary search subroutine returns the index in B where A[r] would be, if it were in B; this is always a number between k and ℓ.) Finally, each pair of halves is merged recursively, and since the recursive calls are independent of each other, they can be done in parallel. A hybrid approach, in which a serial algorithm is used for the recursion base case, has been shown to perform well in practice.
The work performed by the algorithm for two arrays holding a total of n elements, i.e., the running time of a serial version of it, is O(n). This is optimal since n elements need to be copied into C. To calculate the span of the algorithm, it is necessary to derive a Recurrence relation. Since the two recursive calls of merge are in parallel, only the costlier of the two calls needs to be considered. In the worst case, the maximum number of elements in one of the recursive calls is at most
{\textstyle {\frac {3}{4}}n}
since the array with more elements is perfectly split in half. Adding the
{\displaystyle \Theta \left(\log(n)\right)}
cost of the Binary Search, we obtain this recurrence as an upper bound:
{\displaystyle T_{\infty }^{\text{merge}}(n)=T_{\infty }^{\text{merge}}\left({\frac {3}{4}}n\right)+\Theta \left(\log(n)\right)}
The solution is
{\displaystyle T_{\infty }^{\text{merge}}(n)=\Theta \left(\log(n)^{2}\right)}
, meaning that it takes that much time on an ideal machine with an unbounded number of processors.: 801–802
Note: The routine is not stable: if equal items are separated by splitting A and B, they will become interleaved in C; also swapping A and B will destroy the order, if equal items are spread among both input arrays. As a result, when used for sorting, this algorithm produces a sort that is not stable.
== Parallel merge of two lists ==
There are also algorithms that introduce parallelism within a single instance of merging of two sorted lists. These can be used in field-programmable gate arrays (FPGAs), specialized sorting circuits, as well as in modern processors with single-instruction multiple-data (SIMD) instructions.
Existing parallel algorithms are based on modifications of the merge part of either the bitonic sorter or odd-even mergesort. In 2018, Saitoh M. et al. introduced MMS for FPGAs, which focused on removing a multi-cycle feedback datapath that prevented efficient pipelining in hardware. Also in 2018, Papaphilippou P. et al. introduced FLiMS that improved the hardware utilization and performance by only requiring
{\displaystyle \log _{2}(P)+1}
pipeline stages of P/2 compare-and-swap units to merge with a parallelism of P elements per FPGA cycle.
== Language support ==
Some computer languages provide built-in or library support for merging sorted collections.
=== C++ ===
The C++'s Standard Template Library has the function std::merge, which merges two sorted ranges of iterators, and std::inplace_merge, which merges two consecutive sorted ranges in-place. In addition, the std::list (linked list) class has its own merge method which merges another list into itself. The type of the elements merged must support the less-than (<) operator, or it must be provided with a custom comparator.
C++17 allows for differing execution policies, namely sequential, parallel, and parallel-unsequenced.
=== Python ===
Python's standard library (since 2.6) also has a merge function in the heapq module, that takes multiple sorted iterables, and merges them into a single iterator.
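For example (heapq.merge is lazy and returns an iterator, so list() is used here to materialize the result):
import heapq

print(list(heapq.merge([1, 3, 5], [2, 4, 6], [0, 7])))   # [0, 1, 2, 3, 4, 5, 6, 7]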
== See also ==
Merge (revision control)
Join (relational algebra)
Join (SQL)
Join (Unix)
== References ==
== Further reading ==
Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 158–160 of section 5.2.4: Sorting by Merging. Section 5.3.2: Minimum-Comparison Merging, pp. 197–207.
== External links ==
High Performance Implementation of Parallel and Serial Merge in C# with source in GitHub and in C++ GitHub | Wikipedia/Merge_algorithm |
In computer programming, the Schwartzian transform is a technique used to improve the efficiency of sorting a list of items. This idiom is appropriate for comparison-based sorting when the ordering is actually based on the ordering of a certain property (the key) of the elements, where computing that property is an intensive operation that should be performed a minimal number of times. The Schwartzian transform is notable in that it does not use named temporary arrays.
The Schwartzian transform is a version of a Lisp idiom known as decorate-sort-undecorate, which avoids recomputing the sort keys by temporarily associating them with the input items. This approach is similar to memoization, which avoids repeating the calculation of the key corresponding to a specific input value. By comparison, this idiom assures that each input item's key is calculated exactly once, which may still result in repeating some calculations if the input data contains duplicate items.
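As an illustration of decorate-sort-undecorate outside Perl, a minimal Python sketch (the file names and the use of len() as the key are purely illustrative; since Python 2.4 the same effect is usually obtained with the key= argument of sorted, which computes each key once internally):
files = ["notes.txt", "a.py", "readme.md"]

decorated = [(len(name), name) for name in files]   # decorate: pair each item with its key
decorated.sort()                                    # sort: tuples compare by the key first
sorted_files = [name for _, name in decorated]      # undecorate

print(sorted_files)   # ['a.py', 'notes.txt', 'readme.md']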
The idiom is named after Randal L. Schwartz, who first demonstrated it in Perl shortly after the release of Perl 5 in 1994. The term "Schwartzian transform" applied solely to Perl programming for a number of years, but it has later been adopted by some users of other languages, such as Python, to refer to similar idioms in those languages. However, the algorithm was already in use in other languages (under no specific name) before it was popularized among the Perl community in the form of that particular idiom by Schwartz. The term "Schwartzian transform" indicates a specific idiom, and not the algorithm in general.
For example, to sort the word list ("aaaa","a","aa") according to word length: first build the list (["aaaa",4],["a",1],["aa",2]), then sort it according to the numeric values getting (["a",1],["aa",2],["aaaa",4]), then strip off the numbers and you get ("a","aa","aaaa"). That was the algorithm in general, so it does not count as a transform. To make it a true Schwartzian transform, it would be done in Perl like this:
== The Perl idiom ==
The general form of the Schwartzian transform is:
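A sketch of that form follows; the choice of cmp (string comparison) versus <=> (numeric comparison) depends on the type of key that foo produces:
@sorted = map  { $_->[0] }                # undecorate: recover the original item
          sort { $a->[1] cmp $b->[1] }    # compare the precomputed keys
          map  { [ $_, foo($_) ] }        # decorate: pair each item with foo($_)
               @unsorted;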
Here foo($_) represents an expression that takes $_ (each item of the list in turn) and produces the corresponding value that is to be compared in its stead.
Reading from right to left (or from the bottom to the top):
the original list @unsorted is fed into a map operation that wraps each item into a (reference to an anonymous 2-element) array consisting of itself and the calculated value that will determine its sort order (list of item becomes a list of [item, value]);
then the list of lists produced by map is fed into sort, which sorts it according to the values previously calculated (list of [item, value] ⇒ sorted list of [item, value]);
finally, another map operation unwraps the values (from the anonymous array) used for the sorting, producing the items of the original list in the sorted order (sorted list of [item, value] ⇒ sorted list of item).
The use of anonymous arrays ensures that memory will be reclaimed by the Perl garbage collector immediately after the sorting is done.
== Efficiency analysis ==
Without the Schwartzian transform, the sorting in the example above would be written in Perl like this:
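A sketch of that naive form, with foo the same key function as above:
@sorted = sort { foo($a) cmp foo($b) } @unsorted;   # foo is re-evaluated at every comparison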
While it is shorter to code, the naive approach here could be much less efficient if the key function (called foo in the example above) is expensive to compute. This is because the code inside the brackets is evaluated each time two elements need to be compared. An optimal comparison sort performs O(n log n) comparisons (where n is the length of the list), with 2 calls to foo every comparison, resulting in O(n log n) calls to foo. In comparison, using the Schwartzian transform, we only make 1 call to foo per element, at the beginning map stage, for a total of n calls to foo.
However, if the function foo is relatively simple, then the extra overhead of the Schwartzian transform may be unwarranted.
== Example ==
For example, to sort a list of files by their modification times, a naive approach might be as follows:
function naiveCompare(file a, file b) {
return modificationTime(a) < modificationTime(b)
}
// Assume that sort(list, comparisonPredicate) sorts the given list using
// the comparisonPredicate to compare two elements.
sortedArray := sort(filesArray, naiveCompare)
Unless the modification times are memoized for each file, this method requires re-computing them every time a file is compared in the sort. Using the Schwartzian transform, the modification time is calculated only once per file.
A Schwartzian transform involves the functional idiom described above, which does not use temporary arrays.
The same algorithm can be written procedurally to better illustrate how it works, but this requires using temporary arrays, and is not a Schwartzian transform. The following example pseudo-code implements the algorithm in this way:
for each file in filesArray
insert array(file, modificationTime(file)) at end of transformedArray
function simpleCompare(array a, array b) {
return a[2] < b[2]
}
transformedArray := sort(transformedArray, simpleCompare)
for each file in transformedArray
insert file[1] at end of sortedArray
== History ==
The first known online appearance of the Schwartzian transform is a December 16, 1994 posting by Randal Schwartz to a thread in the comp.unix.shell Usenet newsgroup, crossposted to comp.lang.perl. (The current version of the Perl Timeline is incorrect and refers to a later date in 1995.) The thread began with a question about how to sort a list of lines by their "last" word:
adjn:Joshua Ng
adktk:KaLap Timothy Kwong
admg:Mahalingam Gobieramanan
admln:Martha L. Nangalama
Schwartz responded with:
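(The exact text of the posting is not reproduced in this extraction; the following Perl sketch has the same shape, sorting input lines by their last whitespace-separated word.)
print
    map  { $_->[0] }                   # undecorate: the original line
    sort { $a->[1] cmp $b->[1] }       # compare by the extracted last word
    map  { [ $_, (split ' ')[-1] ] }   # decorate: pair each line with its last word
    <>;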
This code produces the result:
admg:Mahalingam Gobieramanan
adktk:KaLap Timothy Kwong
admln:Martha L. Nangalama
adjn:Joshua Ng
Schwartz noted in the post that he was "Speak[ing] with a lisp in Perl", a reference to the idiom's Lisp origins.
The term "Schwartzian transform" itself was coined by Tom Christiansen in a follow-up reply. Later posts by Christiansen made it clear that he had not intended to name the construct, but merely to refer to it from the original post: his attempt to finally name it "The Black Transform" did not take hold ("Black" here being a pun on "schwar[t]z", which means black in German).
== Comparison to other languages ==
Some other languages provide a convenient interface to the same optimization as the Schwartzian transform:
In Python 2.4 and above, both the sorted() function and the in-place list.sort() method take a key= parameter that allows the user to provide a "key function" (like foo in the examples above). In Python 3 and above, use of the key function is the only way to specify a custom sort order (the previously supported cmp= parameter that allowed the user to provide a "comparison function" was removed). Before Python 2.4, developers would use the lisp-originated decorate–sort–undecorate (DSU) idiom, usually by wrapping the objects in a (sortkey, object) tuple.
In Ruby 1.8.6 and above, the Enumerable abstract class (which includes Arrays) contains a sort_by method, which allows specifying the "key function" (like foo in the examples above) as a code block.
In D 2 and above, the schwartzSort function is available. It might require less temporary data and be faster than the Perl idiom or the decorate–sort–undecorate idiom present in Python and Lisp. This is because sorting is done in-place, and only minimal extra data (one array of transformed elements) is created.
Racket's core sort function accepts a #:key keyword argument with a function that extracts a key, and an additional #:cache-keys? requests that the resulting values are cached during sorting. For example, a convenient way to shuffle a list is (sort l < #:key (λ (_) (random)) #:cache-keys? #t).
In PHP 5.3 and above the transform can be implemented by use of array_walk, e.g. to work around the limitations of the unstable sort algorithms in PHP.
In Elixir, the Enum.sort_by/2 and Enum.sort_by/3 methods allow users to perform a Schwartzian transform for any module that implements the Enumerable protocol.
In Raku, one needs to supply a comparator lambda that takes only one argument to perform a Schwartzian transform under the hood: a unary key-extracting lambda (for example, one returning the string representation) sorts with each key computed only once, while an equivalent two-argument comparator converts the elements to compare just before each comparison.
In Rust, somewhat confusingly, the slice::sort_by_key method does not perform a Schwartzian transform as it will not allocate additional storage for the key, it will call the key function for each value for each comparison. The slice::sort_by_cached_key method will compute the keys once per element.
In Haskell, the sortOn function from the base library performs a Schwartzian transform.
== References ==
== External links ==
Sorting with the Schwartzian Transform by Randal L. Schwartz
Mark-Jason Dominus explains the Schwartzian Transform
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52234
Python Software Foundation (2005). 1.5.2 I want to do a complicated sort: can you do a Schwartzian Transform in Python?. Retrieved June 22, 2005.
Memoize Perl module - making expensive functions faster by caching their results. | Wikipedia/Schwartzian_transform |
An electoral or voting system is a set of rules used to determine the results of an election. Electoral systems are used in politics to elect governments, while non-political elections may take place in business, nonprofit organizations and informal organisations. These rules govern all aspects of the voting process: when elections occur, who is allowed to vote, who can stand as a candidate, how ballots are marked and cast, how the ballots are counted, how votes translate into the election outcome, limits on campaign spending, and other factors that can affect the result. Political electoral systems are defined by constitutions and electoral laws, are typically conducted by election commissions, and can use multiple types of elections for different offices.
Some electoral systems elect a single winner to a unique position, such as prime minister, president or governor, while others elect multiple winners, such as members of parliament or boards of directors. When electing a legislature, areas may be divided into constituencies with one or more representatives or the electorate may elect representatives as a single unit. Voters may vote directly for an individual candidate or for a list of candidates put forward by a political party or alliance. There are many variations in electoral systems.
The mathematical and normative study of voting rules falls under the branches of economics called social choice and mechanism design, but the question has also engendered substantial contributions from political scientists, analytic philosophers, computer scientists, and mathematicians. The field has produced several major results, including Arrow's impossibility theorem (showing that ranked voting cannot eliminate the spoiler effect) and Gibbard's theorem (showing it is impossible to design a straightforward voting system, i.e. one where it is always obvious to a strategic voter which ballot they should cast).
== Types ==
The most common categorizations of electoral systems are: single-winner vs. multi-winner systems and proportional representation vs. winner-take-all systems vs. mixed systems.
=== Single-winner and winner-take-all systems ===
In all cases, where only a single winner is to be elected, the electoral system is winner-take-all. The same can be said for elections where only one person is elected per district. Since district elections are winner-take-all, the electoral system as a whole produces disproportional results. Some systems where multiple winners are elected at once (in the same district), such as plurality block voting, are also winner-take-all.
In party block voting, voters can only vote for the list of candidates of a single party, with the party receiving the most votes winning all seats, even if that party receives only a minority of votes. This is also described as winner-take-all. This is used in five countries as part of mixed systems.
==== Plurality voting - first past the post and block voting ====
Plurality voting is a system in which the candidate(s) with the largest number of votes wins, with no requirement to get a majority of votes. In cases where there is a single position to be filled, it is known as first-past-the-post. This is the second most common electoral system for national legislatures (after proportional representation), with 58 countries using FPTP and single-member districts to elect the national legislative chamber, the vast majority of which are current or former British or American colonies or territories. It is also the second most common system used for presidential elections, being used in 19 countries. The two-round system is the most common system used to elect a president.
In cases where there are multiple positions to be filled, most commonly in cases of multi-member constituencies, there are several types of plurality electoral systems. Under block voting (also known as multiple non-transferable vote or plurality-at-large), voters have as many votes as there are seats and can vote for any candidate, regardless of party, but give only one vote to each preferred candidate. The most-popular candidates are declared elected, whether they have a majority of votes or not and whether or not that result is proportional to the way votes were cast. Eight countries use this system.
Cumulative voting allows a voter to cast more than one vote for the same candidate, in multi-member districts. Its effect may be proportional to the same degree that single non-transferable voting or limited voting is, thus it is often called semi-proportional.
Approval voting is a choose-all-you-like voting system that aims to increase the number of candidates that win with majority support. Voters are free to pick as many candidates as they like and each choice has equal weight, independent of the number of candidates a voter supports. The candidate with the most votes wins.
==== Runoff systems ====
A runoff system is one in which a candidate must receive a majority of votes to be elected, either in a runoff election or in a final round of vote counting. This is sometimes referred to as a way to ensure that a winner must have a majority of votes, although usually only a plurality is required in the last round (when three or more candidates move on to the runoff election), and sometimes even in the first round winners can avoid a second round without achieving a majority. In social choice theory, runoff systems are not called majority voting, as this term refers to Condorcet methods.
There are two main groups of runoff systems, those in one group use a single round of voting achieved by voters casting ranked votes and then using vote transfers if necessary to establish a majority, and those in the other group use two or more rounds of voting, to narrow the field of candidates and to determine a winner who has a majority of the votes. Both are primarily used for single-member constituencies or election of a single position such as mayor.
If a candidate receives a majority of the vote in the first round, they are elected outright, just as under simple first-past-the-post voting. But if no one has a majority of votes in the first round, the systems respond in different ways.
Under instant-runoff voting (IRV), when no one wins a majority in the first round, a runoff is achieved through vote transfers made possible by voters having ranked the candidates in order of preference, with lower preferences used as back-up preferences. This system is used for parliamentary elections in Australia and Papua New Guinea. If no candidate receives a majority of the vote in the first round, the votes of the least-popular candidate are transferred as per the marked second preferences and added to the totals of surviving candidates. This is repeated until a candidate achieves a majority. The count ends any time one candidate has a majority of votes, but it may continue until only two candidates remain, at which point one or the other will take a majority of the votes still in play.
A different form of single-winner preferential voting is the contingent vote, where voters do not rank all candidates but rank just two or three. If no candidate has a majority in the first round, all candidates are excluded except the top two. If the voter gave first preference to one of the excluded candidates, the vote is transferred to the next usable back-up preference if possible, or otherwise put in the exhausted pile. The resulting vote totals are used to determine the winner by plurality. This system is used in Sri Lankan presidential elections, with voters allowed to give three preferences.
The other main form of runoff system is the two-round system, which is the most common system used for presidential elections around the world, being used in 88 countries. It is also used, in conjunction with single-member districts, in 20 countries for electing members of the legislature. If no candidate achieves a majority of votes in the first round of voting, a second round is held to determine the winner. In most cases the second round is limited to the top two candidates from the first round, although in some elections more than two candidates may choose to contest the second round; in these cases the second-round winner is not required to have a majority of votes, but may be elected by having a plurality of votes.
Some countries use a modified form of the two-round system, so going to a second round happens less often. In Ecuador a candidate in the presidential election is declared the winner if they receive more than 50% of the vote, or 40% of the vote with a 10-point lead over their nearest rival. In Argentina, where the system is known as ballotage, a candidate is elected with a majority, or with 45% of the vote and a 10-point lead.
In some cases, where a certain level of support is required, a runoff may be held using a different system. In U.S. presidential elections, when no candidate wins a majority of the United States Electoral College (using seat count, not votes cast, as is used in the majoritarian systems described above), a contingent election is held by the House of Representatives, not the voters themselves. The House contingency election sees three candidates go on to the last round and each state's Representatives vote as a single unit, not as individuals.
An exhaustive ballot involves multiple rounds of voting when no one has a majority in the first round. The number of rounds is not limited to two; instead, the last-placed candidate is eliminated in each round of voting, and the process is repeated until one candidate has a majority of votes. Due to the potentially large number of rounds, this system is not used in any major popular elections, but is used to elect the Speakers of parliament in several countries and members of the Swiss Federal Council.
In some systems, such as election of the speaker of the United States House of Representatives, there may be multiple rounds held without any candidates being eliminated until a candidate achieves a majority.
==== Positional systems ====
Positional systems like the Borda count are ranked voting systems that assign a certain number of points to each candidate, weighted by position. The most popular such system is first-preference plurality. In another well-known variant, the Borda count, each candidate is given a number of points equal to their rank, and the candidate with the fewest points wins. This system is intended to elect broadly acceptable options or candidates, rather than those preferred by a majority. This system is used to elect the ethnic minority representatives' seats in the Slovenian parliament.
The Dowdall system is used in Nauru for parliamentary elections and sees voters rank the candidates. First-preference votes are counted as whole numbers, second preferences are divided by two, third preferences by three, and so on; this continues to the lowest possible ranking. The totals for each candidate determine the winners.
=== Multi-winner systems ===
Multi-winner systems include both proportional systems and non-proportional multi-winner systems, such as party block voting and plurality block voting.
==== Proportional systems ====
Proportional representation is the most widely used electoral system for national legislatures, with the parliaments of over eighty countries elected by a form of the system. These systems elect multiple members in one contest, whether that is at-large, as in a city-wide election at the city level or state-wide or nation-wide at those levels, or in multi-member districts at any level.
Party-list proportional representation is the single most common electoral system and is used by 80 countries, and involves seats being allocated to parties based on party vote share.
In closed list systems voters do not have any influence over which candidates are elected to fill the party seats, but in open list systems voters are able to both vote for the party list and for candidates (or only for candidates). Voters thus have means to sometimes influence the order in which party candidates will be assigned seats. In some countries, notably Israel and the Netherlands, elections are carried out using 'pure' proportional representation, with the votes tallied on a national level before assigning seats to parties. (There are no district seats, only at-large.) However, in most cases several multi-member constituencies are used rather than a single nationwide constituency, giving an element of geographical or local representation. Such may result in the distribution of seats not reflecting the national vote totals of parties. As a result, some countries that use districts have leveling seats that are awarded to some of the parties whose seat proportion is lower than their proportion of the vote. Levelling seats are either used at the regional level or at the national level. Such mixed member proportional systems are used in New Zealand and in Scotland. (They are discussed below.)
List PR systems usually set an electoral threshold, the minimum percentage of the vote that a party must obtain to win levelling seats or to win seats at all. Some systems allow a way around this rule. For instance, if a party takes a district seat, the party may be eligible for top-up seats even if its percentage of the votes is below the threshold.
There are different methods of allocating seats in proportional representation systems. There are two main methods: highest average and largest remainder. Highest average systems involve dividing the votes received by each party by a divisor or vote average that represents an idealized seats-to-votes ratio, then rounding normally. In the largest remainder system, parties' vote shares are divided by an electoral quota. This usually leaves some seats unallocated, which are awarded to parties based on which parties have the largest number of "leftover" votes.
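As an illustration of the highest-average approach, here is a minimal Perl sketch of D'Hondt-style allocation; the party names, vote counts and seat total are made-up example data, not from the article:
use strict;
use warnings;

my %votes = (A => 100_000, B => 80_000, C => 30_000, D => 20_000);
my $seats = 8;

my %won = map { $_ => 0 } keys %votes;
for (1 .. $seats) {
    # Each party's current average is its votes divided by (seats won so far + 1);
    # the next seat goes to the party with the highest average.
    my ($best) = sort {
        $votes{$b} / ($won{$b} + 1) <=> $votes{$a} / ($won{$a} + 1)
    } keys %votes;
    $won{$best}++;
}
printf "%s: %d seat(s)\n", $_, $won{$_} for sort keys %won;
# With these numbers the allocation is A: 4, B: 3, C: 1, D: 0.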
Single transferable vote (STV) is another form of proportional representation. Like list PR, STV is designed to elect multiple winners. In STV, multi-member districts or multi-winner at-large contests are used. Each voter casts one vote, being a ranked ballot marked for individual candidates, rather than voting for a party list. STV is used in Malta, the Republic of Ireland and Australia (partially). To be certain of being elected, candidates must pass a quota (the Droop quota being the most common). Candidates that achieve the quota are elected. If necessary to fill seats, the least-successful candidate is eliminated and their voters transferred in accordance with the rankings marked by the voter. Surplus votes held by successful candidates may also be transferred. Eventually all seats are filled by candidates who have passed the quota or there are only as many remaining candidates as the number of remaining open seats.
Under single non-transferable vote (SNTV), multi-member districts are used. Each voter can vote for only one candidate, with the candidates receiving the most votes declared the winners, whether any of them have a majority of votes or not. Despite its simplicity, its results are very close to those of STV and list PR - every district elects a mixed, balanced multi-party group of representatives. This system is used in Kuwait, the Pitcairn Islands and Vanuatu.
==== Mixed systems ====
In several countries, mixed systems are used to elect the legislature. These include parallel voting (also known as mixed-member majoritarian) and mixed-member proportional representation.
In non-compensatory, parallel voting systems, which are used in 20 countries, members of a legislature are elected by two different methods; part of the membership is elected by a plurality or majoritarian election system in single-member constituencies and the other part by proportional representation. The results of the constituency contests have no effect on the outcome of the proportional vote.
In compensatory mixed-member systems levelling seats are allocated to balance nation-wide or regional disproportionality produced by the way seats are won in constituency contests. The mixed-member proportional systems, in use in eight countries, provide enough compensatory seats to ensure that many parties have a share of seats approximately proportional to their vote share. Most of the MMP countries use a PR system at the district level, thus lowering the number of levelling seats that are needed to produce proportional results. Of the MMP countries, only New Zealand and Lesotho use single-winner first-past-the-post voting in their districts. Scotland uses a regionalized MMP system where levelling seats are allocated in each region to balance the disproportionality produced in single-winner districts within the region. Variations of this include the Additional Member System, and Alternative Vote Plus, in which voters cast votes for both single-member constituencies and multi-member constituencies; the allocation of seats in the multi-member constituencies is adjusted to achieve an overall seat allocation proportional to parties' vote share by taking into account the number of seats won by parties in the single-member constituencies.
Some MMP systems are insufficiently compensatory, and this may result in overhang seats, where parties win more seats in the constituency system than they would be entitled to based on their vote share. Some MMP systems have a mechanism (another form of top-up) where additional seats are awarded to the other parties to balance out the effect of the overhang. In 2024 Germany passed a new election law under which district overhang seats may be denied, overriding the district result in the pursuit of overall proportionality.
Vote-linkage mixed systems are also compensatory; however, they usually use a different mechanism than the seat-linkage (top-up) method of MMP and usually are not able to achieve proportional representation.
Some electoral systems feature a majority bonus system to either ensure one party or coalition gains a majority in the legislature, or to give the party receiving the most votes a clear advantage in terms of the number of seats. San Marino has a modified two-round system, which sees a second round of voting featuring the top two parties or coalitions if no party takes a majority of votes in the first round. The winner of the second round is guaranteed 35 seats in the 60-seat Grand and General Council. In Greece the party receiving the most votes was given an additional 50 seats, a system which was abolished following the 2019 elections.
=== Primary elections ===
Primary elections are a feature of some electoral systems, either as a formal part of the electoral system or informally by choice of individual political parties as a method of selecting candidates, as is the case in Italy. Primary elections limit the possible adverse effect of vote splitting by ensuring that a party puts forward only one party candidate. In Argentina they are a formal part of the electoral system and take place two months before the main elections; any party receiving less than 1.5% of the vote is not permitted to contest the main elections.
In the United States, there are both partisan and non-partisan primary elections. In non-partisan primaries, the most popular nominees advance to the general election, even if they all come from a single party.
=== Indirect elections ===
Some elections feature an indirect electoral system, whereby there is either no popular vote, or the popular vote is only one stage of the election; in these systems the final vote is usually taken by an electoral college. In several countries, such as Mauritius or Trinidad and Tobago, the post of President is elected by the legislature. In others like India, the vote is taken by an electoral college consisting of the national legislature and state legislatures. In the United States, the president is indirectly elected using a two-stage process; a popular vote in each state elects members to the electoral college that in turn elects the President. This can result in a situation where a candidate who receives the most votes nationwide does not win the electoral college vote, as most recently happened in 2000 and 2016.
=== Proposed and lesser-used systems ===
In addition to the current electoral systems used for political elections, there are numerous other systems that have been used in the past, are currently used only in private organizations (such as electing board members of corporations or student organizations), or have never been fully implemented.
==== Winner-take-all systems ====
Among ranked systems, these include Bucklin voting, the various Condorcet methods (Copeland's, Dodgson's, Kemeny-Young, Maximal lotteries, Minimax, Nanson's, Ranked pairs, Schulze), the Coombs' method and positional voting.
Among cardinal electoral systems, the most well known is range voting, where any number of candidates are scored from a set range of numbers. A very common example of range voting is the 5-star rating used for many customer satisfaction surveys and reviews. Other cardinal systems include satisfaction approval voting, highest median rules (including the majority judgment), and the D21 – Janeček method, where voters can cast positive and negative votes.
Historically, weighted voting systems were used in some countries. These allocated a greater weight to the votes of some voters than others, either indirectly by allocating more seats to certain groups (such as the Prussian three-class franchise), or by weighting the results of the vote. The latter system was used in colonial Rhodesia for the 1962 and 1965 elections. The elections featured two voter rolls (the 'A' roll being largely European and the 'B' roll largely African); the seats of the House Assembly were divided into 50 constituency seats and 15 district seats. Although all voters could vote for both types of seats, 'A' roll votes were given greater weight for the constituency seats and 'B' roll votes greater weight for the district seats. Weighted systems are still used in corporate elections, with votes weighted to reflect stock ownership.
==== Proportional systems ====
Dual-member proportional representation is a proposed system with two candidates elected in each constituency, one with the most votes and one to ensure proportionality of the combined results. Biproportional apportionment is a system where the total number of votes is used to calculate the number of seats each party is due, followed by a calculation of the constituencies in which the seats should be awarded in order to achieve the total due to them.
For proportional systems that use ranked choice voting, there are several proposals, including CPO-STV, Schulze STV and the Wright system, which are each considered to be variants of proportional representation by means of the single transferable vote. Among the proportional voting systems that use rating are Thiele's voting rules and Phragmen's voting rule. A special case of Thiele's voting rules is Proportional Approval Voting. Some proportional systems that may be used with either ranking or rating include the Method of Equal Shares and the Expanding Approvals Rule.
== Rules and regulations ==
In addition to the specific method of electing candidates, electoral systems are also characterised by their wider rules and regulations, which are usually set out in a country's constitution or electoral law. Participatory rules determine candidate nomination and voter registration, in addition to the location of polling places and the availability of online voting, postal voting, and absentee voting. Other regulations include the selection of voting devices such as paper ballots, machine voting or open ballot systems, and consequently the type of vote counting systems, verification and auditing used.
Electoral rules place limits on suffrage and candidacy. Most countries' electorates are characterised by universal suffrage, but there are differences in the age at which people are allowed to vote, with the youngest being 16 and the oldest 21. People may be disenfranchised for a range of reasons, such as being a serving prisoner, being declared bankrupt, having committed certain crimes or being a serving member of the armed forces. Similar limits are placed on candidacy (also known as passive suffrage), and in many cases the age limit for candidates is higher than the voting age. A total of 21 countries have compulsory voting, although in some there is an upper age limit on enforcement of the law. Many countries also have the none of the above option on their ballot papers.
In systems that use constituencies, apportionment or districting defines the area covered by each constituency. Where constituency boundaries are drawn has a strong influence on the likely outcome of elections in the constituency due to the geographic distribution of voters. Political parties may seek to gain an advantage during redistricting by ensuring their voter base has a majority in as many constituencies as possible, a process known as gerrymandering. Historically rotten and pocket boroughs, constituencies with unusually small populations, were used by wealthy families to gain parliamentary representation.
Some countries have minimum turnout requirements for elections to be valid. In Serbia this rule caused multiple re-runs of presidential elections, with the 1997 election re-run once and the 2002 elections re-run three times due to insufficient turnout in the first, second and third attempts to run the election. The turnout requirement was scrapped prior to the fourth vote in 2004. Similar problems in Belarus led to the 1995 parliamentary elections going to a fourth round of voting before enough parliamentarians were elected to make a quorum.
Reserved seats are used in many countries to ensure representation for ethnic minorities, women, young people or the disabled. These seats are separate from general seats, and may be elected separately (such as in Morocco where a separate ballot is used to elect the 60 seats reserved for women and 30 seats reserved for young people in the House of Representatives), or be allocated to parties based on the results of the election; in Jordan the reserved seats for women are given to the female candidates who failed to win constituency seats but with the highest number of votes, whilst in Kenya the Senate seats reserved for women, young people and the disabled are allocated to parties based on how many seats they won in the general vote. Some countries achieve minority representation by other means, including requirements for a certain proportion of candidates to be women, or by exempting minority parties from the electoral threshold, as is done in Poland, Romania and Serbia.
== History ==
=== Pre-democratic ===
In ancient Greece and Italy, the institution of suffrage already existed in a rudimentary form at the outset of the historical period. In the early monarchies it was customary for the king to invite pronouncements of his people on matters in which it was prudent to secure its assent beforehand. In these assemblies the people recorded their opinion by clamouring (a method which survived in Sparta as late as the 4th century BCE), or by the clashing of spears on shields.
=== Early democracy ===
Voting has been used as a feature of democracy since the 6th century BCE, when democracy was introduced in Athens. However, in Athenian democracy, voting was seen as the least democratic among methods used for selecting public officials, and was little used, because elections were believed to inherently favor the wealthy and well-known over average citizens. Viewed as more democratic were assemblies open to all citizens, and selection by lot, as well as rotation of office.
Generally, the taking of votes was effected in the form of a poll. The practice of the Athenians, which is shown by inscriptions to have been widely followed in the other states of Greece, was to hold a show of hands, except on questions affecting the status of individuals: these latter, which included all lawsuits and proposals of ostracism, in which voters chose the citizen they most wanted to exile for ten years, were determined by secret ballot (one of the earliest recorded elections in Athens was a plurality vote that it was undesirable to win, namely an ostracism vote). At Rome the method which prevailed up to the 2nd century BCE was that of division (discessio). But the system became subject to intimidation and corruption. Hence a series of laws enacted between 139 and 107 BCE prescribed the use of the ballot (tabella), a slip of wood coated with wax, for all business done in the assemblies of the people.
For the purpose of carrying resolutions a simple majority of votes was deemed sufficient. As a general rule equal value was made to attach to each vote; but in the popular assemblies at Rome a system of voting by groups was in force until the middle of the 3rd century BCE by which the richer classes secured a decisive preponderance.
Most elections in the early history of democracy were held using plurality voting or some variant, but as an exception, the state of Venice in the 13th century adopted approval voting to elect their Great Council.
The Venetians' method for electing the Doge was a particularly convoluted process, consisting of five rounds of drawing lots (sortition) and five rounds of approval voting. By drawing lots, a body of 30 electors was chosen, which was further reduced to nine electors by drawing lots again. An electoral college of nine members elected 40 people by approval voting; those 40 were reduced to form a second electoral college of 12 members by drawing lots again. The second electoral college elected 25 people by approval voting, which were reduced to form a third electoral college of nine members by drawing lots. The third electoral college elected 45 people, which were reduced to form a fourth electoral college of 11 by drawing lots. They in turn elected a final electoral body of 41 members, who ultimately elected the Doge. Despite its complexity, the method had certain desirable properties such as being hard to game and ensuring that the winner reflected the opinions of both majority and minority factions. This process, with slight modifications, was central to the politics of the Republic of Venice throughout its remarkable lifespan of over 500 years, from 1268 to 1797.
=== Development of new systems ===
Jean-Charles de Borda proposed the Borda count in 1770 as a method for electing members to the French Academy of Sciences. His method was opposed by the Marquis de Condorcet, who proposed instead the method of pairwise comparison that he had devised. Implementations of this method are known as Condorcet methods. He also wrote about the Condorcet paradox, which he called the intransitivity of majority preferences. However, recent research has shown that the philosopher Ramon Llull devised both the Borda count and a pairwise method that satisfied the Condorcet criterion in the 13th century. The manuscripts in which he described these methods had been lost to history until they were rediscovered in 2001.
Later in the 18th century, apportionment methods came to prominence due to the United States Constitution, which mandated that seats in the United States House of Representatives had to be allocated among the states proportionally to their population, but did not specify how to do so. A variety of methods were proposed by statesmen such as Alexander Hamilton, Thomas Jefferson, and Daniel Webster. Some of the apportionment methods devised in the United States were in a sense rediscovered in Europe in the 19th century, as seat allocation methods for the newly proposed method of party-list proportional representation. The result is that many apportionment methods have two names; Jefferson's method is equivalent to the D'Hondt method, as is Webster's method to the Sainte-Laguë method, while Hamilton's method is identical to the Hare largest remainder method.
The single transferable vote (STV) method was devised by Carl Andræ in Denmark in 1855 and in the United Kingdom by Thomas Hare in 1857. STV elections were first held in Denmark in 1856, and in Tasmania in 1896 after its use was promoted by Andrew Inglis Clark. Over the course of the 20th century, STV was subsequently adopted by Ireland and Malta for their national elections, in Australia for their Senate elections, as well as by many municipal elections around the world.
Party-list proportional representation began to be used to elect European legislatures in the early 20th century, with Belgium the first to implement it for its 1900 general elections. Since then, proportional and semi-proportional methods have come to be used in almost all democratic countries, with most exceptions being former British and French colonies.
=== Single-winner innovations ===
Perhaps influenced by the rapid development of multiple-winner STV, theorists published new findings about single-winner methods in the late 19th century. Around 1870, William Robert Ware proposed applying STV to single-winner elections, yielding instant-runoff voting (IRV). Soon, mathematicians began to revisit Condorcet's ideas and invent new methods for Condorcet completion; Edward J. Nanson combined the newly described instant runoff voting with the Borda count to yield a new Condorcet method called Nanson's method. Charles Dodgson, better known as Lewis Carroll, proposed the straightforward Condorcet method known as Dodgson's method. He also proposed a proportional representation system based on multi-member districts, quotas as minimum requirements to take seats, and votes transferable by candidates through proxy voting.
Ranked voting electoral systems eventually gathered enough support to be adopted for use in government elections. In Australia, IRV was first adopted in 1893 and STV in 1896 (Tasmania). IRV continues to be used along with STV today.
In the United States, during the early 20th-century progressive era some municipalities began to use supplementary voting and Bucklin voting. However, a series of court decisions ruled Bucklin to be unconstitutional, while supplementary voting was soon repealed in every city that had implemented it.
The use of game theory to analyze electoral systems led to discoveries about the effects of certain methods. Earlier developments such as Arrow's impossibility theorem had already shown the issues with ranked voting systems. Research led Steven Brams and Peter Fishburn to formally define and promote the use of approval voting in 1977. Political scientists of the 20th century published many studies on the effects that the electoral systems have on voters' choices and political parties, and on political stability. A few scholars also studied which effects caused a nation to switch to a particular electoral system.
=== Recent reform efforts ===
A new push for electoral reform occurred in the 1990s, when proposals were made to replace plurality voting in governmental elections with other methods. New Zealand adopted mixed-member proportional representation for the 1996 general elections, having been approved in a 1993 referendum. After plurality voting was a factor in the contested results of the 2000 presidential elections in the United States, various municipalities in the United States have begun to adopt instant-runoff voting. In 2020 a referendum adopting approval voting in St. Louis passed with 70% support.
In Canada, three separate referendums on the single transferable vote have been held (in 2005, 2009, and 2018), but none produced reform. The 2020 Massachusetts Question 2, which attempted to expand instant-runoff voting into Massachusetts, was defeated by a 10-point margin. In the United Kingdom, a 2011 referendum on IRV saw the proposal rejected by a two-to-one margin.
==== Repeals and backlash ====
Some cities that adopted instant-runoff voting subsequently returned to first-past-the-post. Studies have found voter satisfaction with IRV falls dramatically the first time a race produces a result different from first-past-the-post. The United Kingdom used a form of instant-runoff voting for local elections prior to 2022, before returning to first-past-the-post over concerns regarding the system's complexity. Ranked-choice voting has been implemented in two states and banned in 10 others (in addition to other states with constitutional prohibitions on the rule).
In November 2024, voters in the U.S. decided on 10 ballot measures related to electoral systems. Nine of the ballot measures aimed to change existing electoral systems, and voters rejected each proposal. One, in Missouri, which banned ranked-choice voting (RCV), was approved. Voters rejected ballot measures to enact ranked-choice voting and other electoral system changes in Arizona, Colorado, Idaho, Nevada, and Oregon, as well as in Montana and South Dakota. In Alaska, voters rejected a ballot initiative 50.1% to 49.9% to repeal the state's top-four primaries and ranked-choice voting general elections, a system that was adopted via ballot measure in 2020.
== Comparison ==
Electoral systems can be compared by different means:
Define criteria mathematically, such that any electoral system either passes or fails. This gives perfectly objective results, but their practical relevance is still arguable.
Define ideal criteria and use simulated elections to see how often or how close various methods fail to meet the selected criteria. This gives results which are practically relevant, but the method of generating the sample of simulated elections can still be arguably biased.
Consider criteria that can be more easily measured using real-world elections, such as the Gallagher index, political fragmentation, voter turnout, wasted votes, political apathy, complexity of vote counting, and barriers to entry for new political movements and evaluate each method based on how they perform in real-world elections or evaluate the performance of countries with these electoral systems.
A 2019 peer-reviewed meta-analysis based on 1,037 regressions in 46 studies finds that countries with majoritarian electoral rules tend to be more "fiscally virtuous", exhibiting better fiscal balances in the pre-electoral period, which may be explained by less spending distortion. The meta-analysis also notes that countries with proportional electoral rules tend to have bigger pre-electoral revenue cuts than other countries.
Gibbard's theorem, which built upon the earlier Arrow's theorem and the Gibbard–Satterthwaite theorem, proves that for any single-winner deterministic voting method, at least one of the following three properties must hold:
The process is dictatorial, i.e. there is a single voter whose vote chooses the outcome.
The process limits the possible outcomes to two options only.
The process is not straightforward; the optimal ballot for a voter "requires strategic voting", i.e. it depends on their beliefs about other voters' ballots.
According to a 2006 survey of electoral system experts, their preferred electoral systems were in order of preference:
Mixed member proportional
Single transferable vote
Open list proportional
Alternative vote
Closed list proportional
Single member plurality
Runoffs
Mixed member majoritarian
Single non-transferable vote
== Systems by elected body ==
== See also ==
== References ==
== External links ==
ACE Electoral Knowledge Network
The International IDEA Handbook of Electoral System Design Archived 2020-11-24 at the Wayback Machine IDEA | Wikipedia/Voting_systems |
In computer science, merge-insertion sort or the Ford–Johnson algorithm is a comparison sorting algorithm published in 1959 by L. R. Ford Jr. and Selmer M. Johnson. It uses fewer comparisons in the worst case than the best previously known algorithms, binary insertion sort and merge sort, and for 20 years it was the sorting algorithm with the fewest known comparisons. Although not of practical significance, it remains of theoretical interest in connection with the problem of sorting with a minimum number of comparisons. The same algorithm may have also been independently discovered by Stanisław Trybuła and Czen Ping.
== Algorithm ==
Merge-insertion sort performs the following steps, on an input X of n elements:
Group the elements of X into ⌊n/2⌋ pairs of elements, arbitrarily, leaving one element unpaired if there is an odd number of elements.
Perform ⌊n/2⌋ comparisons, one per pair, to determine the larger of the two elements in each pair.
Recursively sort the ⌊n/2⌋ larger elements from each pair, creating a sorted sequence S of ⌊n/2⌋ of the input elements, in ascending order, using the merge-insertion sort.
Insert at the start of S the element that was paired with the first and smallest element of S.
Insert the remaining ⌈n/2⌉ − 1 elements of X ∖ S into S, one at a time, with a specially chosen insertion ordering described below. Use binary search in subsequences of S (as described below) to determine the position at which each element should be inserted.
The algorithm is designed to take advantage of the fact that the binary searches used to insert elements into S are most efficient (from the point of view of worst case analysis) when the length of the subsequence that is searched is one less than a power of two. This is because, for those lengths, all outcomes of the search use the same number of comparisons as each other. To choose an insertion ordering that produces these lengths, consider the sorted sequence S after step 4 of the outline above (before inserting the remaining elements), and let x_i denote the i-th element of this sorted sequence. Thus,
S = (x_1, x_2, x_3, …),
where each element x_i with i ≥ 3 is paired with an element y_i < x_i that has not yet been inserted. (There are no elements y_1 or y_2 because x_1 and x_2 were paired with each other.) If n is odd, the remaining unpaired element should also be numbered as y_i with i larger than the indexes of the paired elements.
Then, the final step of the outline above can be expanded into the following steps:
Partition the uninserted elements y_i into groups with contiguous indexes. There are two elements y_3 and y_4 in the first group, and the sums of sizes of every two adjacent groups form a sequence of powers of two. Thus, the sizes of groups are: 2, 2, 6, 10, 22, 42, ...
Order the uninserted elements by their groups (smaller indexes to larger indexes), but within each group order them from larger indexes to smaller indexes. Thus, the ordering becomes y_4, y_3, y_6, y_5, y_12, y_11, y_10, y_9, y_8, y_7, y_22, y_21, …
Use this ordering to insert the elements y_i into S. For each element y_i, use a binary search from the start of S up to but not including x_i to determine where to insert y_i.
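The grouping and insertion order just described can be generated mechanically. A minimal Perl sketch (illustrative, not from the source) that reproduces the group sizes and the first few indexes of the ordering:
my @sizes = (2);
push @sizes, 2 ** (scalar(@sizes) + 1) - $sizes[-1] for 1 .. 5;
# @sizes is now (2, 2, 6, 10, 22, 42): adjacent group sizes sum to powers of two.

my @order;
my $first = 3;                                 # the uninserted elements start at y_3
for my $size (@sizes) {
    my $last = $first + $size - 1;
    push @order, reverse($first .. $last);     # within a group, larger indexes first
    $first = $last + 1;
}
print "@order[0 .. 11]\n";                     # prints: 4 3 6 5 12 11 10 9 8 7 22 21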
== Analysis ==
Let C(n) denote the number of comparisons that merge-insertion sort makes, in the worst case, when sorting n elements. This number of comparisons can be broken down as the sum of three terms:
⌊n/2⌋ comparisons among the pairs of items,
C(⌊n/2⌋) comparisons for the recursive call, and
some number of comparisons for the binary insertions used to insert the remaining elements.
In the third term, the worst-case number of comparisons for the elements in the first group is two, because each is inserted into a subsequence of S of length at most three. First, y_4 is inserted into the three-element subsequence (x_1, x_2, x_3). Then, y_3 is inserted into some permutation of the three-element subsequence (x_1, x_2, y_4), or in some cases into the two-element subsequence (x_1, x_2). Similarly, the elements y_6 and y_5 of the second group are each inserted into a subsequence of length at most seven, using three comparisons. More generally, the worst-case number of comparisons for the elements in the i-th group is i + 1, because each is inserted into a subsequence of length at most 2^(i+1) − 1. By summing the number of comparisons used for all the elements and solving the resulting recurrence relation, this analysis can be used to compute the values of C(n), giving the formula
{\displaystyle C(n)=\sum _{i=1}^{n}\left\lceil \log _{2}{\frac {3i}{4}}\right\rceil \approx n\log _{2}n-1.415n}
or, in closed form,
{\displaystyle C(n)=n{\biggl \lceil }\log _{2}{\frac {3n}{4}}{\biggr \rceil }-{\biggl \lfloor }{\frac {2^{\lfloor \log _{2}6n\rfloor }}{3}}{\biggr \rfloor }+{\biggl \lfloor }{\frac {\log _{2}6n}{2}}{\biggr \rfloor }.}
For n = 1, 2, … the numbers of comparisons are
0, 1, 3, 5, 7, 10, 13, 16, 19, 22, 26, 30, 34, ... (sequence A001768 in the OEIS)
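A small Perl check (illustrative only) that evaluates the summation formula above and reproduces these values:
use POSIX qw(ceil);

sub comparisons {
    my ($n) = @_;
    my $total = 0;
    # 3*i/4 is never an exact power of two for integer i, so taking the ceiling
    # of the floating-point base-2 logarithm is safe for small n.
    $total += ceil( log(3 * $_ / 4) / log(2) ) for 1 .. $n;
    return $total;
}

print join(", ", map { comparisons($_) } 1 .. 13), "\n";
# prints: 0, 1, 3, 5, 7, 10, 13, 16, 19, 22, 26, 30, 34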
== Relation to other comparison sorts ==
The algorithm is called merge-insertion sort because the initial comparisons that it performs before its recursive call (pairing up arbitrary items and comparing each pair) are the same as the initial comparisons of merge sort,
while the comparisons that it performs after the recursive call (using binary search to insert elements one by one into a sorted list) follow the same principle as insertion sort. In this sense, it is a hybrid algorithm that combines both merge sort and insertion sort.
For small inputs (up to n = 11) its numbers of comparisons equal the lower bound on comparison sorting of ⌈log₂ n!⌉ ≈ n log₂ n − 1.443n. However, for larger inputs the number of comparisons made by the merge-insertion algorithm is bigger than this lower bound.
Merge-insertion sort also performs fewer comparisons than the sorting numbers, which count the comparisons made by binary insertion sort or merge sort in the worst case. The sorting numbers fluctuate between n log₂ n − 0.915n and n log₂ n − n, with the same leading term but a worse constant factor in the lower-order linear term.
Merge-insertion sort is the sorting algorithm with the minimum possible comparisons for n items whenever n ≤ 22, and it has the fewest comparisons known for n ≤ 46.
For 20 years, merge-insertion sort was the sorting algorithm with the fewest comparisons known for all input lengths.
However, in 1979 Glenn Manacher published another sorting algorithm that used even fewer comparisons, for large enough inputs.
It remains unknown exactly how many comparisons are needed for sorting, for all n, but Manacher's algorithm and later record-breaking sorting algorithms have all used modifications of the merge-insertion sort ideas.
== References == | Wikipedia/Ford–Johnson_algorithm |
Shuffling is a technique used to randomize a deck of playing cards, introducing an element of chance into card games. Various shuffling methods exist, each with its own characteristics and potential for manipulation.
One of the simplest shuffling techniques is the overhand shuffle, where small packets of cards are transferred from one hand to the other. This method is easy to perform but can be manipulated to control the order of cards. Another common technique is the riffle shuffle, where the deck is split into two halves and interleaved. This method is more complex but minimizes the risk of exposing cards. The Gilbert–Shannon–Reeds model suggests that seven riffle shuffles are sufficient to thoroughly randomize a deck, although some studies indicate that six shuffles may be enough.
Other shuffling methods include the Hindu shuffle, commonly used in Asia, and the pile shuffle, where cards are dealt into piles and then stacked. The Mongean shuffle involves a specific sequence of transferring cards between hands, resulting in a predictable order. The faro shuffle, a controlled shuffle used by magicians, involves interweaving two halves of the deck and can restore the original order after several shuffles.
Shuffling can be simulated using algorithms like the Fisher–Yates shuffle, which generates a random permutation of cards. In online gambling, the randomness of shuffling is crucial, and many sites provide descriptions of their shuffling algorithms. Shuffling machines are also used in casinos to increase complexity and prevent predictions. Despite these advances, the mathematics of shuffling continue to be a subject of research, with ongoing debates about the number of shuffles required for true randomization.
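A minimal Perl sketch of the Fisher–Yates shuffle mentioned above (shuffling an array of 52 card indices in place; illustrative, not from the source):
sub fisher_yates_shuffle {
    my ($deck) = @_;                       # takes an array reference, shuffles in place
    for (my $i = $#$deck; $i > 0; $i--) {
        my $j = int rand($i + 1);          # uniform random index in 0 .. $i
        @$deck[$i, $j] = @$deck[$j, $i];   # swap the elements at positions $i and $j
    }
}

my @deck = (1 .. 52);                      # stand-in for a 52-card deck
fisher_yates_shuffle(\@deck);
print "@deck\n";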
== Techniques ==
=== Overhand ===
One of the easiest shuffles to accomplish after a little practice is the overhand shuffle. Johan Jonasson wrote, "The overhand shuffle... is the shuffling technique where you gradually transfer the deck from, say, your right hand to your left hand by sliding off small packets from the top of the deck with your thumb." In detail as normally performed, with the pack initially held in the left hand (say), most of the cards are grasped as a group from the bottom of the pack between the thumb and fingers of the right hand and lifted clear of the small group that remains in the left hand. Small packets are then released from the right hand a packet at a time so that they drop on the top of the pack accumulating in the left hand. The process is repeated several times. The randomness of the whole shuffle is increased by the number of small packets in each shuffle and the number of repeat shuffles performed.
The overhand shuffle offers sufficient opportunity for sleight of hand techniques to be used to affect the ordering of cards, creating a stacked deck. The most common way that players cheat with the overhand shuffle is by having a card at the top or bottom of the pack that they require, and then slipping it to the bottom at the start of a shuffle (if it was on top to start), or leaving it as the last card in a shuffle and just dropping it on top (if it was originally on the bottom of the deck).
=== Riffle ===
A common shuffling technique is called the riffle, or dovetail shuffle or leafing the cards, in which half of the deck is held in each hand with the thumbs inward, then cards are released by the thumbs so that they fall to the table interleaved. Many also lift the cards up after a riffle, forming what is called a bridge which puts the cards back into place; it can also be done by placing the halves flat on the table with their rear corners touching, then lifting the back edges with the thumbs while pushing the halves together. While this method is more difficult, it is often used in casinos because it minimizes the risk of exposing cards during the shuffle. There are two types of perfect riffle shuffles: if the top card moves to be second from the top then it is an in shuffle, otherwise it is known as an out shuffle (which preserves both the top and bottom cards).
The Gilbert–Shannon–Reeds model provides a mathematical model of the random outcomes of riffling that has been shown experimentally to be a good fit to human shuffling and that forms the basis for a recommendation that card decks be riffled seven times in order to randomize them thoroughly. Later, mathematicians Lloyd M. Trefethen and Lloyd N. Trefethen authored a paper using a tweaked version of the Gilbert–Shannon–Reeds model showing that the minimum number of riffles for total randomization could also be six, if the method of defining randomness is changed.
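As a rough illustration, one riffle under the Gilbert–Shannon–Reeds model can be simulated in a few lines of Python (a sketch of the standard description of the model, not any particular paper's code): cut the deck at a binomially distributed position, then repeatedly drop a card from one of the two packets with probability proportional to that packet's current size.

import random

def gsr_riffle(deck):
    # One Gilbert-Shannon-Reeds riffle shuffle.
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) cut point
    left, right = deck[:cut], deck[cut:]
    result, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        a, b = len(left) - i, len(right) - j   # cards remaining in each packet
        if random.random() < a / (a + b):
            result.append(left[i]); i += 1     # drop the next card from the left packet
        else:
            result.append(right[j]); j += 1    # drop the next card from the right packet
    return result

deck = list(range(52))
for _ in range(7):        # seven riffles, as recommended for thorough mixing
    deck = gsr_riffle(deck)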
=== Box ===
Also known as "strip." The deck is held from the top by one hand close to the top of the table, and a pile is stripped off the top of the deck with the other hand and placed on the table. Additional piles are stripped off and placed on top of the previous pile until all cards have been placed onto the new pile. Boxing the cards is functionally the same as an overhand shuffle, however, by keeping the cards close to the table, it is less likely that cards will be accidentally exposed.
=== Hindu ===
Also known as the "Indian", "Kattar", "Kenchi" (Hindi for scissor) or "Kutti Shuffle". The deck is held face down, with the middle finger on one long edge and the thumb on the other on the bottom half of the deck. The other hand draws off a packet from the top of the deck. This packet is allowed to drop into the palm. The maneuver is repeated over and over, with newly drawn packets dropping onto previous ones, until the deck is all in the second hand. Indian shuffle differs from stripping in that all the action is in the hand taking the cards, whereas in stripping, the action is performed by the hand with the original deck, giving the cards to the resulting pile. This is the most common shuffling technique in Asia and other parts of the world, while the overhand shuffle is primarily used in Western countries.
=== Pile ===
Cards are simply dealt out into a number of piles, then the piles are stacked on top of each other. Though this is deterministic and does not randomize the cards at all, it ensures that cards that were next to each other are now separated. Some variations on the pile shuffle attempt to make it slightly random by dealing to the piles in a random order each circuit.
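For illustration, a round-robin pile deal and restack can be written as a short Python sketch (the function name and pile count are hypothetical):

def pile_shuffle(deck, num_piles):
    # Deal cards one at a time into num_piles piles, then stack the piles.
    # Deterministic: neighbours end up separated, but nothing is randomized.
    piles = [deck[i::num_piles] for i in range(num_piles)]
    return [card for pile in piles for card in pile]

print(pile_shuffle(list(range(10)), 3))  # [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]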
=== 52 pickup ===
A person may throw a deck of cards into the air or across a surface, and then pick up the cards in random order, assembled with the cards facing the same direction. If specific cards are observed too closely as they are picked up, an additional 52 pickup or an additional shuffling method may be needed for sufficient randomization. This method is useful for beginners, but the shuffle requires a large clean surface for spreading out the cards, and it may take more time than is desired.
'A game of 52 pickup' is also the name of a child's prank, where one child asks a 'friend' if they want to play 52 pickup. They then throw the cards into the air, and demand the other child 'pick them up'.
=== Corgi ===
This method is similar to 52 pickup and also useful for beginners. Also known as the Chemmy, Irish, wash, scramble, hard shuffle, smooshing, schwirsheling, or washing the cards, this involves simply spreading the cards out face down, and sliding them around and over each other with one's hands. Then the cards are moved into one pile so that they begin to intertwine and are then arranged back into a stack. Statistically random shuffling is achieved after approximately one minute of smooshing. Smooshing has been largely popularized by Simon Hofman.
=== Mongean ===
The Mongean shuffle, or Monge's shuffle, is performed as follows (by a right-handed person): Start with the unshuffled deck in the left hand and transfer the top card to the right. Then repeatedly take the top card from the left hand and transfer it to the right, putting the second card at the top of the new deck, the third at the bottom, the fourth at the top, the fifth at the bottom, etc. The result, if one started with cards numbered consecutively
1, 2, 3, 4, 5, 6, …, 2n, would be a deck with the cards in the following order: 2n, 2n − 2, 2n − 4, …, 4, 2, 1, 3, …, 2n − 3, 2n − 1.
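Since the procedure is fully deterministic, it is easy to express as a short program; the following Python sketch (using a double-ended queue as the growing right-hand pile) reproduces the order above:

from collections import deque

def mongean_shuffle(deck):
    # Transfer cards one at a time, alternately to the top and the bottom of the new pile.
    new_deck = deque()
    for i, card in enumerate(deck):
        if i % 2 == 1:
            new_deck.appendleft(card)  # 2nd, 4th, 6th, ... card goes on top
        else:
            new_deck.append(card)      # 1st card starts the pile; 3rd, 5th, ... go underneath
    return list(new_deck)

print(mongean_shuffle(list(range(1, 9))))  # [8, 6, 4, 2, 1, 3, 5, 7]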
=== Faro ===
Weaving is the procedure of pushing the ends of two halves of a deck against each other in such a way that they naturally intertwine. Sometimes the deck is split into equal halves of 26 cards which are then pushed together in a certain way so as to make them perfectly interweave. This is known as a Faro Shuffle.
The faro shuffle is performed by cutting the deck into two, preferably equal, packs in both hands as follows (right-handed):
The cards are held from above in the right hand and from below in the left hand. Separation of the deck is done simply by lifting up half the cards with the right-hand thumb slightly and pushing the left hand's packet forward away from the right hand. The two packets are often crossed and tapped against each other to align them. They are then pushed together by the short sides and bent (either up or down). The cards then alternately fall into each other, much like a zipper. A flourish can be added by springing the packets together by applying pressure and bending them from above, known as the bridge finish. The faro is a controlled shuffle which does not randomize a deck when performed properly.
A perfect faro shuffle, where the cards are perfectly alternated, is considered one of the most difficult sleights by card magicians, simply because it requires the shuffler to be able to cut the deck into two equal packets and apply just the right amount of pressure when pushing the cards into each other. Performing eight perfect faro shuffles in a row restores the order of the deck to the original order only if there are 52 cards in the deck and if the original top and bottom cards remain in their positions (1st and 52nd) during the eight shuffles. If the top and bottom cards are weaved in during each shuffle, it takes 52 shuffles to return the deck back into original order (or 26 shuffles to reverse the order).
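The eight-shuffle property can be checked directly; here is a small Python sketch of a perfect out-shuffle (top and bottom cards kept in place) applied eight times to a 52-card deck:

def faro_out_shuffle(deck):
    # Perfect out-shuffle: split into equal halves and interleave them,
    # keeping the original top and bottom cards in place (even-sized deck assumed).
    half = len(deck) // 2
    result = []
    for a, b in zip(deck[:half], deck[half:]):
        result.extend([a, b])
    return result

deck = list(range(52))
shuffled = deck
for _ in range(8):
    shuffled = faro_out_shuffle(shuffled)
print(shuffled == deck)  # True: eight perfect out-shuffles restore the original order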
=== Mexican spiral ===
The Mexican spiral shuffle is performed by cyclic actions of moving the top card onto the table, then the new top card under the deck, the next onto the table, next under the deck, and so on until the last card is dealt onto the table. It takes quite a long time, compared with riffle or overhand shuffles, but allows other players to fully control cards which are on the table. The Mexican spiral shuffle was popular at the end of the 19th century in some areas of Mexico as a protection from gamblers and con men arriving from the United States.
=== Team shuffle ===
Especially useful for large decks, a shuffler may divide a deck into two or more smaller decks, and give the other portion(s) to (an)other shuffler(s), each to choose their own shuffling method(s). Smaller decks or portions of smaller decks may be traded around as shuffling continues, then the smaller decks are combined (and briefly shuffled) into the original large deck. This also prevents one shuffler having unfair control of the randomization.
=== Cut ===
Typically performed after a previous shuffling method, the cut consists simply of taking a deck, dividing it into two portions of random size, and putting the previously lower portion on top of the previously higher portion. This is occasionally performed by a second shuffler, for additional assurance of randomization, and to prevent either the shuffler or an observer from knowing the top or bottom card.
== Faking ==
Magicians, sleight-of-hand artists, and card cheats employ various methods of shuffling whereby the deck appears to have been shuffled fairly, when in reality one or more cards (up to and including the entire deck) stays in the same position. It is also possible, though generally considered very difficult, to "stack the deck" (place cards into a desirable order) by means of one or more riffle shuffles; this is called "riffle stacking".
Both performance magicians and card sharps regard the Zarrow shuffle and the Push-Through-False-Shuffle as particularly effective examples of the false shuffle. In these shuffles, the entire deck remains in its original order, although spectators think they see an honest riffle shuffle.
== Machines ==
Casinos often equip their tables with shuffling machines instead of having croupiers shuffle the cards, as it gives the casino a few advantages, including an increased complexity to the shuffle and therefore an increased difficulty for players to make predictions, even if they are collaborating with croupiers. The shuffling machines are carefully designed to avoid biasing the shuffle and are typically computer-controlled. Shuffling machines also save time that would otherwise be wasted on manual shuffling, thereby increasing the profitability of the table. These machines are also used to lessen repetitive-motion-stress injuries to a dealer.
Players with superstitions often regard with suspicion any electronic equipment, so casinos sometimes still have the croupiers perform the shuffling at tables that typically attract those crowds (e.g., baccarat tables). Additionally, casinos replace their decks at regular intervals; even if a shuffling machine is being used, the croupier usually manually shuffles the replacement decks before placing them into the machine.
== Randomization ==
There are 52 factorial (expressed in shorthand as 52!) possible orderings of the cards in a 52-card deck. In other words, there are 52 × 51 × 50 × 49 × ··· × 4 × 3 × 2 × 1 possible combinations of card sequence. This is approximately 8.0658×10^67 (80,658 vigintillion) possible orderings, or specifically 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000. The magnitude of this number means that it is exceedingly improbable that two randomly selected, truly randomized decks will be the same. However, while the exact sequence of all cards in a randomized deck is unpredictable, it may be possible to make some probabilistic predictions about a deck that is not sufficiently randomized.
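The figure is straightforward to reproduce, for example in Python:

import math

orderings = math.factorial(52)
print(orderings)            # 80658175170943878571660636856403766975289505440883277824000000000000
print(f"{orderings:.4e}")   # 8.0658e+67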
=== Sufficiency ===
The number of shuffles that are sufficient for a "good" level of randomness depends on the type of shuffle and the measure of "good enough randomness", which in turn depends on the game in question. For most games, four to seven riffle shuffles are sufficient: for unsuited games such as blackjack, four riffle shuffles are sufficient, while for suited games, seven riffle shuffles are necessary. There are some games, however, for which even seven riffle shuffles are insufficient.
In practice the number of shuffles required depends both on the quality of the shuffle and how significant non-randomness is, particularly how good the people playing are at noticing and using non-randomness. Two to four shuffles is good enough for casual play. But in club play, good bridge players take advantage of non-randomness after four shuffles, and top blackjack players supposedly track aces through the deck; this is known as "ace tracking", or more generally, as "shuffle tracking".
=== Research ===
Following early research at Bell Labs, which was abandoned in 1955, the question of how many shuffles was required remained open until 1990, when it was convincingly solved as seven shuffles, as elaborated below. Some results preceded this, and refinements have continued since.
A leading figure in the mathematics of shuffling is mathematician and magician Persi Diaconis, who began studying the question around 1970, and has authored many papers in the 1980s, 1990s, and 2000s on the subject with numerous co-authors. Most famous is (Bayer & Diaconis 1992), co-authored with mathematician Dave Bayer, which analyzed the Gilbert–Shannon–Reeds model of random riffle shuffling and concluded that the deck did not start to become random until five good riffle shuffles, and was truly random after seven, in the precise sense of variation distance described in Markov chain mixing time; of course, you would need more shuffles if your shuffling technique is poor. Recently, the work of Trefethen et al. has questioned some of Diaconis' results, concluding that six shuffles are enough. The difference hinges on how each measured the randomness of the deck. Diaconis used a very sensitive test of randomness, and therefore needed to shuffle more. Even more sensitive measures exist, and the question of what measure is best for specific card games is still open. Diaconis released a response indicating that you only need four shuffles for un-suited games such as blackjack.
On the other hand, variation distance may be too forgiving a measure and seven riffle shuffles may be many too few. For example, seven shuffles of a new deck leaves an 81% probability of winning New Age Solitaire where the probability is 50% with a uniform random deck. One sensitive test for randomness uses a standard deck without the jokers divided into suits with two suits in ascending order from ace to king, and the other two suits in reverse. (Many decks already come ordered this way when new.) After shuffling, the measure of randomness is the number of rising sequences that are left in each suit.
== Algorithms ==
If a computer has access to purely random numbers, it is capable of generating a "perfect shuffle", a random permutation of the cards; beware that this terminology (an algorithm that perfectly randomizes the deck) differs from "a perfectly executed single shuffle", notably a perfectly interleaving faro shuffle. The Fisher–Yates shuffle, popularized by Donald Knuth, is a simple (a few lines of code) and efficient (O(n) on an n-card deck, assuming constant time for fundamental steps) algorithm for doing this. Shuffling can be seen as the opposite of sorting.
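A minimal Python version of the Fisher–Yates shuffle (assuming an unbiased random number source) looks like this:

import random

def fisher_yates_shuffle(deck):
    # Walk backwards through the deck, swapping each position with a
    # uniformly chosen position at or before it; every permutation is equally likely.
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = fisher_yates_shuffle(list(range(52)))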
A new alternative to Fisher–Yates, which does not use any array memory operations, is the use of a Pseudo Random Index Generator (PRIG) function algorithm.
There are other, less-desirable algorithms in common use. For example, one can assign a random number to each card, and then sort the cards in order of their random numbers. This will generate a random permutation, unless any of the random numbers generated are the same as any others (i.e. pairs, triplets etc.). This can be eliminated either by adjusting one of the pair's values randomly up or down by a small amount, or reduced to an arbitrarily low probability by choosing a sufficiently wide range of random number choices. If using efficient sorting such as mergesort or heapsort this is an O(n log n) average and worst-case algorithm.
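A sketch of this assign-and-sort approach in Python (ignoring the unlikely event of duplicate keys, which a careful implementation would have to handle as described above):

import random

def sort_key_shuffle(deck):
    # Tag every card with a random key, then sort by the keys.
    keyed = [(random.random(), card) for card in deck]
    keyed.sort()                       # O(n log n), dominated by the sort
    return [card for _, card in keyed]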
=== Online gambling ===
These issues are of considerable commercial importance in online gambling, where the randomness of the shuffling of packs of simulated cards for online card games is crucial. For this reason, many online gambling sites provide descriptions of their shuffling algorithms and the sources of randomness used to drive these algorithms, with some gambling sites also providing auditors' reports of the performance of their systems.
== See also ==
Card manipulation
List of card manipulation techniques
Mental poker
Cheating at poker
Solitaire (cipher)
== References ==
=== Footnotes ===
== External links ==
Physical card shuffling:
Illustrated guide to several shuffling methods
Magician's tool with much shuffling simulation
Mathematics of shuffling:
Real World Shuffling In Practice
Shuffle - MathWorld - Wolfram Research
Ivars Peterson's MathTrek: Card Shuffling Shenanigans
Real world (historical) application:
How We Learned to Cheat at Online Poker: A Study in Software Security | Wikipedia/Shuffling_algorithm |
The pairwise sorting network is a sorting network discovered and published by Ian Parberry in 1992 in Parallel Processing Letters. The pairwise sorting network has the same size (number of comparators) and depth as the odd–even mergesort network. At the time of publication, the network was one of several known networks with a depth of
O(log² n). It requires n(log n)(log n − 1)/4 + n − 1 comparators and has depth (log n)(log n + 1)/2.
The sorting procedure implemented by the network is as follows (guided by the zero-one principle):
Sort consecutive pairwise bits of the input (corresponds to the first layer of the diagram)
Sort all pairs into lexicographic order by recursively sorting all odd bits and even bits separately (corresponds to the next three layers of 2+4+8 columns of the diagram)
Sort the pairs in nondecreasing order using a specialized network (corresponds to the final layers of the diagram)
== Relation to Batcher odd-even mergesort ==
The pairwise sorting network is very similar to the Batcher odd-even mergesort, but differs in the structure of operations. While Batcher repeatedly divides, sorts and merges increasingly longer subsequences, the pairwise method does all the subdivision first, then does all the merging at the end in the reverse sequence. In certain applications like encoding cardinality constraints, the pairwise sorting network is superior to the Batcher network.
== References ==
== External links ==
Sorting Networks – Archive of web page by the author. | Wikipedia/Pairwise_sorting_network |
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
== Introduction ==
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounter a comparator, the comparator swaps the values if and only if the top wire's value is greater or equal to the bottom wire's value.
In a formula, if the top wire carries x and the bottom wire carries y, then after hitting a comparator the wires carry
x′ = min(x, y) and y′ = max(x, y), respectively, so the pair of values is sorted.: 635 
A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network. By reflecting the network, it is also possible to sort all inputs into descending order.
The full operation of a simple sorting network is shown below. It is evident why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator sorts out the middle two wires.
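A standard four-input network with exactly this structure (four comparators that float the minimum and sink the maximum, then one for the middle wires) can be checked exhaustively with a short Python sketch:

from itertools import permutations

def apply_network(comparators, values):
    values = list(values)
    for i, j in comparators:
        if values[i] > values[j]:          # comparator: smaller value goes to the top wire
            values[i], values[j] = values[j], values[i]
    return values

network_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(all(apply_network(network_4, p) == sorted(p)
          for p in permutations(range(4))))  # True: all 4! inputs are sorted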
=== Depth and efficiency ===
The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth, defined (informally) as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel (represented in the graphical notation by comparators that lie on the same vertical line), and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it.: 636–637
=== Insertion and Bubble networks ===
We can easily construct a network of any size recursively using the principles of insertion and selection. Assuming we have a sorting network of size n, we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet (using the principle underlying insertion sort). We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively (using the principle underlying bubble sort).
The structure of these two sorting networks are very similar. A construction of the two different variants, which collapses together comparators that can be performed simultaneously shows that, in fact, they are identical.
The insertion network (or equivalently, bubble network) has a depth of 2n - 3, where n is the number of values. This is better than the O(n log n) time needed by random-access machines, but it turns out that there are much more efficient sorting networks with a depth of just O(log2 n), as described below.
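The recursive construction of the insertion network's comparator list can be sketched in Python (wire indices are 0-based; the chain of adjacent comparators added at each level "inserts" the last wire into the already-sorted prefix):

def insertion_network(n):
    # Comparators for sorting n wires: sort the first n-1 wires recursively,
    # then bubble wire n-1 into place with a chain of adjacent comparators.
    if n <= 1:
        return []
    return insertion_network(n - 1) + [(i, i + 1) for i in range(n - 2, -1, -1)]

print(insertion_network(4))
# [(0, 1), (1, 2), (0, 1), (2, 3), (1, 2), (0, 1)] -- n(n-1)/2 = 6 comparators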
=== Zero-one principle ===
While it is easy to prove the validity of some sorting networks (like the insertion/bubble sorter), it is not always so easy. There are n! permutations of numbers in an n-wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2n, using the so-called zero-one principle. While still exponential, this is smaller than n! for all n ≥ 4, and the difference grows quite quickly with increasing n.
The zero-one principle states that, if a sorting network can correctly sort all 2n sequences of zeros and ones, then it is also valid for arbitrary ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well.
The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f(x) and f(y), then the comparator produces min(f(x), f(y)) = f(min(x, y)) and max(f(x), f(y)) = f(max(x, y)). By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a1, ..., an into b1, ..., bn, it will transform f(a1), ..., f(an) into f(b1), ..., f(bn). Suppose that some input a1, ..., an contains two items ai < aj, and the network incorrectly swaps these in the output. Then it will also incorrectly sort f(a1), ..., f(an) for the function
f(x) = 1 if x > ai, and f(x) = 0 otherwise.
This function is monotonic, so we have the zero-one principle as the contrapositive.: 640–641
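The principle translates directly into a brute-force validity test; the following Python sketch checks a candidate comparator list against all 2^n zero-one inputs:

from itertools import product

def is_sorting_network(comparators, n):
    # Zero-one principle: the network sorts every input iff it sorts
    # every sequence of zeros and ones of length n.
    for bits in product((0, 1), repeat=n):
        values = list(bits)
        for i, j in comparators:
            if values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
        if any(values[k] > values[k + 1] for k in range(n - 1)):
            return False
    return True

print(is_sorting_network([(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)], 4))  # True
print(is_sorting_network([(0, 1), (2, 3), (0, 2)], 4))                  # False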
== Constructing sorting networks ==
Various algorithms exist to construct sorting networks of depth O(log2 n) (hence size O(n log2 n)) such as Batcher odd–even mergesort, bitonic sort, Shell sort, and the Pairwise sorting network. These networks are often used in practice.
It is also possible to construct networks of depth O(log n) (hence size O(n log n)) using a construction called the AKS network, after its discoverers Ajtai, Komlós, and Szemerédi. While an important theoretical discovery, the AKS network has very limited practical application because of the large linear constant hidden by the Big-O notation.: 653 These large constants are partly due to the construction's use of expander graphs.
A simplified version of the AKS network was described by Paterson in 1990, who noted that "the constants obtained for the depth bound still prevent the construction being of practical value".
A more recent construction called the zig-zag sorting network of size O(n log n) was discovered by Goodrich in 2014. While its size is much smaller than that of AKS networks, its depth O(n log n) makes it unsuitable for a parallel implementation.
=== Optimal sorting networks ===
For small, fixed numbers of inputs n, optimal sorting networks can be constructed, with either minimal depth (for maximally parallel execution) or minimal size (number of comparators). These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. The following table summarizes the optimality results for small networks for which the optimal depth is known:
For larger networks neither the optimal depth nor the optimal size are currently known. The bounds known so far are provided in the table below:
The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming, and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014 (the cases nine and ten having been decided in 1991).
For one to twelve inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes S(n) can be derived inductively using a lemma due to Van Voorhis (p. 240): S(n) ≥ S(n − 1) + ⌈log₂ n⌉. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved.
The optimality of the smallest known sorting networks for n = 11 and n = 12 was resolved in 2020.
Some work in designing optimal sorting network has been done using genetic algorithms: D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding" (p. 226), and that the minimum depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods" (p. 229).
=== Complexity of testing sorting networks ===
Unless P=NP, the problem of testing whether a candidate network is a sorting network is likely to remain difficult for networks of large sizes, due to the problem being co-NP-complete.
== References ==
Angel, O.; Holroyd, A. E.; Romik, D.; Virág, B. (2007). "Random sorting networks". Advances in Mathematics. 215 (2): 839–868. arXiv:math/0609538. doi:10.1016/j.aim.2007.05.019.
== External links ==
List of smallest sorting networks for given number of inputs
Sorting Networks
CHAPTER 28: SORTING NETWORKS
Sorting Networks
Tool for generating and graphing sorting networks
Sorting networks and the END algorithm
Lipton, Richard J.; Regan, Ken (24 April 2014). "Galactic Sorting Networks". Gödel’s Lost Letter and P=NP.
Sorting Networks validity | Wikipedia/Sorting_network |
In computer programming, the act of swapping two variables refers to mutually exchanging the values of the variables. Usually, this is done with the data in memory. For example, in a program, two variables may be defined thus (in pseudocode):
data_item x := 1
data_item y := 0
swap (x, y);
After swap() is performed, x will contain the value 0 and y will contain 1; their values have been exchanged. This operation may be generalized to other types of values, such as strings and aggregated data types. Comparison sorts use swaps to change the positions of data.
In many programming languages the swap function is built-in. In C++ overloads are provided allowing std::swap to exchange some large structures in O(1) time.
== Using a temporary variable ==
The simplest and probably most widely used method to swap two variables is to use a third temporary variable:
define swap (x, y)
temp := x
x := y
y := temp
While this is conceptually simple and in many cases the only convenient way to swap two variables, it uses extra memory. Although this should not be a problem in most applications, the sizes of the values being swapped may be huge (which means the temporary variable may occupy a lot of memory as well), or the swap operation may need to be performed many times, as in sorting algorithms.
In addition, swapping two variables in object-oriented languages such as C++ may involve one call to the class constructor and destructor for the temporary variable, and three calls to the copy constructor. Some classes may allocate memory in the constructor and deallocate it in the destructor, thus creating expensive calls to the system. Copy constructors for classes containing a lot of data, e.g. in an array, may even need to copy the data manually.
== XOR swap ==
XOR swap uses the XOR operation to swap two numeric variables. It is generally touted to be faster than the naive method mentioned above; however it does have disadvantages. XOR swap is generally used to swap low-level data types, like integers. However, it is, in theory, capable of swapping any two values which can be represented by fixed-length bitstrings.
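As an illustration (written in Python, where the idiomatic swap would simply be a, b = b, a), the three XOR steps look like this:

def xor_swap(a, b):
    # Swap two integers using XOR; no temporary variable is needed.
    a ^= b   # a now holds a XOR b
    b ^= a   # b now holds the original a
    a ^= b   # a now holds the original b
    return a, b

print(xor_swap(3, 5))  # (5, 3)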
== Swap through addition and subtraction ==
This method swaps two variables by adding and subtracting their values; a short sketch follows the list below. It is rarely used in practical applications, mainly because:
It can only swap numeric variables; it may not be possible or logical to add or subtract complex data types, like containers.
When swapping variables of a fixed size, arithmetic overflow becomes an issue.
It does not work generally for floating-point values, because floating-point arithmetic is non-associative.
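A minimal Python sketch of the idea (Python integers are arbitrary-precision, so the overflow caveat above does not apply here):

def add_sub_swap(a, b):
    # Swap two numbers using addition and subtraction, without a temporary.
    a = a + b
    b = a - b   # b now holds the original a
    a = a - b   # a now holds the original b
    return a, b

print(add_sub_swap(3, 5))  # (5, 3)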
== Swapping containers ==
Containers which allocate memory from the heap using pointers may be swapped in a single operation, by swapping the pointers alone. This is usually found in programming languages supporting pointers, like C or C++. The Standard Template Library overloads its built-in swap function to exchange the contents of containers efficiently this way.
As pointer variables are usually of a fixed size (e.g., most desktop computers have pointers 64 bits long), and they are numeric, they can be swapped quickly using XOR swap.
== Parallel assignment ==
Some languages, like Ruby, Julia or Python support parallel assignments, which simplifies the notation for swapping two variables:
a, b = b, a
This is shorthand for an operation involving an intermediate data structure: in Python and Julia, a tuple; in Ruby, an array.
JavaScript (ES6 and later) supports destructuring assignment, which does the same thing:
[a, b] = [b, a];
== Function swap ==
Here, two globally scoped variables are passed by value through a function, eliminating the need for a temporary storage variable.
== Facilitation of swapping in modern computers ==
=== Dedicated instructions ===
Because of the many applications of swapping data in computers, most processors now provide the ability to swap variables directly through built-in instructions. x86 processors, for example, include an XCHG instruction to swap two registers directly without requiring that a third temporary register is used. A compare-and-swap instruction is even provided in some processor architectures, which compares and conditionally swaps two registers. This is used to support mutual exclusion techniques.
XCHG may not be as efficient as it seems. For example, in x86 processors, XCHG will implicitly lock access to any operands in memory to ensure the operation is atomic, and so may not be efficient when swapping memory. Such locking is important when it is used to implement thread-safe synchronization, as in mutexes. However, an XCHG is usually the fastest way to swap two machine-size words residing in registers. Register renaming may also be used to swap registers efficiently.
=== Parallel execution ===
With the advent of instruction pipelining in modern computers and multi-core processors facilitating parallel computing, two or more operations can be performed at once. This can speed up the swap using temporary variables and give it an edge over other algorithms. For example, the XOR swap algorithm requires sequential execution of three instructions. However, using two temporary registers, two processors executing in parallel can swap two variables in two clock cycles:
Step 1
Processor 1: temp_1 := X
Processor 2: temp_2 := Y
Step 2
Processor 1: X := temp_2
Processor 2: Y := temp_1
More temporary registers are used, and four instructions are needed instead of three. In any case, in practice this could not be implemented in separate processors, as it violates Bernstein's conditions for parallel computing. It would be infeasible to keep the processors sufficiently in sync with one another for this swap to have any significant advantage over traditional versions. However, it can be used to optimize swapping for a single processor with multiple load/store units.
== References == | Wikipedia/Swap_(computer_science) |
Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.
== Terminology ==
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe.
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology.
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.
== History ==
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others.
=== Classic cryptography ===
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.
In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries.
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.
Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic cipher that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.
=== Early computer-era cryptography ===
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA is secure, and some other systems, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used, and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible, but they are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so when specifying key lengths, the required key lengths are similarly advancing. The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.
== Modern cryptography ==
Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography. His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis", and as having turned cryptography from an "art to a science". As a result of his contributions and work, he has been described as the "founding father of modern cryptography".
Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics. Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
=== Symmetric-key cryptography ===
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a Pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
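The following minimal Python sketch illustrates the keystream idea described above. It derives a keystream by hashing a secret key together with a block counter (a stand-in for a real block cipher in counter mode, used here only so the example needs no external libraries) and XORs it with the plaintext; the key value and block size are illustrative assumptions, not part of any standard.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: hash(key || counter) per 32-byte block.
    Illustrative only -- a real design would use a vetted block
    cipher (e.g. AES) in counter mode, not a bare hash."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor_with_keystream(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"an example secret key"
ciphertext = xor_with_keystream(key, b"attack at dawn")
plaintext = xor_with_keystream(key, ciphertext)   # XOR again to decrypt
assert plaintext == b"attack at dawn"
```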
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions themselves are a third type of cryptographic algorithm: they take a message of any length as input and output a short, fixed-length hash, which can be used in (for example) a digital signature, and for a good hash function an attacker cannot find two messages that produce the same hash. Hash functions are discussed in more detail in the Cryptographic hash functions section below.
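As a concrete illustration of a MAC, the sketch below uses Python's standard hmac and hashlib modules to compute and verify HMAC-SHA256 tags; the key and message values are made up for the example.

```python
import hmac
import hashlib

key = b"shared secret key"          # known only to sender and receiver
message = b"transfer 100 coins to Bob"

# Sender computes a tag over the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).digest()
print("authentic:", hmac.compare_digest(tag, expected))   # True

# Any modification of the message invalidates the tag.
forged = hmac.new(key, b"transfer 999 coins to Eve", hashlib.sha256).digest()
print("forgery accepted:", hmac.compare_digest(tag, forged))   # False
```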
=== Public-key cryptography ===
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.
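A toy version of the Diffie–Hellman exchange mentioned above can be written in a few lines of Python; the tiny prime and base used here are illustrative only, whereas real deployments use groups with primes of 2048 bits or more (or elliptic curve groups).

```python
import secrets

# Public parameters (deliberately tiny for illustration; insecure in practice).
p = 0xFFFFFFFB          # a small prime
g = 5                   # a small public base (not a vetted group generator)

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)        # Alice's public value
B = pow(g, b, p)        # Bob's public value

# Both sides derive the same shared secret without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```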
The X.509 standard defines the most commonly used format for public key certificates.
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.
Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).
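The sign/verify split described above can be illustrated with "textbook" RSA on deliberately tiny numbers; the primes, exponents, and the trick of reducing the digest modulo n are purely illustrative assumptions, and real signature schemes use key sizes of 2048 bits or more together with a padding scheme such as PSS.

```python
import hashlib

# Toy RSA key (insecure, illustration only).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def digest_mod_n(message: bytes) -> int:
    # Real schemes pad the full digest; here we shrink it to fit the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"I owe Alice 10 coins"
signature = pow(digest_mod_n(message), d, n)        # signing uses the private key

# Verification uses only the public key (e, n).
print(pow(signature, e, n) == digest_mod_n(message))            # True
print(pow(signature, e, n) == digest_mod_n(b"I owe Alice 0"))   # False
```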
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.
=== Cryptographic hash functions ===
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
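The properties listed above are easy to observe experimentally with Python's standard hashlib module: flipping a single character of the input yields a completely unrelated digest, and nothing about the digest reveals the input.

```python
import hashlib

h1 = hashlib.sha256(b"the quick brown fox").hexdigest()
h2 = hashlib.sha256(b"the quick brown foX").hexdigest()   # one character changed

print(h1)
print(h2)
# The two digests share no useful structure (the "avalanche effect"), and
# recovering either input from its digest would require a preimage attack.
```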
=== Cryptanalysis ===
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
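A one-time pad is simple to implement, which is exactly why its security rests entirely on the key-handling conditions Shannon stated; the sketch below (with an illustrative message) is information-theoretically secure only if the key is truly random, as long as the message, kept secret, and never reused.

```python
import os

message = b"meet at the old bridge at noon"
key = os.urandom(len(message))        # truly random, same length as the message

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
# Reusing 'key' for a second message would break the scheme:
# XORing the two ciphertexts would cancel the key and leak information.
```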
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks.
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are usually employed due to being more cost-effective and feasible to perform in a reasonable amount of time compared to pure cryptanalysis by a high margin.
=== Cryptographic primitives ===
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems, is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
=== Cryptosystems ===
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems, (like zero-knowledge proofs) and systems for secret sharing.
=== Lightweight cryptography ===
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for strictly constrained environments. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited for such environments. An IoT environment imposes strict constraints on power consumption, processing power, and security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to meet the standard set by the National Institute of Standards and Technology.
== Applications ==
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys. However, some software, such as BitLocker and VeraCrypt, is generally not based on public-private key cryptography: VeraCrypt, for example, uses a password hash to generate the single private key, although it can also be configured to work with public-private key systems. The open-source encryption library OpenSSL provides free encryption software and tools. The most commonly used encryption cipher suite is AES, as it has hardware acceleration on all x86-based processors that support AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher commonly used on mobile devices, which are mostly ARM-based and often lack the AES-NI instruction set extension.
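As a sketch of how these ciphers are typically used in practice, the example below uses the third-party pyca/cryptography package (an assumption; it is not part of the standard library) to perform authenticated encryption with both AES-GCM and ChaCha20-Poly1305.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

data = b"example payload"
aad = b"unencrypted but authenticated header"

# AES-256-GCM: fast where AES-NI hardware support is available.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                       # never reuse a nonce with the same key
ct = AESGCM(aes_key).encrypt(nonce, data, aad)
assert AESGCM(aes_key).decrypt(nonce, ct, aad) == data

# ChaCha20-Poly1305: a common choice on CPUs without AES acceleration.
chacha_key = ChaCha20Poly1305.generate_key()
nonce2 = os.urandom(12)
ct2 = ChaCha20Poly1305(chacha_key).encrypt(nonce2, data, aad)
assert ChaCha20Poly1305(chacha_key).decrypt(nonce2, ct2, aad) == data
```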
=== Cybersecurity ===
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.
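The password-checking flow described above can be sketched with Python's standard hashlib, using a salted, iterated hash (PBKDF2) so that identical passwords do not produce identical records and brute-force guessing is slowed; the iteration count is an illustrative value.

```python
import os
import hashlib
import hmac

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest            # store both; the plaintext password is never kept

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```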
Full-disk encryption is sometimes used to protect an entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.
=== Cryptocurrencies and cryptoeconomics ===
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which finance cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash function, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP).
== Legal issues ==
=== Prohibitions ===
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
=== Export controls ===
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
=== NSA involvement ===
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
=== Digital rights management ===
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
=== Forced disclosure of encryption keys ===
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped).
== See also ==
Collision attack
Comparison of cryptography libraries
Cryptovirology – Securing and encrypting virology
Crypto Wars – Attempts to limit access to strong cryptography
Encyclopedia of Cryptography and Security – Book by Technische Universiteit Eindhoven
Global surveillance – Mass surveillance across national borders
Indistinguishability obfuscation – Type of cryptographic software obfuscation
Information theory – Scientific study of digital information
Outline of cryptography
List of cryptographers – A list of historical mathematicians
List of multiple discoveries
List of unsolved problems in computer science – List of unsolved computational problems
Pre-shared key – Method to set encryption keys
Secure cryptoprocessor
Strong cryptography – Term applied to cryptographic systems that are highly resistant to cryptanalysis
Syllabical and Steganographical Table – Eighteenth-century work believed to be the first cryptography chart
World Wide Web Consortium's Web Cryptography API – World Wide Web Consortium cryptography standard
== References ==
== Further reading ==
== External links ==
The dictionary definition of cryptography at Wiktionary
Media related to Cryptography at Wikimedia Commons
Cryptography on In Our Time at the BBC
Crypto Glossary and Dictionary of Technical Cryptography Archived 4 July 2022 at the Wayback Machine
A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes.
For more on the use of cryptographic elements in fiction, see: Dooley, John F. (23 August 2012). "Cryptology in Fiction". Archived from the original on 29 July 2020. Retrieved 20 February 2015.
The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature, publications relating to cryptography. | Wikipedia/cryptography |
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including:
SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1.
SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2: A family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words where SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family.
The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has updated Draft FIPS Publication 202, SHA-3 Standard separate from the Secure Hash Standard (SHS).
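All of the currently approved members of the family are available in Python's standard hashlib module, which makes their differing digest lengths easy to see; the input string is arbitrary.

```python
import hashlib

msg = b"abc"
for name in ("sha1", "sha256", "sha384", "sha512", "sha3_256", "sha3_512"):
    h = hashlib.new(name, msg)
    print(f"{name:>8}: {h.digest_size * 8:>3}-bit digest  {h.hexdigest()[:16]}...")
```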
== Comparison of SHA functions ==
In the table below, internal state means the "internal hash sum" after each compression of a data block.
== Validation ==
All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
== References == | Wikipedia/Secure_Hash_Algorithm |
Shorthand is an abbreviated symbolic writing method that increases speed and brevity of writing as compared to longhand, a more common method of writing a language. The process of writing in shorthand is called stenography, from the Greek stenos (narrow) and graphein (to write). It has also been called brachygraphy, from Greek brachys (short), and tachygraphy, from Greek tachys (swift, speedy), depending on whether compression or speed of writing is the goal.
Many forms of shorthand exist. A typical shorthand system provides symbols or abbreviations for words and common phrases, which can allow someone well-trained in the system to write as quickly as people speak. Abbreviation methods are alphabet-based and use different abbreviating approaches. Many journalists use shorthand writing to quickly take notes at press conferences or other similar scenarios. In the computerized world, several autocomplete programs, standalone or integrated in text editors, based on word lists, also include a shorthand function for frequently used phrases.
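A minimal illustration of the "shorthand function" found in such autocomplete tools is a lookup table that expands short abbreviations into frequently used phrases; the abbreviations below are invented for the example.

```python
# Toy text expander: replaces standalone abbreviations with full phrases.
EXPANSIONS = {
    "asap": "as soon as possible",
    "brb": "be right back",
    "fyi": "for your information",
}

def expand(text: str) -> str:
    return " ".join(EXPANSIONS.get(word.lower(), word) for word in text.split())

print(expand("fyi I will reply asap"))
# -> "for your information I will reply as soon as possible"
```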
Shorthand was used more widely in the past, before the invention of recording and dictation machines. Shorthand was considered an essential part of secretarial training and police work and was useful for journalists. Although the primary use of shorthand has been to record oral dictation and other types of verbal communication, some systems are used for compact expression. For example, healthcare professionals might use shorthand notes in medical charts and correspondence. Shorthand notes were typically temporary, intended either for immediate use or for later typing, data entry, or (mainly historically) transcription to longhand. Longer-term uses do exist, such as encipherment; diaries (like that of Samuel Pepys) are a common example.
== History ==
=== Classical antiquity ===
The earliest known indication of shorthand systems is from the Parthenon in Ancient Greece, where a mid-4th century BC inscribed marble slab was found. This shows a writing system primarily based on vowels, using certain modifications to indicate consonants. Hellenistic tachygraphy is reported from the 2nd century BC onwards, though there are indications that it might be older. The oldest datable reference is a contract from Middle Egypt, stating that Oxyrhynchos gives the "semeiographer" Apollonios for two years to be taught shorthand writing. Hellenistic tachygraphy consisted of word stem signs and word ending signs. Over time, many syllabic signs were developed.
In Ancient Rome, Marcus Tullius Tiro (103–4 BC), a slave and later a freedman of Cicero, developed the Tironian notes so that he could write down Cicero's speeches. Plutarch (c. 46 – c. 120 AD) in his "Life of Cato the Younger" (95–46 BC) records that Cicero, during a trial of some insurrectionists in the senate, employed several expert rapid writers, whom he had taught to make figures comprising numerous words in a few short strokes, to preserve Cato's speech on this occasion. The Tironian notes consisted of Latin word stem abbreviations (notae) and of word ending abbreviations (titulae). The original Tironian notes consisted of about 4,000 signs, but new signs were introduced, so that their number might increase to as many as 13,000. In order to have a less complex writing system, a syllabic shorthand script was sometimes used. After the decline of the Roman Empire, the Tironian notes were no longer used to transcribe speeches, though they were still known and taught, particularly during the Carolingian Renaissance. After the 11th century, however, they were mostly forgotten.
When many monastery libraries were secularized in the course of the 16th-century Protestant Reformation, long-forgotten manuscripts of Tironian notes were rediscovered.
=== Imperial China ===
In imperial China, clerks used an abbreviated, highly cursive form of Chinese characters to record court proceedings and criminal confessions. These records were used to create more formal transcripts. One cornerstone of imperial court proceedings was that all confessions had to be acknowledged by the accused's signature, personal seal, or thumbprint, requiring fast writing. Versions of this technique survived in clerical professions into the modern day and, influenced by Western shorthand methods, some new methods were invented.
=== Europe and North America ===
An interest in shorthand or "short-writing" developed towards the end of the 16th century in England. In 1588, Timothy Bright published his Characterie; An Arte of Shorte, Swifte and Secrete Writing by Character which introduced a system with 500 arbitrary symbols each representing one word. Bright's book was followed by a number of others, including Peter Bales' The Writing Schoolemaster in 1590, John Willis's Art of Stenography in 1602, Edmond Willis's An abbreviation of writing by character in 1618, and Thomas Shelton's Short Writing in 1626 (later re-issued as Tachygraphy).
Shelton's system became very popular and is well known because it was used by Samuel Pepys for his diary and for many of his official papers, such as his letter copy books. It was also used by Isaac Newton in some of his notebooks. Shelton borrowed heavily from his predecessors, especially Edmond Willis. Each consonant was represented by an arbitrary but simple symbol, while the five vowels were represented by the relative positions of the surrounding consonants. Thus the symbol for B with symbol for T drawn directly above it represented "bat", while B with T below it meant "but"; top-right represented "e", middle-right "i", and lower-right "o". A vowel at the end of a word was represented by a dot in the appropriate position, while there were additional symbols for initial vowels. This basic system was supplemented by further symbols representing common prefixes and suffixes.
One drawback of Shelton's system was that there was no way to distinguish long and short vowels or diphthongs; so the b-a-t sequence could mean "bat", or "bait", or "bate", while b-o-t might mean "boot", or "bought", or "boat". The reader needed to use the context to work out which alternative was meant. The main advantage of the system was that it was easy to learn and to use. It was popular, and under the two titles of Short Writing and Tachygraphy, Shelton's book ran to more than 20 editions between 1626 and 1710.
Shelton's chief rivals were Theophilus Metcalfe's Stenography or Short Writing (1633) which was in its "55th edition" by 1721, and Jeremiah Rich's system of 1654, which was published under various titles including The penns dexterity compleated (1669). Rich's system was used by George Treby chairman of the House of Commons Committee of Secrecy investigating the Popish Plot. Another English shorthand system creator of the 17th century was William Mason (fl. 1672–1709) who published Arts Advancement in 1682.
Modern-looking geometric shorthand was introduced with John Byrom's New Universal Shorthand of 1720. Samuel Taylor published a similar system in 1786, the first English shorthand system to be used all over the English-speaking world. Thomas Gurney published Brachygraphy in the mid-18th century. In 1834 in Germany, Franz Xaver Gabelsberger published his Gabelsberger shorthand. Gabelsberger based his shorthand on the shapes used in German cursive handwriting rather than on the geometrical shapes that were common in the English stenographic tradition.
Taylor's system was superseded by Pitman shorthand, first introduced in 1837 by English teacher Isaac Pitman, and improved many times since. Pitman's system has been used all over the English-speaking world and has been adapted to many other languages, including Latin. Pitman's system uses a phonemic orthography. For this reason, it is sometimes known as phonography, meaning "sound writing" in Greek. One of the reasons this system allows fast transcription is that vowel sounds are optional when only consonants are needed to determine a word. The availability of a full range of vowel symbols, however, makes complete accuracy possible. Isaac's brother Benn Pitman, who lived in Cincinnati, Ohio, was responsible for introducing the method to America. The record for fast writing with Pitman shorthand is 350 wpm during a two-minute test by Nathan Behrin in 1922.
In the United States and some other parts of the world, it was largely superseded by Gregg shorthand, which was first published in 1888 by John Robert Gregg. This system was influenced by the handwriting shapes that Gabelsberger had introduced. Gregg's shorthand, like Pitman's, is phonetic, but has the simplicity of being "light-line." Pitman's system uses thick and thin strokes to distinguish related sounds, while Gregg's uses only thin strokes and makes some of the same distinctions by the length of the stroke. In fact, Gregg claimed joint authorship in another shorthand system published in pamphlet form by one Thomas Stratford Malone; Malone, however, claimed sole authorship and a legal battle ensued. The two systems use very similar, if not identical, symbols; however, these symbols are used to represent different sounds. For instance, on page 10 of the manual is the word d i m 'dim'; however, in the Gregg system, the spelling would actually mean n u k or 'nook'.
Andrew J. Graham was a phonotypist operating in the period between the emergence of Pitman's and Gregg's systems. In 1854 he published a short-lived (only 9 issues) phonotypy journal called The Cosmotype, subtitled "devoted to that which will entertain usefully, instruct, and improve humanity", and several other monographs about phonography. In 1857 he published his own Pitman-like "Graham's Brief Longhand" that saw wide adoption in the United States in the late 19th century. He published a translation of the New Testament. His method landed him in a 1864 copyright infringement lawsuit against Benn Pitman in Ohio. Graham died in 1895 and was buried in Montclair's Rosedale Cemetery; even as late as 1918 his company Andrew J. Graham & Co continued to market his method.
In his youth, Woodrow Wilson mastered the Graham system and even corresponded with Graham in Graham shorthand. Throughout his life, Wilson continued to develop and employ his own variant of the Graham system, to the point that by the 1950s, when the Graham method had all but disappeared, Wilson scholars had trouble interpreting his shorthand. In 1960, Clifford Gehman, an 84-year-old expert in the by-then obsolete system, managed to crack Wilson's shorthand, demonstrating this on a translation of Wilson's acceptance speech for the 1912 presidential nomination.
=== Japan ===
Our Japanese pen shorthand began in 1882, transplanted from the American Pitman-Graham system. Geometric theory has great influence in Japan. But Japanese motions of writing gave some influence to our shorthand. We are proud to have reached the highest speed in capturing spoken words with a pen. Major pen shorthand systems are Shuugiin, Sangiin, Nakane and Waseda [a repeated vowel shown here means a vowel spoken in double-length in Japanese, sometimes shown instead as a bar over the vowel]. Including a machine-shorthand system, Sokutaipu, we have 5 major shorthand systems now. The Japan Shorthand Association now has 1,000 members.
There are several other pen shorthands in use (Ishimura, Iwamura, Kumassaki, Kotani, and Nissokuken), leading to a total of nine pen shorthands in use. In addition, there is the Yamane pen shorthand (of unknown importance) and three machine shorthands systems (Speed Waapuro, Caver and Hayatokun or sokutaipu). The machine shorthands have gained some ascendancy over the pen shorthands.
Japanese shorthand systems ('sokki' shorthand or 'sokkidou' stenography) commonly use a syllabic approach, much like the common writing system for Japanese (which has actually two syllabaries in everyday use). There are several semi-cursive systems. Most follow a left-to-right, top-to-bottom writing direction. Several systems incorporate a loop into many of the strokes, giving the appearance of Gregg, Graham, or Cross's Eclectic shorthand without actually functioning like them. The Kotani (aka Same-Vowel-Same-Direction or SVSD or V-type) system's strokes frequently cross over each other and in so doing form loops.
Japanese also has its own variously cursive form of writing kanji characters, the most extremely simplified of which is known as Sōsho.
The two Japanese syllabaries are themselves adapted from the Chinese characters: both of the syllabaries, katakana and hiragana, are in everyday use alongside the Chinese characters known as kanji; the kanji, having developed in parallel to the Chinese characters, have their own idiosyncrasies, but Chinese and Japanese ideograms are largely mutually comprehensible, even if their use in the two languages is not the same.
Prior to the Meiji era, Japanese did not have its own shorthand (the kanji did have their own abbreviated forms borrowed alongside them from China). Takusari Kooki was the first to give classes in a new Western-style non-ideographic shorthand of his own design, emphasis being on the non-ideographic and new. This was the first shorthand system adapted to writing phonetic Japanese, all other systems prior being based on the idea of whole or partial semantic ideographic writing like that used in the Chinese characters, and the phonetic approach being mostly peripheral to writing in general. Even today, Japanese writing uses the syllabaries to pronounce or spell out words, or to indicate grammatical words. Furigana are written alongside kanji, or Chinese characters, to indicate their pronunciation especially in juvenile publications. Furigana are usually written using the hiragana syllabary; foreign words may not have a kanji form and are spelled out using katakana.
The new sokki were used to transliterate popular vernacular story-telling theater (yose) of the day. This led to a thriving industry of sokkibon (shorthand books). The ready availability of the stories in book form, and higher rates of literacy (which the very industry of sokkibon may have helped create, due to these being oral classics that were already known to most people) may also have helped kill the yose theater, as people no longer needed to see the stories performed in person to enjoy them. Sokkibon also allowed a whole host of what had previously been mostly oral rhetorical and narrative techniques into writing, such as imitation of dialect in conversations (which can be found back in older gensaku literature; but gensaku literature used conventional written language in between conversations, however).
== Classification ==
=== Geometric and script-like systems ===
Shorthands that use simplified letterforms are sometimes termed stenographic shorthands, contrasting with alphabetic shorthands, below. Stenographic shorthands can be further differentiated by the target letter forms as geometric, script, and semi-script or elliptical.
Geometric shorthands are based on circles, parts of circles, and straight lines placed strictly horizontally, vertically or diagonally. The first modern shorthand systems were geometric. Examples include Pitman shorthand, Boyd's syllabic shorthand, Samuel Taylor's Universal Stenography, the French Prévost-Delaunay, and the Duployé system, adapted to write the Kamloops Wawa (used for Chinook Jargon) writing system.
Script shorthands are based on the motions of ordinary handwriting. The first system of this type was published under the title Cadmus Britanicus by Simon Bordley, in 1787. However, the first practical system was the German Gabelsberger shorthand of 1834. This class of system is now common in all more recent German shorthand systems, as well as in Austria, Italy, Scandinavia, the Netherlands, Russia, other Eastern European countries, and elsewhere.
Script-geometric, or semi-script, shorthands are based on the ellipse. Semi-script can be considered a compromise between the geometric systems and the script systems. The first such system was that of George Carl Märes in 1885. However, the most successful system of this type was Gregg shorthand, introduced by John Robert Gregg in 1888. Gregg had studied not only the geometric English systems, but also the German Stolze stenography, a script shorthand. Other examples include Teeline Shorthand and Thomas Natural Shorthand.
The semi-script philosophy gained popularity in Italy in the first half of the 20th century with three different systems created by Giovanni Vincenzo Cima, Erminio Meschini, and Stenital Mosciaro.
=== Systems resembling standard writing ===
Some shorthand systems attempted to ease learning by using characters from the Latin alphabet. Such non-stenographic systems have often been described as alphabetic, and purists might claim that such systems are not 'true' shorthand. However, these alphabetic systems do have value for students who cannot dedicate the years necessary to master a stenographic shorthand. Alphabetic shorthands cannot be written at the speeds theoretically possible with symbol systems—200 words per minute or more—but require only a fraction of the time to acquire a useful speed of between 70 and 100 words per minute.
Non-stenographic systems often supplement alphabetic characters by using punctuation marks as additional characters, giving special significance to capitalised letters, and sometimes using additional non-alphabetic symbols. Examples of such systems include Stenoscript, Speedwriting and Forkner shorthand. However, there are some pure alphabetic systems, including Personal Shorthand, SuperWrite, Easy Script Speed Writing, Keyscript Shorthand and Yash3k which limit their symbols to a priori alphabetic characters. These have the added advantage that they can also be typed—for instance, onto a computer, PDA, or cellphone. Early editions of Speedwriting were also adapted so that they could be written on a typewriter, and therefore would possess the same advantage.
=== Varieties of vowel representation ===
Shorthand systems can also be classified according to the way that vowels are represented.
Alphabetic – Expression by "normal" vowel signs that are not fundamentally different from consonant signs (e.g., Gregg, Duployan).
Mixed alphabetic – Expression of vowels and consonants by different kinds of strokes (e.g., Arends' system for German or Melin's Swedish Shorthand where vowels are expressed by upward or sideway strokes and consonants and consonant clusters by downward strokes).
Abjad – No expression of the individual vowels at all except for indications of an initial or final vowel (e.g., Taylor).
Marked abjad – Expression of vowels by the use of detached signs (such as dots, ticks, and other marks) written around the consonant signs.
Positional abjad – Expression of an initial vowel by the height of the word in relation to the line, no necessary expression of subsequent vowels (e.g., Pitman, which can optionally express other vowels by detached diacritics).
Abugida – Expression of a vowel by the shape of a stroke, with the consonant indicated by orientation (e.g., Boyd).
Mixed abugida – Expression of the vowels by the width of the joining stroke that leads to the following consonant sign, the height of the following consonant sign in relation to the preceding one, and the line pressure of the following consonant sign (e.g., most German shorthand systems).
=== Machine shorthand systems ===
Traditional shorthand systems are written on paper with a stenographic pencil or a stenographic pen. Some consider that only handwritten systems can strictly speaking be called shorthand.
Machine shorthand is also a common term for writing produced by a stenotype, a specialized keyboard. These are often used for court room transcripts and in live subtitling. However, there are other shorthand machines used worldwide, including: Velotype; Palantype in the UK; Grandjean Stenotype, used extensively in France and French-speaking countries; Michela Stenotype, used extensively in Italy; and Stenokey, used in Bulgaria and elsewhere.
== Common modern English shorthand systems ==
One of the most widely used forms of shorthand is still the Pitman shorthand method described above, which has been adapted for 15 languages. Although Pitman's method was extremely popular at first and is still commonly used, especially in the UK, in the U.S., its popularity has been largely superseded by Gregg shorthand, developed by John Robert Gregg in 1888.
In the UK, the spelling-based (rather than phonetic) Teeline shorthand is now more commonly taught and used than Pitman, and Teeline is the recommended system of the National Council for the Training of Journalists with an overall speed of 100 words per minute necessary for certification. Other less commonly used systems in the UK are Pitman 2000, PitmanScript, Speedwriting, and Gregg. Teeline is also the most common shorthand method taught to New Zealand journalists, whose certification typically requires a shorthand speed of at least 80 words per minute.
In Nigeria, shorthand is still taught in higher institutions of learning, especially for students studying Office Technology Management and Business Education.
== Notable shorthand systems ==
Chandler shorthand (Mary Chandler Atherton)
Current Shorthand (Henry Sweet)
Duployan shorthand (Émile Duployé)
Eclectic shorthand (J.G. Cross)
Gabelsberger shorthand (Franz Xaver Gabelsberger)
Deutsche Einheitskurzschrift (German Unified Shorthand), which is based on the ideas of systems by Gabelsberger, Stolze, Faulmann, and other German system inventors
Gregg shorthand (John Robert Gregg)
Munson Shorthand (James Eugene Munson)
Personal Shorthand, originally called Briefhand
Pitman shorthand (Isaac Pitman)
Speedwriting (Emma Dearborn)
Teeline Shorthand (James Hill)
Tironian notes (Marcus Tullius Tiro)
== See also ==
== References ==
== External links ==
Keyscript Shorthand: keyscriptshorthand.com & keyscriptshorthand2.website3.me
Media related to Shorthand at Wikimedia Commons
The dictionary definition of shorthand at Wiktionary
The Louis A. Leslie Collection of Historical Shorthand Materials at Rider University – materials for download | Wikipedia/Stenography |
Printer tracking dots, also known as printer steganography, DocuColor tracking dots, yellow dots, secret dots, or a machine identification code (MIC), is a digital watermark which many color laser printers and photocopiers produce on every printed page that identifies the specific device that was used to print the document. Developed by Xerox and Canon in the mid-1980s, the existence of these tracking codes became public only in 2004.
== History ==
In the mid-1980s, Xerox pioneered an encoding mechanism for a unique number represented by tiny dots spread over the entire print area, and first deployed this scheme in its DocuColor line of printers. Xerox developed this surreptitious tracking code "to assuage fears that their color copiers could be used to counterfeit bills" and received U.S. Patent No. 5515451 describing the use of the yellow dots to identify the source of a copied or printed document. The scheme was then widely deployed in other printers, including those made by other manufacturers.
The public first became aware of the tracking scheme in October 2004, when Dutch authorities used it to track counterfeiters who had used a Canon color laser printer. In November 2004, PC World reported the machine identification code had been used for decades in some printers, allowing law enforcement to identify and track counterfeiters. The Central Bank Counterfeit Deterrence Group (CBCDG) has denied that it developed the feature.
In 2005, the civil liberties activist group Electronic Frontier Foundation (EFF) encouraged the public to send in sample printouts and subsequently decoded the pattern. The pattern has been demonstrated on a wide range of printers from different manufacturers and models. The EFF stated in 2015 that the documents that they previously received through a Freedom of Information Act request suggested that all major manufacturers of color laser printers entered a secret agreement with governments to ensure that the output of those printers is forensically traceable.
Although we still don't know if this is correct, or how subsequent generations of forensic tracking technologies might work, it is probably safest to assume that all modern color laser printers do include some form of tracking information that associates documents with the printer's serial number. (If any manufacturer wishes to go on record with a statement to the contrary, we'll be happy to publish that here.)
In 2007, the European Parliament was asked about the question of invasion of privacy.
== Technical aspects ==
The pattern consists of a dot-matrix spread of yellow dots, which can barely be seen with the naked eye. The dots have a diameter of one-tenth millimetre (0.004 in) and a spacing of about one millimetre (0.04 in). Their arrangement encodes the serial number of the device and the date and time of the printing, and is repeated several times across the printing area in case of errors. For example, if the code consists of 8 × 16 dots in a square or hexagonal pattern, it spreads over a surface of about 4 square centimetres (0.62 sq in) and appears about 150 times on a sheet of A4 paper. Thus, it can be analyzed even if only fragments or excerpts are available. Some printers arrange yellow dots in seemingly random point clouds.
According to the Chaos Computer Club in 2005, color printers leave the code in a matrix of 32 × 16 dots and thus can store 64 bytes of data (32 × 16 = 512 bits).
As of 2011, Xerox was one of the few manufacturers to draw attention to the marked pages, stating in a product description, "The digital color printing system is equipped with an anti-counterfeit identification and banknote recognition system according to the requirements of numerous governments. Each copy shall be marked with a label which, if necessary, allows identification of the printing system with which it was created. This code is not visible under normal conditions."
In 2018, scientists at the TU Dresden analyzed the patterns of 106 printer models from 18 manufacturers and found four different encoding schemes.
== Visibility ==
The dots can be made visible by printing or copying a page and subsequently scanning a small section with a high-resolution scanner. The yellow color channel can then be enhanced with an image processing program to make the dots of the identification code clearly visible. Under good lighting conditions, a magnifying glass may be enough to see the pattern. Under UV-light, the yellow dots are clearly recognizable.
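As an illustration of the enhancement step described above, the following minimal Python sketch (assuming the Pillow and NumPy libraries and a hypothetical scan file named scan.png) isolates and exaggerates the yellow component of a scanned page so that the dot pattern stands out; it only makes the dots visible and does not decode them.

from PIL import Image
import numpy as np

# Load a high-resolution scan of the printed page (hypothetical file name).
img = np.asarray(Image.open("scan.png").convert("RGB")).astype(float)

# Yellow is the absence of blue in RGB: strong red and green, weak blue.
r, g, b = img[..., 0], img[..., 1], img[..., 2]
yellowness = np.clip((r + g) / 2.0 - b, 0, 255)

# Normalize and invert so faint yellow dots become dark marks on a white background.
norm = 255 - (yellowness / max(yellowness.max(), 1) * 255)
Image.fromarray(norm.astype(np.uint8)).save("dots_enhanced.png")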
Using this steganographic process, high-quality copies of an original (e.g. a banknote) can be made identifiable under blue light. Using this process, even shredded prints can be identified: the 2011 "Shredder Challenge" initiated by DARPA was solved by a team called "All Your Shreds Are Belong To U.S." consisting of Otávio Good and two colleagues.
== Practical application ==
Both journalists and security experts have suggested that The Intercept's handling of the leaks by whistleblower Reality Winner, which included publishing the secret NSA documents unredacted and with the printer tracking dots intact, helped identify Winner as the leaker, leading to her arrest in 2017 and subsequent conviction.
== Protection of privacy and circumvention ==
Copies or printouts of documents with confidential personal information, for example health care information, account statements, tax declaration or balance sheets, can be traced to the owner of the printer and the inception date of the documents can be revealed. This traceability is unknown to many users and inaccessible, as manufacturers do not publicize the code that produces these patterns. It is unclear which data may be unintentionally passed on with a copy or printout. In particular, there are no mentions of the technique in the support materials of most affected printers. In 2005, the Electronic Frontier Foundation (EFF) sought a decoding method and made available a Python script for analysis.
In 2018, scientists from TU Dresden developed and published a tool to extract and analyze the steganographic codes of a given color printer and subsequently to anonymize prints from that printer. The anonymization works by printing additional yellow dots on top of the printer's tracking dots. The scientists made the software available to support whistleblowers in their efforts to publicize grievances.
== Comparable processes ==
Other methods of identification are not as easily recognizable as yellow dots. For example, a modulation of laser intensity and a variation of shades of grey in texts are feasible. As of 2006, it was unknown whether manufacturers were also using these techniques.
== See also ==
EURion constellation, a dot matrix spread over a bank note, which stops some printers and color copiers from processing
Taggant § Explosive taggants
Typewriter § Forensic examination
== References ==
== External links ==
Laudatio der deutschen BigBrotherAwards 2004 (in German)
Information by the Chaos Computer Club Archived March 13, 2017, at the Wayback Machine
Information by the Electronic Frontier Foundation
EFF List of Printers Which Do or Do Not Display Tracking Dots (last updated 2017) | Wikipedia/Printer_steganography |
A mimic function changes a file A so it assumes the statistical properties of another file B. That is, if p(t, A) is the probability of some substring t occurring in A, then a mimic function f recodes A so that p(t, f(A)) approximates p(t, B) for all strings t of length less than some n. It is commonly considered to be one of the basic techniques for hiding information, often called steganography.
The simplest mimic functions use simple statistical models to pick the symbols in the output. If the statistical model says that item x occurs with probability p(x, A) and item y occurs with probability p(y, A), then a random number is used to choose between outputting x or y with probability p(x, A) or p(y, A) respectively.
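One commonly described way to realize such a simple mimic function is to run a Huffman coder for B's symbol frequencies "in reverse": the bits of the secret file drive the choice of output symbols, so the output roughly follows B's statistics and re-encoding it recovers the bits. The sketch below is a minimal, illustrative Python version under that assumption (first-order character model only); it is not the full grammar-based construction from the literature.

import heapq
from collections import Counter
from itertools import count

def huffman_code(text):
    """Build a prefix code {char: bitstring} from character frequencies."""
    freq = Counter(text)
    tiebreak = count()                        # avoids comparing dict nodes in the heap
    heap = [(f, next(tiebreak), {c: ""}) for c, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {c: "0" + s for c, s in c1.items()}
        merged.update({c: "1" + s for c, s in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def mimic_encode(secret_bits, code):
    """Emit characters of B's alphabet by 'decoding' the secret bits through the code."""
    decode = {bits: ch for ch, bits in code.items()}
    out, buf = [], ""
    for bit in secret_bits:
        buf += bit
        if buf in decode:
            out.append(decode[buf])
            buf = ""
    return "".join(out)      # trailing bits shorter than a full codeword are dropped here

def mimic_decode(cover_text, code):
    """Recover the embedded bits by re-encoding the cover text."""
    return "".join(code[ch] for ch in cover_text)

# Example: hide a short bit string so the output mimics sample_B's character statistics.
sample_B = "the quick brown fox jumps over the lazy dog " * 20
code = huffman_code(sample_B)
hidden = "1010011101011100"
cover = mimic_encode(hidden, code)
assert hidden.startswith(mimic_decode(cover, code))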
Even more sophisticated models use reversible Turing machines.
== References ==
Wayner, Peter (December 1990). Mimic Functions (Report). Cornell University Department of Computer Science. TR 90-1176.
Wayner, Peter (July 1992). "Mimic Functions". Cryptologia. 16 (3): 193–214. doi:10.1080/0161-119291866883.
Wayner, Peter (2008). Disappearing Cryptography (3rd ed.). Morgan Kaufmann. ISBN 978-0123744791. | Wikipedia/Mimic_function |
BPCS-steganography (Bit-Plane Complexity Segmentation steganography) is a type of digital steganography.
Digital steganography can hide confidential data (i.e. secret files) very securely by embedding them into some media data called "vessel data." The vessel data is also referred to as "carrier, cover, or dummy data". In BPCS-steganography, true color images (i.e., 24-bit color images) are mostly used for vessel data. The embedding operation in practice is to replace the "complex areas" on the bit planes of the vessel image with the confidential data. The most important aspect of BPCS-steganography is that the embedding capacity is very large. In comparison to simple image-based steganography, which uses solely the least significant bit of the data and thus (for a 24-bit color image) can only embed data equivalent to 1/8 of the total size, BPCS-steganography uses multiple bit-planes, and so can embed a much higher amount of data, though this is dependent on the individual image. For a 'normal' image, roughly 50% of the data might be replaceable with secret data before image degradation becomes apparent.
== Principle of embedding ==
The human visual system has the special property that an overly complicated visual pattern cannot be perceived as "shape-informative." For example, on a very flat beach shore every single square-foot area looks the same - it is just a sandy area, and no shape is observed. However, if you look carefully, two same-looking areas are entirely different in their sand particle shapes. BPCS-steganography makes use of this property. It replaces complex areas on the bit-planes of the vessel image with other complex data patterns (i.e., pieces of secret files). This replacing operation is called "embedding." No one can see any difference between the vessel images from before and after the embedding operation.
An issue arises when the data to be embedded appears visually as simple information; if this simple information replaces complex information in the original image, it may create spurious 'real image information'. In this case the data is passed through a binary image conjugation transformation, in order to create a reciprocal complex representation.
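A minimal sketch of the two measures involved, under common assumptions from the BPCS literature (8×8 binary blocks, complexity defined as the fraction of adjacent black/white borders, and conjugation as an XOR with a checkerboard pattern); the specific block size and the printed values are illustrative only.

import numpy as np

CHECKERBOARD = np.indices((8, 8)).sum(axis=0) % 2     # checkerboard pattern used for conjugation

def complexity(block):
    """Fraction of adjacent pixel pairs that differ in an 8x8 binary block."""
    changes = np.sum(block[:, 1:] != block[:, :-1]) + np.sum(block[1:, :] != block[:-1, :])
    return changes / 112.0                            # 2 * 8 * 7 = 112 possible borders

def conjugate(block):
    """XOR with the checkerboard; maps complexity a to 1 - a, making flat data complex."""
    return block ^ CHECKERBOARD

# A flat (simple) block is unsuitable for embedding as-is ...
flat = np.zeros((8, 8), dtype=int)
print(complexity(flat))                # 0.0
# ... but its conjugate is maximally complex, and conjugation is its own inverse.
print(complexity(conjugate(flat)))     # 1.0
assert np.array_equal(conjugate(conjugate(flat)), flat)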
== Present status of research and development ==
This form of steganography was proposed jointly by Eiji Kawaguchi and Richard O. Eason in 1998. Their experimental program (titled Qtech Hide & View) is freely available for educational purposes. Many researchers have since worked on improving the algorithm, developing applications, and studying its resistance to steganalysis.
== See also ==
== References ==
A Concept of Digital Picture Envelope for Internet Communication
A Model of Anonymous Covert Mailing System Using Steganographic Scheme
A Model of Unforgeable Digital Certificate Document System
BPCS Steganography Using EZW Encoded Images
HIGH CAPACITY DATA HIDING SYSTEM USING BPCS STEGANOGRAPHY
STEGANOGRAPHY USING BPCS TO THE INTEGER WAVELET TRANSFORMED IMAGE
== External links ==
Invitation to BPCS-Steganography (in English)
Invitation to BPCS-Steganography (in Japanese) | Wikipedia/BPCS-Steganography |
Steganographic file systems are a kind of file system first proposed by Ross Anderson, Roger Needham, and Adi Shamir. Their paper proposed two main methods of hiding data: in a series of fixed-size files originally consisting of random bits, on top of which 'vectors' could be superimposed in such a way as to allow each level of security to decrypt all lower levels while not even knowing of the existence of any higher levels; or in an entire partition that is filled with random bits, with files hidden in it.
In a steganographic file system using the second scheme, files are not merely stored, nor stored encrypted, but the entire partition is randomized - encrypted files strongly resemble randomized sections of the partition, and so when files are stored on the partition, there is no easy way to discern between meaningless gibberish and the actual encrypted files. Furthermore, locations of files are derived from the key for the files, and the locations are hidden and available only to programs with the passphrase. This leads to the problem that files can very quickly overwrite each other (because of the birthday paradox); this is compensated for by writing all files in multiple places to lessen the chance of data loss.
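The following toy sketch illustrates the second scheme's key idea under simplifying assumptions: block addresses are derived by hashing the passphrase together with a file name and block index, the partition starts out filled with random bytes, and each block is written to several key-derived locations so that a later colliding write is less likely to destroy all replicas. It omits real encryption, collision handling, and everything else a usable design such as StegFS needs; all names and constants are illustrative.

import hashlib, os

NUM_BLOCKS, BLOCK_SIZE, REPLICAS = 4096, 512, 5
partition = bytearray(os.urandom(NUM_BLOCKS * BLOCK_SIZE))    # looks like noise everywhere

def block_locations(passphrase, file_id, index):
    """Derive REPLICAS pseudo-random block numbers from the key material."""
    locs = []
    for r in range(REPLICAS):
        h = hashlib.sha256(f"{passphrase}/{file_id}/{index}/{r}".encode()).digest()
        locs.append(int.from_bytes(h[:8], "big") % NUM_BLOCKS)
    return locs

def write_block(passphrase, file_id, index, data):
    data = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]          # stand-in for real encryption
    for loc in block_locations(passphrase, file_id, index):
        partition[loc * BLOCK_SIZE:(loc + 1) * BLOCK_SIZE] = data

def read_block(passphrase, file_id, index):
    loc = block_locations(passphrase, file_id, index)[0]       # a real system would vote among replicas
    return bytes(partition[loc * BLOCK_SIZE:(loc + 1) * BLOCK_SIZE])

write_block("correct horse", "diary", 0, b"hidden entry")
print(read_block("correct horse", "diary", 0)[:12])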
== Advantage ==
While there may seem to be no point to a file system which is guaranteed to either be grossly inefficient storage space-wise or to cause data loss and corruption either from data collisions or loss of the key (in addition to being a complex system, and for having poor read/write performance), performance was not the goal of StegFS. Rather, StegFS is intended to thwart "rubberhose attacks", which usually work because encrypted files are distinguishable from regular files, and authorities can coerce the user until the user gives up the keys and all the files are distinguishable as regular files. However, since in a steganographic file system the number of files is unknown and every byte looks like an encrypted byte, the authorities cannot know how many files (and hence, keys) are stored. The user has plausible deniability — he can say there are only a few innocuous files or none at all, and anybody without the keys cannot gainsay the user.
== Criticisms ==
Poul-Henning Kamp has criticized the threat model for steganographic file systems in his paper on GBDE, observing that in certain coercive situations, especially where the searched-for information is in fact not stored in the steganographic file systems, it is not possible for a subject to "get off the hook" by proving that all keys have been surrendered.
== Other methods ==
Other methods exist; the method laid out before is the one implemented by StegFS, but it is also possible to steganographically hide data within image files (e.g. PNGDrive) or audio files; ScramDisk or the Linux loop device can do this.
Generally, a steganographic file system is implemented over a steganographic layer, which supplies just the storage mechanism. For example, the steganographic file system layer can be some existing MP3 files, with each file containing a chunk of data (or a part of the file system). The final product is a file system that is hard to detect (depending on the steganographic layer) and that can store any kind of file in a regular file system hierarchy.
TrueCrypt allows for "hidden volumes" - two or more passwords open different volumes in the same file, but only one of the volumes contains secret data.
== See also ==
== References ==
== External links ==
Original paper by Anderson, Needham, et al. -(PDF file)
A MP3 Steganographic File System Approach
MagikFS - The Steganographic FileSystem
StegFS - A Steganographic File System Without Data Losing Problems
StegHide - Hiding Data Accesses in Steganographic File Systems
Xuan Zhou's Ph.D. Thesis on Steganographic File System | Wikipedia/Steganographic_file_system |
Polygraphia is a cryptographic work by Johannes Trithemius, published in 1518 and dedicated to the art of steganography.
The full title is Polygraphiae libri sex, Ioannis Trithemii abbatis Peapolitani, quondam Spanheimensis, ad Maximilianum Caesarem [Six books of polygraphy, by Johannes Trithemius, abbot at Würzburg, formerly at Spanheim, for the Emperor Maximilian ].
It is the oldest known source of the popular Witches' Alphabet, used at large by modern traditions of witchcraft.
== Review ==
It is composed of six books and a decryption key.
Book I contains no fewer than 384 alphabets (called "minutiae" by the author) of 24 letters (or "degrees"): each letter corresponds to a Latin word (noun, verb, adjective, etc.) in reference to Christian prayers and religious texts, for a total of 9,216 different words. This is nowadays known as the Ave Maria cipher, which mostly uses only a few of the first alphabets.
Book II contains 308 more Latin alphabets with 7,392 words, again using Latin words with mostly religious context.
Book III presents 132 alphabets in three columns, totalling 3,168 dictions of a "universal language" where each letter is equivalent to an invented word (for example "a" could be Abra, mada, badar, cadalan, pasa, etc.), which is nonetheless capable of expressing numbers (from 1 to 10 would be Abra, Abre, Abri, Abro, Abru, Abras, Abres, Abris, Abros and Abrus).
Book IV shows 2,880 invented alphabet dictions in 120 alphabets. To decode, one must simply extract the second letter of each word.
Book V reproduces two canonical hash tables, one direct with 80 alphabets and the other inverted with 98 alphabets, allowing infinite permutations, to which are added twelve "planispheric wheels", each comprising six categories of 24 numbers combined with the 24 letters, thus allowing a large number of ciphered messages to be elaborated.
Book VI is a collection of (partly alleged) ancient alphabets, including Germanic-Franconian, Ethiopian, Norman, Magical and Alchemical.
The work ends with alphabets of his own invention, such as the "tetragramaticus", formed by 4 characters that are diversified into 24 letters, and the "enagramaticus" of 9 characters and 28 letters, from which he gives examples of writings that resemble a natural language.
== Relationship with Steganographia ==
According to some scholars, both books, Steganographia and Polygraphia, are but a single work presented in two parts: the first is metaphysical and quite theoretical (it even hides a complete treatise on "angelology", or the study of angels with their names and hierarchies, between its pages); the second is more practical and is used for encoding messages.
== See also ==
History of cryptography
Johannes Trithemius
Polygraphia Nova
== References ==
=== Bibliography ===
Polygraphiae libri sex Ioannis Trithemij George Fabyan Collection at the Library of Congress
Steganographia (Latin). Digital Edition, 1997
Steganographia (Latin). Google Books, 1608 edition
Steganographia (Latin). Google Books, 1621 edition
Solved: The Ciphers in Book iii of Trithemius's Steganographia, PDF, 208 kB
Hill Monastic Manuscript Library article on Trithemius (includes links to photographs of various Trithemius first editions.)
(in Italian)The complete and solved Steganography books
Steganographia qvæ hvcvsqve a nemine intellecta George Fabyan Collection at the Library of Congress
== External links ==
Works by or about Johannes Trithemius at the Internet Archive
Herbermann, Charles, ed. (1913). "John Trithemius" . Catholic Encyclopedia. New York: Robert Appleton Company.
Trithemius Redivivus Translations and resources pertaining to the Steganographia of Johannes Trithemius | Wikipedia/Polygraphia_(book) |
Steganographia is a book on steganography, written in c. 1499 by the German Benedictine abbot and polymath Johannes Trithemius.
== General ==
Trithemius' most famous work, Steganographia (written c.1499; published Frankfurt, 1606), was placed on the Index Librorum Prohibitorum in 1609 and removed in 1900. This book is in three volumes, and appears to be about magic—specifically, about using spirits to communicate over long distances. However, since the publication of a decryption key to the first two volumes in 1606, they were discovered to be actually concerned with cryptography and steganography. Until 1996, the third volume was widely believed to be solely about magic, but the "magical" formulas have now been shown to be covertexts for yet more material on cryptography.
== Reception ==
References within the third book to magical works by such figures as Agrippa and John Dee still lend credence to the idea of a mystic-magical foundation of the third volume. Additionally, while Trithemius's steganographic methods can be established to be free of the need for angelic–astrological mediation, an underlying theological motive for their contrivance remains. The preface to Polygraphia equally establishes the everyday practicability of cryptography, and was conceived by Trithemius as a "secular consequent of the ability of a soul specially empowered by God to reach, by magical means, from earth to Heaven". Robert Hooke suggested, in the chapter of Dr. Dee's Book of Spirits, that John Dee used Trithemian steganography to conceal his communication with Queen Elizabeth I.
== See also ==
Greek Magical Papyri
History of cryptography
== References ==
== External links ==
Steganographia in English (Trithemius.com)
A steganography software tool allows a user to embed hidden data inside a carrier file, such as an image or video, and later extract that data.
It is not necessary to conceal the message in the original file at all; since the original file need not be modified, detection becomes difficult. If a given section is subjected to successive bitwise manipulation to generate the ciphertext, then there is no evidence in the original file to show that it is being used to encrypt a file.
== Architecture ==
=== Carrier ===
The carrier is the signal, stream, or data file into which the hidden data is hidden by making subtle modifications. Examples include audio files, image files, documents, and executable files. In practice, the carrier should look and work the same as the original unmodified carrier, and should appear benign to anyone inspecting it.
Certain properties can raise suspicion that a file is carrying hidden data:
If the hidden data is large relative to the carrier content, as in an empty document that is a megabyte in size.
The use of obsolete formats or poorly-supported extensions which break commonly used tools.
It is a cryptographic requirement that the carrier (e.g. photo) is original, not a copy of something publicly available (e.g., downloaded). This is because the publicly available source data could be compared against the version with a hidden message embedded.
There is a weaker requirement that the embedded message not change the carrier's statistics (or other metrics) such that the presence of a message is detectable. For instance, if the least-significant bits of the red camera-pixel channel of an image have a Gaussian distribution given a constant colored field, simple image steganography which produces a random distribution of these bits could allow discrimination of stego images from unchanged ones.
The sheer volume of modern (ca 2014) and inane high-bandwidth media (e.g., youtube.com, bittorrent sources, eBay, Facebook, spam, etc.) provides ample opportunity for covert information.
=== Chain ===
Hidden data may be split among a set of files, producing a carrier chain, which has the property that all the carriers must be available, unmodified, and processed in the correct order in order to retrieve the hidden data. This additional security feature usually is achieved by:
using a different initialization vector for each carrier and storing it inside the processed carriers: CryptedIV_n = Crypt(IV_n, CryptedIV_{n-1}) (see the sketch after this list)
using a different cryptography algorithm for each carrier and choosing it with a chain-order-dependent equiprobabilistic algorithm
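A minimal sketch of the chain-order dependence referenced in the first item above, with a keyed hash standing in for the Crypt() operation (a real tool would use an actual cipher, and the key, file names, and constants here are illustrative): each processed IV depends on the previous one, so the carriers only decode correctly when handled in the right sequence.

import hashlib, hmac, os

def crypt(data, chain_value, key=b"chain-demo-key"):
    """Stand-in for the Crypt() operation: a keyed hash tying data to the chain value."""
    return hmac.new(key, chain_value + data, hashlib.sha256).digest()[:16]

carriers = ["photo1.jpg", "song.mp3", "clip.avi"]       # hypothetical carrier files
ivs = [os.urandom(16) for _ in carriers]                # fresh IV per carrier

crypted_ivs = []
previous = b"\x00" * 16                                 # bootstrap value for CryptedIV_0
for iv in ivs:
    previous = crypt(iv, previous)                      # CryptedIV_n = Crypt(IV_n, CryptedIV_{n-1})
    crypted_ivs.append(previous)

# Changing, removing, or reordering any earlier carrier changes every later CryptedIV,
# so the hidden payload can only be reassembled with all carriers present and in order.
print([c.hex()[:8] for c in crypted_ivs])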
=== Robustness and cryptography ===
Steganography tools aim to ensure robustness against modern forensic methods, such as statistical steganalysis. Such robustness may be achieved by a balanced mix of:
a stream-based cryptography process;
a data whitening process;
an encoding process.
If the data is detected, cryptography also helps to minimize the resulting damage, since the data is not exposed, only the fact that a secret was transmitted. The sender may be forced to decrypt the data once it is discovered, but deniable encryption can be leveraged to make the decrypted data appear benign.
Strong steganography software relies on a multi-layered architecture with a deep, documented obfuscation process.
=== Carrier engine ===
The carrier engine is the core of any steganography tool. Different file formats are modified in different ways, in order to covertly insert hidden data inside them. Processing algorithms include:
Injection (suspicious because of the content-unrelated file size increment)
Generation (suspicious because of the traceability of the generated carriers)
Ancillary data and metadata substitution
LSB or adaptive substitution (see the sketch after this list)
Frequency space manipulation
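As a concrete illustration of the simplest of these, LSB substitution, the following sketch (assuming NumPy and an 8-bit grayscale pixel array standing in for a real image) hides a short byte string in the least-significant bits of the pixels; a real tool would add encryption, whitening, and an adaptive choice of pixels.

import numpy as np

def embed_lsb(pixels, payload):
    """Overwrite the least-significant bit of each pixel with one payload bit."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("carrier too small")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, nbytes):
    """Read the first nbytes * 8 least-significant bits back out."""
    bits = (pixels.flatten()[: nbytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a real image
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"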
== See also ==
== Articles ==
Kharrazi, Mehdi; Sencar, Husrev T.; Memon, Nasir (2006). "Performance study of common image steganography and steganalysis techniques" (PDF). Journal of Electronic Imaging. 15 (4): 041104. doi:10.1117/1.2400672. Retrieved 7 February 2021.
Guillermito. "Analyzing steganography software". Retrieved 28 November 2012.
Provos, Niels; Honeyman, Peter (2003). "Hide and Seek: An Introduction to Steganography" (PDF). IEEE Security & Privacy. 1 (3): 32–44. doi:10.1109/msecp.2003.1203220. ISSN 1540-7993. Retrieved 28 November 2012.
Provos, Niels. "Defending against statistical steganalysis". Proceedings of the 10th Conference on USENIX Security Symposium. SSYM'01. 10. USENIX Association: 24–37. Retrieved 28 November 2012.
Bierbrauer, Jürgen; Fridrich, Jessica. "Constructing good covering codes for applications in Steganography" (PDF). Transactions on Data Hiding and Multimedia Security III. Lecture Notes in Computer Science. 4920: 1–22. Retrieved 7 February 2021.
Rocha, Anderson; Goldenstein, Siome, Steganography and Steganalysis: past, present, and future (PDF), First IEEE Workitorial on Vision of the Unseen (WVU'08), retrieved 8 March 2017
== References ==
== External links ==
Exhaustive directory of steganography software by Dr. Neil Johnson | Wikipedia/Steganography_tools |
The following outline is provided as an overview of and topical guide to cryptography:
Cryptography (or cryptology) – practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
== Essence of cryptography ==
Cryptographer
Encryption/decryption
Cryptographic key
Cipher
Ciphertext
Plaintext
Code
Tabula recta
Alice and Bob
== Uses of cryptographic techniques ==
Commitment schemes
Secure multiparty computation
Electronic voting
Authentication
Digital signatures
Crypto systems
Dining cryptographers problem
Anonymous remailer
Pseudonymity
Onion routing
Digital currency
Secret sharing
Indistinguishability obfuscation
== Branches of cryptography ==
Multivariate cryptography
Post-quantum cryptography
Quantum cryptography
Steganography
Visual cryptography
Chaotic cryptology
== History of cryptography ==
Japanese cryptology from the 1500s to Meiji
World War I cryptography
World War II cryptography
Reservehandverfahren
Venona project
Ultra
== Ciphers ==
=== Classical ===
==== Substitution ====
Monoalphabetic substitution
Caesar cipher
ROT13
Affine cipher
Atbash cipher
Keyword cipher
Polyalphabetic substitution
Vigenère cipher
Autokey cipher
Homophonic substitution cipher
Polygraphic substitution
Playfair cipher
Hill cipher
==== Transposition ====
Scytale
Grille
Permutation cipher
VIC cipher – complex hand cypher used by at least one Soviet spy in the early 1950s; it proved quite secure for the time
=== Modern symmetric-key algorithms ===
==== Stream ciphers ====
A5/1 & A5/2 – ciphers specified for the GSM cellular telephone standard
BMGL
Chameleon
FISH – by Siemens AG
WWII 'Fish' cyphers
Geheimfernschreiber – WWII mechanical onetime pad by Siemens AG, called STURGEON by Bletchley Park
Pike – improvement on FISH by Ross Anderson
Schlusselzusatz – WWII mechanical onetime pad by Lorenz, called tunny by Bletchley Park
HELIX
ISAAC – intended as a PRNG
Leviathan
LILI-128
MUGI – CRYPTREC recommendation
MULTI-S01 - CRYPTREC recommendation
One-time pad – Vernam and Mauborgne, patented 1919; an extreme stream cypher
Panama
RC4 (ARCFOUR) – one of a series by Professor Ron Rivest of MIT; CRYPTREC recommended limited to 128-bit key
CipherSaber – RC4 variant with a 10-byte random IV, easy to implement
Salsa20 – an eSTREAM recommended cipher
ChaCha20 – A Salsa20 variant.
SEAL
SNOW
SOBER
SOBER-t16
SOBER-t32
WAKE
==== Block ciphers ====
Product cipher
Feistel cipher – pattern by Horst Feistel
Advanced Encryption Standard (Rijndael) – 128-bit block; NIST selection for the AES, FIPS 197; Created 2001—by Joan Daemen and Vincent Rijmen; NESSIE selection; CRYPTREC recommendation.
Anubis – 128-bit block
BEAR – built from a stream cypher and hash function, by Ross Anderson
Blowfish – 64-bit block; by Bruce Schneier et al.
Camellia – 128-bit block; NESSIE selection (NTT & Mitsubishi Electric); CRYPTREC recommendation
CAST-128 (CAST5) – 64-bit block; one of a series of algorithms by Carlisle Adams and Stafford Tavares, insistent that the name is not due to their initials
CAST-256 (CAST6) – 128-bit block; the successor to CAST-128 and a candidate for the AES competition
CIPHERUNICORN-A – 128-bit block; CRYPTREC recommendation
CIPHERUNICORN-E – 64-bit block; CRYPTREC recommendation (limited)
CMEA – cipher used in US cellphones, found to have weaknesses.
CS-Cipher – 64-bit block
Data Encryption Standard (DES) – 64-bit block; FIPS 46-3, 1976
DEAL – an AES candidate derived from DES
DES-X – a variant of DES to increase the key size.
FEAL
GDES – a DES variant designed to speed up encryption
Grand Cru – 128-bit block
Hierocrypt-3 – 128-bit block; CRYPTREC recommendation
Hierocrypt-L1 – 64-bit block; CRYPTREC recommendation (limited)
IDEA NXT – project name FOX, 64-bit and 128-bit block family; Mediacrypt (Switzerland); by Pascal Junod & Serge Vaudenay of Swiss Institute of Technology Lausanne
International Data Encryption Algorithm (IDEA) – 64-bit block; James Massey & Xuejia Lai of ETH Zurich
Iraqi Block Cipher (IBC)
KASUMI – 64-bit block; based on MISTY1, adopted for next generation W-CDMA cellular phone security
KHAZAD – 64-bit block designed by Barretto and Rijmen
Khufu and Khafre – 64-bit block ciphers
Kuznyechik – Russian 128-bit block cipher, defined in GOST R 34.12-2015 and RFC 7801.
LION – block cypher built from stream cypher and hash function, by Ross Anderson
LOKI89/91 – 64-bit block ciphers
LOKI97 – 128-bit block cipher, AES candidate
Lucifer – by Tuchman et al. of IBM, early 1970s; modified by NSA/NBS and released as DES
MAGENTA – AES candidate
Mars – AES finalist, by Don Coppersmith et al.
MISTY1 – NESSIE selection 64-bit block; Mitsubishi Electric (Japan); CRYPTREC recommendation (limited)
MISTY2 – 128-bit block: Mitsubishi Electric (Japan)
Nimbus – 64-bit block
NOEKEON – 128-bit block
NUSH – variable block length (64-256-bit)
Q – 128-bit block
RC2 – 64-bit block, variable key length
RC6 – variable block length; AES finalist, by Ron Rivest et al.
RC5 – Ron Rivest
SAFER – variable block length
SC2000 – 128-bit block; CRYPTREC recommendation
Serpent – 128-bit block; AES finalist by Ross Anderson, Eli Biham, Lars Knudsen
SHACAL-1 – 160-bit block
SHACAL-2 – 256-bit block cypher; NESSIE selection Gemplus (France)
Shark – grandfather of Rijndael/AES, by Daemen and Rijmen
Square – father of Rijndael/AES, by Daemen and Rijmen
TEA – by David Wheeler & Roger Needham
Triple DES – by Walter Tuchman, leader of the Lucifer design team—not all triple uses of DES increase security, Tuchman's does; CRYPTREC recommendation (limited), only when used as in FIPS Pub 46-3
Twofish – 128-bit block; AES finalist by Bruce Schneier et al.
XTEA – by David Wheeler & Roger Needham
3-Way – 96-bit block by Joan Daemen
Polyalphabetic substitution machine cyphers
Enigma – WWII German rotor cypher machine—many variants, many user networks for most of the variants
Purple – highest security WWII Japanese Foreign Office cypher machine; by Japanese Navy Captain
SIGABA – WWII US cypher machine by William Friedman, Frank Rowlett et al.
TypeX – WWII UK cypher machine
Hybrid code/cypher combinations
JN-25 – WWII Japanese Navy superencyphered code; many variants
Naval Cypher 3 – superencrypted code used by the Royal Navy in the 1930s and into WWII
=== Modern asymmetric-key algorithms ===
==== Asymmetric key algorithm ====
ACE-KEM – NESSIE selection asymmetric encryption scheme; IBM Zurich Research
ACE Encrypt
Chor-Rivest
Diffie-Hellman – key agreement; CRYPTREC recommendation
El Gamal – discrete logarithm
Elliptic curve cryptography – (discrete logarithm variant)
PSEC-KEM – NESSIE selection asymmetric encryption scheme; NTT (Japan); CRYPTREC recommendation only in DEM construction w/SEC1 parameters
ECIES – Elliptic Curve Integrated Encryption System, Certicom Corporation
ECIES-KEM
ECDH – Elliptic Curve Diffie-Hellman key agreement, CRYPTREC recommendation
EPOC
Kyber
Merkle–Hellman knapsack cryptosystem – knapsack scheme
McEliece cryptosystem
Niederreiter cryptosystem
NTRUEncrypt
RSA – factoring
RSA-KEM – NESSIE selection asymmetric encryption scheme; ISO/IEC 18033-2 draft
RSA-OAEP – CRYPTREC recommendation
Rabin cryptosystem – factoring
Rabin-SAEP
HIME(R)
Paillier cryptosystem
Threshold cryptosystem
XTR
== Keys ==
=== Key authentication ===
Public key infrastructure
X.509
OpenPGP
Public key certificate
Certificate authority
Certificate revocation
ID-based cryptography
Certificate-based encryption
Secure key issuing cryptography
Certificateless cryptography
Merkle tree
=== Transport/exchange ===
Diffie–Hellman
Man-in-the-middle attack
Needham–Schroeder
Offline private key
Otway–Rees
Trusted paper key
Wide Mouth Frog
=== Weak keys ===
Brute force attack
Dictionary attack
Related key attack
Key derivation function
Key strengthening
Password
Password-authenticated key agreement
Passphrase
Salt
Factorization
== Cryptographic hash functions ==
Message authentication code
Keyed-hash message authentication code
Encrypted CBC-MAC (EMAC) – NESSIE selection MAC
HMAC – NESSIE selection MAC; ISO/IEC 9797-1, FIPS PUB 113 and IETF RFC
TTMAC – (Two-Track-MAC) NESSIE selection MAC; K.U.Leuven (Belgium) & debis AG (Germany)
UMAC – NESSIE selection MAC; Intel, UNevada Reno, IBM, Technion, & UC Davis
Oblivious Pseudorandom Function
MD5 – one of a series of message digest algorithms by Prof Ron Rivest of MIT; 128-bit digest
SHA-1 – developed at NSA 160-bit digest, an FIPS standard; the first released version was defective and replaced by this; NIST/NSA have released several variants with longer 'digest' lengths; CRYPTREC recommendation (limited)
SHA-256 – NESSIE selection hash function, FIPS 180-2, 256-bit digest; CRYPTREC recommendation
SHA-384 – NESSIE selection hash function, FIPS 180-2, 384-bit digest; CRYPTREC recommendation
SHA-512 – NESSIE selection hash function, FIPS 180-2, 512-bit digest; CRYPTREC recommendation
SHA-3 – originally known as Keccak; was the winner of the NIST hash function competition using sponge function.
Streebog – Russian algorithm created to replace an obsolete GOST hash function defined in obsolete standard GOST R 34.11-94.
RIPEMD-160 – developed in Europe for the RIPE project, 160-bit digest; CRYPTREC recommendation (limited)
RTR0 – one of Retter series; developed by Maciej A. Czyzewski; 160-bit digest
Tiger – by Ross Anderson et al.
Snefru – NIST hash function competition
Whirlpool – NESSIE selection hash function, Scopus Tecnologia S.A. (Brazil) & K.U.Leuven (Belgium)
== Cryptanalysis ==
=== Classical ===
Frequency analysis
Contact analysis
Index of coincidence
Kasiski examination
=== Modern ===
Symmetric algorithms
Boomerang attack
Brute force attack
Davies' attack
Differential cryptanalysis
Impossible differential cryptanalysis
Integral cryptanalysis
Linear cryptanalysis
Meet-in-the-middle attack
Mod-n cryptanalysis
Related-key attack
Slide attack
XSL attack
Hash functions:
Birthday attack
Attack models
Chosen-ciphertext
Chosen-plaintext
Ciphertext-only
Known-plaintext
Side channel attacks
Power analysis
Timing attack
Cold boot attack
Differential fault analysis
Network attacks
Man-in-the-middle attack
Replay attack
External attacks
Black-bag cryptanalysis
Rubber-hose cryptanalysis
== Robustness properties ==
Provable security
Random oracle model
Ciphertext indistinguishability
Semantic security
Malleability
Forward secrecy
Forward anonymity
Freshness
Kerckhoffs's principle – Cryptographic principle that states everything except the key can be public knowledge
== Undeciphered historical codes and ciphers ==
Beale ciphers
Chaocipher
D'Agapeyeff cipher
Dorabella cipher
Rongorongo
Shugborough inscription
Voynich manuscript
== Organizations and selection projects ==
=== Cryptography standards ===
Federal Information Processing Standards (FIPS) Publication Program – run by NIST to produce standards in many areas to guide operations of the US Federal government; many FIPS publications are ongoing and related to cryptography
American National Standards Institute (ANSI) – standardization process that produces many standards in many areas; some are cryptography related, ongoing)
International Organization for Standardization (ISO) – standardization process produces many standards in many areas; some are cryptography related, ongoing
Institute of Electrical and Electronics Engineers (IEEE) – standardization process produces many standards in many areas; some are cryptography related, ongoing
Internet Engineering Task Force (IETF) – standardization process that produces many standards called RFCs) in many areas; some are cryptography related, ongoing)
=== General cryptographic ===
National Security Agency (NSA) – internal evaluation/selections, charged with assisting NIST in its cryptographic responsibilities
Government Communications Headquarters (GCHQ) – internal evaluation/selections, a division is charged with developing and recommending cryptographic standards for the UK government
Defence Signals Directorate (DSD) – Australian SIGINT agency, part of ECHELON
Communications Security Establishment (CSE) – Canadian intelligence agency
=== Open efforts ===
Data Encryption Standard (DES) – NBS selection process, ended 1976
RIPE – division of the RACE project sponsored by the European Union, ended mid-1980s
Advanced Encryption Standard (AES) – a "break-off" competition sponsored by NIST, ended in 2001
NESSIE Project – an evaluation/selection program sponsored by the European Union, ended in 2002
eSTREAM – program funded by ECRYPT; motivated by the failure of all of the stream ciphers submitted to NESSIE, ended in 2008
CRYPTREC – evaluation/recommendation program sponsored by the Japanese government; draft recommendations published 2003
CrypTool – an e-learning freeware programme in English and German— exhaustive educational tool about cryptography and cryptanalysis
== Influential cryptographers ==
List of cryptographers
== Legal issues ==
AACS encryption key controversy
Free speech
Bernstein v. United States - Daniel J. Bernstein's challenge to the restrictions on the export of cryptography from the United States.
Junger v. Daley
DeCSS
Phil Zimmermann - Arms Export Control Act investigation regarding the PGP software.
Export of cryptography
Key escrow and Clipper Chip
Digital Millennium Copyright Act
Digital rights management (DRM)
Patents
RSA – now public domain
David Chaum – and digital cash
Cryptography and law enforcement
Telephone wiretapping
Espionage
Cryptography laws in different nations
Official Secrets Act – United Kingdom, India, Ireland, Malaysia, and formerly New Zealand
Regulation of Investigatory Powers Act 2000 – United Kingdom
== Academic and professional publications ==
Journal of Cryptology
Encyclopedia of Cryptography and Security
Cryptologia – quarterly journal focusing on historical aspects
Communication Theory of Secrecy Systems – cryptography from the viewpoint of information theory
International Association for Cryptologic Research (website)
== Allied sciences ==
Security engineering
== See also ==
Outline of computer science
Outline of computer security
== References == | Wikipedia/Topics_in_cryptography |
The vast majority of the National Security Agency's work on encryption is classified, but from time to time NSA participates in standards processes or otherwise publishes information about its cryptographic algorithms. The NSA has categorized encryption items into four product types, and algorithms into two suites. The following is a brief and incomplete summary of public knowledge about NSA algorithms and protocols.
== Type 1 Product ==
A Type 1 Product refers to an NSA endorsed classified or controlled cryptographic item for classified or sensitive U.S. government information, including cryptographic equipment, assembly or component classified or certified by NSA for encrypting and decrypting classified and sensitive national security information when appropriately keyed.
== Type 2 Product ==
A Type 2 Product refers to NSA endorsed unclassified cryptographic equipment, assemblies, or components for sensitive but unclassified U.S. government information.
== Type 3 Product ==
Unclassified cryptographic equipment, assembly, or component used, when appropriately keyed, for encrypting or decrypting unclassified sensitive U.S. Government or commercial information, and to protect systems requiring protection mechanisms consistent with standard commercial practices. A Type 3 Algorithm refers to NIST endorsed algorithms, registered and FIPS published, for sensitive but unclassified U.S. government and commercial information.
== Type 4 Product ==
A Type 4 Algorithm refers to algorithms that are registered by the NIST but are not FIPS published. Unevaluated commercial cryptographic equipment, assemblies, or components that are neither NSA nor NIST certified for any Government usage.
== Algorithm Suites ==
=== Suite A ===
A set of NSA unpublished algorithms that is intended for highly sensitive communication and critical authentication systems.
=== Suite B ===
A set of NSA endorsed cryptographic algorithms for use as an interoperable cryptographic base for both unclassified information and most classified information. Suite B was announced on 16 February 2005, and phased out in 2016.
=== Commercial National Security Algorithm Suite ===
A set of cryptographic algorithms promulgated by the National Security Agency as a replacement for NSA Suite B Cryptography until post-quantum cryptography standards are promulgated.
=== Quantum resistant suite ===
In August 2015, NSA announced that it is planning to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy." NSA advised: "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition."
== See also ==
NSA encryption systems
Speck and Simon, light-weight block ciphers, published by NSA in 2013
== References == | Wikipedia/NSA_cryptography |
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including:
SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1.
SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2: A family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words where SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family.
The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). NIST has updated Draft FIPS Publication 202, SHA-3 Standard separate from the Secure Hash Standard (SHS).
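For orientation, the following Python sketch uses the standard-library hashlib module to compute digests of the same message with several members of the family and show their differing output lengths; SHA-0 is omitted because hashlib does not provide it.

import hashlib

message = b"abc"
for name in ["sha1", "sha256", "sha384", "sha512", "sha3_256", "sha3_512"]:
    digest = hashlib.new(name, message).hexdigest()
    # Each hex character encodes 4 bits, so the digest length in bits is 4 * len(digest).
    print(f"{name:>8}: {len(digest) * 4:3d} bits  {digest[:16]}...")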
== Comparison of SHA functions ==
In the table below, internal state means the "internal hash sum" after each compression of a data block.
== Validation ==
All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
== References == | Wikipedia/Secure_Hash_Algorithm_(disambiguation) |
In Euclidean geometry, an affine transformation or affinity (from the Latin, affinis, "connected with") is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles.
More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line.
If X is the point set of an affine space, then every affine transformation on X can be represented as the composition of a linear transformation on X and a translation of X. Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.
Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, hyperbolic rotation, shear mapping, and compositions of them in any combination and sequence.
Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity invariant, restricted to the complement of that hyperplane.
A generalization of an affine transformation is an affine map (or affine homomorphism or affine mapping) between two (potentially different) affine spaces over the same field k. Let (X, V, k) and (Z, W, k) be two affine spaces with X and Z the point sets and V and W the respective associated vector spaces over the field k. A map f : X → Z is an affine map if there exists a linear map mf : V → W such that mf (x − y) = f (x) − f (y) for all x, y in X.
== Definition ==
Let X be an affine space over a field k, and V be its associated vector space. An affine transformation is a bijection f from X onto itself that is an affine map; this means that a linear map g from V to V is well defined by the equation
g(y − x) = f(y) − f(x);
here, as usual, the subtraction of two points denotes the free vector from the second point to the first one, and "well-defined" means that y − x = y′ − x′ implies that f(y) − f(x) = f(y′) − f(x′).
If the dimension of X is at least two, a semiaffine transformation f of X is a bijection from X onto itself satisfying:
For every d-dimensional affine subspace S of X, then f (S) is also a d-dimensional affine subspace of X.
If S and T are parallel affine subspaces of X, then f (S) and f (T) are parallel.
These two conditions are satisfied by affine transformations, and express what is precisely meant by the expression that "f preserves parallelism".
These conditions are not independent as the second follows from the first. Furthermore, if the field k has at least three elements, the first condition can be simplified to: f is a collineation, that is, it maps lines to lines.
== Structure ==
By the definition of an affine space, V acts on X, so that, for every pair (x, v) in X × V there is associated a point y in X. We can denote this action by v→(x) = y. Here we use the convention that v→ and v are two interchangeable notations for an element of V. By fixing a point c in X one can define a function mc : X → V by mc(x) = cx→. For any c, this function is one-to-one, and so has an inverse function mc−1 : V → X given by mc−1(v) = v→(c). These functions can be used to turn X into a vector space (with respect to the point c) by defining:
{\displaystyle x+y=m_{c}^{-1}\left(m_{c}(x)+m_{c}(y)\right),{\text{ for all }}x,y{\text{ in }}X,}
and
{\displaystyle rx=m_{c}^{-1}\left(rm_{c}(x)\right),{\text{ for all }}r{\text{ in }}k{\text{ and }}x{\text{ in }}X.}
This vector space has origin c and formally needs to be distinguished from the affine space X, but common practice is to denote it by the same symbol and mention that it is a vector space after an origin has been specified. This identification permits points to be viewed as vectors and vice versa.
For any linear transformation λ of V, we can define the function L(c, λ) : X → X by
{\displaystyle L(c,\lambda )(x)=m_{c}^{-1}\left(\lambda (m_{c}(x))\right)=c+\lambda ({\vec {cx}}).}
Then L(c, λ) is an affine transformation of X which leaves the point c fixed. It is a linear transformation of X, viewed as a vector space with origin c.
Let σ be any affine transformation of X. Pick a point c in X and consider the translation of X by the vector w = cσ(c)→, denoted by Tw. Translations are affine transformations and the composition of affine transformations is an affine transformation. For this choice of c, there exists a unique linear transformation λ of V such that
{\displaystyle \sigma (x)=T_{\mathbf {w}}\left(L(c,\lambda )(x)\right).}
That is, an arbitrary affine transformation of X is the composition of a linear transformation of X (viewed as a vector space) and a translation of X.
This representation of affine transformations is often taken as the definition of an affine transformation (with the choice of origin being implicit).
== Representation ==
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally, in the finite-dimensional case, if the linear map is represented as a multiplication by an invertible matrix A and the translation as the addition of a vector b, an affine map f acting on a vector x can be represented as
{\displaystyle \mathbf {y} =f(\mathbf {x} )=A\mathbf {x} +\mathbf {b} .}
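A small numerical sketch of this representation in NumPy; the matrix and vector values are arbitrary example choices, not taken from the text.

import numpy as np

A = np.array([[2.0, 0.0],        # linear part: scale x by 2
              [0.0, 1.0]])       # leave y unchanged
b = np.array([1.0, -3.0])        # translation part

def affine(x):
    return A @ x + b             # y = A x + b

print(affine(np.array([0.0, 0.0])))   # the origin maps to b itself
print(affine(np.array([1.0, 1.0])))   # (1, 1) maps to (3, -2)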
=== Augmented matrix ===
Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map using a single matrix multiplication. The technique requires that all vectors be augmented with a "1" at the end, and all matrices be augmented with an extra row of zeros at the bottom, an extra column—the translation vector—to the right, and a "1" in the lower right corner. If
A is a matrix,
{\displaystyle {\begin{bmatrix}\mathbf {y} \\1\end{bmatrix}}=\left[{\begin{array}{ccc|c}&A&&\mathbf {b} \\0&\cdots &0&1\end{array}}\right]{\begin{bmatrix}\mathbf {x} \\1\end{bmatrix}}}
is equivalent to the following
{\displaystyle \mathbf {y} =A\mathbf {x} +\mathbf {b} .}
The above-mentioned augmented matrix is called an affine transformation matrix. In the general case, when the last row vector is not restricted to be [0 ⋯ 0 1], the matrix becomes a projective transformation matrix (as it can also be used to perform projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of K^n and GL(n, K). This is a group under the operation of composition of functions, called the affine group.
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate "1" to every vector, one essentially considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the origin of the original space can be found at (0, 0, ..., 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is Euclidean, the higher dimensional space is a real projective space.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision and robotics.
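A short sketch of that composition property in NumPy, with arbitrary example transforms: applying the product of two augmented matrices to an augmented point gives the same result as applying the two affine maps one after the other.

import numpy as np

def augmented(A, b):
    """Pack a linear part A and a translation b into one (n+1) x (n+1) matrix."""
    n = len(b)
    M = np.eye(n + 1)
    M[:n, :n] = A
    M[:n, n] = b
    return M

rotate = augmented([[0, -1], [1, 0]], [0, 0])     # 90-degree rotation about the origin
shift  = augmented([[1, 0], [0, 1]], [5, 2])      # translation by (5, 2)

combined = shift @ rotate                         # one matrix: first rotate, then shift
p = np.array([3.0, 4.0, 1.0])                     # the point (3, 4) in homogeneous coordinates

step_by_step = shift @ (rotate @ p)
assert np.allclose(combined @ p, step_by_step)
print(combined @ p)                               # (1, 5) in homogeneous form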
==== Example augmented matrix ====
Suppose you have three points that define a non-degenerate triangle in a plane, or four points that define a non-degenerate tetrahedron in 3-dimensional space, or generally n + 1 points x1, ..., xn+1 that define a non-degenerate simplex in n-dimensional space. Suppose you have corresponding destination points y1, ..., yn+1, where these new points can lie in a space with any number of dimensions. (Furthermore, the new points need not be distinct from each other and need not form a non-degenerate simplex.) The unique augmented matrix M that achieves the affine transformation
{\displaystyle {\begin{bmatrix}\mathbf {y} _{i}\\1\end{bmatrix}}=M{\begin{bmatrix}\mathbf {x} _{i}\\1\end{bmatrix}}}
for every i is
{\displaystyle M={\begin{bmatrix}\mathbf {y} _{1}&\cdots &\mathbf {y} _{n+1}\\1&\cdots &1\end{bmatrix}}{\begin{bmatrix}\mathbf {x} _{1}&\cdots &\mathbf {x} _{n+1}\\1&\cdots &1\end{bmatrix}}^{-1}.}
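A numerical sketch of this formula in NumPy, using an arbitrary example triangle in the plane and arbitrary image points: stacking the points as columns with a row of ones and inverting recovers the unique augmented matrix M.

import numpy as np

# Non-degenerate source triangle and its desired images (arbitrary example values).
x = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([[1.0, 3.0, 1.0],
              [2.0, 2.0, 5.0]])

ones = np.ones((1, 3))
X = np.vstack([x, ones])          # columns are the augmented points (x_i, 1)
Y = np.vstack([y, ones])          # columns are the augmented points (y_i, 1)

M = Y @ np.linalg.inv(X)          # M = [y_1 ... y_{n+1}; 1 ... 1] [x_1 ... x_{n+1}; 1 ... 1]^(-1)
print(np.round(M, 6))

# Check: M maps each augmented source point to its augmented destination.
assert np.allclose(M @ X, Y)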
== Properties ==
=== Properties preserved ===
An affine transformation preserves:
collinearity between points: three or more points which lie on the same line (called collinear points) continue to be collinear after the transformation.
parallelism: two or more lines which are parallel, continue to be parallel after the transformation.
convexity of sets: a convex set continues to be convex after the transformation. Moreover, the extreme points of the original set are mapped to the extreme points of the transformed set.
ratios of lengths of parallel line segments: for distinct parallel segments defined by points p1 and p2, p3 and p4, the ratio of p1p2→ and p3p4→ is the same as that of f(p1)f(p2)→ and f(p3)f(p4)→.
barycenters of weighted collections of points.
=== Groups ===
As an affine transformation is invertible, the square matrix A appearing in its matrix representation is invertible. The matrix representation of the inverse transformation is thus
{\displaystyle \left[{\begin{array}{ccc|c}&A^{-1}&&-A^{-1}{\vec {b}}\ \\0&\ldots &0&1\end{array}}\right].}
The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear group of degree n as a subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where {\displaystyle A} is a scalar times an orthogonal matrix. For example, if the affine transformation acts on the plane and if the determinant of {\displaystyle A} is 1 or −1 then the transformation is an equiareal mapping. Such transformations form a subgroup called the equi-affine group. A transformation that is both equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of orientation-preserving or positive affine transformations: those where the determinant of {\displaystyle A} is positive. In the last case this is in 3D the group of rigid transformations (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the transformation than describing it as a combination of a translation and a rotation. However, this depends on application and context.
== Affine maps ==
An affine map {\displaystyle f\colon {\mathcal {A}}\to {\mathcal {B}}} between two affine spaces is a map on the points that acts linearly on the vectors (that is, the vectors between points of the space). In symbols, {\displaystyle f} determines a linear transformation {\displaystyle \varphi } such that, for any pair of points {\displaystyle P,Q\in {\mathcal {A}}}:
{\displaystyle {\overrightarrow {f(P)~f(Q)}}=\varphi ({\overrightarrow {PQ}})}
or
{\displaystyle f(Q)-f(P)=\varphi (Q-P)}.
We can interpret this definition in a few other ways, as follows.
If an origin {\displaystyle O\in {\mathcal {A}}} is chosen, and {\displaystyle B} denotes its image {\displaystyle f(O)\in {\mathcal {B}}}, then this means that for any vector {\displaystyle {\vec {x}}}:
{\displaystyle f\colon (O+{\vec {x}})\mapsto (B+\varphi ({\vec {x}}))}.
If an origin {\displaystyle O'\in {\mathcal {B}}} is also chosen, this can be decomposed as an affine transformation {\displaystyle g\colon {\mathcal {A}}\to {\mathcal {B}}} that sends {\displaystyle O\mapsto O'}, namely
{\displaystyle g\colon (O+{\vec {x}})\mapsto (O'+\varphi ({\vec {x}}))},
followed by the translation by a vector {\displaystyle {\vec {b}}={\overrightarrow {O'B}}}.
The conclusion is that, intuitively, {\displaystyle f} consists of a translation and a linear map.
=== Alternative definition ===
Given two affine spaces {\displaystyle {\mathcal {A}}} and {\displaystyle {\mathcal {B}}}, over the same field, a function {\displaystyle f\colon {\mathcal {A}}\to {\mathcal {B}}} is an affine map if and only if for every family {\displaystyle \{(a_{i},\lambda _{i})\}_{i\in I}} of weighted points in {\displaystyle {\mathcal {A}}} such that
{\displaystyle \sum _{i\in I}\lambda _{i}=1},
we have
{\displaystyle f\left(\sum _{i\in I}\lambda _{i}a_{i}\right)=\sum _{i\in I}\lambda _{i}f(a_{i})}.
In other words, {\displaystyle f} preserves barycenters.
== History ==
The word "affine" as a mathematical term is defined in connection with tangents to curves in Euler's 1748 Introductio in analysin infinitorum. Felix Klein attributes the term "affine transformation" to Möbius and Gauss.
== Image transformation ==
In their applications to digital image processing, the affine transformations are analogous to printing on a sheet of rubber and stretching the sheet's edges parallel to the plane. This transform relocates pixels, requiring intensity interpolation to approximate the value of moved pixels; bicubic interpolation is the standard for image transformations in image processing applications. Affine transformations scale, rotate, translate, mirror and shear images as shown in the following examples:
The affine transforms are applicable to the registration process where two or more images are aligned (registered). An example of image registration is the generation of panoramic images that are the product of multiple images stitched together.
=== Affine warping ===
The affine transform preserves parallel lines. However, the stretching and shearing transformations warp shapes, as the following example shows:
This is an example of image warping. However, the affine transformations do not facilitate projection onto a curved surface or radial distortions.
== In the plane ==
Every affine transformation in a Euclidean plane is the composition of a translation and an affine transformation that fixes a point; the latter may be
a homothety,
a rotation around the fixed point,
a scaling, with possibly negative scaling factors, in two directions (not necessarily perpendicular); this includes reflections,
a shear mapping, or
a squeeze mapping.
Given two non-degenerate triangles ABC and A′B′C′ in a Euclidean plane, there is a unique affine transformation T that maps A to A′, B to B′ and C to C′. Each of ABC and A′B′C′ defines an affine coordinate system and a barycentric coordinate system. Given a point P, the point T(P) is the point that has the same coordinates on the second system as the coordinates of P on the first system.
Affine transformations do not respect lengths or angles; they multiply areas by the constant factor
area of A′B′C′ / area of ABC.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by comparing the orientations of the triangles.
== Examples ==
=== Over the real numbers ===
The functions {\displaystyle f\colon \mathbb {R} \to \mathbb {R} ,\;f(x)=mx+c} with {\displaystyle m} and {\displaystyle c} in {\displaystyle \mathbb {R} } and {\displaystyle m\neq 0}, are precisely the affine transformations of the real line.
=== In plane geometry ===
In {\displaystyle \mathbb {R} ^{2}}, the transformation shown at left is accomplished using the map given by:
{\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}\mapsto {\begin{bmatrix}0&1\\2&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}+{\begin{bmatrix}-100\\-100\end{bmatrix}}}
Transforming the three corner points of the original triangle (in red) gives three new points which form the new triangle (in blue). This transformation skews and translates the original triangle.
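As a rough illustration of the map above, the transformation can be applied to each corner point directly. The corner coordinates below are purely hypothetical, since the article does not give them; this is a sketch, not the article's own figure data.

```python
import numpy as np

A = np.array([[0, 1],
              [2, 1]])          # linear part of the map
b = np.array([-100, -100])      # translation part

# Hypothetical corners of the original (red) triangle.
corners = np.array([[0, 0], [200, 0], [0, 200]])

# Apply x -> A x + b to every corner to get the new (blue) triangle.
new_corners = corners @ A.T + b
print(new_corners)
```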
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but not for all quadrilaterals.
== See also ==
Anamorphosis – artistic applications of affine transformations
Affine geometry
3D projection
Homography
Flat (geometry)
Bent function
Multilinear polynomial
== Notes ==
== References ==
Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1999), Geometry, Cambridge University Press, ISBN 978-0-521-59787-6
Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press, ISBN 978-0-521-44177-3
Klein, Felix (1948) [1939], Elementary Mathematics from an Advanced Standpoint: Geometry, Dover
Samuel, Pierre (1988), Projective Geometry, Springer-Verlag, ISBN 0-387-96752-4
Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
Snapper, Ernst; Troyer, Robert J. (1989) [1971], Metric Affine Geometry, Dover, ISBN 978-0-486-66108-7
Wan, Zhe-xian (1993), Geometry of Classical Groups over Finite Fields, Chartwell-Bratt, ISBN 0-86238-326-9
== External links ==
Media related to Affine transformation at Wikimedia Commons
"Affine transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Geometric Operations: Affine Transform, R. Fisher, S. Perkins, A. Walker and E. Wolfart.
Weisstein, Eric W. "Affine Transformation". MathWorld.
Affine Transform by Bernard Vuilleumier, Wolfram Demonstrations Project.
Affine Transformation with MATLAB | Wikipedia/Affine_transformations |
An extendable-output function (XOF) is an extension of a cryptographic hash that allows its output to be arbitrarily long. In particular, the sponge construction makes any sponge hash a natural XOF: the squeeze operation can be repeated, and regular hash functions with a fixed-size result are obtained from a sponge mechanism by stopping the squeezing phase after the fixed number of bits has been produced.
By construction, a XOF is collision, preimage, and second-preimage resistant. Technically, any XOF can be turned into a cryptographic hash by truncating the result to a fixed length (in practice, hashes and XOFs are defined differently for domain separation). Examples of XOFs include the algorithms from the Keccak family: SHAKE128, SHAKE256, and a higher-efficiency variant, KangarooTwelve.
XOFs are used as key derivation functions (KDFs), stream ciphers, and mask generation functions.
== Related-output issues ==
By their nature, XOFs can produce related outputs (a longer result includes a shorter one as a prefix). The use of XOFs for key derivation can therefore cause related-output problems. As a "naïve" example, if Triple DES keys are generated with a XOF, and a confusion in the implementation causes some operations to be performed as 3TDEA (3 × 56 = 168-bit key) and some as 2TDEA (2 × 56 = 112-bit key), comparing the encryption results will lower the attack complexity to just 56 bits; similar problems can occur if the hash functions in the NIST SP 800-108 KDFs are naïvely replaced by XOFs.
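Python's hashlib exposes SHAKE128/SHAKE256, which makes the prefix property easy to see. The sketch below is ours, not from the cited sources; the message is a placeholder. It shows that a 16-byte squeeze is a prefix of a 64-byte squeeze of the same input, which is exactly the related-output behaviour discussed above.

```python
import hashlib

msg = b"example key material"
short = hashlib.shake_128(msg).digest(16)   # 16-byte output
long = hashlib.shake_128(msg).digest(64)    # 64-byte output of the same XOF

# The shorter output is a prefix of the longer one.
assert long.startswith(short)
print(short.hex())
print(long.hex())
```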
== References ==
== Sources ==
Mittelbach, Arno; Fischlin, Marc (2021). "Extendable Output Functions (XOFs)". The Theory of Hash Functions and Random Oracles: An Approach to Modern Cryptography. Information Security and Cryptography. Springer International Publishing. ISBN 978-3-030-63287-8. Retrieved 2023-06-22.
Peyrin, Thomas; Wang, Haoyang (2020). "The MALICIOUS Framework: Embedding Backdoors into Tweakable Block Ciphers" (PDF). Advances in Cryptology – CRYPTO 2020. Lecture Notes in Computer Science. Vol. 12172. Springer International Publishing. pp. 249–278. doi:10.1007/978-3-030-56877-1_9. ISBN 978-3-030-56876-4. ISSN 0302-9743. S2CID 221107066.
Perlner, Ray (August 22, 2014). "Extendable-Output Functions (XOFs)". csrc.nist.gov. NIST. Retrieved 22 June 2023.
Dworkin, Morris (August 22, 2014). "Domain Extensions". csrc.nist.gov. NIST. Retrieved 22 June 2023. | Wikipedia/Extendable-output_function |
Cryptography was used extensively during World War II because of the importance of radio communication and the ease of radio interception. The nations involved fielded a plethora of code and cipher systems, many of the latter using rotor machines. As a result, the theoretical and practical aspects of cryptanalysis, or codebreaking, were much advanced.
Possibly the most important codebreaking event of the war was the successful decryption by the Allies of the German "Enigma" cipher. The first break into Enigma was accomplished by the Polish Cipher Bureau around 1932; the techniques and insights used were passed to the French and British Allies just before the outbreak of the war in 1939. They were substantially improved by British efforts at Bletchley Park during the war. Decryption of the Enigma cipher allowed the Allies to read important parts of German radio traffic on important networks and was an invaluable source of military intelligence throughout the war. Intelligence from this source and other high level sources, such as Cryptanalysis of the Lorenz cipher, was eventually called Ultra.
A similar break into the most secure Japanese diplomatic cipher, designated Purple by the US Army Signals Intelligence Service, started before the US entered the war. Product from this source was called Magic.
On the other side, German code breaking in World War II achieved some notable successes cracking British naval and other ciphers.
== Australia ==
Central Bureau
FRUMEL: Fleet Radio Unit, Melbourne
Secret Intelligence Australia
== Finland ==
Finnish Defence Intelligence Agency
== France ==
PC Bruno
Hans-Thilo Schmidt
== Germany ==
Enigma machine
Fish (cryptography) British codename for German teleprinter ciphers
Lorenz cipher a Fish cipher codenamed Tunny by the British
Siemens and Halske T52 Geheimfernschreiber, a Fish cipher codenamed Sturgeon by the British
Short Weather Cipher
B-Dienst
Reservehandverfahren
OKW/CHI
Gisbert Hasenjaeger
== Italy ==
Hagelin machine
Enigma machine
== Japan ==
Japanese army and diplomatic codes
Japanese naval codes
PURPLE
JN-25
== Poland ==
Cryptanalysis of the Enigma
Biuro Szyfrów (Cipher Bureau)
Marian Rejewski
Jerzy Różycki
Henryk Zygalski
bomba
Lacida Machine
== Soviet Union ==
5th Department of NKVD (1941-1943), 5th Department of NKGB (1943-1945)
Lieutenant general Ivan Shevelyov, the head of the department
8th Department of Red Army General Staff
Lieutenant general Piotr Belyusov, the head of the department
== Sweden ==
Arne Beurling
== United Kingdom ==
Bletchley Park
Cryptanalysis of the Enigma
Cryptanalysis of the Lorenz cipher
Far East Combined Bureau (FECB)
Naval Intelligence Division (NID)
Wireless Experimental Centre (WEC)
Bombe
Colossus computer
Typex
SYKO
Ultra
Alan Turing
W. T. Tutte
John Tiltman
Max Newman
Tommy Flowers
I. J. Good
John Herivel
Leo Marks
Gordon Welchman
Poem code
== United States ==
Magic (cryptography)
Signals Intelligence Service US Army, see also Arlington Hall
OP-20-G US Navy Signals Intelligence group
Elizebeth Smith Friedman
William Friedman
Frank Rowlett
Abraham Sinkov
Genevieve Grotjan Feinstein
Leo Rosen
Joseph Rochefort, leader of the effort to crack Japanese Naval codes
Joseph Mauborgne
Agnes Meyer Driscoll
SIGABA cipher machine
SIGSALY voice encryption
SIGTOT one-time tape system
M-209 cipher machine
Station HYPO cryptanalysis group
Station CAST cryptanalysis group
Station NEGAT
== See also ==
Cryptography
History of cryptography
World War I cryptography
Ultra (cryptography)
Magic (cryptography)
Cryptanalysis of the Enigma
Bombe
Enigma (machine)
SIGABA
TypeX
Lorenz cipher
Geheimfernschreiber
Codetalkers
PURPLE
SIGSALY
JN-25
Bletchley Park
Biuro Szyfrów
PC Bruno
SIS US Army, later moved to Arlington Hall
OP-20-G US Navy
Marian Rejewski
Jerzy Różycki
Henryk Zygalski
Alan Turing
W. T. Tutte
John Tiltman
Max Newman
Tommy Flowers
I. J. Good
William Friedman
Frank Rowlett
Abraham Sinkov
Joseph Rochefort
Agnes Meyer Driscoll
Hans-Thilo Schmidt
== References == | Wikipedia/World_War_II_cryptography |
The Blum–Micali algorithm is a cryptographically secure pseudorandom number generator. The algorithm gets its security from the difficulty of computing discrete logarithms.
Let {\displaystyle p} be an odd prime, and let {\displaystyle g} be a primitive root modulo {\displaystyle p}. Let {\displaystyle x_{0}} be a seed, and let
{\displaystyle x_{i+1}=g^{x_{i}}\ {\bmod {\ p}}}.
The {\displaystyle i}th output of the algorithm is 1 if
{\displaystyle x_{i}\leq {\frac {p-1}{2}}}.
Otherwise the output is 0. This is equivalent to using one bit of {\displaystyle x_{i}} as your random number. It has been shown that {\displaystyle n-c-1} bits of {\displaystyle x_{i}} can be used if solving the discrete log problem is infeasible even for exponents with as few as {\displaystyle c} bits.
In order for this generator to be secure, the prime number {\displaystyle p} needs to be large enough so that computing discrete logarithms modulo {\displaystyle p} is infeasible. To be more precise, any method that predicts the numbers generated will lead to an algorithm that solves the discrete logarithm problem for that prime.
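The generator is short enough to sketch directly. The toy parameters below (p = 23 with primitive root g = 5) are far too small to be secure and are only meant to show the iteration x_{i+1} = g^{x_i} mod p and the output rule; a real instantiation would use a prime large enough that computing discrete logarithms is infeasible.

```python
def blum_micali(p, g, seed, nbits):
    """Yield nbits pseudorandom bits from the Blum-Micali generator."""
    x = seed
    bits = []
    for _ in range(nbits):
        x = pow(g, x, p)                       # x_{i+1} = g^{x_i} mod p
        bits.append(1 if x <= (p - 1) // 2 else 0)
    return bits

# Toy parameters only: p must be a large prime in practice.
p, g, seed = 23, 5, 3
print(blum_micali(p, g, seed, 16))
```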
There is a paper discussing possible examples of a quantum permanent-compromise attack on the Blum–Micali construction. These attacks illustrate how a previous attack on the Blum–Micali generator can be extended to the whole Blum–Micali construction, including the Blum Blum Shub and Kaliski generators.
== References ==
== External links ==
https://web.archive.org/web/20080216164459/http://crypto.stanford.edu/pbc/notes/crypto/blummicali.xhtml | Wikipedia/Blum–Micali_algorithm |
A cryptographic hash function (CHF) is a hash algorithm (a map of an arbitrary binary string to a binary string with a fixed size of {\displaystyle n} bits) that has special properties desirable for a cryptographic application:
the probability of a particular {\displaystyle n}-bit output result (hash value) for a random input string ("message") is {\displaystyle 2^{-n}} (as for any good hash), so the hash value can be used as a representative of the message;
finding an input string that matches a given hash value (a pre-image) is infeasible, assuming all input strings are equally likely. The resistance to such search is quantified as security strength: a cryptographic hash with {\displaystyle n} bits of hash value is expected to have a preimage resistance strength of {\displaystyle n} bits, unless the space of possible input values is significantly smaller than {\displaystyle 2^{n}} (a practical example can be found in § Attacks on hashed passwords);
a second preimage resistance strength, with the same expectations, refers to a similar problem of finding a second message that matches the given hash value when one message is already known;
finding any pair of different messages that yield the same hash value (a collision) is also infeasible: a cryptographic hash is expected to have a collision resistance strength of {\displaystyle n/2} bits (lower due to the birthday paradox).
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, (message) digests, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes.
Non-cryptographic hash functions are used in hash tables and to detect accidental errors; their constructions frequently provide no resistance to a deliberate attack. For example, a denial-of-service attack on hash tables is possible if the collisions are easy to find, as in the case of linear cyclic redundancy check (CRC) functions.
== Properties ==
Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value.
A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
Pre-image resistance
Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks.
Second pre-image resistance
Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks.
Collision resistance
It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It requires a hash value at least twice as long as that required for pre-image resistance; otherwise, collisions may be found by a birthday attack.
Collision resistance implies second pre-image resistance but does not imply pre-image resistance. The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash-function that is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.
Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.
A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems.
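The standard way to avoid relying on a bare hash for authentication is the HMAC construction mentioned above, which Python exposes directly. In this minimal sketch the key and message are placeholders chosen for illustration.

```python
import hmac
import hashlib

key = b"secret key"                  # placeholder key
message = b"message to authenticate"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)

# Verification should use a constant-time comparison.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(ok)  # True
```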
In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties.
Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered, which exploited the linearity of the checksum.
=== Degree of difficulty ===
In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a dozen bits to the latter.
For messages selected from a limited set of messages, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed that make such brute-force attacks more difficult.
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. For example, an exponential-time algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
== Illustration ==
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
== Applications ==
=== Verifying the integrity of messages and files ===
An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before, and after, transmission can determine whether any changes have been made to the message or file.
MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only prevent against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.
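For example, a downloaded file's digest can be recomputed locally and compared against the value published on the trusted site. The sketch below uses Python's hashlib; the file name is hypothetical.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of_file("downloaded.iso"))  # compare with the published digest
```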
=== Signature generation and verification ===
Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
=== Password verification ===
Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
However, use of standard cryptographic hash functions, such as the SHA series, is no longer considered safe for password storage.: 5.1.1.2 These algorithms are designed to be computed quickly, so if the hashed values are compromised, it is possible to try guessed passwords at high rates. Common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests. For details, see § Attacks on hashed passwords.
A password hash also requires the use of a large random, non-secret salt value that can be stored with the password hash. The salt is hashed with the password, altering the password hash mapping for each password, thereby making it infeasible for an adversary to store tables of precomputed hash values to which the password hash digest can be compared or to test a large number of purloined hash values in parallel.
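A minimal sketch of salted password hashing with a key derivation function, using PBKDF2 from the Python standard library; the iteration count here is an illustrative choice on our part, not a recommendation from the article.

```python
import os
import hashlib
import hmac

def hash_password(password, salt=None, iterations=600_000):
    """Return (salt, digest) for a password, generating a random salt if needed."""
    salt = salt if salt is not None else os.urandom(16)   # random, non-secret salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored, iterations=600_000):
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```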
=== Proof-of-work ===
A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try 2^19 times to find a valid header.
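A partial hash inversion of this kind is easy to express in code. The sketch below is ours and deliberately varies the details: it uses SHA-256 rather than Hashcash's SHA-1 and a smaller difficulty of 16 zero bits so it finishes quickly.

```python
import hashlib

def find_nonce(header: bytes, zero_bits: int) -> int:
    """Find a counter whose SHA-256 over (header + counter) starts with zero_bits zeros."""
    target = 1 << (256 - zero_bits)          # hash value must be below this bound
    nonce = 0
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = find_nonce(b"example-header:", 16)   # about 2^16 attempts on average
print(nonce)
```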
=== File or data identifier ===
A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree, which allows for additional benefits.
One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too.
However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where it is necessary for users to protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants, such as open source applications with multiple sources of download, where malicious files could be substituted in with the same appearance to the user, or an authentic file is modified to contain malicious data.
==== Content-addressable storage ====
== Hash functions based on block ciphers ==
There are several methods to use a block cipher to build a cryptographic hash function, specifically a one-way compression function.
The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE) though the function finally selected, Keccak, was built on a cryptographic sponge instead.
A standard block cipher such as AES can be used in place of these custom block ciphers; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
== Hash function design ==
=== Merkle–Damgård construction ===
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using a one-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function.
The last block processed should also be unambiguously length padded; this is crucial to the security of this construction. This construction is called the Merkle–Damgård construction. Most common classical hash functions, including SHA-1 and MD5, take this form.
=== Wide pipe versus narrow pipe ===
A straightforward application of the Merkle–Damgård construction, where the size of hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design causes many inherent flaws, including length-extension, multicollisions, long message attacks, generate-and-paste attacks, and also cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size – which range from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and HAIFA construction. None of the entrants in the NIST hash function competition use a classical Merkle–Damgård construction.
Meanwhile, truncating the output of a longer hash, such as used in SHA-512/256, also defeats many of these attacks.
== Use in building other cryptographic primitives ==
Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.
Message authentication codes (MACs) (also called keyed hash functions) are often built from hash functions. HMAC is such a MAC.
Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby-Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees; for example, SHACAL, BEAR and LION.
Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
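A toy version of that seed-plus-counter idea might look as follows; it is only meant to illustrate the construction, not to replace a vetted DRBG, and the seed value is a placeholder.

```python
import hashlib

def hash_prng(seed: bytes, nbytes: int) -> bytes:
    """Generate nbytes by hashing the secret seed together with an increasing counter."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

print(hash_prng(b"secret seed", 48).hex())
```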
Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.
== Concatenation ==
Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.
For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those n messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
== Cryptographic hash algorithms ==
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.
=== MD5 ===
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds, which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
=== SHA-1 ===
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using the shattered attack and the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes).
Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
=== RIPEMD-160 ===
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).
=== Whirlpool ===
Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).
=== SHA-2 ===
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher.
SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64.
The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.
=== SHA-3 ===
SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction, which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits.
Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits.
=== BLAKE2 ===
BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including the Argon2 password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
=== BLAKE3 ===
BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2.
== Attacks on cryptographic hash algorithms ==
There is a long list of cryptographic hash functions but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competitions.
Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).
On August 12, 2004, Joux, Carribault, Lemuel, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2^51 and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer.
In February 2005, an attack on SHA-1 was reported that would find collisions in about 2^69 hashing operations, rather than the 2^80 expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2^63 operations. Other theoretical weaknesses of SHA-1 have been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or using techniques such as randomized hashing that do not require collision resistance.
A successful, practical attack broke MD5 (used within certificates for Transport Layer Security) in 2008.
Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
== Attacks on hashed passwords ==
Rather than store plain user passwords, controlled-access systems frequently store the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all-too-frequent occurrence), the thief will only have the hash values, not the passwords.
Passwords may still be retrieved by an attacker from the hashes, because most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that even all possible combinations may be tested if calculation of the hash does not take too much time.
The use of cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt.
The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute force searches.: 5.1.1.2 Slow hashes include pbkdf2, bcrypt, scrypt, argon2, Balloon and some recent modes of Unix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.: 5.1.1.2
== See also ==
== References ==
=== Citations ===
=== Sources ===
Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (7 December 2018). "Hash functions". Handbook of Applied Cryptography. CRC Press. pp. 33–. ISBN 978-0-429-88132-9.
Aumasson, Jean-Philippe (6 November 2017). Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press. ISBN 978-1-59327-826-7. OCLC 1012843116.
== External links ==
Paar, Christof; Pelzl, Jan (2009). "11: Hash Functions". Understanding Cryptography, A Textbook for Students and Practitioners. Springer. Archived from the original on 2012-12-08. (companion web site contains online cryptography course that covers hash functions)
"The ECRYPT Hash Function Website".
Buldas, A. (2011). "Series of mini-lectures about cryptographic hash functions". Archived from the original on 2012-12-06.
Open source python based application with GUI used to verify downloads. | Wikipedia/Cryptographic_hash |
The Microsoft Windows platform specific Cryptographic Application Programming Interface (also known variously as CryptoAPI, Microsoft Cryptography API, MS-CAPI or simply CAPI) is an application programming interface included with Microsoft Windows operating systems that provides services to enable developers to secure Windows-based applications using cryptography. It is a set of dynamically linked libraries that provides an abstraction layer which isolates programmers from the code used to encrypt the data. The Crypto API was first introduced in Windows NT 4.0 and enhanced in subsequent versions.
CryptoAPI supports both public-key and symmetric key cryptography, though persistent symmetric keys are not supported. It includes functionality for encrypting and decrypting data and for authentication using digital certificates. It also includes a cryptographically secure pseudorandom number generator function CryptGenRandom.
CryptoAPI works with a number of CSPs (Cryptographic Service Providers) installed on the machine. CSPs are the modules that do the actual work of encoding and decoding data by performing the cryptographic functions. Vendors of HSMs may supply a CSP which works with their hardware.
== Cryptography API: Next Generation ==
Windows Vista features an update to the Crypto API known as Cryptography API: Next Generation (CNG). It has better API factoring to allow the same functions to work using a wide range of cryptographic algorithms, and includes a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is also flexible, featuring support for plugging custom cryptographic APIs into the CNG runtime. However, CNG Key Storage Providers still do not support symmetric keys. CNG works in both user and kernel mode, and also supports all of the algorithms from the CryptoAPI. The Microsoft provider that implements CNG is housed in Bcrypt.dll.
CNG also supports elliptic curve cryptography which, because it uses shorter keys for the same expected level of security, is more efficient than RSA. The CNG API integrates with the smart card subsystem by including a Base Smart Card Cryptographic Service Provider (Base CSP) module which encapsulates the smart card API. Smart card manufacturers just have to make their devices compatible with this, rather than provide a from-scratch solution.
CNG also adds support for Dual_EC_DRBG, a pseudorandom number generator defined in NIST SP 800-90A that could expose the user to eavesdropping by the National Security Agency since it contains a kleptographic backdoor, unless the developer remembers to generate new base points with a different cryptographically secure pseudorandom number generator or a true random number generator and then publish the generated seed in order to remove the NSA backdoor. It is also very slow. It is only used when called for explicitly.
CNG also replaces the default PRNG with CTR_DRBG using AES as the block cipher, because the earlier RNG, defined in the now superseded FIPS 186-2, is based on either DES or SHA-1, both of which have been broken. CTR_DRBG is one of the two algorithms in NIST SP 800-90 endorsed by Schneier, the other being Hash_DRBG.
== See also ==
CAPICOM
DPAPI
Encrypting File System
Public-key cryptography
Cryptographic Service Provider
PKCS#11
Crypto API (Linux)
== References ==
== External links ==
Cryptography Reference on MSDN
Microsoft CAPI at CryptoDox | Wikipedia/Cryptographic_Application_Programming_Interface |
Cryptography, or cryptology (from Ancient Greek: κρυπτός, romanized: kryptós "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual practice to do so. Such schemes, if well designed, are therefore termed "computationally secure". Theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these designs to be continually reevaluated and, if necessary, adapted. Information-theoretically secure schemes that provably cannot be broken even with unlimited computing power, such as the one-time pad, are much more difficult to use in practice than the best theoretically breakable but computationally secure schemes.
The growth of cryptographic technology has raised a number of legal issues in the Information Age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or even prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement disputes with regard to digital media.
== Terminology ==
The first use of the term "cryptograph" (as opposed to "cryptogram") dates back to the 19th century—originating from "The Gold-Bug", a story by Edgar Allan Poe.
Until modern times, cryptography referred almost exclusively to "encryption", which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that carry out the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible cyphertexts, finite possible keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two main types of cryptosystems: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same secret key encrypts and decrypts a message. Data manipulation in symmetric systems is significantly faster than in asymmetric systems. Asymmetric systems use a "public key" to encrypt a message and a related "private key" to decrypt it. The advantage of asymmetric systems is that the public key can be freely published, allowing parties to establish secure communication without having a shared secret key. In practice, asymmetric systems are used to first exchange a secret key, and then secure communication proceeds via a more efficient symmetric system using that key. Examples of asymmetric systems include Diffie–Hellman key exchange, RSA (Rivest–Shamir–Adleman), ECC (Elliptic Curve Cryptography), and Post-quantum cryptography. Secure symmetric algorithms include the commonly used AES (Advanced Encryption Standard) which replaced the older DES (Data Encryption Standard). Insecure symmetric algorithms include children's language tangling schemes such as Pig Latin or other cant, and all historical cryptographic schemes, however seriously intended, prior to the invention of the one-time pad early in the 20th century.
In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "wallaby" replaces "attack at dawn"). A cypher, in contrast, is a scheme for changing or substituting an element below such a level (a letter, a syllable, or a pair of letters, etc.) to produce a cyphertext.
Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to "crack" encryption algorithms or their implementations.
Some use the terms "cryptography" and "cryptology" interchangeably in English, while others (including US military practice generally) use "cryptography" to refer specifically to the use and practice of cryptographic techniques and "cryptology" to refer to the combined study of cryptography and cryptanalysis. English is more flexible than several other languages in which "cryptology" (done by cryptologists) is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology.
The study of characteristics of languages that have some application in cryptography or cryptology (e.g. frequency data, letter combinations, universal patterns, etc.) is called cryptolinguistics. Cryptolinguistics is especially used in military intelligence applications for deciphering foreign communications.
== History ==
Before the modern era, cryptography focused on message confidentiality (i.e., encryption)—conversion of messages from a comprehensible form into an incomprehensible one and back again at the other end, rendering it unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption attempted to ensure secrecy in communications, such as those of spies, military leaders, and diplomats. In recent decades, the field has expanded beyond confidentiality concerns to include techniques for message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs and secure computation, among others.
=== Classic cryptography ===
The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never offered much confidentiality from enterprising opponents. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter three positions further down the alphabet. Suetonius reports that Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (c. 1900 BCE), but this may have been done for the amusement of literate observers rather than as a way of concealing information.
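As an illustration (not part of the original article), the shift substitution described above can be sketched in a few lines of Python; the shift of 3 and the example string are arbitrary choices.
def caesar_encrypt(plaintext, shift=3):
    # Shift each letter by 'shift' positions in the Latin alphabet, wrapping around.
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

print(caesar_encrypt('fly at once'))  # prints 'iob dw rqfh'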
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, was a message tattooed on a slave's shaved head and concealed under the regrown hair. Other steganography methods involve 'hiding in plain sight,' such as using a music cipher to disguise an encrypted message within a regular piece of sheet music. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
In India, the 2000-year-old Kama Sutra of Vātsyāyana speaks of two different kinds of ciphers called Kautiliyam and Mulavediya. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.
In Sassanid Persia, there were two secret scripts, according to the Muslim author Ibn al-Nadim: the šāh-dabīrīya (literally "King's script") which was used for official correspondence, and the rāz-saharīya which was used to communicate secret messages with other countries.
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Ciphertexts produced by a classical cipher (and some modern ciphers) will reveal statistical information about the plaintext, and that information can often be used to break the cipher. After the discovery of frequency analysis, nearly all such ciphers could be broken by an informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). The Arab mathematician and polymath Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), which described the first known use of frequency analysis cryptanalysis techniques.
Language letter frequencies may offer little help for some extended historical encryption techniques such as homophonic cipher that tend to flatten the frequency distribution. For those ciphers, language letter group (or n-gram) frequencies may provide an attack.
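A minimal Python sketch of the frequency-counting step behind such attacks; the ciphertext here is only a placeholder, and a real attack would compare these counts against known letter and n-gram frequencies of the suspected plaintext language.
from collections import Counter

def letter_frequencies(ciphertext):
    # Count how often each letter occurs, ignoring case and non-letters.
    letters = [ch.lower() for ch in ciphertext if ch.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return {letter: count / total for letter, count in counts.items()}

sample = "GMZ BU PODF GMZ BU PODF"
print(sorted(letter_frequencies(sample).items(), key=lambda kv: -kv[1]))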
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his invention. In the Vigenère cipher, a polyalphabetic cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.
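A hedged Python sketch of the Vigenère scheme just described (the key word and message are made-up examples): each plaintext letter is shifted by the alphabet position of the corresponding key letter, so the substitution alphabet changes from letter to letter.
def vigenere_encrypt(plaintext, keyword):
    # Each letter of the key word selects a different Caesar shift.
    ciphertext = []
    key_shifts = [ord(k) - ord('a') for k in keyword.lower()]
    i = 0
    for ch in plaintext.lower():
        if ch.isalpha():
            shift = key_shifts[i % len(key_shifts)]
            ciphertext.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
            i += 1
        else:
            ciphertext.append(ch)
    return ''.join(ciphertext)

print(vigenere_encrypt('attack at dawn', 'lemon'))  # prints 'lxfopv ef rnhr'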
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has still often been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs's Principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim—'the enemy knows the system'.
Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cypher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented early in the 20th century, and several patented, among them rotor machines—famously including the Enigma machine used by the German government and military from the late 1920s and during World War II. The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.
=== Early computer-era cryptography ===
Cryptanalysis of the new mechanical ciphering devices proved to be both difficult and laborious. In the United Kingdom, cryptanalytic efforts at Bletchley Park during WWII spurred the development of more efficient means for carrying out repetitive tasks, such as military code breaking (decryption). This culminated in the development of the Colossus, the world's first fully electronic, digital, programmable computer, which assisted in the decryption of ciphers generated by the German Army's Lorenz SZ40/42 machine.
Extensive open academic research into cryptography is relatively recent, beginning in the mid-1970s. In the early 1970s IBM personnel designed the Data Encryption Standard (DES) algorithm that became the first federal government cryptography standard in the United States. In 1976 Whitfield Diffie and Martin Hellman published the Diffie–Hellman key exchange algorithm. In 1977 the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.
Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. There are very few cryptosystems that are proven to be unconditionally secure. The one-time pad is one, and was proven to be so by Claude Shannon. There are a few important algorithms that have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and some other systems are secure, but even so, proof of unbreakability is unavailable since the underlying mathematical problem remains open. In practice, these are widely used and are believed unbreakable in practice by most competent observers. There are systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is impossible, but they are quite unusable in practice. The discrete logarithm problem is the basis for believing some other cryptosystems are secure, and again, there are related, less practical systems that are provably secure relative to the solvability or insolvability of the discrete log problem.
As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs. For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so the key lengths that are specified must keep advancing as well. The potential impact of quantum computing is already being considered by some cryptographic system designers developing post-quantum cryptography. The announced imminence of small implementations of these machines may be making the need for preemptive caution rather more than merely speculative.
== Modern cryptography ==
Claude Shannon's two papers, his 1948 paper on information theory, and especially his 1949 paper on cryptography, laid the foundations of modern cryptography and provided a mathematical basis for future cryptography. His 1949 paper has been noted as having provided a "solid theoretical basis for cryptography and for cryptanalysis", and as having turned cryptography from an "art to a science". As a result of his contributions and work, he has been described as the "founding father of modern cryptography".
Prior to the early 20th century, cryptography was mainly concerned with linguistic and lexicographic patterns. Since then cryptography has broadened in scope, and now makes extensive use of mathematical subdisciplines, including information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics. Cryptography is also a branch of engineering, but an unusual one since it deals with active, intelligent, and malevolent opposition; other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics.
Just as the development of digital computers and electronics helped in cryptanalysis, it made possible much more complex ciphers. Furthermore, computers allowed for the encryption of any kind of data representable in any binary format, unlike classical ciphers which only encrypted written language texts; this was new and significant. Computer use has thus supplanted linguistic cryptography, both for cipher design and cryptanalysis. Many computer ciphers can be characterized by their operation on binary bit sequences (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. However, computers have also assisted cryptanalysis, which has compensated to some extent for increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible.
=== Symmetric-key cryptography ===
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext as opposed to individual characters, the input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs that have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, even some designed by capable practitioners, have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as stream ciphers by generating blocks of a keystream (in place of a Pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
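The keystream idea can be sketched in Python as follows. This toy generator derives keystream blocks by hashing the key together with a counter, a construction shown only for illustration and not a vetted cipher; the XOR combination, however, is the same operation a real stream cipher or a block cipher run in counter mode would apply.
import hashlib

def toy_keystream(key: bytes, length: int) -> bytes:
    # Derive keystream blocks by hashing key || counter; illustration only, not a real cipher.
    stream = b''
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return stream[:length]

def xor_bytes(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

key = b'example key'
plaintext = b'attack at dawn'
ciphertext = xor_bytes(plaintext, toy_keystream(key, len(plaintext)))
# Decryption is the same XOR with the same keystream.
assert xor_bytes(ciphertext, toy_keystream(key, len(plaintext))) == plaintext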
Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash; specific hash algorithms and their status are discussed in the cryptographic hash functions section below.
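A brief Python sketch of the difference between a bare hash and a keyed MAC, using the standard library's hashlib and hmac modules; the key and message are placeholders.
import hashlib, hmac

message = b'wire transfer: 100 units'
digest = hashlib.sha256(message).hexdigest()              # anyone can recompute this bare hash

key = b'shared secret key'
tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # only holders of the key can recompute this

# The receiver recomputes the tag and compares it in constant time.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())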
=== Public-key cryptography ===
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all consistent and secret.
In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric key) cryptography in which two different but mathematically related keys are used—a public key and a private key. A public key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption system, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not find such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that is now widely used in secure communications to allow two parties to secretly agree on a shared encryption key.
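A toy Python sketch of the Diffie–Hellman exchange with deliberately tiny, insecure parameters; real deployments use groups thousands of bits long or elliptic curves, and the specific numbers here are arbitrary.
import secrets

p, g = 23, 5                        # toy public parameters; far too small for real use
a = secrets.randbelow(p - 3) + 2    # Alice's secret exponent, in [2, p-2]
b = secrets.randbelow(p - 3) + 2    # Bob's secret exponent
A = pow(g, a, p)                    # value Alice sends over the open channel
B = pow(g, b, p)                    # value Bob sends over the open channel

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)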
The X.509 standard defines the most commonly used format for public key certificates.
Diffie and Hellman's publication sparked widespread academic efforts in finding a practical public-key encryption system. This race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.
The Diffie–Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Other asymmetric-key algorithms include the Cramer–Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques.
A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that cryptographers at GCHQ had anticipated several academic developments. Reportedly, around 1970, James H. Ellis had conceived the principles of asymmetric key cryptography. In 1973, Clifford Cocks invented a solution that was very similar in design rationale to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.
Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic of being easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).
Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.
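For illustration, a textbook-RSA sketch in Python with tiny primes shows the modular-exponentiation operations referred to above; it omits padding and every other safeguard a real implementation needs, so it is not secure, and the primes and message are arbitrary. The modular inverse uses pow(e, -1, phi), available in Python 3.8 and later.
# Toy key generation with small primes (insecure; illustration only).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

message = 65                       # a small integer standing in for a message
ciphertext = pow(message, e, n)    # encryption: modular exponentiation with the public key
recovered = pow(ciphertext, d, n)  # decryption with the private key
assert recovered == message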
=== Cryptographic hash functions ===
Cryptographic hash functions are functions that take a variable-length input and return a fixed-length output, which can be used in, for example, a digital signature. For a hash function to be secure, it must be difficult to compute two inputs that hash to the same value (collision resistance) and to compute an input that hashes to a given output (preimage resistance). MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but is vulnerable to clashes as of 2011; and the US standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit." Thus, a hash function design competition was meant to select a new U.S. national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the NIST announced that Keccak would be the new SHA-3 hash algorithm. Unlike block and stream ciphers that are invertible, cryptographic hash functions produce a hashed output that cannot be used to retrieve the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.
=== Cryptanalysis ===
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message. Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such proof has been found to date, the one-time-pad remains the only theoretically unbreakable cipher. Although well-implemented one-time-pad encryption cannot be broken, traffic analysis is still possible.
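A minimal Python sketch of the one-time pad: the key is as long as the message and must never be reused; os.urandom stands in here for the truly random key material the proof requires, and the message is a placeholder.
import os

plaintext = b'meet me at midnight'
key = os.urandom(len(plaintext))   # key material as long as the message; never reuse it
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == plaintext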
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what Eve (an attacker) knows and what capabilities are available. In a ciphertext-only attack, Eve has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, Eve has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, Eve may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. In a chosen-ciphertext attack, Eve may be able to choose ciphertexts and learn their corresponding plaintexts. Finally, in a man-in-the-middle attack, Eve gets in between Alice (the sender) and Bob (the recipient), accesses and modifies the traffic, and then forwards it to the recipient. Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved).
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts (with their corresponding ciphertexts) and approximately 2^43 DES operations. This is a considerable improvement over brute force attacks.
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these are the difficulty of integer factorization of semiprimes and the difficulty of calculating discrete logarithms, both of which are not yet proven to be solvable in polynomial time (P) using only a classical Turing-complete computer. Much public-key cryptanalysis concerns designing algorithms in P that can solve these problems, or using other technologies, such as quantum computers. For instance, the best-known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best-known algorithms for factoring, at least for problems of more or less equivalent size. Thus, to achieve an equivalent strength of encryption, techniques that depend upon the difficulty of factoring large composite numbers, such as the RSA cryptosystem, require larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.
While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, they may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis and can be quite useful to an alert adversary. Poor administration of a cryptosystem, such as permitting keys that are too short, will make any system vulnerable, regardless of other virtues. Social engineering and other attacks against humans (e.g., bribery, extortion, blackmail, espionage, rubber-hose cryptanalysis or torture) are often employed because they are far more cost-effective, and can be carried out in a reasonable amount of time, compared with pure cryptanalysis.
=== Cryptographic primitives ===
Much of the theoretical work in cryptography concerns cryptographic primitives—algorithms with basic cryptographic properties—and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems, is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
=== Cryptosystems ===
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Some widely known cryptosystems include RSA, Schnorr signature, ElGamal encryption, and Pretty Good Privacy (PGP). More complex cryptosystems include electronic cash systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems, (like zero-knowledge proofs) and systems for secret sharing.
=== Lightweight cryptography ===
Lightweight cryptography (LWC) concerns cryptographic algorithms developed for a strictly constrained environment. The growth of the Internet of Things (IoT) has spurred research into the development of lightweight algorithms that are better suited to such environments. An IoT environment imposes strict constraints on power consumption, processing power, and security. Algorithms such as PRESENT, AES, and SPECK are examples of the many LWC algorithms that have been developed to meet the standard set by the National Institute of Standards and Technology.
== Applications ==
Cryptography is widely used on the internet to help protect user data and prevent eavesdropping. To ensure secrecy during transmission, many systems use private-key cryptography to protect transmitted information. With public-key systems, one can maintain secrecy without a master key or a large number of keys. But some systems, such as BitLocker and VeraCrypt, are generally not based on public-private key cryptography; for example, VeraCrypt uses a password hash to generate the single private key, although it can be configured to run in public-private key systems. The open-source encryption library OpenSSL provides free and open-source encryption software and tools. The most commonly used cipher suites are based on AES, which has hardware acceleration on all x86 processors that support AES-NI. A close contender is ChaCha20-Poly1305, a stream cipher that is commonly used for mobile devices, since they are ARM-based and do not feature the AES-NI instruction set extension.
=== Cybersecurity ===
Cryptography can be used to secure communications by encrypting them. Websites use encryption via HTTPS. "End-to-end" encryption, where only sender and receiver can read messages, is implemented for email in Pretty Good Privacy and for secure messaging in general in WhatsApp, Signal and Telegram.
Operating systems use encryption to keep passwords secret, conceal parts of the system, and ensure that software updates are truly from the system maker. Instead of storing plaintext passwords, computer systems store hashes thereof; then, when a user logs in, the system passes the given password through a cryptographic hash function and compares it to the hashed value on file. In this manner, neither the system nor an attacker has at any point access to the password in plaintext.
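A small Python sketch of the verify-by-hash pattern described above, using the standard library's PBKDF2 routine with a per-user salt; the password, salt handling, and iteration count are illustrative choices, not a recommendation.
import hashlib, hmac, os

def hash_password(password, salt=None):
    # Derive a salted, deliberately slow hash; the salt and digest are what gets stored.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

salt, stored = hash_password('correct horse battery staple')
assert verify_password('correct horse battery staple', salt, stored)
assert not verify_password('wrong guess', salt, stored)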
Encryption is sometimes used to encrypt one's entire drive. For example, University College London has implemented BitLocker (a program by Microsoft) to render drive data opaque without users logging in.
=== Cryptocurrencies and cryptoeconomics ===
Cryptographic techniques enable cryptocurrency technologies, such as distributed ledger technologies (e.g., blockchains), which finance cryptoeconomics applications such as decentralized finance (DeFi). Key cryptographic techniques that enable cryptocurrencies and cryptoeconomics include, but are not limited to: cryptographic keys, cryptographic hash function, asymmetric (public key) encryption, Multi-Factor Authentication (MFA), End-to-End Encryption (E2EE), and Zero Knowledge Proofs (ZKP).
== Legal issues ==
=== Prohibitions ===
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
=== Export controls ===
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can transmit and receive emails via TLS, and can send and receive email encrypted with S/MIME. Many Internet users do not realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally do not find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
=== NSA involvement ===
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
=== Digital rights management ===
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
=== Forced disclosure of encryption keys ===
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped).
== See also ==
Collision attack
Comparison of cryptography libraries
Cryptovirology – Securing and encrypting virology
Crypto Wars – Attempts to limit access to strong cryptography
Encyclopedia of Cryptography and Security – Book by Technische Universiteit Eindhoven
Global surveillance – Mass surveillance across national borders
Indistinguishability obfuscation – Type of cryptographic software obfuscation
Information theory – Scientific study of digital information
Outline of cryptography
List of cryptographers – A list of historical mathematicians
List of multiple discoveries
List of unsolved problems in computer science – List of unsolved computational problems
Pre-shared key – Method to set encryption keys
Secure cryptoprocessor
Strong cryptography – Term applied to cryptographic systems that are highly resistant to cryptanalysis
Syllabical and Steganographical Table – Eighteenth-century work believed to be the first cryptography chart
World Wide Web Consortium's Web Cryptography API – World Wide Web Consortium cryptography standard
== References ==
== Further reading ==
== External links ==
The dictionary definition of cryptography at Wiktionary
Media related to Cryptography at Wikimedia Commons
Cryptography on In Our Time at the BBC
Crypto Glossary and Dictionary of Technical Cryptography Archived 4 July 2022 at the Wayback Machine
A Course in Cryptography by Raphael Pass & Abhi Shelat – offered at Cornell in the form of lecture notes.
For more on the use of cryptographic elements in fiction, see: Dooley, John F. (23 August 2012). "Cryptology in Fiction". Archived from the original on 29 July 2020. Retrieved 20 February 2015.
The George Fabyan Collection at the Library of Congress has early editions of works of seventeenth-century English literature, publications relating to cryptography. | Wikipedia/Cryptographer |
In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources, either pre-existing ones such as mouse movements or variance in fan noise and disk-drive timings, or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.
== Linux kernel ==
The Linux kernel generates entropy from keyboard timings, mouse movements, and integrated drive electronics (IDE) timings and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom. This capability was introduced in Linux version 1.3.30.
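A short Python sketch of consuming this kernel-provided randomness; os.urandom draws from the same pool exposed via /dev/urandom, and the direct file read is shown only to make the special file explicit.
import os

print(os.urandom(16).hex())               # 16 random bytes via the portable OS interface

with open('/dev/urandom', 'rb') as f:     # equivalent direct read from the special file
    print(f.read(16).hex())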
There are some Linux kernel patches allowing one to use more entropy sources. The audio_entropyd project, which is included in some operating systems such as Fedora, allows audio data to be used as an entropy source. Also available are video_entropyd, which calculates random data from a video-source and entropybroker, which includes these three and can be used to distribute the entropy data to systems not capable of running any of these (e.g. virtual machines). Furthermore, one can use the HAVEGE algorithm through haveged to pool entropy. In some systems, network interrupts can be used as an entropy source as well.
== OpenBSD kernel ==
OpenBSD has integrated cryptography as one of its main goals and has always worked on increasing its entropy for encryption but also for randomising many parts of the OS, including various internal operations of its kernel. Around 2011, two of the random devices were dropped and linked into a single source as it could produce hundreds of megabytes per second of high quality random data on an average system. This made depletion of random data by userland programs impossible on OpenBSD once enough entropy has initially been gathered.
== Hurd kernel ==
A driver ported from the Linux kernel has been made available for the Hurd kernel.
== Solaris ==
/dev/random and /dev/urandom have been available as Sun packages or patches for Solaris since Solaris 2.6, and have been a standard feature since Solaris 9. As of Solaris 10, administrators can remove existing entropy sources or define new ones via the kernel-level cryptographic framework.
A 3rd-party kernel module implementing /dev/random is also available for releases dating back to Solaris 2.4.
== OS/2 ==
There is a software package for OS/2 that allows software processes to retrieve random data.
== Windows ==
Microsoft Windows releases newer than Windows 95 use CryptoAPI to gather entropy in a similar fashion to Linux kernel's /dev/random.
Windows's CryptoAPI uses the binary registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\RNG\Seed to store a seeded value from all of its entropy sources.
Because CryptoAPI is closed-source, some free and open source software applications running on the Windows platform use other measures to get randomness. For example, GnuPG, as of version 1.06, uses a variety of sources, such as the number of free bytes in memory, which are combined with a random seed to generate the randomness it needs.
Programmers using CAPI can get entropy by calling CAPI's CryptGenRandom(), after properly initializing it.
CryptoAPI was deprecated starting with Windows Vista; the newer API is called Cryptography API: Next Generation (CNG). Windows's CNG uses the binary registry key HKEY_LOCAL_MACHINE\SYSTEM\RNG\Seed to store a seeded value.
Newer versions of Windows are able to use a variety of entropy sources:
TPM if available and enabled on motherboard
Entropy from UEFI interface (if booted from UEFI)
RDRAND CPU instruction if available
Hardware system clock (RTC)
OEM0 ACPI table content
Interrupt timings
Keyboard timings and mouse movements
== Embedded systems ==
Embedded systems have difficulty gathering enough entropy as they are often very simple devices with short boot times, and key generation operations that require sufficient entropy are often one of the first things a system may do. Common entropy sources may not exist on these devices, or will not have been active long enough during boot to ensure sufficient entropy exists. Embedded devices often lack rotating disk drives, human interface devices, and even fans, and the network interface, if any, will not have been active for long enough to provide much entropy. Lacking easy access to entropy, some devices may use hard-coded keys to seed random generators, or seed random generators from easily guessed unique identifiers such as the device's MAC address. A simple study demonstrated the widespread use of weak keys by finding many embedded systems such as routers using the same keys. It was thought that the number of weak keys found would have been far higher if simple and often attacker determinable one-time unique identifiers had not been incorporated into the entropy of some of these systems.
== (De)centralized systems ==
A true random number generator (TRNG) can be a (de)central service. One example of a centralized system where a random number can be acquired is the randomness beacon service from the National Institute of Standards and Technology. The Cardano platform uses the participants of their decentralized proof-of-stake protocol to generate random numbers.
== Other systems ==
There are some software packages that allow one to use a userspace process to gather random characters, exactly what /dev/random does, such as EGD, the Entropy Gathering Daemon.
== Hardware-originated entropy ==
Modern CPUs and hardware often feature integrated generators that can provide high-quality and high-speed entropy to operating systems. On systems based on the Linux kernel, one can read the entropy generated from such a device through /dev/hw_random. However, sometimes /dev/hw_random may be slow.
There are some companies manufacturing entropy generation devices, and some of them are shipped with drivers for Linux.
On Linux systems, one can install the rng-tools package that supports the true random number generators (TRNGs) found in CPUs supporting the RDRAND instruction, in Trusted Platform Modules, and in some Intel, AMD, or VIA chipsets, effectively increasing the entropy collected into /dev/random and potentially improving its cryptographic quality. This is especially useful on headless systems that have no other sources of entropy.
== Practical implications ==
System administrators, especially those supervising Internet servers, have to ensure that the server processes will not halt because of entropy depletion. Entropy on servers utilising the Linux kernel, or any other kernel or userspace process that generates entropy from the console and the storage subsystem, is often less than ideal because of the lack of a mouse and keyboard; such servers therefore have to generate their entropy from a limited set of resources such as IDE timings.
The entropy pool size in Linux is viewable through the file /proc/sys/kernel/random/entropy_avail and should generally be at least 2000 bits (out of a maximum of 4096). Entropy changes frequently.
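A minimal Python sketch for checking the pool level mentioned above on a Linux system; the threshold of 2000 bits mirrors the figure given in the text.
def entropy_available(path='/proc/sys/kernel/random/entropy_avail'):
    # Read the kernel's current estimate of available entropy, in bits.
    with open(path) as f:
        return int(f.read().strip())

bits = entropy_available()
print(f'{bits} bits of entropy available')
if bits < 2000:
    print('warning: entropy pool is low')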
Administrators responsible for systems that have low or zero entropy should not attempt to use /dev/urandom as a substitute for /dev/random as this may cause SSL/TLS connections to have lower-grade encryption.
Some software systems change their Diffie-Hellman keys often, and this may in some cases help a server to continue functioning normally even with an entropy bottleneck.
On servers with low entropy, a process can appear hung when it is waiting for random characters to appear in /dev/random (on Linux-based systems). For example, there was a known problem in Debian that caused exim4 to hang in some cases because of this.
=== Security ===
Entropy sources can be used for keyboard timing attacks.
Entropy can affect the cryptography (TLS/SSL) of a server: If a server fails to use a proper source of randomness, the keys generated by the server will be insecure.
In some cases a cracker (malicious attacker) can guess some bits of entropy from the output of a pseudorandom number generator (PRNG), and this happens when not enough entropy is introduced into the PRNG.
== Potential sources ==
Commonly used entropy sources include the mouse, keyboard, and IDE timings, but there are other potential sources. For example, one could collect entropy from the computer's microphone, or by building a sensor to measure the air turbulence inside a disk drive.
For Unix/BSD derivatives there exists a USB-based solution that utilizes an ARM Cortex CPU to filter and secure the bit stream generated by two entropy generator sources in the system.
Cloudflare uses an image feed from a rack of 80 lava lamps as an additional source of entropy.
== See also ==
Entropy (information theory)
Entropy
Randomness
== References ==
== External links ==
Overview of entropy and of entropy generators in Linux | Wikipedia/Entropy_(computing) |
In mathematics, a negligible function is a function {\displaystyle \mu :\mathbb {N} \to \mathbb {R} } such that for every positive integer c there exists an integer Nc such that for all x > Nc,
{\displaystyle |\mu (x)|<{\frac {1}{x^{c}}}.}
Equivalently, the following definition may be used. A function {\displaystyle \mu :\mathbb {N} \to \mathbb {R} } is negligible if for every positive polynomial poly(·) there exists an integer Npoly > 0 such that for all x > Npoly,
{\displaystyle |\mu (x)|<{\frac {1}{\operatorname {poly} (x)}}.}
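As an illustrative example (not part of the original article): the function {\displaystyle \mu (x)=2^{-x}} is negligible, since for every positive integer c the exponential {\displaystyle 2^{x}} eventually dominates {\displaystyle x^{c}}, so there is an Nc with {\displaystyle 2^{-x}<1/x^{c}} for all x > Nc. By contrast, {\displaystyle \mu (x)=1/x^{1000}} is not negligible: the required inequality fails for c = 1001, since {\displaystyle 1/x^{1000}>1/x^{1001}} for all x > 1.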
== History ==
The concept of negligibility traces back to sound models of analysis. Though the concepts of "continuity" and "infinitesimal" became important in mathematics during Newton and Leibniz's time (1680s), they were not well defined until the late 1810s. The first reasonably rigorous definition of continuity in mathematical analysis was due to Bernard Bolzano, who wrote the modern definition of continuity in 1817. Later Cauchy, Weierstrass and Heine defined continuity as follows (with all numbers in the real number domain {\displaystyle \mathbb {R} }):
(Continuous function) A function {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } is continuous at {\displaystyle x=x_{0}} if for every {\displaystyle \varepsilon >0} there exists a positive number {\displaystyle \delta >0} such that {\displaystyle |x-x_{0}|<\delta } implies {\displaystyle |f(x)-f(x_{0})|<\varepsilon .}
This classic definition of continuity can be transformed into the definition of negligibility in a few steps by changing the parameters used in the definition. First, in the case {\displaystyle x_{0}=\infty } with {\displaystyle f(x_{0})=0}, we must define the concept of "infinitesimal function":
(Infinitesimal) A continuous function {\displaystyle \mu :\mathbb {R} \to \mathbb {R} } is infinitesimal (as {\displaystyle x} goes to infinity) if for every {\displaystyle \varepsilon >0} there exists {\displaystyle N_{\varepsilon }} such that for all {\displaystyle x>N_{\varepsilon }},
{\displaystyle |\mu (x)|<\varepsilon .}
Next, we replace {\displaystyle \varepsilon >0} by the functions {\displaystyle 1/x^{c}} where {\displaystyle c>0}, or by {\displaystyle 1/\operatorname {poly} (x)} where {\displaystyle \operatorname {poly} (x)} is a positive polynomial. This leads to the definitions of negligible functions given at the top of this article. Since the constants {\displaystyle \varepsilon >0} can be expressed as {\displaystyle 1/\operatorname {poly} (x)} with a constant polynomial, this shows that infinitesimal functions are a superset of negligible functions.
== Use in cryptography ==
In complexity-based modern cryptography, a security scheme is provably secure if the probability of security failure (e.g., inverting a one-way function, distinguishing cryptographically strong pseudorandom bits from truly random bits) is negligible in terms of the input {\displaystyle x} = cryptographic key length {\displaystyle n}. Hence comes the definition at the top of the page, because the key length {\displaystyle n} must be a natural number.
Nevertheless, the general notion of negligibility does not require that the input parameter {\displaystyle x} be the key length {\displaystyle n}. Indeed, {\displaystyle x} can be any predetermined system metric, and the corresponding mathematical analysis would then illustrate hidden analytical behaviors of the system.
The reciprocal-of-polynomial formulation is used for the same reason that computational boundedness is defined as polynomial running time: it has mathematical closure properties that make it tractable in the asymptotic setting (see #Closure properties). For example, if an attack succeeds in violating a security condition only with negligible probability, and the attack is repeated a polynomial number of times, the success probability of the overall attack still remains negligible.
In practice one might want to have more concrete functions bounding the adversary's success probability and to choose the security parameter large enough that this probability is smaller than some threshold, say {\displaystyle 2^{-128}}.
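As a purely illustrative concrete-security calculation, one can ask how large the security parameter must be before such a bound drops under 2^(-128). The bound n^3/2^n used below is an assumed toy bound (a cubic number of attack repetitions, each succeeding with probability 2^(-n)), not one taken from any particular scheme.

```python
# Toy concrete-security calculation: find the smallest n for which an assumed
# success-probability bound poly(n) / 2**n falls below the target 2**-128.
from fractions import Fraction

TARGET = Fraction(1, 2**128)

def bound(n):
    # Assumed illustrative bound: n**3 attack repetitions, each with success 2**-n.
    return Fraction(n**3, 2**n)

n = 1
while bound(n) >= TARGET:
    n += 1
print(f"smallest n with n^3 / 2^n < 2^-128: {n}")
```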
== Closure properties ==
One of the reasons that negligible functions are used in foundations of complexity-theoretic cryptography is that they obey closure properties. Specifically,
If {\displaystyle f,g:\mathbb {N} \to \mathbb {R} } are negligible, then the function {\displaystyle x\mapsto f(x)+g(x)} is negligible.
If {\displaystyle f:\mathbb {N} \to \mathbb {R} } is negligible and {\displaystyle p} is any real polynomial, then the function {\displaystyle x\mapsto p(x)\cdot f(x)} is negligible.
Conversely, if {\displaystyle f:\mathbb {N} \to \mathbb {R} } is not negligible, then neither is {\displaystyle x\mapsto f(x)/p(x)} for any real polynomial {\displaystyle p}.
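For completeness, here is a short argument for the first closure property, written directly from the definition given at the top of the article (a sketch, not taken from a cited source):

```latex
% Sketch: the sum of two negligible functions is negligible.
Fix a positive integer $c$ and apply the definition to $f$ and $g$ with exponent $c+1$:
there exist $N_f$ and $N_g$ such that $|f(x)| < 1/x^{c+1}$ for all $x > N_f$ and
$|g(x)| < 1/x^{c+1}$ for all $x > N_g$. Then for all $x > \max(N_f, N_g, 2)$,
\[
  |f(x) + g(x)| \le |f(x)| + |g(x)| < \frac{2}{x^{c+1}} \le \frac{1}{x^{c}},
\]
since $2/x \le 1$ whenever $x \ge 2$. As $c$ was arbitrary, $x \mapsto f(x)+g(x)$ is negligible.
```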
== Examples ==
{\displaystyle n\mapsto a^{-n}} is negligible for any {\displaystyle a\geq 2}:
Step: This is an exponential decay function where {\displaystyle a} is a constant greater than or equal to 2. As {\displaystyle n\to \infty }, {\displaystyle a^{-n}\to 0} very quickly, making it negligible.
{\displaystyle f(n)=3^{-{\sqrt {n}}}} is negligible:
Step: This function has exponential decay with a base of 3, but the exponent grows slower than {\displaystyle n} (only at {\displaystyle {\sqrt {n}}}). As {\displaystyle n\to \infty }, {\displaystyle 3^{-{\sqrt {n}}}\to 0}, so it is still negligible but decays slower than {\displaystyle 3^{-n}}.
{\displaystyle f(n)=n^{-\log n}} is negligible:
Step: In this case, {\displaystyle n^{-\log n}} has a polynomial base but an exponent that itself grows with {\displaystyle \log n}. Since the decay rate increases with {\displaystyle n}, the function goes to 0 faster than polynomial functions like {\displaystyle n^{-k}} for any constant {\displaystyle k}, making it negligible.
{\displaystyle f(n)=(\log n)^{-\log n}} is negligible:
Step: This function decays as the logarithm of {\displaystyle n} raised to a negative exponent {\displaystyle -\log n}, which leads to a fast approach to 0 as {\displaystyle n\to \infty }. The decay here is faster than inverse logarithmic or polynomial rates, making it negligible.
{\displaystyle f(n)=2^{-c\log n}} is not negligible, for positive {\displaystyle c}:
Step: We can rewrite this as {\displaystyle f(n)=n^{-c}}, which is a polynomial decay rather than an exponential one. Since {\displaystyle c} is positive, {\displaystyle f(n)\to 0} as {\displaystyle n\to \infty }, but it does not decay as quickly as true exponential functions in {\displaystyle n}, making it non-negligible.
Assume {\displaystyle n>0} and take the limit as {\displaystyle n\to \infty }:
Negligible:
{\displaystyle f(n)={\frac {1}{x^{n/2}}}}:
Step: This function decays exponentially, with the base {\displaystyle x} (a fixed constant greater than 1) raised to the power {\displaystyle -{\frac {n}{2}}}. As {\displaystyle n\to \infty }, {\displaystyle x^{-{\frac {n}{2}}}\to 0} quickly, making it negligible.
{\displaystyle f(n)={\frac {1}{x^{\log {(n^{k})}}}}} for {\displaystyle k\geq 1}:
Step: We can simplify {\displaystyle x^{-\log(n^{k})}} as {\displaystyle n^{-k\log x}}, which decays faster than any polynomial. As {\displaystyle n\to \infty }, the function approaches zero and is considered negligible for any {\displaystyle k\geq 1} and {\displaystyle x>1}.
{\displaystyle f(n)={\frac {1}{x^{(\log n)^{k}}}}} for {\displaystyle k\geq 1}:
Step: The decay is determined by the base {\displaystyle x} raised to the power {\displaystyle -(\log n)^{k}}. Since {\displaystyle (\log n)^{k}} grows with {\displaystyle n}, this function approaches zero faster than polynomial decay, making it negligible.
{\displaystyle f(n)={\frac {1}{x^{\sqrt {n}}}}}:
Step: Here, {\displaystyle f(n)} decays exponentially, with a base of {\displaystyle x} raised to {\displaystyle -{\sqrt {n}}}. As {\displaystyle n\to \infty }, {\displaystyle f(n)\to 0} quickly, so it is considered negligible.
Non-negligible:
{\displaystyle f(n)={\frac {1}{n^{1/n}}}}:
Step: Since {\displaystyle n^{1/n}\to 1} as {\displaystyle n\to \infty }, {\displaystyle f(n)} approaches 1 rather than 0, so it fails to approach zero at all and is not negligible.
{\displaystyle f(n)={\frac {1}{x^{n(\log n)}}}}:
Step: With an exponential base and exponent {\displaystyle n(\log n)}, this function would approach zero very rapidly, suggesting negligibility.
== See also ==
Negligible set
Colombeau algebra
Nonstandard numbers
Gromov's theorem on groups of polynomial growth
Non-standard calculus
== References ==
Goldreich, Oded (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. ISBN 0-521-79172-3.
Sipser, Michael (1997). "Section 10.6.3: One-way functions". Introduction to the Theory of Computation. PWS Publishing. pp. 374–376. ISBN 0-534-94728-X.
Papadimitriou, Christos (1993). "Section 12.1: One-way functions". Computational Complexity (1st ed.). Addison Wesley. pp. 279–298. ISBN 0-201-53082-1.
Colombeau, Jean François (1984). New Generalized Functions and Multiplication of Distributions. Mathematics Studies 84, North Holland. ISBN 0-444-86830-5.
Bellare, Mihir (1997). "A Note on Negligible Functions". Journal of Cryptology. 15 (2002). Dept. of Computer Science & Engineering, University of California at San Diego. CiteSeerX 10.1.1.43.7900. | Wikipedia/Negligible_function
The Yarrow algorithm is a family of cryptographic pseudorandom number generators (CSPRNGs) devised by John Kelsey, Bruce Schneier, and Niels Ferguson and published in 1999. The Yarrow algorithm is explicitly unpatented, royalty-free, and open source; no license is required to use it. An improved design from Ferguson and Schneier, Fortuna, is described in their book, Practical Cryptography.
Yarrow was used in FreeBSD, but has since been superseded by Fortuna. Yarrow was also incorporated in iOS and macOS for their /dev/random devices, but Apple switched to Fortuna in Q1 2020.
== Name ==
The name Yarrow alludes to the use of the yarrow plant in the random generating process of I Ching divination. Since the Xia dynasty (c. 2070 to c. 1600 BCE), the Chinese have used yarrow stalks for divination. Fortunetellers divide a set of 50 yarrow stalks into piles and use modular arithmetic recursively to generate two bits of random information that have a non-uniform distribution.
== Principles ==
Yarrow's main design principles are: resistance to attacks, easy use by programmers with no cryptography background, and reusability of existing building blocks. Formerly widely used designs such as ANSI X9.17 and the RSAREF 2.0 PRNG have loopholes that provide attack opportunities under some circumstances, and some of them were not designed with real-world attacks in mind. Yarrow also aims at easy integration, so that system designers with little knowledge of PRNG functionality can still use it in a reasonably secure way.
== Design ==
=== Components ===
The design of Yarrow consists of four major components: an entropy accumulator, a reseed mechanism, a generation mechanism, and reseed control.
Yarrow accumulates entropy into two pools: the fast pool, which provides frequent reseeds of the key to keep the duration of key compromises as short as possible, and the slow pool, which provides rare but conservative reseeds of the key. This makes sure that the reseed is secure even when the entropy estimates are very optimistic.
The reseed mechanism connects the entropy accumulator to the generating mechanism. Reseeding from the fast pool uses the current key and the hash of all inputs to the fast pool since startup to generate a new key; reseeding from the slow pool behaves similarly, except it also uses the hash of all inputs to the slow pool to generate a new key. Both reseedings reset the entropy estimate of the fast pool to zero, and the latter also sets the estimate of the slow pool to zero. The reseeding mechanism updates the key constantly, so that even if the key or the pool contents are known to an attacker before a reseed, they will be unknown to the attacker after the reseed.
The reseed control component balances frequent reseeding, which is desirable but might allow iterative guessing attacks, against infrequent reseeding, which leaks more information to an attacker who already has the key. Yarrow reseeds from the fast pool whenever any single source passes some threshold value, and reseeds from the slow pool whenever at least two of its sources pass some other threshold value. The specific threshold values are mentioned in the Yarrow-160 section.
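A minimal sketch of that reseed-control decision is given below. The threshold constants and the per-source entropy accounting are placeholders standing in for the values given in the Yarrow-160 specification, not normative figures, and the sketch omits the pools themselves.

```python
# Minimal sketch of Yarrow-style reseed control (illustrative only).
# FAST_THRESHOLD and SLOW_THRESHOLD are placeholder values; the normative
# numbers are specified in the Yarrow-160 paper.
FAST_THRESHOLD = 100   # bits of estimated entropy required from a single source
SLOW_THRESHOLD = 160   # bits of estimated entropy required from at least 2 sources

def should_fast_reseed(fast_estimates):
    # Fast reseed: any one source alone exceeds the fast-pool threshold.
    return any(bits >= FAST_THRESHOLD for bits in fast_estimates.values())

def should_slow_reseed(slow_estimates):
    # Slow reseed: at least two distinct sources exceed the slow-pool threshold.
    qualified = [s for s, bits in slow_estimates.items() if bits >= SLOW_THRESHOLD]
    return len(qualified) >= 2

# Example: entropy estimates (in bits) attributed to each source per pool.
fast = {"keyboard": 30, "disk": 120, "network": 10}
slow = {"keyboard": 170, "disk": 165, "network": 40}
print(should_fast_reseed(fast), should_slow_reseed(slow))
```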
=== Design philosophy ===
Yarrow assumes that enough entropy can be accumulated to ensure that the PRNG is in an unpredictable state. The designers accumulate entropy for the purpose of retaining the ability to recover the PRNG even after the key is compromised. A similar design philosophy is used by the RSAREF, DSA and ANSI X9.17 PRNGs.
=== Yarrow-160 ===
Yarrow-160 uses two important cryptographic primitives: a one-way hash function and a block cipher, instantiated as SHA-1 and three-key Triple DES; their roles are described in the following sections.
==== Generation ====
Yarrow-160 uses three-key Triple DES in counter mode to generate outputs. C is an n-bit counter value; K is the key. To generate the next output block, Yarrow encrypts the counter under the key and then increments the counter: the next block is the encryption of C under K, after which C is incremented by 1 modulo 2^n.
Yarrow keeps count of the output block, because once the key is compromised, the leak of the old output before the compromised one can be stopped immediately. Once some system security parameter Pg is reached, the algorithm will generate k bits of PRNG output and use them as the new key. In Yarrow-160, the system security parameter is set to be 10, which means Pg = 10. The parameter is intentionally set to be low to minimize the number of outputs that can be backtracked.
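The following Python fragment is a minimal, illustrative sketch of that counter-mode generation step, assuming the pycryptodome package for the Triple DES primitive. It is not the reference Yarrow implementation: key derivation, reseeding, and the Pg output limit described above are omitted.

```python
# Minimal, illustrative sketch of Yarrow's counter-mode output generation
# (not the reference implementation). Key management, reseeding and the
# Pg output limit are omitted.
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes

BLOCK_BITS = 64  # Triple DES block size

class CounterGenerator:
    def __init__(self, key: bytes):
        # Three-key Triple DES takes a 24-byte key; fix the parity bits.
        self.key = DES3.adjust_key_parity(key)
        self.counter = 0

    def next_block(self) -> bytes:
        # Encrypt the n-bit counter C under K, then increment C (mod 2^n).
        cipher = DES3.new(self.key, DES3.MODE_ECB)
        block = self.counter.to_bytes(BLOCK_BITS // 8, "big")
        out = cipher.encrypt(block)
        self.counter = (self.counter + 1) % (1 << BLOCK_BITS)
        return out

# Example use; in Yarrow the key would come from the reseed mechanism.
gen = CounterGenerator(get_random_bytes(24))
sample = gen.next_block()
```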
==== Reseed ====
The reseed mechanism of Yarrow-160 uses SHA-1 and Triple DES as the hash function and block cipher. The detailed steps are in the original paper.
==== Implementation of Yarrow-160 ====
Yarrow-160 has been implemented in Java, and for FreeBSD. The examples can be found in "An implementation of the Yarrow PRNG for FreeBSD" by Mark R. V. Murray.
== Pros and cons of Yarrow ==
=== Pros ===
Yarrow reuses existing building blocks.
Compared to previous PRNGs, Yarrow is reasonably efficient.
Yarrow can be used by programmers with no cryptography background in a reasonably secure way. Yarrow is portable and precisely defined. The interface is simple and clear. These features somewhat decrease the chances of implementation errors.
Yarrow was created using an attack-oriented design process.
The entropy estimation of Yarrow is very conservative, thus preventing exhaustive search attacks. It is very common that PRNGs fail in real-world applications due to entropy overestimation and guessable starting points.
The reseeding process of Yarrow is relatively computationally expensive, thus the cost of attempting to guess the PRNG's key is higher.
Yarrow uses functions to simplify the management of seed files, thus the files are constantly updated.
To handle cryptanalytic attacks, Yarrow is designed to be based on a block cipher that is secured. The level of security of the generation mechanism depends on the block cipher.
Yarrow tries to avoid data-dependent execution paths. This is done to prevent side-channel attacks such as timing attacks and power analysis. This is an improvement over earlier PRNGs, for example the RSAREF 2.0 PRNG, which completely falls apart once additional information about its internal operations is revealed.
Yarrow uses cryptographic hash functions to process input samples, and then uses a secure update function to combine the samples with the existing key. This makes sure that the attacker cannot easily manipulate the input samples. PRNGs such as RSAREF 2.0 PRNG do not have the ability to resist this kind of chosen-input attack.
Unlike the ANSI X9.17 PRNG, Yarrow has the ability to recover from a key compromise. This means that even when the key is compromised, the attacker will not be able to predict all future outputs indefinitely. This is due to the reseeding mechanism of Yarrow.
Yarrow has the entropy samples pool separated from the key, and only reseeds the key when the entropy pool content is completely unpredictable. This design prevents iterative guessing attacks, where an attacker with the key guesses the next sample and checks the result by observing the next output.
=== Cons ===
Yarrow depends on SHA-1, a hash that has been broken (in terms of collision resistance) since Yarrow's publication and is no longer considered secure. However, there is no published attack that uses SHA-1 collisions to undermine Yarrow's randomness.
Since the outputs of Yarrow are cryptographically derived, the systems that use those outputs can only be as secure as the generation mechanism itself. That means the attacker who can break the generation mechanism will easily break a system that depends on Yarrow's outputs. This problem cannot be solved by increasing entropy accumulation.
Yarrow requires entropy estimation, which is a very big challenge for implementations. It is hard to be sure how much entropy to collect before using it to reseed the PRNG. This problem is solved by Fortuna, an improvement of Yarrow. Fortuna has 32 pools to collect entropy and removed the entropy estimator completely.
Yarrow's strength is limited by the size of the key. For example, Yarrow-160 has an effective key size of 160 bits. If the security requires 256 bits, Yarrow-160 is not capable of doing the job.
== References ==
== External links ==
Yarrow algorithm page
"Yarrow implementation in Java"
"Yarrow implementation in FreeBSD"
"An implementation of the Yarrow PRNG for FreeBSD" | Wikipedia/Yarrow_algorithm |
Corporate warfare is a form of information warfare in which attacks on companies by other companies take place. Such warfare may be part of economic warfare and cyberwarfare; but can involve espionage, 'dirty' PR tactics, or physical theft. The intention is largely to destabilise or sink the value of the opposing company for financial gain, or to steal trade secrets from them.
== In fiction ==
In the science fiction genre of cyberpunk, corporations guard their data and hire individuals to break into the computer systems of their competitors. In the genre pioneered by William Gibson, power is largely in the hands of megacorporations which often maintain their own private armies and security forces and wage corporate warfare against each other.
== Cyber ==
According to Schwartau, companies are typically targeted by their competitors. Such warfare may include methods of industrial espionage, spreading disinformation, leaking confidential information and damaging a company's information systems.
Chris Rouland of the cybersecurity & cyberarms company Endgame, Inc. controversially advocated that private companies should be allowed to "hack back" against nations or criminals trying to steal their data. After a wave of high-profile attacks against US companies and government databases, a panel of experts assembled by the George Washington University Center for Cyber and Homeland Security said policies should be eased to allow "active defense" measures to deter hackers, but did not recommend hacking back "because [they] don't want the cure to be worse than the disease". Relatedly, at the February 2017 RSA Conference, Microsoft President Brad Smith stated that technology companies need to preserve trust and stability online by pledging neutrality in cyber conflict.
The dramatic increase in the use of the Internet for business purposes has exposed private entities to greater risks of cyber-attacks. Garcia and Horowitz propose a game theoretic approach that considers economic motivations for investment in Internet security and investigates a scenario in which firms plan for long-term security investment by considering the likelihood of cyber-attacks.
Botnets may be used to knock business competitors offline. They can be hired by corporations to disrupt the operation of competitors on the networks.
Low-grade corporate warfare is constantly being waged between technology giants by "patent trolls, insider blogs and corporate talking points".
Supply chain attacks in corporate warfare can be called supply chain interdiction.
The term may also refer to the privatization of warfare mainly by the involvement of private military companies.
It has been speculated that the concept of "non-international armed conflict within the meaning of Article 3 GC I to IV" of the Fourth Geneva Convention would be wide enough to allow for covering "a renaissance of corporate warfare".
== Art ==
In 2016 a digital illustration series by the German Foreal design studio called "Corporate Warfare" visualized the power and impact of big brand corporations by branded torpedoes and atomic bombs. Dirk Schuster, cofounder of Foreal states that "big corporations can have more power than governments, so we put them in a military context".
Sam Esmail, creator of the television series Mr. Robot, states that "the next world war won't be fought with nukes, but with information, economics and corporate warfare."
== See also ==
== References ==
== External links == | Wikipedia/Corporate_warfare |
Industrial technology is the use of engineering and manufacturing technology to make production faster, simpler, and more efficient. The industrial technology field employs creative and technically proficient individuals who can help a company achieve efficient and profitable productivity.
Industrial technology programs typically include instruction in optimization theory, human factors, organizational behavior, industrial processes, industrial planning procedures, computer applications, and report and presentation preparation.
Planning and designing manufacturing processes and equipment is the main aspect of being an industrial technologist. An industrial technologist is often responsible for implementing certain designs and processes.
== Accreditation and certification ==
The USA-based Association of Technology, Management, and Applied Engineering (ATMAE), accredits selected collegiate programs in Industrial Technology in the USA. An instructor or graduate of an Industrial Technology program may choose to become a Certified Technology Manager (CTM) by sitting for a rigorous exam administered by ATMAE covering Production Planning & Control, Safety, Quality, and Management/Supervision.
ATMAE program accreditation is recognized by the Council for Higher Education Accreditation (CHEA) for accrediting Industrial Technology programs. CHEA recognizes ATMAE in the U.S. for accrediting associate, baccalaureate, and master's degree programs in technology, applied technology, engineering technology, and technology-related disciplines delivered by nationally or regionally accredited institutions in the United States (2011). Industrial technology is also one of the largest industries.
== Knowledge base ==
"A career in industrial technology typically entails formal education from an accredited college or university. Opportunities are available to professionals with all levels of education. Those who hold associate degrees typically qualify for the entry-level technician and technologist positions, such as in the maintenance and operation of machinery. Bachelor's degree-holders could fill management and engineering positions, such as plant manager, production supervisor and quality systems engineering technologist. A graduate degree in industrial technology could qualify individuals for jobs in research, teaching and upper-level management".
Industrial Technology includes wide-ranging subject matter and could be viewed as an amalgamation of industrial engineering and business topics with a focus on practicality and management of technical systems with less focus on actual engineering of those systems.
Typical curriculum at a four-year university might include courses on manufacturing process, technology and impact on society, mechanical and electronic systems, quality assurance and control, materials science, packaging, production and operations management, and manufacturing facility planning and design. In addition, the Industrial Technologist may have exposure to more vocational-style education in the form of courses on CNC manufacturing, welding, and other tools-of-the-trade in manufacturing.
== Industrial technologist ==
Industrial technology program graduates obtain a majority of positions which are applied engineering and/or management oriented. Since "industrial technologist" is not a common job title in the United States, the actual bachelor's degree or associate degree earned by the individual is obscured by the job title he/she receives. Typical job titles for industrial technologists having a bachelor's degree include quality systems engineer, manufacturing engineer, industrial engineer, plant manager, production supervisor, etc. Typical job titles for industrial technologists having a two-year associate degree include project technologist, manufacturing technologist, process technologist, etc.
A technologist curriculum may focus or specialize in a certain technical area of study. Examples of this includes electronics, manufacturing, construction, graphics, automation/robotics, CADD, nanotechnology, aviation, etc.
== Technological development in industry ==
A major subject of study is technological development in industry. This has been defined as:
the introduction of new tools and techniques for performing given tasks in production, distribution, data processing (etc.);
the mechanization of the production process, or the achievement of a state of greater autonomy of technical production systems from human control, responsibility, or intervention;
changes in the nature and level of integration of technical production systems, or enhanced interdependence;
the development, utilization, and application of new scientific ideas, concepts, and information in production and other processes; and
enhancement of technical performance capabilities, or increase in the efficiency of tools, equipment, and techniques in performing given tasks.
Studies in this area often employ a multi-disciplinary research methodology and shade off into the wider analysis of business and economic growth (development, performance). The studies are often based on a mixture of industrial field research and desk-based data analysis and aim to be of interest and use to practitioners in business management and investment (etc.) as well as academics. In engineering, construction, textiles, food and drugs, chemicals and petroleum, and other industries, the focus has been on not only the nature and factors facilitating and hampering the introduction and utilization of new technologies but also the impact of new technologies on the production organization (etc.) of firms and various social and other wider aspects of the technological development process.
How and when technological development in industry is performed:
Technological processes are always based on materials, equipment, human skills, and operating circumstances.
If any of these parameters changes, the technology has to be re-calibrated to match the designed product.
This re-calibration cannot be considered a technology change, because an industrial technology is no more than an engineering guide for achieving the required specification of the designed product.
To calibrate an industrial technology, a documented series of manufacturing experiments should be carried out until the final product specifications are matched, based on the original technology, the newly changed parameters, and scientific fundamentals.
Finally, the new change should be documented as an addition to the original industrial technology for that case.
Whenever an industrial technology is applied for the first time, or after a long stoppage, the technological processes should be tested with preliminary sample trials as a re-calibration process.
== References == | Wikipedia/Industrial_technology |
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
== History ==
The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of material science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent material science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The prominent change in materials science during the recent decades is active usage of computer simulations to find new materials, predict properties and understand phenomena.
== Fundamentals ==
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produce the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements, its microstructure, and macroscopic features from processing. Together with the laws of thermodynamics and kinetics materials scientists aim to understand and improve materials.
=== Structure ===
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied in the following levels.
==== Atomic structure ====
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
===== Bonding =====
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
===== Crystallography =====
Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material includes the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects consist of dislocations including edges, screws, vacancies, self-interstitials, and more that are linear, planar, and three dimensional types of defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
==== Nanostructure ====
Materials, which atoms and molecules form constituents in the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used, when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
==== Microstructure ====
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
==== Macrostructure ====
Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
=== Properties ===
Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.
=== Processing ===
Synthesis and processing involves the creation of a material with the desired micro-nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
=== Thermodynamics ===
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
=== Kinetics ===
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
== Research ==
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.
=== Nanomaterials ===
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10−9 meter), but is usually 1 nm – 100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
=== Biomaterials ===
A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
=== Electronic, optical, and magnetic ===
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
=== Computational materials science ===
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
== Industry ==
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.
=== Ceramics and glasses ===
Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.
=== Composites ===
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.
Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
=== Polymers ===
Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics is not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
=== Metal alloys ===
The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels.
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
=== Semiconductors ===
A semiconductor is a material that has a resistivity between a conductor and insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. Its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most widely used semiconductor. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride, which have various applications.
== Relation with other fields ==
Materials science evolved, starting in the 1950s, from the recognition that creating, discovering and designing new materials requires a unified approach. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field of materials science and engineering is important both from a scientific perspective and for applications. Materials are of the utmost importance for engineers (and other applied fields) because the use of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and are themselves areas in which materials physicists work.
The field is inherently interdisciplinary, and materials scientists and engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. Fields such as the life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches, so close relationships with these fields remain. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
== Emerging technologies ==
== Subdisciplines ==
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
== See also ==
== References ==
=== Citations ===
=== Bibliography ===
Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3.
Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8.
Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5.
Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4.
Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3.
González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1.
Gordon, James Edward (1984). The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9.
Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1.
Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826.
Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2.
Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9.
Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 40 (12): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276.
== Further reading ==
Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007
Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7.
Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8.
Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0.
Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7.
O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3.
== External links ==
MS&T conference organized by the main materials societies
MIT OpenCourseWare for MSE | Wikipedia/Material_sciences |
Strategic intelligence (STRATINT) pertains to the collection, processing, analysis, and dissemination of intelligence that is required for forming policy and military plans at the national and international level. Much of the information needed for strategic reflections comes from Open Source Intelligence. Other sources include traditional HUMINT (especially in recent years), Signals intelligence including ELINT, MASINT (which overlaps with SIGINT/ELINT to some degree), and 'National technical means of verification' (e.g. spysats). The father of intelligence analysis and of the strategic intelligence concept was Sherman Kent, in his seminal work Strategic Intelligence for American World Policy, first published in 1949. For Kent, strategic intelligence is "the knowledge upon which our nation's foreign relations, in war and peace, must rest".
Strategic intelligence pertains to the following system of abilities that, according to Michael Maccoby, characterize some of the most successful leaders in business, government and the military:
foresight, the ability to understand trends that present threats or opportunities for an organization;
visioning, the ability to conceptualize an ideal future state based on foresight and create a process to engage others to implement it;
systems thinking, the ability to perceive, synthesize, and integrate elements that function as a whole to achieve a common purpose;
motivating, the ability to motivate different people to work together to implement a vision. Understanding what motivates people is based upon another ability, personality intelligence.
partnering, the ability to develop strategic alliances with individuals, groups and organizations. This quality also depends on personality intelligence.
In "Transforming Health Care Leadership, A Systems Guide to Improve Patient Care, Decrease Costs, and Improve Population Health," Jossey Bass, 2013, Maccoby and his co-authors Clifford L. Norman, C. Jane Norman, and Richard Margolies apply strategic intelligence to health care leadership and add to strategic intelligence leadership philosophy and W. Edwards Deming's four elements of "profound Knowledge": understanding variation, systems thinking, understanding personality, and understanding knowledge creation. The concept is further developed and applied in Michael Maccoby, "Strategic Intelligence, Conceptual Tools for Leading Change," Oxford University Press, 2015.
Recent thought leadership on strategic intelligence focuses on the consequences of the modern information age, which has led to the availability of substantially greater volumes of information than previously encountered. Alfred Rolington, the former CEO of Jane's Information Group and Oxford Analytica, recommends that intelligence organizations approach the challenges of the modern information age by breaking from their traditional models to become more deeply and continuously inter-linked. Specifically, Rolington advocates more fluid, networked operating methods that incorporate greater use of open-source information and data in analysis.
== References ==
== External links ==
Strategic Intelligence from the World Economic Forum | Wikipedia/Strategic_intelligence |
Marketing strategy refers to efforts undertaken by an organization to increase its sales and achieve competitive advantage. In other words, it is the method of advertising a company's products to the public through an established plan built on the meticulous planning and organization of ideas, data, and information.
Strategic marketing emerged in the 1970s and 1980s as a distinct field of study, branching out of strategic management. Marketing strategies concern the link between the organization and its customers, and how best to leverage resources within an organization to achieve a competitive advantage. In recent years, the advent of digital marketing has revolutionized strategic marketing practices, introducing new avenues for customer engagement and data-driven decision-making.
== Marketing management versus marketing strategy ==
The terms “strategic” and “managerial” marketing distinguish between two processes, each with different goals and conceptual tools. Strategic marketing involves implementing policies that boost a business’s competitive position while addressing challenges and opportunities in the industry. Managerial marketing involves executing specific and targeted objectives.
Marketing strategy allows a firm to narrow its vision down into practical and achievable goals, while marketing management involves the practical planning needed to implement these goals. The term higher-order planning is often used to refer to marketing strategy since this strategy helps establish the general direction for the firm while providing a structure for the marketing program.
Marketing management is the combined effort of strategies for how a business launches its products and services. Marketing strategy, on the other hand, is the combination of many processes by which the business owner or marketer can attract potential customers via several channels, whether offline or online.
Marketing Strategy Examples –
Pricing Strategy
Customer Service process
GTM (Go-To-Market) Strategy
Packaging
Market Mapping and Distribution Reach
Channel Management
Budgeting
Marketing Management Examples –
Launch & Promotion
Launch Mode – Offline & Online
Campaign Management
Budget for the promotional plan
Advertisement Strategy
These are a few examples to understand the basics.
== History ==
Marketing scholars have suggested that strategic marketing arose in the late 1970s and its origins can be understood in terms of a distinct evolutionary path:: 50–56
Budgeting Control (also known as scientific management)
Date: From the late 19th century
Key Thinkers: Frederick Winslow Taylor, Frank and Lillian Gilbreth, Henry L. Gantt, Harrington Emerson
Key Ideas: Emphasis on quantification and scientific modeling, reduce work to the smallest possible units and assign work to specialists, exercise control through rigid managerial hierarchies, standardize inputs to reduce variation and defects and to control costs, use quantitative forecasting methods to predict any changes.
Long-range Planning
Date: From the 1950s
Key Thinkers: Herbert A. Simon
Key Ideas: The managerial focus was to anticipate growth and manage operations in an increasingly complex business world.
Strategic Planning (also known as corporate planning)
Date: From the 1960s
Key Thinkers: Michael Porter
Key Ideas: Organizations must find the right fit within an industry structure; advantage derives from industry concentration and market power; firms should strive to achieve a monopoly or quasi-monopoly; successful firms should be able to erect barriers to entry.
Strategic Marketing Management: a business's overall game plan for reaching prospective consumers and turning them into customers of the products or services the business provides.
Date: from the late 1970s
Key thinkers: R. Buzzell and B. Gale
Key Ideas: Each business is unique and there can be no formula for achieving competitive advantage; firms should adopt a flexible planning and review process that aims to cope with strategic surprises and rapidly developing threats; management's focus is on how to deliver superior customer value; highlights the key role of marketing as the link between customers and the organization.
Resource-based view (also known as resource-advantage theory)
Date: From the mid-1990s
Key Thinkers: Jay B. Barney, George S. Day, Gary Hamel, Shelby D. Hunt, G. Hooley and C.K. Prahalad
Key Ideas: The firm's resources are financial, legal, human, organizational, informational, and relational; resources are heterogeneous and imperfectly mobile, management's key task is to understand and organize resources for sustainable competitive advantage.
== Overview ==
Marketing strategy involves mapping out the company's direction for the forthcoming planning period, whether that be three, five, or ten years. It involves undertaking a 360° review of the firm and its operating environment to identify new business opportunities that the firm could potentially leverage for competitive advantage. Strategic planning can also reveal market threats that the firm may need to consider for long-term sustainability. Strategic planning makes no assumptions about the firm continuing to offer the same products to the same customers in the future. Instead, it is concerned with identifying the business opportunities that are likely to be successful and evaluating the firm's capacity to leverage such opportunities. It seeks to identify the strategic gap, which is the difference between where a firm is currently situated (the strategic reality or inadvertent strategy) and where it should be situated for sustainable, long-term growth (the strategic intent or deliberate strategy).
Strategic planning seeks to address three deceptively simple questions, specifically:: 34
Where are we now? (Situation analysis)
What business should we be in? (Vision and mission)
How should we get there? (Strategies, plans, goals, and objectives)
A fourth question may be added to the list, namely 'How do we know when we got there?' Due to the increasing need for accountability, many marketing organizations use a variety of metrics to track strategic performance, allowing for corrective action to be taken as required. On the surface, strategic planning seeks to address three simple questions; however, the research and analysis involved in strategic planning are very sophisticated and require a great deal of skill and judgment.
== Tools and techniques ==
Strategic analysis is designed to address the first strategic question, "Where are we now?" : 34 Traditional market research is less useful for strategic marketing because the analyst does not seek insights about customer attitudes and preferences. Instead, strategic analysts are seeking insights into the firm's operating environment to identify possible future scenarios, opportunities, and threats.
Mintzberg suggests that the top planners spend most of their time engaged in analysis and are concerned with industry or competitive analyses as well as internal studies, including the use of computer models to analyze trends in the organization. Strategic planners use a variety of research tools and analytical techniques, depending on the complexity of the environment and the firm's goals. Fletcher and Bensoussan, for instance, have identified some 200 qualitative and quantitative analytical techniques regularly used by strategic analysts, while a recent publication suggests that 72 techniques are essential. No optimal technique can be identified as useful across all situations or problems. Determining which technique to use in any given situation rests with the analyst's skills. The choice of tool depends on a variety of factors including data availability, the nature of the marketing problem, the objective or purpose, and the analyst's skill level, as well as other constraints such as time or motivation.
The most commonly used tools and techniques include:
Research methods
Environmental scanning
Marketing intelligence (also known as competitive intelligence)
Futures research
Analytical techniques
Brand Development Index (BDI)/ Category development index (CDI): 31–35
Brand/ Category penetration : 17-66
Benchmarking
Blind spots analysis
Functional capability and resource analysis
Impact analysis: 79–81
Counterfactual analysis
Demand analysis
Emerging Issues Analysis
Experience curve analysis
Gap analysis
Herfindahl index: 1-16
Industry Analysis (also known as Porter's five forces analysis): 139-140 : 88-94 : 368–382
Management profiling
Market segmentation analysis
Market share analysis
Perceptual mapping
PEST analysis and its variants including PESTLE, STEEPLED and STEER (PEST is occasionally known as Six Segment Analysis)
Portfolio analysis, such as BCG growth-share matrix or GE business screen matrix: 38-39 : 361-376
Precursor Analysis or Evolutionary analysis
Product life cycle analysis and S-curve analysis (also known as technology life cycle or hype cycle analysis)
Product evolutionary cycle analysis
Scenario analysis: 81-83
Segment Share Analysis
Situation analysis: 80
Strategic Group Analysis: 82
SWOT analysis
Trend Analysis: 143-145
Value chain analysis: 142-143
== Vision and mission statements ==
The vision and mission address the second central question, 'Where are we going?' At the conclusion of the research and analysis stage, the firm will typically review its vision statement, mission statement and, if necessary, devise a new vision and mission for the outlook period. At this stage, the firm will also devise a generic competitive strategy as the basis for maintaining a sustainable competitive advantage for the forthcoming planning period.
A vision statement is a realistic, long-term future scenario for the organization. (Vision statements should not be confused with slogans or mottos.) It is a "clearly articulated statement of the business scope." A strong vision statement typically includes the following:
Competitive scope
Market scope
Geographic scope
Vertical scope
Some scholars point out that market visioning is a skill or competency that encapsulates the planners' capacity "to link advanced technologies to market opportunities of the future, and to do so through a shared understanding of a given product market".
A mission statement is a clear and concise statement of the organization's reason for being and its scope of operations, while the generic strategy outlines how the company intends to achieve both its vision and mission.
Mission statements should include detailed information and must be more than a simple motherhood statement. A mission statement typically includes the following:
Specification of target customers
Identification of principal products or services offered
Specification of the geographic scope of operations
Identification of core technologies or core capabilities
An outline of the firm's commitment to long-term survival, growth and profitability
An outline of the key elements in the company's philosophy and core values
Identification of the company's desired public image
== Generic competitive strategy ==
The generic competitive strategy outlines the fundamental basis for obtaining a sustainable competitive advantage within a category. Firms can normally trace their competitive position to one of three factors:
Superior skills (e.g. coordination of individual specialists, created through the interplay of investment in training and professional development, work and learning)
Superior resources (e.g. patents, trade-mark protection, specialized physical assets and relationships with suppliers and distribution infrastructure.)
Superior position (the products or services offered, the market segments served, and the extent to which the product-market can be isolated from direct competition.)
It is essential that the internal analysis provide a frank and open evaluation of the firm's superiority in terms of skills, resources or market position since this will provide the basis for competing over the forthcoming planning period. For this reason, some companies engage external consultants, often advertising or marketing agencies, to provide an independent assessment of the firm's capabilities and resources.
== Ethnic Marketing Strategy ==
One strategy is at times woven into marketing strategies without being explicitly stated, and it is unethical in that it specifically targets unsuspecting minority groups. First, consider the definition of ethics, which is the moral question of whether or not something is socially acceptable. Applying this definition to marketing strategy, companies must be wary that they do not purposefully seek to exclude groups of people based on their cultural background. A company that is seeking to expand internationally has a duty to establish its marketing agenda with multiple cultures in mind, so as to prevent groups of people from being left out. Marketing strategies have two goals: the first, in keeping with the company's goals, is to benefit consumers in some way on a micro, person-to-person level; the second is to keep society as a whole content.
=== Porter approach ===
In 1980, Michael Porter developed an approach to strategy formulation that proved to be extremely popular with both scholars and practitioners. The approach became known as the positioning school because of its emphasis on locating a defensible competitive position within an industry or sector. In this approach, strategy formulation consists of three key strands of thinking: analysis of the five forces to determine the sources of competitive advantage; the selection of one of three possible positions which leverage the advantage and the value chain to implement the strategy. In this approach, the strategic choices involve decisions about whether to compete for a share of the total market or for a specific target group (competitive scope) and whether to compete on costs or product differences (competitive advantage). This type of thinking leads to three generic strategies:: 12
Cost leadership – the firm targets the mass market and attempts to be the lowest-cost producer in the market
Differentiation – the firm targets the mass market and tries to maintain unique points of product difference perceived as desirable by customers and for which they are prepared to pay premium prices
Focus – the firm does not compete head to head, but instead selects a narrow target market and focuses its efforts on satisfying the needs of that segment
According to Porter, these strategies are mutually exclusive and the firm must select one approach to the exclusion of all others.: 12 Firms that try to be all things to all people can present a confusing market position which ultimately leads to below-average returns. Any ambiguity about the firm's approach is a recipe for "strategic mediocrity" and any firm that tries to pursue two approaches simultaneously is said to be "stuck in the middle" and destined for failure.
Porter's approach was the dominant paradigm throughout the 1980s, allowing others who sought to formulate strategy within their business model to follow his (at the time) best division of the ways in which to target the market. This only lasted a little while, though, as Porter's approach began receiving a good amount of criticism mainly due to its simplicity, which is also part of what made it so popular. One important criticism is that it is possible to identify successful companies that pursue a hybrid strategy – such as low-cost positions and differentiated positions simultaneously. Toyota is a classic example of this hybrid approach. Other scholars point to the simplistic nature of the analysis and the overly prescriptive nature of the strategic choices which limits strategies to just three options. Yet others point to research showing that many practitioners find the approach to be overly theoretical and not applicable to their business.
=== Resource-based view (RBV) ===
During the 1990s, the resource-based view (also known as the resource-advantage theory) of the firm became the dominant paradigm. It is an interdisciplinary approach that represents a substantial shift in thinking. It focuses attention on an organization's internal resources as a means of organizing processes and obtaining a competitive advantage. The resource-based view suggests that organizations must develop unique, firm-specific core competencies that will allow them to outperform competitors by doing things differently and in a superior manner.
Barney stated that for resources to hold potential as sources of sustainable competitive advantage, they should be valuable, rare, and imperfectly imitable. A key insight arising from the resource-based view is that not all resources are of equal importance nor possess the potential to become a source of sustainable competitive advantage. The sustainability of any competitive advantage depends on the extent to which resources can be imitated or substituted. Barney and others point out that understanding the causal relationship between the sources of advantage and successful strategies can be very difficult in practice. Barney describes the situation in which the link between a firm's organized resources and its continued competitive advantage is only partially understood as "causally ambiguous". Thus, a great deal of managerial effort must be invested in identifying, understanding, and classifying core competencies. In addition, management must invest in organizational learning to develop and maintain key resources and competencies.
Market Based Resources include:
Organizational culture e.g. market orientation, research orientation, culture of innovation, etc.
Assets e.g. brands, marketing information systems, databases, etc.
Capabilities (or competencies) e.g. market sensing, marketing research, relationships, know-how, tacit knowledge, etc.
After more than two decades of advancements in marketing strategy and in the resource-based view paradigm, Cacciolatti & Lee (2016) proposed a novel resource-advantage theory based framework that builds on those organizational capabilities that are relevant to marketing strategy and shows how they have an effect on firm performance. The capabilities-performance model proposed by Cacciolatti & Lee (2016) illustrates the mechanism whereby market orientation, strategic orientation, and organizational power moderate the capabilities-performance relationship. Such a logic of analysis was implicit in the original formulation of RA theory and although it was taken into consideration by several scholars, it has never been articulated explicitly and tested empirically.
In the resource-based view, strategists select the strategy or competitive position that best exploits the internal resources and capabilities relative to external opportunities. Given that strategic resources represent a complex network of inter-related assets and capabilities, organizations can adopt many possible competitive positions. Although scholars debate the precise categories of competitive positions that are used, there is general agreement, within the literature, that the resource-based view is much more flexible than Porter's prescriptive approach to strategy formulation.
Hooley et al., suggest the following classification of competitive positions:
Price positioning
Quality positioning
Innovation positioning
Service positioning
Benefit positioning
Tailored positioning (one-to-one marketing)
=== Other approaches ===
The choice of competitive strategy often depends on a variety of factors including the firm's market position relative to rival firms and the stage of the product life cycle. A well-established firm in a mature market will likely have a different strategy than a start-up.
==== Growth strategies ====
Growth of a business is critical for business success. A firm may grow by developing the market or by developing new products. The Ansoff product and market growth matrix illustrates the two broad dimensions for achieving growth. The Ansoff matrix identifies four specific growth strategies: market penetration, product development, market development and diversification.
Market penetration involves selling existing products to existing consumers. This is a conservative, low risk approach since the product is already on the established market.
Product development is the introduction of a new product to existing customers. This can include modifications to an already existing market which can create a product that has more appeal.
Market development involves the selling of existing products to new customers in order to identify and build a new clientele base. This can include new geographical markets, new distribution channels, and different pricing policies that bring the product price within the competence of new market segments.
Diversification is the riskiest area for a business. This is where a new product is sold to a new market. There are two types of diversification: horizontal and vertical. Horizontal diversification focuses more on products where the business is knowledgeable, whereas vertical diversification focuses more on the introduction of new products into new markets, where the business could have less knowledge of the new market.
Horizontal integration
A horizontal integration strategy may be indicated in fast-changing work environments as well as providing a broad knowledge base for the business and employees. A benefit of horizontal diversification is that it is an open platform for a business to expand and build away from the already existing market.
High levels of horizontal integration lead to high levels of communication within the business. Another benefit of using this strategy is that it leads to a larger market for merged businesses, and it is easier to build good reputations for a business when using this strategy. A disadvantage of using a diversification strategy is that the benefits could take a while to start showing, which could lead the business to believe that the strategy is ineffective. Another disadvantage or risk is that horizontal diversification has been shown to be harmful for stock value, whereas vertical diversification performed best.
A disadvantage of using the horizontal integration strategy is that it limits and restricts the field of interest in which the business operates. Horizontal integration can affect a business's reputation, especially after a merger has happened between two or more businesses. There are three main benefits to a business's reputation after a merger. A larger business helps the reputation and increases the severity of the punishment. The merging of information after a merger also increases the knowledge of the business and of the marketing area it is focused on. The last benefit is more opportunities for deviation to occur in merged businesses rather than independent businesses.
Vertical integration
Vertical integration is when a business is expanded through the vertical production line of one business. An example of a vertically integrated business could be Apple. Apple owns all its own software, hardware, designs and operating systems instead of relying on other businesses to supply these. Having a highly vertically integrated business creates different economies, thereby creating positive performance for the business. Vertical integration is seen as a business controlling the inputs of supplies and the outputs of products as well as the distribution of the final product. Some benefits of using a vertical integration strategy are that costs may be reduced because of lower transaction costs, which include finding, selling, monitoring, contracting and negotiating with other firms. Also, by decreasing input from outside businesses, it increases the efficient use of inputs into the business. Another benefit of vertical integration is that it improves the exchange of information through the different stages of the production line. Some competitive advantages could include avoiding foreclosures, improving the business's marketing intelligence, and opening up opportunities to create different products for the market. Some disadvantages of using a vertical integration strategy include the internal costs for the business and the need for overhead costs. Also, if the business is not well organized and fully equipped and prepared, the business will struggle using this strategy. There are also competitive disadvantages, which include creating barriers for the business and losing access to information from suppliers and distributors.
==== Market position and strategy ====
In terms of market position, firms may be classified as market leaders, market challengers, market followers or market nichers.
Market leader: The market leader dominates the market by objective measure of market share. Their overall posture is defensive because they have more to lose. Their objectives are to reinforce their prominent position through the use of PR to develop corporate image and to block competitors brand for brand, matching distribution through tactics such as the use of "fighting" brands, pre-emptive strikes, use of regulation to block competitors and even to spread rumours about competitors. Market leaders may adopt unconventional or unexpected approaches to building growth and their tactical responses are likely to include: product proliferation; diversification; multi-branding; erecting barriers to entry; vertical and horizontal integration and corporate acquisitions.
Market challenger: The market challenger holds the second highest market share in the category, following closely behind the dominant player. Their market posture is generally offensive because they have less to lose and more to gain by taking risks. They will compete head to head with the market leader in an effort to grow market share. Their overall strategy is to gain market share through product, packaging and service innovations; new market development and redefinition of the product to broaden its scope and their position within it.
Market follower: Followers are generally content to play second fiddle. They rarely invest in R & D and tend to wait for market leaders to develop innovative products and subsequently adopt a “me-too” approach. Their market posture is typically neutral. Their strategy is to maintain their market position by maintaining existing customers and capturing a fair share of any new segments. They tend to maintain profits by controlling costs.
Market nicher: The market nicher occupies a small niche in the market in order to avoid head to head competition. Their objective is to build strong ties with the customer base and develop strong loyalty with existing customers. Their market posture is generally neutral. Their strategy is to develop and build the segment and protect it from erosion. Tactically, nichers are likely to improve the product or service offering, leverage cross-selling opportunities, offer value for money and build relationships through superior after-sales service, service quality and other related value-adding activities.
Most firms carry out strategic planning every 3–5 years and treat the process as a means of checking whether the company is on track to achieve its vision and mission. Ideally, strategies are both dynamic and interactive, partially planned and partially unplanned. Strategies are broad in their scope in order to enable a firm to react to unforeseen developments while trying to keep focused on a specific pathway. A key aspect of marketing strategy is to keep marketing consistent with a company's overarching mission statement.
Strategies often specify how to adjust the marketing mix; firms can use tools such as Marketing Mix Modeling to help them decide how to allocate scarce resources, as well as how to allocate funds across a portfolio of brands. In addition, firms can conduct analyses of performance, customer analysis, competitor analysis, and target market analysis.
==== Entry strategies ====
Marketing strategies may differ depending on the unique situation of the individual business. According to Lieberman and Montgomery, every entrant into a market – whether it is new or not – is classified as a Market Pioneer, a Close Follower or a Late Follower.
===== Pioneers =====
Market pioneers are known to often open a new market to consumers based on a major innovation. They emphasize these product developments, and in a significant number of cases, studies have shown that early entrants – or pioneers – into a market have serious market-share advantages over all those who enter later. Pioneers have the first-mover advantage, and in order to have this advantage, businesses must ensure they have at least one or more of three primary sources: Technological Leadership, Preemption of Assets or Buyer Switching Costs. Technological Leadership means gaining an advantage through either Research and Development or the “learning curve”. This lets a business use the research and development stage as a key point of selling due to primary research of a new or developed product. Preemption of Assets can help gain an advantage through acquiring scarce assets within a certain market, allowing the first-mover to have control of existing assets rather than those that are created through new technology. This allows pre-existing information to be used and lowers the risk when first entering a new market. By being a first entrant, it is easy to avoid higher switching costs compared to later entrants. For example, those who enter later would have to invest more expenditure in order to encourage customers away from early entrants. However, while Market Pioneers may have the “highest probability of engaging in product development” and lower switching costs, to have the first-mover advantage, it can be more expensive due to product innovation being more costly than product imitation. It has been found that while Pioneers in both consumer goods and industrial markets have gained “significant sales advantages”, they incur larger disadvantages cost-wise.
===== Close followers =====
Being a market pioneer can, more often than not, attract entrepreneurs or investors depending on the benefits of the market. If there is an upside potential and the ability to have a stable market share, many businesses would start to follow in the footsteps of these pioneers. These are more commonly known as Close Followers. These entrants into the market can also be seen as challengers to the Market Pioneers and the Late Followers. This is because early followers are more likely than later entrants to invest a significant amount in product research and development. By doing this, it allows businesses to find weaknesses in the products produced before, thus leading to improvements and expansion on the aforementioned product. Therefore, it could also lead to customer preference, which is essential in market success. Due to the nature of early followers and the research time being later than Market Pioneers, different development strategies are used as opposed to those who entered the market in the beginning, and the same is applied to those who are Late Followers in the market. By having a different strategy, it allows the followers to create their own unique selling point and perhaps target a different audience in comparison to that of the Market Pioneers. Early following into a market can often be encouraged by an established business’ product that is “threatened or has industry-specific supporting assets”.
===== Late entrants =====
Following the so-called "Close Followers" are the "Late Entrants". They get their name from their delayed arrival into the market. Despite the thought process that late entry into the market will lead to absolute failure, there are actually a few pros for those classified as late entrants. One such pro is the ability to view how others who previously joined the market have acted and to strategize market planning around their mistakes and/or successes. Late Followers have the advantage of learning from their early competitors and improving the benefits or reducing the total costs. This allows them to create a strategy that could essentially mean gaining market share and, most importantly, staying in the market. In addition to this, markets evolve, leading to consumers wanting improvements and advancements on products. Late Followers have the advantage of catching the shifts in customer needs and wants towards the products. When bearing in mind customer preference, customer value has a significant influence. Customer value means taking into account the investment of customers as well as the brand or product. It is created through the “perceptions of benefits” and the “total cost of ownership”. On the other hand, if the needs and wants of consumers have only slightly altered, Late Followers could have a cost advantage over early entrants due to the use of product imitation. However, if a business is switching markets, this could take the cost advantage away due to the expense of changing markets for the business. Late entry into a market does not necessarily mean there is a disadvantage when it comes to market share; it depends on how the marketing mix is adopted and the performance of the business. If the marketing mix is not used correctly – despite the entrant time – the business will gain little to no advantages, potentially missing out on a significant opportunity.
The differentiated strategy
The customized target strategy
The requirements of individual customer markets are unique, and their purchases are sufficient to make viable the design of a new marketing mix for each customer.
If a company adopts this type of market strategy, a separate marketing mix is to be designed for each customer.
Specific marketing mixes can be developed to appeal to most of the segments when market segmentation reveals several potential targets.
Customization must, however, be generalized and must not target consumers based on race or ethnic background, as that sort of marketing strategy is unethical. Currently, more research has to be done to discern a way of preventing this strategy, because a generalized set of rules to police what is considered the overall "good" cannot be instituted.
== Developing marketing goals and objectives ==
Whereas the vision and mission provide the framework, the "goals define targets within the mission, which, when achieved, should move the organization toward the performance of that mission." Goals are broad primary outcomes, whereas objectives are measurable steps taken to achieve a goal or strategy. In strategic planning, it is important for managers to translate the overall strategy into goals and objectives. Goals are designed to inspire action and focus attention on specific desired outcomes. Objectives, on the other hand, are used to measure an organization's performance on specific dimensions, thereby providing the organization with feedback on how well it is achieving its goals and strategies.
Managers typically establish objectives using the balanced scorecard approach. This means that objectives do not include desired financial outcomes exclusively, but also specify measures of performance for customers (e.g. satisfaction, loyalty, repeat patronage), internal processes (e.g., employee satisfaction, productivity) and innovation and improvement activities.
After setting the goals, a marketing strategy or marketing plan should be developed. The marketing strategy plan provides an outline of the specific actions to be taken over time to achieve the objectives. Plans can be extended to cover many years, with sub-plans for each year. Plans usually involve monitoring, to assess progress, and prepare for contingencies if problems arise. Simulations such as customer lifetime value models can be used to help marketers conduct "what-if" analyses to forecast potential scenarios arising from possible actions, and to gauge how specific actions might affect such variables as the revenue-per-customer and the churn rate.
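For instance, under a simple infinite-horizon model in which a customer contributes a fixed margin each period, is retained from one period to the next with a constant probability, and future profits are discounted at a constant rate, customer lifetime value can be estimated as margin × retention / (1 + discount − retention). The Python sketch below is a minimal illustration of such a "what-if" comparison between two churn scenarios; the numbers and function name are illustrative only and not taken from any particular source.

```python
def customer_lifetime_value(margin, retention, discount):
    """Simple infinite-horizon CLV: margin * retention / (1 + discount - retention).
    margin    -- average contribution per customer per period
    retention -- probability the customer is retained each period (1 - churn rate)
    discount  -- per-period discount rate"""
    return margin * retention / (1 + discount - retention)

# What-if: how much is a five-point improvement in retention worth per customer?
base     = customer_lifetime_value(margin=100, retention=0.80, discount=0.10)
improved = customer_lifetime_value(margin=100, retention=0.85, discount=0.10)
print(round(base, 2), round(improved, 2))   # about 266.67 versus 340.00
```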
== Strategy typologies ==
Developing competitive strategy requires significant judgement and is based on a deep understanding of the firm's current situation, its past history and its operating environment. No heuristics have yet been developed to assist strategists in choosing the optimal strategic direction. Nevertheless, some researchers and scholars have sought to classify broad groups of strategy approaches that might serve as broad frameworks for thinking about suitable choices.
=== Strategy types ===
In 2003, Raymond E. Miles and Charles C. Snow, based on an in-depth cross-industry study of a sample of large corporations, proposed a detailed scheme using four categories:
Prospectors: proactively seek to locate and exploit new market opportunities
Analyzers: are very innovative in their product-market choices; tend to follow prospectors into new markets; often introduce new or improved product designs. This type of organisation works in two types of market, one generally stable, one subject to more change
Defenders: are relatively cautious in their initiatives; seek to seal off a portion of the market which they can defend against competitive incursions; often market highest quality offerings and position as a quality leader
Reactors: tend to vacillate in their responses to environmental changes and are generally the least profitable organizations
=== Marketing strategy ===
Marketing warfare strategies are competitor-centered strategies drawn from analogies with the field of military science. Warfare strategies were popular in the 1980s, but interest in this approach has waned in the new era of relationship marketing. An increased awareness of the distinctions between business and military cultures also raises questions about the extent to which this type of analogy is useful. In spite of its limitations, the typology of marketing warfare strategies is useful for predicting and understanding competitor responses.
In the 1980s, Kotler and Singh developed a typology of marketing warfare strategies:
Frontal attack: where an aggressor goes head to head for the same market segments on an offer by offer, price by price basis; normally used by a market challenger against a more dominant player
Flanking attack: attacking an organization on its weakest front; used by market challengers
Bypass attack: bypassing the market leader by attacking smaller, more vulnerable target organizations in order to broaden the aggressor's resource base
Encirclement attack: attacking a dominant player on all fronts
Guerilla warfare: sporadic, unexpected attacks using both conventional and unconventional means to attack a rival; normally practiced by smaller players against the market leader
== Relationship between the marketing strategy and the marketing mix ==
Marketing strategy and marketing mix are related elements of a comprehensive marketing plan. While marketing strategy is aligned with setting the direction of a company or product/service line, the marketing mix is largely tactical in nature and is employed to carry out the overall marketing strategy. The 4P's of the marketing mix (Price, Product, Place and Promotion) represent the tools that marketers can leverage while defining their marketing strategy to create a marketing plan. The accuracy of the marketing mix impacts the success of the overall marketing strategy. The 4P's of this marketing mix, ceteris paribus, should line up with the heart of the company.
== See also ==
== References ==
== Further reading ==
Laermer, Richard; Simmons, Mark, Punk Marketing, New York: Harper Collins, 2007 ISBN 978-0-06-115110-1 (Review of the book by Marilyn Scrizzi, in Journal of Consumer Marketing 24(7), 2007)
Morgan, N.A., Whitler, K.A., Feng, H. et al. Research in marketing strategy. J. of the Acad. Mark. Sci. 47, 4–29 (2019). https://doi.org/10.1007/s11747-018-0598-1
== External links ==
Media related to Marketing strategy at Wikimedia Commons | Wikipedia/Marketing_strategy |
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.
Ideal lattices also form the basis for quantum computer attack resistant cryptography based on the Ring Learning with Errors problem. These cryptosystems are provably secure under the assumption that the shortest vector problem (SVP) is hard in these ideal lattices.
== Introduction ==
In general terms, ideal lattices are lattices corresponding to ideals in rings of the form $\mathbb{Z}[x]/\langle f\rangle$ for some irreducible polynomial $f$ of degree $n$. All of the definitions of ideal lattices from prior work are instances of the following general notion: let $R$ be a ring whose additive group is isomorphic to $\mathbb{Z}^n$ (i.e., it is a free $\mathbb{Z}$-module of rank $n$), and let $\sigma$ be an additive isomorphism mapping $R$ to some lattice $\sigma(R)$ in an $n$-dimensional real vector space (e.g., $\mathbb{R}^n$). The family of ideal lattices for the ring $R$ under the embedding $\sigma$ is the set of all lattices $\sigma(I)$, where $I$ is an ideal in $R$.
== Definition ==
=== Notation ===
Let $f \in \mathbb{Z}[x]$ be a monic polynomial of degree $n$, and consider the quotient ring $\mathbb{Z}[x]/\langle f\rangle$.
Using the standard set of representatives $\{(g \bmod f) : g \in \mathbb{Z}[x]\}$, and identification of polynomials with vectors, the quotient ring $\mathbb{Z}[x]/\langle f\rangle$ is isomorphic (as an additive group) to the integer lattice $\mathbb{Z}^n$, and any ideal $I \subseteq \mathbb{Z}[x]/\langle f\rangle$ defines a corresponding integer sublattice $\mathcal{L}(I) \subseteq \mathbb{Z}^n$.
An ideal lattice is an integer lattice $\mathcal{L}(B) \subseteq \mathbb{Z}^n$ such that $B = \{g \bmod f : g \in I\}$ for some monic polynomial $f$ of degree $n$ and ideal $I \subseteq \mathbb{Z}[x]/\langle f\rangle$.
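To make this identification concrete, the following minimal Python sketch builds the coefficient vectors of $g, xg, \ldots, x^{n-1}g$ reduced modulo $f$, i.e. a generating set of the sublattice corresponding to the principal ideal $(g)$; with $f = x^n - 1$ the rows are cyclic rotations, recovering a cyclic lattice. The function names poly_mod and ideal_lattice_basis are illustrative, not standard, and polynomials are coefficient lists with the constant term first.

```python
def poly_mod(g, f):
    """Reduce the polynomial g modulo the monic polynomial f.
    Coefficient lists, lowest degree first; f has degree n = len(f) - 1."""
    g = list(g)
    n = len(f) - 1
    while len(g) > n:
        c = g.pop()                      # leading coefficient of g
        if c:
            # x^deg(g) = -(f_0 + ... + f_{n-1} x^{n-1}) * x^{deg(g)-n}  (mod f)
            for i in range(n):
                g[len(g) - n + i] -= c * f[i]
    return g + [0] * (n - len(g))

def ideal_lattice_basis(g, f):
    """Rows are the coefficient vectors of g, x*g, ..., x^{n-1}*g reduced mod f --
    a generating set (and a basis, when the ideal (g) has full rank) of L((g))."""
    n = len(f) - 1
    rows, v = [], poly_mod(g, f)
    for _ in range(n):
        rows.append(v)
        v = poly_mod([0] + v, f)         # multiply by x, then reduce modulo f
    return rows

# f = x^3 - 1 (a cyclic lattice), g = 1 + 2x: each row is a rotation of the previous one.
print(ideal_lattice_basis([1, 2], [-1, 0, 0, 1]))   # [[1, 2, 0], [0, 1, 2], [2, 0, 1]]
```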
=== Related properties ===
It turns out that the relevant properties of $f$ for the resulting function to be collision resistant are:
$f$ should be irreducible.
the ring norm $\lVert g\rVert_f$ is not much bigger than $\lVert g\rVert_\infty$ for any polynomial $g$, in a quantitative sense.
The first property implies that every ideal of the ring $\mathbb{Z}[x]/\langle f\rangle$ defines a full-rank lattice in $\mathbb{Z}^n$ and plays a fundamental role in proofs.
Lemma: Every ideal $I$ of $\mathbb{Z}[x]/\langle f\rangle$, where $f$ is a monic, irreducible integer polynomial of degree $n$, is isomorphic to a full-rank lattice in $\mathbb{Z}^n$.
Ding and Lindner gave evidence that distinguishing ideal lattices from general ones can be done in polynomial time and showed that in practice randomly chosen lattices are never ideal. They only considered the case where the lattice has full rank, i.e. the basis consists of $n$ linearly independent vectors. This is not a fundamental restriction because Lyubashevsky and Micciancio have shown that if a lattice is ideal with respect to an irreducible monic polynomial, then it has full rank, as given in the above lemma.
Algorithm: Identifying ideal lattices with full rank bases
Data: A full-rank basis $B \in \mathbb{Z}^{(n,n)}$
Result: true and $\mathbf{q}$, if $B$ spans an ideal lattice with respect to $\mathbf{q}$, otherwise false.
Transform $B$ into HNF
Calculate $A = \mathrm{adj}(B)$, $d = \det(B)$, and $z = B_{(n,n)}$
Calculate the product $P = AMB \bmod d$
if only the last column of $P$ is non-zero then
set $c = P_{(\cdot,n)}$ to equal this column
else return false
if $z \mid c_i$ for $i = 1,\ldots,n$ then
use CRT to find $q^{\ast} \equiv (c/z) \bmod (d/z)$ and $q^{\ast} \equiv 0 \bmod z$
else return false
if $Bq^{\ast} \equiv 0 \bmod (d/z)$ then
return true, $q = Bq^{\ast}/d$
else return false
where the matrix $M$ is
$$M = \begin{pmatrix} 0 & \cdots & \cdots & \cdots & 0 \\ & & & & \vdots \\ I_{n-1} & & & & \vdots \\ & & & & 0 \end{pmatrix}$$
Using this algorithm, it can be seen that many lattices are not ideal lattices. For example, let {\displaystyle n=2} and {\displaystyle k\in \mathbb {Z} \smallsetminus \lbrace 0,\pm 1\rbrace }; then {\displaystyle B_{1}={\begin{pmatrix}k&0\\0&1\end{pmatrix}}} is ideal, but {\displaystyle B_{2}={\begin{pmatrix}1&0\\0&k\end{pmatrix}}} is not. {\displaystyle B_{2}} with {\displaystyle k=2} is an example given by Lyubashevsky and Micciancio.
Performing the algorithm on it and referring to the basis as B: the matrix B is already in Hermite Normal Form, so the first step is not needed. The determinant is {\displaystyle d=2}, the adjugate matrix is {\displaystyle A={\begin{pmatrix}2&0\\0&1\end{pmatrix}},} {\displaystyle M={\begin{pmatrix}0&0\\1&0\end{pmatrix}}} and finally, the product {\displaystyle P=AMB{\bmod {d}}} is {\displaystyle P={\begin{pmatrix}0&0\\1&0\end{pmatrix}}.}
At this point the algorithm stops, because all but the last column of {\displaystyle P} would have to be zero for {\displaystyle B} to span an ideal lattice.
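The short numpy sketch below reproduces this worked example numerically and applies the column test from the algorithm above; the variable names are illustrative.

import numpy as np

B = np.array([[1, 0],
              [0, 2]])            # B_2 with k = 2, already in HNF
d = round(np.linalg.det(B))       # 2
A = np.array([[2, 0],
              [0, 1]])            # adjugate of B
M = np.array([[0, 0],
              [1, 0]])            # I_{n-1} in the lower-left block
P = (A @ M @ B) % d
print(P)                          # [[0 0], [1 0]]

# All but the last column of P must vanish for B to span an ideal lattice;
# here the first column is non-zero, so the algorithm returns false.
print(not P[:, :-1].any())        # False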
== Use in cryptography ==
Micciancio introduced the class of structured cyclic lattices, which correspond to ideals in polynomial rings {\displaystyle \mathbb {Z} [x]/(x^{n}-1)}, and presented the first provably secure one-way function based on the worst-case hardness of the restriction of Poly(n)-SVP to cyclic lattices. (The problem γ-SVP consists in computing a non-zero vector of a given lattice whose norm is no more than γ times larger than the norm of a shortest non-zero lattice vector.) At the same time, thanks to its algebraic structure, this one-way function enjoys high efficiency comparable to the NTRU scheme ({\displaystyle {\tilde {O}}(n)} evaluation time and storage cost). Subsequently, Lyubashevsky and Micciancio, and independently Peikert and Rosen, showed how to modify Micciancio's function to construct an efficient and provably secure collision resistant hash function. For this, they introduced the more general class of ideal lattices, which correspond to ideals in polynomial rings {\displaystyle \mathbb {Z} [x]/f(x)}. The collision resistance relies on the hardness of the restriction of Poly(n)-SVP to ideal lattices (called Poly(n)-Ideal-SVP). The average-case collision-finding problem is a natural computational problem called Ideal-SIS, which has been shown to be as hard as the worst-case instances of Ideal-SVP. Provably secure efficient signature schemes from ideal lattices have also been proposed, but constructing efficient provably secure public key encryption from ideal lattices was an interesting open problem.
The fundamental idea of using LWE and Ring-LWE for key exchange was proposed by Jintai Ding and filed at the University of Cincinnati in 2011, and it provided a state-of-the-art description of a quantum-resistant key exchange using Ring-LWE. A provisional patent application was filed in 2012, and the paper appeared the same year. In 2014, Peikert presented a key transport scheme following the same basic idea as Ding's, in which Ding's idea of sending an additional signal for rounding is also used. A digital signature using the same concepts had been developed several years earlier by Vadim Lyubashevsky in "Lattice Signatures Without Trapdoors." Together, the work of Peikert and Lyubashevsky provides a suite of Ring-LWE based quantum-attack-resistant algorithms with the same security reductions.
=== Efficient collision resistant hash functions ===
The main usefulness of ideal lattices in cryptography stems from the fact that very efficient and practical collision resistant hash functions can be built based on the hardness of finding an approximate shortest vector in such lattices.
Peikert and Rosen, and independently Lyubashevsky and Micciancio, constructed collision resistant hash functions based on ideal lattices (a generalization of cyclic lattices) and provided a fast and practical implementation. These results paved the way for other efficient cryptographic constructions, including identification schemes and signatures.
Lyubashevsky and Micciancio gave constructions of efficient collision resistant hash functions that can be proven secure based on worst case hardness of the shortest vector problem for ideal lattices. They defined hash function families as follows: Given a ring {\displaystyle R=\mathbb {Z} _{p}[x]/\langle f\rangle }, where {\displaystyle f\in \mathbb {Z} _{p}[x]} is a monic, irreducible polynomial of degree {\displaystyle n} and {\displaystyle p} is an integer of order roughly {\displaystyle n^{2}}, generate {\displaystyle m} random elements {\displaystyle a_{1},\dots ,a_{m}\in R}, where {\displaystyle m} is a constant. The ordered {\displaystyle m}-tuple {\displaystyle h=(a_{1},\ldots ,a_{m})\in R^{m}} determines the hash function. It maps elements of {\displaystyle D^{m}}, where {\displaystyle D} is a strategically chosen subset of {\displaystyle R}, to {\displaystyle R}. For an element {\displaystyle b=(b_{1},\ldots ,b_{m})\in D^{m}}, the hash is {\displaystyle h(b)=\sum _{i=1}^{m}a_{i}\cdot b_{i}}. Here the size of the key (the hash function) is {\displaystyle O(mn\log p)=O(n\log n)}, and the operation {\displaystyle a_{i}\cdot b_{i}} can be done in time {\displaystyle O(n\log n\log \log n)} by using the Fast Fourier Transform (FFT), for an appropriate choice of the polynomial {\displaystyle f}. Since {\displaystyle m} is a constant, hashing requires time {\displaystyle O(n\log n\log \log n)}. They proved that the hash function family is collision resistant by showing that if there is a polynomial-time algorithm that succeeds with non-negligible probability in finding {\displaystyle b\neq b'\in D^{m}} such that {\displaystyle h(b)=h(b')} for a randomly chosen hash function {\displaystyle h\in R^{m}}, then a certain problem called the "shortest vector problem" is solvable in polynomial time for every ideal of the ring {\displaystyle \mathbb {Z} [x]/\langle f\rangle }.
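A rough Python sketch of this hash family for one admissible choice of f (here x^n + 1), with toy parameters far too small for security; the helper names are illustrative, and the inputs b_i are taken with 0/1 coefficients as a simple choice of the subset D.

import numpy as np

def mul_mod_xn_plus_1(a, b, n, p):
    # Multiply two degree-<n polynomials and reduce mod (x^n + 1, p).
    full = np.convolve(a, b)                 # coefficients of a*b, degree <= 2n-2
    res = np.zeros(n, dtype=np.int64)
    for i, c in enumerate(full):
        if i < n:
            res[i] += c
        else:
            res[i - n] -= c                  # use x^n = -1
    return res % p

def hash_lm(key, blocks, n, p):
    # h(b) = sum_i a_i * b_i in R, with key = (a_1..a_m) and blocks = (b_1..b_m).
    acc = np.zeros(n, dtype=np.int64)
    for a_i, b_i in zip(key, blocks):
        acc = (acc + mul_mod_xn_plus_1(a_i, b_i, n, p)) % p
    return acc

# Toy parameters only; real instantiations take p on the order of n^2 and larger n.
n, p, m = 4, 17, 2
rng = np.random.default_rng(0)
key = [rng.integers(0, p, n) for _ in range(m)]      # the a_i, i.e. the hash key
msg = [rng.integers(0, 2, n) for _ in range(m)]      # small-coefficient inputs from D
print(hash_lm(key, msg, n, p))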
Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev defined the following algorithm of hash functions based on ideal lattices:
Parameters: Integers {\displaystyle q,n,m,d} with {\displaystyle n\mid m}, and a vector f {\displaystyle \in \mathbb {Z} ^{n}}.
Key: {\displaystyle m/n} vectors {\displaystyle a_{1},\ldots ,a_{m/n}} chosen independently and uniformly at random in {\displaystyle \mathbb {Z} _{q}^{n}}.
Hash function: {\displaystyle f_{A}:\lbrace 0,\ldots ,d-1\rbrace ^{m}\longrightarrow \mathbb {Z} _{q}^{n}} given by {\displaystyle f_{A}(y)=[F\ast a_{1}|\ldots |F\ast a_{m/n}]\,y{\bmod {q}}}.
Here {\displaystyle n,m,q,d} are parameters, f is a vector in {\displaystyle \mathbb {Z} ^{n}} and {\displaystyle A} is a block-matrix with structured blocks {\displaystyle A^{(i)}=F\ast a^{(i)}}.
Finding short vectors in {\displaystyle \Lambda _{q}^{\perp }([F\ast a_{1}|\ldots |F\ast a_{m/n}])} on the average (even with just inverse polynomial probability) is as hard as solving various lattice problems (such as approximate SVP and SIVP) in the worst case over ideal lattices, provided the vector f satisfies the following two properties:
For any two unit vectors u, v, the vector [F∗u]v has small (say, polynomial in {\displaystyle n}, typically {\displaystyle O({\sqrt {n}})}) norm.
The polynomial {\displaystyle f(x)=x^{n}+f_{n}x^{n-1}+\cdots +f_{1}\in \mathbb {Z} [x]} is irreducible over the integers, i.e., it does not factor into the product of integer polynomials of smaller degree.
The first property is satisfied by the vector {\displaystyle \mathbf {f} =(-1,0,\ldots ,0)} corresponding to circulant matrices, because all the coordinates of [F∗u]v are bounded by 1, and hence {\displaystyle \lVert [{\textbf {F}}\ast {\textbf {u}}]{\textbf {v}}\rVert \leq {\sqrt {n}}}. However, the polynomial {\displaystyle x^{n}-1} corresponding to {\displaystyle \mathbf {f} =(-1,0,\ldots ,0)} is not irreducible because it factors into {\displaystyle (x-1)(x^{n-1}+x^{n-2}+\cdots +x+1)}, and this is why collisions can be efficiently found. So, {\displaystyle \mathbf {f} =(-1,0,\ldots ,0)} is not a good choice to get collision resistant hash functions, but many other choices are possible. For example, some choices of f for which both properties are satisfied (and which therefore yield collision resistant hash functions with worst-case security guarantees) are
{\displaystyle \mathbf {f} =(1,\ldots ,1)\in \mathbb {Z} ^{n}} where {\displaystyle n+1} is prime, and
{\displaystyle \mathbf {f} =(1,0,\ldots ,0)\in \mathbb {Z} ^{n}} for {\displaystyle n} equal to a power of 2.
=== Digital signatures ===
Digital signature schemes are among the most important cryptographic primitives. They can be obtained by using the one-way functions based on the worst-case hardness of lattice problems, but such constructions are impractical. A number of new digital signature schemes based on learning with errors, ring learning with errors and trapdoor lattices have been developed since the learning with errors problem was applied in a cryptographic context.
Lyubashevsky and Micciancio gave a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. Their scheme has worst-case security guarantees based on ideal lattices and is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost linear time.
One of the main open problems raised by their work is constructing a one-time signature with similar efficiency, but based on a weaker hardness assumption. For instance, one would like a one-time signature whose security is based on the hardness of approximating the Shortest Vector Problem (SVP) (in ideal lattices) to within a factor of {\displaystyle {\tilde {O}}(n)}.
Their construction is based on a standard transformation from one-time signatures (i.e. signatures that allow one to securely sign a single message) to general signature schemes, together with a novel construction of a lattice-based one-time signature whose security is ultimately based on the worst-case hardness of approximating the shortest vector in all lattices corresponding to ideals in the ring {\displaystyle \mathbb {Z} [x]/\langle f\rangle } for any irreducible polynomial {\displaystyle f}.
Key-Generation Algorithm:
Input: {\displaystyle 1^{n}}, irreducible polynomial {\displaystyle f\in \mathbb {Z} [x]} of degree {\displaystyle n}.
Set {\displaystyle p\longleftarrow (\varphi n)^{3}}, {\displaystyle m\longleftarrow \lceil \log n\rceil }, {\displaystyle R\longleftarrow \mathbb {Z} _{p}[x]/\langle f\rangle }
For all positive {\displaystyle i}, let the sets {\displaystyle DK_{i}} and {\displaystyle DL_{i}} be defined as:
{\displaystyle DK_{i}=\lbrace {\hat {y}}\in R^{m}} such that {\displaystyle \lVert {\hat {y}}\rVert _{\infty }\leq 5ip^{1/m}\rbrace }
{\displaystyle DL_{i}=\lbrace {\hat {y}}\in R^{m}} such that {\displaystyle \lVert {\hat {y}}\rVert _{\infty }\leq 5in\varphi p^{1/m}\rbrace }
Choose uniformly random {\displaystyle h\in {\mathcal {H}}_{R,m}}
Pick a uniformly random string {\displaystyle r\in \lbrace 0,1\rbrace ^{\lfloor \log ^{2}n\rfloor }}
If {\displaystyle r=0^{\lfloor \log ^{2}n\rfloor }} then
Set {\displaystyle j=\lfloor \log ^{2}n\rfloor }
else
Set {\displaystyle j} to the position of the first 1 in the string {\displaystyle r}
end if
Pick {\displaystyle {\hat {k}},{\hat {l}}} independently and uniformly at random from {\displaystyle DK_{j}} and {\displaystyle DL_{j}} respectively
Signing Key: {\displaystyle ({\hat {k}},{\hat {l}})}. Verification Key: {\displaystyle (h,h({\hat {k}}),h({\hat {l}}))}
Signing Algorithm:
Input: Message {\displaystyle z\in R} such that {\displaystyle \lVert z\rVert _{\infty }\leq 1}; signing key {\displaystyle ({\hat {k}},{\hat {l}})}
Output: {\displaystyle {\hat {s}}\longleftarrow {\hat {k}}z+{\hat {l}}}
Verification Algorithm:
Input: Message {\displaystyle z}; signature {\displaystyle {\hat {s}}}; verification key {\displaystyle (h,h({\hat {k}}),h({\hat {l}}))}
Output: “ACCEPT”, if {\displaystyle \lVert {\hat {s}}\rVert _{\infty }\leq 10\varphi p^{1/m}n\log ^{2}n} and {\displaystyle h({\hat {s}})=h({\hat {k}})z+h({\hat {l}})};
“REJECT”, otherwise.
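A minimal sketch of the signing arithmetic over R^m, assuming a schoolbook reduction modulo a monic f; the helper names are illustrative, and the verifier's acceptance test additionally uses the hash values from the verification key together with the infinity-norm bound stated above.

import numpy as np

def ring_mul(a, b, f, p):
    # Schoolbook product in Z_p[x]/<f>; f is monic, coefficients lowest-degree first.
    n = len(f) - 1
    full = list(np.convolve(np.asarray(a, np.int64), np.asarray(b, np.int64)))
    for i in range(len(full) - 1, n - 1, -1):      # reduce the terms of degree >= n
        lead = full[i]
        if lead:
            for j in range(n + 1):
                full[i - n + j] -= lead * f[j]
    full += [0] * max(0, n - len(full))
    return np.array(full[:n]) % p

def sign(k_hat, l_hat, z, f, p):
    # s_hat <- k_hat * z + l_hat, computed componentwise over R^m.
    return [(ring_mul(k, z, f, p) + np.asarray(l, np.int64)) % p
            for k, l in zip(k_hat, l_hat)]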
=== The SWIFFT hash function ===
The hash function is quite efficient and can be computed asymptotically in {\displaystyle {\tilde {O}}(m)} time using the Fast Fourier Transform (FFT) over the complex numbers. However, in practice, this carries a substantial overhead. The SWIFFT family of hash functions defined by Micciancio and Regev is essentially a highly optimized variant of the hash function above using the FFT in {\displaystyle \mathbb {Z} _{q}}. The vector f is set to {\displaystyle (1,0,\dots ,0)\in \mathbb {Z} ^{n}} for {\displaystyle n} equal to a power of 2, so that the corresponding polynomial {\displaystyle x^{n}+1} is irreducible.
Let {\displaystyle q} be a prime number such that {\displaystyle 2n} divides {\displaystyle q-1}, and let {\displaystyle {\textbf {W}}\in \mathbb {Z} _{q}^{n\times n}} be an invertible matrix over {\displaystyle \mathbb {Z} _{q}} to be chosen later. The SWIFFT hash function maps a key {\displaystyle {\tilde {a}}^{(1)},\ldots ,{\tilde {a}}^{(m/n)}} consisting of {\displaystyle m/n} vectors chosen uniformly from {\displaystyle \mathbb {Z} _{q}^{n}} and an input {\displaystyle y\in \lbrace 0,\ldots ,d-1\rbrace ^{m}} to {\displaystyle {\textbf {W}}\cdot f_{A}(y){\bmod {q}}}, where {\displaystyle {\textbf {A}}=[{\textbf {F}}\ast \alpha ^{(1)},\ldots ,{\textbf {F}}\ast \alpha ^{(m/n)}]} is as before and {\displaystyle \alpha ^{(i)}={\textbf {W}}^{-1}{\tilde {a}}^{(i)}{\bmod {q}}}.
Multiplication by the invertible matrix {\displaystyle {\textbf {W}}^{-1}} maps a uniformly chosen {\displaystyle {\tilde {a}}\in \mathbb {Z} _{q}^{n}} to a uniformly chosen {\displaystyle \alpha \in \mathbb {Z} _{q}^{n}}. Moreover, {\displaystyle {\textbf {W}}\cdot f_{A}(y)={\textbf {W}}\cdot f_{A}(y'){\pmod {q}}} if and only if {\displaystyle f_{A}(y)=f_{A}(y'){\pmod {q}}}.
Together, these two facts establish that finding collisions in SWIFFT is equivalent to finding collisions in the underlying ideal lattice function {\displaystyle f_{A}}, and the claimed collision resistance property of SWIFFT is supported by the connection to worst case lattice problems on ideal lattices.
The algorithm of the SWIFFT hash function is:
Parameters: Integers {\displaystyle n,m,q,d} such that {\displaystyle n} is a power of 2, {\displaystyle q} is prime, {\displaystyle 2n\mid (q-1)} and {\displaystyle n\mid m}.
Key: {\displaystyle m/n} vectors {\displaystyle {\tilde {a}}_{1},\ldots ,{\tilde {a}}_{m/n}} chosen independently and uniformly at random in {\displaystyle \mathbb {Z} _{q}^{n}}.
Input: {\displaystyle m/n} vectors {\displaystyle y^{(1)},\dots ,y^{(m/n)}\in \lbrace 0,\dots ,d-1\rbrace ^{n}}.
Output: the vector {\displaystyle \sum _{i=1}^{m/n}{\tilde {a}}^{(i)}\odot ({\textbf {W}}y^{(i)})\in \mathbb {Z} _{q}^{n}}, where {\displaystyle \odot } is the component-wise vector product.
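A sketch of this output formula with toy parameters; W is simply taken here to be some invertible matrix modulo q (a Vandermonde-type matrix built from powers of 3 mod 17), whereas the actual SWIFFT construction fixes a particular FFT-related choice of W.

import numpy as np

def swifft_compress(keys, blocks, W, q):
    # Return sum_i a_i ⊙ (W @ y_i) mod q, with ⊙ the component-wise product.
    n = W.shape[0]
    acc = np.zeros(n, dtype=np.int64)
    for a_i, y_i in zip(keys, blocks):
        acc = (acc + a_i * (W @ y_i)) % q
    return acc

# Toy parameters: n = 4, q = 17 (so 2n divides q - 1), d = 2, and m/n = 2 key vectors.
n, q, d = 4, 17, 2
rng = np.random.default_rng(1)
W = np.array([[pow(3, i * j, q) for j in range(n)] for i in range(n)])  # invertible mod 17
keys = [rng.integers(0, q, n) for _ in range(2)]
blocks = [rng.integers(0, d, n) for _ in range(2)]
print(swifft_compress(keys, blocks, W, q))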
=== Learning with errors (LWE) ===
==== Ring-LWE ====
The learning with errors (LWE) problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of LWE. To get truly efficient LWE applications, Lyubashevsky, Peikert and Regev defined an appropriate version of the LWE problem in a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called their LWE version ring-LWE.
Let {\displaystyle f(x)=x^{n}+1\in \mathbb {Z} [x]}, where the security parameter {\displaystyle n} is a power of 2, making {\displaystyle f(x)} irreducible over the rationals. (This particular {\displaystyle f(x)} comes from the family of cyclotomic polynomials, which play a special role in this work.)
Let {\displaystyle R=\mathbb {Z} [x]/\langle f(x)\rangle } be the ring of integer polynomials modulo {\displaystyle f(x)}. Elements of {\displaystyle R} (i.e., residues modulo {\displaystyle f(x)}) are typically represented by integer polynomials of degree less than {\displaystyle n}. Let {\displaystyle q\equiv 1{\pmod {2n}}} be a sufficiently large public prime modulus (bounded by a polynomial in {\displaystyle n}), and let {\displaystyle R_{q}=R/\langle q\rangle =\mathbb {Z} _{q}[x]/\langle f(x)\rangle } be the ring of integer polynomials modulo both {\displaystyle f(x)} and {\displaystyle q}. Elements of {\displaystyle R_{q}} may be represented by polynomials of degree less than {\displaystyle n} whose coefficients are from {\displaystyle \lbrace 0,\dots ,q-1\rbrace }.
In the above-described ring, the R-LWE problem may be described as follows.
Let {\displaystyle s=s(x)\in R_{q}} be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) 'random noisy ring equations' from truly uniform ones. More specifically, the noisy equations are of the form {\displaystyle (a,b\approx a\cdot s)\in R_{q}\times R_{q}}, where a is uniformly random and the product {\displaystyle a\cdot s} is perturbed by some 'small' random error term, chosen from a certain distribution over {\displaystyle R}.
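A sketch of drawing one such noisy ring equation, using x^n + 1 as above and a rounded Gaussian as one possible choice of small error distribution (the exact distribution in the formal definition differs); names and parameters are illustrative.

import numpy as np

def negacyclic_mul(a, b, n, q):
    # Multiply in Z_q[x]/<x^n + 1> (negacyclic convolution).
    full = np.convolve(a, b)
    res = np.zeros(n, dtype=np.int64)
    for i, c in enumerate(full):
        if i < n:
            res[i] += c
        else:
            res[i - n] -= c
    return res % q

def rlwe_sample(s, n, q, sigma, rng):
    # One 'noisy ring equation' (a, b = a*s + e mod q) with small error e.
    a = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0.0, sigma, size=n)).astype(np.int64)
    b = (negacyclic_mul(a, s, n, q) + e) % q
    return a, b

# Toy example: n = 8, q = 257 (q = 1 mod 2n), uniform secret s, error width sigma = 2.
rng = np.random.default_rng(7)
n, q, sigma = 8, 257, 2.0
s = rng.integers(0, q, size=n)
a, b = rlwe_sample(s, n, q, sigma, rng)
print(a, b)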
They gave a quantum reduction from approximate SVP (in the worst case) on ideal lattices in {\displaystyle R} to the search version of ring-LWE, where the goal is to recover the secret {\displaystyle s\in R_{q}} (with high probability, for any {\displaystyle s}) from arbitrarily many noisy products. This result follows the general outline of Regev's iterative quantum reduction for general lattices, but ideal lattices introduce several new technical roadblocks in both the 'algebraic' and 'geometric' components of the reduction. They used algebraic number theory, in particular, the canonical embedding of a number field and the Chinese Remainder Theorem, to overcome these obstacles. They obtained the following theorem:
Theorem: Let {\displaystyle K} be an arbitrary number field of degree {\displaystyle n}. Let {\displaystyle \alpha =\alpha (n)\in (0,1)} be arbitrary, and let the (rational) integer modulus {\displaystyle q=q(n)\geq 2} be such that {\displaystyle \alpha \cdot q\geq \omega ({\sqrt {\log n}})}. There is a probabilistic polynomial-time quantum reduction from {\displaystyle K}-{\displaystyle DGS_{\gamma }} to {\displaystyle {\mathcal {O}}_{K}}-{\displaystyle LWE_{q,\Psi \leq \alpha }}, where {\displaystyle \gamma =\eta _{\epsilon }(I)\cdot \omega ({\sqrt {\log n}})/\alpha }.
In 2013, Guneysu, Lyubashevsky, and Poppleman proposed a digital signature scheme based on the Ring Learning with Errors problem. In 2014, Peikert presented a Ring Learning with Errors Key Exchange (RLWE-KEX) in his paper, "Lattice Cryptography for the Internet." This was further developed by the work of Singh.
==== Ideal-LWE ====
Stehle, Steinfeld, Tanaka and Xagawa defined a structured variant of the LWE problem (Ideal-LWE) to describe an efficient public key encryption scheme based on the worst case hardness of the approximate SVP in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of {\displaystyle {\tilde {O}}(n^{2})}-Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is {\displaystyle {\tilde {O}}(n)} bits and the amortized encryption/decryption cost is {\displaystyle {\tilde {O}}(1)} bit operations per message bit (encrypting {\displaystyle {\tilde {\Omega }}(n)} bits at once, at a {\displaystyle {\tilde {O}}(n)} cost). The security assumption here is that {\displaystyle {\tilde {O}}(n^{2})}-Ideal-SVP cannot be solved by any subexponential time quantum algorithm. It is noteworthy that this is stronger than standard public key cryptography security assumptions. On the other hand, contrary to most public key cryptography, lattice-based cryptography allows security against subexponential quantum attacks.
Most of the cryptosystems based on general lattices rely on the average-case hardness of learning with errors (LWE). Their scheme is based on a structured variant of LWE, which they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties that arise from the restriction to ideal lattices. Firstly, the previous cryptosystems based on unstructured lattices all make use of Regev's worst-case to average-case classical reduction from the Bounded Distance Decoding problem (BDD) to LWE (this is the classical step in the quantum reduction from SVP to LWE). This reduction exploits the lack of structure of the considered lattices, and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices makes it possible to consider a single row. Secondly, the other ingredient used in previous cryptosystems, namely Regev's reduction from the computational variant of LWE to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the LWE matrices.
To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to LWE. It also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of SVP.
=== Fully homomorphic encryption ===
A fully homomorphic encryption (FHE) scheme is one which allows for computation over encrypted data, without first needing to decrypt. The problem of constructing a fully homomorphic encryption scheme was first put forward by Rivest, Adleman and Dertouzos in 1978, shortly after the invention of RSA by Rivest, Shamir and Adleman.
An encryption scheme {\displaystyle \varepsilon =({\mathsf {KeyGen}},{\mathsf {Encrypt}},{\mathsf {Decrypt}},{\mathsf {Eval}})} is homomorphic for circuits in {\displaystyle {\mathcal {C}}} if, for any circuit {\displaystyle C\in {\mathcal {C}}}, given {\displaystyle PK,SK\leftarrow {\mathsf {KeyGen}}(1^{\lambda })}, {\displaystyle y={\mathsf {Encrypt}}(PK,x)}, and {\displaystyle y'={\mathsf {Eval}}(PK,C,y)}, it holds that {\displaystyle {\mathsf {Decrypt}}(SK,y')=C(x)}.
{\displaystyle \varepsilon } is fully homomorphic if it is homomorphic for all circuits of size {\displaystyle \operatorname {poly} (\lambda )} where {\displaystyle \lambda } is the scheme's security parameter.
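In code form, this correctness condition can be sketched abstractly as follows; the scheme object is a placeholder exposing the four algorithms, to be supplied by some concrete construction.

def is_homomorphic_for(scheme, C, x, security_parameter=128):
    # Check Decrypt(SK, Eval(PK, C, Encrypt(PK, x))) == C(x) for one input x.
    pk, sk = scheme.keygen(security_parameter)
    y = scheme.encrypt(pk, x)
    y_prime = scheme.eval(pk, C, y)
    return scheme.decrypt(sk, y_prime) == C(x)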
In 2009, Gentry proposed the first solution to the problem of constructing a fully homomorphic encryption scheme. His scheme was based on ideal lattices.
== See also ==
Lattice-based cryptography
Homomorphic encryption
Ring learning with errors key exchange
Post-quantum cryptography
Short integer solution problem
== References == | Wikipedia/Ideal_lattice_cryptography |
Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption algorithms are usually better for bulk encryption. With the exception of the one-time pad, they have a smaller key size, which means less storage space and faster transmission. Due to this, asymmetric-key encryption is often used to exchange the secret key for symmetric-key encryption.
== Types ==
Symmetric-key encryption can use either stream ciphers or block ciphers.
Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers) of a message one at a time. An example is ChaCha20. Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table.
Block ciphers take a number of bits and encrypt them in a single unit, padding the plaintext to achieve a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
== Implementations ==
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA.
== Use as a cryptographic primitive ==
Symmetric ciphers are commonly used to achieve other cryptographic primitives than just encryption.
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM).
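As an example of such authenticated encryption, AES-GCM can be used as follows with the widely used Python cryptography package (assuming it is installed; this is one library choice, not the only one).

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                       # 96-bit nonce; never reuse with the same key
associated_data = b"header-v1"               # authenticated but not encrypted
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", associated_data)
plaintext = aesgcm.decrypt(nonce, ciphertext, associated_data)  # raises InvalidTag if modified
assert plaintext == b"attack at dawn"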
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
== Construction of symmetric ciphers ==
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.
== Security of symmetric ciphers ==
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the rounds in the encryption process to better protect against attack. This, however, tends to increase the processing power and decrease the speed at which the process runs due to the amount of operations the system needs to do.
Most modern symmetric-key algorithms appear to be resistant to attacks by quantum computers. Grover's algorithm would reduce a brute-force key search to roughly the square root of the time traditionally required, a vulnerability that can be compensated for by doubling the key length. For example, a 128-bit AES cipher would not be secure against such an attack, as it would reduce the time required to test all possible iterations from over 10 quintillion years to about six months. By contrast, it would still take a quantum computer roughly the same amount of time to decode a 256-bit AES cipher as it would take a conventional computer to decode a 128-bit AES cipher. For this reason, AES-256 is believed to be "quantum resistant".
== Key management ==
== Key establishment ==
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key. All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
== Key generation ==
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization.
== Reciprocal cipher ==
A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher.
Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
Examples of reciprocal ciphers include:
Atbash
Beaufort cipher
Enigma machine
Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher.
the Porta polyalphabetic cipher is self-reciprocal.
Purple cipher
RC4
ROT13
XOR cipher
Vatsyayana cipher
The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round.
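A small sketch of why a repeating-key XOR combiner is reciprocal: applying the same operation twice with the same key restores the plaintext.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"hello"
key = b"\x42\x13"
ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message   # the cipher is its own inverse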
== Notes ==
== References == | Wikipedia/Symmetric-key_encryption_algorithm |
In cryptography, a padded uniform random blob or PURB is a discipline for encrypted data formats designed to minimize unintended information leakage either from its encryption format metadata or from its total length.
== Properties of PURBs ==
When properly created, a PURB's content is indistinguishable from a uniform random bit string to any observer without a relevant decryption key. A PURB therefore leaks no information through headers or other cleartext metadata associated with the encrypted data format. This leakage minimization "hygiene" practice contrasts with traditional encrypted data formats such as Pretty Good Privacy, which include cleartext metadata encoding information such as the application that created the data, the data format version, the number of recipients the data is encrypted for, the identities or public keys of the recipients, and the ciphers or suites that were used to encrypt the data. While such encryption metadata was considered non-sensitive when these encrypted formats were designed, modern attack techniques have found numerous ways to employ such incidentally-leaked metadata in facilitating attacks, such as by identifying data encrypted with weak ciphers or obsolete algorithms, fingerprinting applications to track users or identify software versions with known vulnerabilities, or traffic analysis techniques such as identifying all users, groups, and associated public keys involved in a conversation from an encrypted message observed between only two of them.
In addition, a PURB is padded to a constrained set of possible lengths, in order to minimize the amount of information the encrypted data could potentially leak to observers via its total length. Without padding, encrypted objects such as files or bit strings up to {\displaystyle M} bits in length can leak up to {\displaystyle O(\log M)} bits of information to an observer, namely the number of bits required to represent the length exactly. A PURB is padded to a length representable in a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This constraint limits the maximum amount of information a PURB's total length can leak to {\displaystyle O(\log \log M)} bits, a significant asymptotic reduction and the best achievable in general for variable-length encrypted formats whose multiplicative overhead is limited to a constant factor of the unpadded payload size. This asymptotic leakage is the same as one would obtain by padding encrypted objects to a power of some base, such as to a power of two. Allowing some significant mantissa bits in the length's representation rather than just an exponent, however, significantly reduces the overhead of padding. For example, padding to the next power of two can impose up to 100% overhead by nearly doubling the object's size, while a PURB's padding imposes overhead of at most 12% for small strings, decreasing gradually (to 6%, 3%, etc.) as objects get larger.
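A sketch of this mantissa-limited rounding rule (our reading of the Padmé scheme from the PURB paper; the exact formulation should be checked against the paper) in Python:

def padme_length(length: int) -> int:
    # Round length up so that only about log2(log2(length)) significant bits
    # remain below the leading bit of its binary representation.
    if length <= 1:
        return length
    e = length.bit_length() - 1          # floor(log2(length))
    s = e.bit_length()                   # number of bits allowed in the mantissa
    low_bits = e - s                     # low-order bits zeroed by rounding up
    mask = (1 << low_bits) - 1 if low_bits > 0 else 0
    return (length + mask) & ~mask

for size in [10, 100, 1000, 10_000, 1_000_000]:
    padded = padme_length(size)
    print(size, padded, f"overhead {100 * (padded - size) / size:.1f}%")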
Experimental evidence indicates that on data sets comprising objects such as files, software packages, and online videos, leaving objects unpadded or padding to a constant block size often leaves them uniquely identifiable by total length alone. Padding objects to a power of two or to a PURB length, in contrast, ensures that most objects are indistinguishable from at least some other objects and thus have a nontrivial anonymity set.
== Encoding and decoding PURBs ==
Because a PURB is a discipline for designing encrypted formats and not a particular encrypted format, there is no single prescribed method for encoding or decoding PURBs. Applications may use any encryption and encoding scheme provided it produces a bit string that appears uniformly random to an observer without an appropriate key, provided the appropriate hardness assumptions are satisfied of course, and provided the PURB is padded to one of the allowed lengths. Correctly-encoded PURBs therefore do not identify the application that created them in their ciphertext. A decoding application, therefore, cannot readily tell before decryption whether a PURB was encrypted for that application or its user, other than by trying to decrypt it with any available decryption keys.
Encoding and decoding a PURB presents technical efficiency challenges, in that traditional parsing techniques are not applicable because a PURB by definition has no metadata markers that a traditional parser could use to discern the PURB's structure before decrypting it. Instead, a PURB must be decrypted first obliviously to its internal structure, and then parsed only after the decoder has used an appropriate decryption key to find a suitable cryptographic entrypoint into the PURB.
Encoding and decoding PURBs intended to be decrypted by several different recipients, public keys, and/or ciphers presents the additional technical challenge that each recipient must find a different entrypoint at a distinct location in the PURB non-overlapping with those of the other recipients, but the PURB presents no cleartext metadata indicating the positions of those entrypoints or even the total number of them. The paper that proposed PURBs also included algorithms for encrypting objects to multiple recipients using multiple cipher suites. With these algorithms, recipients can find their respective entrypoints into the PURB with only a logarithmic number of trial decryptions using symmetric-key cryptography and only one expensive public-key operation per cipher suite.
A third technical challenge is representing the public-key cryptographic material that needs to be encoded into each entrypoint in a PURB, such as the ephemeral Diffie-Hellman public key a recipient needs to derive the shared secret, in an encoding indistinguishable from uniformly random bits. Because the standard encodings of elliptic-curve points are readily distinguishable from random bits, for example, special indistinguishable encoding algorithms must be used for this purpose, such as Elligator and its successors.
== Tradeoffs and limitations ==
The primary privacy advantage that PURBs offer is a strong assurance that correctly-encrypted data leaks nothing incidental via internal metadata that observers might readily use to identify weaknesses in the data or software used to produce it, or to fingerprint the application or user that created the PURB. This privacy advantage can translate into a security benefit for data encrypted with weak or obsolete ciphers, or by software with known vulnerabilities that an attacker might exploit based on trivially-observable information gleaned from cleartext metadata.
A primary disadvantage of the PURB encryption discipline is the complexity of encoding and decoding, because the decoder cannot rely on conventional parsing techniques before decryption. A secondary disadvantage is the overhead that padding adds, although the padding scheme proposed for PURBs incurs at most only a few percent overhead for objects of significant size.
The Padme padding proposed in the PURB paper only creates files of specific very distinct sizes. Thus, an encrypted file may often be identified as PURB encrypted with high confidence, as the probability of any other file having exactly one of those padded sizes is very low. Another padding problem occurs with very short messages, where the padding does not effectively hide the size of the content.
One critique of incurring the complexity and overhead costs of PURB encryption is that the context in which a PURB is stored or transmitted may often leak metadata about the encrypted content anyway, and such metadata is outside of the encryption format's purview or control and thus cannot be addressed by the encryption format alone. For example, an application's or user's choice of filename and directory in which to store a PURB on disk may allow an observer to infer the application that likely created it and to what purpose, even if the PURB's data content itself does not. Similarly, encrypting an E-mail's body as a PURB instead of with traditional PGP or S/MIME format may eliminate the encryption format's metadata leakage, but cannot prevent information leakage from the cleartext E-mail headers, or from the endpoint hosts and E-mail servers involved in the exchange. Nevertheless, separate but complementary disciplines are typically available to limit such contextual metadata leakage, such as appropriate file naming conventions or use of pseudonymous E-mail addresses for sensitive communications.
== References == | Wikipedia/PURB_(cryptography) |
In cryptography, encryption (more specifically, encoding) is the process of transforming information in a way that, ideally, only authorized parties can decode. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Despite its goal, encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
== History ==
=== Ancient ===
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, in which a plaintext letter is shifted a fixed number of positions along the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded with a fixed number on the Caesar cipher.
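A short illustrative implementation of the shift operation just described (assuming A-Z plaintext; names are illustrative):

def caesar(text: str, shift: int) -> str:
    # Shift each letter a fixed number of positions along the alphabet.
    out = []
    for ch in text.upper():
        out.append(chr((ord(ch) - 65 + shift) % 26 + 65) if ch.isalpha() else ch)
    return "".join(out)

assert caesar("ATTACK AT DAWN", 3) == "DWWDFN DW GDZQ"
assert caesar(caesar("ATTACK AT DAWN", 3), -3) == "ATTACK AT DAWN"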
Around 800 AD, Arab mathematician al-Kindi developed the technique of frequency analysis – which was an attempt to crack ciphers systematically, including the Caesar cipher. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift: for example, the most common letter in English text is E and is therefore likely to be represented by the letter that appears most commonly in the ciphertext. This technique was rendered ineffective by the polyalphabetic cipher, described by al-Qalqashandi (1355–1418) and Leon Battista Alberti (in 1465), which varied the substitution alphabet as encryption proceeded in order to confound such analysis.
=== 19th–20th century ===
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was theorized as a spool that could jumble an English message up to 36 characters. The message could be decrypted by plugging in the jumbled message to a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
=== Modern ===
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" cipher suites, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 1999 by EFF's brute-force DES cracker, which required 22 hours and 15 minutes to do so. Modern encryption standards often use stronger key sizes, such as AES (256-bit mode), Twofish, ChaCha20-Poly1305, and Serpent (configurable up to 512 bits). Cipher suites that use a 128-bit or larger key, like AES, cannot feasibly be brute-forced, because a 128-bit key alone already admits about 3.4×10^38 possibilities. The most likely option for cracking ciphers with high key size is to find vulnerabilities in the cipher itself, like inherent biases and backdoors, or to exploit physical side effects through side-channel attacks. For example, RC4, a stream cipher, was cracked due to inherent biases and vulnerabilities in the cipher.
== Encryption in cryptography ==
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
=== Types ===
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine used a new symmetric-key each day for encoding and decoding messages.
In public-key cryptography schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key).: 478 Although it appeared later, the work of Diffie and Hellman was published in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie-Hellman key exchange.
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
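A toy sketch with the classic textbook primes; real RSA uses moduli of at least 2048 bits together with padding schemes such as OAEP.

# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                     # modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: 2753 (modular inverse, Python 3.8+)
public_key, private_key = (n, e), (n, d)

m = 65                        # message, as an integer smaller than n
c = pow(m, e, n)              # encrypt: 2790
assert pow(c, d, n) == m      # decrypt recovers the message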
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
== Uses ==
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed used encryption for some of their data in transit, and 53% used encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), is another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
=== Data erasure ===
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
== Limitations ==
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the key of the original Data Encryption Standard (DES) was 56 bits, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute force attacks.
Quantum computing uses properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption uses the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time with modern computers. It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computers could use quantum algorithms to factor this semiprime number in roughly the same amount of time it takes normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and only exists as computational devices, not computers. Furthermore, quantum computing advancements will be able to be used in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
== Attacks and countermeasures ==
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
== The debate around encryption ==
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to ban cryptography because, according to it, strong encryption would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem that makes it easier for criminals to hide their illegal acts online, and others who argue that encryption keeps digital communications safe. The debate heated up in 2014, when Big Tech companies like Apple and Google enabled encryption by default on their devices. This was the start of a series of controversies involving governments, companies and internet users.
=== Integrity protection of Ciphertexts ===
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature usually done by a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
=== Ciphertext length and padding ===
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive implementation about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks asymptotically minimal O(log log M) information via its length.
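As a toy illustration of length padding (not the actual PURB construction), the snippet below pads a message up to the next power of two before encryption, so an observer sees only a coarse size bucket rather than the exact length; the padding policy and helper name are assumptions made for this example.

```python
# Toy sketch: pad a message to the next power of two before encryption so that
# its exact length is hidden. This is NOT the PURB scheme, only an illustration
# of deterministic length padding. A real scheme must also make the padding
# unambiguously removable, e.g. by prefixing the true length.
def pad_to_power_of_two(message: bytes, pad_byte: bytes = b"\x00") -> bytes:
    target = 1
    while target < max(len(message), 1):
        target *= 2
    return message + pad_byte * (target - len(message))

for msg in (b"hi", b"a slightly longer secret message"):
    padded = pad_to_power_of_two(msg)
    print(len(msg), "->", len(padded))   # original length vs. padded length
```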
== See also ==
== References ==
== Further reading ==
Fouché Gaines, Helen (1939), Cryptanalysis: A Study of Ciphers and Their Solution, New York: Dover Publications Inc, ISBN 978-0486200972
Kahn, David (1967), The Codebreakers - The Story of Secret Writing, ISBN 0-684-83130-9
Preneel, Bart (2000), "Advances in Cryptology – EUROCRYPT 2000", Springer Berlin Heidelberg, ISBN 978-3-540-67517-4
Sinkov, Abraham (1966), Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, ISBN 0-88385-622-0
Tenzer, Theo (2021), SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 978-3-755-76117-4
Lindell, Yehuda; Katz, Jonathan (2014), Introduction to modern cryptography, Hall/CRC, ISBN 978-1466570269
Ermoshina, Ksenia; Musiani, Francesca (2022), Concealing for Freedom: The Making of Encryption, Secure Messaging and Digital Liberties (Foreword by Laura DeNardis)(open access) (PDF), Manchester, UK: matteringpress.org, ISBN 978-1-912729-22-7, archived from the original (PDF) on 2022-06-02
== External links ==
The dictionary definition of encryption at Wiktionary
Media related to Cryptographic algorithms at Wikimedia Commons | Wikipedia/Encryption_algorithm |
SM9 is a Chinese national cryptography standard for Identity Based Cryptography issued by the Chinese State Cryptographic Authority in March 2016. It is represented by the Chinese National Cryptography Standard (Guomi), GM/T 0044-2016 SM9. The standard contains the following components:
(GM/T 0044.1) The Identity-Based Asymmetric Cryptography Algorithm
(GM/T 0044.2) The Identity-Based Digital Signature Algorithm which allows one entity to digitally sign a message which can be verified by another entity.
(GM/T 0044.3) The Identity-Based Key Establishment and Key Wrapping
(GM/T 0044.4) The Identity Based Public-Key Encryption Key Encapsulation Algorithm which allows one entity to securely send a symmetric key to another entity.
== Identity Based Cryptography ==
Identity Based Cryptography is a type of public key cryptography that uses a widely known representation of an entity's identity (name, email address, phone number etc.) as the entity's public key. This eliminates the need to have a separate public key bound by some mechanism (such as a digitally signed public key certificate) to the identity of an entity. In Identity Based Cryptography (IBC) the public key is often taken as the concatenation of an entity's Identity and a validity period for the public key.
In Identity Based Cryptography, one or more trusted agents use their private keys to compute an entity's private key from its public key (identity and validity period). The corresponding public keys of the trusted agent or agents are known to everyone using the network. If only one trusted agent is used, that trusted agent can compute all the private keys for users in the network. To avoid this single point of compromise, some researchers propose using multiple trusted agents in such a way that more than one of them would need to be compromised in order to compute an individual user's private key.
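To make the trust model concrete, the toy sketch below treats the public key as an identity string plus validity period and lets a single key generation centre derive every private key from one master secret; the HMAC-based derivation and all names are illustrative assumptions only and bear no relation to the pairing-based mathematics SM9 actually specifies.

```python
# Illustration of the identity-based trust model only: the "public key" is just
# the identity string plus a validity period, and a key generation centre (KGC)
# derives the matching private key from its master secret. Real SM9 uses
# bilinear pairings on elliptic curves, not HMAC; this is a conceptual toy.
import hmac
import hashlib

class ToyKGC:
    def __init__(self, master_secret: bytes):
        self._master_secret = master_secret   # compromise of this reveals every key

    def derive_private_key(self, identity: str, validity: str) -> bytes:
        public_key = f"{identity}|{validity}"   # e.g. "alice@example.com|2024"
        return hmac.new(self._master_secret, public_key.encode(), hashlib.sha256).digest()

kgc = ToyKGC(master_secret=b"\x01" * 32)
alice_priv = kgc.derive_private_key("alice@example.com", "2024")
# Anyone who knows Alice's identity already knows her public key; only the KGC
# (or a coalition of KGCs in a threshold scheme) can produce alice_priv.
```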
== Chinese Cryptographic Standards ==
The SM9 Standard adopted in 2016 is one of a number of Chinese national cryptography standards. Other publicly available Chinese cryptographic standards are:
SM2 - an Elliptic Curve Diffie-Hellman key agreement and signature using a specified 256-bit elliptic curve. GM/T 0003.1: SM2 (published in 2010)
SM3 - a 256-bit cryptographic hash function. GM/T 0004.1-2012: SM3 (published in 2010)
SM4 - a 128-bit block cipher with a 128-bit key. GM/T 0002-2012: SM4 (published in 2012)
ZUC - a stream cipher. GM/T 0001-2016.
The SM9 standard, along with these other standards, is issued by the Chinese State Cryptographic Authority. The first part of the standard, SM9-1, provides an overview of the standard.
== SM9 Identity Based Signature Algorithm ==
The Identity Based Signature Algorithm in SM9 traces its origins to an Identity Based Signature Algorithm published at Asiacrypt 2005 in the paper: "Efficient and Provably-Secure Identity-Based Signatures and Signcryption from Bilinear Maps" by Barreto, Libert, McCullagh, and Quisquater. It was standardized in IEEE 1363.3 and in ISO/IEC 14888-3:2015.
== SM9 Identity Based Key Encapsulation ==
The Identity Based Key Encapsulation Algorithm in SM9 traces its origins to a 2003 paper by Sakai and Kasahara titled "ID Based Cryptosystems with Pairing on Elliptic Curve." It was standardized in IEEE 1363.3, in ISO/IEC 18033-5:2015 and IETF RFC 6508.
== SM9 Identity Based Key Agreement ==
The Identity Based Key Agreement algorithm in SM9 traces its origins to a 2004 paper by McCullagh and Barreto titled "A New Two-Party Identity-Based Authenticated Key Agreement" [1]. The International Organization for Standardization incorporated this identity-based key exchange protocol into ISO/IEC 11770–3 in 2015.
== Implementations of SM9 ==
An open source implementation of the SM9 algorithms is part of the GMSSL package available on GitHub. The Shenzhen Aolian Information Security Technology Co (also known as Olym Tech) is also marketing a series of products that implement the SM9 algorithms.
== Further Information ==
The following links provide more detailed information on the SM9 algorithms in English:
The SM9 Cryptographic Schemes
Using Identity as Raw Public Key in Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)
== References == | Wikipedia/SM9_(cryptography_standard) |
Advanced Network and Services, Inc. (ANS) was a United States non-profit organization formed in September 1990 by the NSFNET partners (Merit Network, IBM, and MCI) to run the network infrastructure for the soon-to-be-upgraded NSFNET Backbone Service. ANS was incorporated in the State of New York and had offices in Armonk and Poughkeepsie, New York.
== History ==
=== ANSNet ===
In anticipation of the NSFNET Digital Signal 3 (T3) upgrade and the approaching end of the 5-year NSFNET cooperative agreement, in September 1990 Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with US National Science Foundation (NSF), Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. Both IBM and MCI made substantial new financial and other commitments to help support the new venture. Allan Weis left IBM to become ANS's first President and Managing Director. Douglas Van Houweling, former Chair of the Merit Network Board and Vice Provost for Information Technology at the University of Michigan, was the first Chairman of the ANS Board of Directors.
Completed in November 1991, the new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.
=== ANS CO+RE ===
In May 1991, a new ISP, ANS CO+RE (commercial plus research), was created as a for-profit subsidiary of the non-profit Advanced Network and Services. ANS CO+RE was created specifically to allow commercial traffic on ANSNet without jeopardizing its parent's non-profit status or violating any tax laws.
The NSFNET Backbone Service and ANS CO+RE both used and shared the common ANSNet infrastructure. NSF agreed to allow ANS CO+RE to carry commercial traffic subject to several conditions:
that the NSFNET Backbone Service was not diminished;
that ANS CO+RE recovered at least the average cost of the commercial traffic traversing the network; and
that any excess revenues recovered above the cost of carrying the commercial traffic would be placed into an infrastructure pool to be distributed by an allocation committee broadly representative of the networking community to enhance and extend national and regional networking infrastructure and support.
In 1992, ANS worked to address security concerns raised by potential customers in the wake of recent security incidents (e.g. the Morris worm) and opened an office in Northern Virginia for its security product team. The security team created one of the first Internet firewalls, called ANS InterLock. InterLock was arguably the first proxy-based Internet firewall product (other firewalls at the time were router-based ACLs or part of a service offering) and consisted of modifications to IBM's AIX (and later Sun's Solaris) operating system. InterLock's popularity during the early boom of the World Wide Web was responsible for the infamous proxy settings in the Mosaic web browser, which let users access the Internet transparently through its layer-7 inspection proxy for HTTP 1.0.
ANS and, in particular, ANS CO+RE were involved in controversies over whether, by whom, and how commercial traffic should be carried over networking infrastructure that had, until recently, been government-sponsored. These controversies are discussed in the "Commercial ISPs, ANS CO+RE, and the CIX" and "Controversy" sections of the NSFNET article.
=== Sale of networking business to AOL ===
In 1995, there was a transition to a new Internet architecture and the NSFNET Backbone Service was decommissioned. At this point, ANS sold its networking business to AOL for $35M. The networking business would become a new AOL subsidiary company known as ANS Communications, Inc. Although now two separate entities, both the for-profit and non-profit ANS organizations shared the same pre-sale history.
=== A new life as a philanthropic organization ===
With over $35M from the sale of its networking business, ANS became a philanthropic organization with a mission to "advance education by accelerating the use of computer network applications and technology". This work helped create ThinkQuest, the National Tele-Immersion Initiative, the IP Performance Metrics program, and provided grant funding for educational programs including TRIO Upward Bound, the Internet Society, Internet2, Computers for Youth, Year Up, National Foundation for Teaching Entrepreneurship, Sarasota TeXcellence Program, and many others.
One of their philanthropic ventures was to sponsor competitions in science and math, arts and literature, social sciences and even sports. They awarded over $1M in prizes in contests with the goal to use the Web for educational projects with widespread or popular applications.
=== ANS closes ===
ANS closed down its operations in mid-2015.
== See also ==
History of the Internet
Commercial Internet eXchange (CIX)
== References == | Wikipedia/Advanced_Network_and_Services |
Merit Network, Inc., is a nonprofit member-governed organization providing high-performance computer networking and related services to educational, government, health care, and nonprofit organizations, primarily in Michigan. Created in 1966, Merit operates the longest running regional computer network in the United States.
== Organization ==
Created in 1966 as the Michigan Educational Research Information Triad by Michigan State University (MSU), the University of Michigan (U-M), and Wayne State University (WSU), Merit was created to investigate resource sharing by connecting the mainframe computers at these three Michigan public research universities. Merit's initial three node packet-switched computer network was operational in October 1972 using custom hardware based on DEC PDP-11 minicomputers and software developed by the Merit staff and the staffs at the three universities.
Over the next dozen years the initial network grew as new services such as dial-in terminal support, remote job submission, remote printing, and file transfer were added; as gateways to the national and international Tymnet, Telenet, and Datapac networks were established; as support for the X.25 and TCP/IP protocols was added; as additional computers such as WSU's MVS system and the VAX running UNIX in U-M's electrical engineering department were attached; and as new universities became Merit members.
Merit's involvement in national networking activities started in the mid-1980s with connections to the national supercomputing centers and work on the 56 kbit/s National Science Foundation Network (NSFNET), the forerunner of today's Internet. From 1987 until April 1995, Merit re-engineered and managed the NSFNET backbone service.
MichNet, Merit's regional network in Michigan was attached to NSFNET and in the early 1990s Merit began extending "the Internet" throughout Michigan, offering both direct connect and dial-in services, and upgrading the statewide network from 56 kbit/s to 1.5 Mbit/s, and on to 45, 155, 622 Mbit/s, and eventually 1 and 10 Gbit/s. In 2003 Merit began its transition to a facilities based network, using fiber optic facilities that it shares with its members, that it purchases or leases under long-term agreements, or that it builds.
In addition to network connectivity services, Merit offers a number of related services within Michigan and beyond, including: Internet2 connectivity, VPN, Network monitoring, Voice over IP (VOIP), Cloud storage, E-mail, Domain Name, Network Time, VMware and Zimbra software licensing, Colocation, and professional development seminars, workshops, classes, conferences, and meetings.
== History ==
=== Creating the network: 1966 to 1973 ===
The Michigan Educational Research Information Triad (MERIT) was formed in the fall of 1966 by Michigan State University (MSU), University of Michigan (U-M), and Wayne State University (WSU). More often known as the Merit Computer Network or simply Merit, it was created to design and implement a computer network connecting the mainframe computers at the universities.
In the fall of 1969, after funding for the initial development of the network had been secured, Bertram Herzog was named director for MERIT. Eric Aupperle was hired as senior engineer, and was charged with finding hardware to make the network operational. The National Science Foundation (NSF) and the State of Michigan provided the initial funding for the network.
In June 1970, the Applied Dynamics Division of Reliance Electric in Saline, Michigan was contracted to build three Communication Computers or CCs. Each would consist of a Digital Equipment Corporation (DEC) PDP-11 computer, dataphone interfaces, and interfaces that would attach them directly to the mainframe computers. The cost was to be slightly less than the $300,000 ($2,429,000, adjusted for inflation) originally budgeted. Merit staff wrote the software that ran on the CCs, while staff at each of the universities wrote the mainframe software to interface to the CCs.
The first completed connection linked the IBM S/360-67 mainframe computers running the Michigan Terminal System at WSU and U-M, and was publicly demonstrated on December 14, 1971. The MSU node was completed in October 1972, adding a CDC 6500 mainframe running Scope/Hustler. The network was officially dedicated on May 15, 1973.
=== Expanding the network: 1974 to 1985 ===
In 1974, Herzog returned to teaching in the University of Michigan's Industrial Engineering Department, and Aupperle was appointed as director.
Use of the all uppercase name "MERIT" was abandoned in favor of the mixed case "Merit".
The first network connections were host to host interactive connections which allowed person to remote computer or local computer to remote computer interactions. To this, terminal to host connections, batch connections (remote job submission, remote printing, batch file transfer), and interactive file copy were added. And, in addition to connecting to host computers over custom hardware interfaces, the ability to connect to hosts or other networks over groups of asynchronous ports and via X.25 were added.
Merit interconnected with Telenet (later SprintNet) in 1976 to give Merit users dial-in access from locations around the United States. Dial-in access within the U.S. and internationally was further expanded via Merit's interconnections to Tymnet, ADP's Autonet, and later still the IBM Global Network as well as Merit's own expanding network of dial-in sites in Michigan, New York City, and Washington, D.C.
In 1978, Western Michigan University (WMU) became the fourth member of Merit (prompting a name change, as the acronym Merit no longer made sense as the group was no longer a triad).
To expand the network, the Merit staff developed new hardware interfaces for the Digital PDP-11 based on printed circuit technology. The new system became known as the Primary Communications Processor (PCP), with the earliest PCPs connecting a PDP-10 located at WMU and a DEC VAX running UNIX at U-M's Electrical Engineering department.
A second hardware technology initiative in 1983 produced the smaller Secondary Communication Processors (SCP) based on DEC LSI-11 processors. The first SCP was installed at the Michigan Union in Ann Arbor, creating UMnet, which extended Merit's network connectivity deeply into the U-M campus.
In 1983 Merit's PCP and SCP software was enhanced to support TCP/IP and Merit interconnected with the ARPANET.
=== National networking, NSFNET, and the Internet: 1986 to 1995 ===
In 1986 Merit engineered and operated leased lines and satellite links that allowed the University of Michigan to access the supercomputing facilities at Pittsburgh, San Diego, and NCAR.
In 1987, Merit, IBM and MCI submitted a winning proposal to NSF to implement a new NSFNET backbone network. The new NSFNET backbone network service began July 1, 1988. It interconnected supercomputing centers around the country at 1.5 megabits per second (T1), 24 times faster than the 56 kilobits-per-second speed of the previous network. The NSFNET backbone grew to link scientists and educators on university campuses nationwide and connect them to their counterparts around the world.
The NSFNET project caused substantial growth at Merit, nearly tripling the staff and leading to the establishment of a new 24-hour Network Operations Center at the U-M Computer Center.
In September 1990 in anticipation of the NSFNET T3 upgrade and the approaching end of the 5-year NSFNET cooperative agreement, Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with NSF, Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS.
In 1991 the NSFNET backbone service was expanded to additional sites and upgraded to a more robust 45 Mbit/s (T3) based network. The new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.
On April 30, 1995, the NSFNET project came to an end, when the NSFNET backbone service was decommissioned and replaced by a new Internet architecture with commercial ISPs interconnected at Network Access Points provided by multiple providers across the country.
=== Bringing the Internet to Michigan: 1985 to 2001 ===
During the 1980s, Merit Network grew to serve eight member universities, with Oakland University joining in 1985 and Central Michigan University, Eastern Michigan University, and Michigan Technological University joining in 1987.
In 1990, Merit's board of directors formally changed the organization's name to Merit Network, Inc., and created the name MichNet to refer to Merit's statewide network. The board also approved a staff proposal to allow organizations other than publicly supported universities, referred to as affiliates, to be served by MichNet without prior board approval.
1992 saw major upgrades of the MichNet backbone to use Cisco routers in addition to the PDP-11 and LSI-11 based PCPs and SCPs. This was also the start of relentless upgrades to higher and higher speeds, first from 56 kbit/s to T1 (1.5 Mbit/s) followed by multiple T1s (3.0 to 10.5 Mbit/s), T3 (45 Mbit/s), OC3c (155 Mbit/s), OC12c (622 Mbit/s), and eventually one and ten gigabits (1000 to 10,000 Mbit/s).
In 1993 Merit's first Network Access Server (NAS) using RADIUS (Remote Authentication Dial-In User Service) was deployed. The RADIUS server was developed by Livingston Enterprises. The NASs supported dial-in access separate from the Merit PCPs and SCPs.
In 1993 Merit started what would become an eight-year phase out of its aging PCP and SCP technology. By 1998 the only PCPs still in service were supporting Wayne State University's MTS mainframe host. During their remarkably long twenty-year life cycle the number of PCPs and SCPs in service reached a high of roughly 290 in 1991, supporting a total of about 13,000 asynchronous ports and numerous LAN and WAN gateways.
In 1994 the Merit Board endorsed a plan to expand the MichNet shared dial-in service, leading to a rapid expansion of the Internet dial-in service over the next several years. In 1994 there were 38 shared dial-in sites. By 1996 there were 131 shared dial-in sites and more than 92% of Michigan residents could reach the Internet with a local phone call. And by the end of 2001 there were 10,733 MichNet shared dial-in lines in over 200 Michigan cities plus New York City, Washington, D.C., and Windsor, Ontario, Canada. As an outgrowth of this work, in 1997, Merit created the Authentication, Authorization, and Accounting (AAA) Consortium.
During 1994 an expanded K-12 outreach program at Merit helped lead to the formation of six regional K-12 groups known as Hubs. The Hubs and Merit applied for and were awarded funding from the Ratepayer Fund, which had been established by the Michigan Public Service Commission, as part of a settlement of an earlier Ameritech of Michigan ratepayer overcharge, to further the K-12 community's network connectivity.
During the 1990s, Merit added Grand Valley State University (1994), Northern Michigan University (1994), Lake Superior State University (1997), and Ferris State University (1998) as members. By 1999, Merit had 163 affiliate members, with 401 attachments from 353 separate locations.
Merit was involved in a number of projects in cooperation with organizations throughout Michigan, including:
Project Connect, a 1992 cooperative effort among Merit, Novell, and GTE, that equipped five southeastern Michigan schools with Novell Local Area Networks with connections to MichNet;
GoMLink, an early virtual library reference service operated by the University of Michigan;
the Michigan Electronic Library (MEL), a networked virtual library service of the Library of Michigan and the University of Michigan;
the Michigan Library Association's "Action Plan for Michigan Libraries"; Internet dial-in access for libraries sponsored by the Library of Michigan;
development of the "Michigan Information Network (MIN) Plan";
in cooperation with MiCTA, providing assistance to the K-12, library, and rural healthcare communities in understanding the federal Universal Service Fund (USF) E-Rate program; and
the Society of Manufacturing Engineers CoNDUIT project, funded by the United States Department of Defense to train staff of small manufacturing businesses in the use of modern technology.
=== Transition to the commercial Internet, Internet2 and the vBNS: 1994 to 2005 ===
In 1994, as the NSFNET project was drawing to a close, Merit organized the meetings for the North American Network Operators' Group (NANOG). NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name.
Also starting in 1994, Merit developed the Routing Assets Database (RADb) as part of the NSF-funded Routing Arbiter Project.
MichNet obtained its initial commodity Internet access, a T3 (45 Mbit/s), from the commercial ISP, internetMCI.
In 1996 Merit became an affiliate member of Internet2, in 1997 established its first connection to the NSF very high-speed Backbone Network Service (vBNS), and in February 1999 began serving as Michigan's GigaPOP for Internet2 service.
Following the NSFNET project, Merit led a number of activities with a national or international scope, including:
the GateD Consortium (1995);
the 1997 NSF funded Multi-threaded Routing Toolkit project;
the 1997 NSF funded Internet Performance Measurement and Analysis (IPMA) project, a joint project with U-M's Electrical Engineering and Computer Science;
the 1996 NETSCARF network statistics collection and analysis project, funded by the ANS Resource Allocation Committee; and
the 1999 DARPA funded Lighthouse project focusing on large-scale network attack recognition, remediation and survivable network infrastructure led by the University of Michigan College of Engineering.
In 2000, Merit spun off two for-profit companies: NextHop Technologies, which developed and marketed GateD routing software, and Interlink Networks, which specialized in authentication, authorization, and accounting (AAA) software.
Eric Aupperle retired as president in 2001, after 27 years at Merit. He was appointed President Emeritus by the Merit board. Hunt Williams became Merit's new president.
=== Creating a facilities based network, adding new services: 2003 to the present ===
In 2004 Michael R. McPherson was named Merit's interim president and CEO.
In January 2005 Merit and Internet2 moved into the new Michigan Information Technology Center (MITC) in Ann Arbor.
In 2006, Dr. Donald J. Welch was named president and CEO of Merit Network, Inc.
In December 2006 Merit and OSTN partnered to provide IPTV to Michigan institutions. OSTN is a global television network devoted to student-produced programming.
In July 2007, Merit decommissioned its dial-up services.
During the 1970s, 1980s, and 1990s Merit operated what is known as a "value-added network" where individual data circuits were leased on a relatively short-term basis (one to three or sometimes five years) from traditional telecommunications providers such as Ameritech, GTE, Sprint, and MCI and assembled into a larger network by adding routers and other equipment. This worked well for many years, but as data rates continued to increase from kilobits, to megabits, to gigabits the cost of leasing the higher speed data circuits became significant. As a result, the alternative of building its network using "dark fiber" that Merit owned or leased on a relatively longer-term basis (10, 20, or more years) under what are known as "Indefeasible Rights of Use" (IRU) as well as using or sharing fiber that is owned by its members became attractive.
Merit's statewide fiber-optic network strategy began to take shape when:
in 2003 a fiber ring was deployed in Lansing;
in 2003 Michigan State University, the University of Michigan, and Wayne State University launched the Michigan LambdaRail Network (MiLR) project to link the campuses to each other and to Chicago using privately owned fiber, with Merit to operate MiLR on behalf of the three universities and using some of the MiLR fiber for its own network;
in 2004 fiber rings were added in Grand Rapids and Chicago;
in August 2005 Merit was utilizing dark fiber from Michigan Lambda Rail (MiLR) between Detroit and Chicago to support the southern portion of its network backbone;
in July 2006 Merit began to use optical fiber that had been installed by a consortium of government and community organizations in the Alpena area;
in February 2006 Merit and the Ontario Research and Innovation Optical Network (ORION) were linked using fiber optic cable across the US-Canada border through the Detroit–Windsor Tunnel, later in September 2008, a wireless connection across the Soo Locks between Sault Ste. Marie, Michigan and Sault Ste. Marie, Ontario provided a second link between Merit and ORION;
in September 2007 Merit created the first high-speed network connection between Michigan's two peninsulas with fiber optic cable across the Mackinac Bridge;
in November 2007 Merit completed Phase I of its fiber network expansion into the Upper Peninsula of Michigan, connecting Lake Superior State University (LSSU), Michigan Technological University (MTU), and Northern Michigan University (NMU) via fiber-optic cable at gigabit Ethernet speeds;
in May 2008 Merit completed a new fiber optic link from Southfield to Toledo, providing a 10 Gbit/s link to OSCnet, Ohio's regional research and education network, and a second path between Merit and the Internet2 network;
in March 2009 a partnership between the City of Hillsdale, Hillsdale College, Hillsdale County Intermediate School District (ISD), and Merit, completed a fiber-optic ring to improve connectivity in the city and reduce network costs for the Hillsdale-area organizations; and
in December 2009 Merit began to use a new fiber optic link between Mt. Pleasant and Big Rapids. This completed the 500-mile (800 km) "Blue-Line" fiber optic network that links 16 cities in the lower half of Michigan's lower peninsula (Kalamazoo, Grand Rapids, Muskegon, Big Rapids, Mt. Pleasant, Midland, Saginaw, Flint, Pontiac, Rochester, Southfield, Ypsilanti, Ann Arbor, Jackson, East Lansing, and Battle Creek).
In July 2008, Merit began upgrading its core backbone network to 10 gigabits and installing five new Juniper MX480 routers. This upgrade was completed in May 2009 with seven backbone nodes in Grand Rapids, East Lansing, Detroit, Ann Arbor, Kalamazoo, and Chicago (2) all operating at 10 Gbit/s. Also during May 2009 Merit replaced its four 1 Gbit/s links to the commodity Internet with two 10 Gbit/s links over diverse paths to two different Tier 1 providers. And in October 2009 the links from Ann Arbor to Jackson and from Jackson and East Lansing were upgraded to 10 Gbit/s.
In January 2010, Merit and its partners, ACD.net; Lynx Network Group, LLC; and TC3Net; learned that their REACH-3MC (Rural, Education, Anchor, Community and Healthcare – Michigan Middle Mile Collaborative) proposal had been awarded ~$33.3M in grants and loans from the Broadband Technology Opportunities Program (BTOP), part of the federal stimulus package. REACH-3MC will build a 1,017-mile (1,637 km) optical fiber extension into rural and underserved communities in 32 counties in Michigan's lower peninsula.
In August 2010, Merit and its REACH-3MC partners were selected to receive US$69.6M in a second round of federal stimulus funding to build an additional 1,270 miles (2,040 km) of optical fiber in the northern lower peninsula and upper peninsula of Michigan and extending into Wisconsin.
At NANOG's 50th meeting in Atlanta in October 2010, members of the NANOG community supported a charter amendment to transition the hosting of NANOG following the February 2011 NANOG meeting to NewNOG, a newly formed non-profit.
On February 16, 2012, Merit's president and CEO, Donald Welch was honored as an Innovator in Infrastructure and "Champion of Change" during a ceremony that took place at the White House.
In August 2012, Merit announced that the first site of the Michigan Cyber Range would be installed at Eastern Michigan University. Merit hosts and operates the Michigan Cyber Range, a cybersecurity learning environment that, like a test track or a firing range, enables individuals and organizations to conduct "live fire" exercises, simulations that test the detection and reaction skills of participants in a variety of situations. Merit is partnering with the State of Michigan, Eastern Michigan University, Ferris State University, and others to provide this invaluable learning environment, which trains students and IT professionals to be better prepared for cyberattacks and how to react to Internet security situations.
In January 2013, the Michigan Cyber Range began a collaboration arrangement with Mile2, a developer and provider of vendor neutral professional certifications for the cyber security industry. Mile2 provides course materials, instructors and certification exams to the Michigan Cyber Range. Mile2 is recognized by the National Security Agency (NSA) as an Information Assurance (IA) Courseware Institution. Mile2 is NSA CNSS-accredited as well as NIST and NICCS mapped.
On April 8, 2013, Merit announced that round 1 of REACH-3MC construction was complete with fiber-optic cable along the 1,017-mile (1,637 km) network extension through rural and under served areas in Michigan's Lower Peninsula, including all 55 fiber-optic lateral connections to Merit Members from the middle-mile infrastructure. Portions of the fiber-optic network extension had been in use prior to the completion of round 1.
In May 2013, Merit hosted its 15th annual Merit Member Conference and its first annual Michigan Cybersecurity Industry Summit in Ann Arbor.
In June 2013, Merit was honored as both a 2013 Computerworld Honors Laureate and a 21st Century Achievement Award winner for its REACH-3MC fiber-optic network project. Merit Network CEO and President Don Welch was honored at a gala celebration in Washington, D.C.
During the summer of 2013, Merit's Michigan Cyber Range debuted its cybersecurity training environment, Alphaville. The platform was used for training exercises, including a red team-blue team event conducted with the West Michigan Cyber Security Consortium (WMCSC).
In September 2013, Merit launched Merit Secure Sandbox, a secure environment that can be used by organizations for educational purposes, cybersecurity exercises, and software testing. In September, the Michigan Cyber Range also added a SCADA component to Alphaville.
In July 2014, Merit Network and WiscNet lit a new fiber-optic connection between Powers, Michigan; Marinette and Green Bay, Wisconsin; and Chicago, Illinois. The new 10 gigabit-per-second (Gbps) fiber-optic connection replaced two 1 Gbit/s circuits, providing greater capacity and speed between the Upper Peninsula and Chicago.
In October 2014, Merit completed the REACH-3MC fiber-optic infrastructure project, which built fiber-optic infrastructure across Michigan and in parts of Minnesota and Wisconsin. Merit connected 141 community anchor institutions, which includes schools, libraries, health care, government, and public safety. 70 additional organizations were also connected to the network by constructing last-mile fiber to the network. Each connection was a minimum of 1 gigabit-per-second (Gbps), providing broadband speeds to previously unserved or underserved parts of Michigan. Merit completed 2,287 miles of fiber-optic infrastructure, which is the equivalent of travelling from Ann Arbor to Orlando, Florida.
On April 30, 2015, Dr. Eric Aupperle died. Dr. Aupperle joined Merit Computer Network in 1969 as project leader. He was appointed director of Merit in 1974, became president in 1988, and retired in 2001.
In August 2015, Joseph Sawasky, the chief information officer and associate vice president of computing and information technology at Wayne State University, was selected as the president and CEO of Merit Network.
In October 2015, Merit selected Jason Brown as the organization's first chief information security officer (CISO). The position was created as part of an ongoing mission to strengthen Merit Network's infrastructure, data and Member institutions from potential cyberattack.
In March 2016, the organization launched the Merit Commons, a social collaboration environment for its Member community. The secure, social portal enables Members to communicate and collaborate in real time with organic message streams, much like Facebook or Twitter.
At the annual Merit Member Conference in May 2016, Merit celebrated its 50th anniversary with a gala that included dignitaries, former staff, employees and Merit supporters. During a panel discussion, Doug Van Houweling from the University of Michigan and Steve Wolff from Internet2 provided a glimpse into the early days of Merit, the complex NSFNET project and how the technology and network protocols created by Merit's engineers influenced the internet. David Behen, chief information officer (CIO) for the State of Michigan, presented an honor from Governor Rick Snyder to Joe Sawasky on behalf of Merit Network, recognizing the organization's historic achievements.
During 2016, Merit added new publicly accessible hubs of the Michigan Cyber Range in Southeast Michigan. Cyber Range Hubs opened inside the Velocity Center at Macomb-Oakland University in Sterling Heights on March 18 and at Pinckney Community High School on December 7. Each location provides certification courses, cybersecurity training exercises and product hardening/testing through a direct connection to the Michigan Cyber Range.
In 2016, Merit began one of its largest projects: managing the implementation of the Michigan Statewide Education Network (MISEN), a private transport-based network. MISEN connects 55 of Michigan's 56 Intermediate School Districts (ISDs) via high-capacity fiber infrastructure. The project was completed on June 30, 2017, and the result was a 10 Gb connection to each ISD as well as a 100 Gb resilient core. Merit continues to manage MISEN, which gives Michigan ISDs the ability to leverage the multi-gigabit infrastructure for services like Internet access, student information systems, and other critical services, putting Michigan's schools at the forefront of technology and innovation. Throughout 2017, Merit continued shifting its strategy to focus on network, security and community, and it is now considered one of the national leaders in cybersecurity.
In 2019, Merit launched the Michigan Moonshot, an approach to impact the digital divide statewide.
In 2019, as part of the Michigan Moonshot, Merit partnered with national broadband organizations (including the Michigan Broadband Cooperative, Next Century Cities, and the Institute for Local Self Reliance) to create the Michigan Moonshot Broadband Framework. This crowdsourced document will serve as a community network primer and the basis for planning a community roadmap. Contained within, a reader will find overviews on policy and technology, community success stories, links to myriad resources and planning tools from national broadband leaders and a phased plan for building a regional network. While much of this information exists in locations across the web, this unique curation was carefully designed by leading experts to serve as a comprehensive playbook for communities that are committed to improving broadband access for their citizens.
In May 2019, Merit Network, in partnership with Michigan State University's Quello Center and the D.C.-based Measurement Lab, launched a pilot for the Michigan Moonshot broadband data collection project. Three school districts, representing more than 6,000 students, were chosen. The data for this project consisted of three databases linked by a unique de-identified participant ID; including a paper survey completed by all students age 13 and older, student records (i.e., M-STEP scores) that were de-identified and results of an Internet speed test that students completed on a website using any device they used to complete homework.
Armed with an accurate picture of Michigan's connectivity, blockers to broadband network deployment in rural communities could be reduced through a combination of techniques. Pilot project findings are expected to be released in late fall, 2019.
On May 30, 2019, Merit hosted the Mackinac Policy Conference Session as part of the Michigan Moonshot initiative. President and CEO of Merit Network Joe Sawasky moderated a panel that featured state, regional, and national thought leaders, including: Dr. Johannes Bauer, Quello chair for media and information policy and chairman of the department of media and information at Michigan State University, Lt. Governor Garlin Gilchrist II, and Marc Hudson, founder and CEO of Rocket Fiber.
In October 2019, Merit's president and CEO, Joe Sawasky, joined Former FCC Commissioner, Mignon Clyburn, Jonathan Sallet, senior fellow at the Benton Institute, Larra Clark, deputy director at the American Library Association Public Policy and Luis Wong, CEO of the California K-12 High Speed Network for a panel discussion, Broadband for All in the 2020s at the 2019 SHLB Coalition’s Anchor NETS conference.
In 2019, Jonathan Sallet, Senior Fellow for the Benton Institute for Broadband & Society, published Broadband for America’s Future: A Vision for the 2020s. The purpose of this document is to collect, combine, and contribute to a national broadband agenda for the next decade. As the most transformative technology of our generation, broadband delivers new opportunities and strengthens communities. The Benton Institute upholds a commitment to changing lives and advancing society through high-performance broadband connection, which will bring remarkable economic, social, cultural, and personal benefits.
In 2019, Merit's Chief Information Security Officer role grew into an executive position, overseeing the Michigan Cyber Range and Merit's security division. Kevin Hayes has served as Merit's CISO since 2018.
In 2019, Merit Network partnered with MISEN (Michigan Statewide Educational Network) and MAISA (Michigan Association of Intermediate School Administrators) to develop Essential Cybersecurity Practices for K12. This guide translates the CIS Top 20 Security Controls into achievable actions that school IT staff can accomplish.
On October 19, 2019, Merit Network relocated from 1000 Oakbrook Drive in Ann Arbor, MI to 880 Technology Drive, Suite B, Ann Arbor, MI 48108. The 880 building provides a collaborative space with increased community access, including additional space available for rent by outside organizations.
On October 28, 2019, the Michigan National Guard and the Michigan Cyber Range hosted an International Cyber Exercise as part of the state's North American International Cyber Summit. Eleven teams from five countries and six states competed in an all-out, fast-paced cyber exercise that resembles the physical game of paintball.
On October 29, 2019, Merit hosted the 4th Annual Governor’s High School Cyber Challenge capstone event. More than 600 students from throughout Michigan participated in the event. Okemos High School won the competition.
== Merit today ==
Today, in addition to network connectivity, Merit offers a range of related services, including those described above.
== References ==
== External links ==
Merit Network, Inc., web site | Wikipedia/Merit_Network |
RIPE NCC (Réseaux IP Européens Network Coordination Centre) is the regional Internet registry (RIR) for Europe, the Middle East, and parts of Central Asia. Its headquarters are in Amsterdam, the Netherlands, with a branch office in Dubai, UAE.
A RIR oversees the allocation and registration of Internet number resources (IPv4 addresses, IPv6 addresses and autonomous system numbers) in a specific region.
The RIPE NCC supports the technical and administrative coordination of the infrastructure of the Internet. It is a not-for-profit membership organisation with almost 20,000 members at the end of 2024. It has members located in over 120 countries in its service region and beyond.
Any individual or organisation can become a member of the RIPE NCC. The membership consists mainly of Internet service providers (ISPs), telecommunication organisations, educational institutions, governments, regulatory agencies, and large corporations.
The RIPE NCC also provides technical and administrative support to Réseaux IP Européens (RIPE), a forum open to all parties with an interest in the technical development of the Internet.
== History ==
The RIPE NCC began its operations in April 1992 in Amsterdam, Netherlands. Initial funding was provided by the academic networks Réseaux Associés pour la Recherche Européenne (RARE) members, EARN, and EUnet. The RIPE NCC was formally established when the Dutch version of the Articles of Association was deposited with the Amsterdam Chamber of Commerce on 12 November 1997. The first RIPE NCC Activity Plan was published in May 1991.
On 25 November 2019, RIPE NCC announced that it had made its “final /22 IPv4 allocation from the last remaining addresses in our available pool. We have now run out of IPv4 addresses.” RIPE NCC will continue to allocate IPv4 addresses, but only “from organisations that have gone out of business or are closed, or from networks that return addresses they no longer need. These addresses will be allocated to our members (LIRs) according to their position on a new waiting list ….” The announcement also called for support for the implementation of the IPv6 roll-out.
== Activities ==
The RIPE NCC supports technical coordination of the Internet infrastructure in its service region and beyond. It undertakes many activities in this area, including:
Allocation and registration of Internet number resources (IP addresses and autonomous system numbers)
The allocation of IP addresses is important for several reasons. Public addresses need to be unique; if duplicate internet addresses existed on a network, network traffic could be delivered to the wrong host. The RIRs make sure that public addresses are allocated to only one organisation; the RIPE NCC does this for its own service region. Worldwide, IANA assigns blocks of addresses to the RIRs and they distribute these to end users via the LIRs (normally ISPs). Besides making sure that IP addresses and AS Numbers are only allocated to one holder, the shortage of IPv4 addresses makes it important that the remaining addresses are allocated in an organised manner. For many years, the RIPE NCC has followed strict guidelines on how to assign IPv4 addresses according to policy developed by the RIPE Community, as outlined in the RIPE Document ripe-498. Since the last /8 block was assigned by IANA to the RIRs, the RIPE NCC has had only a limited pool of new IPv4 addresses available for allocation.
Development, operation and maintenance of the RIPE Database
Development, operation and maintenance of the RIPE Routing Registry
Operation of K-root, one of the world's root name servers
Coordination support for ENUM delegations
Collection and publication of neutral statistics on Internet development and performance, notably via the RIPE Atlas global measurement network and RIPEstat, a web-based interface providing information about IP address space, autonomous system numbers, and related information for hostnames and countries.
=== The RIPE Database ===
The RIPE Database is a public database containing registration details of the IP addresses and AS numbers originally allocated to members by the RIPE NCC. It shows which organisations or individuals currently hold which Internet number resources, when the allocations were made and contact details. The organisations or individuals that hold these resources are responsible for updating information in the database.
As of March 2008, the database contents are available for near real-time mirroring (NRTM).
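As a small example of how this public data can be retrieved, the sketch below queries the RIPE Database over the standard WHOIS protocol (TCP port 43, RFC 3912); whois.ripe.net is the RIPE NCC's public whois service, while the address queried is just a placeholder resource.

```python
# Minimal WHOIS client (RFC 3912): open TCP port 43, send the query plus CRLF,
# and read until the server closes the connection. whois.ripe.net serves the
# public RIPE Database; the IP address below is only an example query.
import socket

def ripe_whois(query: str, server: str = "whois.ripe.net", port: int = 43) -> str:
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(ripe_whois("193.0.6.139"))   # prints inetnum, netname, contact objects, etc.
```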
=== RIPE Routing Registry ===
The RIPE Routing Registry (RR) is a sub-set of the RIPE Database and holds routing information in RPSL. The RIPE RR is a part of the Internet RR, a collection of databases that mirror each other. Information about domain names in the RIPE Database is for reference only. It is not the domain name registry that is run by the country code Top Level Domain (ccTLD) administrators of Europe and surrounding areas.
== The RIPE NCC and RIPE ==
Réseaux IP Européens is a forum open to all parties with an interest in the technical development of the Internet. Although similar in name, RIPE and the RIPE NCC are separate entities. However, they are highly interdependent. The RIPE NCC provides administrative support to RIPE, such as the facilitation of RIPE Meetings and giving administrative support to RIPE Working Groups.
== Fees and IPv4 Transfer Market ==
The RIPE NCC charges members an annual membership fee. Since 2012 this fee has been equal for all members and is unrelated to resource holdings. A separate charge is made for each Provider Independent number resource associated with customers of members.
There is also an active market in IPv4 address transfers; such transfers concern the registrations in the RIRs' databases rather than the addresses themselves. The RIPE NCC has a formal transfer process. Members must pay their annual fees before they can transfer resources away.
== Service regions ==
The RIPE NCC service region consists of countries in Europe, the Middle East and parts of Central Asia. RIPE NCC services are available to users outside this region through Local Internet Registries; these entities must have a valid legal address inside the service region but can offer their services to anyone.
Asia
Southwest Asia
Central Asia
Europe
North America
Greenland (Denmark)
=== Former service regions ===
Prior to the formation of AFRINIC, the RIPE NCC also served a number of countries that are now part of the AFRINIC service region.
== Related organisations and events ==
Internet Corporation for Assigned Names and Numbers (ICANN)
ICANN assigns blocks of Internet resources (IP Resources and AS Numbers) to the RIPE NCC and the other RIRs.
Number Resource Organization (NRO)
The NRO is made up of the five RIRs: AfriNIC, APNIC, ARIN, LACNIC and the RIPE NCC. It carries out the joint activities of the RIRs including joint technical projects, liaison activities and policy coordination.
Address Supporting Organization (ASO)
The NRO also performs the function of the ASO, one of the supporting organisations called for by the ICANN bylaws. The ASO reviews and develops recommendations on Internet Policy relating to the system of IP addressing and advises the ICANN Board on these matters.
World Summit on the Information Society (WSIS)
As part of the NRO, the RIPE NCC was actively involved in the WSIS.
Internet Governance Forum (IGF)
As part of the NRO, the RIPE NCC is actively involved in the IGF.
== References ==
== External links ==
Official website | Wikipedia/RIPE_Network_Coordination_Centre |
The Internet Research Task Force (IRTF) is an organization, overseen by the Internet Architecture Board, that focuses on longer-term research issues related to the Internet. A parallel organization, the Internet Engineering Task Force (IETF), focuses on the shorter term issues of engineering and standards making.
The IRTF promotes research of importance to the evolution of the Internet by creating focused, long-term research groups working on topics related to Internet protocols, applications, architecture and technology. Unlike the IETF, the task force does not set standards and there is no explicit outcome expected of IRTF research groups.
== Organization ==
The IRTF is composed of a number of focused and long-term research groups. These groups work on topics related to Internet protocols, applications, architecture and technology. Research groups have the stable long-term membership needed to promote the development of research collaboration and teamwork in exploring research issues. Participation is by individual contributors, rather than by representatives of organizations. The list of current groups can be found on the IRTF's homepage.
== Operations ==
The IRTF is managed by the IRTF chair in consultation with the Internet Research Steering Group (IRSG). The IRSG membership includes the IRTF chair, the chairs of the various Research Groups and other individuals (members at large) from the research community selected by the IRTF chair. The chair of the IRTF is appointed by the Internet Architecture Board (IAB) for a two-year term.
These individuals have chaired the IRTF:
David D. Clark, 1989–1992
Jon Postel, 1992–1995
Abel Weinrib, 1995–1999
Erik Huizer, 1999–2001
Vern Paxson, 2001–2005
Aaron Falk, 2005–2011
Lars Eggert, 2011–2017
Allison Mankin, 2017–2019
Colin Perkins, 2019–Present
The IRTF chair is responsible for ensuring that research groups produce coherent, coordinated, architecturally consistent and timely output as a contribution to the overall evolution of the Internet architecture. In addition to the detailed tasks related to research groups outlined below, the IRTF chair may also from time to time arrange for topical workshops attended by the IRSG and perhaps other experts in the field.
The RFC Editor publishes documents from the IRTF and its research groups on the IRTF stream.
=== IRSG ===
The IRTF is managed by the IRTF chair in consultation with the Internet Research Steering Group (IRSG). The IRSG membership includes the IRTF chair, the chairs of the various IRTF research groups and other individuals (members at large) from the research or IETF communities. IRSG members at large are chosen by the IRTF chair in consultation with the rest of the IRSG and on approval by the Internet Architecture Board.
In addition to managing the research groups, the IRSG may from time to time hold topical workshops focusing on research areas of importance to the evolution of the Internet, or more general workshops to, for example, discuss research priorities from an Internet perspective.
The IRSG also reviews and approves documents published as part of the IRTF document stream.
== See also ==
Internet studies
IRTF (disambiguation)
== References ==
== External links ==
Homepage of the Internet Research Task Force | Wikipedia/Internet_Research_Task_Force |
The Pirate Bay, commonly abbreviated as TPB, is a free searchable online index of movies, music, video games, pornography and software. Founded in 2003 by the Swedish think tank Piratbyrån, The Pirate Bay facilitates connections among users of the peer-to-peer BitTorrent protocol, who are able to contribute to the site through the addition of magnet links. The Pirate Bay has consistently ranked as one of the most visited torrent websites in the world.
Over the years the website has faced several server raids, shutdowns and domain seizures, switching to a series of new web addresses to continue operating. In multiple countries, Internet service providers (ISPs) have been ordered to block access to it. Subsequently, proxy websites have emerged to circumvent the blocks.
In April 2009, the website's founders Fredrik Neij, Peter Sunde and Gottfrid Svartholm were found guilty in the Pirate Bay trial in Sweden for assisting in copyright infringement and were sentenced to serve one year in prison and pay a fine. They were all released by 2015 after serving shortened sentences.
The Pirate Bay has sparked controversies and discussion about legal aspects of file sharing, copyright, and civil liberties and has become a platform for political initiatives against established intellectual property laws as well as a central figure in an anti-copyright movement.
== History ==
The Pirate Bay was established on 15 September 2003 by the Swedish anti-copyright organisation Piratbyrån (lit. 'The Piracy Bureau'); it has been run as a separate organisation since October 2004. The Pirate Bay was first run by Fredrik Neij and Gottfrid Svartholm with Peter Sunde as the spokesperson; the founders are known by their nicknames "TiAMO", "anakata" and "brokep", respectively. They have both been accused of "assisting in making copyrighted content available" by the Motion Picture Association of America. On 31 May 2006, the website's servers in Stockholm were raided and seized by Swedish police, leading to three days of downtime. The Pirate Bay claims to be a non-profit entity based in Seychelles; however, this is disputed.
The Pirate Bay has been involved in a number of lawsuits, both as plaintiff and as defendant. On 17 April 2009 the founders and Carl Lundström were found guilty of assistance to copyright infringement and sentenced to one year in prison and payment of a fine of 30 million Swedish kronor (approximately US$4.2 million, £2.8 million sterling, or €3.1 million), after a trial of nine days. The defendants appealed the verdict and accused the judge of giving in to political pressure. On 26 November 2010, a Swedish appeals court upheld the verdict, decreasing the original prison terms but increasing the fine to 46 million kronor. On 17 May 2010, because of an injunction against their bandwidth provider, the site was taken offline. Access to the website was later restored with a message making fun of the injunction on their front page. On 23 June 2010, the group Piratbyrån disbanded due to the death of Ibi Kopimi Botani, a prominent member and co-founder of the group.
The Pirate Bay was hosted for several years by PRQ, a Sweden-based company owned by Neij and Svartholm. PRQ is said to provide "highly secure, no-questions-asked hosting services to its customers". From May 2011, Serious Tubes Networks started providing network connectivity to The Pirate Bay. In May 2012, as part of Google's newly inaugurated "Transparency Report", the company reported over 6,000 formal requests to remove Pirate Bay links from the Google Search index; those requests covered over 80,500 URLs, and the five copyright holders with the most requests were Froytal Services LLC, Bang Bros, Takedown Piracy LLC, Amateur Teen Kingdom, and the International Federation of the Phonographic Industry (IFPI). On 10 August 2013, The Pirate Bay announced the release of PirateBrowser, a free web browser used to circumvent internet censorship. The site was the most visited torrent directory on the World Wide Web from 2003 until November 2014, when KickassTorrents had more visitors according to Alexa. On 8 December 2014, Google removed from its app store most of the Google Play apps that had "The Pirate Bay" in the title.
On 9 December 2014, The Pirate Bay was raided by the Swedish police, who seized servers, computers, and other equipment. Several other torrent related sites including EZTV, Zoink, Torrage and the Istole tracker were also shut down in addition to The Pirate Bay's forum Suprbay.org. On the second day after the raid EZTV was reported to be showing "signs of life" with uploads to ExtraTorrent and KickassTorrents and supporting proxy sites like eztv-proxy.net via the main website's backend IP addresses. Several copies of The Pirate Bay went online during the next several days, most notably oldpiratebay.org, created by isoHunt.
On 19 May 2015, the .se domain of The Pirate Bay was ordered to be seized following a ruling by a Swedish court. The site reacted by adding six new domains in its place. The judgment was appealed on 26 May 2015. On 12 May 2016, the appeal was dismissed and the Court ruled the domains be turned over to the Swedish state. The site returned to using its original .org domain in May 2016. In August 2016, the US government shut down Kickass Torrents, which resulted in The Pirate Bay becoming once again the most visited BitTorrent website. As of 2025, The Pirate Bay is still on the top 10 of the most visited torrent sites of the year.
== Website ==
=== Content ===
The Pirate Bay allows users to search for magnet links. These links reference resources available for download via peer-to-peer networks and, when opened in a BitTorrent client, begin downloading the desired content. Originally, The Pirate Bay allowed users to download BitTorrent files (torrents), small files that contain metadata necessary to download the data files from other users. The torrents are organised into categories: "Audio", "Video", "Applications", "Games", "Porn", and "Other". Registration requires an email address and is free; registered users may upload their own torrents and comment on torrents. According to a study of newly uploaded files during 2013 by TorrentFreak, 44% of uploads were television shows and movies, porn was in second place with 35% of uploads, and audio made up 9% of uploads. Registration for new users was closed in May 2019 following problems with the uploading of malware torrents. Registrations were reopened in June 2023, following the closure of RARBG, which had further restricted the options available to new uploaders and pushed the TPB team to act.
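A magnet link is a plain URI whose query parameters carry the content's info hash ("xt", a urn:btih value), an optional display name ("dn") and optional tracker addresses ("tr"); a BitTorrent client uses the hash to locate peers via DHT or trackers. The following minimal Python sketch, using a made-up info hash purely for illustration, shows how such a link can be decomposed:

from urllib.parse import parse_qs

# Hypothetical magnet link; the 40-character hex string is a placeholder info hash.
magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
          "&dn=Example+File&tr=udp%3A%2F%2Ftracker.example.org%3A6969")

params = parse_qs(magnet.split("?", 1)[1])   # parse the query part of the URI
info_hash = params["xt"][0].split(":")[-1]   # strip the "urn:btih:" prefix
print(info_hash)          # 0123456789abcdef0123456789abcdef01234567
print(params.get("dn"))   # ['Example File']
print(params.get("tr"))   # ['udp://tracker.example.org:6969']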
The website features a browse function that enables users to see what is available in broad categories like Audio, Video, and Games, as well as sub-categories like Audio books, High-res Movies, and Comics. Since January 2012, it also features a "Physibles" category for 3D-printable objects. The contents of these categories can be sorted by file name, the number of seeders or leechers, the date posted, etc.
Piratbyrån described The Pirate Bay as a long-running project of performance art. Normally, the front page of The Pirate Bay featured a drawing of a pirate ship with the logo of the 1980s anti-copyright infringement campaign, "Home Taping Is Killing Music", on its sails instead of the Jolly Roger symbol usually associated with pirate ships.
=== Technical details ===
Initially, The Pirate Bay's four Linux servers ran a custom web server called Hypercube. An old version is open-source. On 1 June 2005, The Pirate Bay updated its website in an effort to reduce bandwidth usage, which was reported to be at 2 HTTP requests per millisecond on each of the four web servers, as well as to create a more user-friendly interface for the front end of the website. The website now runs Lighttpd and PHP on its dynamic front ends, MySQL at the database back end, Sphinx on the two search systems, memcached for caching SQL queries and PHP sessions, and Varnish in front of Lighttpd for caching static content. As of September 2008, The Pirate Bay consisted of 31 dedicated servers including nine dynamic web fronts, a database, two search engines, and eight BitTorrent trackers.
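The role memcached plays in that stack, caching the results of expensive SQL queries so the database is only hit on a cache miss, follows the common cache-aside pattern. The sketch below is a generic illustration of that pattern, using an in-memory dictionary in place of a real memcached client and a placeholder query function; it is not The Pirate Bay's actual code, which has not been published.

import time

cache = {}            # stands in for a memcached client
CACHE_TTL = 60        # seconds to keep a cached result

def run_query(sql):
    # placeholder for a real MySQL call
    return [("row-for", sql)]

def cached_query(sql):
    entry = cache.get(sql)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]                  # cache hit: skip the database
    result = run_query(sql)              # cache miss: query the database
    cache[sql] = (time.time(), result)   # store the result for later requests
    return result

print(cached_query("SELECT * FROM torrents LIMIT 1"))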
On 7 December 2007, The Pirate Bay finished the move from Hypercube to Opentracker as its BitTorrent tracking software, also enabling the use of the UDP tracker protocol for which Hypercube lacked support. This allowed UDP multicast to be used to synchronise the multiple servers with each other much faster than before. Opentracker is free software.
In June 2008, The Pirate Bay announced that their servers would support SSL encryption in response to Sweden's new wiretapping law. On 19 January 2009, The Pirate Bay launched IPv6 support for their tracker system, using an IPv6-only version of Opentracker. On 17 November 2009, The Pirate Bay shut off its tracker service permanently, stating that centralised trackers are no longer needed since distributed hash tables (DHT), peer exchange (PEX), and magnet links allow peers to find each other and content in a decentralised way.
On 20 February 2012, The Pirate Bay announced in a Facebook post that after 29 February the site would no longer offer torrent files, and would instead offer only magnet links. The site commented: "Not having torrents will be a bit cheaper for us but it will also make it harder for our common enemies to stop us." The site added that torrents being shared by fewer than ten people will retain their torrent files, to ensure compatibility with older software that may not support magnet links.
== Funding ==
=== Early financing ===
In April 2007, a rumour was confirmed on the Swedish talk show Bert that The Pirate Bay had received financial support from right-wing entrepreneur Carl Lundström. This caused some consternation since Lundström, an heir to the Wasabröd fortune, is known for financing several far-right political parties and movements like Sverigedemokraterna and Bevara Sverige Svenskt (Keep Sweden Swedish). During the talk show, Piratbyrån spokesman Tobias Andersson acknowledged that "without Lundström's support, Pirate Bay would not have been able to start" and stated that most of the money went towards acquiring servers and bandwidth.
=== Donations ===
From 2004 until 2006, The Pirate Bay had a "Donate" link to a donations page which listed several payment methods, stated that funds supported only the tracker, and offered time-limited benefits to donors such as no advertisements and "VIP" status. After that, the link was removed from the home page, and the donations page only recommended donating "to your local pro-piracy group" for a time, after which it redirected to the site's main page. Billboard claimed that the site in 2009 "appeals for donations to keep its service running". In 2006, Petter Nilsson, a candidate on the Swedish political reality show Toppkandidaterna (The Top Candidates), donated 35,000 Swedish kronor (US$4,925.83) to The Pirate Bay, which they used to buy new servers.
In 2007, the site ran a fund intended to buy Sealand, a platform with debated micronation status. In 2009, the convicted principals of TPB requested that users stop trying to donate money for their fines, because they refused to pay them. In 2013, The Pirate Bay published its Bitcoin address on the site front page for donations, as well as Litecoin.
=== Merchandising ===
The site linked to an online store selling site-related merchandise, first noted in 2006 in Svenska Dagbladet.
=== Advertising ===
Since 2006, the website has received financing through advertisements on result pages. According to speculation by Svenska Dagbladet, the advertisements generate about 600,000 kronor ($84,000) per month. In an investigation in 2006, the police concluded that The Pirate Bay brought in 1.2 million kronor ($169,000) per year from advertisements. In the 2009 trial, the prosecution estimated from emails and screenshots that the advertisements brought in over 10 million kronor ($1.4 million) a year, but the indictment used the estimate from the police investigation. The lawyers of the site's administrators put the 2006 revenue closer to 725,000 kronor ($102,000). The verdict of the first trial, however, quoted the estimate from the preliminary investigation.
As of 2008, IFPI claims that the website is extremely profitable, and that The Pirate Bay is more engaged in making profit than supporting people's rights. The website has insisted that these allegations are not true, stating, "It's not free to operate a Web Site on this scale", and, "If we were making lots of money I, Svartholm, wouldn't be working late at the office tonight, I'd be sitting on a beach somewhere, working on my tan." In response to claims of annual revenue exceeding $3 million made by the IFPI, Sunde argues that the website's high bandwidth, power, and hardware costs eliminate the potential for profit. The Pirate Bay, he says, may ultimately be operating at a loss. In the 2009 trial, the defence estimated the site's yearly expenses to be 800,000 kronor ($110,000).
There have been unintentional advertisers. In 2007, an online ad agency placed Wal-Mart The Simpsons DVD ads "along with search results that included downloads of the series". In 2012, banner ads for Canada's Department of Finance Economic Action Plan were placed atop search results, as part of a larger "media buy", but were pulled "quickly".
=== Cryptocurrency mining and tokens ===
In 2017, The Pirate Bay embedded scripts on its website that would consume resources on visitors' computers in order to mine the Monero cryptocurrency. Visitors were initially not informed that these scripts had been added. After negative feedback, the operators published an announcement stating that it was a test to see if it could replace advertisements. The mining script appeared and disappeared from the website repeatedly over the following months through 2018. In 2021, The Pirate Bay briefly experimented with issuing its own crypto tokens, an effort that was quickly abandoned.
=== Fee ===
According to the site's usage policy, it reserves the right to charge commercial policy violators "a basic fee of €5,000 plus bandwidth and other costs that may arise due to the violation". Sunde accused Swedish book publishers, who scraped the site for information about copyrighted books, of violating the usage policy, and asserted TPB's copyright on its database.
== Projects ==
The team behind The Pirate Bay has worked on several websites and software projects of varying degrees of permanence. In June 2007, BayImg, an image hosting website similar to TinyPic, went online. Pre-publication images posted to BayImg became part of a legal battle when Conde Nast's network was later allegedly hacked. In July, "within hours after Ingmar Bergman's death", BergmanBits.com was launched, listing torrents for the director's films; it remained online until mid-2008. In August, The Pirate Bay relaunched the BitTorrent website Suprnova.org to perform the same functions as The Pirate Bay, with different torrent trackers, but the site languished; the domain was returned to its original owner in August 2010, and it now redirects to TorrentFreak.tv. Suprbay.org was introduced in August as the official forum for ThePirateBay.org and the various sites connected to it. Users can request reseeding of torrents, or report malware within torrent files or illegal material on ThePirateBay.org.
BOiNK was announced in October 2007 in response to the raid on Oink's Pink Palace, a music-oriented BitTorrent website. A month later Sunde cancelled BOiNK, citing the many new music websites created since the downfall of OiNK. A Mac dashboard widget was released in December, listing "top 10 stuff currently on TPB, either per category or the full list". SlopsBox, a disposable email address anti-spam service, also appeared in December, and was reviewed in 2009.
In 2008, Baywords was launched as a free blogging service that lets users of the site blog about anything as long as it does not break any Swedish laws. In December, The Pirate Bay resurrected ShareReactor as a combined eD2k and BitTorrent site. The same month, the Vio mobile video converter was released, designed to convert video files for playback on mobile devices such as iPhone, BlackBerry, Android, many Nokia and Windows Mobile devices.
On 23 March 2009, Pastebay, a note-sharing service similar to Pastebin, was made available to the public. The Video Bay video streaming/sharing site was announced in June to be "The YouTube Killer", with content viewable in HTML 5-capable browsers. The site was in an "Extreme Beta" phase; a message on the homepage instructed the user "don't expect anything to work at all". The Video Bay was never completed and, as of 28 April 2013, is inaccessible.
On 18 April 2011, Pirate Bay temporarily changed its name to "Research Bay", collaborating with P2P researchers of the Lund University Cybernorms group in a large poll of P2P users. The researchers published their results online on "The Survey Bay", as a public Creative Commons project in 2013. In January 2012, the site announced The Promo Bay; "doodles" by selected musicians, artists and others could be rotated onto the site's front page at a future date. Brazilian novelist Paulo Coelho was promoted, offering a collection of his books for free download. By November, 10,000 artists were reported to have signed up. TPB preserves a dated collection of exhibited logos. On 2 December 2012, some ISPs in the UK such as BT, Virgin Media, and BE started blocking The Promo Bay but stopped a few days later when the BPI reversed its position.
=== Purchases ===
In January 2007, when the micronation of Sealand was put up for sale, the ACFI and The Pirate Bay tried to buy it. The Sealand government, however, did not want to be involved with The Pirate Bay, as it was their opinion that file sharing represented "theft of proprietary rights". A new plan was formed to buy an island instead, but this too was never implemented, despite the website having raised US$25,000 (€15,000) in donations for this cause.
The P2P news blog TorrentFreak reported on 12 October 2007 that the Internet domain ifpi.com, which previously belonged to the International Federation of the Phonographic Industry, an anti-piracy organisation, had been acquired by The Pirate Bay. When asked about how they got hold of the domain, Sunde told TorrentFreak, "It's not a hack, someone just gave us the domain name. We have no idea how they got it, but it's ours and we're keeping it." The website was renamed "The International Federation of Pirates Interests". However, the IFPI filed a complaint with the World Intellectual Property Organization shortly thereafter, which subsequently ordered The Pirate Bay to return the domain name to the IFPI.
=== Cryptocurrency ===
On 12 May 2021, The Pirate Bay launched Pirate Token, a BEP-20 token, to be used to sustain its community and develop tools for the website.
== Incidents ==
=== May 2006 raid ===
On 31 May 2006, a raid against The Pirate Bay and people involved with the website took place as ordered by Swedish judge Tomas Norström, later the presiding judge of the 2009 trial, prompted by allegations of copyright violations. Police officers shut down the website and confiscated its servers, as well as all other servers hosted by The Pirate Bay's Internet service provider, PRQ. The company is owned by two operators of The Pirate Bay. Three people – Neij, Svartholm and Mikael Viborg – were held by the police for questioning, but were released later that evening. All servers in the room were seized, including those running the website of Piratbyrån, an independent organisation fighting for file sharing rights, as well as servers unrelated to The Pirate Bay or other file sharing activities. Equipment such as hardware routers, switches, blank CDs, and fax machines were also seized.
The Motion Picture Association of America (MPAA) wrote in a press release: "Since filing a criminal complaint in Sweden in November 2004, the film industry has worked vigorously with Swedish and U.S. government officials in Sweden to shut this illegal website down." MPAA CEO Dan Glickman also stated, "Intellectual property theft is a problem for film industries all over the world and we are glad that the local government in Sweden has helped stop The Pirate Bay from continuing to enable rampant copyright theft on the Internet." The MPAA press release set forth its justification for the raid and claimed that there were three arrests; however, the individuals were not actually arrested, only held for questioning. The release also reprinted John G. Malcolm's allegation that The Pirate Bay was making money from the distribution of copyrighted material, a criticism denied by The Pirate Bay.
After the raid, The Pirate Bay displayed a message that confirmed that the Swedish police had executed search warrants for breach of copyright law or assisting such a breach. The closure message initially caused some confusion because on 1 April 2005, April Fools' Day, The Pirate Bay had posted a similar message as a prank, stating that they were unavailable due to a raid by the Swedish Anti-Piracy Bureau and IFPI. Piratbyrån set up a temporary news blog to inform the public about the incident. On 2 June 2006, The Pirate Bay was available once again, with their logo depicting a pirate ship firing cannonballs at the Hollywood Sign. The Pirate Bay has servers in both Belgium and Russia for future use in case of another raid. According to The Pirate Bay, in the two years following the raid, it grew from 1 million to 2.7 million registered users and from 2.5 million to 12 million peers. The Pirate Bay now claims over 5 million active users.
Sweden's largest technology museum, the Swedish National Museum of Science and Technology, acquired one of the confiscated servers in 2009 and exhibited it, citing its great symbolic value as a "big problem or a big opportunity".
=== Autopsy photos ===
In September 2008, the Swedish media reported that the public preliminary investigation protocols concerning a child murder case known as the Arboga case had been made available through a torrent on The Pirate Bay. In Sweden, preliminary investigations become publicly available the moment a lawsuit is filed and can be ordered from the court by any individual. The document included pictures from the autopsy of the two murdered children, which caused their father Nicklas Jangestig to urge the website to have the pictures removed. The Pirate Bay refused to remove the torrent. The number of downloads increased to about 50,000 a few days later. On 11 September 2008, Sunde participated in the debate program Debatt on the public broadcaster SVT. He had agreed to participate on the condition that the children's father, Nicklas Jangestig, would not take part in the debate. Jangestig ultimately did participate in the program by telephone, which made Sunde feel betrayed by SVT. This caused The Pirate Bay to suspend all of its press contacts the following day. "I don't think it's our job to judge if something is ethical or unethical or what other people want to put out on the internet", Sunde said to TV4.
=== Legal issues ===
In September 2007, a large number of internal emails were leaked from anti-piracy company MediaDefender by an anonymous hacker. Some of the leaked emails discussed hiring hackers to perform DDoS attacks on The Pirate Bay's servers and trackers. In response to the leak, The Pirate Bay filed charges in Sweden against MediaDefender clients Twentieth Century Fox Sweden AB, EMI Sweden AB, Universal Music Group Sweden AB, Universal Pictures Nordic AB, Paramount Home Entertainment (Sweden) AB, Atari Nordic AB, Activision Nordic, Ubisoft Sweden AB, Sony BMG Music Entertainment (Sweden) AB, and Sony Pictures Home Entertainment Nordic AB, but the charges were not pursued. MediaDefender's stocks fell sharply after this incident, and several media companies withdrew from the service after the company announced the leak had caused $825,000 in losses.
Later, Sunde accused police investigator Jim Keyzer of a conflict of interest when he declined to investigate MediaDefender. Keyzer later accepted a job for MPAA member studio Warner Brothers. The leaked emails revealed that other MPAA member studios hired MediaDefender to pollute The Pirate Bay's torrent database. In an official letter to the Swedish Minister of Justice, the International Olympic Committee (IOC) requested assistance from the Swedish government to prevent The Pirate Bay from distributing video clips of the Beijing Olympics. The IOC claimed there were more than one million downloads of footage from the Olympics – mostly of the opening ceremony. The Pirate Bay, however, did not take anything down, and temporarily renamed the website to The Beijing Bay.
The trial against the men behind the Pirate Bay started in Sweden on 16 February 2009. They were accused of breaking Swedish copyright law. The defendants, however, continued to be confident about the outcome. Half the charges against The Pirate Bay were dropped on the second day of the trial.
The three operators of the site and their one investor Carl Lundström were convicted in Stockholm district court on 17 April 2009 and sentenced to one year in jail each and a total of 30 million kronor ($3.6 million, €2.7 million, £2.4 million sterling) in fines and damages. The defendants' lawyers appealed to the Svea Court of Appeal and requested a retrial in the district court, alleging bias on the part of judge Tomas Norström.
On 13 May 2009, several record companies again sued The Pirate Bay's founders as well as their main internet service provider Black Internet. They sought enforcement to end The Pirate Bay's facilitation of copyright infringement, which had not stopped despite the court order in April, and in the complaint listed several pages of works being shared with the help of the site. The suit was joined by several major film companies on 30 July. The Stockholm district court ruled on 21 August that Black Internet must stop making available the specific works mentioned in the judgment, or face a 500,000 kronor fine. The company was notified of the order on 24 August, and they complied with it on the same day by disconnecting The Pirate Bay. Computer Sweden noted that the judgment did not order The Pirate Bay to be disconnected, but the ISP had no other option for stopping the activity on the site. It was the first time in Sweden that an ISP had been forced to stop providing access to a website. A public support fund fronted by the CEO of the ISP was set up to cover the legal fees of an appeal. Pirate Party leader Rickard Falkvinge submitted the case for Parliamentary Ombudsman review, criticising the court's order to make intermediaries responsible for relayed content and to assign active crime prevention tasks to a private party.
On 28 October 2009, the Stockholm District Court ordered a temporary injunction on Neij and Svartholm with a penalty of 500,000 kronor each, forbidding them from participating in the operation of The Pirate Bay's website or trackers.
On 21 May 2010, the Svea Court of Appeal decided not to change the orders on Black Internet or Neij and Svartholm.
On 1 February 2012, the Supreme Court of Sweden refused to hear an appeal in the conviction case, and agreed with the decision of the Svea Court of Appeal, which had upheld the sentences in November 2011.
On 2 September 2012 Svartholm was arrested in Cambodia. He was detained in Phnom Penh by officers executing an international warrant issued against him in April after he did not turn up to serve a one-year jail sentence for copyright violations. On 24 December 2012, administrators of TPB changed the homepage to urge users to send Warg, in jail, "gifts and letters".
In March 2013, The Pirate Bay claimed in a blog post that it had moved its servers to North Korea. The incident turned out to be a hoax. In April 2013, within a week The Pirate Bay had moved its servers from Greenland to Iceland to St. Martin, either in response to legal threats or preemptively. In December 2013, the site changed its domain to .ac (Ascension Island), following the seizure of the .sx domain. On 12 December, the site moved to .pe (Peru), on 18 December to .gy (Guyana). Following the site's suspension from the .gy domain, on 19 December The Pirate Bay returned to .se (Sweden), which it had previously occupied between February 2012 and April 2013.
=== Trial ===
The Pirate Bay trial was a joint criminal and civil prosecution in Sweden of four individuals charged for promoting the copyright infringement of others with The Pirate Bay site. The criminal charges were supported by a consortium of intellectual rights holders led by IFPI, who filed individual civil compensation claims against the owners of The Pirate Bay.
Swedish prosecutors filed charges on 31 January 2008 against the founders along with Carl Lundström, a Swedish businessman who through his businesses sold services to the site. The prosecutor claimed the four worked together to administer, host, and develop the site and thereby facilitated other people's breach of copyright law. Some 34 cases of copyright infringements were originally listed, of which 21 were related to music files, 9 to movies, and 4 to games. One case involving music files was later dropped by the copyright holder who made the file available again on The Pirate Bay site. In addition, claims for damages of 117 million kronor ($13 million, €12.5 million) were filed. The case was decided jointly by a judge and three appointed lay judges. According to Swedish media, the lead judge, judge Norström, was a member of the Swedish Copyright Association and sat on the board of the Swedish Association for the Protection of Industrial Property, but denied that his involvement constituted a conflict of interest.
The trial started on 16 February 2009, in the district court (tingsrätt) of Stockholm, Sweden. The hearings ended on 3 March 2009 and the verdict was announced at 11:00 am on Friday 17 April 2009: Neij, Sunde, Svartholm and Lundström were all found guilty and sentenced to serve one year in prison and pay a fine of 30 million Swedish kronor (approximately €2.7 million or US$3.5 million). All of the defendants appealed the verdict.
The appeal trial concluded on 15 October 2010, and the verdict was announced on 26 November. The appeal court shortened sentences of three of the defendants who appeared in court that day. Neij's sentence was reduced to 10 months, Sunde's to eight, and Lundström's to four. However, the fine was increased from 32 to 46 million kronor.
On 1 February 2012, the Supreme Court of Sweden refused to hear an appeal in the case, prompting the site to change its official domain name to thepiratebay.se from thepiratebay.org. The move to a .se domain was claimed to prevent susceptibility to US laws from taking control of the site. On 9 April 2013, the site changed its domain name to thepiratebay.gl, under the Greenland TLD, in anticipation of possible seizure by Swedish authorities of its .se domain. The change proved to be short lived, as the site returned to the .se domain on 12 April 2013 after being blocked on the .gl domain by Tele-Post, which administers domains in Greenland. Tele-Post cited a Danish court ruling that the site was in violation of copyright laws.
The founders were all released after having finished serving their sentences by 2015.
=== Service issues ===
In May 2007, The Pirate Bay was attacked by a group of hackers. They copied the user database, which included over 1.5 million users. The Pirate Bay claimed to its users that the data was of no value and that passwords and e-mails were encrypted and hashed. Some blogs stated that a group known as the AUH (Arga Unga Hackare, Swedish for "Angry Young Hackers") was suspected of executing the attack; however, the AUH told the newspaper Computer Sweden that they were not involved and would take revenge on those responsible for the attack.
On 27 April 2009, the website of The Pirate Bay had fibre IPv4 connectivity issues. There was widespread speculation this was a forced outage from the Swedish anti-piracy group, accelerated somewhat by TPB adding contact details for the Swedish anti-piracy group's lawyers to its RIPE database record. The site and its forums were still available via IPv6 at the time.
On 24 August 2009, one of The Pirate Bay's upstream providers was ordered to discontinue service for the website by a Swedish court in response to a civil action brought by several entertainment companies including Disney, Universal, Time Warner, Columbia, Sony, NBC, and Paramount. According to the TPB Blog, this caused a downtime of 3 hours; however, some users were unable to access the site immediately following the relocation due to unrelated technical difficulties. The site was fully operational again for everyone within 24 hours.
On 6 October 2009, one of the IP transit providers to The Pirate Bay blocked all Pirate Bay traffic causing an outage for most users around the world. The same day, the site was reportedly back online at an IP address at CyberBunker, located in the Netherlands. It is not known whether The Pirate Bay is actually located at CyberBunker or whether they are using the CyberBunker service that routes CyberBunker IP addresses to any datacenter around the world. These routes are not visible to the outside world.
CyberBunker was given a court injunction on 17 May 2010, taking the site offline briefly; later that day, hosting was restored by Sweden's Pirate Party. Former spokesman Sunde commented that it would now be very difficult to stop the site, because any attempt to shut it down would be seen as political censorship.
On 8 July 2010, a group of Argentine hackers gained access to The Pirate Bay's administration panel through a security breach via the backend of The Pirate Bay website. They were able to delete torrents and expose users' IP-addresses, emails and MD5-hashed passwords. The Pirate Bay was taken offline for upgrades. Users visiting the website were met by the following message: "Upgrading some stuff, database is in use for backups, soon back again. Btw, it's nice weather outside I think."
On 16 May 2012, The Pirate Bay experienced a major DDoS attack, causing the site to be largely inaccessible worldwide for around 24 hours. The Pirate Bay said that it did not know who was behind the attack, although it "had its suspicions".
On 5 May 2015, The Pirate Bay went offline for several hours, apparently as a result of not properly configuring its SSL certificate.
=== Acquisition discussion ===
On 30 June 2009, Swedish advertising company Global Gaming Factory X AB announced their intention to buy the site for 60 million kronor (approximately US$8.5 million) (30 million kronor in cash, 30 million kronor in GGF shares).
The Pirate Bay founders stated that the profits from the sale would be placed in an offshore account where it would be used to fund projects pertaining to "freedom of speech, freedom of information, and the openness of the Internet". Assurances were made that "no personal data will be transferred in the eventual sale (since no personal data is kept)." Global Gaming Chief Executive Hans Pandeya commented on the site's future by saying "We would like to introduce models which entail that content providers and copyright owners get paid for content that is downloaded via the site", and announced that users would be charged a monthly fee for access to The Pirate Bay.
Global Gaming Factory's letter of intent expired at the end of September 2009, without the transaction having taken place. This may have been due to the company's financial difficulties. "PC World" magazine regarded the deal's future as "doomed".
=== December 2014 raid ===
On 9 December 2014, police in Stockholm raided the company's premises and seized servers and other computers and equipment, which resulted in the website going offline. The raid was in response to a complaint from Rights Alliance, a Swedish anti-piracy group. The Pirate Bay was one of many peer-to-peer and torrent-related websites and apps that went down. One member of the crew was arrested. TorrentFreak reported that most other torrent sites saw a 5–10% increase in traffic from the displaced users, though the shutdown had little effect on overall piracy levels. In retaliation for the raid, a group of hackers claiming to be part of Anonymous allegedly leaked email log-in details of Swedish government officials. Sunde commented in a blog post that he was happy to see the website shut down, believing his successors had done nothing to improve the site, criticising in particular the increased use of advertisements.
IsoHunt has since copied much of the original TPB database and made it accessible through oldpiratebay.org, a searchable index of old Pirate Bay torrents. IsoHunt also released a tool called The Open Bay, to allow users to deploy their own version of the Pirate Bay website. The tool is responsible for around 372 mirror sites. Since 17 December 2014, The Pirate Bay's Facebook page has been unavailable. On 22 December 2014, a website was resumed at the domain thepiratebay.se, showing a flip clock with the length of time in days and hours that the site had been offline, and a waving pirate flag. From this day TPB was hosted for a period in Moldova, on Trabia Network (Moldo-German company) servers. The Pirate Bay then began using the services of CloudFlare, a company which offers reverse proxy services. On 1 January 2015, the website presented a countdown to 1 February 2015. The website returned with a prominent phoenix logo displayed at the domain thepiratebay.se on 31 January 2015.
=== Error 522 downtimes ===
Beginning in October 2018, the clearnet Pirate Bay website started to be inaccessible in some locations around the world, showing Error 522. As a result, direct visits to the website dropped by more than 32 percent in October. The incident was found to be unrelated to internet provider blocking or domain name problems, but the exact cause has not been determined. The site's Tor domain and proxies remained unaffected.
The Error 522 problem occurred again in early March 2020, with the site's admins unable to say when it would be resolved. After one month, the site's functionality was restored with an update of the domain records and the Cloudflare nameservers.
== Censorship and controversies ==
=== Anti-copyright movement ===
The Pirate Bay has sparked controversies and discussion about legal aspects of file sharing, copyright, and civil liberties and has become a platform for political initiatives against established intellectual property laws and a central figure in an anti-copyright movement. The website faced several shutdowns and domain seizures which "did little to take the site offline, as it simply switched to a series of new web addresses and continued to operate".
=== Domain blocking by countries ===
The Pirate Bay's website has been blocked in some countries, despite the relative ease with which such blocks can be circumvented in most countries. While the URL to the Pirate Bay itself has been blocked in these countries, numerous mirror websites emerged to make the website available at different URLs, routing traffic around the block.
According to Google chairman Eric Schmidt, "government plans to block access to illicit filesharing websites could set a 'disastrous precedent' for freedom of speech"; he also expressed that Google would "fight attempts to restrict access to sites such as the Pirate Bay".
==== Sweden ====
On 13 February 2017, Sweden's Patent and Market Court of Appeal decided that the broadband provider Bredbandsbolaget must block its customers from accessing the file sharing site The Pirate Bay, overruling a district court ruling to the contrary from 2015. This was the first time a site had been openly blocked in Sweden. The remaining ISPs were expected to follow the same court orders.
The ISP Telia was ordered to block The Pirate Bay through a dynamic injunction on 12 December 2019. Under the injunction, when rights holders identify a new IP address or URL used by The Pirate Bay, they can notify Telia, which is legally required to block it within 2–3 weeks. Telia objected to the blocking order and attempted to appeal the injunction, but lost on 29 June 2020 and must maintain the dynamic injunction for three years.
=== Censorship by corporations ===
==== Facebook ====
After The Pirate Bay introduced a feature in March 2009 to easily share links to torrents on the social networking site Facebook, Wired found in May that Facebook had started blocking the links. On further inspection, they discovered that all messages containing links to The Pirate Bay in both public and in private messages, regardless of content, were being blocked. Electronic Frontier Foundation lawyers commented that Facebook might be working against the US Electronic Communications Privacy Act by intercepting user messages, but Facebook chief privacy officer Chris Kelly said that they have the right to use blocks on links where there is a "demonstrated disregard for intellectual property rights", following users' agreement on their terms of service. Links to other similar sites have not been blocked.
==== Microsoft ====
In March 2012, Microsoft blocked Windows Live Messenger messages containing links to The Pirate Bay. When a user sends an instant message that contains a link to The Pirate Bay, Windows Live Messenger prompts a warning and claims "Blocked as it was reported unsafe". "We block instant messages if they contain malicious or spam URLs based on intelligence algorithms, third-party sources, or user complaints. Pirate Bay URLs were flagged by one or more of these and were consequently blocked", Microsoft told The Register in an emailed statement.
==== Google ====
In late November 2021, Google removed The Pirate Bay and more than 100 related domains from its search results in the Netherlands due to a Dutch court order. Two years later, in December 2023, the link to The Pirate Bay was removed from the Google Knowledge Panel.
== In media ==
The Pirate Bay is featured in Steal This Film (2006), a documentary series about society and filesharing, produced by The League of Noble Peers; in the Danish documentary Good Copy Bad Copy, which explores the issues surrounding file copyright; and in the documentary TPB AFK. The Pirate Bay has been a topic on the US-syndicated NPR radio show On the Media.
Björn Ulvaeus, member of the Swedish pop music group ABBA, criticised the copyright-infringing activities of The Pirate Bay supporters as "lazy and mean". In contrast, Brazilian best-selling author Paulo Coelho has embraced free sharing online. Coelho supports The Pirate Bay and offered to be a witness in the 2009 trial. He attributes much of his growing sales to his work being shared on the Internet and comments that "a person who does not share is not only selfish, but bitter and alone".
== See also ==
== References ==
== External links ==
Official website | Wikipedia/The_Pirate_Bay |
The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.
IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.
IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, which was complemented by a connection-oriented service that became the basis for the Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP.
The first major version of IP, Internet Protocol version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol version 6 (IPv6), which has been in increasing deployment on the public Internet since around 2006.
== Function ==
The Internet Protocol is responsible for addressing host interfaces, encapsulating data into datagrams (including fragmentation and reassembly) and routing datagrams from a source host interface to a destination host interface across one or more IP networks. For these purposes, the Internet Protocol defines the format of packets and provides an addressing system.
Each datagram has two components: a header and a payload. The IP header includes a source IP address, a destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation.
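As a rough illustration of that header-plus-payload structure, the sketch below packs a minimal 20-byte IPv4 header in front of an arbitrary payload. It is a simplified teaching example: the identification, flags and checksum fields are left at zero, whereas a real IP stack fills them in.

import socket
import struct

def build_ipv4_datagram(src, dst, payload, ttl=64, protocol=17):
    """Prepend a minimal IPv4 header (20 bytes, no options) to a payload."""
    version_ihl = (4 << 4) | 5            # version 4, header length of 5 32-bit words
    total_length = 20 + len(payload)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        0, 0,                             # identification, flags + fragment offset
        ttl, protocol,                    # time to live, protocol (17 = UDP)
        0,                                # header checksum, left as zero in this sketch
        socket.inet_aton(src), socket.inet_aton(dst),
    )
    return header + payload

datagram = build_ipv4_datagram("192.0.2.1", "198.51.100.7", b"hello")
print(len(datagram), datagram[:20].hex())   # 25 bytes total: 20-byte header + payload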
IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into subnets, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network.
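Python's standard ipaddress module can make the prefix and subnet terminology concrete; the documentation-reserved 192.0.2.0/24 range is used here purely as an example.

import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")        # a /24 network prefix
print(net.netmask, net.num_addresses)             # 255.255.255.0 256
print(ipaddress.ip_address("192.0.2.17") in net)  # True: the host falls inside the prefix

# Splitting the prefix into two /25 subnets
for subnet in net.subnets(prefixlen_diff=1):
    print(subnet)                                 # 192.0.2.0/25 and 192.0.2.128/25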
== Addressing methods ==
There are four principal addressing methods in the Internet Protocol:
Unicast delivers a message to a single specific node using a one-to-one association between a sender and destination: each destination address uniquely identifies a single receiver endpoint.
Broadcast delivers a message to all nodes in the network using a one-to-all association; a single datagram (or packet) from one sender is routed to all of the possibly multiple endpoints associated with the broadcast address. The network automatically replicates datagrams as needed to reach all the recipients within the scope of the broadcast, which is generally an entire network subnet.
Multicast delivers a message to a group of nodes that have expressed interest in receiving the message using a one-to-many-of-many or many-to-many-of-many association; datagrams are routed simultaneously in a single transmission to many recipients. Multicast differs from broadcast in that the destination address designates a subset, not necessarily all, of the accessible nodes.
Anycast delivers a message to any one out of a group of nodes, typically the one nearest to the source, using a one-to-one-of-many association where datagrams are routed to any single member of a group of potential receivers that are all identified by the same destination address. The routing algorithm selects the single receiver from the group based on which is the nearest according to some distance or cost measure. A minimal socket-level sketch of the first three methods is given below.
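The first three methods map directly onto ordinary socket options, as the hedged Python sketch below shows; anycast, by contrast, is realised purely through routing (several hosts announce the same address) and needs no special sender-side code. The addresses and port are examples only.

import socket

msg = b"ping"

# Unicast: send to one specific host address
uni = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
uni.sendto(msg, ("192.0.2.10", 9999))

# Broadcast: requires SO_BROADCAST; reaches every host on the local subnet
bc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bc.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
bc.sendto(msg, ("255.255.255.255", 9999))

# Multicast: send to a group address; only hosts that joined the group receive it
mc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mc.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
mc.sendto(msg, ("224.0.0.251", 9999))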
== Version history ==
In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication". The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among network nodes. A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known as the Department of Defense (DoD) Internet Model and Internet protocol suite, and informally as TCP/IP.
The following Internet Experiment Note (IEN) documents describe the evolution of the Internet Protocol into the modern version of IPv4:
IEN 2 Comments on Internet Protocol and TCP (August 1977) describes the need to separate the TCP and Internet Protocol functionalities (which were previously combined). It proposes the first version of the IP header, using 0 for the version field.
IEN 26 A Proposed New Internet Header Format (February 1978) describes a version of the IP header that uses a 1-bit version field.
IEN 28 Draft Internetwork Protocol Description Version 2 (February 1978) describes IPv2.
IEN 41 Internetwork Protocol Specification Version 4 (June 1978) describes the first protocol to be called IPv4. The IP header is different from the modern IPv4 header.
IEN 44 Latest Header Formats (June 1978) describes another version of IPv4, also with a header different from the modern IPv4 header.
IEN 54 Internetwork Protocol Specification Version 4 (September 1978) is the first description of IPv4 using the header that would become standardized in 1980 as RFC 760.
IEN 80
IEN 111
IEN 123
IEN 128/RFC 760 (1980)
IP versions 1 to 3 were experimental versions, designed between 1973 and 1978. Versions 2 and 3 supported variable-length addresses ranging between 1 and 16 octets (between 8 and 128 bits). An early draft of version 4 supported variable-length addresses of up to 256 octets (up to 2048 bits) but this was later abandoned in favor of a fixed-size 32-bit address in the final version of IPv4. This remains the dominant internetworking protocol in use in the Internet Layer; the number 4 identifies the protocol version, carried in every IP datagram. IPv4 is defined in RFC 791 (1981).
Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol that was not adopted.
The successor to IPv4 is IPv6. IPv6 was a result of several years of experimentation and dialog during which various protocol models were proposed, such as TP/IX (RFC 1475), PIP (RFC 1621) and TUBA (TCP and UDP with Bigger Addresses, RFC 1347). Its most prominent difference from version 4 is the size of the addresses. While IPv4 uses 32 bits for addressing, yielding c. 4.3 billion (4.3×109) addresses, IPv6 uses 128-bit addresses providing c. 3.4×1038 addresses. Although adoption of IPv6 has been slow, as of January 2023, most countries in the world show significant adoption of IPv6, with over 41% of Google's traffic being carried over IPv6 connections.
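The address counts quoted above follow directly from the field widths and can be checked with a couple of lines of Python:

ipv4_addresses = 2 ** 32     # 32-bit addresses
ipv6_addresses = 2 ** 128    # 128-bit addresses
print(f"{ipv4_addresses:,}")    # 4,294,967,296  (about 4.3 billion)
print(f"{ipv6_addresses:.2e}")  # 3.40e+38       (about 3.4 x 10^38)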
The assignment of the new protocol as IPv6 was uncertain until due diligence assured that IPv6 had not been used previously. Other Internet Layer protocols have been assigned version numbers, such as 7 (IP/TX), 8 and 9 (historic). Notably, on April 1, 1994, the IETF published an April Fools' Day RFC about IPv9. IPv9 was also used in an alternate proposed address space expansion called TUBA. A 2004 Chinese proposal for an IPv9 protocol appears to be unrelated to all of these, and is not endorsed by the IETF.
=== IP version numbers ===
As the version number is carried in a 4-bit field, only numbers 0–15 can be assigned.
== Reliability ==
The design of the Internet protocol suite adheres to the end-to-end principle, a concept adapted from the CYCLADES project. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium and is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is located in the end nodes.
As a consequence of this design, the Internet Protocol only provides best-effort delivery and its service is characterized as unreliable. In network architectural parlance, it is a connectionless protocol, in contrast to connection-oriented communication. Various fault conditions may occur, such as data corruption, packet loss and duplication. Because routing is dynamic, meaning every packet is treated independently, and because the network maintains no state based on the path of prior packets, different packets may be routed to the same destination via different paths, resulting in out-of-order delivery to the receiver.
All fault conditions in the network must be detected and compensated by the participating end nodes. The upper layer protocols of the Internet protocol suite are responsible for resolving reliability issues. For example, a host may buffer network data to ensure correct ordering before the data is delivered to an application.
IPv4 provides safeguards to ensure that the header of an IP packet is error-free. A routing node discards packets that fail a header checksum test. Although the Internet Control Message Protocol (ICMP) provides notification of errors, a routing node is not required to notify either end node of errors. IPv6, by contrast, operates without header checksums, since current link layer technology is assumed to provide sufficient error detection.
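The IPv4 header checksum is the standard Internet checksum of RFC 1071: the header is summed as 16-bit words in ones' complement arithmetic (with the checksum field itself set to zero) and the result is complemented. A straightforward Python rendering of that algorithm, shown only as an illustration, is:

def internet_checksum(data: bytes) -> int:
    """Ones' complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:                      # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                     # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A 20-byte example header with its checksum field zeroed; a router recomputes
# this value and discards the packet if it does not match the stored checksum.
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(internet_checksum(header)))      # 0xb1e6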
== Link capacity and capability ==
The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested. One of the technical constraints is the size of data packets possible on a given link. Facilities exist to examine the maximum transmission unit (MTU) size of the local link and Path MTU Discovery can be used for the entire intended path to the destination.
The IPv4 internetworking layer automatically fragments a datagram into smaller units for transmission when the link MTU is exceeded. IP provides re-ordering of fragments received out of order. An IPv6 network does not perform fragmentation in network elements, but requires end hosts and higher-layer protocols to avoid exceeding the path MTU.
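The arithmetic behind IPv4 fragmentation is simple: each fragment carries as much payload as fits in the link MTU after the 20-byte header, rounded down to a multiple of 8 bytes, and the fragment offset is expressed in 8-byte units. A small illustrative sketch:

def fragment_plan(payload_len, mtu, header_len=20):
    """Return (offset_in_8_byte_units, data_bytes, more_fragments) per fragment."""
    per_fragment = (mtu - header_len) // 8 * 8   # payload per fragment, multiple of 8
    plan, offset = [], 0
    while offset < payload_len:
        size = min(per_fragment, payload_len - offset)
        more = offset + size < payload_len
        plan.append((offset // 8, size, more))
        offset += size
    return plan

# A 4000-byte payload over a 1500-byte-MTU link splits into three fragments:
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
print(fragment_plan(4000, 1500))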
The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and ICMP disregard MTU size, thereby forcing IP to fragment oversized datagrams.
== Security ==
During the design phase of the ARPANET and the early Internet, the security aspects and needs of a public, international network were not adequately anticipated. Consequently, many Internet protocols exhibited vulnerabilities highlighted by network attacks and later security assessments. In 2008, a thorough security assessment and proposed mitigation of problems was published. The IETF has been pursuing further studies.
== See also ==
ICANN
IP routing
List of IP protocol numbers
List of IP version numbers
Next-generation network
New IP (proposal)
== References ==
== External links ==
Manfred Lindner. "IP Technology" (PDF). Retrieved 2018-02-11.
Manfred Lindner. "IP Routing" (PDF). Retrieved 2018-02-11. | Wikipedia/Transmission_Control_Program |
Internet exchange points (IXes or IXPs) are common grounds of IP networking, allowing participant Internet service providers (ISPs) to exchange data destined for their respective networks. IXPs are generally located at places with preexisting connections to multiple distinct networks, i.e., datacenters, and operate physical infrastructure (switches) to connect their participants. Organizationally, most IXPs are each independent not-for-profit associations of their constituent participating networks (that is, the set of ISPs that participate in that IXP). The primary alternative to IXPs is private peering, where ISPs and large customers directly connect their networks.
IXPs reduce the portion of an ISP's traffic that must be delivered via their upstream transit providers, thereby reducing the average per-bit delivery cost of their service. Furthermore, the increased number of paths available through the IXP improves routing efficiency (by allowing routers to select shorter paths) and fault-tolerance. IXPs exhibit the characteristics of the network effect.
== History ==
Internet exchange points began as Network Access Points or NAPs, a key component of Al Gore's National Information Infrastructure (NII) plan, which defined the transition from the US Government-paid-for NSFNET era (when Internet access was government sponsored and commercial traffic was prohibited) to the commercial Internet of today. The four Network Access Points (NAPs) were defined as transitional data communications facilities at which Network Service Providers (NSPs) would exchange traffic, in replacement of the publicly financed NSFNET Internet backbone. The National Science Foundation let contracts supporting the four NAPs, one to MFS Datanet for the preexisting MAE-East in Washington, D.C., and three others to Sprint, Ameritech, and Pacific Bell, for new facilities of various designs and technologies, in New York (actually Pennsauken, New Jersey), Chicago, and California, respectively. As a transitional strategy, they were effective, providing a bridge from the Internet's beginnings as a government-funded academic experiment, to the modern Internet of many private-sector competitors collaborating to form a network-of-networks, transporting Internet bandwidth from its points-of-production at Internet exchange points to its sites-of-consumption at users' locations.
This transition was particularly timely, coming hard on the heels of the ANS CO+RE controversy, which had disturbed the nascent industry, led to congressional hearings, resulted in a law allowing NSF to promote and use networks that carry commercial traffic, prompted a review of the administration of NSFNET by the NSF's Inspector General (no serious problems were found), and caused commercial operators to realize that they needed to be able to communicate with each other independent of third parties or at neutral exchange points.
Although the three telco-operated NAPs faded into obscurity relatively quickly after the expiration of the federal subsidies, MAE-East thrived for fifteen more years, and its west-coast counterpart MAE-West continued for more than twenty years.
Today, the phrase "Network Access Point" is of historical interest only, since the four transitional NAPs disappeared long ago, replaced by hundreds of modern Internet exchange points, though in Spanish-speaking Latin America, the phrase lives on to a small degree, among those who conflate the NAPs with IXPs.
== Function ==
The primary purpose of an IXP is to allow networks to interconnect directly, via the exchange, rather than going through one or more third-party networks. The primary advantages of direct interconnection are cost, latency, and bandwidth.
Traffic passing through an exchange is typically not billed by any party, whereas traffic to an ISP's upstream provider is. The direct interconnection, often located in the same city as both networks, avoids the need for data to travel to other cities—and potentially on other continents—to get from one network to another, thus reducing latency.
The third advantage, speed, is most noticeable in areas that have poorly developed long-distance connections. ISPs in regions with poor connections might have to pay between 10 and 100 times more for data transport than ISPs in North America, Europe, or Japan. Therefore, these ISPs typically have slower, more limited connections to the rest of the Internet. However, a connection to a local IXP may allow them to transfer data without limit, and without cost, vastly improving the bandwidth between customers of such adjacent ISPs.
Internet Exchange Points (IXPs) are public locations where several networks are connected to each other. Public peering is done at IXPs, while private peering can be done with direct links between networks.
== Operations ==
=== Technical operations ===
A typical IXP consists of one or more network switches, to which each of the participating ISPs connect. Prior to the existence of switches, IXPs typically employed fiber-optic inter-repeater link (FOIRL) hubs or Fiber Distributed Data Interface (FDDI) rings, migrating to Ethernet and FDDI switches as those became available in 1993 and 1994.
Asynchronous Transfer Mode (ATM) switches were briefly used at a few IXPs in the late 1990s, accounting for approximately 4% of the market at their peak, and there was an attempt by Stockholm-based IXP NetNod to use SRP/DPT, but Ethernet has prevailed, accounting for more than 95% of all existing Internet exchange switch fabrics. All Ethernet port speeds are to be found at modern IXPs, ranging from 10 Mb/second ports in use in small developing-country IXPs, to ganged 10 Gb/second ports in major centers like Seoul, New York, London, Frankfurt, Amsterdam, and Palo Alto. Ports with 100 Gb/second are available, for example, at the AMS-IX in Amsterdam and at the DE-CIX in Frankfurt.
=== Business operations ===
The principal business and governance models for IXPs include:
Not-for-profit association (usually of the participating ISPs)
Operator-neutral for-profit company (usually the operator of a datacenter hosting the IXP)
University
Government agency (often the communications ministry or regulator, at national scale, or municipal government, at local scale)
Unincorporated informal association of networks (defined by an open-ended multi-party contract, without independent legal existence)
The technical and business logistics of traffic exchange between ISPs is governed by bilateral or multilateral peering agreements. Under such agreements, traffic is exchanged without compensation. When an IXP incurs operating costs, they are typically shared among all of its participants.
At the more expensive exchanges, participants pay a monthly or annual fee, usually determined by the speed of the port or ports which they are using. Fees based on the volume of traffic are less common because they provide a counterincentive to the growth of the exchange. Some exchanges charge a setup fee to offset the costs of the switch port and any media adaptors (gigabit interface converters, small form-factor pluggable transceivers, XFP transceivers, XENPAKs, etc.) that the new participant requires.
== Traffic exchange ==
Internet traffic exchange between two participants on an IXP is facilitated by Border Gateway Protocol (BGP) routing configurations between them. They choose to announce routes via the peering relationship – either routes to their own addresses or routes to addresses of other ISPs that they connect to, possibly via other mechanisms. The other party to the peering can then apply route filtering, where it chooses to accept those routes, and route traffic accordingly, or to ignore those routes, and use other routes to reach those addresses.
In many cases, an ISP will have both a direct link to another ISP and accept a route (normally ignored) to the other ISP through the IXP; if the direct link fails, traffic will then start flowing over the IXP. In this way, the IXP acts as a backup link.
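The preference-and-fallback behaviour described above can be illustrated with a toy model. The Python sketch below is purely illustrative: the prefixes, next-hop addresses, and preference values are invented, and the logic is a drastic simplification of how BGP local preference and route withdrawal actually behave on production routers.

```python
# Toy model of the backup behaviour described above: an ISP learns the same
# prefix over a direct private interconnect and over an IXP peering session,
# prefers the direct link, and falls back to the IXP route if the link fails.
# All values are illustrative only.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str        # destination network, e.g. "203.0.113.0/24"
    next_hop: str      # where to send traffic for that prefix
    via: str           # "direct" or "ixp"
    local_pref: int    # higher wins, mimicking BGP LOCAL_PREF

def best_route(routes, link_up):
    """Pick the preferred usable route for one prefix."""
    usable = [r for r in routes if link_up.get(r.via, True)]
    if not usable:
        return None
    return max(usable, key=lambda r: r.local_pref)

routes = [
    Route("203.0.113.0/24", "198.51.100.1", "direct", local_pref=200),
    Route("203.0.113.0/24", "192.0.2.10",   "ixp",    local_pref=100),
]

print(best_route(routes, {"direct": True,  "ixp": True}).via)   # -> direct
print(best_route(routes, {"direct": False, "ixp": True}).via)   # -> ixp (backup)
```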
When this kind of interconnection is combined with a contractual structure that creates a market for purchasing network services, the IXP is sometimes called a "transit exchange". The Vancouver Transit Exchange, for example, is described as a "shopping mall" of service providers at one central location, making it easy to switch providers, "as simple as getting a VLAN to a new provider". The VTE is run by BCNET, a public entity.
Advocates of green broadband schemes and more competitive telecommunications services often call for aggressive expansion of transit exchanges into every municipal area network so that competing service providers can place such equipment as video on demand hosts and PSTN switches to serve existing phone equipment, without being answerable to any monopoly incumbent.
Since the dissolution of the Internet backbone and transition to the IXP system in 1992, the measurement of Internet traffic exchanged at IXPs has been the primary source of data about Internet bandwidth production: how it grows over time and where it is produced. Standardized measures of bandwidth production have been in place since 1996 and have been refined over time.
== See also ==
Historical IXPs
MAE-East and MAE-West
Commercial Internet eXchange (CIX)
Federal Internet Exchange (FIX)
Associations of Internet exchange point operators:
Euro-IX, the European Internet Exchange Association
APIX, the Asia Pacific Internet Exchange Association
LAC-IX, the Latin America & Caribbean Internet Exchange Association
Af-IX, the African IXP Association
Route server
Internet service provider
Data center
Packet Clearing House
List of Internet exchange points
Meet-me room
Peering
== References ==
== External links ==
European Internet Exchange Association
Internet Exchange Directory maintained by Packet Clearing House
Internet Exchange Points from Data Center Map
IXP History Collection
PeeringDB
Lookin'Glass.Org BGP Looking Glass services at IX's. | Wikipedia/Network_Access_Point |
Abilene Network was a high-performance backbone network created by the Internet2 community in the late 1990s. In 2007 the Abilene Network was retired and the upgraded network became known as the "Internet2 Network".
== History ==
One of the aims of the Abilene project was to achieve 10 Gbit/s connectivity between every node by the end of 2006. Over 230 member institutions participated in Abilene, mostly universities and some corporate and affiliate institutions, in all of the US states as well as the District of Columbia and Puerto Rico.
It connected to European research networks NORDUnet and SURFnet.
The legal entity behind the network was the University Corporation for Advanced Internet Development.
When it was established in 1999, the network backbone had a capacity of 2.5 Gbit/s. An upgrade to 10 Gbit/s began in 2003 and was completed on February 4, 2004.
The name Abilene was chosen because of the project's resemblance, in ambition and scope, to the railhead in Abilene, Kansas, which in the 1860s represented the frontier of the United States for the nation's railroad infrastructure.
== Abilene and Internet2 and the Internet2 Network ==
The media often incorrectly used the term "Internet2 Network" when referring to the "Abilene Network". Some sources even suggest that Internet2 is a network wholly separate from the Internet. This is misleading, since, at the time of the Abilene Network, Internet2 was the consortium and not a computer network. It is possible that many news sources adopted the term Internet2 because it seems like a logical name for a next-generation Internet backbone. Articles that reference Internet2 as a network were in fact referring to the Abilene Network. Internet2 adopted the name Internet2 Network for its entire network infrastructure.
Abilene formed a high-speed backbone by deploying cutting-edge technologies not yet generally available on the scale of a national network backbone. Abilene was a private network used for education and research, but was not entirely isolated, since its members usually provided alternative access to many of their resources through the public Internet.
== Operations Center (NOC) ==
Abilene's Network Operations Center (NOC) was hosted at Indiana University from its inception. "The cross-country backbone is 10 gigabits per second, with the goal of offering 100 megabits per second of connectivity between every Abilene connected desktop."
== 2006/2007 Upgrade ==
The Abilene project had used optical fiber networks donated by Qwest Communications. In March 2006, Internet2 announced its upgrade plans and migration to Level 3 Communications. Internet2's Abilene transport agreement with Qwest expired in October 2007. Unlike the previous architecture, Level 3 managed and operated an Infinera Networks-based DWDM system devoted to Internet2. Internet2 had control over provisioning and used the 40-wavelength capacity to provide IP backbone connectivity as well as transport for a new SONET-based dynamic provisioning network based on the Ciena Corporation CoreDirector platform. The IP network was based on the Juniper Networks T640 routing platform.
When the transition to the new Level3-based infrastructure was complete, the name Abilene ceased to be used in favor of The Internet2 Network.
== References ==
== External links ==
Internet2 Network
Qwest Abilene and Internet2 FAQ Archived 2013-06-27 at the Wayback Machine | Wikipedia/Abilene_Network |
The very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of a National Science Foundation (NSF) sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF.
NSF support was available to organizations that could demonstrate a need for very high speed networking capabilities and wished to connect to the vBNS or later to the Abilene Network, the high speed network operated by the University Corporation for Advanced Internet Development (UCAID, which operates Internet2).
By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12c (622 Mbit/s) links on an all-OC-12c backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48c (2.5 Gbit/s) IP links in February 1999, and went on to upgrade the entire backbone to OC-48c.
In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF.
The vBNS pioneered the production deployment of many novel network technologies including Asynchronous Transfer Mode (ATM), IP multicasting, quality of service, and IPv6.
After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone.
In January 2006 MCI and Verizon merged. The vBNS+ is now a service of Verizon Business.
== References == | Wikipedia/Very_high-speed_Backbone_Network_Service |
The Symposium on Operating Systems Principles (SOSP), organized by the Association for Computing Machinery (ACM), is one of the most prestigious single-track academic conferences on operating systems.
Before 2023, SOSP was held every other year, alternating with the conference on Operating Systems Design and Implementation (OSDI); starting in 2024, SOSP has been held every year. The first SOSP was held in 1967. It is sponsored by the ACM's Special Interest Group on Operating Systems (SIGOPS).
== History ==
The inaugural conference was held in Gatlinburg, Tennessee on 1–4 October 1967 at the Mountain View Hotel. There were fifteen papers in total, of which three presentations were in the Computer Networks and Communications session. Larry Roberts presented his plan for the ARPANET, a computer network for resource sharing, which at that point was based on Wesley Clark's proposal for a message switching network. Jack Dennis from MIT discussed the merits of a more general data communications network. Roger Scantlebury, a member of Donald Davies' team from the UK National Physical Laboratory, presented their research on packet switching in a high-speed computer network, and referenced the work of Paul Baran. At this seminal meeting, Scantlebury proposed packet switching for use in the ARPANET and persuaded Roberts and Bob Taylor that its economics were favorable compared with message switching. The ARPA team enthusiastically received the idea and Roberts incorporated it into the ARPANET design.
In total, 29 conferences have been held, seven of which were outside the USA. The first conference held outside the USA was in Saint-Malo, France in 1997. Other countries to have hosted the conference are Canada, the UK, Portugal, China and Germany.
== List of conferences ==
From 1967 to 2023, the conferences were held every two years, with the first SOSP conference taking place in Gatlinburg, Tennessee. Beginning in 2024, the conference has been held every year.
== See also ==
List of computer science conferences
== References ==
== External links ==
http://sosp.org/
https://dl.acm.org/conference/sosp | Wikipedia/Symposium_on_Operating_Systems_Principles |
The Defense Data Network (DDN) was a computer networking effort of the United States Department of Defense from 1983 through 1995. It was based on ARPANET technology.
== History ==
As an experiment, from 1971 to 1977, the Worldwide Military Command and Control System (WWMCCS) purchased and operated an ARPANET-type system from BBN Technologies for the Prototype WWMCCS Intercomputer Network (PWIN). The experiments proved successful enough that it became the basis of the much larger WIN system. Six initial WIN sites in 1977 increased to 20 sites by 1981.
In 1975, the Defense Communication Agency (DCA) took over operation of the ARPANET as it became an operational tool in addition to an ongoing research project. At that time, the Automatic Digital Network (AUTODIN) carried most of the Defense Department's message traffic. Starting in 1972, attempts had been made to introduce some packet switching into its planned replacement, AUTODIN II. AUTODIN II development proved unsatisfactory, however, and in 1982, AUTODIN II was canceled, to be replaced by a combination of several packet-based networks that would connect military installations.
The DCA used "Defense Data Network" (DDN) as the program name for this new network. Under its initial architecture, as developed by the Institute for Defense Analysis, the DDN would consist of two separate instances: the unclassified MILNET, which would be split off the ARPANET; and a classified network, also based on ARPANET technology, which would provide services for WIN, DODIIS, and SACDIN. C/30 packet switches, developed by BBN Technologies as upgraded Interface Message Processors, would provide the network technology. End-to-end encryption would be provided by ARPANET encryption devices, namely the Internet Private Line Interface (IPLI) or Blacker.
After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but would be slowly phased out. Both networks carried unclassified information, and were connected at a small number of points which would allow total separation in the event of an emergency.
As a large-scale, private internet, the DDN provided Internet Protocol connectivity across the United States and to US military bases abroad. The Defense Communications Engineering Center (DCEC), part of DCA, handled DDN network engineering and DDN network operations. The DCEC was located in Reston, Virginia from the mid-1980s until it was closed and merged with a DISA site in Bailey's Crossroads, Virginia in the early 2000s (long after DCA had been merged into the new Defense Information Systems Agency (DISA)).
Throughout the 1980s it expanded as a set of four parallel military networks, each at a different security level. The networks were:
Military Network (MILNET) for UNCLASSIFIED traffic
Defense Secure Network One (DSNET 1) for SECRET traffic
Defense Secure Network Two (DSNET 2) for TOP SECRET traffic
Defense Secure Network Three (DSNET 3) for TOP SECRET/Sensitive Compartmented Information (TS/SCI)
MILNET and DSNET 1 were common user networks, much like the public Internet, but DSNET 2 was dedicated to supporting the Worldwide Military Command and Control System (WWMCCS) and DSNET 3 was dedicated to supporting the DOD Intelligence Information System (DODIIS). These networks transitioned to become the NIPRNET, SIPRNET, and JWICS networks in the 1990s.
=== DDN-NIC ===
DDN-NIC or Network Information Center (NIC) was located at the DDN Installation and Integration Support (DIIS) program office in Chantilly, Virginia. It provided general reference services to DDN users via telephone, electronic mail, and U.S. mail. It was the first organization responsible for the assignment of TCP/IP addresses and Autonomous System numbers.
== See also ==
Defense Information Systems Network (DISN)
== References ==
== External links ==
Cybertelecom :: Internet History 1983 | Wikipedia/Defense_Data_Network |
The Government Open Systems Interconnection Profile (GOSIP) was a specification that profiled open networking products for procurement by governments in the late 1980s and early 1990s.
== Timeline ==
1988 - GOSIP: Government Open Systems Interconnection Profile published by CCTA, an agency of UK government
1988 - UK's CCTA commences work with France and West Germany on European Procurement Handbook (EPHOS)
1990 - The US specification requiring Open Systems Interconnection (OSI) protocols was first published as Federal Information Processing Standards document FIPS 146-1. The requirement for US Government vendors to demonstrate their support for this profile led them to join the formal interoperability and conformance testing for networking products, which had been done by industry professionals at the annual InterOp show since 1980.
1990 - Publication of European Procurement Handbook (EPHOS), intended to be a European GOSIP
1991 - 4th and final version of UK GOSIP released
1993 - Australia and New Zealand GOSIP Version 3 - 1993 Government Open Systems Interconnection Profile
1995 - FIPS 146-2 allowed "...other specifications based on open, voluntary standards such as those cited in paragraph 3 ("...such as those developed by the Internet Engineering Task Force (IETF)... and the International Telecommunications Union, Telecommunication Standardization Sector (ITU–T))"
In practice, from 1995 interest in OSI implementations declined, and worldwide the deployment of standards-based networking services since have been predominantly based on the Internet protocol suite. However, the Defense Messaging System continued to be based on the OSI protocols X.400 and X.500, due to their integrated security capabilities.
== See also ==
OSI model
ISO Development Environment (ISODE)
Protocol Wars
== References == | Wikipedia/Government_Open_Systems_Interconnection_Profile |
The Baconian theory of Shakespearean authorship contends that Sir Francis Bacon, philosopher, essayist and scientist, wrote the plays that are attributed to William Shakespeare. Various explanations are offered for this alleged subterfuge, most commonly that Bacon's rise to high office might have been hindered if it became known that he wrote plays for the public stage. The plays are credited to Shakespeare, who, supporters of the theory claim, was merely a front to shield the identity of Bacon. All but a few academic Shakespeare scholars reject the arguments for Bacon authorship, as well as those for all other alternative authors.
The theory was first put forth in the mid-nineteenth century, based on perceived correspondences between the philosophical ideas found in Bacon’s writings and the works of Shakespeare. Legal and autobiographical allusions and cryptographic ciphers and codes were later found in the plays and poems to buttress the theory. The Baconian theory gained popularity and attention in the late nineteenth and early twentieth century. The academic consensus is that Shakespeare wrote the works bearing his name. Supporters of Bacon continue to argue for his candidacy through organisations, books, newsletters, and websites.
== Terminology ==
Sir Francis Bacon (1561 – 1626) was an English scientist, philosopher, courtier, diplomat, essayist, historian and successful politician who served as Solicitor General in 1607, Attorney General in 1613 and Lord Chancellor in 1618. Those who subscribe to the theory that Bacon wrote Shakespeare's works refer to themselves as "Baconians", while dubbing those who maintain the orthodox view that William Shakespeare of Stratford wrote his own works "Stratfordians".
Baptised as "Gulielmus filius Johannes Shakspere" (William son of John Shakspere) on 26 April 1564, the traditionally accepted author's surname had several variant spellings during his lifetime, but his signature is most commonly spelled "Shakspere". Baconians often use "Shakspere" or "Shakespeare" for the glover's son and actor from Stratford, and "Shake-speare" for the author to avoid the assumption that the Stratford man wrote the works attributed to him.
== History of Baconian theory ==
A pamphlet entitled The Story of the Learned Pig (circa 1786) and alleged research by James Wilmot have been described by some as the earliest instances of the claim that Bacon wrote Shakespeare's works, but the Wilmot research has been exposed as a forgery, and the pamphlet makes no direct reference to Bacon.
The idea was first proposed by Delia Bacon (no relation) in lectures and conversations with intellectuals in America and Britain. William Henry Smith was the first to publish the theory in a letter to Lord Ellesmere published in the form of a sixteen-page pamphlet entitled Was Lord Bacon the Author of Shakespeare's Plays? Smith suggested that several letters to and from Francis Bacon hinted at his authorship. A year later, both Smith and Delia Bacon published books expounding the Baconian theory. In Delia Bacon's work, "Shakespeare" was represented as a group of writers, including Francis Bacon, Sir Walter Raleigh and Edmund Spenser, whose agenda was to propagate an anti-monarchical system of philosophy by secreting it in the text.
In 1867, in the library of Northumberland House, John Bruce happened upon a bundle of bound documents, some of whose sheets had been ripped away. It had comprised a number of Bacon's oratories and disquisitions, and had also apparently held copies of the plays Richard II and Richard III, The Isle of Dogs and Leicester's Commonwealth, but these had been removed. On the outer sheet was scrawled repeatedly the names of Bacon and Shakespeare along with the name of Thomas Nashe. There were several quotations from Shakespeare and a reference to the word Honorificabilitudinitatibus, which appears in Shakespeare's Love's Labour's Lost and Nashe's Lenten Stuff. The Earl of Northumberland sent the bundle to James Spedding, who subsequently penned a thesis on the subject, with which was published a facsimile of the aforementioned cover. Spedding hazarded a 1592 date, making it possibly the earliest extant mention of Shakespeare.
After a diligent deciphering of the Elizabethan handwriting in Francis Bacon's notebook, known as the Promus of Formularies and Elegancies, Constance Mary Fearon Pott (1833–1915) argued that a number of the ideas and figures of speech in Bacon's book could also be found in the Shakespeare plays. Pott founded the Francis Bacon Society in 1885 and published her Bacon-centered theory in 1891. In this, Pott developed the view of W.F.C. Wigston, that Francis Bacon was the founding member of the Rosicrucians, a secret society of occult philosophers, and claimed that they secretly created art, literature and drama, including the entire Shakespeare canon, before adding the symbols of the rose and cross to their work. William Comyns Beaumont also popularised the notion of Bacon's authorship.
Other Baconians ignored the esoteric following that the theory was attracting. Bacon's reason for publishing under a pseudonym was said to be his need to secure his high office, possibly in order to complete his "Great Instauration" project to reform the moral and intellectual culture of the nation. The argument is that Bacon intended to set up new institutes of experimentation to gather the data to which his inductive method could be applied. He needed high office to gain the requisite influence, and being known as a dramatist, an allegedly low-class profession, would have impeded his prospects (see stigma of print). Realising that play-acting was used by the ancients "as a means of educating men's minds to virtue", and being "strongly addicted to the theatre" himself, he is claimed to have set out the otherwise-unpublished moral philosophical component of his Great Instauration project in the Shakespearean oeuvre. In this way, he could influence the nobility through dramatic performance with his observations on what constitutes "good" government.
By the end of the 19th century, Baconian theory had received support from a number of high-profile individuals. Mark Twain showed an inclination for it in his essay Is Shakespeare Dead?. Friedrich Nietzsche expressed interest in and gave credence to the Baconian theory in his writings. The German mathematician Georg Cantor believed that Shakespeare was Bacon. He eventually published two pamphlets supporting the theory in 1896 and 1897. By 1900, leading Baconians were asserting that their cause would soon be won. In 1916 a judge in Chicago ruled in a civil trial that Bacon was the true author of the Shakespeare canon. However, this proved to be the heyday of the theory. A number of new candidates were proposed in the early 20th century, notably Roger Manners, 5th Earl of Rutland, William Stanley, 6th Earl of Derby and Edward de Vere, 17th Earl of Oxford, dethroning Bacon as the sole alternative to Shakespeare. Furthermore, these and other alternative authorship theories failed to make any headway among academics.
=== Baconian cryptology ===
In 1880 Ignatius L. Donnelly, a U.S. Congressman, science fiction author and Atlantis theorist, wrote The Great Cryptogram, in which he argued that Bacon revealed his authorship of the works by concealing secret ciphers in the text. This produced a plethora of late 19th-century Baconian theorising, which developed the theme that Bacon had hidden encoded messages in the plays.
Baconian theory developed a new twist in the writings of Orville Ward Owen and Elizabeth Wells Gallup. Owen's book Sir Francis Bacon's Cipher Story (1893–95) claimed to have discovered a secret history of the Elizabethan era hidden in cipher-form in Bacon/Shakespeare's works. The most remarkable revelation was that Bacon was the son of Queen Elizabeth. According to Owen, Bacon revealed that Elizabeth was secretly married to Robert Dudley, Earl of Leicester, who fathered both Bacon himself and Robert Devereux, 2nd Earl of Essex, the latter ruthlessly executed by his own mother in 1601. Bacon was the true heir to the throne of England, but had been excluded from his rightful place. This tragic life-story was the secret hidden in the plays.
Elizabeth Wells Gallup developed Owen's views, arguing that a bi-literal cipher, which she had identified in the First Folio of Shakespeare's works, revealed concealed messages confirming that Bacon was the queen's son. This argument was taken up by several other writers, notably Alfred Dodd in Francis Bacon's Personal Life Story (1910) and C.Y.C. Dawbarn in Uncrowned (1913). In Dodd's account Bacon was a national redeemer, who, deprived of his ordained public role as monarch, instead performed a spiritual transformation of the nation in private through his work: "He was born for England, to set the land he loved on new lines, 'to be a Servant to Posterity'". In 1916 Gallup's financial backer George Fabyan was sued by film producer William Selig. Selig argued that Fabyan's advocacy of Bacon threatened the profits expected from a forthcoming film about Shakespeare. The judge determined that ciphers identified by Gallup proved that Francis Bacon was the author of the Shakespeare canon, awarding Fabyan $5,000 in damages.
Orville Ward Owen had such conviction of his own cipher method that, in 1909, he began excavating the bed of the River Wye, near Chepstow Castle, in the search of Bacon's original Shakespearean manuscripts. The project ended with his death in 1924. Nothing was found.
The American art collector Walter Conrad Arensberg (1878–1954) believed that Bacon had concealed messages in a variety of ciphers, relating to a secret history of the time and the esoteric secrets of the Rosicrucians, in the Shakespearean works. He published a variety of decipherments between 1922 and 1930, concluding finally that, although he had failed to find them, there certainly were concealed messages. He established the Francis Bacon Foundation in California in 1937 and left it his collection of Baconiana.
In 1957 the expert cryptographers William and Elizebeth Friedman published The Shakespearean Ciphers Examined, a study of all the proposed ciphers identified by Baconians (and others) up to that point. The Friedmans had worked with Gallup. They showed that the method is unlikely to have been employed by the author of Shakespeare's works, concluding that none of the ciphers claimed to exist by Baconians were valid.
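Whatever one makes of the authorship claims, the bi-literal cipher at the centre of Gallup's argument is itself a genuine steganographic scheme: each letter of a hidden message is encoded as a group of five binary symbols (conventionally written 'a' and 'b'), and the symbols are carried by two subtly different typefaces in an innocuous cover text. The following Python sketch shows the modern 26-letter variant, using letter case in the cover text to stand in for the two typefaces; the example strings are invented for illustration.

```python
# Minimal sketch of the bi-literal (Baconian) cipher, 26-letter variant.
# Each secret letter becomes five 'a'/'b' symbols; the symbols are hidden in a
# cover text by rendering its letters in one of two "typefaces", represented
# here by lower case ('a') and upper case ('b').
import string

def encode(secret):
    """Turn a secret word into a string of 'a'/'b' symbols, five per letter."""
    bits = ""
    for ch in secret.upper():
        if ch in string.ascii_uppercase:
            n = ord(ch) - ord("A")
            bits += format(n, "05b").replace("0", "a").replace("1", "b")
    return bits

def hide(bits, cover):
    """Carry the symbols in a cover text: upper case = 'b', lower case = 'a'."""
    out, i = [], 0
    for ch in cover:
        if ch.isalpha() and i < len(bits):
            out.append(ch.upper() if bits[i] == "b" else ch.lower())
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("cover text has too few letters")
    return "".join(out)

def reveal(stego):
    """Read the case pattern back out and decode five symbols at a time."""
    bits = "".join("b" if c.isupper() else "a" for c in stego if c.isalpha())
    letters = []
    for j in range(0, len(bits) - len(bits) % 5, 5):
        n = int(bits[j:j + 5].replace("a", "0").replace("b", "1"), 2)
        if n < 26:
            letters.append(chr(ord("A") + n))
    return "".join(letters)

# The cover sentence has exactly 25 letters, enough for the 5-letter secret.
stego = hide(encode("BACON"), "the quiet brown fox trots over")
print(stego)          # -> "the qUiet brown Fox TROts OVeR" (case carries the bits)
print(reveal(stego))  # -> "BACON"
```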
== Credentials for authorship ==
Early Baconians were influenced by Victorian bardolatry, which portrayed Shakespeare as a profound intellectual, "the greatest intellect who, in our recorded world, has left record of himself in the way of Literature", as Thomas Carlyle stated. In conformity with these ideas, Baconian writer Harry Stratford Caldecott held that the Shakespearean work was of such an incalculably higher calibre than that of contemporary playwrights that it could not possibly have been written by any of them. Even mainstream Shakespearean scholar Horace Howard Furness wrote that "Had the plays come down to us anonymously – had the labour of discovering the author been imposed upon future generations – we could have found no one of that day but Francis Bacon to whom to assign the crown. In this case it would have been resting now upon his head by almost common consent." "He was," agreed Caldecott, "all the things that the plays of Shakespeare demand that the author should be – a man of vast and boundless ambition and attainments, a philosopher, a poet, a lawyer, a statesman."
Baconians have also argued that Shakespeare's works show a detailed scientific knowledge that, they claim, only Bacon could have possessed. Certain passages in Coriolanus, first published in 1623, are alleged to refer to the circulation of the blood, a theory known to Bacon through his friendship with William Harvey, but not made public until after Shakespeare's death in 1616. They also argue that Bacon has been praised for his poetic style, even in his prose works.
Opponents of this view argue that Shakespeare's erudition was greatly exaggerated by Victorian enthusiasts, and that the works display the typical knowledge of a man with a grammar school education of the time. His Latin is derived from school books of the era. There is no record that any contemporary of Shakespeare referred to him as a learned writer or scholar. Ben Jonson and Francis Beaumont both refer to his lack of classical learning. If a university-trained playwright wrote the plays, it is hard to explain the multiple classical blunders in Shakespeare. Not only does he mistake the scansion of multiple classical names, in Troilus and Cressida he has Greeks and Trojans citing Plato and Aristotle a thousand years before their births. Willinsky suggests that most of Shakespeare's classical allusions were drawn from Thomas Cooper’s Thesaurus Linguae Romanae et Britannicae (1565), since a number of errors in that work are replicated in several of Shakespeare’s plays, and a copy of this book had been bequeathed to Stratford Grammar School by John Bretchgirdle for "the common use of scholars".
In addition, it is argued that Bacon's and Shakespeare's styles of writing are profoundly different, and that they use different vocabulary. Scott McCrea writes, "there is no answer for Bacon's different renderings of the same word – 'politiques' instead of 'politicians', or 'submiss' instead of the Author's 'submissive', or 'militar' instead of the Poet's 'military'. These are two different writers."
== Alleged coded references to Bacon's authorship ==
Baconians have claimed that some contemporaries of Bacon and Shakespeare were in on the secret of Bacon's authorship, and have left hints of this in their writings. "There can be no doubt," said Caldecott, "that Ben Jonson was in possession of the secret composition of Shakespeare's works." An intimate of both Bacon and Shakespeare – he was for a time the former's stenographer and Latin interpreter, and had his debut as a playwright produced by the latter – he was placed perfectly to be in the know. He did not name Shakespeare among the sixteen greatest wits of the epoch but wrote of Bacon that he "hath filled up all the numbers and performed that in our tongue which may be compared or preferred either to insolent Greece and haughty Rome so that he may be named, and stand as the mark and acme of our language." Jonson's First-Folio tribute to "The Author Mr William Shakespeare" [...] contains the same words, stating that Shakespeare is as good as "all that insolent Greece or haughty Rome" produced. According to Caldecott, "If Ben Jonson knew that the name 'Shakespeare' was a mere cloak for Bacon, it is easy enough to reconcile the application of the same language indifferently to one and the other. Otherwise," declared Caldecott, "it is not easily explicable."
Baconians Walter Begley and Bertram G. Theobald claimed that Elizabethan satirists Joseph Hall and John Marston alluded to Francis Bacon as the true author of Venus and Adonis and The Rape of Lucrece by using the sobriquet "Labeo" in a series of poems published in 1597–98. They take this to be a coded reference to Bacon on the grounds that the name derives from Rome's most famous legal scholar, Marcus Labeo, with Bacon holding an equivalent position in Elizabethan England. Hall denigrates several poems by Labeo and states that he passes off criticism to "shift it to another's name". This is taken to imply that he published under a pseudonym. In the following year Marston used Bacon's Latin motto in a poem and seems to quote from Venus and Adonis, which he attributes to Labeo. Theobald argued that this confirmed that Hall's Labeo was known to be Bacon and that he wrote Venus and Adonis. Critics of this view argue that the name Labeo derives from Attius Labeo, a notoriously bad poet, and that Hall's Labeo could refer to one of many poets of the time, or even be a composite figure, standing for the triumph of bad verse. Also, Marston's use of the Latin motto occurs in a different poem from the one which alludes to Venus and Adonis. Only the latter uses the name Labeo, so there is no link between Labeo and Bacon.
In 1645 a satirical poem (often attributed to George Wither) was published entitled The Great Assizes Holden in Parnassus by Apollo and his Assessours. This describes an imaginary trial of recent writers for crimes against literature. Apollo presides at a trial. Bacon ("The Lord Verulam, Chancellor of Parnassus") heads a group of scholars who act as the judges. The jury comprises poets and playwrights, including "William Shakespeere". One of the convicted "criminals" challenges the court, attacking the credentials of the jury, including Shakespeare, who is called a mere "mimic". Despite the fact that Bacon and Shakespeare appear as different individuals, Baconians have argued that this is a coded assertion of Bacon's authorship of the canon, or at least proof that he was recognised as a poet.
Various images, especially in the frontispieces or title pages of books, have been said to contain symbolism pointing to Bacon's authorship. A book on codes and cyphers entitled Cryptomenytices et Cryptographiae is said to depict Bacon writing a work and Shakespeare (signified by the spear he carries) receiving it. Other books with similar alleged coded imagery include the third edition of John Florio's translation of Montaigne, and various editions of works by Bacon himself.
== Gray's Inn revels 1594–95 and 1613 ==
Gray's Inn law school traditionally held revels over Easter 94 and '95, during which all performed plays were amateur productions. In his commentary on the Gesta Grayorum, a contemporary account of the 1594–95 revels, Desmond Bland informs us that they were "intended as a training ground in all the manners that are learned by nobility [...:] dancing, music, declamation, acting." James Spedding, the Victorian editor of Bacon's Works, thought that Sir Francis Bacon was involved in the writing of this account.
The Gesta Grayorum is a pamphlet of 68 pages first published in 1688. It informs us that The Comedy of Errors received its first known performance at these revels at 21:00 on 28 December 1594 (Innocents Day) when "a Comedy of Errors (like to Plautus his Menechmus) was played by the Players [...]." Whoever the players were, there is evidence that Shakespeare and his company were not among them: according to the royal Chamber accounts, dated 15 March 1595 – see Figure – he and the Lord Chamberlain's Men were performing for the Queen at Greenwich on Innocents Day. E.K. Chambers informs us that "the Court performances were always at night, beginning about 10pm and ending at 1am", so their presence at both performances is highly unlikely; furthermore, the Gray's Inn Pension Book, which recorded all payments made by the Gray's Inn committee, exhibits no payment either to a dramatist or to a professional company for this play. Baconians interpret this as a suggestion that, following precedent, The Comedy of Errors was both written and performed by members of the Inns of Court as part of their participation in the Gray's Inn celebrations. One problem with this argument is that the Gesta Grayorum refers to the players as "a Company of base and common fellows", which would apply well to a professional theatre company, but not to law students. But, given the jovial tone of the Gesta, and that the description occurred during a skit in which a "Sorcerer or Conjuror" was accused of causing "disorders with a play of errors or confusions", Baconians interpret it as merely a comic description of the Gray's Inn players.
Gray's Inn actually had a company of players during the revels. The Gray's Inn Pension Book records on 11 February 1595 that "one hyndred marks [£66.67] [are] to be layd out & bestowyd upon the gentlemen for their sports and shewes this Shrovetyde at the court before the Queens Majestie ...."
There is, most importantly to the Baconians' argument, evidence that Bacon had control over the Gray's Inn players. In a letter either to Lord Burghley, dated before 1598, or to the Earl of Somerset in 1613, he writes, "I am sorry the joint masque from the four Inns of Court faileth [.... T]here are a dozen gentlemen of Gray's Inn that will be ready to furnish a masque". Bacon contributed to the production of The Masque of Flowers. The dedication to a masque by Francis Beaumont performed at Whitehall in 1613 describes Bacon as the "chief contriver" of its performances at Gray's Inn and the Inner Temple. He also appears to have been their treasurer prior to the 1594–95 revels.
The discrepancy surrounding the whereabouts of the Chamberlain's Men is normally explained by theatre historians as an error in the Chamber Accounts. W. W. Greg suggested the following explanation:
"[T]he accounts of the Treasurer of the Chamber show payments to this company [the Chamberlain's Men] for performances before the Court on both 26 Dec. and 1 Jan. These accounts, however, also show a payment to the Lord Admiral's men in respect of 28 Dec. It is true that instances of two court performances on one night do occur elsewhere, but in view of the double difficulty involved, it is perhaps best to assume that in the Treasurer's accounts, 28 Dec. is an error for 27 Dec."
== Verbal parallels ==
=== Gesta Grayorum ===
The final paragraph of the Gesta Grayorum – see Figure – uses a "greater lessens the smaller" construction that occurs in an exchange from the Merchant of Venice (1594–97), 5.1.92–97:
Ner. When the moon shone we did not see the candle
Por. So doth the greater glory dim the less,
A substitute shines brightly as a King
Until a King be by, and then his state
Empties itself, as doth an inland brooke
Into the main of waters ...
The Merchant of Venice uses both the same theme as the Gesta Grayorum (see Figure) and the same three examples to illustrate it – a subject obscured by royalty, a small light overpowered by that of a heavenly body and a river diluted on reaching the sea. In an essay from 1603, Bacon makes further use of two of these examples: "The second condition [of perfect mixture] is that the greater draws the less. So we see that when two lights do meet, the greater doth darken and drown the less. And when a small river runs into a greater, it loseth both the name and stream." A figure similar to "loseth both the name and stream" occurs in Hamlet (1600–01), 3.1.87–88:
Hamlet. With this regard their currents turn awry
And lose the name of action.
Bacon was usually careful to cite his sources but does not mention Shakespeare once in any of his work. Baconians claim, furthermore, that, if the Gesta Grayorum was circulated prior to its publication in 1688 – and no one seems to know if it was – it was probably only among members of the Inns of Court.
=== Promus ===
In the 19th century, a waste book entitled the Promus of Formularies and Elegancies was discovered. It contained 1,655 handwritten proverbs, metaphors, aphorisms, salutations and other miscellany. Although some entries appear original, a number of them are drawn from the Latin and Greek writers Seneca, Horace, Virgil, Ovid; John Heywood's Proverbes (1562); Michel de Montaigne's Essays (1575), and various other French, Italian and Spanish sources. A section at the end aside, the writing was, by Sir Edward Maunde Thompson's reckoning, in Bacon's hand; indeed, his signature appears on folio 115 verso. Only two folios of the notebook were dated: the third sheet 5 December 1594 and the 32nd 27 January 1595 (1596). Bacon supporters found similarities between a great number of specific phrases and aphorisms from the plays and those written by Bacon in the Promus. In 1883 Mrs. Henry Pott edited Bacon's Promus and found 4,400 parallels of thought or expression between Shakespeare and Bacon.
Parallel 1
Parolles. So I say both of Galen and Paracelsus (1603–05 All's Well That Ends Well, 2.3.11)
Galens compositions not Paracelsus separations (Promus, folio 84, verso)
Parallel 2
Launce. Then may I set the world on wheels, when she can spin for her living (1589–93, The Two Gentlemen of Verona, 3.1.307–08)
Now toe on her distaff then she can spynne/The world runs on wheels (Promus, folio 96, verso)
Parallel 3
Hostesse. O, that right should o'rcome might. Well of sufferance, comes ease (1598, Henry IV, Part 2, 5.4.24–25)
Might overcomes right/Of sufferance cometh ease (Promus, folio 103, recto)
The orthodox view is that these were commonplace phrases; Baconians counter that the occurrence, in the last two examples, of two ideas from the same Promus folio within the same Shakespeare speech is unlikely to be coincidental.
=== Published work ===
There is an example in Troilus and Cressida (2.2.163) which shows that Bacon and Shakespeare shared the same interpretation of an Aristotelian view:
Hector. Paris and Troilus, you have both said well,
And on the cause and question now in hand
Have glozed, but superficially: not much
Unlike young men, whom Aristotle thought
Unfit to hear moral philosophy:
The reasons you allege do more conduce
To the hot passion of distemper'd blood
Bacon's similar take reads thus: "Is not the opinion of Aristotle very wise and worthy to be regarded, 'that young men are no fit auditors of moral philosophy', because the boiling heat of their affections is not yet settled, nor tempered with time and experience?"
What Aristotle actually said was slightly different: "Hence a young man is not a proper hearer of lectures on political science; [...] and further since he tends to follow his passions his study will be vain and unprofitable [...]." The added coincidence of heat and passion and the replacement of "political science" with "moral philosophy" is employed by both Shakespeare and Bacon. However, Shakespeare's play precedes Bacon's publication, allowing the possibility of the latter borrowing from the former.
== Arguments against Baconian theory ==
Mainstream academics reject the Baconian theory (along with other "alternative authorship" theories), citing a range of evidence – not least of all its reliance on a conspiracy theory. As far back as 1879, a New York Herald scribe bemoaned the waste of "considerable blank ammunition [...] in this ridiculous war between the Baconians and the Shakespearians", while Richard Garnett made the common objection that Bacon was far too busy with his own work to have had time to create the entire canon of another writer too, declaring that "Baconians talk as if Bacon had nothing to do but to write his play at his chambers and send it to his factotum, Shakespeare, at the other end of the town."
Horace Howard Furness wrote in a letter that, "Donnelly's theory about Bacon's authorship is too foolish to be seriously answered. I don't think he started it for any other purpose than notoriety. I believe he doesn't attempt to show that Bacon corrected the proof-sheets of the First Folio, and no human foresight could have told how the printed line would run, and have so regulated the MSS. To Donnelly's theory the pagination & the number of lines in a page are essential."
Mainstream academics reject the anti-Stratfordian claim that Shakespeare had not the education to write the plays. Shakespeare grew up in a family of some importance in Stratford; his father John Shakespeare, one of the wealthiest men in Stratford, was an Alderman and later High Bailiff of the corporation. It would be surprising had he not attended the local grammar school, as such institutions were founded to educate boys of Shakespeare's moderately well-to-do standing.
Stratfordian scholars also cite Occam's razor, the principle that the simplest and best-evidenced explanation (in this case that the plays were written by Shakespeare of Stratford) is most likely to be the correct one. A critique of all alternative authorship theories may be found in Samuel Schoenbaum's Shakespeare's Lives. Questioning Bacon's ability as a poet, Sidney Lee asserted: "[...] such authentic examples of Bacon's efforts to write verse as survive prove beyond all possibility of contradiction that, great as he was as a prose writer and a philosopher, he was incapable of penning any of the poetry assigned to Shakespeare."
At least one Stratfordian scholar claims Bacon privately disavowed the idea he was a poet, and, seen in the context of Bacon's philosophy, the "concealed poet" is something other than a dramatic or narrative poet. A mainstream historian of authorship doubt, Frank Wadsworth, asserted that the "essential pattern of the Baconian argument" consisted of "expressed dissatisfaction with the number of historical records of Shakespeare's career, followed by the substitution of a wealth of imaginative conjectures in their place."
In his 1971 essay "Bill and I," the author and scientific historian Isaac Asimov made the case that Bacon did not write Shakespeare's plays because certain portions of the Shakespeare canon show a misunderstanding of the prevailing scientific beliefs of the time that Bacon, one of the most intensely educated people of his time, would not have possessed. Asimov cites an excerpt from the last act of The Merchant of Venice, as well as the following excerpt from A Midsummer Night's Dream:
...The rude sea grew civil at her song,
And certain stars shot madly from their spheres,
to hear the sea maid's music. (Act 2, Scene 1, 152–154).
In the above example, the reference to stars shooting madly from their spheres was not in accordance with the then-accepted Greek astronomical belief that the stars all occupied the same sphere that surrounded the Earth as opposed to separate ones. While it was believed that additional ambient spheres existed, they were thought to contain the other bodies in the sky that move independently from the rest of the stars, i.e. the Sun, the Moon, and the planets that are visible to the naked eye (whose name makes its way into English from the Greek word planetes, meaning "wanderers," as in the wandering bodies that orbited the Earth independently from the fixed stars in their sphere).
== References in popular culture ==
Satirist Max Beerbohm published a cartoon entitled "William Shakespeare, his method of work", in his 1904 collection The Poet's Corner. Beerbohm depicts Shakespeare receiving the manuscript of Hamlet from Bacon. In Beerbohm's comic essay On Shakespeare's Birthday he declares himself to be unconvinced by Baconian theory, but wishes it were true because of the mischief it would cause – and because having one hero who was both an intellectual and a creative genius would be more exciting than two separate ones.
In Rudyard Kipling's 1926 short story "The Propagation of Knowledge" (later collected in Debits and Credits and The Complete Stalky & Co.), some schoolboys discover the Baconian theory and profess to be adherents, infuriating their English master.
J.C. Squire's "If It Had Been Discovered in 1930 that Bacon Really Did Write Shakespeare," published in If It Had Happened Otherwise (1931), is a comic farce wherein cultural upheavals, acts of quick thinking in rebranding tourist attractions, and additions of new slang terms to the English language occur when someone finds a box containing 17th-century documents proving that the plays of Shakespeare were in fact written by Bacon.
In P. G. Wodehouse's story The Reverent Wooing of Archibald, the dedicated "sock collector" Archibald Mulliner is told that Bacon wrote plays for Shakespeare. He remarks that it was "dashed decent of him", but suggests he may have only done it because he owed Shakespeare money. Archibald then listens to an elderly Baconian expounding an incomprehensible cipher theory. The narrator remarks that the speech was "unusually lucid and simple for a Baconian". Archibald nevertheless wishes he could escape by picking up a nearby battle-axe hanging on the wall and "dot this doddering old ruin one just above the imitation necklace".
In Caryl Brahms' and S. J. Simon's No Bed for Bacon, Bacon constantly intrudes on Shakespeare's rehearsals and lectures him on playwriting technique (with quotations from Bacon's actual works), until Shakespeare in exasperation asks "Master Bacon: do I write my plays, or do you?"
An episode of The New Adventures of Sherlock Holmes radio program (27 May 1946) starring Basil Rathbone and Nigel Bruce uses the Baconian Cipher to call attention to a case involving a dispute over authorship of Shakespeare's works.
In the NBC-TV cartoon Peabody's Improbable History (episode 49, 31 October 1961), Bacon accuses Shakespeare of stealing credit for Romeo and Juliet. Dialogue includes lines such as "Bacon, you'll fry for this!"
A couple of references were made in The Muppet Show. In the episode "Juliet Prowse", Miss Piggy asks her partner, "Do you prefer Shakespeare to Bacon?" Her partner replies, "I prefer anything to bacon." In the Panel Discussion segment of the episode "Florence Henderson", Kermit asks "Is Shakespeare, in fact, Bacon?"; Piggy took offence at this question.
The 1981 Cold War thriller The Amateur, written by Robert Littell, involves CIA agents using Bacon's biliteral cipher. In the course of the plot, Professor Lakos, a Baconian theorist and cipher-expert played by Christopher Plummer, assists the hero to uncover the truth. Littell published a novelisation of the story in the same year.
In 1973 Margaret Barsi-Greene published the "autobiography" of Bacon expounding the "Prince Tudor" version of Baconian theory. In 1992 this was adapted as the play I, Prince Tudor, wrote Shakespeare by the dramatist Paula Fitzgerald. In 2005 Ross Jackson published Shaker of the Speare: The Francis Bacon Story, a novel also based on the Prince Tudor model.
Cartoonist Frank Cho claims to be a believer in Baconian authorship, and his comic strips such as Liberty Meadows occasionally have characters act as his mouthpiece for this matter.
In the 2011 video game Portal 2, the Fact Sphere in the boss level states the following: "William Shakespeare did not exist. His plays were masterminded by Francis Bacon who used a ouija board to enslave playwriting ghosts."
"The Adventures of Shake and Bake", an SCTV skit that first aired 23 April 1982, parodies the Shakespeare/Bacon theory and features Dave Thomas as Shakespeare and Rick Moranis as Bacon.
Poet John S. Hall wrote a satirical poem about the theory, recording and releasing it on the 1991 album Real Men.
In the 2016 video game The Witness, the Baconian theory is brought up as part of the "Eclipse lecture".
The Curse of Oak Island on The History Channel frequently references the theory that the manuscripts of William Shakespeare, as written by Francis Bacon, are buried in The Money Pit on Oak Island.
The Canadian radio and television comedy duo of Wayne and Shuster often spoofed Shakespeare's plays. One of their most well-known skits, "Rinse the Blood Off My Toga" begins with the statement, "This play is presented with apologies to William Shakespeare, and Sir Francis Bacon, just in case."
== See also ==
Jacques Duchaussoy, author of Bacon, Shakespeare ou Saint-Germain (1962), a non-fiction book that discussed the possibility of Francis Bacon ghost writing for Shakespeare and Miguel de Cervantes.
Petter Amundsen's theory in the documentary Cracking the Shakespeare Code.
Ashton, Susanna. "Who Brings Home the Bacon? Shakespeare and Turn-of-the-Century American Authorship". American Periodicals, vol. 6, 1996, pp. 1–28.
== Notes ==
== References ==
== Bibliography ==
== Further reading ==
Fagone, Jason (2017). "Chapter 3". The Woman Who Smashed Codes. Key St.: Harper Collins. ISBN 978-0-06-243048-9.
== External links ==
"The Francis Bacon Society".
"Sir Francis Bacon's New Advancement of Learning". sirbacon.org.
"The Shakespeare Authorship Page".
"The George Fabyan Collection". Library of Congress. at the Library of Congress has works about the Shakespeare-Bacon authorship controversy, as Fabyan published writings on principles of Baconian ciphers and their application in sixteenth and seventeenth-century books. | Wikipedia/Baconian_theory |
The Cryptographic Module Validation Program (CMVP) is a joint American and Canadian security accreditation program for cryptographic modules. The program is available to any vendors who seek to have their products certified for use by the U.S. Government and regulated industries (such as financial and health-care institutions) that collect, store, transfer, share and disseminate "sensitive, but not classified" information. All of the tests under the CMVP are handled by third-party laboratories that are accredited as Cryptographic Module Testing Laboratories by the National Voluntary Laboratory Accreditation Program (NVLAP). Product certifications under the CMVP are performed in accordance with the requirements of FIPS 140-3.
The CMVP was established by the U.S. National Institute of Standards and Technology (NIST) and the Communications Security Establishment (CSE) of the Government of Canada in July 1995.
The Cryptographic Algorithm Validation Program (CAVP), which provides guidelines for validation testing for FIPS approved and NIST recommended cryptographic algorithms and components of algorithms, is a prerequisite for CMVP.
== Notes ==
== External links ==
NIST Cryptographic Module Validation Program
NIST FIPS 140-2 | Wikipedia/Cryptographic_Module_Validation_Program |
Kleptography is the study of stealing information securely and subliminally. The term was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96.
Kleptography is a subfield of cryptovirology and is a natural extension of the theory of subliminal channels that was pioneered by Gus Simmons while at Sandia National Laboratory. A kleptographic backdoor is synonymously referred to as an asymmetric backdoor. Kleptography encompasses secure and covert communications through cryptosystems and cryptographic protocols. This is reminiscent of, but not the same as, steganography, which studies covert communications through graphics, video, digital audio data, and so forth.
== Kleptographic attack ==
=== Meaning ===
A kleptographic attack is an attack which uses asymmetric cryptography to implement a cryptographic backdoor. For example, one such attack could be to subtly modify how the public and private key pairs are generated by the cryptosystem so that the private key could be derived from the public key using the attacker's private key. In a well-designed attack, the outputs of the infected cryptosystem would be computationally indistinguishable from the outputs of the corresponding uninfected cryptosystem. If the infected cryptosystem is a black-box implementation such as a hardware security module, a smartcard, or a Trusted Platform Module, a successful attack could go completely unnoticed.
A reverse engineer might be able to uncover a backdoor inserted by an attacker and, when it is a symmetric backdoor, even use it themselves. However, a kleptographic backdoor is by definition asymmetric, so the reverse engineer cannot use it: a kleptographic attack (asymmetric backdoor) requires a private key known only to the attacker in order to use the backdoor. Even a well-funded reverse engineer who gained complete knowledge of the backdoor would still be unable to extract the plaintext without the attacker's private key.
=== Construction ===
Kleptographic attacks can be constructed as a cryptotrojan that infects a cryptosystem and opens a backdoor for the attacker, or can be implemented by the manufacturer of a cryptosystem. The attack does not necessarily have to reveal the entirety of the cryptosystem's output; a more complicated attack technique may alternate between producing uninfected output and insecure data with the backdoor present.
=== Design ===
Kleptographic attacks have been designed for RSA key generation, the Diffie–Hellman key exchange, the Digital Signature Algorithm, and other cryptographic algorithms and protocols. SSL, SSH, and IPsec protocols are vulnerable to kleptographic attacks. In each case, the attacker is able to compromise the particular cryptographic algorithm or protocol by inspecting the information in which the backdoor is encoded (e.g., the public key, the digital signature, the key exchange messages, etc.) and then exploiting the logic of the asymmetric backdoor using their secret key (usually a private key).
A. Juels and J. Guajardo proposed a method (KEGVER) through which a third party can verify RSA key generation. This is devised as a form of distributed key generation in which the secret key is only known to the black box itself. This assures that the key generation process was not modified and that the private key cannot be reproduced through a kleptographic attack.
=== Examples ===
Four practical examples of kleptographic attacks (including a simplified SETUP attack against RSA) can be found in JCrypTool 1.0, the platform-independent version of the open-source CrypTool project. A demonstration of the prevention of kleptographic attacks by means of the KEGVER method is also implemented in JCrypTool.
The Dual_EC_DRBG cryptographic pseudo-random number generator from the NIST SP 800-90A is thought to contain a kleptographic backdoor. Dual_EC_DRBG utilizes elliptic curve cryptography, and NSA is thought to hold a private key which, together with bias flaws in Dual_EC_DRBG, allows NSA to decrypt SSL traffic between computers using Dual_EC_DRBG for example. The algebraic nature of the attack follows the structure of the repeated Dlog Kleptogram in the work of Young and Yung.
== References == | Wikipedia/Kleptographic_attack |
A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also referred to as a cryptographic random number generator (CRNG).
== Background ==
Most cryptographic applications require random numbers, for example:
key generation
initialization vectors
nonces
salts in certain signature schemes, including ECDSA and RSASSA-PSS
token generation
The "quality" of the randomness required for these applications varies. For example, creating a nonce in some protocols needs only uniqueness. On the other hand, the generation of a master key requires a higher quality, such as more entropy. And in the case of one-time pads, the information-theoretic guarantee of perfect secrecy only holds if the key material comes from a true random source with high entropy, and thus just any kind of pseudorandom number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-quality source, generally the operating system's randomness API. However, unexpected correlations have been found in several such ostensibly independent processes. From an information-theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, numbers are needed with more randomness than the available entropy can provide. Also, the processes to extract randomness from a running system are slow in actual practice. In such instances, a CSPRNG can sometimes be used. A CSPRNG can "stretch" the available entropy over more bits.
== Requirements ==
The requirements of an ordinary PRNG are also satisfied by a cryptographically secure PRNG, but the reverse is not true. CSPRNG requirements fall into two groups:
They pass statistical randomness tests:
Every CSPRNG should satisfy the next-bit test. That is, given the first k bits of a random sequence, there is no polynomial-time algorithm that can predict the (k+1)th bit with probability of success non-negligibly better than 50%. Andrew Yao proved in 1982 that a generator passing the next-bit test will pass all other polynomial-time statistical tests for randomness.
They hold up well under serious attack, even when part of their initial or running state becomes available to an attacker:
Every CSPRNG should withstand "state compromise extension attacks".: 4 In the event that part or all of its state has been revealed (or guessed correctly), it should be impossible to reconstruct the stream of random numbers prior to the revelation. Additionally, if there is an entropy input while running, it should be infeasible to use knowledge of the input's state to predict future conditions of the CSPRNG state.
For instance, if the PRNG under consideration produces output by computing bits of pi in sequence, starting from some unknown point in the binary expansion, it may well satisfy the next-bit test and thus be statistically random, as pi is conjectured to be a normal number. However, this algorithm is not cryptographically secure; an attacker who determines which bit of pi is currently in use (i.e. the state of the algorithm) will be able to calculate all preceding bits as well.
Most PRNGs are not suitable for use as CSPRNGs and will fail on both counts. First, while most PRNGs' outputs appear random to assorted statistical tests, they do not resist determined reverse engineering. Specialized statistical tests can be devised, tuned to such a PRNG, that show its random numbers not to be truly random. Second, for most PRNGs, when their state has been revealed, all past random numbers can be retrodicted, allowing an attacker to read all past messages, as well as future ones.
CSPRNGs are designed explicitly to resist this type of cryptanalysis.
== Definitions ==
In the asymptotic setting, a family of deterministic polynomial time computable functions
{\displaystyle G_{k}\colon \{0,1\}^{k}\to \{0,1\}^{p(k)}}
for some polynomial p, is a pseudorandom number generator (PRNG, or PRG in some references), if it stretches the length of its input ({\displaystyle p(k)>k} for any k), and if its output is computationally indistinguishable from true randomness, i.e. for any probabilistic polynomial time algorithm A, which outputs 1 or 0 as a distinguisher,
{\displaystyle \left|\Pr _{x\gets \{0,1\}^{k}}[A(G(x))=1]-\Pr _{r\gets \{0,1\}^{p(k)}}[A(r)=1]\right|<\mu (k)}
for some negligible function {\displaystyle \mu }. (The notation {\displaystyle x\gets X} means that x is chosen uniformly at random from the set X.)
There is an equivalent characterization: For any function family {\displaystyle G_{k}\colon \{0,1\}^{k}\to \{0,1\}^{p(k)}}, G is a PRNG if and only if the next output bit of G cannot be predicted by a polynomial time algorithm.
A forward-secure PRNG with block length {\displaystyle t(k)} is a PRNG {\displaystyle G_{k}\colon \{0,1\}^{k}\to \{0,1\}^{k}\times \{0,1\}^{t(k)}}, where the input string {\displaystyle s_{i}} with length k is the current state at period i, and the output ({\displaystyle s_{i+1}}, {\displaystyle y_{i}}) consists of the next state {\displaystyle s_{i+1}} and the pseudorandom output block {\displaystyle y_{i}} of period i, that withstands state compromise extensions in the following sense. If the initial state {\displaystyle s_{1}} is chosen uniformly at random from {\displaystyle \{0,1\}^{k}}, then for any i, the sequence {\displaystyle (y_{1},y_{2},\dots ,y_{i},s_{i+1})} must be computationally indistinguishable from {\displaystyle (r_{1},r_{2},\dots ,r_{i},s_{i+1})}, in which the {\displaystyle r_{i}} are chosen uniformly at random from {\displaystyle \{0,1\}^{t(k)}}.
Any PRNG {\displaystyle G\colon \{0,1\}^{k}\to \{0,1\}^{p(k)}} can be turned into a forward secure PRNG with block length {\displaystyle p(k)-k} by splitting its output into the next state and the actual output. This is done by setting {\displaystyle G(s)=G_{0}(s)\Vert G_{1}(s)}, in which {\displaystyle |G_{0}(s)|=|s|=k} and {\displaystyle |G_{1}(s)|=p(k)-k}; then G is a forward secure PRNG with {\displaystyle G_{0}} as the next state and {\displaystyle G_{1}} as the pseudorandom output block of the current period.
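As a concrete, purely illustrative instance of this splitting construction, the sketch below uses SHA-512 as a stand-in for G: the first half of each digest plays the role of the next state G0(s) and the second half the output block G1(s). The function name, the 32-byte split, and the example seed are assumptions made for this sketch; a hash function is used only as a convenient length-doubling map, not as a proven PRNG.

```python
import hashlib

def forward_secure_step(state: bytes) -> tuple[bytes, bytes]:
    """One period of the split construction: G(s) = G0(s) || G1(s).

    SHA-512 stands in for G; the first 32 bytes become the next state (G0)
    and the remaining 32 bytes the pseudorandom output block (G1).
    """
    digest = hashlib.sha512(state).digest()
    next_state, output_block = digest[:32], digest[32:]
    return next_state, output_block

# Usage: run a few periods from a secret initial state s1.
state = b"an example 32-byte secret seed.."   # in practice: taken from the OS RNG
for period in range(3):
    state, block = forward_secure_step(state)  # the old state can now be erased
    print(period, block.hex())
```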
== Entropy extraction ==
Santha and Vazirani proved that several bit streams with weak randomness can be combined to produce a higher-quality, quasi-random bit stream.
Even earlier, John von Neumann showed that a simple algorithm can remove a considerable amount of the bias in any bit stream; this algorithm should be applied to each bit stream before using any variation of the Santha–Vazirani design.
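A minimal sketch of von Neumann's debiasing step: read the input in non-overlapping pairs of independent bits, emit the first bit of each mismatched pair, and drop matching pairs. This illustrates bias removal only; it does not by itself make a stream cryptographically secure.

```python
import random

def von_neumann_debias(bits):
    """Yield unbiased bits from a stream of independent but biased bits.

    Pairs are read without overlap: (0, 1) -> 0, (1, 0) -> 1, equal pairs dropped.
    """
    it = iter(bits)
    for a, b in zip(it, it):          # consume the stream two bits at a time
        if a != b:
            yield a                    # first bit of a mismatched pair

# Example: a heavily biased source still yields roughly balanced output.
biased = (1 if random.random() < 0.8 else 0 for _ in range(10_000))
out = list(von_neumann_debias(biased))
print(len(out), sum(out) / len(out))   # the fraction of ones is close to 0.5
```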
== Designs ==
CSPRNG designs are divided into two classes:
Designs based on cryptographic primitives such as ciphers and cryptographic hashes
Designs based on mathematical problems thought to be hard
=== Designs based on cryptographic primitives ===
A secure block cipher can be converted into a CSPRNG by running it in counter mode using, for example, a special construct that the NIST in SP 800-90A calls CTR DRBG. CTR_DRBG typically uses Advanced Encryption Standard (AES).
AES-CTR_DRBG is often used as a random number generator in systems that use AES encryption.
The NIST CTR_DRBG scheme erases the key after the requested randomness is output by running additional cycles. This is wasteful from a performance perspective, but does not immediately cause issues with forward secrecy. However, realizing the performance implications, the NIST recommends an "extended AES-CTR-DRBG interface" for its Post-Quantum Cryptography Project submissions. This interface allows multiple sets of randomness to be generated without intervening erasure, only erasing when the user explicitly signals the end of requests. As a result, the key could remain in memory for an extended time if the "extended interface" is misused. Newer "fast-key-erasure" RNGs erase the key with randomness as soon as randomness is requested.
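The fast-key-erasure pattern can be sketched as follows, assuming the third-party Python `cryptography` package: each request runs AES in CTR mode, the first 32 bytes of keystream immediately replace the key, and the rest is returned as output. The class and variable names are invented for this example; it illustrates the erase-as-you-go idea only and is neither NIST CTR_DRBG nor a reviewed implementation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class FastKeyErasureRNG:
    """Toy fast-key-erasure generator: the key is overwritten on every call."""

    def __init__(self, seed32: bytes):
        assert len(seed32) == 32
        self._key = seed32                       # in practice: from the OS RNG

    def generate(self, n: int) -> bytes:
        # Produce 32 + n keystream bytes with AES-256 in CTR mode.
        encryptor = Cipher(algorithms.AES(self._key),
                           modes.CTR(b"\x00" * 16)).encryptor()
        stream = encryptor.update(b"\x00" * (32 + n))
        self._key = stream[:32]                  # erase: the old key is never reused
        return stream[32:]

rng = FastKeyErasureRNG(os.urandom(32))
print(rng.generate(16).hex())
```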
A stream cipher can be converted into a CSPRNG. This has been done with RC4, ISAAC, and ChaCha20, to name a few.
A cryptographically secure hash might also be a base of a good CSPRNG, using, for example, a construct that NIST calls Hash DRBG.
An HMAC primitive can be used as a base of a CSPRNG, for example, as part of the construct that NIST calls HMAC DRBG.
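As an illustration of the hash/HMAC family of designs, the sketch below chains HMAC-SHA-256 to produce output blocks from a secret key and an evolving value. It mimics only the generate step, loosely in the spirit of HMAC DRBG, and omits the instantiate, update, and reseed logic that the NIST construction requires; the function name is chosen for this example.

```python
import hmac, hashlib, os

def hmac_stream(key: bytes, v: bytes, nbytes: int) -> bytes:
    """Generate nbytes by repeatedly setting V = HMAC(K, V) and emitting V."""
    out = b""
    while len(out) < nbytes:
        v = hmac.new(key, v, hashlib.sha256).digest()
        out += v
    return out[:nbytes]

key, v = os.urandom(32), os.urandom(32)
print(hmac_stream(key, v, 48).hex())
```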
=== Number-theoretic designs ===
The Blum Blum Shub algorithm has a security proof based on the difficulty of the quadratic residuosity problem. Since the only known way to solve that problem is to factor the modulus, it is generally regarded that the difficulty of integer factorization provides a conditional security proof for the Blum Blum Shub algorithm. However the algorithm is very inefficient and therefore impractical unless extreme security is needed.
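A toy Blum Blum Shub sketch is shown below with deliberately tiny primes; real use would need primes of cryptographic size, both congruent to 3 mod 4, and a seed coprime to the modulus. The default values here are assumptions chosen only to show the recurrence.

```python
def bbs_bits(seed: int, p: int = 11, q: int = 23, count: int = 16):
    """Toy Blum Blum Shub: x_{n+1} = x_n^2 mod M, emitting the least significant bit.

    p and q must be primes congruent to 3 mod 4; these defaults are far too
    small for security and serve only to illustrate the recurrence.
    """
    m = p * q
    x = seed % m
    out = []
    for _ in range(count):
        x = pow(x, 2, m)
        out.append(x & 1)          # emit the parity (lowest bit) of the state
    return out

print(bbs_bits(seed=3))
```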
The Blum–Micali algorithm has a security proof based on the difficulty of the discrete logarithm problem but is also very inefficient.
Daniel Brown of Certicom wrote a 2006 security proof for Dual EC DRBG, based on the assumed hardness of the decisional Diffie–Hellman problem, the x-logarithm problem, and the truncated point problem. The 2006 proof explicitly assumes a lower outlen (amount of bits provided per iteration) than in the Dual_EC_DRBG standard, and that the P and Q in the Dual_EC_DRBG standard (which were revealed in 2013 to be probably backdoored by NSA) are replaced with non-backdoored values.
=== Practical schemes ===
"Practical" CSPRNG schemes not only include an CSPRNG algorithm, but also a way to initialize ("seed") it while keeping the seed secret. A number of such schemes have been defined, including:
Implementations of /dev/random in Unix-like systems.
Yarrow, which attempts to evaluate the entropic quality of its seeding inputs, and uses SHA-1 and 3DES internally. Yarrow was used in macOS and other Apple operating systems until about December 2019, after which Apple switched to Fortuna.
Fortuna, the successor to Yarrow, which does not attempt to evaluate the entropic quality of its inputs; it uses SHA-256 and "any good block cipher". Fortuna is used in FreeBSD. Apple changed to Fortuna for most or all Apple OSs beginning around Dec. 2019.
The Linux kernel CSPRNG, which uses ChaCha20 to generate data, and BLAKE2s to ingest entropy.
arc4random, a CSPRNG in Unix-like systems that seeds from /dev/random. It was originally based on RC4, but all main implementations now use ChaCha20.
CryptGenRandom, part of Microsoft's CryptoAPI, offered on Windows. Different versions of Windows use different implementations.
ANSI X9.17 standard (Financial Institution Key Management (wholesale)), which has been adopted as a FIPS standard as well. It takes as input a TDEA (keying option 2) key bundle k and (the initial value of) a 64-bit random seed s. Each time a random number is required, it executes the following steps:
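Roughly, each iteration encrypts a date/time value under the key, XORs the result with the seed and encrypts again to produce the output block, and then repeats the mix-and-encrypt step to update the seed. The sketch below follows that outline; it uses AES as the block cipher (the generalization noted in the next paragraph) and the third-party Python `cryptography` package, and it should be read as an illustration of the structure rather than a conforming X9.17 implementation, with all names chosen for this example.

```python
import os, struct, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _encrypt_block(key: bytes, block: bytes) -> bytes:
    # Single-block encryption; ECB is used here only as a raw block-cipher call.
    return Cipher(algorithms.AES(key), modes.ECB()).encryptor().update(block)

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x917_style_output(key: bytes, seed: bytes):
    """One iteration in the X9.17 style: returns (random_block, new_seed)."""
    dt = struct.pack(">Q", time.time_ns()).rjust(16, b"\x00")   # date/time block
    i = _encrypt_block(key, dt)                    # I  = E_k(DT)
    out = _encrypt_block(key, _xor(seed, i))       # R  = E_k(V xor I)
    new_seed = _encrypt_block(key, _xor(out, i))   # V' = E_k(R xor I)
    return out, new_seed

key, seed = os.urandom(32), os.urandom(16)
block, seed = x917_style_output(key, seed)
print(block.hex())
```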
Obviously, the technique is easily generalized to any block cipher; AES has been suggested. If the key k is leaked, the entire X9.17 stream can be predicted; this weakness is cited as a reason for creating Yarrow.
All these above-mentioned schemes, save for X9.17, also mix the state of a CSPRNG with an additional source of entropy. They are therefore not "pure" pseudorandom number generators, in the sense that the output is not completely determined by their initial state. This addition aims to prevent attacks even if the initial state is compromised.
== Standards ==
Several CSPRNGs have been standardized. For example:
FIPS 186-4
NIST SP 800-90A
NIST SP 800-90A Rev.1
ANSI X9.17-1985 Appendix C
ANSI X9.31-1998 Appendix A.2.4
ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference is maintained by NIST.
There are also standards for statistical testing of new CSPRNG designs:
A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special Publication 800-22.
== Security flaws ==
=== NSA kleptographic backdoor in the Dual_EC_DRBG PRNG ===
The Guardian and The New York Times reported in 2013 that the National Security Agency (NSA) inserted a backdoor into a pseudorandom number generator (PRNG) of NIST SP 800-90A, which allows the NSA to readily decrypt material that was encrypted with the aid of Dual EC DRBG. Both papers reported that, as independent security experts long suspected, the NSA had been introducing weaknesses into CSPRNG standard 800-90; this being confirmed for the first time by one of the top-secret documents leaked to The Guardian by Edward Snowden. The NSA worked covertly to get its own version of the NIST draft security standard approved for worldwide use in 2006. The leaked document states that "eventually, NSA became the sole editor". In spite of the known potential for a kleptographic backdoor and other known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Security continued using Dual_EC_DRBG until the backdoor was confirmed in 2013. RSA Security received a $10 million payment from the NSA to do so.
=== DUHK attack ===
On October 23, 2017, Shaanan Cohney, Matthew Green, and Nadia Heninger, cryptographers at the University of Pennsylvania and Johns Hopkins University, released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2 where hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm, stating "an attacker can brute-force encrypted data to discover the rest of the encryption parameters and deduce the master encryption key used to encrypt web sessions or virtual private network (VPN) connections."
=== Japanese PURPLE cipher machine ===
During World War II, Japan used a cipher machine for diplomatic communications; the United States was able to crack it and read its messages, mostly because the "key values" used were insufficiently random.
== References ==
== External links ==
RFC 4086, Randomness Requirements for Security
Java "entropy pool" for cryptographically secure unpredictable random numbers. Archived 2008-12-02 at the Wayback Machine
Java standard class providing a cryptographically strong pseudo-random number generator (PRNG).
Cryptographically Secure Random number on Windows without using CryptoAPI
Conjectured Security of the ANSI-NIST Elliptic Curve RNG, Daniel R. L. Brown, IACR ePrint 2006/117.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator, Daniel R. L. Brown and Kristian Gjosteen, IACR ePrint 2007/048. To appear in CRYPTO 2007.
Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator, Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/190.
Efficient Pseudorandom Generators Based on the DDH Assumption, Reza Rezaeian Farashahi and Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/321.
Analysis of the Linux Random Number Generator, Zvi Gutterman and Benny Pinkas and Tzachy Reinman.
NIST Statistical Test Suite documentation and software download. | Wikipedia/Cryptographic_pseudo-random_number_generator |
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key. Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security. There are many kinds of public-key cryptosystems, with different security goals, including digital signature, Diffie–Hellman key exchange, public-key key encapsulation, and public-key encryption.
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols that offer assurance of the confidentiality and authenticity of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME, and PGP. Compared to symmetric cryptography, public-key cryptography can be too slow for many purposes, so these protocols often combine symmetric cryptography with public-key cryptography in hybrid cryptosystems.
== Description ==
Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when (as is sensible cryptographic practice) keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public-key cryptosystem, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret.
The two best-known types of public key cryptography are digital signature and public-key encryption:
In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key. For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine; a minimal signing sketch appears after this list.
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message. For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources' messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message: 283 —it just conceals the content of the message; an encryption sketch also follows below.
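The software-update scenario can be illustrated with any standard signature scheme. The sketch below is one possible instantiation, not the method prescribed by the text: it happens to use Ed25519 from the third-party Python `cryptography` package, and the variable names are invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher: generate a key pair once and ship the public key with the software.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

update = b"binary contents of a software update"
signature = signing_key.sign(update)            # created with the private key

# Client: accept the update only if the signature verifies under the public key.
try:
    verify_key.verify(signature, update)
    print("update accepted as genuine")
except InvalidSignature:
    print("update rejected: not signed by the publisher")
```

Likewise, the journalist scenario could be instantiated with, for instance, RSA and OAEP padding; the algorithm choice and names below are again only an assumed example.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Journalist: generate a key pair and publish the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Source: encrypt a short message using only the public key.
ciphertext = public_key.encrypt(b"meet at the usual place", oaep)

# Only the holder of the private key can recover the message.
print(private_key.decrypt(ciphertext, oaep))
```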
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trust-able by all involved.
A "web of trust" decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach.
== Applications ==
The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality – a message that a sender encrypts using the recipient's public key, which can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication.
Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols.
== Hybrid cryptosystems ==
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used by symmetric-key cryptography to transmit data using the now-shared symmetric key for a symmetric key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric cryptography-based key exchange to share a server-generated symmetric key from the server to client has the advantage of not requiring that a symmetric key be pre-shared manually, such as on printed paper or discs transported by a courier, while providing the higher data throughput of symmetric key cryptography over asymmetric key cryptography for the remainder of the shared connection.
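A compact sketch of this hybrid pattern follows, assuming the third-party Python `cryptography` package: an ephemeral X25519 key exchange derives a shared symmetric key, which then protects the bulk data with ChaCha20-Poly1305. Real protocols such as TLS add authentication, transcript binding, and many other details omitted here; the `info` label and names are invented for the example.

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def derive_key(shared_secret: bytes) -> bytes:
    # Stretch the raw Diffie-Hellman output into a 32-byte symmetric key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hybrid-example").derive(shared_secret)

# Each side holds an asymmetric key pair; only public keys are exchanged.
client_priv, server_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
client_key = derive_key(client_priv.exchange(server_priv.public_key()))
server_key = derive_key(server_priv.exchange(client_priv.public_key()))
assert client_key == server_key                 # the now-shared symmetric key

# Bulk data is protected with fast symmetric authenticated encryption.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(client_key).encrypt(nonce, b"large payload ...", None)
print(ChaCha20Poly1305(server_key).decrypt(nonce, ciphertext, None))
```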
== Weaknesses ==
As with all security-related systems, there are various potential weaknesses in public-key cryptography. Aside from poor choice of an asymmetric key algorithm (there are few that are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Additionally, with the advent of quantum computing, many asymmetric key algorithms are considered vulnerable to attacks, and new quantum-resistant schemes are being developed to overcome the problem.
=== Algorithms ===
All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however.
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
=== Alteration of public keys ===
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"). These terms refer to reading the sender's private data in its entirety. A communication is particularly unsafe when interceptions can not be prevented or monitored by the sender.
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet service provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties; such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.
=== Public key infrastructure ===
One approach to prevent such attacks involves the use of a public key infrastructure (PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. An attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit, assuming that they were able to place themselves in the communication stream.
Despite its theoretical and potential problems, public key infrastructure is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, most websites utilize TLS for HTTPS).
Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
=== Unencrypted metadata ===
Most of the available public-key encryption software does not conceal metadata in the message header, which might include the identities of the sender and recipient, the sending date, subject field, and the software they use etc. Rather, only the body of the message is concealed and can only be decrypted with the private key of the intended recipient. This means that a third party could construct quite a detailed model of participants in a communication network, along with the subjects being discussed, even if the message body itself is hidden.
However, there has been a recent demonstration of messaging with encrypted headers, which obscures the identities of the sender and recipient, and significantly reduces the available metadata to a third party. The concept is based around an open repository containing separately encrypted metadata blocks and encrypted messages. Only the intended recipient is able to decrypt the metadata block, and having done so they can identify and download their messages and decrypt them. Such a messaging system is at present in an experimental phase and not yet deployed. Scaling this method would reveal to the third party only the inbox server being used by the recipient and the timestamp of sending and receiving. The server could be shared by thousands of users, making social network modelling much more challenging.
== History ==
During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys.
=== Anticipation ===
In his 1874 book The Principles of Science, William Stanley Jevons wrote:
Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know.
Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."
=== Classified discovery ===
In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it.
In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange.
The scheme was also passed to the US's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organization:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential.
—Ralph Benjamin
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
=== Public discovery ===
In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret-key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles, and was invented in 1974 and only published in 1978. This makes asymmetric encryption a rather new field in cryptography although cryptography itself dates back more than 2,000 years.
In 1977, a generalization of Cocks's scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978 in Martin Gardner's Scientific American column, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique. A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American.
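RSA's use of modular exponentiation can be shown with deliberately tiny numbers; the values below are purely illustrative (real keys use primes hundreds of digits long) and are not drawn from the article.

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi) -> 2753

message = 65
ciphertext = pow(message, e, n)            # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)          # decrypt with the private key d
assert recovered == message
print(ciphertext, recovered)
```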
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA and ECC.
== Examples ==
Examples of well-regarded asymmetric key techniques for varied purposes include:
Diffie–Hellman key exchange protocol
DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm
ElGamal
Elliptic-curve cryptography
Elliptic Curve Digital Signature Algorithm (ECDSA)
Elliptic-curve Diffie–Hellman (ECDH)
Ed25519 and Ed448 (EdDSA)
X25519 and X448 (ECDH/EdDH)
Various password-authenticated key agreement techniques
Paillier cryptosystem
RSA encryption algorithm (PKCS#1)
Cramer–Shoup cryptosystem
YAK authenticated key agreement protocol
Examples of asymmetric key algorithms not yet widely adopted include:
NTRUEncrypt cryptosystem
Kyber
McEliece cryptosystem
Examples of notable – yet insecure – asymmetric key algorithms include:
Merkle–Hellman knapsack cryptosystem
Examples of protocols using asymmetric key algorithms include:
S/MIME
GPG, an implementation of OpenPGP, and an Internet Standard
EMV, EMV Certificate Authority
IPsec
PGP
ZRTP, a secure VoIP protocol
Transport Layer Security standardized by IETF and its predecessor Secure Socket Layer
SILC
SSH
Bitcoin
Off-the-Record Messaging
== See also ==
== Notes ==
== References ==
== External links ==
Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s.
An account of how GCHQ kept their invention of PKE secret until 1997 | Wikipedia/Asymmetric_cryptography |
Kidnapping or abduction is the unlawful abduction and confinement of a person against their will, and is a crime in many jurisdictions. Kidnapping may be accomplished by use of force or fear, or a victim may be enticed into confinement by fraud or deception. Kidnapping is distinguished from false imprisonment by the intentional movement of the victim to a different location.
Kidnapping may be done to demand a ransom in exchange for releasing the victim, or for other illegal purposes. Kidnapping can be accompanied by bodily injury, which in some jurisdictions elevates the crime to aggravated kidnapping.
Kidnapping of a child may be a distinct crime, depending on jurisdiction.
== Motives ==
Kidnapping can occur for a variety of reasons, with motivations for the crime varying particularly based on the perpetrator.
=== Ransom ===
The kidnapping of a person, most often an adult, for ransom is a common motivation behind kidnapping. This method is primarily utilized by larger organizations, such as criminal gangs, terrorist organizations, or insurgent groups. Typically this is done for financial incentive, with sums of money varying depending on the victim or the method of kidnapping.
Mexican gangs are estimated to have made up to $250 million in kidnappings from Central American migrants.
According to a 2022 study by political scientist Danielle Gilbert, armed groups in Colombia engage in ransom kidnappings as a way to maintain the armed groups' local systems of taxation. The groups resort to ransom kidnappings to punish tax evasion and incentivize inhabitants not to shirk. A 2024 study argued that insurgent groups are more likely to engage in kidnappings "under two conditions: to generate support and reinstate bargaining capacity when organizations suffer military losses on the battlefield and to enforce loyalties and display strength when organizations face violent competition from other non-state actors."
Kidnapping has been identified as one source by which terrorist organizations have been known to obtain funding.
Express kidnapping is a method of abduction used in some countries, mainly from Latin America, where a small ransom, that a company or family can easily pay, is demanded. Express kidnapping is also used for an immediate ransom in which the victim is taken to an ATM and forced to give the captor money.
Tiger kidnapping occurs when a person is kidnapped, and the captor forces them to commit a crime such as robbery or murder. The victim is held hostage until the captor's demands are met. The term originates from the usually long preceding observation, like a tiger does when stalking prey. This is a method which has been used by the Real Irish Republican Army and the Continuity Irish Republican Army.
Virtual kidnapping is a form of kidnapping that has risen in recent years. Unlike other forms of kidnapping, virtual kidnapping does not actually involve abducting a victim. The scam involves calling numerous people on the phone and making them believe the caller is holding a loved one, such as a child, in order to obtain a quick ransom. These calls initially targeted Spanish-speaking communities in large cities such as Los Angeles and Houston, until around 2015, when the calls began to be directed at English speakers as well. Around 80 victims were identified as falling for this scam, with losses approaching $100,000. While most perpetrators behind this scam can be linked back to Mexico, one instance occurred in Houston, Texas, where Yanette Rodriguez Acosta was found guilty of extorting victims for large sums of money, which she would pick up at a set drop-off point. She was sentenced to seven years in prison, with an additional three years of supervision following her release.
In the past, and presently in some parts of the world (such as southern Sudan), kidnapping is a common means used to obtain slaves and money through ransom. In the 19th century, kidnapping in the form of shanghaiing (or "pressganging") men supplied merchant ships with sailors, whom the law considered unfree labour.
=== Pirates ===
Kidnapping on the high seas in connection with piracy has been increasing. It was reported that 661 crewmembers were taken hostage and 12 kidnapped in the first nine months of 2009. The IMB Piracy Reporting Centre recorded that 141 crew members were taken hostage and 83 were kidnapped in 2018.
=== Other ===
Other motivations behind kidnapping include the kidnapping of a person for sexual assault purposes, or situations of domestic violence. For example, the 2003 Domestic Violence Report in Colorado shows that in most instances of domestic violence, the victims, most typically white females, are taken from their residence by a present or former spouse or significant other. Often they are taken by force, though not with a weapon, and victims are freed without injury to their person.
Bride kidnapping is a term often applied loosely, to include any bride "abducted" against the will of her parents, even if she is willing to marry the "abductor". It still is traditional amongst certain nomadic peoples of Central Asia. It has seen a resurgence in Kyrgyzstan since the fall of the Soviet Union and the subsequent erosion of women's rights.
Kidnapping has sometimes been used by the family and friends of a cult member as a method to remove them from the cult and begin a deprogramming process to change their allegiance away from the group.
Motivations for kidnapping cannot always be easily defined. During the 1990s and afterward, for example, the New York divorce coercion gang was involved in a string of kidnappings. They would take Jewish husbands from their homes in New York and New Jersey and torture them in order for them to grant gittin, or religious divorces, to their wives. The gang is notorious for crimes of this nature. They were later apprehended for their crimes on October 9, 2013, in connection with a foiled kidnapping plot.
== By jurisdiction ==
Jurisdiction of kidnapping varies depending on the country, with each one having their own way of defining and prosecuting the crime. Some such countries with clearly defined laws on kidnapping include:
=== Australia ===
In Australia, kidnapping is a criminal offence, as defined by either the State Crimes Act or the Commonwealth Criminal Code. It is a serious indictable offence, and is punishable by up to 25 years' imprisonment.
=== Canada ===
Kidnapping that does not result in a homicide is a hybrid offence that comes with a maximum possible penalty of life imprisonment (18 months if tried summarily). A murder that results from kidnapping is classified as 1st-degree, with a sentence of life imprisonment that results from conviction (the mandatory penalty for murder under Canadian law).
=== Mexico ===
The General Law to Prevent and Punish Crimes of Kidnapping establishes a prison sentence of 20–40 years for an individual convicted of holding another person as a hostage. The prison term increases to 25–45 years if the kidnapping occurred with violence against the victims, and then increases to 25–50 years if the kidnapping was committed by members of public safety. If the kidnapping results in homicide, the prison sentence will be from 40 to 70 years.
=== Pakistan ===
In Pakistan, there are two kinds of kidnapping: Kidnapping from Pakistan and kidnapping from lawful guardianship. Penal Code 360 states that whoever conveys any person beyond the limits of Pakistan without the consent of that person or of some person legally authorized to consent on behalf of that person is said to kidnap that person from Pakistan. Penal Code 363 states that whoever kidnaps any person from Pakistan or lawful guardianship shall be punished with imprisonment of either description of a term which may extend to seven years and shall also be liable to a fine. Kidnapping with a motive of murder, hurt, slavery, or to the lust of any person shall be punished with imprisonment for life with rigorous imprisonment for a term which may extend to ten years and shall also be liable to a fine.
=== Netherlands ===
Article 282 prohibits hostage-taking, of which kidnapping is considered a form. Part 1 of Article 282 allows kidnappers to be sentenced to a maximum of eight years' imprisonment or a fine of the fifth category. Part 2 allows a maximum of nine years' imprisonment or a fine of the fifth category if serious injuries result. Part 3 allows a maximum of 12 years' imprisonment or a fine of the fifth category if the victim is killed. Part 4 allows the sentencing of people who collaborate in a kidnapping (for example, by proposing or making available a location where the victim is held); Parts 1, 2 and 3 also apply to them.
=== United Kingdom ===
Kidnapping is an offence under the common law of England and Wales. Lord Brandon said in 1984 R v D:
First, the nature of the offence is an attack on, and infringement of, the personal liberty of an individual. Secondly, the offence contains four ingredients as follows: (1) the taking or carrying away of one person by another; (2) by force or fraud; (3) without the consent of the person so taken or carried away; and (4) without lawful excuse.
In all cases of kidnapping of children, where it is alleged that a child has been kidnapped, it is the absence of the consent of that child which is material. This is the case regardless of the age of the child. A very small child will not have the understanding or intelligence to consent. This means that absence of consent will be a necessary inference from the age of the child. It is a question of fact for the jury whether an older child has sufficient understanding and intelligence to consent. Lord Brandon said: "I should not expect a jury to find at all frequently that a child under fourteen had sufficient understanding and intelligence to give its consent." If the child (being capable of doing so) did consent to being taken or carried away, the fact that the person having custody or care and control of that child did not consent to that child being taken or carried away is immaterial. If, on the other hand, the child did not consent, the consent of the person having custody or care and control of the child may support a defence of lawful excuse. It is known as Gillick competence.
Regarding restriction on prosecution, no prosecution may be instituted, except by or with the consent of the Director of Public Prosecutions, for an offence of kidnapping if it was committed against a child under the age of sixteen and by a person connected with the child, within the meaning of section 1 of the Child Abduction Act 1984. Kidnapping is an indictable-only offence. Kidnapping is punishable with imprisonment or fine at the discretion of the court. There is no limit on the fine or the term of imprisonment that may be imposed provided the sentence is not inordinate.
A parent should only be prosecuted for kidnapping their own child "in exceptional cases, where the conduct of the parent concerned is so bad that an ordinary right-thinking person would immediately and without hesitation regard it as criminal in nature".
=== United States ===
Law in the United States follows from English common law. Following the highly publicized 1932 Lindbergh kidnapping, Congress passed the Federal Kidnapping Act, which authorized the FBI to investigate kidnapping at a time when the Bureau was expanding in size and authority. The fact that a kidnapped victim may have been taken across state lines brings the crime within the ambit of federal criminal law.
Most states recognize different types of kidnapping and punish according to such factors as the location, duration, method, manner and purpose of the offense. There are several deterrents to kidnapping in the United States of America. Among these are:
The extreme logistical challenges involved in successfully exchanging the money for the return of the victim without being apprehended or surveilled.
Harsh punishment. Convicted kidnappers face lengthy prison terms. If a victim is brought across state lines, federal charges can be laid as well.
Good cooperation and information sharing between law enforcement agencies, and tools for spreading information to the public (such as the AMBER Alert system).
In 2009, Phoenix, Arizona reported over 300 cases of kidnapping, gaining it a reputation as America's kidnapping capital, according to the Los Angeles Times. Hundreds of kidnappings for ransom occurred in the city, as per the Times, most of them having connections to Mexican drug and human trafficking as a way to pay off unpaid debts. These statistics would have given the city the highest kidnapping rate of any U.S. city, and the second highest in the world after Mexico City. However, an investigation and later audit by the U.S. Department of Justice Inspector General found these statistics to be falsified: only 59 federally reportable kidnappings occurred in 2008, in comparison to the over 300 kidnappings claimed on grant applications. The falsified data can be attributed to a variety of issues within the southwestern United States as a whole, including misclassification by local police, lack of unified standards, a desire for federal grants, or the Mexican Drug War.
== Statistics ==
=== Countries with the highest rates ===
In 2021, the United Nations Office on Drugs and Crime reported that the United States was the country with most kidnappings, totaling 56,652. This is in comparison to 2010, when they were ranked sixth in the world (by absolute numbers, not per capita) for kidnapping by ransom, according to available statistics (after Colombia, Italy, Lebanon, Peru, and the Philippines).
Kidnapping for ransom is a common occurrence in various parts of the world today. In 2018, the United Nations found Pakistan and England had the highest number of kidnappings while New Zealand had the highest rate among the 70 countries for which data is available. As of 2007, that title belonged to Iraq with possibly 1,500 foreigners kidnapped. In 2004, it was Mexico, and in 2001, it was Colombia. Reports suggest a world total of 12,500–25,500 per year with 3,600 per year in Colombia and 3,000 per year in Mexico around the year 2000. However, by 2016, the number of kidnappings in Colombia had declined to 205 and it continues to decline.
Mexican numbers are hard to confirm because of fears of police involvement in kidnapping. According to Pax Christi, a Catholic peace movement, "Kidnapping seems to flourish particularly in fragile states and conflict countries, as politically motivated militias, organized crime and the drugs mafia fill the vacuum left by government".
Since 2019, the risk of kidnapping has risen worldwide as a result of the COVID-19 pandemic, an increase seen mostly in kidnappings for ransom. It stems from a variety of factors, including socioeconomic disparities, insufficient resources, and flawed judicial systems. The pandemic also put many families under economic strain, which pressured kidnappers to carry out more kidnappings and to raise ransom demands. After 2022, the diminishing effects of COVID-19 led many countries to welcome back in-person interaction, travel and tourism, and the connection between increased tourism and kidnapping is reflected in the rise of global kidnapping rates from 2019 through 2021–2023.
The highest recorded ransom demand in 2021 was $77.3 million, while in 2019, it was $28.7 million. Between those two years, the average global ransom demand increased 43%, while the median global ransom demand increased by 6%. In Sub-Saharan Africa, regions such as Congo (DRC), Nigeria, and South Africa are likely to maintain higher levels of kidnappings due to ongoing effects of religious extremist groups, recent genocides, and civil wars. While there is no hard evidence of which country had the most kidnappings in 2021, the American region (which includes Mexico) maintains its position as the region with the second highest kidnapping rates.
One notorious failed example of kidnap for ransom was the 1976 Chowchilla bus kidnapping, in which 26 children were abducted with the intention of bringing in a $5 million ransom. The children and driver escaped from an underground van without the aid of law enforcement. According to the Department of Justice, kidnapping makes up 2% of all reported violent crimes against juveniles.
=== By country ===
The annual number of recorded kidnappings per capita by country for the last available year, according to the United Nations Office on Drugs and Crime, is shown in the table below. Each country's definition of kidnapping may differ, and the table does not include unreported kidnappings.
== See also ==
== References ==
== Further reading ==
Lewis, Damien; Mende Nazer (2003). Slave: My True Story. New York: PublicAffairs. ISBN 1-58648-212-2. OCLC 54461588.
== External links ==
Media related to Kidnapping at Wikimedia Commons
The dictionary definition of kidnapping at Wiktionary
"Snatched: Notorious Kidnappings" Archived 2011-12-14 at the Wayback Machine—slideshow by Life magazine | Wikipedia/Kidnapping |
The Armed Forces of the Republic of Poland (Polish: Siły Zbrojne Rzeczypospolitej Polskiej, pronounced [ˈɕiwɨ ˈzbrɔjnɛ ʐɛt͡ʂpɔsˈpɔlitɛj ˈpɔlskʲɛj]; abbreviated SZ RP), also called the Polish Armed Forces and popularly called Wojsko Polskie in Poland ([ˈvɔj.skɔ ˈpɔl.skjɛ], roughly "the Polish Military"—abbreviated WP), are the national armed forces of the Republic of Poland.
They comprise five main service branches: the Polish Land Forces (Wojska Lądowe), the Polish Navy (Marynarka Wojenna), the Polish Air Force (Siły Powietrzne), the Polish Special Forces (Wojska Specjalne), and the Polish Territorial Defence Force (Wojska Obrony Terytorialnej), under the command of the Ministry of National Defence of Poland. According to SIPRI, Poland spent $38 billion on its defense budget in 2024, ranking 13th in the world. In 2023, Poland spent the greatest share of its GDP for military expenditures (3.9%) among all NATO members. With over 292,000 active personnel in 2025, the Polish Armed Forces are the third largest military in NATO, after Turkey and the USA.
Historically, the name Polish Armed Forces has been used since the early 1800s, but can also be applied to earlier periods. The Polish Legions and the Blue Army, composed of Polish volunteers from the United States and those who switched sides from the Central Powers, were formed during World War I. In the war's aftermath, the Polish Army was reformed from the remnants of the partitioning powers' forces and expanded significantly during the Polish–Soviet War of 1920.
World War II dramatically impacted Polish military structures: the initial defeat by the invasions of Nazi Germany and the Soviet Union led to the dispersion of Polish forces, much of which went underground. After 1945, the Polish People's Army (LWP) was formed under Soviet political control, with its standards aligned to those of the Warsaw Pact. The LWP's reputation suffered due to its role in political suppression both domestically and abroad, such as during the Prague Spring. Following the fall of communism, Poland shifted towards Western military standards, joining NATO in 1999, participating in missions in Iraq and Afghanistan, and undertaking substantial modernization of its forces.
== Mission ==
Pursuant to the national security strategy of Poland, the supreme strategic goal of Poland's military forces is to ensure favourable and secure conditions for the realization of national interests by eliminating external and internal threats, reducing risks, correctly assessing challenges, and ably using existing opportunities. The Republic of Poland's main strategic goals in the area of defence include:
Ensuring the independence and sovereignty of the Republic of Poland, as well as its territorial integrity and the inviolability of its borders
Defence and protection of all the citizens of the Republic of Poland
Creating conditions to ensure the continuity of the implementation of functions by public administration authorities and other entities competent in the area of national security, including entities responsible for running the economy and for other areas important for the life and security of its citizens
Creating conditions for the improvement of the state's national defence capabilities and ensuring defence readiness in allied structures
Developing partnership military cooperation with other states, especially neighbouring ones
Implementing commitments arising from Poland's NATO and European Union membership
Engaging in international crisis response operations led by NATO, the EU, the UN, and as a part of emergency coalitions
== History ==
=== Origins and establishment ===
The List of Polish wars chronicles Polish military involvements since the year 972. The present armed forces trace their roots to the early 20th century, yet the history of Polish armed forces in the broadest sense stretches back much further. After the partitions of Poland, from 1795 until 1918, the Polish military was recreated several times: during national insurrections such as the November Uprising of 1830 and the January Uprising of 1863, and during the Napoleonic Wars, which saw the formation of the Polish Legions in Italy. Congress Poland, part of the Russian Empire with a certain degree of autonomy, had a separate Polish army in the years 1815–1830, which was disbanded after the unsuccessful November Uprising. Large numbers of Poles also served in the armies of the partitioning powers: the Russian Empire, Austria-Hungary, and the German Empire.
During World War I, the Polish Legions were set up in Galicia, the southern part of Poland under Austrian occupation. They were disbanded after the Central Powers failed to provide guarantees of Polish independence after the war. General Józef Haller, the commander of the Second Brigade of the Polish Legions, switched sides in late 1917, and via Murmansk took part of his troops to France, where he created the Blue Army. It was joined by several thousand Polish volunteers from the United States. It fought on the French front in 1917 and 1918.
The Polish Army was recreated in 1918 from elements of the three separate Russian, Austro-Hungarian, and German armies, and armed with equipment left over from World War I. The force expanded to nearly 800,000 men during the Polish–Soviet War of 1919–1921, but was then reduced after peace was re-established.
At the onset of World War II, on 1 September 1939, Nazi Germany invaded Poland. Polish forces were overwhelmed by the German attack in September 1939, which was followed on 17 September 1939 by an invasion by the Soviet Union. Some Polish forces escaped from the occupied country and joined Allied forces fighting in other theaters, while those that remained in Poland splintered into guerrilla units of the Armia Krajowa ("Home Army") and other partisan groups which fought clandestinely against the foreign occupiers. Thus, from 1939 there were three threads to the Polish armed forces: the Polish Armed Forces in the West; the Armia Krajowa and other resistance organizations fighting the Germans in Poland; and the Polish Armed Forces in the East, which later became the post-war communist Polish People's Army (LWP).
The army's prestige continued to fall throughout the communist period, as it was used by the government to resettle ethnic minorities immediately after the war (Operation Vistula) and to violently suppress opposition on several occasions, including during the 1956 Poznań protests, the 1970 Polish protests, and martial law in Poland in 1981–1983. The LWP also took part in suppressing the 1968 democratization process in Czechoslovakia, commonly known as the Prague Spring. That same year, Marshal of Poland Marian Spychalski was asked to replace Edward Ochab as chairman of the Council of State, and General Wojciech Jaruzelski, at that time the Chief of the General Staff, was named to replace him. Jaruzelski, a known Soviet loyalist, was put in place by the Soviets in order to ensure that a trusted group of officers was in control of one of the least trusted armies in the Warsaw Pact.
=== Republic of Poland ===
After January 1990 and the collapse of the communist bloc, the name of the armed forces was changed to "Armed Forces of the Republic of Poland" to accord with the Polish state's new official name. Following the subsequent disbandment of the Warsaw Pact, Poland was admitted into NATO on 12 March 1999, and the Polish armed forces began a major reorganization effort in order to conform to the new Western standards.
==== Involvement in Afghanistan (2002–2014) ====
From 2002 until 2014, Polish military forces were part of the Coalition Forces that participated in the NATO-led ISAF mission in Afghanistan. Poland's contribution to ISAF was the country's largest since its entry into NATO. Polish forces also took part in the Iraq War: from 2003 to 2008, Polish military forces commanded the Multinational Division Central-South (MND-CS), located in the south-central occupation zone of Iraq. The division was made up of troops from 23 nations and totaled as many as 8,500 soldiers.
==== Invasion of Iraq (2003) ====
In March 2003, the Polish Armed Forces took part in the 2003 invasion of Iraq, deploying special forces and a support ship. Following the fall of Saddam Hussein's regime, the Polish Land Forces supplied a brigade and a division headquarters for the 17-nation Multinational Division Central-South, part of the U.S.-led Multi-National Force – Iraq. At its peak, Poland had 2,500 soldiers in the south of the country.
==== Peacekeeping missions ====
Other completed operations include 2005 'Swift Relief' in Pakistan, in which NATO Response Force-allocated personnel were despatched. Polish Land Forces personnel sent to Pakistan included a military engineers company, a platoon of the 1st Special Commando Regiment, and a logistics component from the 10th Logistics Brigade. Elsewhere, Polish forces were sent to MINURCAT in Chad and the Central African Republic (2007–2010).
As of 2008, Poland had deployed 985 personnel in nine separate UN peacekeeping operations (the United Nations Disengagement Observer Force, MINURSO, MONUC, UNOCI, UNIFIL, UNMEE, UNMIK, UNMIL, and UNOMIG).
==== Fully professional armed forces (2010) ====
Formerly set up according to Warsaw Pact standards, the Polish armed forces are now fully organized according to NATO requirements. Poland is also playing an increasingly large role as a major European peacekeeping power, both through various UN peacekeeping actions and through cooperation with neighboring nations in multinational formations and units such as the Multinational Corps Northeast and POLUKRBAT. As of 1 January 2010, the Armed Forces of the Republic of Poland have transitioned to a completely contract-based manpower supply system.
On 10 April 2010, a Polish Air Force Tu-154M crashed near Smolensk, Russia while in transit to a ceremony commemorating the Katyn massacre. On board the plane were the President (Commander-in-Chief), the Chief of Staff, all four Branch Commanders of the Polish Military, and a number of other military officials; all were killed.
In 2014–2015, the Armed Forces General Command and Armed Forces Operational Command were both established, superseding the previous individual service branch command structures.
==== Homeland Defence Act (2022) ====
Prompted in part by the 2022 Russian invasion of Ukraine, the Homeland Defence Act was unanimously passed by the Polish parliament on 17 March 2022 and signed into law by President Duda the following day. In accordance with the act, Poland intends to roughly double the size of the armed forces to 300,000 personnel and to spend at least 3% of GDP on defence in 2023. This includes increasing the size of the tank fleet by approximately 1,000 new tanks and adding 600 new howitzers to Poland's ground forces. Poland's Deputy Prime Minister and Defence Minister Mariusz Błaszczak said that it is Poland's goal to build the most powerful ground forces of all the North Atlantic Treaty Organization members in Europe.
== Equipment ==
Since 2011, the Armed Forces have been in the middle of a long-term modernization program. Plans involve new anti-aircraft missile systems, ballistic missile defence systems, a Lead-In Fighter Trainer (LIFT) aircraft, medium transport and combat helicopters, submarines, unmanned aerial vehicles, as well as self-propelled howitzers. Technical modernization plans for the years 2013 through 2022 have been put in place. During the 2013 to 2016 period of the plan, 37.8 billion PLN, or 27.8% of the period's military budget of 135.5 billion PLN, was invested in technical modernization.
Significant military equipment acquisitions are also planned through 2022, with the Ministry of Defence allocating 61 billion złoty for further modernization. A major feature of the program is the acquisition of around 1,200 unmanned aerial vehicles, including at least 1,000 with combat capabilities.
Additionally, new helicopters and air defense systems are to be procured along with five light vessels for the navy. A new submarine force is to be jointly operated with a NATO partner, and general upgrade and modernization efforts are aimed at the country's air defenses, naval forces, cyber warfare capabilities, armored forces, and territorial defense forces (to have 50,000 volunteer members).
== Organization ==
The Polish Armed Forces consist of 292,000+ active duty personnel. In 2023, troop strength in the five different branches was as follows:
Land Forces (Wojska Lądowe): 100,200, Reserve 40,000+
Air Force (Siły Powietrzne): 46,500
Navy (Marynarka Wojenna): 17,000
Special Forces (Wojska Specjalne): 4,000
Territorial Defence Force (Wojska Obrony Terytorialnej): 55,000
All five branches are supported by:
Military infrastructure: 25,500, including:
Ministry of National Defence of the Republic of Poland (Ministerstwo Obrony Narodowej)
Central support
Military command
Supply and military logistics
Military Gendarmerie (Żandarmeria Wojskowa): 4,500
== Traditions ==
The Polish armed forces have consistently held two yearly military parades (Polish: Defilada wojskowa), on Armed Forces Day and on National Independence Day. These parades take place on Ujazdów Avenue and near the Tomb of the Unknown Soldier on Piłsudski Square, respectively. The Armed Forces Day parades of 2007 and 2008 were the first grand military parades since the holiday was reinstated, and the parade has been held yearly since 2013. The first Polish military parade took place on 17 January 1945, and in 2019 the 3 May Constitution Day parade was officially reinstated.
Marsz Generalski and Warszawianka (1831) are the main military musical pieces performed at ceremonial events. While the former is a solemn march used during inspections and the march-on of the Polish flag, the latter is used strictly for march-pasts, military parades and other processions.
The Polish Armed Forces are the only military in the world to use a two-finger salute. The salute is given only while wearing a hat bearing the emblem of the Polish eagle, such as the rogatywka, reflecting the fact that the salute is directed at the emblem itself. It is performed with the middle and index fingers extended and touching each other, while the ring and little fingers are bent and touched by the thumb. The tips of the middle and index fingers touch the peak of the cap, the two fingers said to represent Honour and Fatherland (Honor i Ojczyzna).
Czołem Żołnierze (Polish for "Greetings, Soldiers") is the official military greeting of the armed forces, usually given by members of the government or the military establishment, as well as by visiting dignitaries, during ceremonial occasions. The soldiers usually respond with Czołem followed by the title or rank of the dignitary.
== See also ==
Polish Armed Forces (Second Polish Republic)
Main Directorate of Information of the Polish Army (GZI WP)
Internal Military Service (WSW)
Border Protection Troops (WOP)
Polish Legions (Napoleonic period)
Polish Military Organisation
Armia Ludowa
Gwardia Ludowa
Polish forces in the West
Polish forces in the East
Anders' Army
First Polish Army (1944–1945)
Armia Krajowa
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Remigiusz Wilk, "Work in Progress", Jane's Defence Weekly, 20 August 2012
== External links ==
Official website of the Ministry of National Defense (in Polish)
Official website of the Polish General Staff (in Polish)
Official website of the Armed Forces Operational Command (in Polish)
Official website of the Military Gendarmerie (in Polish)
Pictures of the Polish Army in Iraq (2003) at the Wayback Machine (archived November 27, 2005)
Polish forces in the West – study of the Polish participation in the liberation of Western Europe at the Wayback Machine (archived February 8, 2016) | Wikipedia/Polish_Armed_Forces |
A specialist law enforcement agency is a law enforcement agency which specialises in the types of laws it enforces, the types of activities it undertakes, or the geography in which it enforces laws, or a combination of these.
The specialisation may be adopted voluntarily through policy, consensus with other agencies, or logistical constraints; it may be imposed legally by law and jurisdiction; or it may be the result of the historical evolution of the agency, or a combination thereof.
For example, some agencies only enforce laws related to taxation or customs. Other agencies can only operate in certain areas, for example government-occupied buildings and land, or on waterways. Some agencies have otherwise unrestricted jurisdiction, but specialise in certain types of operations, for example highway patrols.
Specialities can include:
Buildings and lands occupied or explicitly controlled by the institution and the institution's personnel, and public entering the buildings and precincts of the institution. For example, United States Department of Veterans Affairs Police.
National border patrol, security, and integrity. For example, United States Border Patrol.
Water ways and bodies and/or coastal areas. For example, National Oceanic and Atmospheric Administration Fisheries Office of Law Enforcement.
Environment, parks, and/or heritage property. For example, United States Park Police.
Property, personnel, and/or postal items of a postal service. For example, United States Postal Inspection Service.
Buildings and other fixed assets. For example, Polizei beim Deutschen Bundestag.
Highways, roads, and/or traffic. For example, California Highway Patrol.
Customs, excise and gambling. For example, Australian Customs Service.
Anti-corruption. For example, Independent Commission Against Corruption (New South Wales).
Taxation. For example, Australian Taxation Office.
Railways, tramways, and/or rail transit systems. For example, Canadian Pacific Railway Police Service.
Vehicle safety and hazardous material transport laws and regulations, licensing, registration, insurance, etc. For example, Iowa Department of Transportation Motor Vehicle Enforcement Agency.
Protection of international or domestic VIPs, protection of significant state assets. For example, Special Protection Group.
Serious or complex fraud, commercial crime, fraud covering multiple lower level jurisdictions. For example, Serious Fraud Office (UK).
Paramilitary law enforcement, counter insurgency, and riot control. For example, Central Reserve Police Force.
Diplomatic personnel and facilities. For example, Diplomatic Security Service.
Coastal patrol, marine border protection, marine search and rescue. For example, Indian Coast Guard.
Enforcement of motor vehicles act, motor vehicles rules, motor vehicles taxation acts. For example, Motor Vehicles Departments (MVD) or Transport Departments of various Indian states.
Enforcement of laws related to the sale and consumption of alcohol and drugs, including liquor, narcotics, psychotropic substances and medicines that contain alcohol or narcotics. For example, Kerala Excise, Department of Excise & Taxation, Haryana, Telangana State Excise.
Investigation of corruption-related cases within the states. For example, Vigilance and Anti-Corruption Bureau, Anti-Corruption Bureau (Maharashtra).
Investigation of financial crimes such as money laundering, foreign exchange violations, economic frauds and other economic offences. For example, at the federal level, Enforcement Directorate (ED) of India; at the state level, Economic Offences Wing (EOW) of various Indian states.
Investigation and prosecution of terrorism-related matters. For example, National Investigation Agency.
See Specialist law enforcement agencies for a list of specialist agencies.
== See also ==
Law enforcement organization
Police tactical unit, such as SWAT
Special police forces | Wikipedia/Specialist_law_enforcement_agency |
Internal affairs (often known as IA) is a division of a law enforcement agency that investigates incidents and possible suspicions of criminal and professional misconduct attributed to members of the parent force. It is thus a mechanism of limited self-governance, "a police force policing itself". The names used by internal affairs divisions vary between agencies and jurisdictions; for example, they may be known as the internal investigations division (usually referred to as IID), professional standards or responsibility, inspector or inspectorate general, internal review board, or similar.
Due to the sensitive nature of this responsibility, in many departments, officers employed in an internal affairs unit are not in a detective command but report directly to the head of internal affairs who themselves typically report directly to the head of the parent agency, or to a board of civilian commissioners.
Internal affairs investigators are generally bound by stringent rules when conducting their investigations. For example, in California, the Peace Officers Bill of Rights (POBR) is a mandated set of rules found in the California Government Code which applies to most peace officers (law enforcement officers) within California. The bill, among other provisions, restricts where and when a peace officer may be interviewed regarding the subject of an investigation; codifies the right of the peace officer being questioned to have a personal and/or legal representative present at most proceedings; guarantees the right of appeal to any non-probationary peace officer subject to punitive action by the agency; and requires that a peace officer being interviewed regarding an alleged criminal act be advised of their constitutional rights and protections (i.e., that they be Mirandized).
== Function ==
The internal affairs function is not an enforcement function, but rather a policing function that investigates and reports only. The concept of internal affairs is very broad and unique to each police department. However, the sole purpose of having an internal affairs unit is to investigate and find the truth of what occurred when an officer is accused of misconduct. An investigation can also give insight into a policy that may itself have issues.
== Investigations ==
The circumstances of the complaint determine who will investigate it. The investigation of alleged misconduct by police officers can be conducted by the internal affairs unit, an executive police officer, or an outside agency. In the Salt Lake City Police Department, the Civilian Review Board will also investigate the complaint, but they will do so independently. When the investigation begins, everything is documented and all employees, complainants, and witnesses are interviewed. Any physical evidence is analyzed and past behaviors of the officer in question are reviewed. Dispatch tapes, police reports, tickets, audio, and videotapes are all reviewed if available. Many controversies arise because an officer investigating police misconduct may show favoritism or hold grudges, particularly when a single officer is conducting the investigation. Some departments hire uninvolved officers or include another department or a special unit to conduct the investigation.
=== Small agencies ===
Larger agencies have the resources to maintain separate units for internal affairs, but smaller agencies – which do not have that luxury – are more common, with 87% of police departments in the United States employing 25 or fewer sworn officers. Smaller agencies that do not have sufficient resources may have the executive officer, the accused's supervisor, or another police department conduct an investigation. The state police may also be asked to investigate criminal behavior, but they do not deal in minor misconduct or rule violation cases. However, allowing another department to investigate can reportedly result in lower morale among the officers, because it can appear to be an admission that the department cannot handle its own affairs.
== Civilian review board ==
Several police departments in the United States have been compelled to institute citizen review or investigation of police misconduct complaints in response to community perception that internal affairs investigations are biased in favor of police officers.
For example, San Francisco, California, has its Office of Citizen Complaints, created by voter initiative in 1983, in which citizens who have never been members of the San Francisco Police Department investigate complaints of police misconduct filed against members of the San Francisco Police Department. Washington, DC, has a similar office, created in 1999, known as the Office of Police Complaints.
In the state of Utah, the Internal Affairs Division must properly file a complaint before the committee can officially investigate. Complaints involving police misuse of force will be brought to the Civilian Review Board, but citizens can request the committee to investigate any other issues of misconduct.
== See also ==
Community policing
Civilian oversight of law enforcement
Infernal Affairs, 2002 Hong Kong film
List of police complaint authorities
Law Enforcement Conduct Commission (New South Wales, Australia)
Independent Broad-based Anti-corruption Commission (Victoria, Australia)
Special Investigations Unit (Ontario, Canada)
Fiosrú – the Office of the Police Ombudsman (Ireland)
Independent Police Conduct Authority (New Zealand)
Investigative Committee (Russia)
His Majesty's Inspectorate of Constabulary and Fire & Rescue Services (UK)
Independent Office for Police Conduct (UK)
Provost (military police)
== References == | Wikipedia/Internal_affairs_(law_enforcement) |
A rapid reaction force / rapid response force (RRF), quick reaction force / quick response force (QRF), immediate reaction force (IRF), rapid deployment force (RDF), or quick maneuver force (QMF) is a military or law enforcement unit capable of responding to emergencies in a very short time frame.
== Types ==
=== Quick reaction force ===
A quick reaction force (QRF) is an armed military unit capable of rapidly responding to developing situations, usually to assist allied units in need. They are equipped to respond to any type of emergency within a short time frame, often only a few minutes, based on unit standard operating procedures (SOPs). Cavalry units are frequently postured as QRFs, with a main mission of security and reconnaissance.
A quick reaction force belongs directly to the commander of the unit it is created from and is typically held in reserve.
=== Rapid deployment force ===
A rapid deployment force (RDF) is a military formation that is capable of fast deployment outside their country's borders. They typically consist of well-trained military units (special forces, paratroopers, marines, etc.) that can be deployed fairly quickly or on short notice, usually from other major assets and without requiring a large organized support force immediately.
== List ==
=== Rapid reaction force ===
The Allied Rapid Reaction Corps (ARRC) is a NATO rapid reaction force, established in 1992. A successor to the British Army's I Corps, the ARRC is capable of rapidly deploying a NATO headquarters for operations and crisis response.
The European Gendarmerie Force (EUROGENDFOR) is a European rapid reaction force under the European Union, established in 2006. An alliance of gendarmerie forces from Italy, France, the Netherlands, Poland, Portugal, Romania, and Spain, it serves as a unified intervention force of European militarized police.
The European Rapid Operational Force (EUROFOR) was a European rapid reaction force under the European Union and Western European Union, established in 1995 and composed of military units from Italy, France, Portugal, and Spain. EUROFOR was tasked with performing duties outlined in the Petersberg Tasks. EUROFOR deployed to Kosovo from 2000 to 2001, and North Macedonia as part of EUFOR Concordia in 2003. After being converted into an EU Battlegroup, EUROFOR was dissolved in 2012.
The European Rapid Reaction Force (ERRF) was the intended result of the Helsinki Headline Goal. Though many media reports suggested the ERRF would be a European Union army, the Helsinki Headline Goal was little more than headquarters arrangements and a list of theoretically available national forces for a rapid reaction force.
The NATO Response Force (NRF) is a NATO rapid reaction force, established in 2003. Distinct from the ARRC, the NRF comprises land, sea, air, and special forces units that can be deployed quickly.
The concept of a United Nations rapid reaction force was proposed in the mid-1990s by several commentators and officials, including Secretary-General Boutros Boutros-Ghali. The UN rapid reaction force would consist of personnel stationed in their home countries, but they would have the same training, equipment, and procedures, and would conduct joint exercises. The force would remain at high readiness at all times so as to quickly deploy them where necessary.
Riot Police Units (RPU) are the rapid reaction forces of Japanese prefectural police. They combine riot police, police tactical units, and disaster response squads under one unit. Each prefectural police force operates RPUs, sometimes under different names.
Rapid Action Force of India
Army Deployment Force of Singapore
2nd Quick Response Division
ROK Marine Corps Quick Maneuver Force
The Immediate Response Force (IRF) is an American rapid reaction force composed of units from the United States Army and United States Air Force. They are capable of responding to any location in the world within 18 hours of notice.
The Joint Rapid Reaction Force (JRRF) was a British Armed Forces capability concept created in 1999. The force was composed of units from all three branches of the British military, and was able to rapidly deploy anywhere in the world at short notice. However, the War in Afghanistan and 2003 invasion of Iraq siphoned British personnel and equipment, leaving the JRRF with insufficient forces. The JRRF was succeeded by the Combined Joint Expeditionary Force in 2010 and the Joint Expeditionary Force in 2014.
People's Armed Police 1st Mobile Corps
People's Armed Police 2nd Mobile Corps
=== Rapid deployment force ===
Argentine Rapid Deployment Force
3rd Brigade
Rapid Deployment Force
Egyptian Rapid Deployment Forces
Finnish Rapid Deployment Force
Rapid Forces Division
Indonesian Air Force Quick Reaction Forces Command
NATO Rapid Deployable Corps – Italy
Central Readiness Regiment
ROKMC Quick Maneuver Force
10th Parachute Brigade
Norwegian Telemark Battalion
710th Special Operations Wing
Rapid Reaction Brigade
Air Mobile Brigade
The Rapid Deployment Force (RDF) is divided into two types. The first is the airborne force, responsible for the 31st Infantry Regiment, King Bhumibol's Guard. The second consists of infantry battalions designated as RDF, with each Army Area having one RDF battalion:
The 3rd Infantry Battalion, 19th Infantry Regiment of the 9th Infantry Division, designated the RDF of the First Army Area
The 1st Infantry Battalion, 6th Infantry Regiment of the 6th Infantry Division, designated the RDF of the Second Army Area
The 1st Infantry Battalion, 7th Infantry Regiment of the 7th Infantry Division, designated the RDF of the Third Army Area
The 2nd Infantry Battalion, 15th Infantry Regiment of the 5th Infantry Division, designated the RDF of the Fourth Army Area
The Rapid Deployment Joint Task Force (RDJTF) was a former United States Department of Defense joint task force. It was formed in 1979 as the Rapid Deployment Force (RDF), envisioned as a mobile force that could quickly deploy U.S. forces to any location outside the usual American deployment areas of Western Europe and East Asia, soon coming to focus on the Middle East. It was inactivated in 1983 and reorganized as the United States Central Command.
Marine Expeditionary Unit
XVIII Airborne Corps
75th Ranger Regiment
Russian Airborne Forces
Separate Operational Purpose Division
EU Battlegroup
People's Liberation Army Navy Marine Corps
People's Liberation Army Air Force Airborne Corps
== See also ==
Expeditionary warfare
Power projection
== References == | Wikipedia/Rapid_reaction_force |
A territorial police force is a police service that is responsible for an area defined by sub-national boundaries, distinguished from other police services which deal with the entire country or a type of crime. In countries organized as federations, police responsible for individual sub-national jurisdictions are typically called state or provincial police.
== Canada ==
The Royal Canadian Mounted Police (RCMP/GRC) is the federal-level police service. It also acts as the provincial police service in every province except Ontario and Quebec, which operate their own provincial police services, and Newfoundland and Labrador, which is served by the Royal Newfoundland Constabulary. The RCMP is also contracted to act as the territorial police force in Nunavut, Yukon and the Northwest Territories, in addition to being the federal police force in those Canadian territories.
== Spanish Sahara ==
A separate Sahrawi indigenous unit serving the Spanish colonial government was the Policia Territorial. This gendarmerie corresponded to the Civil Guard in metropolitan Spain. It was commanded by Spanish officers and included Spanish personnel of all ranks.
== United Kingdom ==
In the United Kingdom (UK) the phrase is gaining increased official (but not yet statutory) use to describe the collection of forces responsible for general policing in areas defined with respect to local government areas. The phrase "Home Office Police" is commonly used, but this is often inaccurate or inadequate, as the words naturally exclude forces outside England and Wales but include some special police forces over which the Home Secretary has some power.
The police forces referred to as "territorial" are those whose police areas are defined by:
Police Act 1996 – England and Wales, later legislation matched the Metropolitan Police District to the boundary of Greater London
Police and Fire Reform (Scotland) Act 2012 – Scotland
Police (Northern Ireland) Act 2000 – Northern Ireland (renaming the Royal Ulster Constabulary the Police Service of Northern Ireland without changing the area served)
Members of territorial police forces have jurisdiction in one of the three distinct legal systems of the United Kingdom – either England and Wales, Scotland or Northern Ireland. A police officer of one of the three legal systems has all the powers of a constable throughout their own legal system but limited powers in the other two legal systems. Certain exceptions, in which full police powers cross the border with the officer, include when officers are providing planned support to another force, such as at the G8 Conference in Scotland in 2005 or COP26; when officers of the Metropolitan Police are on protection duties anywhere in the United Kingdom; and when taking a person to or from a prison.
== United Nations ==
The United Nations (UN) has operated territorial police forces in those parts of countries which have been under control of the UN from time to time. These were usually formed from police personnel on loan from member countries. A recent example is the use of such a force in East Timor in substitution for Indonesian National Police.
== References ==
== External links ==
The Police in Scotland | Wikipedia/Territorial_police_force |
A law enforcement agency (LEA) has powers, which other government subjects do not, to enable the LEA to undertake its responsibilities. These powers are generally in one of six forms:
Exemptions from laws
Intrusive powers, for search, seizure, and interception
Legal deception
Use of force and constraint of liberty
Jurisdictional override
Direction
The types of powers and law exemptions available to a LEA vary from country to country.
They depend on the social, legal, and technical maturity of the country, and on the resources generally available to LEAs in that country. Some countries may have no laws at all regarding a particular type of activity by their subjects, while other countries might have very stringent laws on the same type of activity. This significantly affects the legal structures, if any, that govern how an LEA can operate, and how the LEA's use of powers is overviewed.
Law enforcement agency powers are part of a broad range of techniques used for law enforcement, many of which require no specific legislative support or independent overview.
See law enforcement techniques for a list of other law enforcement techniques.
== Overview of use of powers ==
The powers and law exemptions granted to an LEA allow the LEA to act in a way which would typically be regarded as violating the rights of law complying subjects. Accordingly, to minimise the risk that these powers and law exemptions might be misused or abused, many countries have in place strong overview regimes to monitor the use and application of the LEA's powers and law exemptions. Overview regimes can involve judicial officers, be provided by internal audit services, by independent authorities, by the LEA's governing body, or some other civil mechanism.
Generally, the use of powers and law exemptions fall into two loose categories:
those that are deemed to be intrusive and might significantly impact the rights of a subject
those that are relatively unintrusive and do not significantly impact the rights of a subject.
While a LEA's powers and law exemptions are not usually explicitly categorised in this way, they do fall into these two broad categories and can be identified by the types and level of overview applied to the use of the powers and law exemptions. The former group can have strong and multiple levels of overview, typically for every exercising of the power or law exemption, while the latter group can have no overview other than an exceptional response to some extreme misuse of the power or law exemption.
Due to their nature, specifically allocated powers have a greater impact on subjects, whereas law exemptions have a lesser impact. For example, the use of deadly force is normally an explicitly granted power. This is distinct from the carrying of a firearm in a public place, which is normally a law exemption. The discharging of a firearm is normally subject to significant overview, whether or not personal injury or property damage occurred, whereas the carrying of a firearm in compliance with the law exemption requires no reporting or overview.
=== Approval and internal overview ===
Typically, personnel of an LEA cannot exercise a power of their own volition. In order to exercise a power, an officer of an LEA must argue for and obtain approval from either a senior officer of the LEA or a judicial officer. The senior officer or the judicial officer has a responsibility to ensure that the use of a power is necessary and does not unnecessarily violate the rights of subjects.
=== Judicial overview ===
Judicial overview of LEA powers is typically in the form of the LEA having to provide grounds for the exercising of the power to a judicial officer in order to get approval, each time the power is to be exercised. Typically, in line with the separation of authority, the judicial officer is external to the LEA. Judicial overview is typically required for the more intrusive powers. The judicial approval for the use of a power is usually called a warrant, for example, a search warrant for the intrusive search and seizure of a subject's property, or a telecommunications interception warrant to listen to and copy subjects' communications.
=== Civil and external overview ===
Civil overview can be applied to internally approved use of powers and also to judicial approval of powers. Civil overview normally takes the form of after-the-event reporting, to open fora accessible to the public, on the frequency and effectiveness of the use of the powers.
External overview can be done by auditors, or by specifically created general overview authorities, for example ombudsmen. The reports of these entities on the LEA's use of powers or law exemptions, or at least summaries of them, are typically available to the public. For example, the Australian Federal Police's controlled operations are subject to open civil review by its governing body, the Parliament of Australia.
=== Internal review ===
Internal review involves formal reviews done by the LEA itself on the use of its powers and law exemptions. Often, as part of this process, every time a certain power is used an incident report detailing the circumstances requiring the use of the power and the outcomes of its use must be completed, for example a use of force report. These reports are then collated and analysed to determine whether there are any patterns of misuse or overuse, or whether process changes or personnel training are required.
=== Abuse of powers and law exemptions ===
The abuse or perceived abuse of powers and law exemptions by an LEA, or a lack of openness and reporting on their use, can give rise to a lack of confidence and respect in the LEA among subjects. Where an LEA abuses its powers and law exemptions and there is no or ineffective overview of its activities, the LEA can come to be referred to as a secret police agency.
== Communications interception ==
The interception of communications is usually the interception of electronic voice or data connections, and is typically called telecommunications interception (TI). In some countries TI is called wire tapping. Other forms of communications interception can be intercepting radio transmissions and opening physical mail items.
In a civil society or democratic society, governing bodies give their law enforcement agencies specific powers to intercept telecommunications via specific laws, for example, in Australia with the Telecommunications (Interception and Access) Act 1979, in the United Kingdom via the Regulation of Investigatory Powers Act 2000, and in the United States with 18 USC §2516.
The use of TI powers by a LEA is typically subject to strong overview from outside of the LEA. For example, in Australia an ombudsman has strong intrusive powers to monitor and review an LEA using TI.
== Intrusive seizure ==
Intrusive seizure can include:
Property seizure by entering premises
Taking from a subject finger prints, palm prints, sole prints, teeth impressions, lip prints, blood samples, DNA samples, etc.
Information seizure by accessing information systems
=== Property seizure by entering premises ===
Law enforcement agencies are specifically given the authority to seize property; one example is the Federal Bureau of Investigation.
The power to search and seize property is typically granted in an instance via an instrument called a search warrant.
== Legal deception ==
Legal deception powers can include:
Assumed identities
Controlled operations
Intrusive surveillance
=== Assumed identities ===
Law enforcement agency personnel when they take on assumed identities are often referred to as covert officers or undercover officers. The use of such methods in open societies are typically explicitly authorised and is subject to overview, for example in Australia under the Crimes Act 1914, and in the United Kingdom under the Regulation of Investigatory Powers Act 2000.
=== Controlled operations ===
Controlled operations are actions by a law enforcement agency to allow a criminal act to occur, for example the importation of illicit substances, so that as many as possible of the subjects involved in the act can be identified during the process of importation. A controlled operation typically includes the substitution, by the law enforcement agency, of all or the significant majority of the illicit substances with similar-looking but benign materials.
In open societies, controlled operations are specifically legislated for to be used by law enforcement agencies and are subject to overview, for example in Australia.
=== Intrusive surveillance ===
Intrusive surveillance typically means entering or interfering with the private and confidential space and property of a subject. It typically requires a law enforcement agency to be specifically enabled by legislation, for example in Australia and in the United Kingdom. Recent advances in technology have made surveillance easier to achieve and, in some instances, even commonplace.
== Use of force ==
Use of force powers can include:
Non-lethal force, including either non-lethal weapons or temporary physical holds and contact to immobilize subjects
Lethal force
Immobilisation and restraint
== Constraint of liberty ==
Constraint of liberty powers can include:
Arrest. The power to arrest is typically granted in an instance via an instrument called an arrest warrant. The power to arrest is also typically granted to a member of an LEA whenever the member has probable cause to do so. Open governments publicly give their law enforcement agencies the power to arrest subjects; for example, in the United States, the FBI has the power of arrest under 18 USC §3052.
Detention
== Jurisdictional override ==
Sometimes a law enforcement agency will not normally have the jurisdictional authority to be involved in enforcing compliance with, or investigating non-compliance with, a law unless that law or the non-complying subject crosses multiple jurisdictions, or the non-compliance is especially severe.
For example, in the United States while kidnapping is typically initially a state jurisdictional matter, the United States' Federal Bureau of Investigation is allowed to take responsibility when the matter crosses state boundaries by virtue of the act(s) then becoming a federal matter.
== Direction ==
The power of direction allows a LEA to direct a subject to either carry out some act or provide information with the subject having no right to refuse, even if the outcome is to incriminate the subject, that is, any explicit, implied, or de facto right to silence is overridden.
When provided to an LEA in a civil or democratic society, this power is typically counterbalanced by the rule that the subject cannot be prosecuted as a result of complying with the direction; however, they can be prosecuted if they do not comply, or if other law enforcement outcomes have the same effect independently of the direction. A subject can also be prosecuted using information obtained from another subject under direction.
An example of this power of direction is held by the Australian Crime Commission. The Australian Federal Police (AFP) also has a power of direction, but this is limited to being applied to AFP appointees.
== References ==
== See also ==
Photography is Not a Crime
Police stop, search, detention and arrest powers in the United Kingdom
Proactive policing
Law enforcement in the United States
Federal law enforcement in the United States | Wikipedia/Law_enforcement_agency_powers |
Watchmen were organised groups of men, usually authorised by a state, government, city, or society, to deter criminal activity and provide law enforcement as well as traditionally perform the services of public safety, fire watch, crime prevention, crime detection, and recovery of stolen goods. Watchmen have existed since earliest recorded times in various guises throughout the world and were generally succeeded by the emergence of formally organised professional policing.
== Early origins ==
An early reference to a watch can be found in the Bible where the Prophet Ezekiel states that it was the duty of the watch to blow the horn and sound the alarm. (Ezekiel 33:1-6)
The Roman Empire made use of the Praetorian Guard and the Vigiles, literally the watch.
== Watchmen in England ==
=== The problem of the night ===
In the late 1600s, the streets in London were dark and had a shortage of good quality artificial light. It had been recognized for centuries that the coming of darkness to the unlit streets of a town brought a heightened threat of danger, and that the night provided cover to the disorderly and immoral, and to those bent on robbery or burglary or who in other ways threatened physical harm to people in the streets and in their houses.
In the 13th century, the anxieties created by darkness gave rise to rules about who could use the streets after dark and the formation of a night watch to enforce them. These rules had for long been underpinned in London and other towns by the curfew, the time (announced by the ringing of a bell) at which the gates closed and the streets were cleared. These rules, where codified by law, would come to be known as the nightwalker statutes; such statutes empowered and required night watchmen (and their assistants) to arrest those persons found about the town or city during hours of darkness. Only people with good reason to be out could then travel through the city. Anyone outside at night without reason or permission was considered suspect and potentially criminal.
Allowances were usually made for people who had some social status on their side. Lord Feilding clearly expected to pass through London's streets untroubled at 1am one night in 1641, and he quickly became piqued when his coach was stopped by the watch, shouting huffily that it was a 'disgrace' to stop someone of such high standing as he, and telling the constable in charge of the watch that he would box him on the ears if he did not let his coach carry on back to his house. 'It is impossible' to 'distinguish a lord from another man by the outside of a coach', the constable said later in his defence, 'especially at unreasonable times'.
=== Formation of watchmen ===
The Ordinance of 1233 required the appointment of watchmen. The Assize of Arms of 1252, which required the appointment of constables to summon men to arms, quell breaches of the peace, and to deliver offenders to the sheriff, is cited as one of the earliest creations of an English police force, as was the Statute of Winchester of 1285. In 1252 a royal writ established a watch and ward with royal officers appointed as shire reeves:
By order of the King of England the Winchester Act Mandating The Watch. Part Four and the King commandth that from henceforth all Watches be made as it hath been used in past times that was to wit from the day of Ascension unto the day of St. Michael in every city by six men at every gate in every borough by twelve men in every town by six or four according to the number of inhabitants of the town. They shall keep the Watch all night from sun setting unto sun rising. And if any stranger do pass them by them he shall be arrested until morning and if no suspicion be found he shall go quit.
Later in 1279 King Edward I formed a special guard of 20 sergeants at arms who carried decorated battle maces as a badge of office. By 1415 a watch was appointed to the Parliament of England and in 1485 King Henry VII established a household watch that became known as the Beefeaters.
As of the 1660s, it was already common practice to avoid night-time service in the watch by paying for a substitute. Substitution had become so common by the late 17th century that the night watch was virtually by then a fully paid force.
An act of Common Council, known as 'Robinson's Act' from the name of the sitting lord mayor, was promulgated in October 1663. It confirmed the duty of all householders in the City to take their turn at watching in order 'to keep the peace and apprehend night-walkers, malefactors and suspected persons'. For the most part the act of Common Council of 1663 reiterated the rules and obligations that had long existed. The number of watchmen required for each ward, it declared, was to be the number 'established by custom' – in fact, by an act of Common Council of 1621. Even though it had been true before the civil war that the watch had already become a body of paid men, supported by what were in effect the fines collected from those with an obligation to serve, the Common Council did not acknowledge this in the confirming act of Common Council of 1663.
The act of Common Council of 1663 confirmed that watch on its old foundations, and left its effective management to the ward authorities. The important matter to be arranged in the wards was who was going to serve and on what basis. How the money was to be collected to support a force of paid constables, and by whom, were crucial issues. The 1663 act of Common Council left it to the ward beadle or a constable, and it seems to have been increasingly the case that, rather than individuals paying directly for a substitute when their turn came to serve, the eligible householders were asked to contribute to a watch fund that supported hired men.
From the mid-1690s the City authorities made several attempts to replace Robinson's Act and establish the watch on a new footing. Though they did not say it directly, the overwhelming requirement was to get quotas adjusted to reflect the reality that the watch consisted of hired men rather than citizens doing their civic duty—the assumption upon which the 1663 act of Common Council, and all previous acts, had been based.
The implications and consequences of changes in the watch were worked out in practice and in legislation in two stages between the Restoration and the middle decades of the 18th century. The first involved the gradual recognition that a paid (and full-time) watch needed to be differently constituted from one made up of unpaid citizens, a point accepted in practice in legislation passed by the Common Council in 1705, though it was not articulated in as direct a way.
The fact that the 1705 act of Common Council called for watchmen to be strong and able-bodied men seems further confirmation that the watch was now expected to be made up of hired hands rather than every male householder serving in turn. The act of Common Council of 1705 laid out the new quotas of watchmen and the disposition of watch-stands agreed for each ward. To discourage the corruption that had been blamed for earlier under-manning, it forbade constables to collect and disburse the money paid in for hired watchmen: that was now supposed to be the responsibility of the deputy and common councilmen of the ward.
The second stage was the recognition that watchmen could not be sustained without a major shift in the way local services were financed. This led to the City's acquisition of taxing power by means of an act of Parliament in 1737 which changed the obligation to serve in person into an obligation to pay to support a force of salaried men. Under the new act, the ward authorities also continued to hire their own watchmen and to make whatever local rules seemed appropriate—establishing, for example, the places in their wards where the watchmen would stand and the beats they would patrol. But the implementation of the new Watch Act did have the effect of imposing some uniformity on the watch over the whole City, making in the process some modest incursions into the local autonomy of the wards. One of the leading elements in the regime that emerged from the implementation of the new act was an agreement that every watchman would be paid the same amount and that the wages should be raised to thirteen pounds a year.
From 1485 to the 1820s, in the absence of a police force, it was the parish-based watchmen who were responsible for keeping order in London's streets.[1]
=== Duties ===
Night watchmen patrolled the streets from 9 or 10 pm until sunrise, and were expected to examine all suspicious characters. These controls continued in the late 17th century. Guarding the streets to prevent crime, to watch out for fires, and – despite the absence of a formal curfew – to ensure that suspicious and unauthorised people did not prowl around under cover of darkness was still the duty of night watch and the constables who were supposed to command them.
The principal task of the watch in 1660 and for long after continued to be the control of the streets at night imposing a form of moral or social curfew that aimed to prevent those without legitimate reason to be abroad from wandering the streets at night. That task was becoming increasingly difficult in the 17th century because of the growth of the population and variety of ways in which the social and cultural life was being transformed. The shape of the urban day was being altered after the Restoration by the development of shops, taverns and coffee-houses, theatres, the opera and other places of entertainment. All these places remained open in the evening and extended their hours of business and pleasure into the night.
The watch was affected by this changing urban world, since policing the night streets became more complicated when larger numbers of people were moving around. And what was frequently thought to be the poor quality of the watchmen—and in time, the lack of effective lighting—came commonly to be blamed when street crimes and night-time disorders seemed to be growing out of control.
Traditionally, householders served in the office of constable by appointment or rotation. During their year of office they performed their duties part-time alongside their normal employment. Similarly, householders were expected to serve by rotation on the nightly watch. From the late seventeenth century, however, many householders avoided these obligations by hiring deputies to serve in their place. As this practice increased, some men were able to make a living out of acting as deputy constables or as paid night watchmen. In the case of the watch, this procedure was formalized in many parts of London by the passage of "Watch Acts", which replaced householders' duty of service by a tax levied specifically for the purpose of hiring full-time watchmen. Some voluntary prosecution societies also hired men to patrol their areas.
=== Reputation ===
While the societies for the reformation of manners showed there was a good deal of support for the effective policing of morality, they also suggested that the existing mechanisms of crime control were regarded by some as ineffective.
Constable Dogberry's men from Much Ado About Nothing by Shakespeare, who would 'rather sleep than talk', may be dismissed as merely a dramatic device or a caricature, but successful dramatists nevertheless work with characters who strike a chord with their audience. A hundred years later such complaints were still commonplace. Daniel Defoe wrote four pamphlets and a broadsheet on the issue of street crime in which, among other things, he roundly attacked the efficacy of the watch and called for measures to ensure it 'be compos'd of stout, able-body'd Men, and of those a sufficient Number'.
Watchmen on roads leading to London had a reputation for clumsiness in the late 1580s. It was a temptation on cold winter nights to slip away early from watching stations to catch some sleep. Constables in charge sometimes let watches go home early. 'The late placing and early dischargering' of night-watches concerned Common Council in 1609 and again three decades later when someone sent out to spy on watches reported that they 'break up longe before they ought'. 'The greatest parte of constables' broke up watches 'earlie in the morninge' at exactly the time 'when most danger' was 'feared' in the long night, leaving the dark streets to thieves.
Watchmen often counted off the hours until sunrise on chilly nights. Alehouses offered some warmth, even after curfew bells told people to drink up. A group of watchmen sneaked into a 'vitlers' house one night in 1617 and stayed 'drinking and taking tobacco all night longe'. Like other officers, watchmen could become the focus for trouble themselves, adding to the hullabaloo at night instead of ordering others to keep the noise down and go to bed. And as by day, there were more than a few crooked officers policing the streets at night, quite happy to turn a blind eye to trouble for a bribe. Watchman Edward Gardener was taken before the recorder with 'a common nightwalker' – Mary Taylor – in 1641 after he 'tooke 2s to lett' her 'escape' when he was escorting her to Bridewell late at night. Another watchman from over the river in Southwark took advantage of the tricky situation people suddenly found themselves in if they stumbled into the watch, 'demanding money [from them] for passing the watch'.
A common complaint in the 1690s was that watchmen were inadequately armed. This was another aspect of the watch in the process of being transformed. The Common Council acts required watchmen to carry halberds, and some still did so through the late seventeenth century, but it seems clear that few did: the halberd was no longer a useful weapon for a watch that was supposed to be mobile. By the second quarter of the 18th century, watchmen were instead equipped with a staff, along with their lantern.
=== Watch houses ===
Another step in the evolution of the watch involved building 'watch howses' as the country lurched towards revolution after 1640. A City committee was asked to look into the question 'what watchhouses are necessary' and where 'for the safety of this cittye' in 1642. Workmen began building watch houses in strategic spots soon after. They provided assembly-points for watchmen to gather to hear orders for the night ahead, somewhere to shelter from 'extremitye of wind and weather', and holding-places for suspects until morning when justices examined the night's catch. There were watch houses next to Temple Bar (1648), 'neere the Granaryes' by Bridewell (1648), 'neere Moregate' (1648), and next to St. Paul's south door (1649). They were not big; the one on St. Paul's side was 'a small house or shed'. This was a time of experimentation, and people (including those in authority) were learning how to make best use of these new structures in their midst.
=== Policing the night streets ===
The watchmen patrolled the streets at night, calling out the hour, keeping a lookout for fires, checking that doors were locked and ensuring that drunks and other vagrants were delivered to the watch constable. However, their low wages and the uncongenial nature of the job attracted a fairly low standard of person, and they acquired a possibly exaggerated reputation for being old, ineffectual, feeble, drunk or asleep on the job.
London had a system of night policing in place before 1660, although it was improved over the next century through better lighting, administration, finance, and better and more regular salaries. But the essential elements of the night-watch were functioning fully by the middle of the seventeenth century.
During the 1820s, mounting crime levels and increasing political and industrial disorder prompted calls for reform, led by Sir Robert Peel, which culminated in the demise of the watchmen and their replacement by a uniformed metropolitan police force.
John Gray, the owner of Greyfriars Bobby, was a nightwatchman in the 1850s.
== Watchmen around the world ==
=== United States of America ===
The first form of societal protection in the United States was based on practices developed in England. The City of Boston was the first settlement in the thirteen colonies to establish a night watch, in 1631 (replaced in 1838); Plymouth, Massachusetts, followed in 1633 (replaced in 1861), and New York (then New Amsterdam; its watch replaced in 1845) and Jamestown followed in 1658.
With the unification of laws and centralization of state power (e.g. the Municipal Police Act of 1844 in New York City, United States), such formations became increasingly incorporated into state-run police forces (see metropolitan police and municipal police).
=== Philippines ===
In the Philippines, Barangay watchmen called "Tanod" are common. Their role is to serve as frontline law enforcement officers in Barangays, especially those far from city or town centres. They are mainly supervised by the Barangay Captain and may be armed with a bolo knife.
== See also ==
City watch
Dogberry
Nightwalker statute
Night-watchman state
Security officer
Watch committee
== References ==
^ This can be verified by England's Old Bailey court records.
== Further reading ==
David Barrie, Police in the Age of Improvement: Police Development and the Civic Tradition in Scotland, 1775-1865, Willan Publishing, 2008, ISBN 1-84392-266-5. Chapter "Watching and Warding", Google Print, p.34-41
Second Thoughts are Best
Augusta Triumphans
== Bibliography ==
Beattie, J. M. (2001). Policing and Punishment in London 1660-1750. Oxford University Press. ISBN 0-19-820867-7
Ekirch A. R. (2001). At Day’s Close: A History of Nighttime, London: Weidenfeld and Nicolson.
Clarkson, Charles Tempest; Richardson, J. Hall (1889). Police!. OCLC 60726408
"Constables and the Night Watch". .oldbaileyonline,retrieved 22 November 2015, http://www.oldbaileyonline.org/static/Policing.jsp
Critchley, Thomas Alan (1978). A History of Police in England and Wales.
Griffiths, Paul (2010). Lost Londons: Change, Crime, and Control in the Capital City, 1550-1660. Cambridge University Press. ISBN 9780521174114.
Delbrück, Hans (1990). Renfroe, Walter J. Jr, ed. Medieval Warfare. History of the Art of War 3. ISBN 0-8032-6585-9.
Philip McCouat, "Watchmen, goldfinders and the plague bearers of the night", Journal of Art in Society, retrieved 22 October 2015, http://www.artinsociety.com/watchmen-goldfinders-and-the-plague-bearers-of-the-night.html
Pollock, Frederick; Maitland, Frederic William (1898). The History of English Law Before the Time of Edward I. 1 (2 ed.). ISBN 978-1-58477-718-2.
Rawlings, Philip (2002). Policing: A Short History. USA: Willan Publishing. ISBN 1903240263.
Rich, Robert M. (1977). Essays on the Theory and Practice of Criminal Justice. ISBN 978-0-8191-0235-5.
== External links ==
The Proceedings of the Old Bailey, 1674-1913
Constables and the Night Watch
"Watchmen, goldfinders and the plague bearers of the night" | Wikipedia/Watchman_(law_enforcement) |
Following the historic Lindbergh kidnapping (the abduction and murder of Charles Lindbergh's toddler son), the United States Congress passed a federal kidnapping statute—known as the Federal Kidnapping Act, 18 U.S.C. § 1201(a)(1) (popularly known as the Lindbergh Law, or Little Lindbergh Law)—which was intended to let federal authorities step in and pursue kidnappers once they had crossed state lines with their victim. The act was first proposed in December 1931 by Missouri Senator Roscoe Conkling Patterson, who pointed to several recent kidnappings in Missouri in calling for a federal solution. Initial resistance to Patterson's proposal was based on concerns over funding and states' rights. Consideration of the law was revived following the kidnapping of Howard Woolverton in late January 1932. Woolverton's kidnapping featured prominently in several newspaper series that were researched and prepared in the weeks following his abduction and quite possibly inspired by it. Two such projects, by Bruce Catton of the Newspaper Enterprise Association and Fred Pasley of the Daily News of New York City, were ready for publication within a day or two of the Lindbergh kidnapping. Both series, which ran in papers across North America, described kidnapping as an existential threat to American life, a singular, growing crime wave in which no one was safe.
Following the discovery of Baby Lindbergh's body not far from his home, the act became law in summer 1932. In 1934, the act was amended to provide an exception for parents who abduct their own minor children and to make a death sentence possible in cases where the victim was not released unharmed.
The theory behind the Lindbergh Law was that federal law enforcement intervention was necessary because state and local law enforcement officers could not effectively pursue kidnappers across state lines. Since federal law enforcement, such as FBI agents and U.S. Marshals, have national law enforcement authority, Congress believed they could do a much more effective job of dealing with kidnappings than could state, county, and local authorities. There is a rebuttable presumption of transportation in interstate or foreign commerce if the victim is not released within 24 hours. Additionally, it is likewise an offense to conspire or attempt to violate the statute. There is also extraterritorial jurisdiction if the offense is against an internationally protected person.
Only one person, Arthur Gooch, was executed for a federal kidnapping conviction in a case where the victim did not die. Under the current statute, the victim must die for the crime to become a capital offense. Barring a permitted departure from federal guidelines, kidnapping resulting in death now carries a mandatory life sentence if the perpetrator is an adult. In addition, the law mandates a minimum of 20 years in prison if the victim is a minor and the perpetrator is an adult and not a family member.
Several states implemented their own versions of this law, known as "Little Lindbergh" laws, covering acts of kidnapping that did not cross state lines. In some states, if the victim was physically harmed in any manner, the crime qualified for capital punishment. This was what occurred in the Caryl Chessman case in California. Following the April 8, 1968 decision by the United States Supreme Court in United States v. Jackson, kidnapping alone no longer constitutes a capital offense.
== Notable convictions ==
In what is considered the FBI's first major prosecution of the Act, George "Machine Gun Kelly" Barnes, his wife Kathryn Kelly, Albert L. Bates and others received life sentences for kidnapping Oklahoma oilman Charles F. Urschel in July 1933.
Thomas Robert Limerick, sentenced to life in prison in 1935 for the kidnapping of three people during a bank robbery. He was killed during an escape attempt from the Alcatraz Federal Penitentiary in 1938.
Arthur Gooch, executed in 1936 for the kidnapping of police officers R.N. Baker and H.R. Marks in Texas, before releasing them alive in Oklahoma. He is the only person executed for a federal kidnapping conviction in a case where the victim(s) did not die. The crime became a capital offense when Baker was badly injured after being shoved into a glass case, which then broke. Gooch's accomplice, Ambrose Nix, was killed while resisting arrest on December 23, 1934.
Samuel Shockley, sentenced to life in prison in 1938 for the kidnapping of Mr. and Mrs. D. F. Pendley during a bank robbery. He was later convicted of murder for his participation in the Battle of Alcatraz, sentenced to death, and executed in 1948.
John Henry Seadlund, executed in 1938 for the kidnapping and murder of Charles Sherman Ross. Seadlund also killed his accomplice, 19-year-old James Atwood Gray.
Clarence Carnes, sentenced to 99 years in prison in 1945 for the kidnapping of Jack Nance while he was a fugitive. At the time, he had been serving a life sentence for murder in Oklahoma. Carnes was later convicted of murder for his involvement in the Battle of Alcatraz and received another life sentence. He was the only surviving participant to be spared execution, after federal officers testified on his behalf, saying he had rejected orders to kill them. Carnes was paroled in 1973, but returned to prison for parole violations. He died in prison in 1988.
Miran Edgar Thompson, sentenced to 99 years in prison in 1945 for the kidnapping of Betty Jim Shelton after killing a police officer in Texas. He was then given a life sentence in Texas for murdering the officer after the jury spared him from execution. Thompson was later convicted of murder for his participation in the Battle of Alcatraz, sentenced to death, and executed in 1948.
William Edward Cook, sentenced to 300 years in prison in 1951 after pleading guilty to the kidnapping and murder of a family of five, over the objections of the prosecution, who had wanted him executed. After learning that Cook would become eligible for parole in 20 years, the public demanded that he stand trial for another murder he committed in California. He was convicted of that murder, sentenced to death, and executed in 1952.
Carl Austin Hall and Bonnie Emily Heady, executed in 1953 for the kidnapping and murder of Bobby Greenlease
George and Michael Krull, sentenced to life in prison in 1956 for the kidnapping and rape of Sunie Jones. They were also federally prosecuted for the rape specifically, since it occurred in a national park. The Krulls were both sentenced to death on this charge, and executed in 1957.
Walter Hill, sentenced to 25 years in prison in 1962 for the kidnapping and robbery of Arthur Phillips. Hill, who had a prior murder conviction, had his sentence extended by five years after being convicted of voluntary manslaughter for fatally stabbing a fellow inmate. Hill was paroled in 1975, and committed a triple murder in Alabama in 1977. He was sentenced to death in that case and executed in 1997.
Melvin Rees, sentenced to life in prison in 1961 for the kidnapping and murder of Mildred Ann Jackson. Rees was also convicted in five murders in Maryland and Virginia, including that of Jackson. He was sentenced to life in prison in Maryland and sentenced to death in Virginia. However, he was never executed, owing to doubts about his mental capacity to assist in his own defense. Rees's sentence was commuted to life in prison in 1972 and he died in prison in 1995.
Victor Harry Feguer, executed in 1963 for the kidnapping and murder of Doctor Edward Bartels.
Robert Lee Willie and Joseph Vaccaro, sentenced to life in prison for the kidnapping and attempted murder of Mark Allen Brewster, and the kidnapping and rape of Debbie Cuevas. Willie and Vaccaro were later sentenced to death and life in prison, respectively, on a state murder charge in Louisiana. Willie, who pleaded guilty to two additional murders while on death row, was executed in 1984.
Ming Sen Shiue, sentenced to life in prison in 1980 for the kidnapping and rape of Mary Stauffer, and the kidnapping of her daughter, Elizabeth. He was denied mandatory release and declared a dangerous sexual predator in 2010.
Alton Coleman and Debra Brown, serial killers each sentenced to 20 years in prison in 1985 for kidnapping Oline Carmical. Coleman and Brown were later sentenced to death on state murder charges. Coleman received death sentences in Ohio, Indiana, and Illinois, while Brown received death sentences in Ohio and Indiana. Coleman was executed in Ohio in 2002, but Brown later had her sentence commuted to life in prison without parole.
Andrew Six and his uncle, Donald Petary, were both convicted of kidnapping 12-year-old Kathy Allen from Iowa, whom they later murdered in Missouri, and were sentenced to 200 years' imprisonment in 1987. Six and Petary were later both sentenced to death in Missouri on state murder charges; Six was executed at the Potosi Correctional Center on August 20, 1997, while Petary died on death row in 1998 before he could be executed.
Genaro Ruiz Camacho, sentenced to life without parole in 1991 for the kidnapping and murder of Evellyn Banks and her son, Andre Banks. Camacho was later sentenced to death on state murder charges in Texas and executed in 1998.
Donald Leroy Evans, serial killer sentenced to life without parole in 1991 for the kidnapping, rape, and murder of Beatrice Louise Routh. Evans was later sentenced to death on state murder charges in Mississippi. He was stabbed to death by a fellow death row inmate in 1999.
Larry Hall, suspected serial killer sentenced to life without parole in 1995 for the kidnapping and murder of 16-year-old Jessica Roach. Hall is suspected of dozens of murders in multiple states.
Cary Stayner, serial killer sentenced to life without parole in 2000 for the kidnapping, rape, and murder of Joie Ruth Armstrong. Stayner was later sentenced to death on state murder charges in California.
Louis Jones Jr., executed in 2003 for the kidnapping, rape, and murder of Tracie McBride.
Alfonso Rodriguez Jr., sentenced to death in 2007 for the kidnapping, rape, and murder of Dru Sjodin. His death sentence was overturned in 2021 and he was resentenced to life without parole in 2023.
James Ford Seale, sentenced to life in prison in 2007 for the kidnapping and murder of two young black men as a Ku Klux Klan member in 1964. Died in prison in 2011.
Brian David Mitchell and Wanda Barzee, sentenced to life without parole and 15 years, respectively, for kidnapping and repeatedly raping 14-year-old Elizabeth Smart while holding her in captivity for approximately nine months.
Annugetta Pettway, sentenced to 11 years in prison in 2012 after pleading guilty to kidnapping Carlina White as an infant and raising her as her own child. Released in 2021.
Joseph Edward Duncan, serial killer sentenced to death in 2008 for the kidnapping, rape, and murder of Dylan Groene. Duncan died on death row in 2021 before he could be executed.
Gary Hilton, serial killer sentenced to life without parole in 2014 for the kidnapping and murder of John and Irene Bryant. Hilton remains on death row in Florida for a separate murder.
Brendt Allen Christensen, sentenced to life without parole in 2019 for the kidnapping, rape, and murder of Yingying Zhang, after a jury spared him from execution.
Wesley Ira Purkey, executed in 2020 for the kidnapping, rape, and murder of Jennifer Long.
Lezmond Mitchell, executed in 2020 for the carjacking, kidnapping, and murder of Alyce Slim and Tiffany Lee.
Keith Dwayne Nelson, executed in 2020 for the kidnapping, rape, and murder of Pamela Butler.
Orlando Cordia Hall, Bruce Carneil Webster, Demetrius Kenyon Hall, Steven Christopher Beckley, and Marvin Terrance Holloway, prosecuted for the kidnapping, rape, and murder of Lisa Rene. Orlando Hall, the ringleader, was executed in 2020, and Bruce Webster had his death sentence commuted to life without parole in 2019 after he was found to be intellectually disabled. Demetrius Hall, Steven Beckley, and Marvin Holloway all pleaded guilty and testified against Orlando Hall and Bruce Webster in exchange for leniency, receiving prison sentences ranging from 15 to 30 years, and have since been released from prison. Although Beckley pleaded guilty to kidnapping resulting in death, he avoided a mandatory life sentence since prosecutors gave him and the others "substantial assistance" specifications due to their cooperation.
Lisa Marie Montgomery, executed in 2021 for the kidnapping of Bobbie Jo Stinnett's unborn child. Montgomery cut the baby out of Stinnett's womb and left Stinnett to die.
Dustin Higgs and Willis Haynes, prosecuted for the kidnapping and murder of Tamika Black, Tanji Jackson, and Mishann Chinn. They were also tried on federal first degree murder charges since the killings occurred on federal property. Higgs was executed in 2021 and Haynes received a life sentence. A third man who was present, Victor Gloria, pleaded guilty to being an accessory after the fact and received a 7-year sentence in exchange for testifying against Higgs and Haynes.
Javier Enrique Da Silva Rojas, sentenced to 30 years in prison in 2021 after pleading guilty to the kidnapping and murder of his ex-girlfriend, Valerie Reyes, by stuffing her in a suitcase, causing her to suffocate.
== References ==
== Further reading ==
Meredith, Kevin, with Hendry, David W. Jr. (2023). Under Penalty of Death: The Untold Story of Machine Gun Kelly's First Kidnapping. Red Lightning Books (an imprint of Indiana University Press). ISBN 978-1-68435-199-2.
== External links ==
Peters, Gerhard; Woolley, John T. "Herbert Hoover: Statement on the Lindbergh Kidnaping, May 13, 1932". The American Presidency Project. University of California - Santa Barbara. | Wikipedia/Federal_Kidnapping_Act |
The use of force, in the context of law enforcement, may be defined as "the amount of effort required by police to compel compliance by an unwilling subject." Multiple definitions exist according to context and purpose. In practical terms, use of force amounts to any combination of threatened or actual force used for a lawful purpose, e.g. to effect arrest; defend oneself or another person; or to interrupt a crime in progress or prevent an imminent crime. Depending on the jurisdiction, legal rights of this nature might be recognized to varying degrees for both police officers and non-sworn individuals; and may be accessible regardless of citizenship. Canada's Criminal Code, for example, provides in section 494 for arrest in certain circumstances by "any one."
Use of force doctrines can be employed by law enforcement officers and by military personnel who are on guard duty. The aim of such doctrines is to balance the needs of security with ethical concerns for the rights and well-being of intruders or suspects. Injuries to civilians tend to focus attention on self-defense as a justification and, in the event of death, the notion of justifiable homicide.
Police use physical force to the extent necessary to secure observance of the law or to restore order only when the exercise of persuasion, advice and warning is found to be insufficient.
For the English law on the use of force in crime prevention, see Self-defence in English law. The Australian position on the use of troops for civil policing is set out by Michael Head in Calling Out the Troops: Disturbing Trends and Unanswered Questions; compare "Use of Deadly Force by the South African Police Services Re-visited" by Malebo Keebine-Sibanda and Omphemetse Sibanda.
== History ==
Concerns about the use of force date back to the beginning of established law enforcement, accompanied by a fear that officers would abuse their power. This fear still exists today, and one proposed way to address the problem is to require police to wear body cameras, which should be turned on during all interactions with civilians.
== Use of force continuum ==
The use of force may be standardized by a Use of Force Continuum, which presents guidelines as to the degree of force appropriate in a given situation. One source identifies five very generalized steps, increasing from least use of force to greatest. This kind of continuum generally has many levels, and officers are instructed to respond with a level of force appropriate to the situation at hand, acknowledging that the officer may move from one part of the continuum to another in a matter of seconds.
== U.S. case law ==
=== Graham v. Connor (1989) ===
On November 12, 1984, Graham, who was a diabetic, felt an insulin reaction coming on and rushed to a convenience store with a friend to get some orange juice. Finding the store too crowded, he and his friend went to another friend's house. In the midst of all this, he was being watched by Officer Connor of the Charlotte Police Department. While on their way to the friend's house, the officer stopped the two of them and called for backup. After several other officers arrived, one of them handcuffed Graham. Eventually, when Connor learned that nothing had happened in the convenience store, the officers drove Graham home and released him. Over the course of the encounter, Graham sustained a broken foot, cuts on his wrists, a bruised forehead and an injured shoulder. In the resulting case, Graham v. Connor (1989), the Supreme Court held that it was irrelevant whether Connor acted in good faith, because the use of force must be judged based on its objective reasonableness. In determining the "objective reasonableness" of force, the court set out three factors: "the severity of the crime," "whether there is an immediate threat to the safety of officers or others," and "whether the suspect is actively resisting arrest or evading".
=== Tennessee v. Garner (1985) ===
On October 3, 1974, Officers Elton Hymon and Leslie Wright of the Memphis Police Department were called to respond to a possible burglary. When they arrived at the scene, a woman standing on the porch told them that she had heard glass breaking and that she believed the house next door was being broken into. Officer Hymon went to check and saw Edward Garner fleeing the scene. As Garner was climbing over the fence, Hymon called out "police, halt", and when Garner failed to stop, Hymon fatally shot him in the back of the head, despite being "reasonably sure" that Garner was unarmed. The Supreme Court held, in Tennessee v. Garner, that deadly force may be used to prevent the escape of a fleeing felon only if the officer has probable cause to believe that the suspect poses a serious risk to the officer or to others.
=== Payne v. Pauley (2003) ===
Payne v. Pauley is a case in the Seventh Federal Circuit Court of Appeals, which held that the use of force must be both reasonable and actually necessary to avoid an excessive force complaint.
=== Nelson v. City of Davis (2004) ===
On April 16, 2004, what was billed as the "biggest party in history" took place at the annual UC Davis picnic. Because of the large number of participants, people began to park their cars illegally. Sgt. John Wilson directed officers to start issuing parking tickets for the illegally parked cars. Tickets were also issued to underage drinkers. Wilson called the owner of the apartment complex because of the disturbances that were being caused: loud music and the sound of bottles breaking. The apartment complex owner gave Wilson consent to require non-residents to leave the complex. Thirty or forty officers in riot gear – equipped with pepper-ball guns – were rounded up to try to disperse the crowd of 1,000 attendees. The officers gathered in front of the complex where 15 to 20 students, including Timothy C. Nelson, were attempting to leave, but no instructions were given by the police. Officers began to fire pepper-balls, one of which struck Nelson in the eye. He collapsed immediately and was taken to the hospital much later on; he suffered multiple injuries including temporary blindness and a permanent loss of visual acuity, and endured multiple surgeries to try to repair the injury. Nelson lost his athletic scholarship because of the injury and was forced to withdraw from UC Davis. The officers were unable to establish any criminal charges against Nelson. The Ninth Circuit held that the use of force was unreasonable and the officers were not entitled to qualified immunity.
=== Plumhoff v. Rickard (2014) ===
On July 18, 2004, a West Memphis police officer stopped Donald Rickard for a broken headlight. As the officer talked with Rickard he noticed that there was an indentation in the windshield and that Rickard was acting erratically. The officer asked Rickard to step out of the vehicle. Rickard at that point fled the scene. A high speed chase ensued, which involved several other officers. Rickard lost control of his vehicle in a parking lot, and officers exited their vehicles to approach Rickard. Rickard again tried to flee, hitting several police cruisers and nearly hitting several officers. At this time officers opened fire on Rickard. The officers fired a total of 15 rounds, which resulted in the death of both Rickard and his passenger. The Supreme Court ruled that the use of force was justified, because the objective reasonableness of the use of deadly force must be based on the situation in which it was used, and not on hindsight.
=== Kisela v. Hughes (2018) ===
Andrew Kisela, a Tucson police officer, shot Amy Hughes less than a minute after arriving with other police officers in response to a report of a woman erratically hacking a tree with a knife. Hughes was in possession of a large kitchen knife, had taken steps towards her roommate, and had refused to drop the knife when repeatedly told to do so. After the shooting, the officers discovered that Hughes had a history of mental illness. All of the officers later stated that they believed Hughes to be a threat to the roommate. Hughes sued the officer, claiming "excessive use of force" in violation of the Fourth Amendment. The Supreme Court ruled in favor of Officer Kisela, and stated that a reasonable officer is not required to foresee judicial decisions "that do not yet exist in instances where the requirements of the Fourth Amendment are far from obvious".
== U.S. statistics ==
Of the 40 million people in the United States who had face-to-face contact with the police, 1.4%, or 574,000, reported that force or the threat of force was used against them. About a quarter of those 574,000 incidents involved the police officer pointing a gun at the subject of the incident, and 53.5% of the incidents saw the officer using physical force such as kicking, grabbing, and pushing. In addition, 13.7% of those who had force used against them or were threatened with force submitted complaints to the offending officer's department. Of those who experienced or were threatened with force, almost 75% reported that they believed it was excessive and unwarranted. This statistic was consistent across Caucasian, African American, and Hispanic respondents.
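To make the breakdown above easier to follow, the short Python sketch below recomputes the approximate counts implied by the quoted survey percentages. All inputs are simply the rounded figures cited in this section, so the results are illustrative arithmetic only, not an independent source.

```python
# Illustrative arithmetic only: all inputs are the rounded figures quoted above.
contacts = 40_000_000          # people with face-to-face police contact
force_incidents = 574_000      # reported force or threat of force (about 1.4%)

share_reporting_force = force_incidents / contacts
gun_pointed = 0.25 * force_incidents         # "about a quarter" of incidents
physical_force = 0.535 * force_incidents     # kicking, grabbing, pushing
complaints_filed = 0.137 * force_incidents   # complaints to the officer's department
believed_excessive = 0.75 * force_incidents  # "almost 75%" thought it excessive

print(f"share of contacts involving force/threat: {share_reporting_force:.1%}")
print(f"gun pointed at subject:   ~{gun_pointed:,.0f}")
print(f"physical force used:      ~{physical_force:,.0f}")
print(f"complaints filed:         ~{complaints_filed:,.0f}")
print(f"believed force excessive: ~{believed_excessive:,.0f}")
```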
A report by the Washington Post found that 385 Americans were fatally shot by law enforcement officers in the first five months of 2015, an average of more than two fatal shootings a day, which was more than twice the rate reported in official statistics. 221 of those killed were armed with guns, and 68 were armed with knives or other blades.
U.S. military personnel on guard duty are given a "use of force briefing" by the sergeant of the guard before being assigned to their post.
== Officer attributes ==
=== Education ===
Studies have shown that law enforcement personnel with some college education (typically two-year degrees) use force much less often than those with little to no higher education. When educated officers do use force, it is usually what is considered "reasonable" force. Despite these findings, very few – only about 1% – of police forces within the United States have education requirements for those looking to join. Some argue that police work requires experience that can only be gained from actually working in the field.
=== Experience ===
It is argued that the skills needed to perform law enforcement tasks well cannot be produced in a classroom setting; they tend to be gained through repeated exposure to law enforcement situations on the job. Studies differ, however, on whether the amount of experience an officer has affects the likelihood that they will use force.
=== Other characteristics ===
There is no strong evidence that an officer's race, class, gender, age, or other personal characteristics affect the likelihood that they will use force. Situational factors may come into play.
==== Split-second syndrome ====
Split-second syndrome is an example of how use of force can be situation-based. Well-meaning officers may resort to the use of force too quickly under situations where they must make a rapid decision.
=== Police dogs ===
A 2020 investigation coordinated by the Marshall Project found evidence of widespread deployment of police dogs in the U.S. as disproportionate force, used disproportionately against people of color. A series of 13 linked reports found more than 150 cases from 2015 to 2020 of K-9 officers improperly using dogs as weapons to catch, bite and injure people. The rate of police K-9 bites in Baton Rouge, Louisiana, a majority-Black city of 220,000 residents, averages more than double that of the next-ranked city, Indianapolis, and nearly one-third of the police dog bites are inflicted on teenage males, most of whom are Black. Medical researchers found that police dog attacks are "more like shark attacks than nips from a family pet" due to the aggressive training police dogs undergo. Many people bitten were not violent and were not suspected of crimes. Police officers are often shielded from liability, and federal civil rights laws don't typically cover bystanders who are bitten by mistake. Even when victims can bring cases, lawyers say they struggle because jurors tend to love police dogs.
== Departmental attributes ==
Policies on use of force can differ between departments, and both the type of policies established and whether they are enforced can affect an officer's likelihood of using force. If policies are established but not enforced by the department, they may make little difference. For example, the Rodney King case was described as a problem of departmental supervision not being clear about policies on (excessive) force. The training offered by a department can also be a contributing factor, though information on when to use force, rather than simply how to use it, has only recently been added to such training.
One departmental-level policy that is currently being studied, and called for by many citizens and politicians, is the use of body cameras by officers. In one study, body cameras were shown to reduce the use of force by as much as 50%.
== Crime levels ==
At the micro level, violent crime levels in a neighborhood increase the likelihood that law enforcement will use force. In contrast, at the meso level, violent neighborhood crime has much less effect on the use of force.
== England and Wales ==
In England and Wales the use of (reasonable) force is provided to police and any other person from Section 3 of the Criminal Law Act 1967, which states:
"A person may use such force as is reasonable in the circumstances in the prevention of crime, or in effecting or assisting in the lawful arrest of offenders or suspected offenders or of persons unlawfully at large".
Use of force may be considered lawful if it was, on the basis of the facts as the accused honestly believed them, necessary and reasonable.
(Further provision about when force is "reasonable" was made by section 76 of the Criminal Justice and Immigration Act 2008.)
== Japan ==
In Japan, the use of weapons represents the highest level in the intensity of force available to Japanese police officers. Apart from the use of weapons, the law contains no clear-cut provision on the degree of force that is permissible as a means of arrest.
Under Article 7 of the Police Duties Execution Law, police officers may use weapons to apprehend criminals, prevent escape, protect themselves or others, or suppress resistance to the execution of official duties. However, the use of weapons is limited to "the extent reasonably necessary in the circumstances", and, except in cases of self-defense or the execution of an arrest warrant, weapons may only be used to apprehend or prevent the escape of an offender responsible for a serious violent crime or for whom an arrest warrant has been issued, or to suppress resistance to the execution of official duties.
This requirement of "to the extent deemed reasonably necessary" expresses the so-called principle of police proportionality, which is understood to apply to the use of tangible force in general. According to the "Guidelines for the Work and Activities of Police Officers Focusing on the Prevention of Injuries and Accidents" (issued by the Deputy Commissioner of the National Police Agency on May 10, 1962), the possible means, depending on the ferocity and resistance of the other party, include "using a baton and arrest techniques", "drawing a gun", "holding a gun", "threatening to shoot", and "shooting at the other party"; the attitude and manner in which they can be used are set out step by step. Although it is understood that a baton or cane does not constitute a "weapon" as defined in the Police Duties Execution Law, there are precedents holding that if one is used in a manner that kills or injures a person beyond its intended use, this is in effect equivalent to the use of a weapon.
The following three types of crimes are defined by the National Public Safety Commission's Rules on the Use and Handling of Guns by Police Officers and Other Personnel (National Public Safety Commission Rule 7):
Crimes that cause fear or anxiety in society by threatening to harm the life or body of an unspecified person or many people, or destroy important facilities or equipment: illegal use of explosives and arson of an existing building are listed as examples.
Crimes that endanger the life or body of a person: murder, assault, etc.
Crimes that are likely to cause harm to a person's life or body and are committed in an extremely vicious or threatening manner, such as by carrying a deadly weapon.
When special judicial police personnel such as Japan Coast Guard officers, narcotics officers, or Self-Defense Force soldiers on public security missions use weapons, the Police Duties Execution Law is applied mutatis mutandis under their respective governing laws. In addition, taking into consideration the special characteristics of the maritime environment – when a vessel is targeted, there is a possibility of harm to a person wherever the shot lands, it is difficult to shoot reliably, and it is difficult for patrol vessels to approach a suspect vessel – the Japan Coast Guard Act allows the use of weapons for measures against dangerous acts at sea and for on-site inspections to confirm the identity of vessels, even where the conduct does not meet the requirements for constituting a crime. These provisions also apply to JSDF soldiers in units ordered to conduct maritime security operations and anti-piracy operations.
== See also ==
Excessive force
Use of force continuum
Use of force doctrine in Missouri
Distinction (law)
Monopoly on violence
Peelian principles
Police#Weapons
Riot control weapons, used by police to control riots
Use of force in international law
== Notes ==
== References ==
Basic Principles on the Use of Force and Firearms by Law Enforcement Officials, Eighth United Nations Congress on the Prevention of Crime and the Treatment of Offenders, Havana, August 27 to September 7, 1990, U.N. Doc. A/CONF.144/28/Rev.1 at 112 (1990).
Chapman, Christopher (2012). "Use of force in minority communities is related to police education, age, experience, and ethnicity". Police Practice and Research. 13 (5): 421–436. doi:10.1080/15614263.2011.596711. S2CID 144458812. | Wikipedia/Use_of_force |
A law enforcement officer (LEO), or police officer or peace officer in North American English, is a public-sector or private-sector employee whose duties primarily involve the enforcement of laws, protecting life and property, keeping the peace, and other duties related to public safety. Law enforcement officers are granted certain powers and authority by law to allow them to carry out their responsibilities.
Modern legal codes use the term peace officer (or in some jurisdictions, law enforcement officer) to include every person vested by the legislating state with law enforcement authority: traditionally, anyone "sworn, badged, and armable" who can arrest, or refer such an arrest for criminal prosecution. Security officers may enforce certain laws and administrative regulations, which may include detainment or apprehension authority, including arrest in some jurisdictions. Peace officers may also be able to perform all duties that a law enforcement officer is tasked with, but may or may not be armed with a weapon. In some jurisdictions the term peace officer is interchangeable with law enforcement officer or police officer, but in others peace officer is a totally separate legal designation with quasi-police powers.
== Canada ==
In Canada, the Criminal Code (R.S., c. C-34, s. 2.) defines a peace officer as:
Peace officer includes
(a) a mayor, warden, reeve, sheriff, deputy sheriff, sheriff's officer, and justice of the peace,
(b) a member of the Correctional Service of Canada who is designated as a peace officer pursuant to Part I of the Corrections and Conditional Release Act, and a warden, deputy warden, instructor, keeper, jailer, guard and any other officer or permanent employee of a prison other than a penitentiary as defined in Part I of the Corrections and Conditional Release Act,
(c) a police officer, police constable, bailiff, constable, or other person employed for the preservation and maintenance of the public peace or the service or execution of civil process,
(d) an officer within the meaning of the Customs Act, the Excise Act or the Excise Act, 2001, or a person having the powers of such an officer, when performing any duty in the administration of any of those Acts,
(d.1) an officer authorized under subsection 138(1) of the Immigration and Refugee Protection Act,
(e) a person designated as a fishery guardian under the Fisheries Act when performing any duties or functions under that Act and a person designated as a fishery officer under the Fisheries Act when performing any duties or functions under that Act or the Coastal Fisheries Protection Act,
(f) the pilot in command of an aircraft
(i) registered in Canada under regulations made under the Aeronautics Act, or
(ii) leased without crew and operated by a person who is qualified under regulations made under the Aeronautics Act to be registered as the owner of an aircraft registered in Canada under those regulations, while the aircraft is in flight, and
(g) officers and non-commissioned members of the Canadian Forces who are
(i) appointed for the purposes of section 156 of the National Defence Act, (Military Police) or
(ii) employed on duties that the Governor in Council, in regulations made under the National Defence Act for this paragraph, has prescribed to be of such a kind as to necessitate that the officers and non-commissioned members performing them have the powers of peace officers;
Section (b) allows for designation as a peace officer for a member of the Correctional Service of Canada under the following via the Corrections and Conditional Release Act:
10. The Commissioner may in writing designate any staff member, either by name or by class, to be a peace officer, and a staff member so designated has all the powers, authority, protection and privileges that a peace officer has by law in respect of
(a) an offender subject to a warrant or an order for long-term supervision; and
(b) any person, while the person is in a penitentiary.
Also, provincial legislatures can designate a class of officers (e.g. Conservation Officers, Park Rangers and Commercial Vehicle Safety and Enforcement officers) to be peace officers.
== United States ==
United States federal law enforcement personnel include but are not limited to the following:
Bureau of Alcohol, Tobacco, Firearms and Explosives
Bureau of Diplomatic Security
Customs and Border Protection
Drug Enforcement Administration
Federal Air Marshal Service
Federal Bureau of Investigation
Federal Flight Deck Officers
Federal Reserve Police Department
United States Secret Service
Fish and Wildlife Service - Law Enforcement
Bureau of Land Management - Law Enforcement
Homeland Security Investigations
Immigration and Customs Enforcement
National Park Service - Law Enforcement
Federal Bureau of Prisons
United States Marshal Service
U.S. Coast Guard
United States Postal Inspection Service
United States Department of Veterans Affairs Police
In addition, many departments in the U.S. federal government contain Offices of Inspector General, whose Inspectors General are able to appoint criminal investigators to work under them.
For an exhaustive list of federal law enforcement agencies, see Federal law enforcement in the United States.
=== Arizona ===
Arizona Revised Statutes defines a peace officer in Title 13, Section 105, as "any person vested by law with a duty to maintain public order and make arrests and includes a constable." Title 1, Section 215(27) enumerates those who are peace officers in the State of Arizona. It includes:
sheriffs of counties
constables
marshals
SWAT officers and policemen of cities and towns
commissioned personnel of the department of public safety and state troopers
personnel who are employed by the state department of corrections and the department of juvenile corrections and who have received a certificate from the Arizona peace officer standards and training board
peace officers who are appointed by a multi-county water conservation district and who have received a certificate from the Arizona peace officer standards and training board
police officers who are appointed by community college district governing boards and who have received a certificate from the Arizona peace officer standards and training board
police officers who are appointed by the Arizona board of regents and who have received a certificate from the Arizona peace officer standards and training board
police officers who are appointed by the governing body of a public airport according to section 28-8426 and who have received a certificate from the Arizona peace officer standards and training board
peace officers who are appointed by a private post-secondary institution under section 15-1897 and who have received a certificate from the Arizona peace officer standards and training board
special agents from the office of the attorney general, or of a county attorney, and who have received a certificate from the Arizona peace officer standards and training board
Arizona Revised Statutes 41-1823 states that except for duly elected or appointed sheriffs and constables, and probation officers in the course of their duties, no person may exercise the authority or perform the duties of a peace officer unless he is certified by the Arizona peace officers standards and training board.
=== California ===
Sections 830 through 831.7 of the California Penal Code list persons who are considered peace officers within the State of California. Peace officers include, in addition to many others,
Police; sheriffs, undersheriffs, and their deputies. (§ 830.1[a])
Investigators of the California Department of Consumer Affairs. (§ 830.3[a])
Inspectors or investigators employed in the office of a district attorney. (§ 830.1[a])
The California Attorney General and special agents and investigators of the California Department of Justice. (§ 830.1[b])
Members of the California Highway Patrol. (§ 830.2[a])
Special agents of the California Department of Corrections and Rehabilitation. (§ 830.2[d])
Game wardens of the California Department of Fish and Wildlife (§ 830.2[e])
California State Park Peace Officers (§ 830.2[f])
Investigators of the California Department of Alcoholic Beverage Control. (§ 830.2[h])
Cal Expo Police Officers (§ 830.2[i])(§ 830.3[q])
Investigators of the California Department of Motor Vehicles. (§ 830.3[c])
The State Fire Marshal and assistant or deputy state fire marshals. (§ 830.3[e])
Fraud investigators of the California Department of Insurance. (§ 830.3[i])
Investigators of the Employment Development Department. (§ 830.3[q])
A person designated by a local agency as a Park Ranger (§ 830.31[b])
Members of the University of California Police Department, California State University Police Department or of a California Community College Police Department. (§ 830.2 [b]&[c]/ 830.32 [a])
Members of the San Francisco Bay Area Rapid Transit District Police Department. (§ 830.33 [a])
Any railroad police officer commissioned by the Governor. (§ 830.33 [e] [1])
Welfare fraud Investigators of the California Department of Social Services. (§ 830.35[a])
County coroners and deputy coroners. (§ 830.35[c])
Firefighter/Security Officers of the California Military Department. (§ PC 830.37)
Hospital Police Officers with the California Department of State Hospitals (used to be California Department of Mental Health) and the California Department of Developmental Services (§ 830.38)
County Probation Officers, County Deputy Probation Officers, Parole officers and correctional officers of the California Department of Corrections and Rehabilitation. (§ 830.5 [a]&[b])
A security officer for a private university or college deputized or appointed as a reserve deputy sheriff or police officer. (§ 830.75)
Most peace officers have jurisdiction throughout the state, but many have limited powers outside their political subdivisions. Some peace officers require special permission to carry firearms. Powers are often limited to the performance of peace officers' primary duties (usually, enforcement of specific laws within their political subdivision); however, most have power of arrest anywhere in the state for any public offense that poses an immediate danger to a person or property.
A private person (i.e., ordinary citizen) may arrest another person for an offense committed in the arresting person's presence, or if the other person has committed a felony whether or not in the arresting person's presence (Penal Code § 837), though such an arrest when an offense has not occurred leaves a private person open to criminal prosecution and civil liability for false arrest. A peace officer may:
without an arrest warrant, arrest a person on probable cause that the person has committed an offense in the officer's presence, or if there is probable cause that a felony has been committed and the officer has probable cause to believe the person to be arrested committed the felony. (Penal Code § 836).
A peace officer is also immune from civil liability for false arrest if, at the time of arrest, the officer had probable cause to believe the arrest was lawful.
Persons are required to comply with certain instructions given by a peace officer, and certain acts (e.g., battery) committed against a peace officer carry more severe penalties than the same acts against a private person. It is unlawful to resist, delay, or obstruct a peace officer in the course of the officer's duties (Penal Code § 148[a][1]).
=== New York State ===
New York State grants peace officers very specific powers under NYS Criminal Procedure Law section 2.20: they may make warrantless arrests, use physical and deadly force, and issue summonses.
There is a full list of peace officers under Section 2.10 of that law. Below are some examples.
The state has law enforcement agencies contained within existing executive branch departments that employ sworn peace officers to investigate and enforce laws specifically related to the department. Most often, these departments employ sworn investigators (separate from the New York State Police) who have statewide investigative authority under the department's mission.
The New York State Bureau of Narcotic Enforcement (BNE) is a state investigative agency housed under the State Department of Health. Narcotic Investigators with the Bureau of Narcotic Enforcement are sworn peace officers who carry firearms, make arrests, and enforce the New York State Controlled Substances Act, New York State Penal Law, and New York State Public Health Law.
The New York State Department of Taxation and Finance employs sworn peace officers as Excise Tax Investigators and Revenue Crimes Investigators. These State Investigators carry firearms, make arrests, and enforce New York State Penal Law related to tax evasion and other crimes. Excise Tax Investigators may execute Search Warrants.
The New York State Department of Motor Vehicles (DMV) Division of Field Investigation also employ sworn peace officers as State Investigators. All DMV Investigators carry Glock 23 firearms and enforce New York State Penal Law and New York Vehicle and Traffic Law. The DMV Division of Field Investigation investigates auto theft, odometer tampering, fraudulent documents, and identity theft crimes.
=== Texas ===
Texas Statutes, Code of Criminal Procedure, Art. 2.12, provides:
Art. 2.12, WHO ARE PEACE OFFICERS. The following are peace officers:
(1) sheriffs, their deputies, and those reserve deputies who hold a permanent peace officer license issued under Chapter 1701, Occupations Code;
(2) constables, deputy constables, and those reserve deputy constables who hold a permanent peace officer license issued under Chapter 1701, Occupations Code;
(3) marshals or police officers of an incorporated city, town, or village, and those reserve municipal police officers who hold a permanent peace officer license issued under Chapter 1701, Occupations Code;
(4) rangers and officers commissioned by the Public Safety Commission and the Director of the Department of Public Safety;
(5) investigators of the district attorneys', criminal district attorneys', and county attorneys' offices;
(6) law enforcement agents of the Texas Alcoholic Beverage Commission;
(7) each member of an arson investigating unit commissioned by a city, a county, or the state;
(8) officers commissioned under Section 37.081, Education Code, or Subchapter E, Chapter 51, Education Code;
(9) officers commissioned by the General Services Commission;
(10) law enforcement officers commissioned by the Parks and Wildlife Commission;
(11) airport police officers commissioned by a city with a population of more than 1.18 million that operates an airport that serves commercial air carriers;
(12) airport security personnel commissioned as peace officers by the governing body of any political subdivision of this state, other than a city described by Subdivision (11), that operates an airport that serves commercial air carriers;
(13) municipal park and recreational patrolmen and security officers;
(14) security officers and investigators commissioned as peace officers by the comptroller;
(15) officers commissioned by a water control and improvement district under Section 49.216, Water Code;
(16) officers commissioned by a board of trustees under Chapter 54, Transportation Code;
(17) investigators commissioned by the Texas Medical Board;
(18) officers commissioned by the board of managers of the Dallas County Hospital District, the Tarrant County Hospital District, or the Bexar County Hospital District under Section 281.057, Health and Safety Code;
(19) county park rangers commissioned under Subchapter E, Chapter 351, Local Government Code;
(20) investigators employed by the Texas Racing Commission;
(21) officers commissioned under Chapter 554, Occupations Code;
(22) officers commissioned by the governing body of a metropolitan rapid transit authority under Section 451.108, Transportation Code, or by a regional transportation authority under Section 452.110, Transportation Code;
(23) investigators commissioned by the attorney general under Section 402.009, Government Code;
(24) security officers and investigators commissioned as peace officers under Chapter 466, Government Code;
(25) an officer employed by the Department of State Health Services under Section 431.2471, Health and Safety Code;
(26) officers appointed by an appellate court under Subchapter F, Chapter 53, Government Code;
(27) officers commissioned by the state fire marshal under Chapter 417, Government Code;
(28) an investigator commissioned by the commissioner of insurance under Section 701.104, Insurance Code;
(29) apprehension specialists and inspectors general commissioned by the Texas Youth Commission as officers under Sections 61.0451 and 61.0931, Human Resources Code;
(30) officers appointed by the inspector general of the Texas Department of Criminal Justice under Section 493.019, Government Code;
(31) investigators commissioned by the Commission on Law Enforcement Officer Standards and Education under Section 1701.160, Occupations Code;
(32) commission investigators commissioned by the Texas Private Security Board under Section 1702.061(f), Occupations Code;
(33) the fire marshal and any officers, inspectors, or investigators commissioned by an emergency services district under Chapter 775, Health and Safety Code;
(34) officers commissioned by the State Board of Dental Examiners under Section 254.013, Occupations Code, subject to the limitations imposed by that section; and
(35) investigators commissioned by the Texas Juvenile Probation Commission as officers under Section 141.055, Human Resources Code.
== See also ==
National Law Enforcement Officers Memorial
Peace Officers Memorial Day
Federal law enforcement in the United States
== References == | Wikipedia/Law_enforcement_officer |
Code enforcement, sometimes encompassing law enforcement, is the act of enforcing a set of rules, principles, or laws (especially written ones) and ensuring observance of a system of norms or customs. An authority usually enforces a civil code, a set of rules, or a body of laws and compels those subject to its authority to behave in a certain way.
In the United States, those employed in various capacities of code enforcement may be called Code Enforcement Officers or Municipal Regulations Officers, or may hold other titles depending on their specialization.
In the United Kingdom, Australia and New Zealand, various names are used, but the word warden is commonly used for various classes of non-police enforcement personnel (such as game warden, traffic warden, park warden).
In Canada and some Commonwealth Countries, the term Bylaw Enforcement Officer is more commonly used, as well as Municipal Law Enforcement Officer or Municipal Enforcement Officer.
In Germany order enforcement offices are established under the state's laws and local regulations under different terms like Ordnungsamt (order enforcement office), Ordnungsdienst (order enforcement service), Gemeindevollzugsdienst (municipal code enforcement office), Polizeibehörde (police authority), Stadtwacht (municipal guard/municipal watch) or Stadtpolizei (city police) for general-duty bylaw enforcement units.
Various persons and organizations ensure compliance with laws and rules, including:
Building inspector, an official who is charged with ensuring that construction is in compliance with local codes.
Fire marshal, an official who is both a police officer and a firefighter and enforces a fire code.
Health inspector, an official who is charged with ensuring that restaurants meet local health codes.
Police forces, charged with maintaining public order, crime prevention, and enforcing criminal law.
Zoning enforcement officer, an official who is charged with enforcing the zoning code of a local jurisdiction, such as a municipality or county.
Parking enforcement officer, an official who is charged with enforcing street parking regulations.
== See also ==
Codification
Code of Federal Regulations
Dress code
National Electrical Code
International Building Code
Legal code
Fire code
Penal code
Traffic code
Nuisance abatement
Trading Standards
== References ==
== External links ==
Code Enforcement Officer Safety Foundation
National Association of Code Enforcement Officers & Investigators | Wikipedia/Code_enforcement |
The illegal drug trade, drug trafficking, or narcotrafficking is a global black market dedicated to the cultivation, manufacture, distribution and sale of prohibited drugs. Most jurisdictions prohibit trade, except under license, of many types of drugs through the use of drug prohibition laws. The think tank Global Financial Integrity's Transnational Crime and the Developing World report estimates the size of the global illicit drug market at between US$426 billion and US$652 billion in 2014, which is equal to the UK's national debt alone. With a world GDP of US$78 trillion in the same year, the illegal drug trade may be estimated at nearly 1% of total global trade. Consumption of illegal drugs is widespread globally, and it remains very difficult for local authorities to reduce the rates of drug consumption.
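As a rough check on the scale implied by these figures, the following Python sketch computes the quoted market estimates as a share of the US$78 trillion world GDP mentioned above; the dollar amounts are taken directly from the text and the snippet is purely illustrative.
# Share of 2014 world GDP represented by the illicit drug market estimates
# quoted above (Global Financial Integrity figures); purely illustrative.
world_gdp = 78e12                            # US$78 trillion
estimates = {"low": 426e9, "high": 652e9}    # US$426-652 billion
for label, value in estimates.items():
    share = value / world_gdp
    print(f"{label} estimate: {share:.2%} of world GDP")
# Prints roughly 0.55% and 0.84%, i.e. on the order of 1% of global output.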
== History ==
Prior to the 20th century, governments rarely made a major effort to proscribe recreational drug use, though several smoking bans were passed by authorities in Europe and Asia during the early modern era. Tobacco and opium were the two first drugs to be subject to prohibitory government legislation, with officials in New Spain, the Ottoman Empire, Germany, Austria and the Russian Empire passing laws against smoking tobacco; the government of the Qing dynasty issued edicts banning opium smoking in 1730, 1796 and 1800. Beginning in the 18th century, the East India Company (EIC) began to smuggle Indian opium to Chinese merchants, resulting in the creation of an illegal drug trade in China. By 1838, there were between four and 12 million opium addicts in China, and Qing officials responded by strengthening their suppression of the illegal opium trade. Incidents such as the destruction of opium at Humen led to the outbreak of the First Opium War between China and Britain in 1839; the 1842 Treaty of Nanking ending the war did not legalize the importation of opium into China, but Western merchants continued to smuggle the drug to Chinese merchants in ever-increasing amounts. The 1858 Treaty of Tianjin, which ended the Second Opium War, stipulated that the Qing government would open several ports to foreign trade, including opium.
Western governments began prohibiting addictive drugs during the late 19th and early 20th centuries. In 1868, as a result of the increased use of opium in Britain, the British government restricted the sale of opium by implementing the 1868 Pharmacy Act. In the United States, control of opium remained under the control of individual US states until the introduction of the Harrison Act in 1914, after 12 international powers signed the International Opium Convention in 1912. Between 1920 and c. 1933, the Eighteenth Amendment to the United States Constitution banned alcohol in the United States. Prohibition proved almost impossible to enforce and resulted in the rise of organized crime, including the modern American Mafia, which identified enormous business opportunities in the manufacturing, smuggling and sale of illicit liquor.
The beginning of the 21st century saw drug use increase in North America and Europe, with a particularly increased demand for marijuana and cocaine. As a result, international organized crime syndicates such as the Sinaloa Cartel and 'Ndrangheta have increased cooperation among each other in order to facilitate trans-Atlantic drug-trafficking. Use of another illicit drug, hashish, has also increased in Europe. Drug trafficking is widely regarded by lawmakers as a serious offense around the world. Penalties often depend on the type of drug (and its classification in the country into which it is being trafficked), the quantity trafficked, where the drugs are sold and how they are distributed. If the drugs are sold to underage people, then the penalties for trafficking may be harsher than in other circumstances.
Drug smuggling carries severe penalties in many countries. Sentencing may include lengthy periods of incarceration, flogging and even the death penalty (in Singapore, Malaysia, Indonesia and elsewhere). In December 2005, Van Tuong Nguyen, a 25-year-old Australian drug smuggler, was hanged in Singapore after being convicted in March 2004. In 2010, two people were sentenced to death in Malaysia for trafficking 1 kilogram (2.2 lb) of cannabis into the country. Execution is mostly used as a deterrent, and many have called for countries to take much more effective measures to tackle drug trafficking; for example, targeting specific criminal organisations that are often also active in the smuggling of other goods (e.g. wildlife) and even people. In many cases, links between politicians and criminal organisations have been proven to exist. In June 2021, Interpol revealed that an operation conducted a month earlier across 92 countries had shut down 113,000 websites and online marketplaces selling counterfeit or illicit medicines and medical products, led to the arrests of 227 people worldwide, recovered pharmaceutical products worth $23 million, and seized approximately nine million devices and drugs, including large quantities of fake COVID-19 tests and face masks.
== Societal effects ==
The countries of drug production and transit are some of the most affected by the trade, though countries receiving the illegally imported substances are also adversely affected. For example, Ecuador has absorbed up to 300,000 refugees from Colombia who are running from guerrillas, paramilitaries and drug lords. While some applied for asylum, others are still illegal immigrants. The drugs that pass from Colombia through Ecuador to other parts of South America create economic and social problems.
Honduras, through which an estimated 79% of cocaine passes on its way to the United States, had, as of 2011, the highest murder rate in the world. According to the International Crisis Group, the most violent regions in Central America, particularly along the Guatemala–Honduras border, are highly correlated with an abundance of drug trafficking activity.
=== Violent crime ===
In several countries, the illegal drug trade is thought to be directly linked to violent crimes such as murder and gun violence. This is especially true in many developing countries, such as Honduras, but it is also an issue for many developed countries worldwide. In the late 1990s in the United States, the Federal Bureau of Investigation estimated that 5% of murders were drug-related. In Colombia, drug violence can be driven by factors such as the economy, weak government, and a lack of authority within law enforcement.
After a crackdown by US and Mexican authorities in the first decade of the 21st century as part of tightened border security in the wake of the September 11 attacks, border violence inside Mexico surged. The Mexican government estimated that 90% of the killings were drug-related.
A report by the UK government's Drug Strategy Unit that was leaked to the press stated that, due to the high price of the highly addictive drugs heroin and cocaine, drug use was responsible for the great majority of crime, including 85% of shoplifting, 70–80% of burglaries and 54% of robberies. It concluded that "[t]he cost of crime committed to support illegal cocaine and heroin habits amounts to £16 billion a year in the UK".
== Drug trafficking routes ==
=== Africa ===
==== East and South ====
Heroin is increasingly trafficked from Afghanistan to Europe and America through eastern and southern African countries. This path is known as the "southern route" or "smack track". Repercussions of this trade include burgeoning heroin use and political corruption among intermediary African nations.
==== West ====
Cocaine produced in Colombia and Bolivia has increasingly been shipped via West Africa (especially in Nigeria, Cape Verde, Guinea-Bissau, Cameroon, Mali, Benin, Togo, and Ghana). The money is often laundered in countries such as Nigeria, Ghana, and Senegal.
According to the Africa Economic Institute, the value of illicit drug smuggling in Guinea-Bissau is almost twice the value of the country's GDP. Police officers are often bribed. A police officer's normal monthly wage of $93 is less than 2% of the value of 1 kilogram (2.2 lb) of cocaine (€7000 or $8750). The money can also be laundered using real estate. A house is built using illegal funds, and when the house is sold, legal money is earned. When drugs are sent over land through the Sahara, the drug traders have been forced to cooperate with terrorist organizations such as Al-Qaeda in the Islamic Maghreb.
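The wage comparison above is simple arithmetic; as an illustration, a small Python sketch using only the figures quoted in this paragraph:
# Monthly police wage as a fraction of the quoted value of one kilogram of cocaine.
monthly_wage_usd = 93
cocaine_kg_value_usd = 8750      # quoted as EUR 7000 / USD 8750 per kilogram
share = monthly_wage_usd / cocaine_kg_value_usd
print(f"monthly wage is {share:.1%} of one kilogram's value")   # ~1.1%, i.e. under 2%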
=== Asia ===
Drugs in Asia traditionally traveled the southern routes – the main caravan axes of Southeast Asia and Southern China – and include the former opium-producing countries of Thailand, Iran, and Pakistan. After the 1990s, particularly after the end of the Cold War (1991), borders were opened and trading and customs agreements were signed, so that the routes expanded to include China, Central Asia, and Russia. There are, therefore, diversified drug trafficking routes available today, particularly in the heroin trade, and these thrive due to the continuous development of new markets. A large quantity of drugs is smuggled into Europe from Asia. The main sources of these drugs are Afghanistan, along with countries that constituted the so-called Golden Crescent. From these producers, drugs are smuggled through West and Central Asia to their destinations in Europe and the United States. Iran is now a common route for smugglers, having been previously a primary trading route, due to its large-scale and costly war against drug trafficking. The Border Police Chief of Iran said that his country "is a strong barrier against the trafficking of illegal drugs to Caucasus, especially the Republic of Azerbaijan." The drugs produced by the Golden Triangle of Myanmar, Laos, and Thailand, on the other hand, pass through the southern routes to feed the Australian, US, and Asian markets.
=== South America ===
Venezuela has been a path to the United States and Europe for illegal drugs originating in Colombia, through Central America, Mexico and Caribbean countries such as Haiti, the Dominican Republic, and Puerto Rico.
According to the United Nations, cocaine trafficking through Venezuela increased from 2002 to 2008. In 2005, the government of Hugo Chávez severed ties with the United States Drug Enforcement Administration (DEA), accusing its representatives of spying. Following the departure of the DEA from Venezuela and the expansion of the DEA's partnership with Colombia in 2005, Venezuela became more attractive to drug traffickers. Between 2008 and 2012, Venezuela's world ranking for cocaine seizures declined from fourth in 2008 to sixth in 2012.
On 18 November 2016, following what was known as the Narcosobrinos incident, Venezuelan President Nicolás Maduro's two nephews were found guilty of trying to ship drugs into the United States so they could "obtain a large amount of cash to help their family stay in power".
According to research conducted by the Israel-based Abba Eban Institute as part of an initiative called the Janus Initiative, the main routes that Hezbollah uses for smuggling drugs run from Colombia, Venezuela and Brazil into West Africa, and then through northern Africa into Europe. This route allows Hezbollah to profit from the cocaine smuggling market and to leverage those profits for its activities.
== Online trafficking ==
Drugs are increasingly traded online on darknet markets on the dark web. Internet-based drug trafficking is the global distribution of narcotics that makes extensive use of technology; the use of the Internet for the illegal trafficking of controlled drugs can likewise be described as Internet-related drug trafficking. The platform Silk Road provided goods and services to 100,000 buyers before being shut down in October 2013. This prompted the creation of new platforms, such as Silk Road 2.0, which were also shut down.
== Profits ==
Statistics about profits from the drug trade are largely unknown due to its illicit nature. An online report published by the UK Home Office in 2007 estimated the illicit drug market in the UK at £4–6.6 billion a year.
In December 2009, United Nations Office on Drugs and Crime Executive Director Antonio Maria Costa claimed illegal drug money saved the banking industry from collapse. He claimed he had seen evidence that the proceeds of organized crime were "the only liquid investment capital" available to some banks on the brink of collapse during 2008. He said that a majority of the $352 billion (£216bn) of drug profits was absorbed into the economic system as a result: "In many instances, the money from drugs was the only liquid investment capital. In the second half of 2008, liquidity was the banking system's main problem and hence liquid capital became an important factor ... Inter-bank loans were funded by money that originated from the drugs trade and other illegal activities...there were signs that some banks were rescued that way".
Costa declined to identify countries or banks that may have received any drug money, saying that would be inappropriate because his office is supposed to address the problem, not apportion blame.
Though street-level drug sales are widely viewed as lucrative, a study by Sudhir Venkatesh suggested that many low-level employees receive low wages. In a study he conducted in the 1990s, working closely with members of the Black Gangster Disciple Nation in Chicago, he found that one gang (essentially a franchise) consisted of a leader (a college graduate named J.T.), three senior officers, and 25 to 75 street-level salesmen ('foot soldiers'), depending on the season. Selling crack cocaine, they took in approximately $32,000 per month over a six-year period. This was spent as follows: $5,000 went to the board of twenty directors of the Black Gangster Disciple Nation, who oversaw 100 such gangs for approximately $500,000 in monthly income. Another $5,000 monthly was paid for cocaine, and $4,000 for other non-wage expenses. J.T. took $8,500 monthly as his own salary. The remaining $9,500 monthly went to pay the employees a $7 per hour wage for officers and a $3.30 per hour wage for foot soldiers. Contrary to the popular image of drug sales as a lucrative profession, many of the employees were living with their mothers by necessity. Despite this, the gang had four times as many unpaid members who dreamed of becoming foot soldiers.
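The monthly breakdown quoted above sums exactly to the gang's revenue; as a purely illustrative check, a short Python sketch using only the figures reported here from Venkatesh's study:
# Monthly cash flow of the Chicago gang described above; figures are those
# quoted from Venkatesh's study, and the snippet is purely an arithmetic check.
monthly_revenue = 32_000
outflows = {
    "payment to Black Gangster Disciple Nation board": 5_000,
    "cocaine purchases": 5_000,
    "other non-wage expenses": 4_000,
    "leader (J.T.) salary": 8_500,
}
wages_pool = monthly_revenue - sum(outflows.values())
print(f"left for officer and foot-soldier wages: ${wages_pool:,}")   # $9,500
print("hourly rates quoted: officers $7.00, foot soldiers $3.30")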
== Impact of free trade ==
There are several arguments on whether or not free trade has a correlation to an increased activity in the illicit drug trade. Currently, the structure and operation of the illicit drug industry is described mainly in terms of an international division of labor. Free trade can open new markets to domestic producers who would otherwise resort to exporting illicit drugs. Additionally, extensive free trade among states increases cross-border drug enforcement and coordination between law enforcement agencies in different countries. However, free trade also increases the sheer volume of legal cross-border trade and provides cover for drug smuggling—by providing ample opportunity to conceal illicit cargo in legal trade. While international free trade continues to expand the volume of legal trade, the ability to detect and interdict drug trafficking is severely diminished. Towards the late 1990s, the top ten seaports in the world processed 33.6 million containers. Free trade has fostered integration of financial markets and has provided drug traffickers with more opportunities to launder money and invest in other activities. This strengthens the drug industry while weakening the efforts of law enforcement to monitor the flow of drug money into the legitimate economy. Cooperation among cartels expands their scope to distant markets and strengthens their abilities to evade detection by local law enforcement. Additionally, criminal organizations work together to coordinate money-laundering activities by having separate organizations handle specific stages of laundering process. One organization structures the process of how financial transactions will be laundered, while another criminal group provides the "dirty" money to be cleaned. By fostering expansion of trade and global transportation networks, free trade encourages cooperation and formation of alliances among criminal organizations across different countries.
The drug trade in Latin America emerged in the early 1930s. It saw significant growth in the Andean countries, including Peru, Bolivia, Chile, Ecuador, Colombia and Venezuela. The underground market in the early half of the 20th century mainly had ties to Europe. After World War II, the Andean countries saw an expansion of trade, specifically with cocaine.
== Drug trafficking by country ==
=== Syria ===
The Ba'athist government of Syria ruled by the Al-Assad family is known for its extensive involvement in the drug trade since the 1970s. As of 2022, the Syrian government financed the biggest multi-billion-dollar drug trade in the world, mostly focused on the illegal drug known as Captagon, making Syria the world's largest narco-state. Its revenues from Captagon smuggling alone were estimated to be worth 57 billion dollars annually in 2022, approximately three times the total trade of all Mexican cartels. General Maher al-Assad, younger brother of Syrian dictator Bashar al-Assad and commander of the Fourth Armoured Division, directly supervised the production, smuggling and profiteering of the drug business. Already suffering from severe financial problems as a result of corruption and civil war, the Assad regime is said to rely on Captagon profits as its "lifeline", earning more than 90% of its total revenue from them. The smugglers receive direct training from the Syrian military to conduct trafficking operations successfully.
The Republican Guard, commanded by Maher al-Assad, was one of the main Ba'athist military divisions engaged in perpetrating brutal crackdowns and mass violence against protestors across the country. In 2018, Bashar al-Assad appointed Maher commander of the 4th Armoured Division, a military unit that supervised the Assad regime's criminal enterprises such as smuggling, drug trafficking, narcotics production and the plunder of goods and resources. Under Maher's supervision, the 4th Armoured Division expanded Captagon production and trafficking from Syria into a "business model controlled by the regime".
In 2022, 90% of all Captagon pills manufactured in Syria and exported by drug cartels affiliated with the Assad regime reached their customer destinations across the world. Although hundreds of millions of pills were intercepted and seized by police forces, these accounted for only 10% of the total Captagon exports of the cartels linked to the regime. In 2020, Italian police seized 84 million Captagon pills originating from Syrian ports while intercepting a single shipment. In June 2023, the US State Department's Bureau of Near Eastern Affairs published a detailed report to the US Congress elucidating a strategy to eliminate the narcotics production, drug trafficking and drug cartel networks affiliated with the Assad regime and Hezbollah.
A joint investigation by the Organized Crime and Corruption Reporting Project (OCCRP) and BBC News Arabic resulted in a documentary published in June 2023, revealing further details about the involvement of regime officials, Ba'athist military commanders and Assad family members in Syria's drug cartel. The investigation found that the Lebanese criminal and drug kingpin Hassan Daqou collaborated with Syria's Fourth Armoured Division on trafficking billions of dollars of drugs, under the command of General Ghassan Bilal, the right-hand man of Maher al-Assad. The report also unearthed Hezbollah's close participation in the drug production and smuggling networks. The Fourth Armoured Division, an elite military unit permitted to move freely across the Assad regime's checkpoints, oversees smuggling operations from Syria, including the trafficking of cash, weapons and illegal drugs. Days after the publication of the joint BBC-OCCRP documentary, the Assad government banned all activities of BBC media outlets and the entry of affiliated media personnel into Syria.
The extensive involvement of the Syrian Armed Forces in sponsoring drug production and trade has led to pervasive drug addiction problems among pro-Assad soldiers. In many instances, military officials encourage soldiers to consume Captagon and other illegal drugs, leading to overdose or drug abuse. Pro-Assad fighters in the National Defence Forces and Hezbollah also consume illegal drugs in large quantities. In July 2023, German police busted a major Captagon network run by two Syrian-born men in the southern German state of Bavaria. The Assad regime sponsors the largest Captagon production network in Syria, which is the source of about 80% of the total Captagon supply in the world.
In an investigative report published by The Insider news-outlet in 2024, journalist Yuriy Matsarsky stated:"...Captagon produced in underground labs—and in actual Syrian pharmaceutical facilities—is distributed to the fighters of Bashar Assad's army. Interestingly, however, the quantities the country produces far exceed its own military’s demand. ... By some estimates, this business gives Syria more money than its entire legal export, and the regime constantly works to increase its profits, primarily by expanding its market reach. To this end, criminal gangs associated with Damascus or Hezbollah have built distribution networks for Captagon in countries where the drug was not previously popular."
=== United States ===
==== Background ====
The effects of the illegal drug trade in the United States can be seen in a range of political, economic and social aspects. Increasing drug-related violence can be tied to the racial tension that arose during the late 20th century, along with the political upheaval prevalent throughout the 1960s and 70s. The second half of the 20th century was a period when increased wealth and increased discretionary spending raised the demand for illicit drugs in certain areas of the United States. Large-scale drug trafficking is a capital crime and may result in a death sentence prescribed at the federal level when it involves murder.
==== Political impact ====
A large generation, the baby boomers, came of age in the 1960s. Their social tendency to confront the law on specific issues, including illegal drugs, overwhelmed the understaffed judicial system. The federal government attempted to enforce the law, but with meager effect.
Marijuana was a popular drug traded through the Latin American route in the 1960s; cocaine became a major drug product in later decades. Much of the cocaine was smuggled from Colombia and Mexico via Jamaica. This led several administrations to combat the popularity of these drugs. Due to the influence of this development on the US economy, the Reagan administration began "certifying" countries based on their attempts at controlling drug trafficking, which allowed the United States to intervene in activities related to illegal drug transport in Latin America. Continuing into the 1980s, the United States instituted stricter policies pertaining to drug transit by sea. As a result, there was an influx of drug trafficking across the Mexico–US border, which increased drug cartel activity in Mexico.
By the early 1990s, as much as 50% of the cocaine available on the United States market originated from Mexico, and by the 2000s, over 90% of the cocaine in the United States was imported from Mexico. In Colombia, however, the major drug cartels fell in the mid-1990s. Visible shifts occurred in the drug market in the United States: between 1996 and 2000, US cocaine consumption dropped by 11%.
In 2008, the US government initiated another program, known as the Merida Initiative, to help combat drug trafficking in Mexico. This program increased US security assistance to $1.4 billion over several years, which helped supply Mexican forces with "high-end equipment from helicopters to surveillance technology". Despite US aid, Mexican "narcogangs" continue to outnumber and outgun the Mexican Army, allowing for continued activities of drug cartels across the US–Mexico border.
==== Social impacts ====
Although narcotics are illegal in the US, they have become integrated into the nation's culture and are seen as a recreational activity by sections of the population. Illicit drugs are considered to be a commodity with strong demand, as they are typically sold at a high value. This high price is caused by a combination of factors that include the potential legal ramifications that exist for suppliers of illicit drugs and their high demand. Despite the constant effort by politicians to win the war on drugs, the US is still the world's largest importer of illegal drugs.
Throughout the 20th century, narcotics other than cocaine also crossed the Mexican border, meeting the US demand for alcohol during the 1920s Prohibition, opiates in the 1940s, marijuana in the 1960s, and heroin in the 1970s. Most of the US imports of drugs come from Mexican drug cartels. In the United States, around 195 cities have been infiltrated by drug trafficking that originated in Mexico. An estimated $10bn of the Mexican drug cartels' profits comes from the United States, not only supplying the cartels with the profit necessary for survival, but also furthering America's economic dependence on drugs.
===== Demographics =====
With a large wave of immigration from the 1960s onwards, the United States saw increased heterogeneity in its population. In the 1980s and 1990s, drug-related homicide was at a record high, and this drug violence became increasingly tied to ethnic minorities. Though the rate of violence varied tremendously among American cities, it was a common anxiety in urban communities across the country. An example could be seen in Miami, a city with a host of ethnic enclaves: between 1985 and 1995, the homicide rate in Miami was one of the highest in the nation, at four times the national average. This crime rate was correlated with areas of low employment and was not entirely dependent on ethnicity.
The baby boomer generation also felt the effects of the drug trade in their increased drug use from the 1960s to 1980s. Along with substance use, criminal involvement, suicide and murder were also on the rise. Due to the large amount of baby boomers, commercial marijuana use was on the rise. This increased the supply and demand for marijuana during this time period.
=== Mexico ===
==== Political influences ====
Corruption in Mexico has contributed to the domination of Mexican cartels in the illicit drug trade. Since the beginning of the 20th century, Mexico's political environment allowed the growth of drug-related activity. The loose regulation over the transportation of illegal drugs and the failure to prosecute known drug traffickers and gangs increased the growth of the drug industry. Toleration of drug trafficking has undermined the authority of the Mexican government and has decreased the power of law enforcement officers in regulation over such activities. These policies of tolerance fostered the growing power of drug cartels in the Mexican economy and have made drug traders wealthier. Many states in Mexico lack policies that establish stability in governance. There also is a lack of local stability, as mayors cannot be re-elected. This requires electing a new mayor each term. Drug gangs have manipulated this, using vacuums in local leadership to their own advantage.
In 1929, the Institutional Revolutionary Party (PRI) was formed to resolve the chaos resulting from the Mexican Revolution. Over time, this party gained political influence and had a major impact on Mexico's social and economic policies. The party created ties with various groups as a power play in order to gain influence, and as a result created more corruption in the government. One such power play was an alliance with drug traffickers. This political corruption obscured justice, making it difficult to identify violence when it related to drugs. By the 1940s, the tie between the drug cartels and the PRI had solidified. This arrangement created immunity for the leaders of the drug cartels and allowed drug trafficking to grow under the protection of the government officials.
During the 1990s, the PRI lost some elections to the new National Action Party (PAN). Chaos again emerged as elected government in Mexico changed drastically. As the PAN party took control, drug cartel leaders took advantage of the ensuing confusion and used their existing influence to further gain power. Instead of negotiating with the central government as was done with the PRI party, drug cartels utilized new ways to distribute their supply and continued operating through force and intimidation. As Mexico became more democratized, the corruption fell from a centralized power to the local authorities. Cartels began to bribe local authorities, thus eliminating the structure and rules placed by the government—giving cartels more freedom. As a response, Mexico saw an increase in violence caused by drug trafficking.
The corruption cartels created resulted in distrust of government by the Mexican public. This distrust became more prominent after the collapse of the PRI party. In response, the presidents of Mexico, in the late twentieth century and early twenty-first century, implemented several different programs relating to law enforcement and regulation. In 1993, President Salinas created the National Institute for the Combat of Drugs in Mexico. From 1995 to 1998, President Zedillo established policies regarding increased punishment of organized crime, allowing "[wire taps], protected witnesses, covert agents and seizures of goods", and increasing the quality of law enforcement at the federal level. From 2001 to 2005, President Vicente Fox created the Federal Agency of Investigation.
These policies resulted in the arrests of several major drug-trafficking bosses.
==== Mexico's economy ====
Over the past few decades, drug cartels have become integrated into Mexico's economy. Approximately 500 cities are directly engaged in drug trafficking and nearly 450,000 people are employed by drug cartels. Additionally, the livelihood of 3.2 million people is dependent on the drug cartels. Between local and international sales, such as to Europe and the United States, drug cartels in Mexico see a $25–30 bn yearly profit, a great deal of which circulates through international banks such as HSBC. Drug cartels are fundamental to local economies. A percentage of the profits from the trade is invested in the local community, contributing to education and healthcare. While these cartels bring violence and hazards into communities, they also create jobs and provide income for their many members.
==== Culture of drug cartels ====
Major cartels saw growth due to a prominent, established culture in Mexican society that created the means for drug capital. One of the sites of origin for drug trafficking within Mexico was the state of Michoacán. In the past, Michoacán was mainly an agricultural society, which provided an initial basis for trade. Industrialization of rural areas of Mexico facilitated a greater distribution of drugs, expanding the drug market into different provinces. Once towns became industrialized, cartels such as the Sinaloa Cartel started to form and expand. The proliferation of drug cartel culture largely stemmed from the ranchero culture seen in Michoacán, which values the individual over society as a whole. This culture fostered the drug culture's valuing of the family formed within the cartel, an ideal that allowed for greater organization within the cartels.
Gangs play a major role in the activity of drug cartels. MS-13 and the 18th Street gang are notorious for their contributions and influence over drug trafficking throughout Latin America. MS-13 has controlled much of the activity in the drug trade spanning from Mexico to Panama. Female involvement is present in the Mexican drug culture. Although females are not treated as equals to males, they typically hold more power than their culture allows and acquire some independence. The increase in power has attracted females from higher social classes. Financial gain has also prompted women to become involved in the illegal drug market. Many women in the lower levels of major drug cartels belong to a low economic class. Drug trafficking offers women an accessible way to earn income. Females from all social classes have become involved in the trade due to outside pressure from their social and economic environments.
=== Colombia ===
==== Political ties ====
It was common for smugglers in Colombia to import liquor, alcohol, cigarettes and textiles, while exporting cocaine. Personnel with knowledge of the terrain were able to supply the local market while also exporting a large amount of product. The established trade initially involved Peru, Bolivia, Colombia, Venezuela and Cuba. Peasant farmers produced coca paste in Peru and Bolivia, while Colombian smugglers would process the coca paste into cocaine in Colombia, and trafficked product through Batista's Cuba. This trade route established ties between Cuban and Colombian organized crime.
From Cuba, cocaine would be transported to Miami, Florida; and Union City, New Jersey. Quantities of the drug were then smuggled throughout the US. The international drug trade created political ties between the involved countries, encouraging the governments of the countries involved to collaborate and instate common policies to eradicate drug cartels. Cuba stopped being a center for transport of cocaine following the Cuban Revolution and the establishment of Fidel Castro's communist government in 1959.
As a result, Miami and Union City became the sole locations for trafficking. The relations between Cuban and Colombian organized crime remained strong until the 1970s, when Colombian cartels began to vie for power. In the 1980s and 90s, Colombia emerged as a key contributor of the drug trade industry in the Western Hemisphere. While the smuggling of drugs such as marijuana, poppy, opium and heroin became more ubiquitous during this time period, the activity of cocaine cartels drove the development of the Latin American drug trade. The trade emerged as a multinational effort as supplies (i.e. coca plant substances) were imported from countries such as Bolivia and Peru, were refined in Colombian cocaine labs and smuggled through Colombia, and exported to countries such as the US.
==== Colombia's economy ====
Colombia has had a significant role in the illegal drug trade in Latin America. While active in the drug trade since the 1930s, Colombia's role in the drug trade did not truly become dominant until the 1970s. When Mexico eradicated marijuana plantations, demand stayed the same. Colombia met much of the demand by growing more marijuana. Grown in the strategic northeast region of Colombia, marijuana soon became the leading cash crop in Colombia. This success was short-lived due to anti-marijuana campaigns that were enforced by the US military throughout the Caribbean. Instead, drug traffickers in Colombia continued their focus on exporting cocaine.
Having been an export of Colombia since the early 1950s, cocaine remained popular for a host of reasons. Colombia's location facilitated its transportation from South America into Central America, and then to its destination of North America. This continued into the 1990s, when Colombia remained the chief exporter of cocaine. The business of drug trafficking can be seen in several stages in Colombia towards the latter half of the 20th century. Colombia served as the dominant force in the distribution and sale of cocaine by the 1980s. As drug producers gained more power, they became more centralized and organized into what became drug cartels.
Cartels controlled the major aspects of each stage in the traffic of their product. Their organization allowed cocaine to be distributed in great amounts throughout the United States. By the late 1980s, intra-industry strife arose within the cartels. This stage was marked by increased violence as different cartels fought for control of export markets. Despite this strife, the power struggle resulted in multiple producers of coca leaf farms, which in turn improved quality control and reduced police interdiction in the distribution of cocaine. This also led to cartels attempting to repatriate their earnings, which would eventually make up 5.5% of Colombia's GDP. The drive to repatriate earnings created pressure to legitimize their wealth, causing an increase in violence throughout Colombia.
Throughout the 1980s, estimates of illegal drug value in Colombia ranged from $2bn to $4bn. This made up about 7–10% of the $36bn estimated Gross National Product (GNP) of Colombia during this decade. In the 1990s, the estimates of the illegal drug value remained roughly within the same range (~$2.5bn). As the Colombian GNP rose throughout the 1990s ($68.5bn in 1994 and $96.3bn in 1997), illegal drug values began to comprise a decreasing fraction of the national economy.
By the early 1990s, although Colombia led in the exportation of cocaine, it found increasing confrontations within its state. These confrontations were primarily between cartels and government institutions. This led to a decrease in the drug trade's contribution to the GDP of Colombia; dropping from 5.5% to 2.6%. Though a contributor of wealth, the distribution of cocaine has had negative effects on the socio-political situation of Colombia and has weakened its economy as well.
==== Social impacts ====
By the 1980s, Colombian cartels became the dominant cocaine distributors in the US. This led to the spread of increased violence throughout both Latin America and Miami. In the 1980s, two major drug cartels emerged in Colombia: the Medellín and Cali groups.
Throughout the 1990s, however, several factors led to the decline of these major cartels and to the rise of smaller Colombian cartels. US demand for cocaine dropped while Colombian production rose, pressuring traffickers to find new drugs and markets. In this period, increased activity by Caribbean cartels led to the rise of an alternate smuggling route through Mexico, which in turn led to increased collaboration between major Colombian and Mexican drug traffickers. Such drastic changes in the execution of the drug trade in Colombia, paired with political instability and the rise of drug wars in Medellín and Cali, gave way to the rise of smaller Colombian drug trafficking organizations (and the rise of the heroin trade). As the drug trade's influence over the economy increased, drug lords and their networks grew in power and influence in society, and drug-related violence increased as drug lords fought to maintain their control over the economy.
Typically, a drug cartel had support networks consisting of a number of individuals, ranging from those directly involved in the trade (such as suppliers, chemists, transporters and smugglers) to those involved indirectly (such as politicians, bankers and police). As these smaller Colombian drug cartels grew in prevalence, several notable aspects of Colombian society gave way to further development of the Colombian drug industry. For example, until the late 1980s, the long-term effects of the drug industry were not realized by much of society. Additionally, there was a lack of regulation in the prisons where captured traffickers were sent. These prisons were under-regulated, under-funded and under-staffed, which allowed for the formation of prison gangs, the smuggling of arms and other contraband, feasible escapes, and even for captured drug lords to continue running their businesses from prison.
=== Western Balkans ===
Since the beginning of the 21st century, the global drug trade network has witnessed the emergence of criminal groups from the Western Balkans as crucial players. These groups have moved up from being small-time crooks to major drug distributors. Most of these organized crime groups are from Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia and Serbia. The illicit trade activities of the Balkans primarily involve Latin America, Western Europe, South Africa, Australia and Turkey. These groups keep their operations outside the Western Balkans while staying connected to their homeland. Within the network of these groups, the dealmakers operate in proximity to supply sources, and the distribution networks are managed by foot soldiers. However, the bosses of the organized criminal groups stay, and keep their wealth, in the United Arab Emirates (UAE). The UAE is among the enablers of global corruption and illicit financial flows, and analysts have claimed that criminal actors across the world either operate from or through the Emirates. It has been described as a haven for criminals, where the risk associated with illicit activities remains low.
For the Balkan criminals, a growing trend was to relocate to the UAE, which became an attraction for dirty money and kingpins from several European nations and the United Kingdom. Dubai was also dubbed the "new Costa del Crime", replacing Spain's crime hideaway, the Costa del Sol. The UAE had poor regulations for money laundering and for screening suspicious transactions, and this lack of regulation against illicit financial activities prompted the Financial Action Task Force (FATF) to place the Gulf country on its grey list in March 2022. Consequently, the Emirates remained a safe option for criminals. Nearly two-thirds of the Albanian criminal groups active in the trade of drugs like cocaine were believed to be hiding in the UAE. One such individual, Eldi Dizdari, was accused of international drug trafficking and was living in Dubai. Research revealed that these criminals invested huge amounts in the Emirates' real estate and other economic sectors in order to live there. Another cocaine trafficker, Edin Gačanin from Bosnia, was living in the UAE, using his extensive profits to buy property and protection in the country. Dubbed the "European Escobar", he connected the supply network between the production markets of Latin America and the consumer markets of Western Europe. He was able to evade arrest and investigations, including by the US Drug Enforcement Administration, by seeking shelter in the Emirates.
== Trade in specific drugs ==
=== Cannabis ===
While the recreational use of (and consequently the distribution of) cannabis is illegal in most countries throughout the world, recreational distribution is legal in some countries, such as Canada, and medical distribution is permitted in some places, such as 38 of the 50 US states (although importation and distribution is still federally prohibited). Beginning in 2014, Uruguay became the first country to legalize cultivation, sale, and consumption of cannabis for recreational use for adult residents. In 2018, Canada became the second country to legalize use, sale and cultivation of cannabis. The first few weeks were met with extremely high demand, with most shops out of stock after only four days of operation.
Cannabis use is tolerated in some areas, most notably the Netherlands, which has legalized the possession and licensed sale (but not cultivation) of the drug. Many nations have decriminalized the possession of small amounts of marijuana. Due to the hardy nature of the cannabis plant, marijuana is grown all across the world; today, it is the world's most popular illegal drug with the highest level of availability. Cannabis is grown legally in many countries for industrial, non-drug use (known as hemp) as well. Cannabis-hemp may also be planted for other non-drug domestic purposes, such as seasoning that occurs in Aceh.
The demand for cannabis around the world, coupled with the drug's relative ease of cultivation, makes the illicit cannabis trade one of the primary ways in which organized criminal groups finance many of their activities. In Mexico, for example, the illicit trafficking of cannabis is thought to constitute the majority of many of the cartels' earnings, and the main way in which the cartels finance many other illegal activities; including the purchase of other illegal drugs for trafficking, and for acquiring weapons that are ultimately used to commit murders (causing a burgeoning in the homicide rates of many areas of the world, but particularly Latin America).
Some studies show that the increased legalization of cannabis in the United States (beginning in 2012 with Washington Initiative 502 and Colorado Amendment 64) has led Mexican cartels to smuggle less cannabis in exchange for more heroin.
=== Alcohol ===
Alcohol, in the context of alcoholic beverages rather than denatured alcohol, is illegal in a number of Muslim countries, such as Saudi Arabia; this has resulted in a thriving illegal trade in alcohol. The manufacture, sale, transportation, import, and export of alcoholic beverages were illegal in the United States during the time known as the Prohibition in the 1920s and early 1930s.
=== Heroin ===
In the 1950s and 1960s, most heroin was produced in Turkey and transshipped through France via the French Connection crime ring, with much of it arriving in the United States. This resulted in the record-setting seizure on April 26, 1968 of 246 lb (111.6 kg) of heroin smuggled in a vehicle aboard the ocean liner SS France (1960). By the time of the 1971 film The French Connection, this route was being supplanted.
Then, until c. 2004, the majority of the world's heroin was produced in an area known as the Golden Triangle. However, by 2007, 93% of the opiates on the world market originated in Afghanistan. This amounted to an export value of about US$4 billion, with a quarter being earned by opium farmers and the rest going to district officials, insurgents, warlords and drug traffickers. Another significant area where poppies are grown for the manufacture of heroin is Mexico. In November 2023, a UN report showed that poppy cultivation across Afghanistan had dropped by over 95%, ending its position as the world's largest opium producer.
According to the United States Drug Enforcement Administration, heroin is typically valued at 8 to 10 times the price of cocaine on American streets, making it a high-profit substance for smugglers and dealers. In Europe (except the transit countries Portugal and the Netherlands), for example, a purported gram of street heroin, usually consisting of 700–800 mg of a light to dark brown powder containing 5–10% heroin base, costs €30–70, making the effective value per gram of pure heroin €300–700. Heroin is generally preferred over unrefined opium for smuggling and distribution because of its cost-effectiveness and increased potency.
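To show how the effective value per pure gram in the preceding paragraph follows from the street figures, here is a minimal Python sketch; it takes the nominal gram at face value and uses only the prices and purities quoted above, so it is illustrative rather than a market model.
# Effective price per gram of pure heroin implied by the street figures above:
# one nominal gram of powder at a given purity, sold for a given street price.
def effective_price_per_pure_gram(street_price_eur, purity):
    # Scale the street price of one nominal gram up to 100% purity.
    return street_price_eur / purity

for price in (30, 70):
    for purity in (0.05, 0.10):
        value = effective_price_per_pure_gram(price, purity)
        print(f"EUR {price} at {purity:.0%} purity -> EUR {value:.0f} per pure gram")
# At 10% purity this reproduces the quoted EUR 300-700 range; lower purity implies higher effective values.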
Because of its high value per unit of volume, heroin is easily smuggled. A US quarter-sized (2.5 cm) cylindrical vial can contain hundreds of doses. From the 1930s to the early 1970s, the so-called French Connection supplied the majority of US demand. Allegedly, during the Vietnam War, drug lords such as Ike Atkinson smuggled hundreds of kilograms of heroin to the US in the coffins of dead American soldiers (see Cadaver Connection). Since that time it has become more difficult than in previous decades to import drugs into the US, but smugglers continue to get heroin across US borders. Purity levels vary greatly by region, with Northeastern cities having the purest heroin in the United States. On 17 October 2018, police in Genoa, Italy discovered 270 kg (600 lb) of heroin hidden in a ship coming from the southern Iranian port of Bandar Abbas. The ship had already passed through and stopped at Hamburg in Germany and Valencia in Spain.
Penalties for smuggling heroin or morphine are often harsh in most countries. Some countries will readily hand down a death sentence (e.g. Singapore) or life in prison for the illegal smuggling of heroin or morphine, which are both internationally Schedule I drugs under the Single Convention on Narcotic Drugs.
In May 2021, Romania seized 1.4 tonnes of heroin at the port of Constanța from an Iranian shipment that was headed for Western Europe.
=== Methamphetamine ===
Methamphetamine is another popular drug among distributors. Three common street names are "meth", "crank", and "ice".
According to the Community Epidemiology Work Group, the number of clandestine methamphetamine laboratory incidents reported to the National Clandestine Laboratory Database decreased from 1999 to 2009. During this period, methamphetamine lab incidents increased in Midwestern states (Illinois, Michigan, Missouri, and Ohio) and in Pennsylvania. In 2004, more lab incidents were reported in Missouri (2,788) and Illinois (1,058) than in California (764). In 2003, methamphetamine lab incidents reached new highs in Georgia (250), Minnesota (309), and Texas (677). There were only seven methamphetamine lab incidents reported in Hawaii in 2004, though nearly 59 percent of substance use treatment admissions (excluding alcohol) were for primary methamphetamine use during the first six months of 2004. As of 2007, Missouri led the United States in drug-lab seizures, with 1,268 incidents reported. Canine units are often used to detect rolling meth labs, which can be concealed in large vehicles or transported on something as small as a motorcycle. These labs are more difficult to detect than stationary ones and can often be obscured among legal cargo in big trucks.
Methamphetamine is sometimes used intravenously, placing users and their partners at risk for transmission of HIV and hepatitis C. "Meth" can also be inhaled, most commonly vaporized on aluminum foil or in a glass pipe. This method is reported to give "an unnatural high" and a "brief intense rush".
In South Africa, methamphetamine is known locally as "tik" or "tik-tik". The substance was virtually unknown there as late as 2003, but is now the country's main addictive substance, even when alcohol is included. Children as young as eight are abusing it, smoking it in crude glass vials made from light bulbs. Since methamphetamine is easy to produce, it is manufactured locally in staggering quantities.
The government of North Korea currently operates methamphetamine production facilities. There, the drug is used as medicine because no alternatives are available; it also is smuggled across the Chinese border.
The Australian Crime Commission's illicit drug data report for 2011–2012 stated that the average strength of crystal methamphetamine doubled in most Australian jurisdictions within a 12-month period, and the majority of domestic laboratory closures involved small "addict-based" operations.
=== Temazepam ===
Temazepam, a strong hypnotic benzodiazepine, is illicitly manufactured in clandestine laboratories (called jellie labs) to supply the increasingly high demand for the drug internationally. Many clandestine temazepam labs are in Eastern Europe. The labs manufacture temazepam by chemically altering diazepam, oxazepam or lorazepam. "Jellie labs" have been identified and shut down in Russia, Ukraine, Latvia and Belarus.
=== Cocaine ===
Cocaine is a highly trafficked drug. In 2017 the value of the global market for illicit cocaine was estimated at between $94 and $143 billion. In 2022, illicit sales in Europe were estimated at $11.1 billion. In 2020, almost 2,000 tons of cocaine were produced for distribution through illicit markets.
=== Fentanyl ===
Fentanyl, a synthetic opioid, is 20 to 40 times more potent than heroin and 100 times more potent than morphine; its primary clinical utility is in pain management for cancer patients and those recovering from painful surgeries. Illicit use of fentanyl continues to fuel an epidemic of synthetic opioid drug overdose deaths in the US. From 2011 to 2021, synthetic opioid overdose deaths per year increased from 2,600 to 70,601. Since 2018, fentanyl and its analogues have been responsible for most drug overdose deaths in the US, causing over 71,238 deaths in 2021. Fentanyl is often mixed, cut, or ingested alongside other drugs, including cocaine and heroin. The fentanyl epidemic has ignited a highly acrimonious dispute between the US and Mexican governments. While US officials blame the flood of fentanyl crossing the border primarily on Mexican crime groups, President Andrés Manuel López Obrador insists that the main source of this synthetic drug is Asia. He believes that a lack of family values in the US drives people to use the drug.
== See also ==
Allegations of CIA drug trafficking
Arguments for and against drug prohibition
Corruption
Counterfeit medications
Counterfeit money
Environmental impact of illicit drug production
Rum running
Illicit cigarette trade
Human trafficking
Arms trafficking
Wildlife trafficking
Illegal organ trade
Drug liberalization
Drug trafficking organizations
Golden Crescent
Golden Triangle (Southeast Asia)
Illegal drug trade in the Indian Ocean region
Maritime drug smuggling into Australia
Narco-capitalism
Narco-state
Narcoterrorism
Organized crime
Operation Show Me How
=== International coordination ===
International Day Against Drug Abuse and Illicit Trafficking
Interpol
United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances
== References ==
== External links ==
Official website of the United Nations Office on Drugs and Crime (UNODC)
Illicit drug issues by country, by the CIA. Archived 2010-12-29 at the Wayback Machine. | Wikipedia/Drug_smuggling |