In mathematics, a morphism is a concept of category theory that generalizes structure-preserving maps such as homomorphisms between algebraic structures, functions from one set to another, and continuous functions between topological spaces. Although many examples of morphisms are structure-preserving maps, morphisms need not be maps, but they can be composed in a way that is similar to function composition.
Morphisms and objects are constituents of a category. Morphisms, also called maps or arrows, relate two objects called the source and the target of the morphism. There is a partial operation, called composition, on the morphisms of a category that is defined if the target of the first morphism equals the source of the second. The composition of morphisms behaves like function composition (associativity of composition when it is defined, and existence of an identity morphism for every object).
Morphisms and categories recur in much of contemporary mathematics. Originally, they were introduced for homological algebra and algebraic topology. They belong to the foundational tools of Grothendieck's scheme theory, a generalization of algebraic geometry that applies also to algebraic number theory.
== Definition ==
A category C consists of two classes, one of objects and the other of morphisms. There are two objects that are associated with every morphism, the source and the target. A morphism f from X to Y is a morphism with source X and target Y; it is commonly written as f : X → Y or X f→ Y, the latter form being better suited for commutative diagrams.
For many common categories, an object is a set (often with some additional structure) and a morphism is a function from an object to another object. Therefore, the source and the target of a morphism are often called domain and codomain respectively.
Morphisms are equipped with a partial binary operation, called composition. The composition of two morphisms f and g is defined precisely when the target of f is the source of g, and is denoted g ∘ f (or sometimes simply gf). The source of g ∘ f is the source of f, and the target of g ∘ f is the target of g. The composition satisfies two axioms:
Identity
For every object X, there exists a morphism idX : X → X called the identity morphism on X, such that for every morphism f : A → B we have idB ∘ f = f = f ∘ idA.
Associativity
h ∘ (g ∘ f) = (h ∘ g) ∘ f whenever all the compositions are defined, i.e. when the target of f is the source of g, and the target of g is the source of h.
For a concrete category (a category in which the objects are sets, possibly with additional structure, and the morphisms are structure-preserving functions), the identity morphism is just the identity function, and composition is just ordinary composition of functions.
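For finite sets, the identity and associativity axioms can be checked directly. In the following sketch, functions are represented as Python dicts; the sets, functions, and helper names (`compose`, `identity`) are illustrative, not standard notation:

```python
# Composition and identity in the category of finite sets,
# with morphisms (functions) represented as Python dicts.

def compose(g, f):
    """g ∘ f: apply f first, then g; defined when f's targets lie in g's domain."""
    return {x: g[f[x]] for x in f}

def identity(obj):
    """The identity morphism id_X on a finite set X."""
    return {x: x for x in obj}

X, Y, Z = {0, 1}, {"a", "b"}, {10, 20, 30}
f = {0: "a", 1: "b"}              # f : X → Y
g = {"a": 10, "b": 30}            # g : Y → Z
h = {10: "p", 20: "q", 30: "r"}   # h : Z → W

# Identity axiom: id_Y ∘ f = f = f ∘ id_X
assert compose(identity(Y), f) == f == compose(f, identity(X))

# Associativity: h ∘ (g ∘ f) = (h ∘ g) ∘ f
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
```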
The composition of morphisms is often represented by a commutative diagram. For example,
The collection of all morphisms from X to Y is denoted HomC(X, Y) or simply Hom(X, Y) and called the hom-set between X and Y. Some authors write MorC(X, Y), Mor(X, Y) or C(X, Y). The term hom-set is something of a misnomer, as the collection of morphisms is not required to be a set; a category where Hom(X, Y) is a set for all objects X and Y is called locally small. Because hom-sets may not be sets, some people prefer to use the term "hom-class".
The domain and codomain are in fact part of the information determining a morphism. For example, in the category of sets, where morphisms are functions, two functions may be identical as sets of ordered pairs, while having different codomains. The two functions are distinct from the viewpoint of category theory. Many authors require that the hom-classes Hom(X, Y) be disjoint. In practice, this is not a problem because if this disjointness does not hold, it can be assured by appending the domain and codomain to the morphisms (say, as the second and third components of an ordered triple).
== Some special morphisms ==
=== Monomorphisms and epimorphisms ===
A morphism f : X → Y is called a monomorphism if f ∘ g1 = f ∘ g2 implies g1 = g2 for all morphisms g1, g2 : Z → X. A monomorphism can be called a mono for short, and we can use monic as an adjective. A morphism f has a left inverse or is a split monomorphism if there is a morphism g : Y → X such that g ∘ f = idX. Thus f ∘ g : Y → Y is idempotent; that is, (f ∘ g) ∘ (f ∘ g) = f ∘ (g ∘ f) ∘ g = f ∘ g. The left inverse g is also called a retraction of f.
Morphisms with left inverses are always monomorphisms: if g is a left inverse of f, then f ∘ g1 = f ∘ g2 implies g1 = g ∘ f ∘ g1 = g ∘ f ∘ g2 = g2. The converse is not true in general; a monomorphism may fail to have a left inverse. In concrete categories, a function that has a left inverse is injective. Thus, in concrete categories, monomorphisms are often, but not always, injective. The condition of being an injection is stronger than that of being a monomorphism, but weaker than that of being a split monomorphism.
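In the category of finite sets, both facts above can be verified exhaustively. The sketch below (functions as dicts; all names illustrative) checks that an injective function has a retraction and satisfies left-cancellation:

```python
from itertools import product

X, Y, Z = [0, 1], ["a", "b", "c"], ["u", "v"]
f = {0: "a", 1: "b"}           # f : X → Y, injective
g = {"a": 0, "b": 1, "c": 0}   # g : Y → X, a retraction of f

def compose(g_, f_):
    """g_ ∘ f_ as dict composition."""
    return {x: g_[f_[x]] for x in f_}

# g is a left inverse: g ∘ f = id_X
assert compose(g, f) == {0: 0, 1: 1}

# f is monic: for every pair g1, g2 : Z → X, f ∘ g1 = f ∘ g2 forces g1 = g2.
for v1, v2 in product(product(X, repeat=len(Z)), repeat=2):
    g1, g2 = dict(zip(Z, v1)), dict(zip(Z, v2))
    if compose(f, g1) == compose(f, g2):
        assert g1 == g2
```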
Dually to monomorphisms, a morphism f : X → Y is called an epimorphism if g1 ∘ f = g2 ∘ f implies g1 = g2 for all morphisms g1, g2 : Y → Z. An epimorphism can be called an epi for short, and we can use epic as an adjective. A morphism f has a right inverse or is a split epimorphism if there is a morphism g : Y → X such that f ∘ g = idY. The right inverse g is also called a section of f. Morphisms having a right inverse are always epimorphisms, but the converse is not true in general, as an epimorphism may fail to have a right inverse.
If a monomorphism f splits with left inverse g, then g is a split epimorphism with right inverse f. In concrete categories, a function that has a right inverse is surjective. Thus, in concrete categories, epimorphisms are often, but not always, surjective. The condition of being a surjection is stronger than that of being an epimorphism, but weaker than that of being a split epimorphism. In the category of sets, the statement that every surjection has a section is equivalent to the axiom of choice.
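For finite sets the section mentioned above can be constructed explicitly, with no appeal to the axiom of choice; a minimal sketch (functions as dicts, names illustrative):

```python
# In Set, every surjection f : X → Y has a section g : Y → X with f ∘ g = id_Y.
# For finite sets the "choice" is simply picking one preimage per element.

f = {0: "a", 1: "a", 2: "b", 3: "b"}   # surjective onto Y = {"a", "b"}
Y = set(f.values())

g = {}
for x, y in f.items():
    g.setdefault(y, x)                  # first preimage encountered wins

# f ∘ g = id_Y, so g is a right inverse (f is a split epimorphism)
assert all(f[g[y]] == y for y in Y)
```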
A morphism that is both an epimorphism and a monomorphism is called a bimorphism.
=== Isomorphisms ===
A morphism f : X → Y is called an isomorphism if there exists a morphism g : Y → X such that f ∘ g = idY and g ∘ f = idX. If a morphism has both a left inverse g and a right inverse h, then the two inverses are equal: g = g ∘ (f ∘ h) = (g ∘ f) ∘ h = h. Thus f is an isomorphism, and g is called simply the inverse of f. Inverse morphisms, if they exist, are unique. The inverse g is also an isomorphism, with inverse f. Two objects with an isomorphism between them are said to be isomorphic or equivalent.
While every isomorphism is a bimorphism, a bimorphism is not necessarily an isomorphism. For example, in the category of commutative rings the inclusion Z → Q is a bimorphism that is not an isomorphism. However, any morphism that is both an epimorphism and a split monomorphism, or both a monomorphism and a split epimorphism, must be an isomorphism. A category, such as Set, in which every bimorphism is an isomorphism is known as a balanced category.
=== Endomorphisms and automorphisms ===
A morphism f : X → X (that is, a morphism with identical source and target) is an endomorphism of X. An idempotent endomorphism f is said to split if it admits a decomposition f = h ∘ g with g ∘ h = id. In particular, the Karoubi envelope of a category splits every idempotent morphism.
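A split idempotent can be illustrated concretely in the category of finite sets; in this sketch (names and sets illustrative) the idempotent x ↦ |x| factors through its image as f = h ∘ g with g ∘ h = id:

```python
# A split idempotent: f = h ∘ g with g ∘ h = id.  Here f sends each integer
# to its absolute value, which is idempotent on X = {-2, ..., 2}.

X = [-2, -1, 0, 1, 2]
I = [0, 1, 2]                  # the image object through which f splits

f = {x: abs(x) for x in X}     # f : X → X, idempotent: f ∘ f = f
g = {x: abs(x) for x in X}     # g : X → I  (corestriction of f to its image)
h = {i: i for i in I}          # h : I → X  (inclusion)

assert all(f[f[x]] == f[x] for x in X)   # f is idempotent
assert all(h[g[x]] == f[x] for x in X)   # f = h ∘ g
assert all(g[h[i]] == i for i in I)      # g ∘ h = id_I
```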
An automorphism is a morphism that is both an endomorphism and an isomorphism. In every category, the automorphisms of an object always form a group, called the automorphism group of the object.
== Examples ==
For algebraic structures commonly considered in algebra, such as groups, rings, modules, etc., the morphisms are usually the homomorphisms, and the notions of isomorphism, automorphism, endomorphism, epimorphism, and monomorphism are the same as the above defined ones. However, in the case of rings, "epimorphism" is often considered as a synonym of "surjection", although there are ring epimorphisms that are not surjective (e.g., when embedding the integers in the rational numbers).
In the category of topological spaces, the morphisms are the continuous functions and isomorphisms are called homeomorphisms. There are bijections (that is, isomorphisms of sets) that are not homeomorphisms.
In the category of smooth manifolds, the morphisms are the smooth functions and isomorphisms are called diffeomorphisms.
In the category of small categories, the morphisms are functors.
In a functor category, the morphisms are natural transformations.
For more examples, see Category theory.
== See also ==
Normal morphism
Zero morphism
== Notes ==
== References ==
Jacobson, Nathan (2009), Basic algebra, vol. 2 (2nd ed.), Dover, ISBN 978-0-486-47187-7.
Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990). Abstract and Concrete Categories (PDF). John Wiley & Sons. ISBN 0-471-60922-6. Now available as free on-line edition (4.2MB PDF).
== External links ==
"Morphism", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
This is a glossary of arithmetic and diophantine geometry in mathematics, areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry. Much of the theory is in the form of proposed conjectures, which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields—including, as cases of special interest, number fields and finite fields—and over local fields. Of those fields, only the complex numbers are algebraically closed; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V.
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers. Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory.
See also the glossary of number theory terms at Glossary of number theory.
== A ==
abc conjecture
The abc conjecture of Masser and Oesterlé attempts to state as much as possible about repeated prime factors in an equation a + b = c. For example, 3 + 125 = 128, but the prime powers here are exceptional.
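Why this triple is exceptional can be quantified by comparing c with the radical rad(abc), the product of the distinct primes dividing abc; the "quality" log c / log rad(abc) exceeding 1 is what the conjecture asserts is rare. A sketch (the function names are mine):

```python
from math import log

def radical(n):
    """Product of the distinct prime factors of n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def quality(a, b, c):
    """q(a,b,c) = log c / log rad(abc); q > 1 only for exceptional triples."""
    return log(c) / log(radical(a * b * c))

# For 3 + 125 = 128: rad(3 · 125 · 128) = 3 · 5 · 2 = 30, far smaller than c.
assert radical(3 * 125 * 128) == 30
print(round(quality(3, 125, 128), 3))   # ≈ 1.427
```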
Arakelov class group
The Arakelov class group is the analogue of the ideal class group or divisor class group for Arakelov divisors.
Arakelov divisor
An Arakelov divisor (or replete divisor) on a global field is an extension of the concept of divisor or fractional ideal. It is a formal linear combination of places of the field with finite places having integer coefficients and the infinite places having real coefficients.
Arakelov height
The Arakelov height on a projective space over the field of algebraic numbers is a global height function with local contributions coming from Fubini–Study metrics on the Archimedean fields and the usual metric on the non-Archimedean fields.
Arakelov theory
Arakelov theory is an approach to arithmetic geometry that explicitly includes the 'infinite primes'.
Arithmetic of abelian varieties
See main article arithmetic of abelian varieties
Artin L-functions
Artin L-functions are defined for quite general Galois representations. The introduction of étale cohomology in the 1960s meant that Hasse–Weil L-functions could be regarded as Artin L-functions for the Galois representations on l-adic cohomology groups.
== B ==
Bad reduction
See good reduction.
Birch and Swinnerton-Dyer conjecture
The Birch and Swinnerton-Dyer conjecture on elliptic curves postulates a connection between the rank of an elliptic curve and the order of pole of its Hasse–Weil L-function. It has been an important landmark in Diophantine geometry since the mid-1960s, with results such as the Coates–Wiles theorem, Gross–Zagier theorem and Kolyvagin's theorem.
== C ==
Canonical height
The canonical height on an abelian variety is a height function that is a distinguished quadratic form. See Néron–Tate height.
Chabauty's method
Chabauty's method, based on p-adic analytic functions, is a special application but capable of proving cases of the Mordell conjecture for curves whose Jacobian's rank is less than its dimension. It developed ideas from Thoralf Skolem's method for an algebraic torus. (Other older methods for Diophantine problems include Runge's method.)
Coates–Wiles theorem
The Coates–Wiles theorem states that an elliptic curve with complex multiplication by an imaginary quadratic field of class number 1 and positive rank has L-function with a zero at s = 1. This is a special case of the Birch and Swinnerton-Dyer conjecture.
Crystalline cohomology
Crystalline cohomology is a p-adic cohomology theory in characteristic p, introduced by Alexander Grothendieck to fill the gap left by étale cohomology which is deficient in using mod p coefficients in this case. It is one of a number of theories deriving in some way from Dwork's method, and has applications outside purely arithmetical questions.
== D ==
Diagonal forms
Diagonal forms are some of the simplest projective varieties to study from an arithmetic point of view (including the Fermat varieties). Their local zeta-functions are computed in terms of Jacobi sums. Waring's problem is the most classical case.
Diophantine dimension
The Diophantine dimension of a field is the smallest natural number k, if it exists, such that the field is of class Ck: that is, such that any homogeneous polynomial of degree d in N variables has a non-trivial zero whenever N > d^k. Algebraically closed fields are of Diophantine dimension 0; quasi-algebraically closed fields are of dimension 1.
Discriminant of a point
The discriminant of a point refers to two related concepts relative to a point P on an algebraic variety V defined over a number field K: the geometric (logarithmic) discriminant d(P) and the arithmetic discriminant, defined by Vojta. The difference between the two may be compared to the difference between the arithmetic genus of a singular curve and the geometric genus of the desingularisation. The arithmetic genus is larger than the geometric genus, and the height of a point may be bounded in terms of the arithmetic genus. Obtaining similar bounds involving the geometric genus would have significant consequences.
Dwork's method
Bernard Dwork used distinctive methods of p-adic analysis, p-adic algebraic differential equations, Koszul complexes and other techniques that have not all been absorbed into general theories such as crystalline cohomology. He first proved the rationality of local zeta-functions, the initial advance in the direction of the Weil conjectures.
== E ==
Étale cohomology
The search for a Weil cohomology (q.v.) was at least partially fulfilled in the étale cohomology theory of Alexander Grothendieck and Michael Artin. It provided a proof of the functional equation for the local zeta-functions, and was basic in the formulation of the Tate conjecture (q.v.) and numerous other theories.
== F ==
Faltings height
The Faltings height of an elliptic curve or abelian variety defined over a number field is a measure of its complexity introduced by Faltings in his proof of the Mordell conjecture.
Fermat's Last Theorem
Fermat's Last Theorem, the most celebrated conjecture of Diophantine geometry, was proved by Andrew Wiles and Richard Taylor.
Flat cohomology
Flat cohomology is, for the school of Grothendieck, one terminal point of development. It has the disadvantage of being quite hard to compute with. The reason that the flat topology has been considered the 'right' foundational topos for scheme theory goes back to the fact of faithfully-flat descent, the discovery of Grothendieck that the representable functors are sheaves for it (i.e. a very general gluing axiom holds).
Function field analogy
It was realised in the nineteenth century that the ring of integers of a number field has analogies with the affine coordinate ring of an algebraic curve or compact Riemann surface, with a point or more removed corresponding to the 'infinite places' of a number field. This idea is more precisely encoded in the theory that global fields should all be treated on the same basis. The idea goes further. Thus elliptic surfaces over the complex numbers, also, have some quite strict analogies with elliptic curves over number fields.
== G ==
Geometric class field theory
The extension of class field theory-style results on abelian coverings to varieties of dimension at least two is often called geometric class field theory.
Good reduction
Fundamental to local analysis in arithmetic problems is to reduce modulo all prime numbers p or, more generally, prime ideals. In the typical situation this presents little difficulty for almost all p; for example denominators of fractions are tricky, in that reduction modulo a prime in the denominator looks like division by zero, but that rules out only finitely many p per fraction. With a little extra sophistication, homogeneous coordinates allow clearing of denominators by multiplying by a common scalar. For a given, single point one can do this and not leave a common factor p. However singularity theory enters: a non-singular point may become a singular point on reduction modulo p, because the Zariski tangent space can become larger when linear terms reduce to 0 (the geometric formulation shows it is not the fault of a single set of coordinates). Good reduction refers to the reduced variety having the same properties as the original, for example, an algebraic curve having the same genus, or a smooth variety remaining smooth. In general there will be a finite set S of primes for a given variety V, assumed smooth, such that there is otherwise a smooth reduced Vp over Z/pZ. For abelian varieties, good reduction is connected with ramification in the field of division points by the Néron–Ogg–Shafarevich criterion. The theory is subtle, in the sense that the freedom to change variables to try to improve matters is rather unobvious: see Néron model, potential good reduction, Tate curve, semistable abelian variety, semistable elliptic curve, Serre–Tate theorem.
Grothendieck–Katz conjecture
The Grothendieck–Katz p-curvature conjecture applies reduction modulo primes to algebraic differential equations, to derive information on algebraic function solutions. The initial result of this type was Eisenstein's theorem.
== H ==
Hasse principle
The Hasse principle states that solubility for a global field is the same as solubility in all relevant local fields. One of the main objectives of Diophantine geometry is to classify cases where the Hasse principle holds. Generally that is for a large number of variables, when the degree of an equation is held fixed. The Hasse principle is often associated with the success of the Hardy–Littlewood circle method. When the circle method works, it can provide extra, quantitative information such as asymptotic number of solutions. Reducing the number of variables makes the circle method harder; therefore failures of the Hasse principle, for example for cubic forms in small numbers of variables (and in particular for elliptic curves as cubic curves) are at a general level connected with the limitations of the analytic approach.
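The local side of the principle is concrete: a finite search modulo a prime power can already rule out local solutions. For instance, x² + y² = 3z² has no primitive solution modulo 9, hence no nontrivial 3-adic (and so no rational) solution; a brute-force sketch:

```python
from itertools import product

# Search for primitive solutions of x² + y² ≡ 3z² (mod 9), i.e. solutions
# in which not all of x, y, z are divisible by 3.  Finding none rules out
# nontrivial solutions over the 3-adic numbers, and hence over Q.

primitive_solutions = [
    (x, y, z)
    for x, y, z in product(range(9), repeat=3)
    if (x * x + y * y - 3 * z * z) % 9 == 0
    and not (x % 3 == 0 and y % 3 == 0 and z % 3 == 0)
]
assert primitive_solutions == []   # a local obstruction at p = 3
```

Here the Hasse principle is consistent with the (empty) set of rational solutions; genuine failures, such as Selmer's cubic 3x³ + 4y³ + 5z³ = 0, have solutions at every local place yet none over Q.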
Hasse–Weil L-function
A Hasse–Weil L-function, sometimes called a global L-function, is an Euler product formed from local zeta-functions. The properties of such L-functions remain largely in the realm of conjecture, with the proof of the Taniyama–Shimura conjecture being a breakthrough. The Langlands philosophy is largely complementary to the theory of global L-functions.
Height function
A height function in Diophantine geometry quantifies the size of solutions to Diophantine equations.
Hilbertian fields
A Hilbertian field K is one for which the projective spaces over K are not thin sets in the sense of Jean-Pierre Serre. This is a geometric take on Hilbert's irreducibility theorem which shows the rational numbers are Hilbertian. Results are applied to the inverse Galois problem. Thin sets (the French word is mince) are in some sense analogous to the meagre sets (French maigre) of the Baire category theorem.
== I ==
Igusa zeta-function
An Igusa zeta-function, named for Jun-ichi Igusa, is a generating function counting numbers of points on an algebraic variety modulo high powers p^n of a fixed prime number p. General rationality theorems are now known, drawing on methods of mathematical logic.
Infinite descent
Infinite descent was Pierre de Fermat's classical method for Diophantine equations. It became one half of the standard proof of the Mordell–Weil theorem, with the other being an argument with height functions (q.v.). Descent is something like division by two in a group of principal homogeneous spaces (often called 'descents', when written out by equations); in more modern terms in a Galois cohomology group which is to be proved finite. See Selmer group.
Iwasawa theory
Iwasawa theory builds up from analytic number theory and Stickelberger's theorem as a theory of ideal class groups as Galois modules and p-adic L-functions (with roots in the Kummer congruences on Bernoulli numbers). In its early days in the late 1960s it was called Iwasawa's analogue of the Jacobian. The analogy was with the Jacobian variety J of a curve C over a finite field F (qua Picard variety), where the finite field has roots of unity added to make finite field extensions F′. The local zeta-function (q.v.) of C can be recovered from the points J(F′) as a Galois module. In the same way, Iwasawa added p^n-power roots of unity for fixed p and with n → ∞, for his analogue, to a number field K, and considered the inverse limit of class groups, finding a p-adic L-function earlier introduced by Kubota and Leopoldt.
== K ==
K-theory
Algebraic K-theory is on one hand a quite general theory with an abstract algebra flavour, and, on the other hand, implicated in some formulations of arithmetic conjectures. See for example Birch–Tate conjecture, Lichtenbaum conjecture.
== L ==
Lang conjecture
Enrico Bombieri (dimension 2), Serge Lang and Paul Vojta (integral points case) and Piotr Blass have conjectured that algebraic varieties of general type do not have Zariski dense subsets of K-rational points, for K a finitely-generated field. This circle of ideas includes the understanding of analytic hyperbolicity and the Lang conjectures on that, and the Vojta conjectures. An analytically hyperbolic algebraic variety V over the complex numbers is one such that no holomorphic mapping from the whole complex plane to it exists, that is not constant. Examples include compact Riemann surfaces of genus g > 1. Lang conjectured that V is analytically hyperbolic if and only if all subvarieties are of general type.
Linear torus
A linear torus is a geometrically irreducible Zariski-closed subgroup of an affine torus (product of multiplicative groups).
Local zeta-function
A local zeta-function is a generating function for the number of points on an algebraic variety V over a finite field F, over the finite field extensions of F. According to the Weil conjectures (q.v.) these functions, for non-singular varieties, exhibit properties closely analogous to the Riemann zeta-function, including the Riemann hypothesis.
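The simplest instance can be computed directly: for the projective line over F_q one has N_n = q^n + 1 points over F_{q^n}, and Z(t) = exp(Σ N_n tⁿ/n) sums to the rational function 1/((1 − t)(1 − qt)). The sketch below (helper name mine) verifies this identity on truncated power series with exact rational arithmetic:

```python
from fractions import Fraction

def zeta_coeffs(q, order):
    """Coefficients of Z(t) = exp(Σ_{n≥1} N_n t^n / n), N_n = q^n + 1,
    as a power series truncated at t^order."""
    # log Z as a truncated series (constant term 0)
    logz = [Fraction(0)] + [Fraction(q**n + 1, n) for n in range(1, order + 1)]
    # Exponentiate via Z' = (log Z)' · Z, matching coefficients of t^m:
    # (m+1) z_{m+1} = Σ_{k=0}^{m} (k+1) logz_{k+1} z_{m-k}
    z = [Fraction(1)] + [Fraction(0)] * order
    for m in range(order):
        z[m + 1] = sum((k + 1) * logz[k + 1] * z[m - k]
                       for k in range(m + 1)) / (m + 1)
    return z

q, order = 5, 6
coeffs = zeta_coeffs(q, order)
# The coefficient of t^m in 1/((1-t)(1-qt)) is 1 + q + ... + q^m.
for m in range(order + 1):
    assert coeffs[m] == sum(q**i for i in range(m + 1))
```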
== M ==
Manin–Mumford conjecture
The Manin–Mumford conjecture, now proved by Michel Raynaud, states that a curve C in its Jacobian variety J can only contain a finite number of points that are of finite order in J, unless C = J.
Mordell conjecture
The Mordell conjecture is now the Faltings theorem, and states that a curve of genus at least two has only finitely many rational points. The Uniformity conjecture states that there should be a uniform bound on the number of such points, depending only on the genus and the field of definition.
Mordell–Lang conjecture
The Mordell–Lang conjecture, now proved by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings, is a conjecture of Lang unifying the Mordell conjecture and Manin–Mumford conjecture in an abelian variety or semiabelian variety.
Mordell–Weil theorem
The Mordell–Weil theorem is a foundational result stating that for an abelian variety A over a number field K the group A(K) is a finitely-generated abelian group. This was proved initially for number fields K, but extends to all finitely-generated fields.
Mordellic variety
A Mordellic variety is an algebraic variety which has only finitely many points in any finitely generated field.
== N ==
Naive height
The naive height or classical height of a vector of rational numbers is the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define height on a point in projective space over Q, or of a polynomial, regarded as a vector of coefficients, or of an algebraic number, from the height of its minimal polynomial.
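This recipe for a point of projective space over Q can be sketched directly (the function name is mine): clear denominators, divide out common factors, and take the maximum absolute value.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def naive_height(coords):
    """Naive height of (x_0 : ... : x_n) in P^n(Q): scale to a coprime
    integer vector and take the maximum absolute value."""
    fracs = [Fraction(c) for c in coords]
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (f.denominator for f in fracs), 1)
    ints = [int(f * lcm) for f in fracs]
    g = reduce(gcd, (abs(i) for i in ints))
    return max(abs(i) // g for i in ints)

# (1/2 : 3/5 : 1) scales to the coprime integer vector (5, 6, 10)
assert naive_height(["1/2", "3/5", "1"]) == 10
```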
Néron symbol
The Néron symbol is a bimultiplicative pairing between divisors and algebraic cycles on an Abelian variety used in Néron's formulation of the Néron–Tate height as a sum of local contributions. The global Néron symbol, which is the sum of the local symbols, is just the negative of the height pairing.
Néron–Tate height
The Néron–Tate height (also often referred to as the canonical height) on an abelian variety A is a height function (q.v.) that is essentially intrinsic, and an exact quadratic form, rather than approximately quadratic with respect to the addition on A as provided by the general theory of heights. It can be defined from a general height by a limiting process; there are also formulae, in the sense that it is a sum of local contributions.
Nevanlinna invariant
The Nevanlinna invariant of an ample divisor D on a normal projective variety X is a real number which describes the rate of growth of the number of rational points on the variety with respect to the embedding defined by the divisor. It has similar formal properties to the abscissa of convergence of the height zeta function and it is conjectured that they are essentially the same.
== O ==
Ordinary reduction
An Abelian variety A of dimension d has ordinary reduction at a prime p if it has good reduction at p and in addition the p-torsion has rank d.
== Q ==
Quasi-algebraic closure
The topic of quasi-algebraic closure, i.e. solubility guaranteed by a number of variables polynomial in the degree of an equation, grew out of studies of the Brauer group and the Chevalley–Warning theorem. It stalled in the face of counterexamples; but see Ax–Kochen theorem from mathematical logic.
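The Chevalley–Warning theorem itself is elementary enough to check by brute force: over F_p, a polynomial of degree d in n > d variables has a number of zeros divisible by p, so a homogeneous one (which has the trivial zero) must have a non-trivial zero. A sketch for x² + y² + z² over F_3:

```python
from itertools import product

# Chevalley–Warning check: x² + y² + z² has degree 2 < 3 variables over F_3,
# so its number of zeros in F_3³ is divisible by 3.

p = 3
zeros = sum(
    1
    for x, y, z in product(range(p), repeat=3)
    if (x * x + y * y + z * z) % p == 0
)
assert zeros % p == 0 and zeros > 1   # trivial zero plus non-trivial ones
```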
== R ==
Reduction modulo a prime number or ideal
See good reduction.
Replete ideal
A replete ideal in a number field K is a formal product of a fractional ideal of K and a vector of positive real numbers with components indexed by the infinite places of K. A replete divisor is an Arakelov divisor.
== S ==
Sato–Tate conjecture
The Sato–Tate conjecture describes the distribution of Frobenius elements in the Tate modules of the elliptic curves over finite fields obtained from reducing a given elliptic curve over the rationals. Mikio Sato and, independently, John Tate suggested it around 1960. It is a prototype for Galois representations in general.
Skolem's method
See Chabauty's method.
Special set
The special set in an algebraic variety is the subset in which one might expect to find many rational points. The precise definition varies according to context. One definition is the Zariski closure of the union of images of algebraic groups under non-trivial rational maps; alternatively one may take images of abelian varieties; another definition is the union of all subvarieties that are not of general type. For abelian varieties the definition would be the union of all translates of proper abelian subvarieties. For a complex variety, the holomorphic special set is the Zariski closure of the images of all non-constant holomorphic maps from C. Lang conjectured that the analytic and algebraic special sets are equal.
Subspace theorem
Schmidt's subspace theorem shows that points of small height in projective space lie in a finite number of hyperplanes. A quantitative form of the theorem, bounding the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields. The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and solution of the S-unit equation.
== T ==
Tamagawa numbers
The direct Tamagawa number definition works well only for linear algebraic groups. There the Weil conjecture on Tamagawa numbers was eventually proved. For abelian varieties, and in particular the Birch–Swinnerton-Dyer conjecture (q.v.), the Tamagawa number approach to a local–global principle fails on a direct attempt, though it has had heuristic value over many years. Now a sophisticated equivariant Tamagawa number conjecture is a major research problem.
Tate conjecture
The Tate conjecture (John Tate, 1963) provided an analogue to the Hodge conjecture, also on algebraic cycles, but well within arithmetic geometry. It also gave, for elliptic surfaces, an analogue of the Birch–Swinnerton-Dyer conjecture (q.v.), leading quickly to a clarification of the latter and a recognition of its importance.
Tate curve
The Tate curve is a particular elliptic curve over the p-adic numbers introduced by John Tate to study bad reduction (see good reduction).
Tsen rank
The Tsen rank of a field, named for C. C. Tsen who introduced their study in 1936, is the smallest natural number i, if it exists, such that the field is of class Ti: that is, such that any system of polynomials with no constant term, of degrees dj, in n variables has a non-trivial zero whenever n > Σ dj^i. Algebraically closed fields are of Tsen rank zero. The Tsen rank is greater than or equal to the Diophantine dimension, but it is not known whether they are equal except in the case of rank zero.
== U ==
Uniformity conjecture
The uniformity conjecture states that for any number field K and g ≥ 2, there is a uniform bound B(g, K) on the number of K-rational points on any curve of genus g. The conjecture would follow from the Bombieri–Lang conjecture.
Unlikely intersection
An unlikely intersection is an algebraic subgroup intersecting a subvariety of a torus or abelian variety in a set of unusually large dimension, such as is involved in the Mordell–Lang conjecture.
== V ==
Vojta conjecture
The Vojta conjecture is a complex of conjectures by Paul Vojta, making analogies between Diophantine approximation and Nevanlinna theory.
== W ==
Weights
The yoga of weights is a formulation by Alexander Grothendieck of analogies between Hodge theory and l-adic cohomology.
Weil cohomology
The initial idea, later somewhat modified, for proving the Weil conjectures (q.v.), was to construct a cohomology theory applying to algebraic varieties over finite fields that would both be as good as singular homology at detecting topological structure, and have Frobenius mappings acting in such a way that the Lefschetz fixed-point theorem could be applied to the counting in local zeta-functions. For later history see motive (algebraic geometry), motivic cohomology.
Weil conjectures
The Weil conjectures were three highly influential conjectures of André Weil, made public around 1949, on local zeta-functions. The proof was completed in 1973. Those being proved, there remain extensions of the Chevalley–Warning theorem congruence, which comes from an elementary method, and improvements of Weil bounds, e.g. better estimates for curves of the number of points than come from Weil's basic theorem of 1940. The latter turn out to be of interest for Algebraic geometry codes.
Weil distributions on algebraic varieties
André Weil proposed a theory in the 1920s and 1930s on prime ideal decomposition of algebraic numbers in coordinates of points on algebraic varieties. It has remained somewhat under-developed.
Weil function
A Weil function on an algebraic variety is a real-valued function defined off some Cartier divisor which generalises the concept of Green's function in Arakelov theory. They are used in the construction of the local components of the Néron–Tate height.
Weil height machine
The Weil height machine is an effective procedure for assigning a height function to any divisor on smooth projective variety over a number field (or to Cartier divisors on non-smooth varieties).
== See also ==
Glossary of number theory
Arithmetic topology
Arithmetic dynamics
== References ==
Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034.
Hindry, Marc; Silverman, Joseph H. (2000). Diophantine Geometry: An Introduction. Graduate Texts in Mathematics. Vol. 201. ISBN 0-387-98981-1. Zbl 0948.11023.
Lang, Serge (1988). Introduction to Arakelov theory. New York: Springer-Verlag. ISBN 0-387-96793-1. MR 0969124. Zbl 0667.14001.
Lang, Serge (1997). Survey of Diophantine Geometry. Springer-Verlag. ISBN 3-540-61223-8. Zbl 0869.11051.
Neukirch, Jürgen (1999). Algebraic Number Theory. Grundlehren der Mathematischen Wissenschaften. Vol. 322. Springer-Verlag. ISBN 978-3-540-65399-8. Zbl 0956.11021.
== Further reading ==
Dino Lorenzini (1996), An invitation to arithmetic geometry, AMS Bookstore, ISBN 978-0-8218-0267-0
Sora is a text-to-video model developed by OpenAI. The model generates short video clips based on user prompts, and can also extend existing short videos. Sora was released publicly for ChatGPT Plus and ChatGPT Pro users in December 2024.
== History ==
Several other text-to-video generating models had been created prior to Sora, including Meta's Make-A-Video, Runway's Gen-2, and Google's Lumiere, the last of which, as of February 2024, is also still in its research phase. OpenAI, the company behind Sora, had released DALL·E 3, the third of its DALL-E text-to-image models, in September 2023.
The team that developed Sora named it after the Japanese word for sky to signify its "limitless creative potential". On February 15, 2024, OpenAI first previewed Sora by releasing multiple clips of high-definition videos that it created, including an SUV driving down a mountain road, an animation of a "short fluffy monster" next to a candle, two people walking through Tokyo in the snow, and fake historical footage of the California gold rush, and stated that it was able to generate videos up to one minute long. The company then shared a technical report, which highlighted the methods used to train the model. OpenAI CEO Sam Altman also posted a series of tweets, responding to Twitter users' prompts with Sora-generated videos of the prompts.
In November 2024, an API key for Sora access was leaked by a group of testers on Hugging Face, who posted a manifesto stating that they were protesting that Sora was used for "art washing". OpenAI revoked all access three hours after the leak was made public, and gave a statement that "hundreds of artists" have shaped the development, and that "participation is voluntary."
As of December 9, 2024, OpenAI has made Sora available to the public, for ChatGPT Pro and ChatGPT Plus users. Prior to this, the company had provided limited access to a small "red team", including experts in misinformation and bias, to perform adversarial testing on the model. The company also shared Sora with a small group of creative professionals, including video makers and artists, to seek feedback on its usefulness in creative fields. In February 2025, OpenAI announced plans to integrate Sora into ChatGPT by letting users generate Sora videos from the chatbot.
== Capabilities and limitations ==
The technology behind Sora is an adaptation of the technology behind DALL-E 3. According to OpenAI, Sora is a diffusion transformer – a denoising latent diffusion model with one Transformer as the denoiser. A video is generated in latent space by denoising 3D "patches", then transformed to standard space by a video decompressor. Re-captioning is used to augment training data, by using a video-to-text model to create detailed captions on videos.
OpenAI trained the model using publicly available videos as well as copyrighted videos licensed for the purpose, but did not reveal the number or the exact source of the videos. Upon its release, OpenAI acknowledged some of Sora's shortcomings, including its struggling to simulate complex physics, to understand causality, and to differentiate left from right. One example shows a group of wolf pups seemingly multiplying and converging, creating a hard-to-follow scenario. OpenAI also stated that, in adherence to the company's existing safety practices, Sora will restrict text prompts for sexual, violent, hateful, or celebrity imagery, as well as content featuring pre-existing intellectual property.
Tim Brooks, a researcher on Sora, stated that the model figured out how to create 3D graphics from its dataset alone, while Bill Peebles, also a Sora researcher, said that the model automatically created different video angles without being prompted. According to OpenAI, Sora-generated videos are tagged with C2PA metadata to indicate that they were AI-generated.
== Reception ==
Will Douglas Heaven of the MIT Technology Review called the demonstration videos "impressive", but noted that they must have been cherry-picked and may not be representative of Sora's typical output. American academic Oren Etzioni expressed concerns over the technology's ability to create online disinformation for political campaigns. For Wired, Steven Levy similarly wrote that it had the potential to become "a misinformation train wreck" and opined that its preview clips were "impressive" but "not perfect" and that it "show[ed] an emergent grasp of cinematic grammar" due to its unprompted shot changes. Levy added, "[i]t will be a very long time, if ever, before text-to-video threatens actual filmmaking." Lisa Lacy of CNET called its example videos "remarkably realistic – except perhaps when a human face appears close up or when sea creatures are swimming".
Filmmaker Tyler Perry announced he would be putting a planned $800 million expansion of his Atlanta studio on hold, expressing concern about Sora's potential impact on the film industry.
== See also ==
VideoPoet – Text-to-video model by Google
Dream Machine (text-to-video model)
== References ==
== External links ==
Official website
Veo is a text-to-video model developed by Google DeepMind and announced in May 2024. As a generative AI model, it creates videos based on user prompts. Veo 3, released in May 2025, can also generate accompanying audio.
== Development ==
In May 2024, a multimodal video generation model called Veo was announced at Google I/O 2024. Google claimed that it could generate 1080p videos beyond a minute long. In December 2024, Google released Veo 2, available via VideoFX. It supports 4K resolution video generation, and has an improved understanding of physics. In April 2025, Google announced that Veo 2 became available for advanced users on Gemini App. In May 2025, Google released Veo 3, which not only generates videos but also creates synchronized audio — including dialogue, sound effects, and ambient noise — to match the visuals. Google also announced Flow, a video-creation tool powered by Veo and Imagen.
A key innovation of the May 2025 release of Veo 3 was that it generated music and voice matched to the video. Google DeepMind CEO Demis Hassabis described the release as the moment when AI video generation left the era of the silent film.
== Reactions ==
A reporter for Gizmodo reacted to the release of Veo 3 by observing that users directed the model to generate low-quality content, such as man on the street interviews or haul videos of people unboxing products. Another media commentator reported that the tool tended to repeat the same joke in response to different prompts.
Commentators speculated that Google had trained the service on YouTube videos or Reddit posts. Google itself had not stated the source of its training content.
== References ==
== External links ==
Official website
Graph neural networks (GNNs) are specialized artificial neural networks designed for tasks whose inputs are graphs.
One prominent example is molecular drug design. Each input sample is a graph representation of a molecule, where atoms form the nodes and chemical bonds between atoms form the edges. In addition to the graph representation, the input also includes known chemical properties for each of the atoms. Dataset samples may thus differ in length, reflecting the varying numbers of atoms in molecules, and the varying number of bonds between them. The task is to predict the efficacy of a given molecule for a specific medical application, like eliminating E. coli bacteria.
The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed that implement different flavors of message passing, beginning with recursive and convolutional constructive approaches. As of 2022, it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or whether instead every GNN can be built on message passing over suitably defined graphs.
In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs. A convolutional neural network layer, in the context of computer vision, can be considered a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be considered a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.
Relevant application domains for GNNs include natural language processing, social networks, citation networks, molecular biology, chemistry, physics and NP-hard combinatorial optimization problems.
Open source libraries implementing GNNs include PyTorch Geometric (PyTorch), TensorFlow GNN (TensorFlow), Deep Graph Library (framework agnostic), jraph (Google JAX), and GraphNeuralNetworks.jl/GeometricFlux.jl (Julia, Flux).
== Architecture ==
The architecture of a generic GNN implements the following fundamental layers:
Permutation equivariant: a permutation equivariant layer maps a representation of a graph into an updated representation of the same graph. In the literature, permutation equivariant layers are implemented via pairwise message passing between graph nodes. Intuitively, in a message passing layer, nodes update their representations by aggregating the messages received from their immediate neighbours. As such, each message passing layer increases the receptive field of the GNN by one hop.
Local pooling: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in convolutional neural networks. Examples include k-nearest neighbours pooling, top-k pooling, and self-attention pooling.
Global pooling: a global pooling layer, also known as readout layer, provides fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output. Examples include element-wise sum, mean or maximum.
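The permutation-invariance requirement for global pooling can be illustrated concretely. The following sketch (plain Python, with hypothetical feature values) shows that a sum readout yields the same graph representation under any reordering of the nodes:

```python
def sum_readout(node_features):
    """Global sum pooling: element-wise sum of node feature vectors."""
    dim = len(node_features[0])
    return [sum(f[i] for f in node_features) for i in range(dim)]

# Two orderings of the same three nodes (hypothetical 2-d features).
features = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
permuted = [features[2], features[0], features[1]]

# The readout is identical, so downstream layers see the same graph vector.
assert sum_readout(features) == sum_readout(permuted)
```

A mean or element-wise maximum readout would satisfy the same property; a concatenation in node order would not, which is why it is not a valid global pooling operator.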
It has been demonstrated that GNNs cannot be more expressive than the Weisfeiler–Leman graph isomorphism test. In practice, this means that there exist different graph structures (e.g., molecules with the same atoms but different bonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimensional structures such as simplicial complexes can be designed. As of 2022, whether or not future architectures will overcome the message passing primitive is an open research question.
== Message passing layers ==
Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs).
Let $G=(V,E)$ be a graph, where $V$ is the node set and $E$ is the edge set. Let $N_u$ be the neighbourhood of some node $u\in V$. Additionally, let $\mathbf{x}_u$ be the features of node $u\in V$, and $\mathbf{e}_{uv}$ be the features of edge $(u,v)\in E$. An MPNN layer can be expressed as follows:

$$\mathbf{h}_u = \phi\left(\mathbf{x}_u, \bigoplus_{v\in N_u} \psi(\mathbf{x}_u, \mathbf{x}_v, \mathbf{e}_{uv})\right)$$
where $\phi$ and $\psi$ are differentiable functions (e.g., artificial neural networks), and $\bigoplus$ is a permutation-invariant aggregation operator that can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular, $\phi$ and $\psi$ are referred to as the update and message functions, respectively. Intuitively, in an MPNN computational block, graph nodes update their representations by aggregating the messages received from their neighbours.
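As a toy illustration of the update rule above, the following sketch implements one message-passing step in plain Python, using sum as the aggregator ⨁ and deliberately simple hand-picked functions in place of trainable networks for ψ and φ (edge features are omitted for brevity):

```python
def mpnn_layer(x, edges, psi, phi):
    """One message-passing step.

    x: dict node -> scalar feature; edges: list of undirected (u, v) pairs.
    psi(x_u, x_v): message function; phi(x_u, aggregated): update function.
    """
    # Build neighbourhoods N_u from the edge list.
    neighbours = {u: [] for u in x}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    # Each node aggregates (sums) messages from its immediate neighbours.
    return {u: phi(x[u], sum(psi(x[u], x[v]) for v in neighbours[u]))
            for u in x}

# Toy graph: path 0 - 1 - 2, with scalar node features.
x = {0: 1.0, 1: 2.0, 2: 3.0}
h = mpnn_layer(x, [(0, 1), (1, 2)],
               psi=lambda xu, xv: xv,          # message = neighbour's feature
               phi=lambda xu, agg: xu + agg)   # update = own feature + sum
# The middle node hears from both neighbours: h[1] = 2 + (1 + 3) = 6
```

After one such step, each node's representation depends only on its one-hop neighbourhood, matching the "one hop per layer" growth of the receptive field described above.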
The outputs of one or more MPNN layers are node representations $\mathbf{h}_u$ for each node $u\in V$ in the graph. Node representations can be employed for any downstream task, such as node/graph classification or edge prediction.
Graph nodes in an MPNN update their representation by aggregating information from their immediate neighbours. As such, stacking $n$ MPNN layers means that one node will be able to communicate with nodes that are at most $n$ "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph diameter. However, stacking many MPNN layers may cause issues such as oversmoothing and oversquashing. Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections (as in residual neural networks), gated update rules and jumping knowledge can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., considering the graph as a complete graph, can mitigate oversquashing in problems where long-range dependencies are required.
Other "flavours" of MPNN have been developed in the literature, such as graph convolutional networks and graph attention networks, whose definitions can be expressed in terms of the MPNN formalism.
=== Graph convolutional network ===
The graph convolutional network (GCN) was first introduced by Thomas Kipf and Max Welling in 2017.
A GCN layer defines a first-order approximation of a localized spectral filter on graphs. GCNs can be understood as a generalization of convolutional neural networks to graph-structured data.
The formal expression of a GCN layer reads as follows:
$$\mathbf{H} = \sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta}\right)$$
where $\mathbf{H}$ is the matrix of node representations $\mathbf{h}_u$, $\mathbf{X}$ is the matrix of node features $\mathbf{x}_u$, $\sigma(\cdot)$ is an activation function (e.g., ReLU), $\tilde{\mathbf{A}}$ is the graph adjacency matrix with the addition of self-loops, $\tilde{\mathbf{D}}$ is the graph degree matrix with the addition of self-loops, and $\mathbf{\Theta}$ is a matrix of trainable parameters.
In particular, let $\mathbf{A}$ be the graph adjacency matrix: then, one can define $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ and $\tilde{\mathbf{D}}_{ii} = \sum_{j\in V} \tilde{A}_{ij}$, where $\mathbf{I}$ denotes the identity matrix. This normalization ensures that the eigenvalues of $\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}$ are bounded in the range $[0,1]$, avoiding numerical instabilities and exploding/vanishing gradients.
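The construction above can be checked numerically. This minimal sketch (plain Python, no external libraries) builds Ã = A + I, the degree matrix D̃, and the normalized matrix D̃^(-1/2) Ã D̃^(-1/2) for a two-node graph:

```python
def gcn_normalize(adj):
    """Compute D̃^(-1/2) Ã D̃^(-1/2) with self-loops added (Ã = A + I)."""
    n = len(adj)
    a_tilde = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
               for i in range(n)]
    # D̃_ii = Σ_j Ã_ij, stored directly as the inverse square roots.
    d_inv_sqrt = [sum(row) ** -0.5 for row in a_tilde]
    return [[d_inv_sqrt[i] * a_tilde[i][j] * d_inv_sqrt[j] for j in range(n)]
            for i in range(n)]

# Two connected nodes: Ã = [[1, 1], [1, 1]], D̃ = diag(2, 2).
norm = gcn_normalize([[0, 1], [1, 0]])
# Every entry is 1/(√2 · √2) = 0.5, so each row sums to exactly 1 here.
```

The symmetric normalization keeps the layer's spectrum bounded, which is the point of the eigenvalue remark above; an unnormalized Ã would scale features by node degree at every layer.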
A limitation of GCNs is that they do not allow multidimensional edge features $\mathbf{e}_{uv}$. It is however possible to associate scalar weights $w_{uv}$ to each edge by imposing $A_{uv} = w_{uv}$, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.
=== Graph attention network ===
The graph attention network (GAT) was introduced by Petar Veličković et al. in 2018.
A graph attention network combines a GNN with an attention layer. The attention layer lets each node weight the information it receives, focusing on the most relevant parts of the data rather than treating all of it equally.
A multi-head GAT layer can be expressed as follows:
$$\mathbf{h}_u = \overset{K}{\underset{k=1}{\Big\Vert}} \sigma\left(\sum_{v\in N_u} \alpha_{uv} \mathbf{W}^k \mathbf{x}_v\right)$$
where $K$ is the number of attention heads, $\Big\Vert$ denotes vector concatenation, $\sigma(\cdot)$ is an activation function (e.g., ReLU), $\alpha_{uv}$ are attention coefficients, and $\mathbf{W}^k$ is a matrix of trainable parameters for the $k$-th attention head.
For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as:
$$\mathbf{h}_u = \sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{v\in N_u} \alpha_{uv} \mathbf{W}^k \mathbf{x}_v\right)$$
Attention in machine learning is a technique that mimics cognitive attention. In the context of learning on graphs, the attention coefficient $\alpha_{uv}$ measures how important node $u\in V$ is to node $v\in V$.
Normalized attention coefficients are computed as follows:
$$\alpha_{uv} = \frac{\exp\left({\text{LeakyReLU}}\left(\mathbf{a}^T [\mathbf{W}\mathbf{x}_u \Vert \mathbf{W}\mathbf{x}_v \Vert \mathbf{e}_{uv}]\right)\right)}{\sum_{z\in N_u} \exp\left({\text{LeakyReLU}}\left(\mathbf{a}^T [\mathbf{W}\mathbf{x}_u \Vert \mathbf{W}\mathbf{x}_z \Vert \mathbf{e}_{uz}]\right)\right)}$$
where $\mathbf{a}$ is a vector of learnable weights, $\cdot^T$ indicates transposition, $\mathbf{e}_{uv}$ are the edge features (if present), and $\text{LeakyReLU}$ is a modified ReLU activation function. Attention coefficients are normalized to make them easily comparable across different nodes.
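The normalization above is a softmax over LeakyReLU-transformed scores. This sketch (plain Python; the raw scores are hypothetical stand-ins for the learned products aᵀ[Wx_u ‖ Wx_v]) shows that the resulting coefficients sum to one and preserve the ordering of the scores:

```python
import math

def leaky_relu(x, slope=0.2):
    """LeakyReLU: identity for positive inputs, small slope for negative."""
    return x if x > 0 else slope * x

def attention_coefficients(scores):
    """Normalize raw attention scores over a neighbourhood with softmax."""
    exps = [math.exp(leaky_relu(s)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three neighbours of a node u.
alphas = attention_coefficients([1.0, 0.0, -1.0])
assert abs(sum(alphas) - 1.0) < 1e-9      # coefficients sum to one
assert alphas[0] > alphas[1] > alphas[2]  # higher score -> more attention
```

Because the denominator sums over the neighbourhood of $u$, each node's coefficients form a probability distribution over its own neighbours, which is what makes them comparable across nodes of different degree.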
A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights $w_{uv}$.
=== Gated graph sequence neural network ===
The gated graph sequence neural network (GGS-NN) was introduced by Yujia Li et al. in 2015. The GGS-NN extends the GNN formulation by Scarselli et al. to output sequences. The message passing framework is implemented as an update rule to a gated recurrent unit (GRU) cell.
A GGS-NN can be expressed as follows:
$$\mathbf{h}_u^{(0)} = \mathbf{x}_u \,\Vert\, \mathbf{0}$$

$$\mathbf{m}_u^{(l+1)} = \sum_{v\in N_u} \mathbf{\Theta} \mathbf{h}_v$$

$$\mathbf{h}_u^{(l+1)} = \text{GRU}(\mathbf{m}_u^{(l+1)}, \mathbf{h}_u^{(l)})$$
where $\Vert$ denotes vector concatenation, $\mathbf{0}$ is a vector of zeros, $\mathbf{\Theta}$ is a matrix of learnable parameters, $\text{GRU}$ is a GRU cell, and $l$ denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node features $\mathbf{x}_u^{(0)}$ are zero-padded up to the hidden state dimension of the GRU cell. The same GRU cell is used for updating representations for each node.
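The recurrence above can be sketched with scalar states. The GRU cell below uses hypothetical untrained scalar weights (shared across gates purely to keep the example short), so it illustrates the data flow of one GGS-NN step rather than a trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_update(m, h, w=1.0, u=1.0):
    """Scalar GRU cell: new hidden state from message m and old state h.

    w and u are hypothetical input/recurrent weights, shared by all
    three gates only to keep the sketch small.
    """
    z = sigmoid(w * m + u * h)               # update gate
    r = sigmoid(w * m + u * h)               # reset gate
    h_cand = math.tanh(w * m + u * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand

def ggsnn_step(h, edges, theta=0.5):
    """One GGS-NN step: m_u = Σ_{v in N_u} Θ h_v, then h_u = GRU(m_u, h_u)."""
    neighbours = {u: [] for u in h}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    messages = {u: sum(theta * h[v] for v in neighbours[u]) for u in h}
    # The same GRU cell is applied to every node, as in the text above.
    return {u: gru_update(messages[u], h[u]) for u in h}

h = ggsnn_step({0: 0.5, 1: -0.5}, [(0, 1)])
```

The tanh and sigmoid gates keep each updated state bounded, and the node representations play exactly the role of the GRU hidden states described above.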
== Local pooling layers ==
Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed. In each case, the input is the initial graph, represented by a matrix $\mathbf{X}$ of node features and the graph adjacency matrix $\mathbf{A}$; the output is the new matrix $\mathbf{X}'$ of node features and the new graph adjacency matrix $\mathbf{A}'$.
=== Top-k pooling ===
We first set

$$\mathbf{y} = \frac{\mathbf{X}\mathbf{p}}{\Vert\mathbf{p}\Vert}$$

where $\mathbf{p}$ is a learnable projection vector. The projection vector $\mathbf{p}$ computes a scalar projection value for each graph node.
The top-k pooling layer can then be formalised as follows:
$$\mathbf{X}' = (\mathbf{X} \odot \text{sigmoid}(\mathbf{y}))_{\mathbf{i}}$$

$$\mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}$$

where $\mathbf{i} = \text{top}_k(\mathbf{y})$ is the subset of nodes with the top-k highest projection scores, $\odot$ denotes element-wise matrix multiplication, and $\text{sigmoid}(\cdot)$ is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix $\mathbf{A}'$. The $\text{sigmoid}(\cdot)$ operation makes the projection vector $\mathbf{p}$ trainable by backpropagation, which would otherwise produce discrete outputs.
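The two equations above can be traced in a small example. This sketch (plain Python; the feature values and the projection vector p are hypothetical) keeps the k nodes with the highest projection scores and gates their features with the sigmoid of those scores:

```python
import math

def top_k_pooling(x, p, k):
    """Top-k pooling on scalar projections y = X·p / ‖p‖.

    x: list of node feature vectors; p: projection vector; k: nodes to keep.
    Returns the kept node indices and their sigmoid-gated features.
    """
    norm_p = math.sqrt(sum(pi * pi for pi in p))
    y = [sum(xi * pi for xi, pi in zip(row, p)) / norm_p for row in x]
    # i = top_k(y): indices of the k highest projection scores.
    kept = sorted(range(len(x)), key=lambda i: y[i], reverse=True)[:k]
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    gated = {i: [v * sig(y[i]) for v in x[i]] for i in kept}
    return kept, gated

# Three nodes with hypothetical 2-d features; keep the top 2 by projection.
x = [[1.0, 0.0], [3.0, 0.0], [2.0, 0.0]]
kept, gated = top_k_pooling(x, p=[1.0, 0.0], k=2)
# Nodes 1 and 2 have the highest projections and are retained.
```

Restricting the adjacency matrix to the kept rows and columns (the $\mathbf{A}_{\mathbf{i},\mathbf{i}}$ step) would complete the layer; it is omitted here since the example has no edges.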
=== Self-attention pooling ===
We first set

$$\mathbf{y} = \text{GNN}(\mathbf{X}, \mathbf{A})$$

where $\text{GNN}$ is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN).
The Self-attention pooling layer can then be formalised as follows:
$$\mathbf{X}' = (\mathbf{X} \odot \mathbf{y})_{\mathbf{i}}$$

$$\mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}$$

where $\mathbf{i} = \text{top}_k(\mathbf{y})$ is the subset of nodes with the top-k highest projection scores, and $\odot$ denotes element-wise matrix multiplication.
The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Differently from top-k pooling, the self-attention scores computed in self-attention pooling account both for the graph features and the graph topology.
== Heterophilic Graph Learning ==
The homophily principle, i.e., that nodes with the same labels or similar attributes are more likely to be connected, has commonly been believed to be the main reason for the superiority of graph neural networks (GNNs) over traditional neural networks (NNs) on graph-structured data, especially on node-level tasks. However, recent work has identified a non-trivial set of datasets where GNNs perform unsatisfactorily compared to NNs. Heterophily, i.e., low homophily, has been considered the main cause of this empirical observation. Researchers have begun to revisit and re-evaluate most existing graph models in the heterophily scenario across various kinds of graphs, e.g., heterogeneous graphs, temporal graphs and hypergraphs. Moreover, numerous graph-related applications have been found to be closely related to the heterophily problem, e.g. graph fraud/anomaly detection, graph adversarial attacks and robustness, privacy, federated learning and point cloud segmentation, graph clustering, recommender systems, generative models, link prediction, and graph classification and coloring. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue in graph learning.
== Applications ==
=== Protein folding ===
Graph neural networks are one of the main building blocks of AlphaFold, an artificial intelligence program developed by Google DeepMind for solving the protein folding problem in biology. AlphaFold achieved first place in several CASP competitions.
=== Social networks ===
Social networks are a major application domain for GNNs due to their natural representation as social graphs. GNNs are used to develop recommender systems based on both social relations and item relations.
=== Combinatorial optimization ===
GNNs are used as fundamental building blocks for several combinatorial optimization algorithms. Examples include computing shortest paths or Eulerian circuits for a given graph, deriving chip placements superior or competitive to handcrafted human solutions, and improving expert-designed branching rules in branch and bound.
=== Cyber security ===
When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes and within paths to detect malicious processes, or on the edge level to detect lateral movement.
=== Water distribution networks ===
Water distribution systems can be modelled as graphs, making them a natural application for GNNs. This kind of algorithm has been applied to water demand forecasting, interconnecting District Measuring Areas to improve forecasting capacity. Another application of GNNs in water distribution modelling is the development of metamodels.
=== Computer Vision ===
To represent an image as a graph structure, the image is first divided into multiple patches, each of which is treated as a node in the graph. Edges are then formed by connecting each node to its nearest neighbors based on spatial or feature similarity. This graph-based representation enables the application of graph learning models to visual tasks. The relational structure helps to enhance feature extraction and improve performance on image understanding.
=== Text and NLP ===
Graph-based representation of text helps to capture deeper semantic relationships between words. Many studies have used graph networks to enhance performance in various text processing tasks such as text classification, question answering, Neural Machine Translation (NMT), event extraction, fact verification, etc.
== References ==
== External links ==
A Gentle Introduction to Graph Neural Networks
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
== Training ==
Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
== History ==
=== Early work ===
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
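The least-squares fit described above reduces, for a single input variable, to the familiar closed-form slope and intercept; the sketch below recovers an exact linear relationship (a toy illustration of the objective, not the iterative weight adjustment used in neural network training):

```python
def least_squares_fit(xs, ys):
    """Closed-form 1-D least squares: minimize Σ (w*x + b - y)²."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - w * mx  # intercept follows from the means
    return w, b

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
w, b = least_squares_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

A linear network with one output node computes exactly this weighted sum of inputs, and minimizing the mean squared error over the data yields the same solution.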
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in artificial neural networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI", fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
=== Deep learning breakthroughs in the 1960s and 1970s ===
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.
Deep learning architectures for convolutional neural networks (CNNs), with convolutional layers, downsampling layers, and weight replication, began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though it was not trained by backpropagation.
=== Backpropagation ===
Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673, to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement it, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis. G. M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
=== Convolutional neural networks ===
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
=== Recurrent neural networks ===
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by the Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered a "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982, a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. By eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist hosted a debate in the early 1980s on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982, the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of a debate in which an AI system, a recurrent neural network, contributed to an issue simultaneously addressed by cognitive psychology.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate, introduced in 1999; that version became the default choice for RNN architectures.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
=== Deep learning ===
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success, and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
The Transformer requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
== Models ==
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links analogous to a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight, determining the strength of one node's influence on another, which modulates the signal passed between neurons.
=== Artificial neurons ===
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
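The computation described above can be written directly. The sigmoid is used here purely as one example of a nonlinear activation function:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: the weighted sum of the inputs plus a bias
    (the "activation"), passed through a nonlinear activation function
    (a sigmoid here, chosen only for illustration)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

print(neuron([1.0, 2.0], [0.5, -0.25], 0.0))  # 0.5, since the weighted sum is 0
```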
=== Organization ===
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
=== Hyperparameter ===
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
=== Learning ===
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
==== Learning rate ====
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
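The momentum update described above can be illustrated on a one-dimensional quadratic, a toy objective standing in for a network's cost function:

```python
# Gradient descent with momentum on the toy objective f(w) = w**2.
# A momentum near 0 follows the current gradient; near 1, the previous change.

w, velocity = 5.0, 0.0
lr, momentum = 0.1, 0.9

for _ in range(300):
    grad = 2 * w                        # derivative of f(w) = w**2
    velocity = momentum * velocity - lr * grad
    w += velocity                       # blend of current gradient and history

print(abs(w) < 1e-3)  # True: w has converged near the minimum at 0
```

With momentum, each weight adjustment depends partly on the previous change, which damps oscillation and can speed convergence along consistent gradient directions.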
==== Cost function ====
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
==== Backpropagation ====
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
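The gradient calculation can be illustrated on a minimal two-layer network with one unit per layer; the weights, input, and target are chosen arbitrarily for the sketch:

```python
import math

# Chain-rule sketch on a two-layer network with one unit per layer:
# y = sigmoid(w2 * sigmoid(w1 * x)).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.0, 0.0
w1, w2 = 0.6, -0.4

# Forward pass, keeping intermediate values for the backward pass
h = sigmoid(w1 * x)
y = sigmoid(w2 * h)
loss = 0.5 * (y - target) ** 2

# Backward pass: the error is propagated layer by layer via the chain rule
d_y = (y - target) * y * (1 - y)   # gradient at the output unit
d_w2 = d_y * h                     # gradient for the output weight
d_h = d_y * w2                     # error passed back to the hidden unit
d_w1 = d_h * h * (1 - h) * x       # gradient for the hidden weight
```

The backward pass shows the "division" of the error among connections: each weight receives a share of the gradient proportional to its contribution to the output.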
=== Learning paradigms ===
Machine learning is commonly separated into three main learning paradigms, supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.
==== Supervised learning ====
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
==== Unsupervised learning ====
In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
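The trivial example above can be verified numerically; the dataset is made up:

```python
# Numerical check: with the constant model f(x) = a, the cost
# C = E[(x - a)^2] is minimized when a equals the mean of the data.

data = [1.0, 2.0, 3.0, 6.0]

def cost(a):
    return sum((x - a) ** 2 for x in data) / len(data)

mean = sum(data) / len(data)
# Coarse grid search over candidate constants a in [0, 10]
best = min((k / 100 for k in range(1001)), key=cost)
print(mean, best)  # 3.0 3.0
```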
==== Reinforcement learning ====
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states s₁, ..., sₙ ∈ S and actions a₁, ..., aₘ ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(cₜ | sₜ), the observation distribution P(xₜ | sₜ) and the transition distribution P(sₜ₊₁ | sₜ, aₜ), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
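As an illustration of the MDP framing, the following sketch uses tabular Q-learning, one standard reinforcement learning method (the article does not prescribe a specific algorithm); the two-state toy environment and its costs are invented for the example:

```python
import random

# Tabular Q-learning sketch for the MDP framing above: learn, per state,
# the action with the lowest expected long-term cost.

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
lr, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    """Toy environment rules: instantaneous cost and next state."""
    cost = 0.0 if (s == 0 and a == 0) else 1.0
    next_s = 0 if a == 0 else 1
    return cost, next_s

s = 0
for _ in range(2000):
    # Explore occasionally; otherwise exploit the current lowest-cost action
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        a = min(range(n_actions), key=lambda i: Q[s][i])
    cost, s_next = step(s, a)
    # Update toward the instantaneous cost plus discounted future cost
    Q[s][a] += lr * (cost + gamma * min(Q[s_next]) - Q[s][a])
    s = s_next

print(Q[0][0] < Q[0][1])  # True: the agent learned the low-cost action
```

The ε-greedy choice makes the explore-versus-exploit trade-off mentioned above explicit: with probability ε the agent tries a random action to uncover its cost, and otherwise exploits what it has learned.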
==== Self-learning ====
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it receives initial emotions (only once) about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns a goal-seeking behavior in the behavioral environment, which contains both desirable and undesirable situations.
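The iteration listed above can be sketched as follows. The toy behavioral environment, the genome vector, and the exact form of v(s') are illustrative assumptions, not the original CAA specification:

```python
# Sketch of the crossbar self-learning iteration: perform action, receive
# consequence, compute emotion v(s'), update w'(a,s) = w(a,s) + v(s').

n_actions, n_situations = 2, 3
W = [[0.0] * n_situations for _ in range(n_actions)]  # crossbar memory

genome = [0.0, 1.0, -1.0]   # initial emotions toward situations, given once

def consequence(a, s):
    """Toy behavioral environment: action 1 leads to the undesirable
    situation 2; action 0 leads to the desirable situation 1."""
    return 2 if a == 1 else 1

def emotion(s_next):
    """v(s'): emotion toward a situation, read from the crossbar column
    plus the genome entry (an assumed, simplified form)."""
    return genome[s_next] + max(W[a][s_next] for a in range(n_actions))

s = 0
for _ in range(20):
    a = max(range(n_actions), key=lambda i: W[i][s])  # perform action a
    s_next = consequence(a, s)                        # consequence s'
    W[a][s] += emotion(s_next)                        # update crossbar memory
    s = s_next
```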
==== Neuroevolution ====
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
=== Stochastic neural network ===
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
=== Topological deep learning ===
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
=== Other ===
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
==== Modes ====
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
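Partitioning a shuffled dataset into mini-batches is straightforward; the "dataset" here is a stand-in list:

```python
import random

# The modes above differ in how many samples feed each weight update:
# stochastic (one), batch (all), mini-batch (a small stochastic subset).

random.seed(1)
dataset = list(range(100))   # stand-in for 100 training samples
batch_size = 16

random.shuffle(dataset)      # samples are assigned to batches stochastically
minibatches = [dataset[i:i + batch_size]
               for i in range(0, len(dataset), batch_size)]

print(len(minibatches))                   # 7 (the last batch holds 4 samples)
print(sum(len(b) for b in minibatches))   # 100: every sample appears once
```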
== Types ==
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low and high frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads;
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
== Network design ==
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides functions to help with building networks from scratch, while deep networks can be implemented with frameworks such as TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. A typical training function takes the training dataset, the number of hidden layer units, the learning rate, and the number of iterations as parameters.
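A training function of that shape can be sketched as follows. This is an illustrative reconstruction, not any library's API: the signature, the one-hidden-layer architecture, and the sigmoid activation are assumptions made for the example:

```python
import math
import random

def train(dataset, n_hidden, learning_rate, n_iterations):
    """Train a one-hidden-layer network by gradient descent.

    dataset is a list of (inputs, target) pairs; returns a predict
    function. Illustrative sketch only.
    """
    random.seed(0)
    n_in = len(dataset[0][0])
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Hidden and output weights; each row carries a trailing bias weight
    w_h = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
           for _ in range(n_hidden)]
    w_o = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    for _ in range(n_iterations):
        for x, t in dataset:
            # Forward pass
            xb = x + [1.0]                                   # bias input
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w_h]
            hb = h + [1.0]
            y = sig(sum(w * v for w, v in zip(w_o, hb)))
            # Backward pass: chain rule through output and hidden layers
            d_y = (y - t) * y * (1 - y)
            for j in range(n_hidden):
                d_h = d_y * w_o[j] * h[j] * (1 - h[j])
                for i in range(n_in + 1):
                    w_h[j][i] -= learning_rate * d_h * xb[i]
            for j in range(n_hidden + 1):
                w_o[j] -= learning_rate * d_y * hb[j]

    def predict(x):
        xb = x + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w_h] + [1.0]
        return sig(sum(w * v for w, v in zip(w_o, hb)))
    return predict
```

For example, training on the four input/target pairs of the logical AND function with two hidden units, a learning rate of 0.5, and a few thousand iterations yields a predictor whose outputs approach 1 for (1, 1) and 0 otherwise.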
== Applications ==
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs can also help mitigate flooding by modelling rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, the dynamics of neural circuitry arising from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
== Theoretical properties ==
=== Computational power ===
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
=== Capacity ===
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are commonly used: the information capacity and the VC dimension. The information capacity of a perceptron is discussed at length in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard (non-convolutional) neurons can be derived from four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions that the network can model given any data as input. The second notion, the VC dimension, uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, for input data in a specific form. As noted in the literature, the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
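The function-counting result underlying these capacity notions, due to Cover (1965), states that N points in general position in d dimensions admit 2·Σ_{k=0}^{d−1} C(N−1, k) linearly separable dichotomies. A small Python check (an illustrative sketch, not taken from the sources discussed above):

```python
from math import comb

def cover_count(n_points: int, dim: int) -> int:
    """Number of linearly separable dichotomies of n_points in
    general position in dim dimensions (Cover, 1965)."""
    return 2 * sum(comb(n_points - 1, k) for k in range(dim))

# Up to dim points, a perceptron shatters the set: all 2^n labelings.
assert cover_count(3, 3) == 2 ** 3

# Well beyond 2*dim points, only a small fraction of the 2^n labelings
# remain separable -- hence 2*dim is often quoted as the capacity.
print(cover_count(10, 3))   # 92 separable dichotomies
print(2 ** 10)              # out of 1024 possible
```

The ratio `cover_count(n, d) / 2**n` drops sharply once `n` exceeds `2*d`, which is the phase transition behind the "capacity" terminology.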
=== Convergence ===
Models may not consistently converge on a single solution, for several reasons. First, local minima may exist, depending on the cost function and the model. Second, the optimization method used may not be guaranteed to converge when started far from any local minimum. Third, for sufficiently large data or parameter sets, some methods become impractical. A further issue is that training may cross a saddle point, which can steer convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks, and it is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
=== Generalization and statistics ===
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the number of free parameters actually needed.
Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
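The regularization idea can be sketched as an L2 (weight-decay) penalty added to the gradient step. The snippet below is a minimal illustration, with the data gradient zeroed out so that only the penalty's shrinking effect, the pull toward the "simpler model" a Bayesian prior prefers, is visible:

```python
import numpy as np

# Gradient descent on (empirical risk + (lam/2)*||w||^2).
# With the data gradient set to zero, each step multiplies the
# weights by (1 - lr*lam), shrinking them geometrically toward 0.
w = np.ones(4)
lam, lr = 0.5, 0.1
for _ in range(10):
    data_grad = np.zeros_like(w)        # placeholder for dL/dw on a batch
    w = w - lr * (data_grad + lam * w)  # weight-decay term

print(np.linalg.norm(w))  # strictly below the initial norm of 2.0
```

In practice the data gradient is nonzero, and `lam` trades off the empirical risk against the structural risk mentioned above.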
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
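A minimal sketch of this procedure (all numeric values here are hypothetical): treat the validation MSE as an estimate of the output variance and form a normal-approximation interval around a new prediction:

```python
import numpy as np

# Hypothetical validation targets and network predictions.
y_val      = np.array([1.0, 2.0, 3.0, 4.0])
y_pred_val = np.array([1.1, 1.9, 3.2, 3.8])

# Validation MSE used as the variance estimate.
mse   = np.mean((y_val - y_pred_val) ** 2)
sigma = np.sqrt(mse)

# ~95% confidence interval around a prediction on a new input,
# assuming normally distributed output errors.
y_hat = 2.5
lo, hi = y_hat - 1.96 * sigma, y_hat + 1.96 * sigma
print(lo, hi)
```

As the text notes, this is only valid while the output distribution stays the same and the network is not modified.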
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:
{\displaystyle y_{i}={\frac {e^{x_{i}}}{\sum _{j=1}^{c}e^{x_{j}}}}}
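A minimal NumPy sketch of this function (the class scores are hypothetical; subtracting the maximum is a standard numerical-stability step, not part of the formula, and leaves the result unchanged):

```python
import numpy as np

def softmax(x):
    # Shift by the max for numerical stability; softmax is
    # invariant to adding a constant to every input.
    z = np.exp(x - np.max(x))
    return z / z.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p)          # ~[0.659 0.242 0.099]
print(p.sum())    # 1.0 -- interpretable as posterior class probabilities
```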
== Criticism ==
=== Training ===
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take overly large steps when changing the network connections following an example, grouping examples into so-called mini-batches, and introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research is devoted to extrapolating multiple training scenarios from a single training experience, and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right).
=== Theory ===
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything."

One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
=== Hardware ===
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
=== Practical counterexamples ===
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
=== Hybrid approaches ===
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
=== Dataset bias ===
Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
== Gallery ==
== Recent advancements and future directions ==
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
=== Image processing ===
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
=== Speech recognition ===
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
=== Natural language processing ===
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
=== Control systems ===
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
=== Finance ===
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
=== Medicine ===
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
=== Content creation ===
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.
== See also ==
== References ==
== Bibliography ==
== External links ==
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science Archived 7 June 2015 at the Wayback Machine
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks Archived 24 January 2011 at the Wayback Machine – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information Archived 9 July 2009 at the Wayback Machine
Sanderson G (5 October 2017). "But what is a Neural Network?". 3Blue1Brown. Archived from the original on 7 November 2021 – via YouTube.
Kuaishou Technology (Chinese: 快手; lit. 'quick hand') is a Chinese publicly traded partly state-owned holding company based in Haidian District, Beijing, that was founded in 2011 by Hua Su (宿华) and Cheng Yixiao (程一笑). The company, listed on the Hong Kong Stock Exchange, is known for developing a mobile app for sharing users' short videos, a social network, and video special effects editor. The app is known as Kwai in many countries outside of China. It is also known as Snack Video in India, Pakistan and Indonesia.
As of 2019, it has a worldwide user base of over 200 million, leading the "Most Downloaded" lists of the Google Play and Apple App Store in eight countries, such as Brazil, where it was introduced in 2019. Its main competitor is Douyin, which is known as TikTok outside China.
Kuaishou's overseas team is led by the former CEO of the application 99, and staff from Google, Facebook, Netflix, and TikTok were recruited to lead the company's international expansion.
The China Internet Investment Fund, a state-owned enterprise controlled by the Cyberspace Administration of China, holds a golden share ownership stake in Kuaishou.
== History ==
Kuaishou is China's first short video platform that was developed in 2011 by engineer Hua Su and Cheng Yixiao. Prior to co-founding Kuaishou, Su Hua had worked for both Google and Baidu as a software engineer. The company is headquartered in Haidian District, Beijing.
Kuaishou's predecessor "GIF Kuaishou" was founded in March 2011. GIF Kuaishou was a mobile app with which users could make and share GIF pictures. In November 2012, Kuaishou became a short video community and a platform with which users could record and share videos. By 2013, the app had reached 100 million daily users. By 2019, it had exceeded 200 million active daily users.
In March 2017, Kuaishou closed a US$350 million investment round that was led by Tencent. In January 2018, Forbes estimated the company's valuation to be US$18 billion.
In April 2018, Kuaishou's app was briefly banned from Chinese app stores after China Central Television (CCTV) reported on the platform popularizing videos of teenage mothers.
In 2019, the company announced a partnership with the People's Daily, an official newspaper of the Central Committee of the Chinese Communist Party, to help it experiment with the use of artificial intelligence in news.
In June 2020, following the start of the 2020–2021 China–India skirmishes, the Government of India banned Kwai along with 58 other apps, citing "data and privacy issues".
In January 2021, Kuaishou announced it was planning an initial public offering (IPO) to raise approximately US$5 billion. Kuaishou's stock completed its first day of trading at $300 Hong Kong dollars (HKD) (US$38.70), more than doubling its initial offer price, and causing its market value to rise to over $1 trillion HKD (US$159 billion).
In February 2021, Kuaishou made a debut on the Hong Kong Stock Exchange, with its shares soaring by 194% at the opening. However, the company soon faced significant challenges due to stringent regulatory restrictions on Chinese internet companies, leading to a nearly 80% decline in its share price from its peak post-IPO. By December 2021, Kuaishou announced a major reorganization, including the layoff of 30% of its staff, primarily targeting mid-level employees earning an annual salary of $157,000 or more. This restructuring aimed to cut costs and mitigate financial losses.
In October 2022, state-owned Beijing Radio and Television Station took a minority ownership stake in Kuaishou.
In April 2024, a Financial Times article citing current and former Kuaishou employees stated that the company has been running an ageist redundancy programme known internally as "Limestone", culling workers in their mid-30s. In June 2024, Kuaishou and the Sichuan international communication center launched a branch center in São Paulo, Brazil.
In June 2024, Kuaishou released its diffusion transformer text-to-video model, Kling, which they claimed could generate two minutes of video at 30 frames per second and in 1080p resolution. The model has been compared to that of OpenAI's Sora text-to-video model. It is accessible to the public on Kuaishou's video editing app KwaiCut via signing up for a waitlist with a Chinese phone number.
== Popularity ==
Compared to its main short video platform competitor Douyin, Kuaishou is more popular with older users who live outside China's Tier 1 cities. Its initial popularity came from videos of Chinese rural life. Kuaishou also relied more on e-commerce revenue than on advertising revenue compared to its main competitor.
The app is one of the most popular social media platforms in Brazil, where Kuaishou partnered with creators to make telenovela style content, and appeals to football fans by working with football teams CR Flamengo and Santos FC and sponsoring the tournament Copa América. Kwai was important in Brazil for spreading information (and misinformation) about the COVID-19 vaccine and is also a site for political misinformation along with other social media. Kuaishou says it is continuing to develop its overseas markets, especially in Latin America, the UAE and Nigeria.
Kwai (as the app is called outside of China) was banned in India in 2020 along with other short video apps like TikTok. Kuaishou then released the clone SnackVideo, which was subsequently also banned.
== See also ==
List of Kuaishou original programming
List of content platforms by monthly active users
== References ==
== External links ==
Official website
Robotic control is the system that contributes to the movement of robots. It involves the mechanical components and programmable systems that make it possible to control robots. Robots can be controlled by various means, including manual, wireless, semi-autonomous (a mix of fully automatic and wireless control), and fully autonomous (using artificial intelligence).
== Modern robots (2000-present) ==
=== Medical and surgical ===
In the medical field, robots are used to make precise movements that are difficult for humans. Robotic surgery involves less-invasive surgical methods, "procedures performed through tiny incisions". One example is the da Vinci surgical system, which pairs robotic arms holding surgical instruments with a camera. The surgeon sits at a console and controls the robot remotely; the feed from the camera is projected onto a monitor, allowing the surgeon to see the incisions. The system is built to mimic the movement of the surgeon's hands and can filter out slight hand tremors. Despite the visual feedback, however, there is no physical feedback: as surgeons apply force at the console, they cannot feel how much pressure they are applying to the tissue.
=== Military ===
The earliest robots used by the military date back to the 19th century, when automatic weapons were on the rise due to developments in mass production. The first automated weapons were used in World War I, including radio-controlled unmanned aerial vehicles (UAVs). The technology of ground and aerial robotic weapons has continued to develop since then and has become part of modern warfare. During this transition, robots were semi-automatic, able to be controlled remotely by a human operator. Advances in sensors and processors led to advances in the capabilities of military robots. Artificial intelligence (AI) began to develop in the mid-20th century; in the 21st century, the technology was transferred to warfare, and weapons that were once semi-autonomous are developing into lethal autonomous weapons systems (LAWS).
==== Impact ====
As weapons are developed to become fully autonomous, the line that separates an enemy combatant from a civilian becomes ambiguous. There is an ongoing debate about whether artificial intelligence is able to differentiate between them, and about what is morally and humanely right (for example, in the case of a child unknowingly working for the enemy).
=== Space exploration ===
Space missions involve sending robots into space with the goal of discovering more of the unknown. The robots used in space exploration have been controlled semi-autonomously: they are able to maneuver themselves and are self-sustaining. To allow for data collection and controlled research, the robot remains in communication with scientists and engineers on Earth. For the National Aeronautics and Space Administration's (NASA) Curiosity rover, part of its Mars exploration program, communication between the rover and its operators is made possible by "an international network of antennas that…permits constant observation of spacecraft as the Earth rotates on its own axis".
=== Artificial intelligence ===
Artificial intelligence (AI) is used in robotic control to enable a robot to process and adapt to its surroundings. Robots can be programmed to perform a given task, for instance, walking up a hill. The technology is relatively new and is being experimented with in several fields, such as the military.
==== Boston Dynamics' robots ====
Boston Dynamics' "Spot" is an autonomous robot that uses four sensors to map where it is relative to its surroundings. This navigational method is called simultaneous localization and mapping, or "SLAM" for short. Spot has several operating modes and, depending on the obstacles in front of it, can override manual control and perform actions autonomously. This is similar to other robots made by Boston Dynamics, like "Atlas", which uses similar methods of control. When Atlas is controlled, the control software doesn't explicitly tell the robot how to move its joints; rather, it employs "mathematical models of the underlying physics of the robot's body and how it interacts with the environment". Instead of inputting data into every single joint of the robot, the engineers program the robot as a whole, which makes it more capable of adapting to its environment.
== See also ==
Synthetic Neural Modeling
Control theory
Cybernetics
Remote-control vehicle
Mobile robot navigation
Robot kinematics
Simultaneous localization and mapping
Robot locomotion
Motion planning
Robot learning
Vision Based Robot Control
== References ==
In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function is an activation function defined as the non-negative part of its argument, i.e., the ramp function:
{\displaystyle \operatorname {ReLU} (x)=x^{+}=\max(0,x)={\frac {x+|x|}{2}}={\begin{cases}x&{\text{if }}x>0,\\0&{\text{if }}x\leq 0\end{cases}}}
where {\displaystyle x} is the input to a neuron. This is analogous to half-wave rectification in electrical engineering.
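The definition above translates directly into a few lines of NumPy (an illustrative sketch):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(0, x)."""
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))   # [0.  0.  0.  0.5 2. ]

# Equivalent form from the definition: (x + |x|) / 2
assert np.allclose(relu(x), (x + np.abs(x)) / 2)
```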
ReLU is one of the most popular activation functions for artificial neural networks, and finds application in computer vision and speech recognition using deep neural nets and computational neuroscience.
== History ==
The ReLU was first used by Alston Householder in 1941 as a mathematical abstraction of biological neural networks.
Kunihiko Fukushima in 1969 used ReLU in the context of visual feature extraction in hierarchical neural networks. Thirty years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria.
Prior to 2010, most activation functions used were the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more numerically efficient counterpart, the hyperbolic tangent. Around 2010, the use of ReLU became common again.
Jarrett et al. (2009) noted that rectification by either absolute or ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allows average pooling without neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs.
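The cancellation argument can be illustrated with a few numbers (a hypothetical example, not taken from the paper): oppositely signed neighbouring filter responses average to zero, while rectified responses do not:

```python
import numpy as np

# Neighbouring filter responses of opposite sign and equal magnitude:
responses = np.array([3.0, -3.0, 1.5, -1.5])

# Average pooling over the signed responses cancels to zero,
# discarding the fact that the filters responded strongly.
print(responses.mean())                   # 0.0

# Rectifying first preserves the response energy.
print(np.maximum(0, responses).mean())    # 1.125 (ReLU / "positive part")
print(np.abs(responses).mean())           # 2.25  (absolute-value rectification)
```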
Nair and Hinton (2010) made a theoretical argument that the softplus activation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in a Boltzmann machine that takes {\displaystyle x} as input and produces 1 as output with probability {\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}}. They then considered extending its range of output by making infinitely many copies of it, {\displaystyle X_{1},X_{2},X_{3},\dots }, that all take the same input, offset by amounts {\displaystyle 0.5,1.5,2.5,\dots }, with their outputs added together as {\displaystyle \sum _{i=1}^{\infty }X_{i}}. They then demonstrated that {\displaystyle \sum _{i=1}^{\infty }X_{i}} is approximately distributed as {\displaystyle {\mathcal {N}}(\log(1+e^{x}),\sigma (x))}, which is also approximately equal to {\displaystyle \operatorname {ReLU} ({\mathcal {N}}(x,\sigma (x)))}, where {\displaystyle {\mathcal {N}}} stands for the Gaussian distribution.
They also argued for another reason for using ReLU: that it allows "intensity equivariance" in image recognition. That is, multiplying the input image by a constant $k$ multiplies the output by $k$ as well. In contrast, this is false for other activation functions like sigmoid or tanh. They found that ReLU activation allowed good empirical performance in restricted Boltzmann machines.
Glorot et al. (2011) argued that ReLU has several advantages over sigmoid or tanh: it is more similar to the responses of biological neurons in their main operating regime; it avoids vanishing gradients; it is cheaper to compute; and it naturally creates sparse representations, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performance without unsupervised pre-training, especially on large, purely supervised tasks.
== Advantages ==
Advantages of ReLU include:
Sparse activation: for example, in a randomly initialized network, only about 50% of hidden units are activated (i.e. have a non-zero output).
Better gradient propagation: fewer vanishing gradient problems compared to sigmoidal activation functions that saturate in both directions.
Efficiency: only requires comparison and addition.
Scale-invariant (positively homogeneous, or "intensity equivariant"): $\max(0, ax) = a\max(0, x)$ for $a \geq 0$.
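These properties can be checked numerically. A minimal sketch (using NumPy; not tied to any particular framework):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), applied elementwise.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
# Pre-activations of a randomly initialized layer are roughly
# symmetric around zero, so about half the units are inactive.
pre_activations = rng.standard_normal(100_000)
sparsity = np.mean(relu(pre_activations) == 0.0)
print(f"inactive fraction: {sparsity:.2f}")  # close to 0.50

# Scale invariance: max(0, a*x) == a * max(0, x) for a >= 0.
x = rng.standard_normal(1000)
a = 3.7
assert np.allclose(relu(a * x), a * relu(x))
```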
== Potential problems ==
Possible downsides can include:
Non-differentiability at zero (however, it is differentiable everywhere else, and the value of the derivative at zero can be arbitrarily chosen to be 0 or 1).
Not zero-centered: ReLU outputs are always non-negative. This can make it harder for the network to learn during backpropagation, because gradient updates tend to push weights in one direction (positive or negative). Batch normalization can help address this.
ReLU is unbounded.
Redundancy of the parametrization: because ReLU is scale-invariant, the network computes exactly the same function after scaling the weights and biases in front of a ReLU activation by $k$, and the weights after it by $1/k$.
Dying ReLU: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state (it "dies"). This is a form of the vanishing gradient problem. In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity and potentially even halting the learning process. This problem typically arises when the learning rate is set too high. It may be mitigated by using "leaky" ReLU instead, where a small positive slope is assigned for $x < 0$. However, depending on the task, performance may be reduced.
== Variants ==
=== Piecewise-linear variants ===
Leaky ReLU allows a small, positive gradient when the unit is inactive, helping to mitigate the vanishing gradient problem. This gradient is defined by a parameter $\alpha$, typically set to 0.01–0.3.
$$f(x) = \begin{cases} x & x > 0, \\ \alpha x & x \leq 0, \end{cases} \qquad f'(x) = \begin{cases} 1 & x > 0, \\ \alpha & x \leq 0. \end{cases}$$
The same function can also be expressed without the piecewise notation as:
$$f(x) = \frac{1+\alpha}{2}x + \frac{1-\alpha}{2}|x|$$
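The equivalence of the piecewise and closed forms is easy to verify directly. A quick sketch:

```python
import numpy as np

def leaky_relu_piecewise(x, alpha=0.1):
    # x for x > 0, alpha * x otherwise.
    return np.where(x > 0, x, alpha * x)

def leaky_relu_closed(x, alpha=0.1):
    # Equivalent closed form: (1+alpha)/2 * x + (1-alpha)/2 * |x|.
    return (1 + alpha) / 2 * x + (1 - alpha) / 2 * np.abs(x)

x = np.linspace(-5, 5, 1001)
assert np.allclose(leaky_relu_piecewise(x), leaky_relu_closed(x))
```

For $x > 0$ the two halves of the closed form add up to $x$; for $x \leq 0$ they add up to $\alpha x$, matching the piecewise definition.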
Parametric ReLU (PReLU) takes this idea further by making $\alpha$ a learnable parameter along with the other network parameters.
Note that for $\alpha \leq 1$, this is equivalent to $f(x) = \max(x, \alpha x)$ and thus has a relation to "maxout" networks.
Concatenated ReLU (CReLU) preserves positive and negative phase information: $f(x) = [\operatorname{ReLU}(x), \operatorname{ReLU}(-x)]$.
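A minimal sketch of CReLU, showing how it doubles the feature dimension while keeping both phases of the signal:

```python
import numpy as np

def crelu(x):
    # Concatenate ReLU of x and ReLU of -x along the last axis,
    # doubling the feature dimension while keeping both phases.
    return np.concatenate([np.maximum(0.0, x), np.maximum(0.0, -x)], axis=-1)

x = np.array([[-2.0, 3.0]])
print(crelu(x))  # [[0. 3. 2. 0.]]
```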
=== Other non-linear variants ===
==== DELU ====
ExtendeD Exponential Linear Unit (DELU) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process. According to its authors, DELU may obtain higher classification accuracy than ReLU and ELU.
$$f(x) = \begin{cases} x & x > x_c, \\ (e^{ax}-1)/b & x \leq x_c \end{cases} \qquad f'(x) = \begin{cases} 1 & x > x_c, \\ (a/b)e^{ax} & x \leq x_c \end{cases}$$
In these formulas, $a$, $b$ and $x_c$ are hyperparameter values which could be set as default constraints $a = 1$, $b = 2$ and $x_c = 1.25643$, as done in the original work.
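With the default constants above, the exponential branch meets the identity branch (approximately) at the crossover point $x_c$, so the function is continuous. A quick numerical check:

```python
import math

def delu(x, a=1.0, b=2.0, x_c=1.25643):
    # DELU: identity above the crossover x_c, scaled exponential below.
    return x if x > x_c else (math.exp(a * x) - 1.0) / b

# Continuity check: the exponential branch evaluated at x_c should
# (approximately) equal x_c itself for the default constants.
left = (math.exp(1.0 * 1.25643) - 1.0) / 2.0
assert abs(left - 1.25643) < 1e-3
```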
==== Gaussian-error linear unit (GELU) ====
GELU is a smooth approximation to the rectifier:
$$f(x) = x\Phi(x), \qquad f'(x) = x\Phi'(x) + \Phi(x)$$
where $\Phi(x) = P(X \leq x)$ is the cumulative distribution function of the standard normal distribution.
This activation function is illustrated in the figure at the start of this article. It has a "bump" to the left of $x = 0$ and serves as the default activation for models such as BERT.
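Since $\Phi$ can be written in terms of the error function, the exact GELU is a one-liner. A sketch:

```python
import math

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF
    # expressed via the error function.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Behaves like ReLU away from zero, but is smooth through it,
# dipping slightly below zero for moderate negative inputs.
print(gelu(0.0))   # 0.0
print(gelu(-1.0))  # negative: the "bump" left of zero
```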
==== SiLU ====
The SiLU (sigmoid linear unit) or swish function is another smooth approximation which uses the sigmoid function, first introduced in the GELU paper:
$$f(x) = x \cdot \operatorname{sigmoid}(x), \qquad f'(x) = x \cdot \operatorname{sigmoid}'(x) + \operatorname{sigmoid}(x)$$
==== Softplus ====
A smooth approximation to the rectifier is the analytic function
$$f(x) = \ln(1+e^x), \qquad f'(x) = \frac{e^x}{1+e^x} = \frac{1}{1+e^{-x}}$$
which is called the softplus or SmoothReLU function. For large negative $x$ it is roughly $\ln 1$, so just above 0, while for large positive $x$ it is roughly $\ln(e^x)$, so just above $x$.
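A direct implementation of $\ln(1+e^x)$ overflows for large $x$. A numerically stable sketch uses the identity $\ln(1+e^x) = \max(0,x) + \ln(1+e^{-|x|})$:

```python
import math

def softplus(x):
    # Stable form: ln(1+e^x) = max(0, x) + ln(1 + e^-|x|),
    # which avoids overflow in exp() for large positive x.
    return max(0.0, x) + math.log1p(math.exp(-abs(x)))

print(softplus(0.0))     # ln 2 ≈ 0.693
print(softplus(1000.0))  # 1000.0 (naive exp(1000) would overflow)
```

The two asymptotic regimes from the text are visible directly: for large negative $x$ the result is just above 0, and for large positive $x$ it is just above $x$.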
This function can be approximated as:
$$\ln(1+e^x) \approx \begin{cases} \ln 2, & x = 0, \\[6pt] \frac{x}{1-e^{-x/\ln 2}}, & x \neq 0 \end{cases}$$
By making the change of variables $x = y\ln(2)$, this is equivalent to
$$\log_2(1+2^y) \approx \begin{cases} 1, & y = 0, \\[6pt] \frac{y}{1-e^{-y}}, & y \neq 0 \end{cases}$$
A sharpness parameter $k$ may be included:
$$f(x) = \frac{\ln(1+e^{kx})}{k}, \qquad f'(x) = \frac{e^{kx}}{1+e^{kx}} = \frac{1}{1+e^{-kx}}$$
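As the sharpness $k$ grows, this sharpened softplus approaches ReLU pointwise (the largest gap, $\ln 2 / k$, occurs at $x = 0$). A sketch, written in the stable form discussed above:

```python
import math

def softplus_k(x, k):
    # Sharpened softplus ln(1 + e^(k*x)) / k in a stable form.
    return max(0.0, x) + math.log1p(math.exp(-abs(k * x))) / k

# The gap to ReLU shrinks roughly like ln(2)/k as k increases.
for k in (1.0, 10.0, 100.0):
    gap = max(abs(softplus_k(x, k) - max(0.0, x))
              for x in (-2.0, -0.5, 0.0, 0.5, 2.0))
    print(k, gap)
```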
The derivative of softplus is the logistic function.
The logistic sigmoid function is a smooth approximation of the derivative of the rectifier, the Heaviside step function.
The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:
$$\operatorname{LSE_0}^+(x_1,\dots,x_n) := \operatorname{LSE}(0,x_1,\dots,x_n) = \ln(1+e^{x_1}+\cdots+e^{x_n})$$
The LogSumExp function is
$$\operatorname{LSE}(x_1,\dots,x_n) = \ln(e^{x_1}+\cdots+e^{x_n})$$
and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
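The relationship between softplus and LogSumExp can be checked directly. A sketch of a stable LSE (shifting by the maximum before exponentiating, a standard trick):

```python
import math

def logsumexp(*xs):
    # Stable LSE: shift by the max before exponentiating.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Softplus is LSE with the first argument set to zero:
x = 3.0
assert abs(logsumexp(0.0, x) - math.log(1.0 + math.exp(x))) < 1e-12
```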
==== ELU ====
Exponential linear units try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.
$$f(x) = \begin{cases} x & x > 0, \\ \alpha(e^x - 1) & x \leq 0 \end{cases} \qquad f'(x) = \begin{cases} 1 & x > 0, \\ \alpha e^x & x \leq 0 \end{cases}$$
In these formulas, $\alpha$ is a hyperparameter to be tuned, with the constraint $\alpha \geq 0$.
Given the same interpretation of $\alpha$, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form $f(x) = \max(-\alpha, x)$.
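The smoothing relationship is easy to see numerically: both functions saturate at $-\alpha$ for very negative inputs, but ELU approaches that floor smoothly. A sketch:

```python
import math

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (e^x - 1) for x <= 0.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def srelu(x, alpha=1.0):
    # Shifted ReLU: max(-alpha, x); ELU is a smoothed version of this.
    return max(-alpha, x)

# Both saturate near -alpha for very negative inputs.
print(elu(-10.0), srelu(-10.0))
```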
==== Mish ====
The mish function can also be used as a smooth approximation of the rectifier. It is defined as
$$f(x) = x\tanh\big(\operatorname{softplus}(x)\big),$$
where $\tanh(x)$ is the hyperbolic tangent, and $\operatorname{softplus}(x)$ is the softplus function.
Mish is non-monotonic and self-gated. It was inspired by Swish, itself a variant of ReLU.
==== Squareplus ====
Squareplus is the function
$$f(x) = \frac{x + \sqrt{x^2 + b}}{2}$$
where $b \geq 0$ is a hyperparameter that determines the "size" of the curved region near $x = 0$. (For example, letting $b = 0$ yields ReLU, and letting $b = 4$ yields the metallic mean function.)
Squareplus shares many properties with softplus: it is monotonic, strictly positive, approaches 0 as $x \to -\infty$, approaches the identity as $x \to +\infty$, and is $C^\infty$ smooth. However, squareplus can be computed using only algebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability when $x$ is large.
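The two special cases mentioned above can be checked in a few lines: $b = 0$ recovers ReLU exactly, and with $b = 4$ the value at $x = 1$ is the golden ratio (the first metallic mean). A sketch:

```python
import math

def squareplus(x, b=4.0):
    # Squareplus: (x + sqrt(x^2 + b)) / 2; purely algebraic.
    return (x + math.sqrt(x * x + b)) / 2.0

# b = 0 recovers ReLU exactly.
assert squareplus(-3.0, b=0.0) == 0.0
assert squareplus(3.0, b=0.0) == 3.0
# b = 4 gives the metallic means: at x = 1, the golden ratio.
print(squareplus(1.0, b=4.0))  # ≈ 1.618
```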
== See also ==
Softmax function
Sigmoid function
Tobit model
Layer (deep learning)
== References ==
Chinchilla is a family of large language models (LLMs) developed by the research team at Google DeepMind, presented in March 2022.
== Models ==
It is named "chinchilla" because it is a further development over a previous model family named Gopher. Both model families were trained in order to investigate the scaling laws of large language models.
DeepMind claimed that Chinchilla outperforms GPT-3. It considerably simplifies downstream utilization because it requires much less computing power for inference and fine-tuning. Based on the training of previously employed language models, the team determined that if one doubles the model size, one must also double the number of training tokens. This hypothesis was used to train Chinchilla. For a training cost similar to Gopher's, Chinchilla has 70B parameters and four times as much data.
Chinchilla has an average accuracy of 67.5% on the Measuring Massive Multitask Language Understanding (MMLU) benchmark, which is 7% higher than Gopher's performance. Chinchilla was still in the testing phase as of January 12, 2023.
Chinchilla contributes to developing an effective training paradigm for large autoregressive language models with limited compute resources. The Chinchilla team recommends that the number of training tokens be doubled for every doubling of model size, meaning that using larger, higher-quality training datasets can lead to better results on downstream tasks.
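This recommendation can be illustrated with a back-of-the-envelope sketch. The roughly 20-tokens-per-parameter figure matches the published Chinchilla configuration (70B parameters, about 1.4T tokens), but it is an approximation, not an exact law:

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    # Rough compute-optimal token budget: scale tokens linearly
    # with parameters, so doubling one doubles the other.
    return n_params * tokens_per_param

# Chinchilla itself: 70B parameters -> ~1.4T training tokens.
print(chinchilla_optimal_tokens(70e9) / 1e12)  # 1.4
```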
It has been used for the Flamingo vision-language model.
== Architecture ==
Both the Gopher family and Chinchilla family are families of transformer models.
In particular, they are essentially the same as GPT-2, with different sizes and minor modifications. The Gopher family uses RMSNorm instead of LayerNorm, and relative positional encoding rather than absolute positional encoding. The Chinchilla family is the same as the Gopher family, but trained with the AdamW optimizer instead of Adam.
The Gopher family contains six models of increasing size, from 44 million parameters to 280 billion parameters. They refer to the largest one as "Gopher" by default. Similar naming conventions apply for the Chinchilla family.
Table 1 of the Gopher paper shows the entire Gopher family:
Table 4 compares the 70-billion-parameter Chinchilla with Gopher 280B.
== See also ==
LaMDA
== References ==
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.
== History ==
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H.O. Hartley in 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. Another was given by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf. The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing the EM method as an important tool of statistical analysis. See also Meng and van Dyk (1997).
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983.
Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.
== Introduction ==
The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
== Description ==
=== The symbols ===
Given the statistical model which generates a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol{\theta}$, along with a likelihood function $L(\boldsymbol{\theta}; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta})$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data
$$L(\boldsymbol{\theta}; \mathbf{X}) = p(\mathbf{X} \mid \boldsymbol{\theta}) = \int p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta}) \, d\mathbf{Z} = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta})\, p(\mathbf{Z} \mid \boldsymbol{\theta}) \, d\mathbf{Z}$$
However, this quantity is often intractable since $\mathbf{Z}$ is unobserved and the distribution of $\mathbf{Z}$ is unknown before attaining $\boldsymbol{\theta}$.
=== The EM algorithm ===
The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps:
Expectation step (E step): Define $Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)})$ as the expected value of the log likelihood function of $\boldsymbol{\theta}$, with respect to the current conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ and the current estimates of the parameters $\boldsymbol{\theta}^{(t)}$:
$$Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) = \operatorname{E}_{\mathbf{Z} \sim p(\cdot \mid \mathbf{X}, \boldsymbol{\theta}^{(t)})}\left[\log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta})\right]$$
Maximization step (M step): Find the parameters that maximize this quantity:
$$\boldsymbol{\theta}^{(t+1)} = \underset{\boldsymbol{\theta}}{\operatorname{arg\,max}}\ Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)})$$
More succinctly, we can write it as one equation:
$$\boldsymbol{\theta}^{(t+1)} = \underset{\boldsymbol{\theta}}{\operatorname{arg\,max}}\ \operatorname{E}_{\mathbf{Z} \sim p(\cdot \mid \mathbf{X}, \boldsymbol{\theta}^{(t)})}\left[\log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta})\right]$$
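The two steps can be made concrete with a minimal EM fit of a two-component univariate Gaussian mixture, a standard application of the algorithm. This sketch is illustrative, not optimized:

```python
import math
import random

def em_gmm_1d(data, iters=50):
    # EM for a two-component 1D Gaussian mixture;
    # returns (weights, means, variances).
    w, mu, var = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
    for _ in range(iters):
        # E step: responsibilities r[j] = P(component j | x) per point.
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M step: re-estimate parameters from responsibility-weighted data.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
    return w, mu, var

random.seed(0)
data = [random.gauss(-4, 1) for _ in range(300)] + \
       [random.gauss(4, 1) for _ in range(300)]
w, mu, var = em_gmm_1d(data)
print(sorted(round(m, 1) for m in mu))  # means recovered near -4 and 4
```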
=== Interpretation of the variables ===
The typical models to which EM is applied use $\mathbf{Z}$ as a latent variable indicating membership in one of a set of groups:
The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).
However, it is possible to apply EM to other sorts of models.
The motivation is as follows. If the value of the parameters $\boldsymbol{\theta}$ is known, usually the value of the latent variables $\mathbf{Z}$ can be found by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol{\theta}$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol{\theta}$ and $\mathbf{Z}$ are unknown:
First, initialize the parameters $\boldsymbol{\theta}$ to some random values.
Compute the probability of each possible value of $\mathbf{Z}$, given $\boldsymbol{\theta}$.
Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol{\theta}$.
Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically improves the observed-data likelihood; equivalently, it monotonically approaches a local minimum of the negative log-likelihood viewed as a cost function.
== Properties ==
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates $\boldsymbol{\theta}^{(0)}$), or applying simulated annealing methods.
EM is especially useful when the likelihood is an exponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf).
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
== Proof of correctness ==
Expectation–maximization works to improve $Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)})$ rather than directly improving $\log p(\mathbf{X} \mid \boldsymbol{\theta})$. Here it is shown that improvements to the former imply improvements to the latter.
For any $\mathbf{Z}$ with non-zero probability $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta})$, we can write
$$\log p(\mathbf{X} \mid \boldsymbol{\theta}) = \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta}) - \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta}).$$
We take the expectation over possible values of the unknown data $\mathbf{Z}$ under the current parameter estimate $\boldsymbol{\theta}^{(t)}$ by multiplying both sides by $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta}^{(t)})$ and summing (or integrating) over $\mathbf{Z}$. The left-hand side is the expectation of a constant, so we get:
$$\begin{aligned}\log p(\mathbf{X} \mid \boldsymbol{\theta}) &= \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta}^{(t)}) \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol{\theta}) - \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta}^{(t)}) \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol{\theta}) \\ &= Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) + H(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}),\end{aligned}$$
where $H(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)})$ is defined by the negated sum it is replacing.
This last equation holds for every value of $\boldsymbol{\theta}$ including $\boldsymbol{\theta} = \boldsymbol{\theta}^{(t)}$,
$$\log p(\mathbf{X} \mid \boldsymbol{\theta}^{(t)}) = Q(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)}) + H(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)}),$$
and subtracting this last equation from the previous equation gives
$$\log p(\mathbf{X} \mid \boldsymbol{\theta}) - \log p(\mathbf{X} \mid \boldsymbol{\theta}^{(t)}) = Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) - Q(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)}) + H(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) - H(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)}).$$
However, Gibbs' inequality tells us that $H(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) \geq H(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)})$, so we can conclude that
$$\log p(\mathbf{X} \mid \boldsymbol{\theta}) - \log p(\mathbf{X} \mid \boldsymbol{\theta}^{(t)}) \geq Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)}) - Q(\boldsymbol{\theta}^{(t)} \mid \boldsymbol{\theta}^{(t)}).$$
In words, choosing $\boldsymbol{\theta}$ to improve $Q(\boldsymbol{\theta} \mid \boldsymbol{\theta}^{(t)})$ causes $\log p(\mathbf{X} \mid \boldsymbol{\theta})$ to improve at least as much.
== As a maximization–maximization procedure ==
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent. Consider the function:
$$F(q, \theta) := \operatorname{E}_q[\log L(\theta; x, Z)] + H(q),$$
where $q$ is an arbitrary probability distribution over the unobserved data $z$ and $H(q)$ is the entropy of the distribution $q$. This function can be written as
$$F(q, \theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z \mid X}(\cdot \mid x; \theta)\big) + \log L(\theta; x),$$
where $p_{Z \mid X}(\cdot \mid x; \theta)$ is the conditional distribution of the unobserved data given the observed data $x$ and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence.
Then the steps in the EM algorithm may be viewed as:
Expectation step: Choose $q$ to maximize $F$:
$$q^{(t)} = \operatorname{arg\,max}_q\ F(q, \theta^{(t)})$$
Maximization step: Choose $\theta$ to maximize $F$:
$$\theta^{(t+1)} = \operatorname{arg\,max}_\theta\ F(q^{(t)}, \theta)$$
== Applications ==
EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics.
In psychometrics, EM is an important tool for estimating item parameters and latent abilities of item response theory models.
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.
The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single-photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.
In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).
EM is also used for data clustering. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.
In the analysis of intertrade waiting times, i.e. the time between subsequent trades in shares of stock at a stock exchange, the EM algorithm has proved to be very useful.
== Filtering and smoothing EM algorithms ==
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
E-step
Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
M-step
Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input single-output system with additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
{\displaystyle {\widehat {\sigma }}_{v}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{(z_{k}-{\widehat {x}}_{k})}^{2},}
where {\displaystyle {\widehat {x}}_{k}} are scalar output estimates calculated by a filter or a smoother from N scalar measurements {\displaystyle z_{k}}. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
{\displaystyle {\widehat {\sigma }}_{w}^{2}={\frac {1}{N}}\sum _{k=1}^{N}{({\widehat {x}}_{k+1}-{\widehat {F}}{\widehat {x}}_{k})}^{2},}
where {\displaystyle {\widehat {x}}_{k}} and {\displaystyle {\widehat {x}}_{k+1}} are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
{\displaystyle {\widehat {F}}={\frac {\sum _{k=1}^{N}{\widehat {x}}_{k+1}{\widehat {x}}_{k}}{\sum _{k=1}^{N}{\widehat {x}}_{k}^{2}}}.}
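Under the stated assumptions (scalar measurements, scalar state estimates from a filter or smoother), these M-step updates are simple averages and a least-squares fit. A minimal NumPy sketch on synthetic data, using the true simulated states as a stand-in for the filter output:

```python
import numpy as np

def em_noise_updates(z, x_hat):
    """One M-step: update the measurement noise variance, process noise
    variance, and AR(1) coefficient from measurements z and state estimates."""
    # Updated measurement noise variance: mean squared innovation.
    sigma_v2 = np.mean((z - x_hat) ** 2)
    # Updated model coefficient: least-squares fit of x_hat[k+1] on x_hat[k].
    F = np.sum(x_hat[1:] * x_hat[:-1]) / np.sum(x_hat[:-1] ** 2)
    # Updated process noise variance: mean squared one-step residual.
    sigma_w2 = np.mean((x_hat[1:] - F * x_hat[:-1]) ** 2)
    return sigma_v2, sigma_w2, F

rng = np.random.default_rng(0)
x = np.zeros(500)
for k in range(499):                       # simulate x_{k+1} = 0.9 x_k + w_k
    x[k + 1] = 0.9 * x[k] + rng.normal(0, 0.1)
z = x + rng.normal(0, 0.2, size=500)       # noisy measurements, sd 0.2
sigma_v2, sigma_w2, F = em_noise_updates(z, x)
```

With these settings the recovered coefficient is close to 0.9 and the measurement noise variance close to 0.04.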
The convergence of parameter estimates such as those above is well studied.
== Variants ==
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.
The parameter-expanded expectation maximization (PX-EM) algorithm often provides a speed-up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps, in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed. ECM can itself be extended into the expectation conditional maximization either (ECME) algorithm.
This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which one seeks only an increase in the objective function F for both the E step and M step, as described in the As a maximization–maximization procedure section. GEM has been further developed for distributed environments, with promising results.
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.
=== α-EM algorithm ===
The Q-function used in the EM algorithm is based on the log likelihood, so the algorithm is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be expressed exactly as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step, and its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as a subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm, α-HMM.
== Relation to variational Bayes methods ==
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can proceed by iterating over each latent variable (now including θ) and optimizing them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
== Geometric interpretation ==
In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
== Examples ==
=== Gaussian mixture ===
Let {\displaystyle \mathbf {x} =(\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{n})} be a sample of {\displaystyle n} independent observations from a mixture of two multivariate normal distributions of dimension {\displaystyle d}, and let {\displaystyle \mathbf {z} =(z_{1},z_{2},\ldots ,z_{n})} be the latent variables that determine the component from which the observation originates.
{\displaystyle X_{i}\mid (Z_{i}=1)\sim {\mathcal {N}}_{d}({\boldsymbol {\mu }}_{1},\Sigma _{1})} and {\displaystyle X_{i}\mid (Z_{i}=2)\sim {\mathcal {N}}_{d}({\boldsymbol {\mu }}_{2},\Sigma _{2}),}
where {\displaystyle \operatorname {P} (Z_{i}=1)=\tau _{1}\,} and {\displaystyle \operatorname {P} (Z_{i}=2)=\tau _{2}=1-\tau _{1}.}
The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:
{\displaystyle \theta ={\big (}{\boldsymbol {\tau }},{\boldsymbol {\mu }}_{1},{\boldsymbol {\mu }}_{2},\Sigma _{1},\Sigma _{2}{\big )},}
where the incomplete-data likelihood function is
{\displaystyle L(\theta ;\mathbf {x} )=\prod _{i=1}^{n}\sum _{j=1}^{2}\tau _{j}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j}),}
and the complete-data likelihood function is
{\displaystyle L(\theta ;\mathbf {x} ,\mathbf {z} )=p(\mathbf {x} ,\mathbf {z} \mid \theta )=\prod _{i=1}^{n}\prod _{j=1}^{2}\ [f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j},\Sigma _{j})\tau _{j}]^{\mathbb {I} (z_{i}=j)},}
or
{\displaystyle L(\theta ;\mathbf {x} ,\mathbf {z} )=\exp \left\{\sum _{i=1}^{n}\sum _{j=1}^{2}\mathbb {I} (z_{i}=j){\big [}\log \tau _{j}-{\tfrac {1}{2}}\log |\Sigma _{j}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})^{\top }\Sigma _{j}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})-{\tfrac {d}{2}}\log(2\pi ){\big ]}\right\},}
where {\displaystyle \mathbb {I} } is an indicator function and {\displaystyle f} is the probability density function of a multivariate normal.
In the last equality, for each i, one indicator {\displaystyle \mathbb {I} (z_{i}=j)} is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
==== E step ====
Given our current estimate of the parameters θ(t), the conditional distribution of the Zi is determined by Bayes' theorem to be the proportional height of the normal density weighted by τ:
{\displaystyle T_{j,i}^{(t)}:=\operatorname {P} (Z_{i}=j\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})={\frac {\tau _{j}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{j}^{(t)},\Sigma _{j}^{(t)})}{\tau _{1}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{1}^{(t)},\Sigma _{1}^{(t)})+\tau _{2}^{(t)}\ f(\mathbf {x} _{i};{\boldsymbol {\mu }}_{2}^{(t)},\Sigma _{2}^{(t)})}}.}
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below).
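The membership probabilities follow directly from Bayes' theorem as above. A minimal NumPy sketch for two components (the parameter values and sample points here are made up for illustration):

```python
import numpy as np

def mvn_pdf(x, mean, cov):
    """Density of a d-variate normal evaluated at each row of x."""
    d = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu) for each row.
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))

def e_step(x, tau, mu, sigma):
    """Membership probabilities T[j, i] = P(Z_i = j | x_i; theta) for a
    two-component Gaussian mixture, via Bayes' theorem."""
    # Unnormalized posterior: prior weight times component density.
    weighted = np.array([tau[j] * mvn_pdf(x, mu[j], sigma[j]) for j in range(2)])
    return weighted / weighted.sum(axis=0)   # normalize over components j

# Tiny illustration: two well-separated components.
x = np.array([[0.0, 0.0], [5.0, 5.0]])
tau = np.array([0.5, 0.5])
mu = [np.zeros(2), np.full(2, 5.0)]
sigma = [np.eye(2), np.eye(2)]
T = e_step(x, tau, mu, sigma)
```

Each column of T sums to one, and each sample is assigned almost entirely to its nearby component.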
This E step corresponds with setting up this function for Q:
{\displaystyle {\begin{aligned}Q(\theta \mid \theta ^{(t)})&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\log L(\theta ;\mathbf {x} ,\mathbf {Z} )]\\&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\log \prod _{i=1}^{n}L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\operatorname {E} _{\mathbf {Z} \mid \mathbf {X} =\mathbf {x} ;\mathbf {\theta } ^{(t)}}[\sum _{i=1}^{n}\log L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\sum _{i=1}^{n}\operatorname {E} _{Z_{i}\mid X_{i}=x_{i};\mathbf {\theta } ^{(t)}}[\log L(\theta ;\mathbf {x} _{i},Z_{i})]\\&=\sum _{i=1}^{n}\sum _{j=1}^{2}P(Z_{i}=j\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})\log L(\theta _{j};\mathbf {x} _{i},j)\\&=\sum _{i=1}^{n}\sum _{j=1}^{2}T_{j,i}^{(t)}{\big [}\log \tau _{j}-{\tfrac {1}{2}}\log |\Sigma _{j}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})^{\top }\Sigma _{j}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{j})-{\tfrac {d}{2}}\log(2\pi ){\big ]}.\end{aligned}}}
The expectation of {\displaystyle \log L(\theta ;\mathbf {x} _{i},Z_{i})} inside the sum is taken with respect to the probability density function {\displaystyle P(Z_{i}\mid X_{i}=\mathbf {x} _{i};\theta ^{(t)})}, which might be different for each {\displaystyle \mathbf {x} _{i}} of the training set. Everything in the E step is known before the step is taken except {\displaystyle T_{j,i}}, which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
==== M step ====
{\displaystyle Q(\theta \mid \theta ^{(t)})} being quadratic in form means that determining the maximizing values of {\displaystyle \theta } is relatively straightforward. Also, {\displaystyle \tau }, {\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})} and {\displaystyle ({\boldsymbol {\mu }}_{2},\Sigma _{2})} may all be maximized independently since they all appear in separate linear terms.
To begin, consider {\displaystyle \tau }, which has the constraint {\displaystyle \tau _{1}+\tau _{2}=1}:
{\displaystyle {\begin{aligned}{\boldsymbol {\tau }}^{(t+1)}&={\underset {\boldsymbol {\tau }}{\operatorname {arg\,max} }}\ Q(\theta \mid \theta ^{(t)})\\&={\underset {\boldsymbol {\tau }}{\operatorname {arg\,max} }}\ \left\{\left[\sum _{i=1}^{n}T_{1,i}^{(t)}\right]\log \tau _{1}+\left[\sum _{i=1}^{n}T_{2,i}^{(t)}\right]\log \tau _{2}\right\}.\end{aligned}}}
This has the same form as the maximum likelihood estimate for the binomial distribution, so
{\displaystyle \tau _{j}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{j,i}^{(t)}}{\sum _{i=1}^{n}(T_{1,i}^{(t)}+T_{2,i}^{(t)})}}={\frac {1}{n}}\sum _{i=1}^{n}T_{j,i}^{(t)}.}
For the next estimates of {\displaystyle ({\boldsymbol {\mu }}_{1},\Sigma _{1})}:
{\displaystyle {\begin{aligned}({\boldsymbol {\mu }}_{1}^{(t+1)},\Sigma _{1}^{(t+1)})&={\underset {{\boldsymbol {\mu }}_{1},\Sigma _{1}}{\operatorname {arg\,max} }}\ Q(\theta \mid \theta ^{(t)})\\&={\underset {{\boldsymbol {\mu }}_{1},\Sigma _{1}}{\operatorname {arg\,max} }}\ \sum _{i=1}^{n}T_{1,i}^{(t)}\left\{-{\tfrac {1}{2}}\log |\Sigma _{1}|-{\tfrac {1}{2}}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1})^{\top }\Sigma _{1}^{-1}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1})\right\}\end{aligned}}.}
This has the same form as a weighted maximum likelihood estimate for a normal distribution, so
{\displaystyle {\boldsymbol {\mu }}_{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}\mathbf {x} _{i}}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}}
and
{\displaystyle \Sigma _{1}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{1,i}^{(t)}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{1}^{(t+1)})^{\top }}{\sum _{i=1}^{n}T_{1,i}^{(t)}}}}
and, by symmetry,
{\displaystyle {\boldsymbol {\mu }}_{2}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{2,i}^{(t)}\mathbf {x} _{i}}{\sum _{i=1}^{n}T_{2,i}^{(t)}}}}
and
{\displaystyle \Sigma _{2}^{(t+1)}={\frac {\sum _{i=1}^{n}T_{2,i}^{(t)}(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{2}^{(t+1)})(\mathbf {x} _{i}-{\boldsymbol {\mu }}_{2}^{(t+1)})^{\top }}{\sum _{i=1}^{n}T_{2,i}^{(t)}}}.}
==== Termination ====
Conclude the iterative process if
{\displaystyle E_{Z\mid \theta ^{(t)},\mathbf {x} }[\log L(\theta ^{(t)};\mathbf {x} ,\mathbf {Z} )]\leq E_{Z\mid \theta ^{(t-1)},\mathbf {x} }[\log L(\theta ^{(t-1)};\mathbf {x} ,\mathbf {Z} )]+\varepsilon }
for {\displaystyle \varepsilon } below some preset threshold.
==== Generalization ====
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
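Assembled together, the E and M steps above give the following self-contained sketch for a two-component mixture, shown in one dimension for brevity (the updates are the scalar versions of the formulas above; the initialization and fixed iteration count are arbitrary choices rather than part of the algorithm):

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture: E step computes the
    membership probabilities T[j, i]; M step applies the closed-form
    weighted updates for tau, mu, and the variances."""
    mu = np.array([x.min(), x.max()])        # crude initialization from the data
    var = np.array([x.var(), x.var()])
    tau = np.array([0.5, 0.5])
    for _ in range(iters):
        # E step: posterior membership probabilities via Bayes' theorem.
        dens = np.array([tau[j] / np.sqrt(2 * np.pi * var[j])
                         * np.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                         for j in range(2)])
        T = dens / dens.sum(axis=0)
        # M step: mixing weights, weighted means, weighted variances.
        Nj = T.sum(axis=1)
        tau = Nj / len(x)
        mu = (T @ x) / Nj
        var = np.array([(T[j] * (x - mu[j]) ** 2).sum() / Nj[j]
                        for j in range(2)])
    return tau, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 700)])
tau, mu, var = em_gmm_1d(x)
```

On this synthetic data the recovered means land near −3 and 3, with mixing weights near 0.3 and 0.7.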
=== Truncated and censored regression ===
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model. Special cases of this model include censored or truncated observations from one normal distribution.
== Alternatives ==
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.
== See also ==
mixture distribution
compound distribution
density estimation
Principal component analysis
total absorption spectroscopy
The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm.
== References ==
== Further reading ==
Hogg, Robert; McKean, Joseph; Craig, Allen (2005). Introduction to Mathematical Statistics. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 359–364.
Dellaert, Frank (February 2002). The Expectation Maximization Algorithm (PDF) (Technical Report number GIT-GVU-02-20). Georgia Tech College of Computing. gives an easier explanation of EM algorithm as to lowerbound maximization.
Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2.
Gupta, M. R.; Chen, Y. (2010). "Theory and Use of the EM Algorithm". Foundations and Trends in Signal Processing. 4 (3): 223–296. CiteSeerX 10.1.1.219.6830. doi:10.1561/2000000034. A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
Bilmes, Jeff (1997). A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models (Technical Report TR-97-021). International Computer Science Institute. includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
McLachlan, Geoffrey J.; Krishnan, Thriyambakam (2008). The EM Algorithm and Extensions (2nd ed.). Hoboken: Wiley. ISBN 978-0-471-20170-0.
== External links ==
Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
Class hierarchy in C++ (GPL) including Gaussian Mixtures
The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft k-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs (chapters).
The Expectation Maximization Algorithm: A short tutorial, A self-contained derivation of the EM Algorithm by Sean Borman.
The EM Algorithm, by Xiaojin Zhu.
EM algorithm and variants: an informal tutorial by Alexis Roche. A concise and very clear description of EM and many interesting variants.
Self-play is a technique for improving the performance of reinforcement learning agents. Intuitively, agents learn to improve their performance by playing "against themselves".
== Definition and motivation ==
In multi-agent reinforcement learning experiments, researchers try to optimize the performance of a learning agent on a given task, in cooperation or competition with one or more agents. These agents learn by trial-and-error, and researchers may choose to have the learning algorithm play the role of two or more of the different agents. When successfully executed, this technique has a double advantage:
It provides a straightforward way to determine the actions of the other agents, resulting in a meaningful challenge.
It increases the amount of experience that can be used to improve the policy, by a factor of two or more, since the viewpoints of each of the different agents can be used for learning.
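As a toy illustration of these two advantages (not a method from any of the systems discussed below), consider fictitious play in matching pennies: a single best-response rule controls both seats and learns from both players' histories, and the empirical strategies approach the mixed equilibrium (1/2, 1/2).

```python
def fictitious_self_play(rounds=5000):
    """Self-play via fictitious play in matching pennies: player 0 wants the
    coins to match, player 1 wants a mismatch. The same best-response rule
    plays both seats against the opponent's empirical action frequencies."""
    counts = [[1, 1], [1, 1]]   # counts[p][a]: times player p has played action a
    for _ in range(rounds):
        # Player 0 best-responds by copying player 1's more frequent action.
        a0 = 0 if counts[1][0] >= counts[1][1] else 1
        # Player 1 best-responds by avoiding player 0's more frequent action.
        a1 = 1 if counts[0][0] >= counts[0][1] else 0
        counts[0][a0] += 1
        counts[1][a1] += 1
    # Empirical frequencies of action 0 for each player.
    freq0 = counts[0][0] / sum(counts[0])
    freq1 = counts[1][0] / sum(counts[1])
    return freq0, freq1

freq0, freq1 = fictitious_self_play()
```

Both empirical frequencies end up near 1/2, the unique mixed equilibrium of the game.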
Czarnecki et al. argue that most of the games that people play for fun are "Games of Skill", meaning games whose space of all possible strategies looks like a spinning top. In more detail, we can partition the space of strategies into sets {\displaystyle L_{1},L_{2},...,L_{n}} such that, for any {\displaystyle i<j,\pi _{i}\in L_{i},\pi _{j}\in L_{j}}, the strategy {\displaystyle \pi _{j}} beats the strategy {\displaystyle \pi _{i}}. Then, in population-based self-play, if the population is larger than {\displaystyle \max _{i}|L_{i}|}, the algorithm converges to the best possible strategy.
== Usage ==
Self-play is used by the AlphaZero program to improve its performance in the games of chess, shogi and go.
Self-play is also used to train the Cicero AI system to outperform humans at the game of Diplomacy. The technique is also used in training the DeepNash system to play the game Stratego.
== Connections to other disciplines ==
Self-play has been compared to the epistemological concept of tabula rasa that describes the way that humans acquire knowledge from a "blank slate".
== Further reading ==
DiGiovanni, Anthony; Zell, Ethan; et al. (2021). "Survey of Self-Play in Reinforcement Learning". arXiv:2107.02850 [cs.GT].
== References ==
CURE (Clustering Using REpresentatives) is an efficient data clustering algorithm for large databases. Compared with K-means clustering it is more robust to outliers and able to identify clusters having non-spherical shapes and size variances.
== Drawbacks of traditional algorithms ==
The popular K-means clustering algorithm minimizes the sum of squared errors criterion:
{\displaystyle E=\sum _{i=1}^{k}\sum _{p\in C_{i}}(p-m_{i})^{2},}
where {\displaystyle m_{i}} is the mean of the points in cluster {\displaystyle C_{i}}.
Given large differences in sizes or geometries of different clusters, the square error method could split the large clusters to minimize the square error, which is not always correct. Hierarchical clustering algorithms share these problems, as none of the distance measures between clusters ({\displaystyle d_{min},d_{mean}}) tends to work well with different cluster shapes. The running time is also high when n is large.
The problem with the BIRCH algorithm is that once the clusters are generated after step 3, it uses centroids of the clusters and assigns each data point to the cluster with the closest centroid. Using only the centroid to redistribute the data has problems when clusters lack uniform sizes and shapes.
== CURE clustering algorithm ==
To avoid the problems with non-uniform sized or shaped clusters, CURE employs a hierarchical clustering algorithm that adopts a middle ground between the centroid based and all point extremes. In CURE, a constant number c of well scattered points of a cluster are chosen and they are shrunk towards the centroid of the cluster by a fraction α. The scattered points after shrinking are used as representatives of the cluster. The clusters with the closest pair of representatives are the clusters that are merged at each step of CURE's hierarchical clustering algorithm. This enables CURE to correctly identify the clusters and makes it less sensitive to outliers.
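The selection and shrinking of representative points can be sketched as follows. A greedy farthest-point heuristic is assumed here for choosing the c well-scattered points; the original paper's exact selection procedure may differ in details:

```python
import numpy as np

def cure_representatives(points, c=4, alpha=0.5):
    """Pick c well-scattered points of one cluster and shrink them toward
    the centroid by fraction alpha, as in CURE."""
    centroid = points.mean(axis=0)
    # Start from the point farthest from the centroid, then repeatedly add
    # the point farthest from the already-chosen representatives.
    reps = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    while len(reps) < min(c, len(points)):
        dists = np.min([np.linalg.norm(points - r, axis=1) for r in reps],
                       axis=0)
        reps.append(points[np.argmax(dists)])
    reps = np.array(reps)
    # Shrink each representative toward the centroid: rep + alpha*(centroid - rep).
    return reps + alpha * (centroid - reps)

rng = np.random.default_rng(2)
cluster = rng.normal(0, 1, size=(200, 2))
reps = cure_representatives(cluster, c=4, alpha=0.5)
```

After shrinking, every representative lies at most (1 − α) times its original distance from the centroid, which is what dampens the influence of outliers.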
Running time is {\displaystyle O(n^{2}\log n)}, making it rather expensive, and space complexity is {\displaystyle O(n)}.
The algorithm cannot be directly applied to large databases because of the high runtime complexity. Enhancements address this requirement.
Random sampling: random sampling supports large data sets. Generally, the random sample fits in main memory. Random sampling involves a trade-off between accuracy and efficiency.
Partitioning: The basic idea is to partition the sample space into p partitions. Each partition contains n/p elements. The first pass partially clusters each partition until the final number of clusters reduces to n/pq for some constant q ≥ 1. A second clustering pass on n/q partially clusters partitions. For the second pass only the representative points are stored since the merge procedure only requires representative points of previous clusters before computing the representative points for the merged cluster. Partitioning the input reduces the execution times.
Labeling data on disk: Given only representative points for k clusters, the remaining data points are also assigned to the clusters. For this a fraction of randomly selected representative points for each of the k clusters is chosen and data point is assigned to the cluster containing the representative point closest to it.
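The labeling pass can be sketched as follows; for simplicity, all representatives of each cluster are used here rather than a random fraction, and the representative sets are hypothetical:

```python
import numpy as np

def label_points(points, reps_per_cluster):
    """Assign each point to the cluster whose nearest representative point
    is closest, as in the disk-labeling pass described above."""
    labels = []
    for p in points:
        # Distance from p to each cluster = distance to its nearest representative.
        d = [min(np.linalg.norm(p - r) for r in reps)
             for reps in reps_per_cluster]
        labels.append(int(np.argmin(d)))
    return np.array(labels)

reps = [np.array([[0.0, 0.0], [1.0, 0.0]]),      # cluster 0 representatives
        np.array([[10.0, 10.0], [9.0, 10.0]])]   # cluster 1 representatives
pts = np.array([[0.5, 0.2], [9.5, 9.9]])
labels = label_points(pts, reps)
```

Each point lands in the cluster with the closest representative: the first point in cluster 0, the second in cluster 1.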
== Pseudocode ==
CURE (no. of points,k)
Input: A set of points S
Output: k clusters
For every cluster u (each input point), in u.mean and u.rep store the mean of the points in the cluster and a set of c representative points of the cluster (initially c = 1 since each cluster has one data point). Also u.closest stores the cluster closest to u.
All the input points are inserted into a k-d tree T
Treat each input point as a separate cluster, compute u.closest for each u and then insert each cluster into the heap Q. (Clusters are arranged in increasing order of distances between u and u.closest.)
While size (Q) > k
Remove the top element of Q (say u) and merge it with its closest cluster u.closest (say v) and compute the new representative points for the merged cluster w.
Remove u and v from T and Q.
For all the clusters x in Q, update x.closest and relocate x
insert w into Q
repeat
== Availability ==
pyclustering open source library includes a Python and C++ implementation of CURE algorithm.
== See also ==
k-means clustering
BFR algorithm
== References ==
Guha, Sudipto; Rastogi, Rajeev; Shim, Kyuseok (1998). "CURE: An Efficient Clustering Algorithm for Large Databases" (PDF). Information Systems. 26 (1): 35–58. doi:10.1016/S0306-4379(01)00008-4.
Kogan, Jacob; Nicholas, Charles K.; Teboulle, M. (2006). Grouping multidimensional data: recent advances in clustering. Springer. ISBN 978-3-540-28348-5.
Theodoridis, Sergios; Koutroumbas, Konstantinos (2006). Pattern recognition. Academic Press. pp. 572–574. ISBN 978-0-12-369531-4.
A generative adversarial network (GAN) is a class of machine learning frameworks and a prominent framework for approaching generative artificial intelligence. The concept was initially developed by Ian Goodfellow and his colleagues in June 2014. In a GAN, two neural networks compete with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.
Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.
The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that can tell how "realistic" the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner.
GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.
== Definition ==
=== Mathematical ===
The original GAN is defined as the following game:
Each probability space {\displaystyle (\Omega ,\mu _{\text{ref}})} defines a GAN game.
There are 2 players: generator and discriminator.
The generator's strategy set is {\displaystyle {\mathcal {P}}(\Omega )}, the set of all probability measures {\displaystyle \mu _{G}} on {\displaystyle \Omega }.
The discriminator's strategy set is the set of Markov kernels {\displaystyle \mu _{D}:\Omega \to {\mathcal {P}}[0,1]}, where {\displaystyle {\mathcal {P}}[0,1]} is the set of probability measures on {\displaystyle [0,1]}.
The GAN game is a zero-sum game, with objective function
{\displaystyle L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\operatorname {E} _{x\sim \mu _{G},y\sim \mu _{D}(x)}[\ln(1-y)].}
The generator aims to minimize the objective, and the discriminator aims to maximize the objective.
The generator's task is to approach {\displaystyle \mu _{G}\approx \mu _{\text{ref}}}, that is, to match its own output distribution as closely as possible to the reference distribution. The discriminator's task is to output a value close to 1 when the input appears to come from the reference distribution, and a value close to 0 when the input looks like it came from the generator distribution.
=== In practice ===
The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution)).
A known dataset serves as the initial training data for the discriminator. Training involves presenting it with samples from the training dataset until it achieves acceptable accuracy. The generator is trained based on whether it succeeds in fooling the discriminator. Typically, the generator is seeded with randomized input that is sampled from a predefined latent space (e.g. a multivariate normal distribution). Thereafter, candidates synthesized by the generator are evaluated by the discriminator. Independent backpropagation procedures are applied to both networks so that the generator produces better samples, while the discriminator becomes more skilled at flagging synthetic samples. When used for image generation, the generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.
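The alternating updates can be sketched with a deliberately tiny model (not any particular published implementation): a one-dimensional generator x = a·z + b and a logistic discriminator, with hand-derived gradients and the non-saturating generator objective. This is a structural illustration only; the hyperparameters are arbitrary and nothing here is tuned for convergence.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(s):
    # Clipped for numerical stability.
    return 1.0 / (1.0 + np.exp(-np.clip(s, -30, 30)))

a, b = 1.0, 0.0    # generator parameters: x_fake = a*z + b, z ~ N(0, 1)
w, c = 0.1, 0.0    # discriminator parameters: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)
    x_real = rng.normal(2.0, 0.5, size=64)   # target data distribution
    x_fake = a * z + b
    # --- Discriminator: gradient ascent on ln D(real) + ln(1 - D(fake)) ---
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # --- Generator: ascent on the non-saturating objective ln D(fake) ---
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(size=1000) + b      # draw from the trained generator
```

The loop mirrors the description above: the discriminator is pushed to separate real from fake, the generator is pushed toward regions the discriminator currently scores as real.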
=== Relation to other statistical machine learning methods ===
GANs are implicit generative models, which means that they do not explicitly model the likelihood function nor provide a means for finding the latent variable corresponding to a given sample, unlike alternatives such as flow-based generative models.
Compared to fully visible belief networks such as WaveNet and PixelRNN and autoregressive models in general, GANs can generate one complete sample in one pass, rather than multiple passes through the network.
Compared to Boltzmann machines and linear ICA, there is no restriction on the type of function used by the network.
Since neural networks are universal approximators, GANs are asymptotically consistent. Variational autoencoders might be universal approximators, but this had not been proven as of 2017.
== Mathematical properties ==
=== Measure-theoretic considerations ===
This section provides some of the mathematical theory behind these methods.
In modern probability theory based on measure theory, a probability space also needs to be equipped with a σ-algebra. As a result, a more rigorous definition of the GAN game would make the following changes: Each probability space {\displaystyle (\Omega ,{\mathcal {B}},\mu _{\text{ref}})} defines a GAN game.
The generator's strategy set is {\displaystyle {\mathcal {P}}(\Omega ,{\mathcal {B}})}, the set of all probability measures {\displaystyle \mu _{G}} on the measurable space {\displaystyle (\Omega ,{\mathcal {B}})}.
The discriminator's strategy set is the set of Markov kernels {\displaystyle \mu _{D}:(\Omega ,{\mathcal {B}})\to {\mathcal {P}}([0,1],{\mathcal {B}}([0,1]))}, where {\displaystyle {\mathcal {B}}([0,1])} is the Borel σ-algebra on {\displaystyle [0,1]}. Since issues of measurability never arise in practice, these will not concern us further.
=== Choice of the strategy set ===
In the most generic version of the GAN game described above, the strategy set for the discriminator contains all Markov kernels {\displaystyle \mu _{D}:\Omega \to {\mathcal {P}}[0,1]}, and the strategy set for the generator contains arbitrary probability distributions {\displaystyle \mu _{G}} on {\displaystyle \Omega }.
However, as shown below, the optimal discriminator strategy against any {\displaystyle \mu _{G}} is deterministic, so there is no loss of generality in restricting the discriminator's strategies to deterministic functions {\displaystyle D:\Omega \to [0,1]}. In most applications, {\displaystyle D} is a deep neural network function.
As for the generator, while {\displaystyle \mu _{G}} could theoretically be any computable probability distribution, in practice, it is usually implemented as a pushforward: {\displaystyle \mu _{G}=\mu _{Z}\circ G^{-1}}. That is, start with a random variable {\displaystyle z\sim \mu _{Z}}, where {\displaystyle \mu _{Z}} is a probability distribution that is easy to sample (such as the uniform distribution, or the Gaussian distribution), then define a function {\displaystyle G:\Omega _{Z}\to \Omega }. Then the distribution {\displaystyle \mu _{G}} is the distribution of {\displaystyle G(z)}.
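The pushforward construction can be seen directly in code: sampling the latent distribution and applying a deterministic function yields samples of the pushforward measure. The choice of {\displaystyle G=\tanh } below is an arbitrary illustration.

```python
import numpy as np

# Sketch of the pushforward mu_G = mu_Z o G^{-1}: sample an easy latent
# distribution mu_Z, apply a deterministic G, and the outputs are samples
# of mu_G. G = tanh is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=10_000)   # z ~ mu_Z, a standard Gaussian
samples = np.tanh(z)                    # samples of mu_G, supported on (-1, 1)
```

Note that the support of {\displaystyle \mu _{G}} is whatever {\displaystyle G} maps the latent space onto; here, the open interval (-1, 1).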
Consequently, the generator's strategy is usually defined as just {\displaystyle G}, leaving {\displaystyle z\sim \mu _{Z}} implicit. In this formalism, the GAN game objective is {\displaystyle L(G,D):=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z)))].}
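The objective is a sum of two expectations, so it can be estimated by Monte Carlo for any fixed pair of strategies. The reference distribution, generator, and discriminator below are arbitrary toy choices made for illustration.

```python
import numpy as np

# Monte Carlo estimate of L(G, D) = E_{x~mu_ref}[ln D(x)] + E_{z~mu_Z}[ln(1 - D(G(z)))].
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

x_real = rng.normal(1.0, 1.0, size=50_000)   # x ~ mu_ref (toy choice)
z = rng.normal(0.0, 1.0, size=50_000)        # z ~ mu_Z
fake = z - 1.0                               # G(z): a toy generator shifting the noise

D = lambda x: sigmoid(x)                     # a fixed toy discriminator
L = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(fake)))
```

Since {\displaystyle D(x)\in (0,1)}, both logarithms are negative, so the estimate is always below zero.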
=== Generative reparametrization ===
The GAN architecture has two main components. One is casting optimization into a game, of form {\displaystyle \min _{G}\max _{D}L(G,D)}, which is different from the usual kind of optimization, of form {\displaystyle \min _{\theta }L(\theta )}. The other is the decomposition of {\displaystyle \mu _{G}} into {\displaystyle \mu _{Z}\circ G^{-1}}, which can be understood as a reparametrization trick.
To see its significance, one must compare GAN with previous methods for learning generative models, which were plagued with "intractable probabilistic computations that arise in maximum likelihood estimation and related strategies".
At the same time, Kingma and Welling and Rezende et al. developed the same idea of reparametrization into a general stochastic backpropagation method. Among its first applications was the variational autoencoder.
=== Move order and strategic equilibria ===
In the original paper, as well as most subsequent papers, it is usually assumed that the generator moves first, and the discriminator moves second, thus giving the following minimax game: {\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D}):=\operatorname {E} _{x\sim \mu _{\text{ref}},y\sim \mu _{D}(x)}[\ln y]+\operatorname {E} _{x\sim \mu _{G},y\sim \mu _{D}(x)}[\ln(1-y)].}
If both the generator's and the discriminator's strategy sets are spanned by a finite number of strategies, then by the minimax theorem, {\displaystyle \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D})=\max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D})} that is, the move order does not matter.
However, since the strategy sets are both not finitely spanned, the minimax theorem does not apply, and the idea of an "equilibrium" becomes delicate. To wit, there are the following different concepts of equilibrium:
Equilibrium when the generator moves first, and the discriminator moves second: {\displaystyle {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}\max _{\mu _{D}}L(\mu _{G},\mu _{D}),\quad {\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}L({\hat {\mu }}_{G},\mu _{D})}
Equilibrium when the discriminator moves first, and the generator moves second: {\displaystyle {\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}\min _{\mu _{G}}L(\mu _{G},\mu _{D}),\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}L(\mu _{G},{\hat {\mu }}_{D})}
Nash equilibrium {\displaystyle ({\hat {\mu }}_{D},{\hat {\mu }}_{G})}, which is stable under simultaneous moves: {\displaystyle {\hat {\mu }}_{D}\in \arg \max _{\mu _{D}}L({\hat {\mu }}_{G},\mu _{D}),\quad {\hat {\mu }}_{G}\in \arg \min _{\mu _{G}}L(\mu _{G},{\hat {\mu }}_{D})}
For general games, these equilibria do not have to agree, or even to exist. For the original GAN game, these equilibria all exist, and are all equal. However, for more general GAN games, these do not necessarily exist, or agree.
=== Main theorems for GAN game ===
The original GAN paper proved the following two theorems:
Interpretation: For any fixed generator strategy {\displaystyle \mu _{G}}, the optimal discriminator keeps track of the likelihood ratio between the reference distribution and the generator distribution: {\displaystyle {\frac {D(x)}{1-D(x)}}={\frac {d\mu _{\text{ref}}}{d\mu _{G}}}(x)={\frac {\mu _{\text{ref}}(dx)}{\mu _{G}(dx)}};\quad D(x)=\sigma (\ln \mu _{\text{ref}}(dx)-\ln \mu _{G}(dx))} where {\displaystyle \sigma } is the logistic function.
In particular, if the prior probability for an image {\displaystyle x} to come from the reference distribution is equal to {\displaystyle {\frac {1}{2}}}, then {\displaystyle D(x)} is just the posterior probability that {\displaystyle x} came from the reference distribution: {\displaystyle D(x)=\Pr(x{\text{ came from reference distribution}}\mid x).}
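The optimal-discriminator formula can be checked numerically against two explicit densities. The unit-variance Gaussians standing in for {\displaystyle \mu _{\text{ref}}} and {\displaystyle \mu _{G}} below are toy choices.

```python
import numpy as np

# Numerical check of the optimal discriminator D = p_ref / (p_ref + p_gen),
# its likelihood-ratio form, and its logistic form, on a grid of points.
x = np.linspace(-5.0, 5.0, 201)
p_ref = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)   # density of mu_ref
p_gen = np.exp(-0.5 * (x + 1.0) ** 2) / np.sqrt(2 * np.pi)   # density of mu_G

D = p_ref / (p_ref + p_gen)                 # optimal discriminator for fixed mu_G
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

ratio = D / (1.0 - D)                        # should equal p_ref / p_gen
D_from_logits = sigmoid(np.log(p_ref) - np.log(p_gen))   # should equal D
```

Both identities hold pointwise, confirming that the discriminator's output is a monotone transform of the likelihood ratio.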
== Training and evaluating GAN ==
=== Training ===
==== Unstable convergence ====
While the GAN game has a unique global equilibrium point when both the generator and discriminator have access to their entire strategy sets, the equilibrium is no longer guaranteed when they have a restricted strategy set.
In practice, the generator has access only to measures of the form {\displaystyle \mu _{Z}\circ G_{\theta }^{-1}}, where {\displaystyle G_{\theta }} is a function computed by a neural network with parameters {\displaystyle \theta }, and {\displaystyle \mu _{Z}} is an easily sampled distribution, such as the uniform or normal distribution. Similarly, the discriminator has access only to functions of the form {\displaystyle D_{\zeta }}, a function computed by a neural network with parameters {\displaystyle \zeta }. These restricted strategy sets take up a vanishingly small proportion of their entire strategy sets.
Further, even if an equilibrium still exists, it can only be found by searching in the high-dimensional space of all possible neural network functions. The standard strategy of using gradient descent to find the equilibrium often does not work for GAN, and often the game "collapses" into one of several failure modes. To improve the convergence stability, some training strategies start with an easier task, such as generating low-resolution images or simple images (one object with uniform background), and gradually increase the difficulty of the task during training. This essentially translates to applying a curriculum learning scheme.
==== Mode collapse ====
GANs often suffer from mode collapse where they fail to generalize properly, missing entire modes from the input data. For example, a GAN trained on the MNIST dataset containing many samples of each digit might only generate pictures of digit 0. This was termed "the Helvetica scenario".
One way this can happen is if the generator learns too fast compared to the discriminator. If the discriminator {\displaystyle D} is held constant, then the optimal generator would only output elements of {\displaystyle \arg \max _{x}D(x)}. So for example, if during GAN training for generating the MNIST dataset, for a few epochs, the discriminator somehow prefers the digit 0 slightly more than other digits, the generator may seize the opportunity to generate only digit 0, then be unable to escape the local minimum after the discriminator improves.
Some researchers perceive the root problem to be a weak discriminative network that fails to notice the pattern of omission, while others assign blame to a bad choice of objective function. Many solutions have been proposed, but it is still an open problem.
Even the state-of-the-art architecture, BigGAN (2019), could not avoid mode collapse. The authors resorted to "allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results".
==== Two time-scale update rule ====
The two time-scale update rule (TTUR) is proposed to make GAN convergence more stable by making the learning rate of the generator lower than that of the discriminator. The authors argued that the generator should move slower than the discriminator, so that it does not "drive the discriminator steadily into new regions without capturing its gathered information".
They proved that a general class of games that included the GAN game, when trained under TTUR, "converges under mild assumptions to a stationary local Nash equilibrium".
They also proposed using the Adam stochastic optimization to avoid mode collapse, as well as the Fréchet inception distance for evaluating GAN performances.
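The two time-scale structure can be sketched on a toy smooth game. The game below and its learning rates are illustrative assumptions; TTUR only prescribes that the generator's rate be smaller than the discriminator's.

```python
# Structural sketch of TTUR on the toy game
#   min_g max_d  L(g, d) = g*d - 0.5*d**2 + 0.5*(g - 2)**2,
# whose unique local Nash equilibrium is (g, d) = (1, 1):
# setting dL/dd = g - d = 0 and dL/dg = d + g - 2 = 0 gives g = d = 1.
g, d = 0.0, 0.0
lr_g, lr_d = 0.01, 0.04          # generator moves on the slower time scale

for _ in range(5000):
    d += lr_d * (g - d)          # discriminator ascends dL/dd
    g -= lr_g * (d + g - 2.0)    # generator descends dL/dg
```

With the slower generator, the simultaneous gradient updates spiral into the stationary point rather than the generator outrunning the discriminator's information.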
==== Vanishing gradient ====
Conversely, if the discriminator learns too fast compared to the generator, then the discriminator could almost perfectly distinguish {\displaystyle \mu _{G_{\theta }},\mu _{\text{ref}}}. In such a case, the generator {\displaystyle G_{\theta }} could be stuck with a very high loss no matter which direction it changes its {\displaystyle \theta }, meaning that the gradient {\displaystyle \nabla _{\theta }L(G_{\theta },D_{\zeta })} would be close to zero. In such a case, the generator cannot learn, a case of the vanishing gradient problem.
Intuitively speaking, the discriminator is too good, and since the generator cannot take any small step (only small steps are considered in gradient descent) to improve its payoff, it does not even try.
One important method for solving this problem is the Wasserstein GAN.
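The saturation is easy to see numerically. Writing {\displaystyle D=\sigma (s)} for the discriminator logit {\displaystyle s}, the generator loss on a fake sample is {\displaystyle \ln(1-\sigma (s))}, whose derivative in {\displaystyle s} is {\displaystyle -\sigma (s)}; when the discriminator confidently rejects a fake ({\displaystyle s} very negative), this gradient all but disappears.

```python
import numpy as np

# Gradient of the generator loss ln(1 - sigmoid(s)) with respect to the
# discriminator logit s is -sigmoid(s): it vanishes as s -> -infinity.
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

grad_at_uncertain = -sigmoid(0.0)     # D = 0.5: gradient -0.5
grad_at_confident = -sigmoid(-10.0)   # D ~ 4.5e-5: gradient ~ -4.5e-5
```

A near-perfect discriminator thus starves the generator of learning signal, whereas an uncertain one provides a usable gradient.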
=== Evaluation ===
GANs are usually evaluated by Inception score (IS), which measures how varied the generator's outputs are (as classified by an image classifier, usually Inception-v3), or Fréchet inception distance (FID), which measures how similar the generator's outputs are to a reference set (as featurized by a learned image featurizer, such as Inception-v3 without its final layer). Many papers that propose new GAN architectures for image generation report how their architectures beat the state of the art on FID or IS.
Another evaluation method is the Learned Perceptual Image Patch Similarity (LPIPS), which starts with a learned image featurizer {\displaystyle f_{\theta }:{\text{Image}}\to \mathbb {R} ^{n}}, and finetunes it by supervised learning on a set of {\displaystyle (x,x',\operatorname {perceptual~difference} (x,x'))}, where {\displaystyle x} is an image, {\displaystyle x'} is a perturbed version of it, and {\displaystyle \operatorname {perceptual~difference} (x,x')} is how much they differ, as reported by human subjects. The model is finetuned so that it can approximate {\displaystyle \|f_{\theta }(x)-f_{\theta }(x')\|\approx \operatorname {perceptual~difference} (x,x')}. This finetuned model is then used to define {\displaystyle \operatorname {LPIPS} (x,x'):=\|f_{\theta }(x)-f_{\theta }(x')\|}.
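The definition reduces to a feature-space distance once a featurizer is fixed. The random linear projection below is a stand-in assumption for the finetuned deep featurizer used in practice.

```python
import numpy as np

# Sketch of LPIPS(x, x') = ||f_theta(x) - f_theta(x')||, with a hypothetical
# featurizer (a fixed random linear map) in place of a finetuned network.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))                  # hypothetical featurizer weights

def f_theta(x):
    """Map a flattened 64-pixel 'image' to a 16-dim feature vector."""
    return W @ x

def lpips(x, x_prime):
    return np.linalg.norm(f_theta(x) - f_theta(x_prime))

x = rng.normal(size=64)
x_noisy = x + 0.1 * rng.normal(size=64)        # a perturbed version of x
```

Being a norm of a feature difference, the quantity is zero on identical inputs and symmetric in its arguments.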
Other evaluation methods are reviewed in the literature.
== Variants ==
There is a veritable zoo of GAN variants. Some of the most prominent are as follows:
=== Conditional GAN ===
Conditional GANs are similar to standard GANs except they allow the model to conditionally generate samples based on additional information. For example, if we want to generate a cat face given a dog picture, we could use a conditional GAN.
The generator in a GAN game generates {\displaystyle \mu _{G}}, a probability distribution on the probability space {\displaystyle \Omega }. This leads to the idea of a conditional GAN, where instead of generating one probability distribution on {\displaystyle \Omega }, the generator generates a different probability distribution {\displaystyle \mu _{G}(c)} on {\displaystyle \Omega }, for each given class label {\displaystyle c}.
For example, for generating images that look like ImageNet, the generator should be able to generate a picture of cat when given the class label "cat".
In the original paper, the authors noted that GAN can be trivially extended to conditional GAN by providing the labels to both the generator and the discriminator.
Concretely, the conditional GAN game is just the GAN game with class labels provided:
{\displaystyle L(\mu _{G},D):=\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{\text{ref}}(c)}[\ln D(x,c)]+\operatorname {E} _{c\sim \mu _{C},x\sim \mu _{G}(c)}[\ln(1-D(x,c))]} where {\displaystyle \mu _{C}} is a probability distribution over classes, {\displaystyle \mu _{\text{ref}}(c)} is the probability distribution of real images of class {\displaystyle c}, and {\displaystyle \mu _{G}(c)} the probability distribution of images generated by the generator when given class label {\displaystyle c}.
In 2017, a conditional GAN learned to generate 1000 image classes of ImageNet.
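The key structural change is that one generator function produces a different output distribution for each label. The 1-D "classes" and per-class means below are arbitrary illustrative values.

```python
import numpy as np

# Sketch of a class-conditional generator: G(z, c) yields a different
# distribution mu_G(c) for each class label c (hypothetical toy classes).
rng = np.random.default_rng(0)
class_means = {"cat": -2.0, "dog": 2.0}        # hypothetical 1-D "image" classes

def G(z, c):
    """Conditional generator: shift the latent noise by a class-dependent mean."""
    return class_means[c] + z

z = rng.normal(0.0, 1.0, size=5_000)
cats = G(z, "cat")    # samples of mu_G("cat")
dogs = G(z, "dog")    # samples of mu_G("dog")
```

The same latent noise, routed through different labels, lands in different parts of the output space, which is what the conditional discriminator {\displaystyle D(x,c)} is asked to judge.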
=== GANs with alternative architectures ===
The GAN game is a general framework and can be run with any reasonable parametrization of the generator {\displaystyle G} and discriminator {\displaystyle D}. In the original paper, the authors demonstrated it using multilayer perceptron networks and convolutional neural networks. Many alternative architectures have been tried.
Deep convolutional GAN (DCGAN): For both generator and discriminator, uses only deep networks consisting entirely of convolution-deconvolution layers, that is, fully convolutional networks.
Self-attention GAN (SAGAN): Starts with the DCGAN, then adds residually-connected standard self-attention modules to the generator and discriminator.
Variational autoencoder GAN (VAEGAN): Uses a variational autoencoder (VAE) for the generator.
Transformer GAN (TransGAN): Uses the pure transformer architecture for both the generator and discriminator, entirely devoid of convolution-deconvolution layers.
Flow-GAN: Uses flow-based generative model for the generator, allowing efficient computation of the likelihood function.
=== GANs with alternative objectives ===
Many GAN variants are merely obtained by changing the loss functions for the generator and discriminator.
Original GAN:
We recast the original GAN objective into a form more convenient for comparison:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
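For a fixed generator, the discriminator's loss decomposes pointwise: at each {\displaystyle x} with reference density {\displaystyle p_{\text{ref}}(x)} and generator density {\displaystyle p_{G}(x)}, it minimizes {\displaystyle -p_{\text{ref}}\ln D-p_{G}\ln(1-D)} over {\displaystyle D\in (0,1)}, whose minimizer is {\displaystyle p_{\text{ref}}/(p_{\text{ref}}+p_{G})}. A toy grid search (with illustrative density values) confirms this:

```python
import numpy as np

# At a point x where the reference density is 0.7 and the generator density is
# 0.3 (toy values), the discriminator's pointwise loss -0.7*ln(D) - 0.3*ln(1-D)
# is minimized at D = 0.7 / (0.7 + 0.3) = 0.7.
p_ref, p_gen = 0.7, 0.3
D_grid = np.linspace(0.001, 0.999, 999)
loss = -p_ref * np.log(D_grid) - p_gen * np.log(1.0 - D_grid)
D_opt = D_grid[np.argmin(loss)]
```

This is exactly the optimal-discriminator theorem from the section above, recovered by brute force.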
Original GAN, non-saturating loss:
This objective for the generator was recommended in the original paper for faster convergence. {\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[\ln D(x)]} The effect of using this objective is analyzed in Section 2.2.2 of Arjovsky et al.
Original GAN, maximum likelihood:
{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[({\exp }\circ \sigma ^{-1}\circ D)(x)]} where {\displaystyle \sigma } is the logistic function. When the discriminator is optimal, the generator gradient is the same as in maximum likelihood estimation, even though GAN cannot perform maximum likelihood estimation itself.
Hinge loss GAN:
{\displaystyle L_{D}=-\operatorname {E} _{x\sim \mu _{\text{ref}}}\left[\min \left(0,-1+D(x)\right)\right]-\operatorname {E} _{x\sim \mu _{G}}\left[\min \left(0,-1-D\left(x\right)\right)\right]}
{\displaystyle L_{G}=-\operatorname {E} _{x\sim \mu _{G}}[D(x)]}
Least squares GAN:
{\displaystyle L_{D}=\operatorname {E} _{x\sim \mu _{\text{ref}}}[(D(x)-b)^{2}]+\operatorname {E} _{x\sim \mu _{G}}[(D(x)-a)^{2}]}
{\displaystyle L_{G}=\operatorname {E} _{x\sim \mu _{G}}[(D(x)-c)^{2}]}
where {\displaystyle a,b,c} are parameters to be chosen. The authors recommended {\displaystyle a=-1,b=1,c=0}.
=== Wasserstein GAN (WGAN) ===
The Wasserstein GAN modifies the GAN game at two points:
The discriminator's strategy set is the set of measurable functions of type {\displaystyle D:\Omega \to \mathbb {R} } with bounded Lipschitz norm: {\displaystyle \|D\|_{L}\leq K}, where {\displaystyle K} is a fixed positive constant.
The objective is {\displaystyle L_{WGAN}(\mu _{G},D):=\operatorname {E} _{x\sim \mu _{G}}[D(x)]-\operatorname {E} _{x\sim \mu _{\text{ref}}}[D(x)]}
One of its purposes is to solve the problem of mode collapse (see above). The authors claim "In no experiment did we see evidence of mode collapse for the WGAN algorithm".
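The WGAN objective can be estimated by Monte Carlo for any fixed Lipschitz critic. The critic {\displaystyle D(x)=x} and the two Gaussians below are toy choices; for two unit-variance Gaussians the optimal value equals the difference of means, which is their 1-Wasserstein distance.

```python
import numpy as np

# Monte Carlo sketch of the WGAN objective E_{mu_G}[D] - E_{mu_ref}[D]
# with the 1-Lipschitz critic D(x) = x (toy choice). For mu_G = N(2,1) and
# mu_ref = N(0,1) the value is about 2, the distance between the means.
rng = np.random.default_rng(0)
fake = rng.normal(2.0, 1.0, size=20_000)   # x ~ mu_G
real = rng.normal(0.0, 1.0, size=20_000)   # x ~ mu_ref

D = lambda x: x                            # identity map has Lipschitz norm 1
wgan_objective = np.mean(D(fake)) - np.mean(D(real))
```

Unlike the log-loss objective, this difference of expectations does not saturate when the two distributions are far apart, which is the intuition behind WGAN's resistance to vanishing gradients.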
=== GANs with more than two players ===
==== Adversarial autoencoder ====
An adversarial autoencoder (AAE) is more autoencoder than GAN. The idea is to start with a plain autoencoder, but train a discriminator to discriminate the latent vectors from a reference distribution (often the normal distribution).
==== InfoGAN ====
In conditional GAN, the generator receives both a noise vector {\displaystyle z} and a label {\displaystyle c}, and produces an image {\displaystyle G(z,c)}. The discriminator receives image-label pairs {\displaystyle (x,c)}, and computes {\displaystyle D(x,c)}.
When the training dataset is unlabeled, conditional GAN does not work directly.
The idea of InfoGAN is to decree that every latent vector in the latent space can be decomposed as {\displaystyle (z,c)}: an incompressible noise part {\displaystyle z}, and an informative label part {\displaystyle c}, and to encourage the generator to comply with the decree, by encouraging it to maximize {\displaystyle I(c,G(z,c))}, the mutual information between {\displaystyle c} and {\displaystyle G(z,c)}, while making no demands on the mutual information between {\displaystyle z} and {\displaystyle G(z,c)}.
Unfortunately, {\displaystyle I(c,G(z,c))} is intractable in general. The key idea of InfoGAN is Variational Mutual Information Maximization: indirectly maximize it by maximizing a lower bound {\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))];\quad I(c,G(z,c))\geq \sup _{Q}{\hat {I}}(G,Q)} where {\displaystyle Q} ranges over all Markov kernels of type {\displaystyle Q:\Omega _{Y}\to {\mathcal {P}}(\Omega _{C})}.
The InfoGAN game is defined as follows: Three probability spaces define an InfoGAN game:
{\displaystyle (\Omega _{X},\mu _{\text{ref}})}, the space of reference images.
{\displaystyle (\Omega _{Z},\mu _{Z})}, the fixed random noise generator.
{\displaystyle (\Omega _{C},\mu _{C})}, the fixed random information generator.
There are 3 players in 2 teams: generator, Q, and discriminator. The generator and Q are on one team, and the discriminator on the other team.
The objective function is {\displaystyle L(G,Q,D)=L_{GAN}(G,D)-\lambda {\hat {I}}(G,Q)}
where {\displaystyle L_{GAN}(G,D)=\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]+\operatorname {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln(1-D(G(z,c)))]} is the original GAN game objective, and {\displaystyle {\hat {I}}(G,Q)=\mathbb {E} _{z\sim \mu _{Z},c\sim \mu _{C}}[\ln Q(c\mid G(z,c))]}
Generator-Q team aims to minimize the objective, and discriminator aims to maximize it:
{\displaystyle \min _{G,Q}\max _{D}L(G,Q,D)}
==== Bidirectional GAN (BiGAN) ====
The standard GAN generator is a function of type {\displaystyle G:\Omega _{Z}\to \Omega _{X}}, that is, it is a mapping from a latent space {\displaystyle \Omega _{Z}} to the image space {\displaystyle \Omega _{X}}. This can be understood as a "decoding" process, whereby every latent vector {\displaystyle z\in \Omega _{Z}} is a code for an image {\displaystyle x\in \Omega _{X}}, and the generator performs the decoding. This naturally leads to the idea of training another network that performs "encoding", creating an autoencoder out of the encoder-generator pair.
Already in the original paper, the authors noted that "Learned approximate inference can be performed by training an auxiliary network to predict {\displaystyle z} given {\displaystyle x}". The bidirectional GAN architecture performs exactly this.
The BiGAN is defined as follows: Two probability spaces define a BiGAN game:
{\displaystyle (\Omega _{X},\mu _{X})}, the space of reference images.
{\displaystyle (\Omega _{Z},\mu _{Z})}, the latent space.
There are 3 players in 2 teams: generator, encoder, and discriminator. The generator and encoder are on one team, and the discriminator on the other team.
The generator's strategies are functions {\displaystyle G:\Omega _{Z}\to \Omega _{X}}, and the encoder's strategies are functions {\displaystyle E:\Omega _{X}\to \Omega _{Z}}. The discriminator's strategies are functions {\displaystyle D:\Omega _{X}\times \Omega _{Z}\to [0,1]}.
The objective function is {\displaystyle L(G,E,D)=\mathbb {E} _{x\sim \mu _{X}}[\ln D(x,E(x))]+\mathbb {E} _{z\sim \mu _{Z}}[\ln(1-D(G(z),z))]}
Generator-encoder team aims to minimize the objective, and discriminator aims to maximize it:
{\displaystyle \min _{G,E}\max _{D}L(G,E,D)}
In the paper, they gave a more abstract definition of the objective as:
{\displaystyle L(G,E,D)=\mathbb {E} _{(x,z)\sim \mu _{E,X}}[\ln D(x,z)]+\mathbb {E} _{(x,z)\sim \mu _{G,Z}}[\ln(1-D(x,z))]}
where {\displaystyle \mu _{E,X}(dx,dz)=\mu _{X}(dx)\cdot \delta _{E(x)}(dz)} is the probability distribution on {\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing {\displaystyle \mu _{X}} forward via {\displaystyle x\mapsto (x,E(x))}, and {\displaystyle \mu _{G,Z}(dx,dz)=\delta _{G(z)}(dx)\cdot \mu _{Z}(dz)} is the probability distribution on {\displaystyle \Omega _{X}\times \Omega _{Z}} obtained by pushing {\displaystyle \mu _{Z}} forward via {\displaystyle z\mapsto (G(z),z)}.
Applications of bidirectional models include semi-supervised learning, interpretable machine learning, and neural machine translation.
==== CycleGAN ====
CycleGAN is an architecture for performing translations between two domains, such as between photos of horses and photos of zebras, or photos of night cities and photos of day cities.
The CycleGAN game is defined as follows: There are two probability spaces {\displaystyle (\Omega _{X},\mu _{X}),(\Omega _{Y},\mu _{Y})}, corresponding to the two domains needed for translations fore-and-back.
There are 4 players in 2 teams: generators {\displaystyle G_{X}:\Omega _{X}\to \Omega _{Y},G_{Y}:\Omega _{Y}\to \Omega _{X}}, and discriminators {\displaystyle D_{X}:\Omega _{X}\to [0,1],D_{Y}:\Omega _{Y}\to [0,1]}.
The objective function is {\displaystyle L(G_{X},G_{Y},D_{X},D_{Y})=L_{GAN}(G_{X},D_{X})+L_{GAN}(G_{Y},D_{Y})+\lambda L_{cycle}(G_{X},G_{Y})}
where {\displaystyle \lambda } is a positive adjustable parameter, {\displaystyle L_{GAN}} is the GAN game objective, and {\displaystyle L_{cycle}} is the cycle consistency loss: {\displaystyle L_{cycle}(G_{X},G_{Y})=E_{x\sim \mu _{X}}\|G_{X}(G_{Y}(x))-x\|+E_{y\sim \mu _{Y}}\|G_{Y}(G_{X}(y))-y\|}
The generators aim to minimize the objective, and the discriminators aim to maximize it:
{\displaystyle \min _{G_{X},G_{Y}}\max _{D_{X},D_{Y}}L(G_{X},G_{Y},D_{X},D_{Y})}
Unlike previous work like pix2pix, which requires paired training data, CycleGAN requires no paired data. For example, to train a pix2pix model to turn a summer scenery photo into a winter scenery photo and back, the dataset must contain pairs of the same place in summer and winter, shot at the same angle; CycleGAN would only need a set of summer scenery photos, and an unrelated set of winter scenery photos.
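The cycle consistency term is zero exactly when the two generators invert each other, and positive otherwise. The 1-D "domains" and linear translators below are illustrative assumptions.

```python
import numpy as np

# Sketch of the cycle consistency loss with toy 1-D domains: G_X translates
# into domain Y and G_Y translates back. The linear maps are hypothetical.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # samples from mu_X
y = rng.normal(size=1000)          # samples from mu_Y

G_X = lambda t: 2.0 * t            # hypothetical translator into domain Y
G_Y = lambda t: t / 2.0            # its exact inverse, back into domain X

def cycle_loss(gx, gy):
    return np.mean(np.abs(gx(gy(x)) - x)) + np.mean(np.abs(gy(gx(y)) - y))

loss_inverse = cycle_loss(G_X, G_Y)               # generators invert each other
loss_mismatch = cycle_loss(G_X, lambda t: t / 3)  # broken inverse is penalized
```

Penalizing the round trip is what substitutes for the paired supervision that pix2pix needs.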
=== GANs with particularly large or small scales ===
==== BigGAN ====
The BigGAN is essentially a self-attention GAN trained on a large scale (up to 80 million parameters) to generate large images of ImageNet (up to 512 x 512 resolution), with numerous engineering tricks to make it converge.
==== Invertible data augmentation ====
When there is insufficient training data, the reference distribution {\displaystyle \mu _{\text{ref}}} cannot be well-approximated by the empirical distribution given by the training dataset. In such cases, data augmentation can be applied, to allow training GAN on smaller datasets. Naïve data augmentation, however, brings its own problems.
Consider the original GAN game, slightly reformulated as follows: {\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}}}[\ln D(x)]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
Now we use data augmentation by randomly sampling semantic-preserving transforms {\displaystyle T:\Omega \to \Omega } and applying them to the dataset, to obtain the reformulated GAN game:
{\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G}}[\ln(1-D(x))]\end{cases}}}
This is equivalent to a GAN game with a different distribution {\displaystyle \mu _{\text{ref}}'}, sampled by {\displaystyle T(x)}, with {\displaystyle x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}. For example, if {\displaystyle \mu _{\text{ref}}} is the distribution of images in ImageNet, and {\displaystyle \mu _{\text{trans}}} samples identity-transform with probability 0.5, and horizontal-reflection with probability 0.5, then {\displaystyle \mu _{\text{ref}}'} is the distribution of images in ImageNet and horizontally-reflected ImageNet, combined.
The result of such training would be a generator that mimics {\displaystyle \mu _{\text{ref}}'}. For example, it would generate images that look like they are randomly cropped, if the data augmentation uses random cropping.
The solution is to apply data augmentation to both generated and real images: {\displaystyle {\begin{cases}\min _{D}L_{D}(D,\mu _{G})=-\operatorname {E} _{x\sim \mu _{\text{ref}},T\sim \mu _{\text{trans}}}[\ln D(T(x))]-\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\\\min _{G}L_{G}(D,\mu _{G})=\operatorname {E} _{x\sim \mu _{G},T\sim \mu _{\text{trans}}}[\ln(1-D(T(x)))]\end{cases}}}
The authors demonstrated high-quality generation using just 100-picture-large datasets.
The StyleGAN-2-ADA paper points out a further point on data augmentation: it must be invertible. Continuing with the example of generating ImageNet pictures, if the data augmentation is "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", then there is no way for the generator to know which is the true orientation: consider two generators {\displaystyle G,G'}, such that for any latent {\displaystyle z}, the generated image {\displaystyle G(z)} is a 90-degree rotation of {\displaystyle G'(z)}. They would have exactly the same expected loss, and so neither is preferred over the other.
The solution is to only use invertible data augmentation: instead of "randomly rotate the picture by 0, 90, 180, 270 degrees with equal probability", use "randomly rotate the picture by 90, 180, 270 degrees with 0.1 probability, and keep the picture as it is with 0.7 probability". This way, the generator is still rewarded for keeping images oriented the same way as un-augmented ImageNet pictures.
Abstractly, the effect of randomly sampling transformations {\displaystyle T:\Omega \to \Omega } from the distribution {\displaystyle \mu _{\text{trans}}} is to define a Markov kernel {\displaystyle K_{\text{trans}}:\Omega \to {\mathcal {P}}(\Omega )}. Then, the data-augmented GAN game pushes the generator to find some {\displaystyle {\hat {\mu }}_{G}\in {\mathcal {P}}(\Omega )}, such that
{\displaystyle K_{\text{trans}}*\mu _{\text{ref}}=K_{\text{trans}}*{\hat {\mu }}_{G}}
where {\displaystyle *} is the Markov kernel convolution.
A data-augmentation method is defined to be invertible if its Markov kernel {\displaystyle K_{\text{trans}}} satisfies
{\displaystyle K_{\text{trans}}*\mu =K_{\text{trans}}*\mu '\implies \mu =\mu '\quad \forall \mu ,\mu '\in {\mathcal {P}}(\Omega )}
Immediately by definition, we see that composing multiple invertible data-augmentation methods results in yet another invertible method. Also by definition, if the data-augmentation method is invertible, then using it in a GAN game does not change the optimal strategy {\displaystyle {\hat {\mu }}_{G}} for the generator, which is still {\displaystyle \mu _{\text{ref}}}.
There are two prototypical examples of invertible Markov kernels:
Discrete case: invertible stochastic matrices, when {\displaystyle \Omega } is finite. For example, if {\displaystyle \Omega =\{\uparrow ,\downarrow ,\leftarrow ,\rightarrow \}} is the set of four images of an arrow pointing in 4 directions, and the data augmentation is "randomly rotate the picture by 90, 180, 270 degrees with probability {\displaystyle p} each, and keep the picture as it is with probability {\displaystyle (1-3p)}", then the Markov kernel {\displaystyle K_{\text{trans}}} can be represented as a stochastic matrix:
{\displaystyle [K_{\text{trans}}]={\begin{bmatrix}(1-3p)&p&p&p\\p&(1-3p)&p&p\\p&p&(1-3p)&p\\p&p&p&(1-3p)\end{bmatrix}}}
and {\displaystyle K_{\text{trans}}} is an invertible kernel iff {\displaystyle [K_{\text{trans}}]} is an invertible matrix, that is, {\displaystyle p\neq 1/4}.
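The invertibility condition can be checked numerically. A small sketch of the 4-state rotation kernel (the eigenvalues are 1 once and 1 − 4p three times, so the determinant vanishes exactly at p = 1/4):

```python
import numpy as np

def rotation_kernel(p):
    """Stochastic matrix for: rotate by 90/180/270 with probability p each,
    keep the image unchanged with probability 1 - 3p."""
    K = np.full((4, 4), p)
    np.fill_diagonal(K, 1 - 3 * p)
    return K

# Invertible for p != 1/4: det = (1 - 4p)^3.
assert abs(np.linalg.det(rotation_kernel(0.1))) > 1e-12
# Singular exactly at p = 1/4, where every row becomes uniform and all
# orientation information is destroyed.
assert abs(np.linalg.det(rotation_kernel(0.25))) < 1e-12
```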
Continuous case: the Gaussian kernel, when {\displaystyle \Omega =\mathbb {R} ^{n}} for some {\displaystyle n\geq 1}. For example, if {\displaystyle \Omega =\mathbb {R} ^{256^{2}}} is the space of 256x256 images, and the data-augmentation method is "generate a Gaussian noise {\displaystyle z\sim {\mathcal {N}}(0,I_{256^{2}})}, then add {\displaystyle \epsilon z} to the image", then {\displaystyle K_{\text{trans}}} is just convolution by the density function of {\displaystyle {\mathcal {N}}(0,\epsilon ^{2}I_{256^{2}})}. This is invertible, because convolution by a Gaussian is just convolution by the heat kernel, so given any {\displaystyle \mu \in {\mathcal {P}}(\mathbb {R} ^{n})}, the convolved distribution {\displaystyle K_{\text{trans}}*\mu } can be obtained by heating up {\displaystyle \mathbb {R} ^{n}} precisely according to {\displaystyle \mu }, then waiting for time {\displaystyle \epsilon ^{2}/4}. With that, we can recover {\displaystyle \mu } by running the heat equation backwards in time for {\displaystyle \epsilon ^{2}/4}.
More examples of invertible data augmentations are found in the paper.
==== SinGAN ====
SinGAN pushes data augmentation to the limit, by using only a single image as training data and performing data augmentation on it. The GAN architecture is adapted to this training method by using a multi-scale pipeline.
The generator {\displaystyle G} is decomposed into a pyramid of generators {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, with the lowest one generating the image {\displaystyle G_{N}(z_{N})} at the lowest resolution. The generated image is then scaled up to {\displaystyle r(G_{N}(z_{N}))} and fed to the next level to generate an image {\displaystyle G_{N-1}(z_{N-1}+r(G_{N}(z_{N})))} at a higher resolution, and so on. The discriminator is decomposed into a pyramid as well.
=== StyleGAN series ===
The StyleGAN family is a series of architectures published by Nvidia's research division.
==== Progressive GAN ====
Progressive GAN is a method for stably training a GAN for large-scale image generation, by growing the generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as {\displaystyle G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}}, and the discriminator as {\displaystyle D=D_{1}\circ D_{2}\circ \cdots \circ D_{N}}.
During training, at first only {\displaystyle G_{N},D_{N}} are used in a GAN game to generate 4x4 images. Then {\displaystyle G_{N-1},D_{N-1}} are added to reach the second stage of the GAN game, generating 8x8 images, and so on, until we reach a GAN game generating 1024x1024 images.
To avoid shock between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper). For example, this is how the second stage GAN game starts:
Just before, the GAN game consists of the pair {\displaystyle G_{N},D_{N}} generating and discriminating 4x4 images. Just after, the GAN game consists of the pair {\displaystyle ((1-\alpha )+\alpha \cdot G_{N-1})\circ u\circ G_{N},D_{N}\circ d\circ ((1-\alpha )+\alpha \cdot D_{N-1})} generating and discriminating 8x8 images. Here, the functions {\displaystyle u,d} are image up- and down-sampling functions, and {\displaystyle \alpha } is a blend-in factor (much like an alpha in image compositing) that smoothly glides from 0 to 1.
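The blend-in can be sketched as follows. The nearest-neighbour upsampler and the stand-in "new layer" are illustrative assumptions, not the paper's architecture; what matters is that at alpha = 0 the new layer is invisible and at alpha = 1 it is fully active:

```python
import numpy as np

def upsample(x):
    # Nearest-neighbour 2x upsampling (a stand-in for the function u).
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def blended_generator(coarse_image, new_layer, alpha):
    """Output of the grown generator while the new layer is blended in:
    (1 - alpha) * upsampled coarse image + alpha * new layer's refinement."""
    up = upsample(coarse_image)
    return (1 - alpha) * up + alpha * new_layer(up)

new_layer = lambda img: img + 0.1   # hypothetical stand-in for the 8x8 block
coarse = np.ones((4, 4))            # stand-in for a 4x4 generator output

out0 = blended_generator(coarse, new_layer, 0.0)  # new layer invisible
out1 = blended_generator(coarse, new_layer, 1.0)  # new layer fully active
assert out0.shape == (8, 8)
```

Gliding alpha from 0 to 1 over many training steps avoids the "shock" of suddenly attaching an untrained layer.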
==== StyleGAN-1 ====
StyleGAN-1 is designed as a combination of Progressive GAN with neural style transfer.
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant {\displaystyle 4\times 4\times 512} array and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracts the mean, then divides by the variance).
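The adaptive instance normalization step can be sketched as follows. This is a simplified NumPy version: the channel count and the direct use of the style vector as per-channel scale and bias are simplifying assumptions (in StyleGAN-1 these come from a learned affine map of the style latent):

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-8):
    """Adaptive instance normalization: normalize each channel of x to zero
    mean and unit variance, then apply a per-channel affine transform
    derived from the style vector."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(512, 4, 4))   # channels x height x width
scale = rng.normal(size=512)       # hypothetical style-derived scale
bias = rng.normal(size=512)        # hypothetical style-derived bias
y = adain(x, scale, bias)
# Per-channel statistics now follow the style, not the input.
```

The style thus overwrites each channel's first- and second-order statistics, which is what lets different style vectors restyle the same underlying features.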
At training time, usually only one style latent vector is used per image generated, but sometimes two ("mixing regularization") in order to encourage each style block to independently perform its stylization without expecting help from other style blocks (since they might receive an entirely different style latent vector).
After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.
Style-mixing between two images {\displaystyle x,x'} can be performed as well. First, run gradient descent to find {\displaystyle z,z'} such that {\displaystyle G(z)\approx x,G(z')\approx x'}. This is called "projecting an image back to style latent space". Then {\displaystyle z} can be fed to the lower style blocks and {\displaystyle z'} to the higher style blocks, to generate a composite image that has the large-scale style of {\displaystyle x} and the fine-detail style of {\displaystyle x'}. Multiple images can also be composed this way.
==== StyleGAN-2 ====
StyleGAN-2 improves upon StyleGAN-1, by using the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.
This was updated by the StyleGAN-2-ADA ("ADA" stands for "adaptive"), which uses invertible data augmentation as described above. It also tunes the amount of data augmentation applied by starting at zero, and gradually increasing it until an "overfitting heuristic" reaches a target level, thus the name "adaptive".
==== StyleGAN-3 ====
StyleGAN-3 improves upon StyleGAN-2 by solving the "texture sticking" problem, which can be seen in the official videos. The authors analyzed the problem using the Nyquist–Shannon sampling theorem and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.
To solve this, they proposed imposing strict lowpass filters between each of the generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters. The resulting StyleGAN-3 solves the texture sticking problem and generates images that rotate and translate smoothly.
== Other uses ==
Beyond generative and discriminative modelling of data, GANs have been used for other purposes.
GANs have been used for transfer learning to enforce the alignment of the latent feature space, such as in deep reinforcement learning. This works by feeding the embeddings of the source and target task to the discriminator which tries to guess the context. The resulting loss is then (inversely) backpropagated through the encoder.
== Applications ==
=== Science ===
Iteratively reconstruct astronomical images
Simulate gravitational lensing for dark matter research.
Model the distribution of dark matter in a particular direction in space and to predict the gravitational lensing that will occur.
Model high energy jet formation and showers through calorimeters of high-energy physics experiments.
Approximate bottlenecks in computationally expensive simulations of particle physics experiments. Applications in the context of present and proposed CERN experiments have demonstrated the potential of these methods for accelerating simulation and/or improving simulation fidelity.
Reconstruct velocity and scalar fields in turbulent flows.
GAN-generated molecules were validated experimentally in mice.
=== Medical ===
A major concern in medical imaging is preserving patient privacy, which makes it difficult for researchers to obtain medical images for their research. GANs have been used to generate synthetic medical images, such as MRI and PET images, to address this challenge.
GANs can be used to detect glaucomatous images, aiding the early diagnosis that is essential to avoid partial or total loss of vision.
GANs have been used to create forensic facial reconstructions of deceased historical figures.
=== Malicious ===
Concerns have been raised about the potential use of GAN-based human image synthesis for sinister purposes, e.g., to produce fake, possibly incriminating, photographs and videos.
GANs can be used to generate unique, realistic profile photos of people who do not exist, in order to automate creation of fake social media profiles.
In 2019, the state of California considered, and on October 3, 2019 passed, bill AB-602, which bans the use of human image synthesis technologies to make fake pornography without the consent of the people depicted, and bill AB-730, which prohibits distribution of manipulated videos of a political candidate within 60 days of an election. Both bills were authored by Assembly member Marc Berman and signed by Governor Gavin Newsom. The laws went into effect in 2020.
DARPA's Media Forensics program studies ways to counteract fake media, including fake media produced using GANs.
=== Fashion, art and advertising ===
GANs can be used to generate art; The Verge wrote in March 2019 that "The images created by GANs have become the defining look of contemporary AI art." GANs can also be used to
inpaint photographs
generate fashion models, shadows, photorealistic renders of interior design, industrial design, shoes, etc. Such networks were reported to be used by Facebook.
Some have used GANs for artistic creativity, as a "creative adversarial network". A GAN, trained on a set of 15,000 portraits from WikiArt from the 14th to the 19th century, created the 2018 painting Edmond de Belamy, which sold for US$432,500.
GANs were used by the video game modding community to up-scale low-resolution 2D textures in old video games by recreating them in 4k or higher resolutions via image training, and then down-sampling them to fit the game's native resolution (resembling supersampling anti-aliasing).
In 2020, Artbreeder was used to create the main antagonist in the sequel to the psychological web horror series Ben Drowned. The author would later go on to praise GAN applications for their ability to help generate assets for independent artists who are short on budget and manpower.
In May 2020, Nvidia researchers taught an AI system (termed "GameGAN") to recreate the game of Pac-Man simply by watching it being played.
In August 2019, a large dataset consisting of 12,197 MIDI songs each with paired lyrics and melody alignment was created for neural melody generation from lyrics using conditional GAN-LSTM (refer to sources at GitHub AI Melody Generation from Lyrics).
=== Miscellaneous ===
GANs have been used to
show how an individual's appearance might change with age.
reconstruct 3D models of objects from images,
generate novel objects as 3D point clouds,
model patterns of motion in video.
inpaint missing features in maps, transfer map styles in cartography or augment street view imagery.
use feedback to generate images and replace image search systems.
visualize the effect that climate change will have on specific houses.
reconstruct an image of a person's face after listening to their voice.
produce videos of a person speaking, given only a single photo of that person.
recurrent sequence generation.
== History ==
In 1991, Juergen Schmidhuber published "artificial curiosity", neural networks in a zero-sum game. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. GANs can be regarded as a case where the environmental reaction is 1 or 0 depending on whether the first network's output is in a given set.
Others had similar ideas but did not develop them in the same way. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented; it did not involve stochasticity in the generator and was thus not a generative model. It is now known as a conditional GAN or cGAN. An idea similar to GANs was used to model animal behavior by Li, Gauci and Gross in 2013.
Another inspiration for GANs was noise-contrastive estimation, which uses the same loss function as GANs and which Goodfellow studied during his PhD in 2010–2014.
Adversarial machine learning has other uses besides generative modeling and can be applied to models other than neural networks. In control theory, adversarial learning based on neural networks was used in 2006 to train robust controllers in a game theoretic sense, by alternating the iterations between a minimizer policy, the controller, and a maximizer policy, the disturbance.
In 2017, a GAN was used for image enhancement focusing on realistic textures rather than pixel-accuracy, producing a higher image quality at high magnification. In 2017, the first faces were generated. These were exhibited in February 2018 at the Grand Palais. Faces generated by StyleGAN in 2019 drew comparisons with Deepfakes.
== See also ==
Artificial intelligence art – Visual media created with AI
Deepfake – Realistic artificially generated media
Deep learning – Branch of machine learning
Diffusion model – Deep learning algorithm
Generative artificial intelligence – Subset of AI using generative models
Synthetic media – Artificial production, manipulation, and modification of data and media by automated means
== References ==
== External links ==
Knight, Will. "5 Big Predictions for Artificial Intelligence in 2017". MIT Technology Review. Retrieved January 5, 2017.
Karras, Tero; Laine, Samuli; Aila, Timo (2018). "A Style-Based Generator Architecture for Generative Adversarial Networks". arXiv:1812.04948 [cs.NE].
This Person Does Not Exist – photorealistic images of people who do not exist, generated by StyleGAN
This Cat Does Not Exist Archived March 5, 2019, at the Wayback Machine – photorealistic images of cats who do not exist, generated by StyleGAN
Wang, Zhengwei; She, Qi; Ward, Tomas E. (2019). "Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy". arXiv:1906.01529 [cs.LG].
Vapnik–Chervonenkis theory (also known as VC theory) was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view.
== Introduction ==
VC theory covers at least four parts (as explained in The Nature of Statistical Learning Theory):
Theory of consistency of learning processes
What are (necessary and sufficient) conditions for consistency of a learning process based on the empirical risk minimization principle?
Nonasymptotic theory of the rate of convergence of learning processes
How fast is the rate of convergence of the learning process?
Theory of controlling the generalization ability of learning processes
How can one control the rate of convergence (the generalization ability) of the learning process?
Theory of constructing learning machines
How can one construct algorithms that can control the generalization ability?
VC Theory is a major subbranch of statistical learning theory. One of its main applications in statistical learning theory is to provide generalization conditions for learning algorithms. From this point of view, VC theory is related to stability, which is an alternative approach for characterizing generalization.
In addition, VC theory and VC dimension are instrumental in the theory of empirical processes, in the case of processes indexed by VC classes. Arguably these are the most important applications of the VC theory, and are employed in proving generalization. Several techniques will be introduced that are widely used in the empirical process and VC theory. The discussion is mainly based on the book Weak Convergence and Empirical Processes: With Applications to Statistics.
== Overview of VC theory in empirical processes ==
=== Background on empirical processes ===
Let {\displaystyle ({\mathcal {X}},{\mathcal {A}})} be a measurable space. For any measure {\displaystyle Q} on {\displaystyle ({\mathcal {X}},{\mathcal {A}})} and any measurable function {\displaystyle f:{\mathcal {X}}\to \mathbf {R} }, define
{\displaystyle Qf=\int fdQ}
Measurability issues will be ignored here; see the references for more technical detail. Let {\displaystyle {\mathcal {F}}} be a class of measurable functions {\displaystyle f:{\mathcal {X}}\to \mathbf {R} } and define:
{\displaystyle \|Q\|_{\mathcal {F}}=\sup\{\vert Qf\vert \ :\ f\in {\mathcal {F}}\}.}
Let {\displaystyle X_{1},\ldots ,X_{n}} be independent, identically distributed random elements of {\displaystyle ({\mathcal {X}},{\mathcal {A}})}. Then define the empirical measure
{\displaystyle \mathbb {P} _{n}=n^{-1}\sum _{i=1}^{n}\delta _{X_{i}},}
where δ here stands for the Dirac measure. The empirical measure induces a map {\displaystyle {\mathcal {F}}\to \mathbf {R} } given by:
{\displaystyle f\mapsto \mathbb {P} _{n}f={\frac {1}{n}}(f(X_{1})+...+f(X_{n}))}
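As a minimal numerical sketch (the standard-normal sample and the test function x ↦ x² are arbitrary illustrative choices), the empirical measure is just the map sending f to the sample average of f over the data:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_measure(sample):
    """Return the map f -> P_n f = (1/n) * (f(X_1) + ... + f(X_n))."""
    return lambda f: np.mean([f(x) for x in sample])

# Draw X_1, ..., X_n i.i.d. from a standard normal P.
X = rng.standard_normal(100_000)
P_n = empirical_measure(X)

# By the law of large numbers, P_n f approaches the integral of f dP;
# here E[X^2] = 1.
print(P_n(lambda x: x**2))
```

Empirical process theory asks when this convergence holds uniformly over a whole class of functions f, not just for one f at a time.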
Now suppose P is the underlying true distribution of the data, which is unknown. Empirical process theory aims at identifying classes {\displaystyle {\mathcal {F}}} for which statements such as the following hold:
uniform law of large numbers:
{\displaystyle \|\mathbb {P} _{n}-P\|_{\mathcal {F}}{\underset {n}{\to }}0,}
That is, as {\displaystyle n\to \infty },
{\displaystyle \left|{\frac {1}{n}}(f(X_{1})+...+f(X_{n}))-\int fdP\right|\to 0}
uniformly for all {\displaystyle f\in {\mathcal {F}}}.
uniform central limit theorem:
{\displaystyle \mathbb {G} _{n}={\sqrt {n}}(\mathbb {P} _{n}-P)\rightsquigarrow \mathbb {G} ,\quad {\text{in }}\ell ^{\infty }({\mathcal {F}})}
In the former case {\displaystyle {\mathcal {F}}} is called a Glivenko–Cantelli class, and in the latter case (under the assumption {\displaystyle \forall x,\sup \nolimits _{f\in {\mathcal {F}}}\vert f(x)-Pf\vert <\infty }) the class {\displaystyle {\mathcal {F}}} is called Donsker or P-Donsker. A Donsker class is Glivenko–Cantelli in probability by an application of Slutsky's theorem.
These statements are true for a single {\displaystyle f} by standard LLN and CLT arguments under regularity conditions; the difficulty in empirical processes comes from the fact that joint statements are being made for all {\displaystyle f\in {\mathcal {F}}}. Intuitively, then, the set {\displaystyle {\mathcal {F}}} cannot be too large, and it turns out that the geometry of {\displaystyle {\mathcal {F}}} plays a very important role.
One way of measuring how big the function set {\displaystyle {\mathcal {F}}} is, is to use the so-called covering numbers. The covering number {\displaystyle N(\varepsilon ,{\mathcal {F}},\|\cdot \|)} is the minimal number of balls {\displaystyle \{g:\|g-f\|<\varepsilon \}} needed to cover the set {\displaystyle {\mathcal {F}}} (here it is assumed that there is an underlying norm on {\displaystyle {\mathcal {F}}}). The entropy is the logarithm of the covering number.
Two sufficient conditions are provided below, under which it can be proved that the set {\displaystyle {\mathcal {F}}} is Glivenko–Cantelli or Donsker.
A class {\displaystyle {\mathcal {F}}} is P-Glivenko–Cantelli if it is P-measurable with envelope F such that {\displaystyle P^{\ast }F<\infty } and satisfies:
{\displaystyle \forall \varepsilon >0\quad \sup \nolimits _{Q}N(\varepsilon \|F\|_{Q},{\mathcal {F}},L_{1}(Q))<\infty .}
The next condition is a version of the celebrated Dudley's theorem. If {\displaystyle {\mathcal {F}}} is a class of functions such that
{\displaystyle \int _{0}^{\infty }\sup \nolimits _{Q}{\sqrt {\log N\left(\varepsilon \|F\|_{Q,2},{\mathcal {F}},L_{2}(Q)\right)}}d\varepsilon <\infty }
then {\displaystyle {\mathcal {F}}} is P-Donsker for every probability measure P such that {\displaystyle P^{\ast }F^{2}<\infty }. In the last integral, the notation means
{\displaystyle \|f\|_{Q,2}=\left(\int |f|^{2}dQ\right)^{\frac {1}{2}}}.
=== Symmetrization ===
The majority of the arguments of how to bound the empirical process rely on symmetrization, maximal and concentration inequalities, and chaining. Symmetrization is usually the first step of the proofs, and since it is used in many machine learning proofs on bounding empirical loss functions (including the proof of the VC inequality which is discussed in the next section) it is presented here.
Consider the empirical process:
{\displaystyle f\mapsto (\mathbb {P} _{n}-P)f={\dfrac {1}{n}}\sum _{i=1}^{n}(f(X_{i})-Pf)}
It turns out that there is a connection between the empirical process and the following symmetrized process:
{\displaystyle f\mapsto \mathbb {P} _{n}^{0}f={\dfrac {1}{n}}\sum _{i=1}^{n}\varepsilon _{i}f(X_{i})}
The symmetrized process is a Rademacher process, conditionally on the data {\displaystyle X_{i}}. Therefore, it is a sub-Gaussian process by Hoeffding's inequality.
Lemma (Symmetrization). For every nondecreasing, convex Φ: R → R and class of measurable functions {\displaystyle {\mathcal {F}}},
{\displaystyle \mathbb {E} \Phi (\|\mathbb {P} _{n}-P\|_{\mathcal {F}})\leq \mathbb {E} \Phi \left(2\left\|\mathbb {P} _{n}^{0}\right\|_{\mathcal {F}}\right)}
The proof of the symmetrization lemma relies on introducing independent copies of the original variables {\displaystyle X_{i}} (sometimes referred to as a ghost sample) and replacing the inner expectation of the LHS by these copies. After an application of Jensen's inequality, different signs can be introduced (hence the name symmetrization) without changing the expectation. The proof can be found below because of its instructive nature. The same proof method can be used to prove the Glivenko–Cantelli theorem.
A typical way of proving empirical CLTs first uses symmetrization to pass from the empirical process to {\displaystyle \mathbb {P} _{n}^{0}}, and then argues conditionally on the data, using the fact that Rademacher processes are simple processes with nice properties.
=== VC Connection ===
It turns out that there is a fascinating connection between certain combinatorial properties of the set {\displaystyle {\mathcal {F}}} and the entropy numbers. Uniform covering numbers can be controlled by the notion of Vapnik–Chervonenkis classes of sets, or VC sets for short.
Consider a collection {\displaystyle {\mathcal {C}}} of subsets of the sample space {\displaystyle {\mathcal {X}}}. {\displaystyle {\mathcal {C}}} is said to pick out a certain subset {\displaystyle W} of the finite set {\displaystyle S=\{x_{1},\ldots ,x_{n}\}\subset {\mathcal {X}}} if {\displaystyle W=S\cap C} for some {\displaystyle C\in {\mathcal {C}}}. {\displaystyle {\mathcal {C}}} is said to shatter S if it picks out each of its 2^n subsets. The VC-index (similar to VC dimension + 1 for an appropriately chosen classifier set) {\displaystyle V({\mathcal {C}})} of {\displaystyle {\mathcal {C}}} is the smallest n for which no set of size n is shattered by {\displaystyle {\mathcal {C}}}.
Sauer's lemma then states that the number {\displaystyle \Delta _{n}({\mathcal {C}},x_{1},\ldots ,x_{n})} of subsets picked out by a VC-class {\displaystyle {\mathcal {C}}} satisfies:
{\displaystyle \max _{x_{1},\ldots ,x_{n}}\Delta _{n}({\mathcal {C}},x_{1},\ldots ,x_{n})\leq \sum _{j=0}^{V({\mathcal {C}})-1}{n \choose j}\leq \left({\frac {ne}{V({\mathcal {C}})-1}}\right)^{V({\mathcal {C}})-1}}
This is a polynomial number {\displaystyle O(n^{V({\mathcal {C}})-1})} of subsets rather than an exponential number. Intuitively this means that a finite VC-index implies that {\displaystyle {\mathcal {C}}} has a simple structure.
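Both sides of Sauer's bound can be evaluated directly; a small sketch (the values n = 100 and VC-index V = 4 are arbitrary illustrative choices):

```python
from math import comb, e

def sauer_bound(n, V):
    """Sum_{j=0}^{V-1} C(n, j): Sauer's bound on the number of subsets a
    class of VC-index V can pick out of n points."""
    return sum(comb(n, j) for j in range(V))

def loose_bound(n, V):
    """The polynomial relaxation (ne/(V-1))^(V-1), valid for n >= V - 1."""
    return (n * e / (V - 1)) ** (V - 1)

n, V = 100, 4
print(sauer_bound(n, V), loose_bound(n, V))
# Polynomially many picked-out subsets, far fewer than all 2^n of them.
assert sauer_bound(n, V) <= loose_bound(n, V)
assert sauer_bound(n, V) < 2 ** n
```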
A similar bound can be shown (with a different constant, same rate) for so-called VC subgraph classes. For a function {\displaystyle f:{\mathcal {X}}\to \mathbf {R} }, the subgraph is the subset of {\displaystyle {\mathcal {X}}\times \mathbf {R} } given by {\displaystyle \{(x,t):t<f(x)\}}. A collection {\displaystyle {\mathcal {F}}} is called a VC subgraph class if all subgraphs form a VC-class.
Consider a set of indicator functions {\displaystyle {\mathcal {I}}_{\mathcal {C}}=\{1_{C}:C\in {\mathcal {C}}\}} in {\displaystyle L_{1}(Q)} for a discrete empirical type of measure Q (or equivalently for any probability measure Q). It can then be shown that, quite remarkably, for {\displaystyle r\geq 1}:
{\displaystyle N(\varepsilon ,{\mathcal {I}}_{\mathcal {C}},L_{r}(Q))\leq KV({\mathcal {C}})(4e)^{V({\mathcal {C}})}\varepsilon ^{-r(V({\mathcal {C}})-1)}}
Further consider the symmetric convex hull of a set {\displaystyle {\mathcal {F}}}: {\displaystyle \operatorname {sconv} {\mathcal {F}}}, the collection of functions of the form {\displaystyle \sum _{i=1}^{m}\alpha _{i}f_{i}} with {\displaystyle \sum _{i=1}^{m}|\alpha _{i}|\leq 1}. Then if
{\displaystyle N\left(\varepsilon \|F\|_{Q,2},{\mathcal {F}},L_{2}(Q)\right)\leq C\varepsilon ^{-V}}
the following is valid for the convex hull of {\displaystyle {\mathcal {F}}}:
{\displaystyle \log N\left(\varepsilon \|F\|_{Q,2},\operatorname {sconv} {\mathcal {F}},L_{2}(Q)\right)\leq K\varepsilon ^{-{\frac {2V}{V+2}}}}
The important consequence of this fact is that
{\displaystyle {\frac {2V}{V+2}}<2,}
which is just enough so that the entropy integral converges, and therefore the class {\displaystyle \operatorname {sconv} {\mathcal {F}}} is going to be P-Donsker.
Finally, an example of a VC-subgraph class is considered. Any finite-dimensional vector space {\displaystyle {\mathcal {F}}} of measurable functions {\displaystyle f:{\mathcal {X}}\to \mathbf {R} } is VC-subgraph of index smaller than or equal to {\displaystyle \dim({\mathcal {F}})+2}.
Proof: Take {\displaystyle n=\dim({\mathcal {F}})+2} points {\displaystyle (x_{1},t_{1}),\ldots ,(x_{n},t_{n})}. The vectors:
{\displaystyle (f(x_{1}),\ldots ,f(x_{n}))-(t_{1},\ldots ,t_{n})}
are in an n − 1 dimensional subspace of Rn. Take a ≠ 0, a vector that is orthogonal to this subspace. Therefore:
{\displaystyle \sum _{a_{i}>0}a_{i}(f(x_{i})-t_{i})=\sum _{a_{i}<0}(-a_{i})(f(x_{i})-t_{i}),\quad \forall f\in {\mathcal {F}}}
Consider the set {\displaystyle S=\{(x_{i},t_{i}):a_{i}>0\}}. This set cannot be picked out, since if there were some {\displaystyle f} such that {\displaystyle S=\{(x_{i},t_{i}):f(x_{i})>t_{i}\}}, that would imply that the LHS is strictly positive but the RHS is non-positive.
There are generalizations of the notion VC subgraph class, e.g. there is the notion of pseudo-dimension.
== VC inequality ==
A similar setting is considered, which is more common in machine learning. Let {\displaystyle {\mathcal {X}}} be a feature space and {\displaystyle {\mathcal {Y}}=\{0,1\}}. A function {\displaystyle f:{\mathcal {X}}\to {\mathcal {Y}}} is called a classifier. Let {\displaystyle {\mathcal {F}}} be a set of classifiers. Similarly to the previous section, define the shattering coefficient (also known as the growth function):
{\displaystyle S({\mathcal {F}},n)=\max _{x_{1},\ldots ,x_{n}}|\{(f(x_{1}),\ldots ,f(x_{n})),f\in {\mathcal {F}}\}|}
Note here that there is a 1:1 go between each of the functions in
F
{\displaystyle {\mathcal {F}}}
and the set on which the function is 1. We can thus define
C
{\displaystyle {\mathcal {C}}}
to be the collection of subsets obtained from the above mapping for every
f
∈
F
{\displaystyle f\in {\mathcal {F}}}
. Therefore, in terms of the previous section the shattering coefficient is precisely
{\displaystyle \max _{x_{1},\ldots ,x_{n}}\Delta _{n}({\mathcal {C}},x_{1},\ldots ,x_{n})}.
This equivalence together with Sauer's Lemma implies that {\displaystyle S({\mathcal {F}},n)} is polynomial in n, for sufficiently large n, provided that the collection {\displaystyle {\mathcal {C}}} has a finite VC-index.
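As a concrete illustration, the shattering coefficient of a small function class can be computed by brute force for a fixed sample. The one-dimensional threshold family below is a hypothetical example chosen for illustration, not part of the theory above (the definition additionally maximizes over samples):

```python
def shattering_coefficient(classifiers, points):
    """Number of distinct label vectors (f(x_1), ..., f(x_n)) realized over the
    class, for one fixed sample of points."""
    return len({tuple(f(x) for x in points) for f in classifiers})

# Hypothetical family: 1-D threshold classifiers f_t(x) = 1 if x > t, else 0.
thresholds = [t / 10 for t in range(-5, 16)]
classifiers = [lambda x, t=t: int(x > t) for t in thresholds]

points = [0.1, 0.4, 0.8]  # a fixed sample of n = 3 points
print(shattering_coefficient(classifiers, points))  # thresholds realize n + 1 = 4 patterns
```

Consistent with Sauer's Lemma, the count grows only polynomially (here linearly) in the sample size, rather than as 2^n.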
Let {\displaystyle D_{n}=\{(X_{1},Y_{1}),\ldots ,(X_{n},Y_{n})\}} be an observed dataset. Assume that the data is generated by an unknown probability distribution {\displaystyle P_{XY}}. Define {\displaystyle R(f)=P(f(X)\neq Y)}
to be the expected 0/1 loss. Of course, since {\displaystyle P_{XY}} is unknown in general, one has no access to {\displaystyle R(f)}. However, the empirical risk, given by:
{\displaystyle {\hat {R}}_{n}(f)={\dfrac {1}{n}}\sum _{i=1}^{n}\mathbb {I} (f(X_{i})\neq Y_{i})}
can certainly be evaluated. Then one has the following Theorem:
=== Theorem (VC Inequality) ===
For binary classification and the 0/1 loss function we have the following generalization bounds:
{\displaystyle {\begin{aligned}P\left(\sup _{f\in {\mathcal {F}}}\left|{\hat {R}}_{n}(f)-R(f)\right|>\varepsilon \right)&\leq 8S({\mathcal {F}},n)e^{-n\varepsilon ^{2}/32}\\\mathbb {E} \left[\sup _{f\in {\mathcal {F}}}\left|{\hat {R}}_{n}(f)-R(f)\right|\right]&\leq 2{\sqrt {\dfrac {\log S({\mathcal {F}},n)+\log 2}{n}}}\end{aligned}}}
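Both right-hand sides can be evaluated numerically once the growth of the shattering coefficient is fixed. In the sketch below, the bound S(F, n) ≤ (n + 1)^d from Sauer's Lemma is assumed, with an illustrative VC dimension d = 3 and illustrative sample sizes; note the deviation bound is vacuous (greater than 1) for small n and only becomes informative for large samples:

```python
import math

def vc_deviation_bound(shatter, n, eps):
    """RHS of the first VC inequality: 8 * S(F, n) * exp(-n * eps^2 / 32)."""
    return 8 * shatter(n) * math.exp(-n * eps**2 / 32)

def vc_expectation_bound(shatter, n):
    """RHS of the second VC inequality: 2 * sqrt((log S(F, n) + log 2) / n)."""
    return 2 * math.sqrt((math.log(shatter(n)) + math.log(2)) / n)

d = 3  # assumed VC dimension, so S(F, n) <= (n + 1)**d by Sauer's Lemma
shatter = lambda n: (n + 1) ** d
for n in (10**4, 10**6):
    print(n, vc_deviation_bound(shatter, n, eps=0.1), vc_expectation_bound(shatter, n))
```

Both bounds shrink toward 0 as n grows, because polynomial growth of S(F, n) is overwhelmed by the exponential term in the first bound and by the 1/n factor in the second.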
In words, the VC inequality says that as the sample size increases, provided that {\displaystyle {\mathcal {F}}} has a finite VC dimension, the empirical 0/1 risk becomes a good proxy for the expected 0/1 risk. Note that both RHSs of the two inequalities converge to 0, provided that {\displaystyle S({\mathcal {F}},n)} grows polynomially in n.
The connection between this framework and the empirical process framework is evident. Here one is dealing with a modified empirical process {\displaystyle \left|{\hat {R}}_{n}-R\right|_{\mathcal {F}}}, but not surprisingly the ideas are the same. The proof of (the first part of) the VC inequality relies on symmetrization, followed by a conditional argument on the data using concentration inequalities (in particular, Hoeffding's inequality). The interested reader can check Theorems 12.4 and 12.5 of the book.
== References ==
See references in articles: Richard M. Dudley, empirical processes, Shattered set.
Vapnik, V. N.; Chervonenkis, A. Ya. (1968). "On the uniform convergence of relative frequencies of events to their probabilities". Soviet Mathematics. 9: 915–918. This is a translation by B. Seckler, of the 1968 note.
Reprinted in Vapnik, V. N.; Chervonenkis, A. Ya. (2015), Vovk, Vladimir; Papadopoulos, Harris; Gammerman, Alexander (eds.), "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities", Measures of Complexity, Cham: Springer International Publishing, pp. 11–30, doi:10.1007/978-3-319-21852-6_3, ISBN 978-3-319-21851-9
They obtained results in a draft form in July 1966 and announced them in 1968 in their note Vapnik, V.N.; Chervonenkis, A.Ya. (1968). "On the uniform convergence of relative frequencies of events to their probabilities". Doklady Akademii Nauk SSSR (in Russian). 181 (4): 781–783.
The paper was first published properly in Russian as Vapnik, V.N.; Chervonenkis, A.Ya. (1971). "О равномерной сходимости частот появления событий к их вероятностям" [On the uniform convergence of frequencies of occurrence of events to their probabilities]. Теория вероятностей и ее применения [Theory of Probability and Its Applications] (in Russian). 16 (2): 264–279.
Bousquet, Olivier; Elisseeff, André (1 March 2002). "Stability and Generalization". The Journal of Machine Learning Research. 2: 499–526. doi:10.1162/153244302760200704. S2CID 1157797. Retrieved 10 December 2022.
Vapnik, V.; Chervonenkis, A. (2004). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Theory Probab. Appl. 16 (2): 264–280. doi:10.1137/1116025. | Wikipedia/Vapnik–Chervonenkis_theory |
Reasoning language models (RLMs) are large language models that have been further trained to solve multi-step reasoning tasks. These models perform better on logical, mathematical or programmatic tasks than traditional autoregressive LLMs, have the ability to backtrack, and employ test-time compute as an additional scaling axis beyond training examples, parameter count, and train-time compute.
== History ==
=== 2024 ===
o1-preview, an LLM with enhanced reasoning, was released in September 2024. The full version, o1, followed in December 2024. OpenAI also began sharing results on its successor, o3.
The development of reasoning LLMs has illustrated what Rich Sutton termed the "bitter lesson": that general methods leveraging computation often outperform those relying on specific human insights. For instance, some research groups, such as the Generative AI Research Lab (GAIR), initially explored complex techniques like tree search and reinforcement learning in attempts to replicate o1's capabilities. However, they found, as documented in their "o1 Replication Journey" papers, that knowledge distillation (training a smaller model to mimic o1's outputs) was surprisingly effective. This highlighted the power of distillation in this context.
Alibaba also released reasoning versions of its Qwen LLMs in November 2024.
In December 2024, Google introduced Deep Research in Gemini, a feature in Gemini that conducts multi-step research tasks.
On December 16, 2024, an experiment using a Llama 3B model demonstrated that by scaling test-time compute, a relatively small model could outperform a much larger Llama 70B model on challenging reasoning tasks. This result highlighted that improved inference strategies can unlock latent reasoning capabilities even in compact models.
=== 2025 ===
In January 2025, DeepSeek released R1, a model competitive with o1 at lower cost, highlighting the effectiveness of Group Relative Policy Optimization (GRPO). On January 25, 2025, DeepSeek launched a feature in their DeepSeek R1 model, enabling the simultaneous use of search and reasoning capabilities, which allows for more efficient integration of data retrieval with reflective reasoning processes. OpenAI subsequently released o3-mini, followed by Deep Research, which is based on o3. The power of distillation was further demonstrated by s1-32B, achieving strong performance with budget forcing and scaling techniques.
On February 2, 2025, OpenAI released Deep Research, a tool that integrates reasoning and web search in a unified workflow, allowing users to perform complex research tasks that require multi-step reasoning and data synthesis from multiple sources. It is based on o3 and can take from 5 to 30 minutes to generate comprehensive reports.
== Supervised finetuning ==
A large language model (LLM) can be finetuned on a dataset of reasoning tasks with example solutions and reasoning traces. The fine-tuned model can then produce its own reasoning traces for new problems.
As it is expensive to get humans to write reasoning traces for a SFT dataset, researchers have proposed ways to automatically construct SFT datasets. In rejection sampling finetuning (RFT), new reasoning traces are collected via a loop:
Sample a task prompt
Generate many reasoning traces for the prompt.
Use a verifier to remove reasoning traces with the wrong final answer.
For each remaining trace, extract the set of equations appearing in it. Deduplicate the traces so that each one has a different set of equations. Add those to the dataset.
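The loop above can be sketched as follows. The equation-extraction regex and the toy prompt/generator/verifier are illustrative stand-ins introduced here, not part of any published implementation:

```python
import random
import re

def rft_round(sample_prompt, generate_trace, verify, k=8):
    """One rejection-sampling finetuning iteration: sample a prompt, generate k
    reasoning traces, keep verified ones, and deduplicate by equation set."""
    prompt = sample_prompt()
    traces = [generate_trace(prompt) for _ in range(k)]
    kept = [t for t in traces if verify(prompt, t)]    # drop wrong final answers
    dataset, seen = [], set()
    for t in kept:
        eqs = frozenset(re.findall(r"\S+=\S+", t))     # crude equation extraction
        if eqs not in seen:                            # keep one trace per equation set
            seen.add(eqs)
            dataset.append((prompt, t))
    return dataset

# Toy stand-ins, for illustration only.
sample_prompt = lambda: "2+3?"
generate_trace = lambda p: random.choice(["2+3=5 so 5", "2+3=6 so 6", "2+3=5 thus 5"])
verify = lambda p, t: t.endswith("5")
print(rft_round(sample_prompt, generate_trace, verify))
```

Here the two correct traces share the same equation set, so deduplication keeps at most one of them.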
== Reinforcement learning ==
A pretrained language model can be further trained by RL. In the RL formalism, a generative language model is a policy {\displaystyle \pi }. A prompt specifying a task to solve is an environmental state {\displaystyle x}, and the response of the language model to the prompt is an action {\displaystyle y}. The probability that the language model responds to {\displaystyle x} with {\displaystyle y} is {\displaystyle \pi (y|x)}.
Training a reasoning language model by RL then consists of constructing a reward model {\displaystyle r(x,y)} to guide the RL process. Intuitively, a reward model describes how desirable/appropriate/good the response is for the prompt. For a reasoning language model, the prompt describes a reasoning task, and the reward would be high if the response solves the task, and low if the response fails to solve the task.
For reasoning language models, the model's response {\displaystyle y} may be broken down into multiple steps, in which case it is written as {\displaystyle y_{1},y_{2},\dots ,y_{n}}.
Most recent systems use policy-gradient methods such as Proximal Policy Optimization (PPO) because PPO constrains each policy update with a clipped objective, which stabilises training for very large policies.
=== Outcome reward model ===
An outcome reward model, or outcome-supervised RM (ORM), is a reward model in which the reward of a step {\displaystyle r(x,y_{1},\dots ,y_{i})} is determined by the final answer: {\displaystyle r(x,y_{1},\dots ,y_{i})=r(x,y_{n})}. Such models are also called "verifiers".
For tasks with an answer that is easy to verify, such as word problems in math, the outcome reward can simply be binary: 1 if the final answer is correct, and 0 otherwise. If the answer is not easy to verify programmatically, humans can manually label answers as correct or not; the labels can then be used to finetune a base model that predicts the human label. For other kinds of tasks, such as creative writing, where task performance is not binary true/false, one can train a reward model by finetuning a base model on human-ranked preference data, as is used in reinforcement learning from human feedback. A base model can also be finetuned to predict, given a partial thinking trace {\displaystyle x,y_{1},\dots ,y_{m}}, whether the final answer would be correct or not. This can then be used as a binary reward signal.
The ORM is usually trained via logistic regression, i.e. minimizing cross-entropy loss.
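That objective can be written out directly. A minimal sketch, where the predicted probability and the 0/1 verifier label are hypothetical inputs:

```python
import math

def orm_cross_entropy(p_correct, label):
    """Binary cross-entropy between the ORM's predicted probability that the
    final answer is correct and the 0/1 verifier label."""
    eps = 1e-12  # numerical floor, an implementation detail
    p = min(max(p_correct, eps), 1 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

print(round(orm_cross_entropy(0.9, 1), 4))  # confident and right: low loss
print(round(orm_cross_entropy(0.9, 0), 4))  # confident and wrong: high loss
```

Minimizing this loss over labeled traces is exactly the logistic-regression training referred to above.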
Given a PRM, an ORM can be constructed by multiplying the total process reward during the reasoning trace, or by taking the minimum, or some other method to aggregate the process rewards. DeepSeek used a simple ORM for training the R1 model.
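Those aggregation choices can be sketched in a few lines; the per-step rewards below are hypothetical values for illustration:

```python
import math

def orm_from_prm(process_rewards, how="product"):
    """Aggregate per-step process rewards from a PRM into one outcome reward."""
    if how == "product":
        return math.prod(process_rewards)  # total process reward over the trace
    if how == "min":
        return min(process_rewards)        # weakest step dominates
    raise ValueError(f"unknown aggregation: {how}")

steps = [0.9, 0.8, 0.95]  # hypothetical per-step rewards from a PRM
print(round(orm_from_prm(steps, "product"), 3))  # 0.684
print(orm_from_prm(steps, "min"))                # 0.8
```

The product penalizes long traces with many mediocre steps, while the minimum flags the single worst step; which aggregation works better is an empirical question.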
=== Process reward model ===
A process reward model, or process-supervised RM (PRM), is a reward model in which the reward of a step {\displaystyle r(x,y_{1},\dots ,y_{i})} is determined by the steps so far: {\displaystyle (x,y_{1},\dots ,y_{i})}.
Given a partial thinking trace {\displaystyle x,y_{1},\dots ,y_{m}}, a human can be queried as to whether the steps so far are correct, regardless of whether the ultimate answer would be correct. This can then be used as a binary reward signal. As human labels are expensive, a base model can then be finetuned to predict the human labels. The PRM is usually trained by logistic regression on the human labels, i.e. by minimizing the cross-entropy loss between the true labels and the predicted labels.
As an example, in a 2023 OpenAI paper, 800K process labels were collected for 75K solution traces. A labeler would be presented with a solution trace, and keep labeling "positive" if the step progresses towards the solution, "neutral" if it is not wrong but does not progress towards the solution, and "negative" if it is a mistake. As soon as a "negative" label is entered, the labeler stops labeling that thinking trace and begins labeling another one. The idea was that, while labeling subsequent reasoning steps can provide even richer supervision signals, simply labeling up to the first error was sufficient for training a competent PRM.
As human labels are expensive, researchers have proposed methods to create PRMs without human labels on the processes. Inspired by Monte Carlo tree search (MCTS), the Math-Shepherd method samples multiple continuations until the end, starting at each reasoning step {\displaystyle y_{i}}, and sets the reward at that step to be either {\displaystyle {\frac {\#{\text{(correct answers)}}}{\#{\text{(total answers)}}}}} in the case of "soft estimation", or {\displaystyle {\begin{cases}1&{\text{if one of the answers is correct}}\\0&{\text{else}}\end{cases}}} in the case of "hard estimation". This creates process rewards using only an ORM, which is usually easier or cheaper to construct. After creating these process reward labels, a PRM can be trained on them. Some have tried a fully MCTS approach.
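The two estimators can be computed directly from sampled continuations; the rollout outcomes below are hypothetical verifier results for one reasoning step:

```python
def math_shepherd_step_reward(completions_correct, mode="soft"):
    """Per-step reward from sampled continuations, in the style of Math-Shepherd.

    completions_correct: booleans, one per sampled continuation from this
    reasoning step, indicating whether its final answer was verified correct.
    """
    if mode == "soft":  # fraction of continuations reaching a correct answer
        return sum(completions_correct) / len(completions_correct)
    return float(any(completions_correct))  # hard: 1 if any continuation is correct

rollouts = [True, False, True, False]  # hypothetical outcomes of 4 continuations
print(math_shepherd_step_reward(rollouts, "soft"))  # 0.5
print(math_shepherd_step_reward(rollouts, "hard"))  # 1.0
```

Note that only an outcome verifier is needed to produce these labels, which is the point of the method.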
One can also use an ORM to implicitly construct a PRM, similar to direct preference optimization.
=== Guided sampling ===
A trained ORM can be used to select the best response. The policy would rollout multiple responses, and a trained ORM would select the best response. This allows a simple form of test time compute scaling ("best-of-N").
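Best-of-N selection is a one-liner over rollouts. In the sketch below, the "policy" cycling through canned responses and the toy ORM are hypothetical stand-ins:

```python
def best_of_n(sample_response, orm_score, prompt, n=4):
    """Best-of-N: roll out n responses and return the one the ORM scores highest."""
    responses = [sample_response(prompt) for _ in range(n)]
    return max(responses, key=lambda y: orm_score(prompt, y))

# Toy stand-ins: a deterministic "policy" and an ORM that rewards one answer.
pool = iter(["answer: 4", "answer: 6", "answer: 5", "answer: 4"])
sample_response = lambda prompt: next(pool)
orm_score = lambda prompt, y: 1.0 if y == "answer: 5" else 0.0
best = best_of_n(sample_response, orm_score, "2+3?", n=4)
print(best)  # answer: 5
```

Increasing n spends more test-time compute for a better chance that some rollout scores highly, which is the scaling axis mentioned above.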
A trained PRM can also be used to guide reasoning by greedy tree search: the policy model generates several possible next reasoning steps, the PRM selects the best one, and the process repeats. This is similar to how a trained ORM can be used to select the best response. Beam search performs better than greedy search.
Lookahead search is another tree search method, where the policy model generates several possible next reasoning steps, then makes a (partial) rollout for each. If a solution endpoint is reached during the forward simulation, the process halts early. Otherwise, the PRM is used to calculate the total reward for each rollout. The step whose rollout has the highest reward is selected.
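One lookahead step can be sketched as follows; the candidate steps, rollout function, and reward table are toy stand-ins introduced for illustration:

```python
def lookahead_search_step(candidates, rollout, prm_total_reward, is_solution):
    """Pick the next reasoning step by partially rolling out each candidate.

    candidates: possible next steps proposed by the policy model.
    rollout(step): a (partial) continuation of the trace from that step.
    prm_total_reward(trace): total process reward of a rolled-out trace.
    is_solution(trace): whether the rollout reached a solution endpoint.
    """
    best_step, best_reward = None, float("-inf")
    for step in candidates:
        trace = rollout(step)
        if is_solution(trace):  # halt early on a completed solution
            return step
        reward = prm_total_reward(trace)
        if reward > best_reward:
            best_step, best_reward = step, reward
    return best_step

# Toy instances, for illustration only.
candidates = ["step A", "step B", "step C"]
rollout = lambda s: s + " ..."
prm_total_reward = lambda t: {"step A ...": 0.2, "step B ...": 0.7, "step C ...": 0.5}[t]
is_solution = lambda t: False
print(lookahead_search_step(candidates, rollout, prm_total_reward, is_solution))  # step B
```

Compared with greedy PRM search, the extra rollout gives each candidate step credit for where it is likely to lead, at the cost of more compute per step.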
Self-consistency can be combined with an ORM. The model is used to generate multiple answers, and the answers are clustered so that each cluster has the same answer. The ORM is used to compute the reward for each answer, the rewards within each cluster are summed, and the answer corresponding to the cluster with the highest summed reward is output.
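The procedure above amounts to an ORM-weighted vote; the sampled responses and scores in this sketch are hypothetical:

```python
from collections import defaultdict

def self_consistency_with_orm(samples, final_answer, orm_score):
    """Cluster sampled responses by final answer, sum ORM rewards per cluster,
    and output the answer of the highest-scoring cluster."""
    cluster_reward = defaultdict(float)
    for y in samples:
        cluster_reward[final_answer(y)] += orm_score(y)
    return max(cluster_reward, key=cluster_reward.get)

# Hypothetical responses and scores, for illustration.
samples = ["...so 7", "...so 9", "...so 7", "...so 9", "...so 9"]
final_answer = lambda y: y.split()[-1]
orm_score = lambda y: 0.9 if y == "...so 7" else 0.3
chosen = self_consistency_with_orm(samples, final_answer, orm_score)
print(chosen)  # cluster "7" scores 1.8 against 0.9 for cluster "9"
```

Unlike plain majority-vote self-consistency, a minority answer can win here if the ORM scores its traces much higher.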
== Benchmarks ==
Reasoning models generally outperform non-reasoning models in most benchmarks, especially on tasks requiring multi-step reasoning.
However, some benchmarks exclude reflective models due to longer response times.
=== Humanity's Last Exam ===
The HLE, a rigorous benchmark designed to assess expert-level reasoning across mathematics, humanities, and the natural sciences, reveals substantial performance gaps among models. State-of-the-art reasoning models have demonstrated low accuracy on HLE, highlighting significant room for improvement. In particular, the full reasoning model o3 achieved an accuracy of 26.6%, while its lighter counterpart, o3‑mini-high (evaluated on text‑only questions), reached 13%.
=== AIME ===
The American Invitational Mathematics Examination (AIME) benchmark, a challenging mathematics competition, demonstrates significant performance differences between model types. Non-reasoning models typically solve less than 30% of AIME problems. In contrast, models employing reasoning techniques score between 50% and 80%. While OpenAI's o1 maintained or slightly improved its accuracy from reported 2024 metrics to 2025 AIME results, o3-mini (high) achieved a higher accuracy (80%) at a significantly lower cost (approximately 12 times cheaper).
=== o3-mini performance ===
According to OpenAI's January 2025 report on o3-mini, adjustable "reasoning effort" significantly affects performance, particularly in STEM. Increasing reasoning effort from low to high boosts accuracy on benchmarks like AIME 2024, GPQA Diamond, and Codeforces, providing performance gains typically in the range of 10-30%. With high reasoning effort, o3-mini (high) achieved 87.3% in AIME (different from the MathArena AIME benchmark results), 79.7% in GPQA Diamond, 2130 Elo in Codeforces, and 49.3 in SWE-bench Verified.
== Drawbacks ==
=== Computational cost ===
Reasoning models require significantly more test-time compute than non-reasoning models. On the AIME benchmark, reasoning models were 10 to 74 times more expensive than non-reasoning counterparts.
=== Generation time ===
Reflective reasoning increases response times, with current models taking anywhere from three seconds to several minutes to generate an answer. As reasoning depth improves, future models may require even longer processing times.
== Models ==
=== OpenAI ===
o4-mini
o3 and o3-mini
o1 and o1-preview
=== Gemini ===
2.5 Pro and Flash
2.0 Flash Thinking
=== DeepSeek ===
R1 (based on V3)
R1-Lite-Preview (test version based on V2.5)
=== Qwen ===
QvQ-72B-Preview — an experimental visual reasoning model launched on December 24, 2024, which integrates image understanding with verbal chain-of-thought reasoning.
QwQ-32B-Preview — an experimental text-based reasoning model released in late November 2024 that emphasizes complex, step-by-step analysis.
=== Anthropic ===
Claude 3.7 Sonnet has an adjustable number of 'thinking' tokens.
=== xAI ===
Grok 3
=== Hugging Face ===
OlympicCoder-7B & 32B, as part of reproducing the R1 training openly (Open R1 project).
== See also ==
Automated reasoning
Reflection (artificial intelligence)
Large language model
== References ==
== External links ==
Fortes, Armando (2025-01-27), atfortes/Awesome-LLM-Reasoning, retrieved 2025-01-27
Huang, Jie; Chang, Kevin Chen-Chuan (2023-05-26), Towards Reasoning in Large Language Models: A Survey, arXiv:2212.10403
Besta, Maciej; Barth, Julia; Schreiber, Eric; Kubicek, Ales; Catarino, Afonso; Gerstenberger, Robert; Nyczyk, Piotr; Iff, Patrick; Li, Yueling (2025-01-23), Reasoning Language Models: A Blueprint, arXiv:2501.11223 | Wikipedia/Reasoning_language_model |
Gemini is a family of multimodal large language models (LLMs) developed by Google DeepMind, and the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name. In March 2025, Gemini 2.5 Pro Experimental was rated as highly competitive.
== History ==
=== Development ===
Google announced Gemini, a large language model (LLM) developed by subsidiary Google DeepMind, during the Google I/O keynote on May 10, 2023. It was positioned as a more powerful successor to PaLM 2, which was also unveiled at the event, with Google CEO Sundar Pichai stating that Gemini was still in its early developmental stages. Unlike other LLMs, Gemini was said to be unique in that it was not trained on a text corpus alone and was designed to be multimodal, meaning it could process multiple types of data simultaneously, including text, images, audio, video, and computer code. It had been developed as a collaboration between DeepMind and Google Brain, two branches of Google that had been merged as Google DeepMind the previous month. In an interview with Wired, DeepMind CEO Demis Hassabis touted Gemini's advanced capabilities, which he believed would allow the algorithm to trump OpenAI's ChatGPT, which runs on GPT-4 and whose growing popularity had been aggressively challenged by Google with LaMDA and Bard. Hassabis highlighted the strengths of DeepMind's AlphaGo program, which gained worldwide attention in 2016 when it defeated Go champion Lee Sedol, saying that Gemini would combine the power of AlphaGo and other Google–DeepMind LLMs.
In August 2023, The Information published a report outlining Google's roadmap for Gemini, revealing that the company was targeting a launch date of late 2023. According to the report, Google hoped to surpass OpenAI and other competitors by combining conversational text capabilities present in most LLMs with artificial intelligence–powered image generation, allowing it to create contextual images and be adapted for a wider range of use cases. Like Bard, Google co-founder Sergey Brin was summoned out of retirement to assist in the development of Gemini, along with hundreds of other engineers from Google Brain and DeepMind; he was later credited as a "core contributor" to Gemini. Because Gemini was being trained on transcripts of YouTube videos, lawyers were brought in to filter out any potentially copyrighted materials.
With news of Gemini's impending launch, OpenAI hastened its work on integrating GPT-4 with multimodal features similar to those of Gemini. The Information reported in September that several companies had been granted early access to "an early version" of the LLM, which Google intended to make available to clients through Google Cloud's Vertex AI service. The publication also stated that Google was arming Gemini to compete with both GPT-4 and Microsoft's GitHub Copilot.
=== Launch ===
On December 6, 2023, Pichai and Hassabis announced "Gemini 1.0" at a virtual press conference. It comprised three models: Gemini Ultra, designed for "highly complex tasks"; Gemini Pro, designed for "a wide range of tasks"; and Gemini Nano, designed for "on-device tasks". At launch, Gemini Pro and Nano were integrated into Bard and the Pixel 8 Pro smartphone, respectively, while Gemini Ultra was set to power "Bard Advanced" and become available to software developers in early 2024. Other products that Google intended to incorporate Gemini into included Search, Ads, Chrome, Duet AI on Google Workspace, and AlphaCode 2. It was made available only in English. Touted as Google's "largest and most capable AI model" and designed to emulate human behavior, the company stated that Gemini would not be made widely available until the following year due to the need for "extensive safety testing". Gemini was trained on and powered by Google's Tensor Processing Units (TPUs), and the name is in reference to the DeepMind–Google Brain merger as well as NASA's Project Gemini.
Gemini Ultra was said to have outperformed GPT-4, Anthropic's Claude 2, Inflection AI's Inflection-2, Meta's LLaMA 2, and xAI's Grok 1 on a variety of industry benchmarks, while Gemini Pro was said to have outperformed GPT-3.5. Gemini Ultra was also the first language model to outperform human experts on the 57-subject Massive Multitask Language Understanding (MMLU) test, obtaining a score of 90%. Gemini Pro was made available to Google Cloud customers on AI Studio and Vertex AI on December 13, while Gemini Nano will be made available to Android developers as well. Hassabis further revealed that DeepMind was exploring how Gemini could be "combined with robotics to physically interact with the world". In accordance with an executive order signed by U.S. President Joe Biden in October, Google stated that it would share testing results of Gemini Ultra with the federal government of the United States. Similarly, the company was engaged in discussions with the government of the United Kingdom to comply with the principles laid out at the AI Safety Summit at Bletchley Park in November.
=== Updates ===
Google partnered with Samsung to integrate Gemini Nano and Gemini Pro into its Galaxy S24 smartphone lineup in January 2024. The following month, Bard and Duet AI were unified under the Gemini brand, with "Gemini Advanced with Ultra 1.0" debuting via a new "AI Premium" tier of the Google One subscription service. Gemini Pro also received a global launch.
In February 2024, Google launched Gemini 1.5 in a limited capacity, positioned as a more powerful and capable model than 1.0 Ultra. This "step change" was achieved through various technical advancements, including a new architecture, a mixture-of-experts approach, and a larger one-million-token context window, which equates to roughly an hour of silent video, 11 hours of audio, 30,000 lines of code, or 700,000 words. The same month, Google debuted Gemma, a family of free and open-source LLMs that serve as a lightweight version of Gemini. They come in two sizes, with two billion and seven billion parameters, respectively. Multiple publications viewed this as a response to Meta and others open-sourcing their AI models, and a stark reversal from Google's longstanding practice of keeping its AI proprietary. Google announced an additional model, Gemini 1.5 Flash, on May 14 at the 2024 I/O keynote.
Gemma 2 was released on June 27, 2024.
Two updated Gemini models, Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002, were released on September 24, 2024.
On December 11, 2024, Google announced Gemini 2.0 Flash Experimental, a significant update to its Gemini AI model. This iteration boasts improved speed and performance over its predecessor, Gemini 1.5 Flash. Key features include a Multimodal Live API for real-time audio and video interactions, enhanced spatial understanding, native image and controllable text-to-speech generation (with watermarking), and integrated tool use, including Google Search. It also introduces improved agentic capabilities, a new Google Gen AI SDK, and "Jules," an experimental AI coding agent for GitHub. Additionally, Google Colab is integrating Gemini 2.0 to generate data science notebooks from natural language. Gemini 2.0 was available through the Gemini chat interface for all users as "Gemini 2.0 Flash experimental".
On January 30, 2025, Google released Gemini 2.0 Flash as the new default model, with Gemini 1.5 Flash still available for usage. This was followed by the release of Gemini 2.0 Pro on February 5, 2025. Additionally, Google released Gemini 2.0 Flash Thinking Experimental, which details the language model's thinking process when responding to prompts.
Gemma 3 was released on March 12, 2025. The next day, Google announced that Gemini in Android Studio would be able to understand simple UI mockups and transform them into working Jetpack Compose code.
Gemini 2.5 Pro Experimental was released on March 25, 2025, described by Google as its most intelligent AI model yet, featuring enhanced reasoning and coding capabilities, and a "thinking model" capable of reasoning through steps before responding, using techniques like chain-of-thought prompting, whilst maintaining native multimodality and launching with a 1 million token context window.
At Google I/O 2025, Google announced significant updates to its Gemini core models. Gemini 2.5 Flash became the default model, delivering faster responses. Gemini 2.5 Pro was introduced as the most advanced Gemini model, featuring reasoning, coding capabilities, and the new Deep Think mode for complex tasks. Both 2.5 Pro and Flash support native audio output and improved security.
General availability for Gemini 2.5 Pro and Flash is scheduled for June 2025.
=== Model versions ===
The following table lists the main model versions of Gemini, describing the significant changes included with each version:
== Technical specifications ==
The first generation of Gemini ("Gemini 1") has three models, with the same architecture. They are decoder-only transformers, with modifications to allow efficient training and inference on TPUs. They have a context length of 32,768 tokens, with multi-query attention. Two versions of Gemini Nano, Nano-1 (1.8 billion parameters) and Nano-2 (3.25 billion parameters), are distilled from larger Gemini models, designed for use by edge devices such as smartphones. As Gemini is multimodal, each context window can contain multiple forms of input. The different modes can be interleaved and do not have to be presented in a fixed order, allowing for a multimodal conversation. For example, the user might open the conversation with a mix of text, picture, video, and audio, presented in any order, and Gemini might reply with the same free ordering. Input images may be of different resolutions, while video is inputted as a sequence of images. Audio is sampled at 16 kHz and then converted into a sequence of tokens by the Universal Speech Model. Gemini's dataset is multimodal and multilingual, consisting of "web documents, books, and code, and includ[ing] image, audio, and video data".
The second generation of Gemini ("Gemini 1.5") has two models. Gemini 1.5 Pro is a multimodal sparse mixture-of-experts, with a context length in the millions, while Gemini 1.5 Flash is distilled from Gemini 1.5 Pro, with a context length above 2 million.
Gemma 2 27B is trained on web documents, code, science articles. Gemma 2 9B was distilled from 27B. Gemma 2 2B was distilled from a 7B model that remained unreleased.
As of February 2025, the models released include
Gemma 1 (2B, 7B)
CodeGemma (2B and 7B) - Gemma 1 finetuned for code generation.
Gemma 2 (2B, 9B, 27B) - 27B trained from scratch; 2B and 9B distilled from larger models.
Gemma 3 (1B, 4B, 12B, 27B) - Upgrade to Gemma 2, capable of multilinguality (supports 140 languages), longer context length (128k tokens), multimodality, and function calling.
RecurrentGemma (2B, 9B) - Griffin-based, instead of Transformer-based.
PaliGemma (3B) - A vision-language model that takes text and image inputs, and outputs text. It is made by connecting a SigLIP image encoder with a Gemma language model.
PaliGemma 2 (3B, 10B, 28B) - Upgrade to PaliGemma, capable of more vision-language tasks.
== Reception ==
Gemini's launch was preluded by months of intense speculation and anticipation, which MIT Technology Review described as "peak AI hype". In August 2023, Dylan Patel and Daniel Nishball of research firm SemiAnalysis penned a blog post declaring that the release of Gemini would "eat the world" and outclass GPT-4, prompting OpenAI CEO Sam Altman to ridicule the duo on X (formerly Twitter). Business magnate Elon Musk, who co-founded OpenAI, weighed in, asking, "Are the numbers wrong?" Hugh Langley of Business Insider remarked that Gemini would be a make-or-break moment for Google, writing: "If Gemini dazzles, it will help Google change the narrative that it was blindsided by Microsoft and OpenAI. If it disappoints, it will embolden critics who say Google has fallen behind."
Reacting to its unveiling in December 2023, University of Washington professor emeritus Oren Etzioni predicted a "tit-for-tat arms race" between Google and OpenAI. Professor Alexei Efros of the University of California, Berkeley praised the potential of Gemini's multimodal approach, while scientist Melanie Mitchell of the Santa Fe Institute called Gemini "very sophisticated". Professor Chirag Shah of the University of Washington was less impressed, likening Gemini's launch to the routineness of Apple's annual introduction of a new iPhone. Similarly, Stanford University's Percy Liang, the University of Washington's Emily Bender, and the University of Galway's Michael Madden cautioned that it was difficult to interpret benchmark scores without insight into the training data used. Writing for Fast Company, Mark Sullivan opined that Google had the opportunity to challenge the iPhone's dominant market share, believing that Apple was unlikely to have the capacity to develop functionality similar to Gemini with its Siri virtual assistant. Google shares spiked by 5.3 percent the day after Gemini's launch.
Google faced criticism for a demonstrative video of Gemini, which was not conducted in real time.
Gemini 2.5 Pro Experimental debuted at the top position on the LMArena leaderboard, a benchmark measuring human preference, indicating strong performance and output quality. The model achieved state-of-the-art or highly competitive results across various benchmarks evaluating reasoning, knowledge, science, math, coding, and long-context performance, such as Humanity's Last Exam, GPQA, AIME 2025, SWE-bench and MRCR. Initial reviews highlighted its improved reasoning capabilities and performance gains compared to previous versions. Published benchmarks also showed areas where contemporary models from competitors like Anthropic, xAI, or OpenAI held advantages.
== See also ==
Gato, a multimodal neural network developed by DeepMind
Gemini Robotics
== References ==
== Further reading ==
== External links ==
Official website
Press release via The Keyword
White paper for 1.0 and 1.5 | Wikipedia/Gemini_(language_model) |
In machine learning, the Highway Network was the first working very deep feedforward neural network with hundreds of layers, much deeper than previous neural networks.
It uses skip connections modulated by learned gating mechanisms to regulate information flow, inspired by long short-term memory (LSTM) recurrent neural networks.
The advantage of the Highway Network over other deep learning architectures is its ability to overcome or partially prevent the vanishing gradient problem, thus improving its optimization. Gating mechanisms are used to facilitate information flow across the many layers ("information highways").
Highway Networks have found use in text sequence labeling and speech recognition tasks.
In 2014, the state of the art was training deep neural networks with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train such networks: the Highway Network (published in May), and the residual neural network, or ResNet (December). ResNet behaves like an open-gated Highway Net.
== Model ==
The model has two gates in addition to the H(W_H, x) gate: the transform gate T(W_T, x) and the carry gate C(W_C, x). The latter two gates are non-linear transfer functions (specifically sigmoid by convention). The function H can be any desired transfer function.
The carry gate is defined as C(W_C, x) = 1 − T(W_T, x), while the transform gate is just a gate with a sigmoid transfer function.
== Structure ==
The structure of a hidden layer in the Highway Network follows the equation:
y = H(x, W_H) · T(x, W_T) + x · C(x, W_C)
  = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T))
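The layer equation above can be sketched directly. The following toy implementation covers a single scalar unit; the choice of tanh for H and all weight values are assumptions made for the example, not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def highway_layer(x, w_h, b_h, w_t, b_t):
    """One scalar highway unit: y = H(x) * T(x) + x * (1 - T(x))."""
    h = math.tanh(w_h * x + b_h)   # H(x, W_H): any transfer function (tanh here)
    t = sigmoid(w_t * x + b_t)     # T(x, W_T): sigmoid transform gate
    return h * t + x * (1.0 - t)   # carry gate tied to C = 1 - T

# A strongly negative gate bias shuts the transform gate, so the
# layer simply carries its input through unchanged:
y = highway_layer(2.0, w_h=0.5, b_h=0.0, w_t=0.0, b_t=-100.0)  # y ≈ 2.0
```

In a real network x is a vector and W_H, W_T are weight matrices; starting the transform-gate bias at a negative value, so that early layers behave close to the identity, is the initialization suggested in the Highway Network paper.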
== Related work ==
Sepp Hochreiter analyzed the vanishing gradient problem in 1991 and attributed to it the reason why deep learning did not work well.
To overcome this problem, Long Short-Term Memory (LSTM) recurrent neural networks have residual connections with a weight of 1.0 in every LSTM cell (called the constant error carrousel) to compute y_{t+1} = F(x_t) + x_t. During backpropagation through time, this becomes the residual formula y = F(x) + x for feedforward neural networks. This enables training very deep recurrent neural networks with a very long time span t. A later LSTM version published in 2000 modulates the identity LSTM connections by so-called "forget gates" such that their weights are not fixed to 1.0 but can be learned. In experiments, the forget gates were initialized with positive bias weights, thus being opened, addressing the vanishing gradient problem.
As long as the forget gates of the 2000 LSTM are open, it behaves like the 1997 LSTM.
The Highway Network of May 2015 applies these principles to feedforward neural networks. It was reported to be "the first very deep feedforward network with hundreds of layers".
It is like a 2000 LSTM with forget gates unfolded in time, while the later Residual Nets have no equivalent of forget gates and are like the unfolded original 1997 LSTM.
If the skip connections in Highway Networks are "without gates," or if their gates are kept open (activation 1.0), they become Residual Networks.
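A minimal sketch of this relationship: with the gate activation fixed at 1.0, the highway update collapses to the residual form y = F(x) + x (using tanh as F is an arbitrary choice for illustration):

```python
import math

def residual_layer(x, f):
    """Open-gated highway layer, i.e. a residual connection: y = F(x) + x."""
    return f(x) + x

y = residual_layer(1.5, math.tanh)  # tanh(1.5) + 1.5 ≈ 2.405
```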
The residual connection is a special case of the "short-cut connection" or "skip connection" by Rosenblatt (1961) and Lang & Witbrock (1988), which has the form x ↦ F(x) + Ax. Here the randomly initialized weight matrix A does not have to be the identity mapping. Every residual connection is a skip connection, but almost all skip connections are not residual connections.
The original Highway Network paper not only introduced the basic principle for very deep feedforward networks, but also included experimental results with networks of 20, 50, and 100 layers, and mentioned ongoing experiments with up to 900 layers. Networks with 50 or 100 layers had lower training error than their plain-network counterparts, but no lower training error than their 20-layer counterpart (on the MNIST dataset; Figure 1 in the paper). No improvement on test accuracy was reported for networks deeper than 19 layers (on the CIFAR-10 dataset; Table 1 in the paper). The ResNet paper, however, provided strong experimental evidence of the benefits of going deeper than 20 layers. It argued that the identity mapping without modulation is crucial, and mentioned that modulation in the skip connection can still lead to vanishing signals in forward and backward propagation (Section 3 in the paper). This is also why the forget gates of the 2000 LSTM were initially opened through positive bias weights: as long as the gates are open, it behaves like the 1997 LSTM. Similarly, a Highway Net whose gates are opened through strongly positive bias weights behaves like a ResNet. The skip connections used in modern neural networks (e.g., Transformers) are dominantly identity mappings.
== References == | Wikipedia/Highway_network |
Ideogram is a freemium text-to-image model developed by Ideogram, Inc. that uses deep learning methodologies to generate digital images from natural language descriptions known as prompts. The model is notable for generating legible text within images, an area where many other text-to-image models struggle.
== History ==
Ideogram was founded in 2022 by Mohammad Norouzi, William Chan, Chitwan Saharia, and Jonathan Ho to develop a better text-to-image model.
It was first released with its 0.1 model on August 22, 2023, after receiving $16.5 million in seed funding led by Andreessen Horowitz and Index Ventures.
In February 2024, Ideogram raised $80 million, following the release of its 1.0 model that same year.
In summer 2024, Aidan Gomez joined Ideogram.
In August 2024, Ideogram released its 2.0 model, which offers several styles, such as realistic, design, 3D, and anime, and improved capability in generating text.
In February 2025, Ideogram released its 2a model, designed for speed and optimized for graphic design and photography generation.
In March 2025, Ideogram released its 3.0 model. This model has improved realism and understanding of complex text layout.
== References ==
== External links ==
Official website | Wikipedia/Ideogram_(text-to-image_model) |
Meta-learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.
Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.
By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique of metaheuristics, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987) and Yoshua Bengio et al.'s work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta-learning system using genetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc.
== Definition ==
A proposed definition for a meta-learning system combines three requirements:
The system must include a learning subsystem.
Experience is gained by exploiting meta-knowledge extracted in a previous learning episode on a single dataset, or from different domains.
Learning bias must be chosen dynamically.
Bias refers to the assumptions that influence the choice of explanatory hypotheses and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias.
Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space (e.g., represent hypotheses using linear functions only).
Procedural bias imposes constraints on the ordering of the inductive hypotheses (e.g., preferring smaller hypotheses).
== Common approaches ==
There are three common approaches:
using (cyclic) networks with external or internal memory (model-based)
learning effective distance metrics (metrics-based)
explicitly optimizing model parameters for fast learning (optimization-based).
=== Model-Based ===
Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model.
==== Memory-Augmented Neural Networks ====
A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.
==== Meta Networks ====
Meta Networks (MetaNet) learn meta-level knowledge across tasks and shift their inductive biases via fast parameterization for rapid generalization.
=== Metric-Based ===
The core idea in metric-based meta-learning is similar to nearest neighbors algorithms, in which the weights are generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving.
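As a toy illustration of this nearest-neighbour view, the sketch below classifies a query by letting each support example vote with a Gaussian-kernel weight on its distance to the query. The 1-D inputs, labels, and bandwidth are invented for the example; in metric-based meta-learning the distance would be computed in a learned embedding space:

```python
import math

def kernel_weighted_predict(support, query, bandwidth=1.0):
    """Weight each support example (x, label) by a Gaussian kernel on its
    distance to the query; return the label with the largest total weight."""
    votes = {}
    for x, label in support:
        w = math.exp(-((x - query) ** 2) / (2 * bandwidth ** 2))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

label = kernel_weighted_predict([(0.0, "a"), (0.3, "a"), (2.0, "b")], query=0.2)  # → "a"
```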
==== Convolutional Siamese Neural Network ====
A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function on top that learns the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters.
==== Matching Networks ====
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
==== Relation Network ====
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.
==== Prototypical Networks ====
Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.
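The prototype computation can be illustrated in a few lines: each class prototype is the mean of its embedded support points, and a query is assigned to the nearest prototype. The 2-D "embeddings" and class names below are invented for the example; a real Prototypical Network would first embed raw inputs with a learned network:

```python
def classify_by_prototype(support, query):
    """Assign `query` to the class whose prototype (mean of that class's
    support embeddings) is nearest in squared Euclidean distance."""
    prototypes = {
        label: tuple(sum(p[i] for p in points) / len(points)
                     for i in range(len(points[0])))
        for label, points in support.items()
    }
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda label: sqdist(prototypes[label], query))

support = {"cat": [(0.0, 0.0), (0.2, 0.0)], "dog": [(1.0, 1.0), (1.2, 1.0)]}
label = classify_by_prototype(support, query=(0.1, 0.2))  # → "cat"
```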
=== Optimization-Based ===
Optimization-based meta-learning algorithms adjust the optimization procedure so that the model can learn well from only a few examples.
==== LSTM Meta-Learner ====
The LSTM-based meta-learner learns the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training.
==== Temporal Discreteness ====
Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent.
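A deliberately tiny sketch of the MAML objective: the meta-loss is the task loss evaluated after one inner gradient step, and the shared initialization is updated by descending that meta-loss. The scalar quadratic tasks (w − c)² and the finite-difference meta-gradient are simplifications chosen for illustration; MAML proper backpropagates through the inner update:

```python
def inner_update(w, c, alpha=0.1):
    # one gradient step on the task loss (w - c)^2
    return w - alpha * 2 * (w - c)

def meta_loss(w, tasks, alpha=0.1):
    # loss *after* adaptation, summed over tasks (each task is a target c)
    return sum((inner_update(w, c, alpha) - c) ** 2 for c in tasks)

def maml_step(w, tasks, meta_lr=0.25, eps=1e-6):
    # meta-gradient via central finite differences, for clarity
    g = (meta_loss(w + eps, tasks) - meta_loss(w - eps, tasks)) / (2 * eps)
    return w - meta_lr * g

w = 5.0
for _ in range(100):
    w = maml_step(w, tasks=[-1.0, 1.0])
# w is driven toward 0, the initialization from which a single inner
# step adapts equally well to both tasks
```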
==== Reptile ====
Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on meta-optimization through gradient descent and both are model-agnostic.
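Reptile's update is simple enough to state in full: adapt a copy of the weights to each sampled task with plain SGD, then move the initialization toward the average of the adapted weights. A scalar least-squares sketch, with toy tasks and step sizes invented for the example:

```python
def sgd_on_task(w, task, lr=0.1, steps=5):
    # inner loop: plain gradient descent on the task loss (w*x - y)^2
    x, y = task
    for _ in range(steps):
        w -= lr * 2 * (w * x - y) * x
    return w

def reptile_step(w, tasks, meta_lr=0.5):
    # outer loop: move the initialization toward the task-adapted weights
    adapted = [sgd_on_task(w, task) for task in tasks]
    avg = sum(adapted) / len(adapted)
    return w + meta_lr * (avg - w)

# two toy regression tasks, y = 2x and y = 4x, each observed at x = 1;
# one meta-step moves the initialization toward both tasks' solutions
w = reptile_step(0.0, tasks=[(1.0, 2.0), (1.0, 4.0)])
```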
== Examples ==
Some approaches which have been viewed as instances of meta-learning:
Recurrent neural networks (RNNs) are universal computers. In 1993, Jürgen Schmidhuber showed how "self-referential" RNNs can in principle learn by backpropagation to run their own weight change algorithm, which may be quite different from backpropagation. In 2001, Sepp Hochreiter & A.S. Younger & P.R. Conwell built a successful supervised meta-learner based on Long short-term memory RNNs. It learned through backpropagation a learning algorithm for quadratic functions that is much faster than backpropagation. Researchers at Deepmind (Marcin Andrychowicz et al.) extended this approach to optimization in 2017.
In the 1990s, Meta Reinforcement Learning or Meta RL was achieved in Schmidhuber's research group through self-modifying policies written in a universal programming language that contains special instructions for changing the policy itself. There is a single lifelong trial. The goal of the RL agent is to maximize reward. It learns to accelerate reward intake by continually improving its own learning algorithm which is part of the "self-referential" policy.
An extreme type of Meta Reinforcement Learning is embodied by the Gödel machine, a theoretical construct which can inspect and modify any part of its own software which also contains a general theorem prover. It can achieve recursive self-improvement in a provably optimal way.
Model-Agnostic Meta-Learning (MAML) was introduced in 2017 by Chelsea Finn et al. Given a sequence of tasks, the parameters of a given model are trained such that few iterations of gradient descent with few training data from a new task will lead to good generalization performance on that task. MAML "trains the model to be easy to fine-tune." MAML was successfully applied to few-shot image classification benchmarks and to policy-gradient-based reinforcement learning.
Variational Bayes-Adaptive Deep RL (VariBAD) was introduced in 2019. While MAML is optimization-based, VariBAD is a model-based method for meta reinforcement learning, and leverages a variational autoencoder to capture the task information in an internal memory, thus conditioning its decision making on the task.
When addressing a set of tasks, most meta learning approaches optimize the average score across all tasks. Hence, certain tasks may be sacrificed in favor of the average score, which is often unacceptable in real-world applications. By contrast, Robust Meta Reinforcement Learning (RoML) focuses on improving low-score tasks, increasing robustness to the selection of task. RoML works as a meta-algorithm, as it can be applied on top of other meta learning algorithms (such as MAML and VariBAD) to increase their robustness. It is applicable to both supervised meta learning and meta reinforcement learning.
Discovering meta-knowledge works by inducing knowledge (e.g. rules) that expresses how each learning method will perform on different learning problems. The metadata is formed by characteristics of the data (general, statistical, information-theoretic,... ) in the learning problem, and characteristics of the learning algorithm (type, parameter settings, performance measures,...). Another learning algorithm then learns how the data characteristics relate to the algorithm characteristics. Given a new learning problem, the data characteristics are measured, and the performance of different learning algorithms are predicted. Hence, one can predict the algorithms best suited for the new problem.
Stacked generalisation works by combining multiple (different) learning algorithms. The metadata is formed by the predictions of those different algorithms. Another learning algorithm learns from this metadata to predict which combinations of algorithms give generally good results. Given a new learning problem, the predictions of the selected set of algorithms are combined (e.g. by (weighted) voting) to provide the final prediction. Since each algorithm is deemed to work on a subset of problems, a combination is hoped to be more flexible and able to make good predictions.
Boosting is related to stacked generalisation, but uses the same algorithm multiple times, where the examples in the training data get different weights over each run. This yields different predictions, each focused on rightly predicting a subset of the data, and combining those predictions leads to better (but more expensive) results.
Dynamic bias selection works by altering the inductive bias of a learning algorithm to match the given problem. This is done by altering key aspects of the learning algorithm, such as the hypothesis representation, heuristic formulae, or parameters. Many different approaches exist.
Inductive transfer studies how the learning process can be improved over time. Metadata consists of knowledge about previous learning episodes and is used to efficiently develop an effective hypothesis for a new task. A related approach is called learning to learn, in which the goal is to use acquired knowledge from one domain to help learning in other domains.
Other approaches using metadata to improve automatic learning are learning classifier systems, case-based reasoning and constraint satisfaction.
Some initial, theoretical work has been initiated to use Applied Behavioral Analysis as a foundation for agent-mediated meta-learning about the performances of human learners, and adjust the instructional course of an artificial agent.
AutoML such as Google Brain's "AI building AI" project, which according to Google briefly exceeded existing ImageNet benchmarks in 2017.
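The final combination step of stacked generalisation described above can be sketched as weighted voting over base-learner predictions. The labels and weights are invented for the example; in practice the weights would themselves be learned by the meta-learner from the base learners' held-out predictions:

```python
from collections import Counter

def stacked_predict(base_predictions, weights):
    """Combine base-learner class predictions by weighted voting."""
    votes = Counter()
    for pred, w in zip(base_predictions, weights):
        votes[pred] += w
    return votes.most_common(1)[0][0]

# three hypothetical base learners and their (assumed pre-learned) weights
label = stacked_predict(["spam", "ham", "spam"], weights=[0.5, 0.8, 0.4])  # → "spam"
```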
== References ==
== External links ==
Metalearning article in Scholarpedia
Vilalta, R.; Drissi, Y. (2002). "A perspective view and survey of meta-learning" (PDF). Artificial Intelligence Review. 18 (2): 77–95. doi:10.1023/A:1019956318069.
Giraud-Carrier, C.; Keller, J. (2002). "Meta-Learning". In Meij, J. (ed.). Dealing with the data flood. The Hague: STT/Beweton.
Brazdil, P.; Giraud-Carrier, C.; Soares, C.; Vilalta, R. (2009). "Metalearning: Concepts and Systems". Metalearning: applications to data mining. Springer. ISBN 978-3-540-73262-4.
Video courses about Meta-Learning with step-by-step explanation of MAML, Prototypical Networks, and Relation Networks. | Wikipedia/Meta-learning_(computer_science) |
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network that is used in natural language processing by machines. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs had these characteristics and are sometimes referred to broadly as GPTs.
The first GPT was introduced in 2018 by OpenAI. OpenAI has released significant GPT foundation models that have been sequentially numbered, to comprise its "GPT-n" series. Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4o, was released in May 2024. Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following—which in turn power the ChatGPT chatbot service.
The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI, and seven models created by Cerebras in 2023. Companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM) and Bloomberg's "BloombergGPT" (for finance).
== History ==
=== Initial developments ===
Generative pretraining (GP) was a long-established concept in machine learning applications. It was originally used as a form of semi-supervised learning, as the model is trained first on an unlabeled dataset (pretraining step) by learning to generate datapoints in the dataset, and then it is trained to classify a labeled dataset.
There were three main types of early GP. The hidden Markov models learn a generative model of sequences for downstream applications. For example, in speech recognition, a trained HMM infers the most likely hidden sequence for a speech signal, and the hidden sequence is taken as the phonemes of the speech signal. These were developed in the 1970s and became widely applied in speech recognition in the 1980s.
The compressors learn to compress data such as images and textual sequences, and the compressed data serves as a good representation for downstream applications such as facial recognition. The autoencoders similarly learn a latent representation of data for later downstream applications such as speech recognition. The connection between autoencoders and algorithmic compressors was noted in 1993.
During the 2010s, the problem of machine translation was solved by recurrent neural networks, with attention mechanism added. This was optimized into the transformer architecture, published by Google researchers in Attention Is All You Need (2017). That development led to the emergence of large language models such as BERT (2018) which was a pre-trained transformer (PT) but not designed to be generative (BERT was an "encoder-only" model). Also in 2018, OpenAI published Improving Language Understanding by Generative Pre-Training, which introduced GPT-1, the first in its GPT series.
Previously in 2017, some of the authors who would later work on GPT-1 worked on generative pre-training of language with LSTM, which resulted in a model that could represent text with vectors that could easily be fine-tuned for downstream applications.
Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employed supervised learning from large amounts of manually-labeled data. The reliance on supervised learning limited their use on datasets that were not well-annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.
The semi-supervised approach OpenAI employed to make a large-scale generative system—and which it was the first to do with a transformer model—involved two stages: an unsupervised generative "pretraining" stage to set initial parameters using a language modeling objective, and a supervised discriminative "fine-tuning" stage to adapt these parameters to a target task.
=== Later developments ===
Regarding more recent GPT foundation models, OpenAI published its first versions of GPT-3 in July 2020. There were three models, with 1B, 6.7B, 175B parameters, respectively named babbage, curie, and davinci (giving initials B, C, and D).
In July 2021, OpenAI published Codex, a task-specific GPT model targeted for programming applications. This was developed by fine-tuning a 12B parameter version of GPT-3 (different from previous GPT-3 models) using code from GitHub.
In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), named davinci-instruct-beta (175B) and text-davinci-001, and then started beta testing code-davinci-002. text-davinci-002 was instruction-tuned from code-davinci-002. Both text-davinci-003 and ChatGPT were released in November 2022, with both building upon text-davinci-002 via reinforcement learning from human feedback (RLHF). text-davinci-003 is trained for following instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user.
OpenAI's most recent GPT foundation model, GPT-4, was released on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI's API. Other producers of GPT foundation models include EleutherAI (with a series of models starting in March 2021) and Cerebras (with seven models released in March 2023).
== Foundation models ==
A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.
Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent from that is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").
Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API, and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs). Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA.
Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text). Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion and parallel decoding. Such kinds of models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.
== Task-specific models ==
A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering.
An important example of this is fine-tuning models to follow instructions, which is of course a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced "InstructGPT"—a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models. Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings. Other instruction-tuned models have been released by others, including a fully open version.
Another (related) kind of task-specific models are chatbots, which engage in human-like conversation. In November 2022, OpenAI launched ChatGPT—an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT. They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft), and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM).
Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, like developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user. This is known as an AI agent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.
=== Multimodality ===
Generative transformer-based systems can also be targeted for tasks involving modalities beyond text. For example, Microsoft's "Visual ChatGPT" combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text. Also, advances in text-to-speech technology offer tools for audio content creation when used in conjunction with foundational GPT language models.
=== Domain-specificity ===
GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:
EinsteinGPT – for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5)
BloombergGPT – for the financial domain, to aid with financial news and information (uses "freely available" AI methods, combined with their proprietary data)
Khanmigo – described as a GPT version for tutoring, in the education domain, it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4)
SlackGPT – for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API)
BioGPT – for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)
Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface, and Google Workspace has available add-ons such as "GPT for Sheets and Docs"—which is reported to aid use of spreadsheet functionality in Google Sheets.
== Brand issues ==
OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI. In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding. In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist). As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT", but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are being called GPTs on the OpenAI site. OpenAI's terms of service says that its subscribers may use "GPT" in the names of these, although it's "discouraged".
Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term "GPT" in the field of AI. OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023. In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic. As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S. trademark does not preclude some level of common-law trademark rights in the U.S., and/or trademark rights in other countries.
For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to its specific offerings, rather than merely being a broader technical term for the kind of technology. Some media reports suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT, for which OpenAI has separately sought protection (and which it has sought to enforce more strongly). Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted, as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers. In any event, to whatever extent exclusive rights in the term may exist in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion. If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use could still permit continued non-brand-related usage.
== Selected bibliography ==
This section lists the main official publications from OpenAI and Microsoft on their GPT models.
GPT-1: report, GitHub release.
GPT-2: blog announcement, report on its decision of "staged release", GitHub release.
GPT-3: report. No GitHub or any other form of code release thenceforth.
WebGPT: blog announcement, report.
InstructGPT: blog announcement, report.
ChatGPT: blog announcement (no report).
GPT-4: blog announcement, reports, model card.
GPT-4o: blog announcement.
GPT-4.5: blog announcement.
GPT-4.1: blog announcement.
== See also ==
Cyc
Gemini
== References == | Wikipedia/Generative_pre-trained_transformer |
Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards, and takes actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complex group dynamics.
Multi-agent reinforcement learning is closely related to game theory and especially repeated games, as well as multi-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that obtains the highest reward for a single agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation, reciprocity, equity, social influence, language and discrimination.
== Definition ==
Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP). Fix a set of agents {\displaystyle I=\{1,...,N\}}. We then define:
A set {\displaystyle S} of environment states.
One set {\displaystyle {\mathcal {A}}_{i}} of actions for each of the agents {\displaystyle i\in I=\{1,...,N\}}.
{\displaystyle P_{\overrightarrow {a}}(s,s')=\Pr(s_{t+1}=s'\mid s_{t}=s,{\overrightarrow {a}}_{t}={\overrightarrow {a}})} is the probability of transition (at time {\displaystyle t}) from state {\displaystyle s} to state {\displaystyle s'} under joint action {\displaystyle {\overrightarrow {a}}}.
{\displaystyle {\overrightarrow {R}}_{\overrightarrow {a}}(s,s')} is the immediate joint reward after the transition from {\displaystyle s} to {\displaystyle s'} with joint action {\displaystyle {\overrightarrow {a}}}.
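The definitions above can be illustrated with a short Python sketch of a minimal Markov game (all names here are illustrative, not from any particular library): two agents, a single environment state, and a joint-action reward table taken from the prisoner's dilemma. With only one state, the transition function P(s' | s, a) is trivial.

```python
import random

# Joint-action payoff table (prisoner's dilemma): each entry maps a
# joint action to a joint reward vector, one reward per agent.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def step(state, joint_action):
    """Return (next_state, joint_reward) for a joint action.
    With a single state, s' = s deterministically."""
    joint_reward = PAYOFFS[joint_action]
    return state, joint_reward

state = "s0"
joint_action = (random.choice("CD"), random.choice("CD"))
state, rewards = step(state, joint_action)
```

In richer settings, `step` would sample the next state from the transition distribution, and each agent would observe only part of the state, as in the partially observable case below.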
In settings with perfect information, such as the games of chess and Go, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications like self-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observable stochastic game in the general case, and the decentralized POMDP in the cooperative case.
== Cooperation vs. competition ==
When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior:
In pure competition settings, the agents' rewards are exactly opposite to each other, and therefore they are playing against each other.
Pure cooperation settings are the other extreme, in which agents get the exact same rewards, and therefore they are playing with each other.
Mixed-sum settings cover all the games that combine elements of both cooperation and competition.
=== Pure competition settings ===
When two agents are playing a zero-sum game, they are in pure competition with each other. Many traditional games such as chess and Go fall under this category, as do two-player variants of video games like StarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There is no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent.
The Deep Blue and AlphaGo projects demonstrate how to optimize the performance of agents in pure competition settings.
One complexity that is not stripped away in pure competition settings is autocurricula. As the agents' policy is improved using self-play, multiple layers of learning may occur.
=== Pure cooperation settings ===
MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreational cooperative games such as Overcooked, as well as real-world scenarios in robotics.
In pure cooperation settings all the agents get identical rewards, which means that social dilemmas do not occur.
In pure cooperation settings, oftentimes there are an arbitrary number of coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language and also alluded to in more general multi-agent collaborative tasks.
=== Mixed-sum settings ===
Most real-world scenarios involving multiple agents have elements of both cooperation and competition. For example, when multiple self-driving cars are planning their respective paths, each of them has interests that are diverging but not exclusive: each car wants to minimize the time it takes to reach its destination, but all cars have the shared interest of avoiding a traffic collision.
Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them.
Mixed-sum settings can be explored using classic matrix games such as prisoner's dilemma, more complex sequential social dilemmas, and recreational games such as
Among Us, Diplomacy and
StarCraft II.
Mixed-sum settings can give rise to communication and social dilemmas.
== Social dilemmas ==
As in game theory, much of the research in MARL revolves around social dilemmas, such as prisoner's dilemma, chicken and stag hunt.
While game theory research might focus on Nash equilibria and what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies using a trial-and-error process. The reinforcement learning algorithms that are used to train the agents are maximizing the agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research.
Various techniques have been explored in order to induce cooperation in agents: Modifying the environment rules, adding intrinsic rewards, and more.
=== Sequential social dilemmas ===
Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took.
In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear cut as in matrix games. The concept of a sequential social dilemma (SSD) was introduced in 2017 as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them.
== Autocurricula ==
An autocurriculum (plural: autocurricula) is a reinforcement learning concept that's salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings, where each group of agents is racing to counter the current strategy of the opposing group.
The Hide and Seek game is an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders. Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting a glitch in the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. This results in a stack of behaviors, each dependent on its predecessor.
Autocurricula in reinforcement learning experiments are compared to the stages of the evolution of life on Earth and the development of human culture. A major stage in evolution happened 2-3 billion years ago, when photosynthesizing life forms started to produce massive amounts of oxygen, changing the balance of gases in the atmosphere. In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to land mammals and human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through the Industrial Revolution in the 18th century without the resources and insights gained by the agricultural revolution at around 10,000 BC.
== Applications ==
Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry:
=== AI alignment ===
Multi-agent reinforcement learning has been used in research into AI alignment. The relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent. Research efforts in the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts.
== Limitations ==
There are some inherent difficulties in multi-agent deep reinforcement learning. Because all agents learn simultaneously, the environment is non-stationary from each agent's perspective, and the Markov property is violated: transitions and rewards depend not only on the current state of an agent, but also on the other agents' changing policies.
== Further reading ==
Stefano V. Albrecht, Filippos Christianos, Lukas Schäfer. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2024. https://www.marl-book.com
Kaiqing Zhang, Zhuoran Yang, Tamer Basar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Studies in Systems, Decision and Control, Handbook on RL and Control, 2021. [1]
Yang, Yaodong; Wang, Jun (2020). "An Overview of Multi-Agent Reinforcement Learning from Game Theoretical Perspective". arXiv:2011.00583 [cs.MA].
== References == | Wikipedia/Multi-agent_reinforcement_learning |
Imagen is a series of text-to-image models developed by Google DeepMind. They were developed by Google Brain until the company's merger with DeepMind in April 2023. Imagen is primarily used to generate images from text prompts, similar to Stability AI's Stable Diffusion, OpenAI's DALL-E, or Midjourney.
The original version of the model was first discussed in a paper from May 2022. The tool produces high-quality images and is available to all users with a Google account through services including Gemini, ImageFX, and Vertex AI.
== History ==
Imagen's original version was first presented in a paper published in May 2022. It featured the ability to generate high-fidelity images from natural language. The second version, Imagen 2, was released in December 2023; its standout feature was text and logo generation. Imagen 3 was released in August 2024; Google claims that this version provides better detail and lighting in generated images. On 20 May 2025, at Google I/O 2025, the company released an improved model, Imagen 4.
== Technology ==
Imagen uses two key technologies. The first is the use of transformer-based large language models, notably T5, to understand text and subsequently encode it for image synthesis. The second is the use of cascaded diffusion models, which provide high-fidelity image generation: an image is generated in three stages, starting from a 64×64 base, which is then upsampled to 256×256 and finally to 1024×1024.
== Capabilities ==
Imagen can generate photorealistic images from text prompts. It can also create various styles, such as cinematic, 35mm film, illustration, and surreal. The model can generate images in five aspect ratios, namely 9:16, 3:4, 1:1, 4:3, and 16:9. Imagen can also refine already generated images by editing existing text prompts.
== See also ==
Artificial intelligence art
Computer art
Generative art
DALL-E
Midjourney
Stable Diffusion
== References ==
== External links ==
Imagen website | Wikipedia/Imagen_(text-to-image_model) |
A sigmoid function is any mathematical function whose graph has a characteristic S-shaped or sigmoid curve.
A common example of a sigmoid function is the logistic function, which is defined by the formula
{\displaystyle \sigma (x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}}=1-\sigma (-x).}
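The formula can be computed directly; the following standalone Python sketch uses the algebraically equivalent form e^x / (1 + e^x) for negative inputs so that exp() never overflows (a common numerical-stability trick, not part of the definition itself):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function sigma(x) = 1 / (1 + e^-x), computed stably."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    # For x < 0, use e^x / (1 + e^x): exp(x) is then <= 1, no overflow.
    z = math.exp(x)
    return z / (1.0 + z)
```

The symmetry in the formula above, sigma(-x) = 1 - sigma(x), holds for this implementation up to floating-point rounding.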
Other sigmoid functions are given in the Examples section. In some fields, most notably in the context of artificial neural networks, the term "sigmoid function" is used as a synonym for "logistic function".
Special cases of the sigmoid function include the Gompertz curve (used in modeling systems that saturate at large values of x) and the ogee curve (used in the spillway of some dams). Sigmoid functions have a domain of all real numbers, with a return (response) value that is commonly monotonically increasing, but may be decreasing. Sigmoid functions most often show a return value (y axis) in the range 0 to 1. Another commonly used range is from −1 to 1.
A wide variety of sigmoid functions including the logistic and hyperbolic tangent functions have been used as the activation function of artificial neurons. Sigmoid curves are also common in statistics as cumulative distribution functions (which go from 0 to 1), such as the integrals of the logistic density, the normal density, and Student's t probability density functions. The logistic sigmoid function is invertible, and its inverse is the logit function.
== Definition ==
A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a positive derivative at each point.
== Properties ==
In general, a sigmoid function is monotonic, and has a first derivative which is bell shaped. Conversely, the integral of any continuous, non-negative, bell-shaped function (with one local maximum and no local minimum, unless degenerate) will be sigmoidal. Thus the cumulative distribution functions for many common probability distributions are sigmoidal. One such example is the error function, which is related to the cumulative distribution function of a normal distribution; another is the arctan function, which is related to the cumulative distribution function of a Cauchy distribution.
A sigmoid function is constrained by a pair of horizontal asymptotes as {\displaystyle x\rightarrow \pm \infty }.
A sigmoid function is convex for values less than a particular point, and it is concave for values greater than that point: in many of the examples here, that point is 0.
== Examples ==
Logistic function
{\displaystyle f(x)={\frac {1}{1+e^{-x}}}}
Hyperbolic tangent (shifted and scaled version of the logistic function, above)
{\displaystyle f(x)=\tanh x={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}}
Arctangent function
{\displaystyle f(x)=\arctan x}
Gudermannian function
{\displaystyle f(x)=\operatorname {gd} (x)=\int _{0}^{x}{\frac {dt}{\cosh t}}=2\arctan \left(\tanh \left({\frac {x}{2}}\right)\right)}
Error function
{\displaystyle f(x)=\operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt}
Generalised logistic function
{\displaystyle f(x)=\left(1+e^{-x}\right)^{-\alpha },\quad \alpha >0}
Smoothstep function
{\displaystyle f(x)={\begin{cases}{\displaystyle \left(\int _{0}^{1}\left(1-u^{2}\right)^{N}du\right)^{-1}\int _{0}^{x}\left(1-u^{2}\right)^{N}\ du},&|x|\leq 1\\\\\operatorname {sgn}(x)&|x|\geq 1\\\end{cases}}\quad N\in \mathbb {Z} \geq 1}
Some algebraic functions, for example
{\displaystyle f(x)={\frac {x}{\sqrt {1+x^{2}}}}}
and in a more general form
{\displaystyle f(x)={\frac {x}{\left(1+|x|^{k}\right)^{1/k}}}}
Up to shifts and scaling, many sigmoids are special cases of
{\displaystyle f(x)=\varphi (\varphi (x,\beta ),\alpha ),}
where
{\displaystyle \varphi (x,\lambda )={\begin{cases}(1-\lambda x)^{1/\lambda }&\lambda \neq 0\\e^{-x}&\lambda =0\\\end{cases}}}
is the inverse of the negative Box–Cox transformation, and {\displaystyle \alpha <1} and {\displaystyle \beta <1} are shape parameters.
Smooth transition function normalized to (−1,1):
{\displaystyle {\begin{aligned}f(x)&={\begin{cases}{\displaystyle {\frac {2}{1+e^{-2m{\frac {x}{1-x^{2}}}}}}-1},&|x|<1\\\\\operatorname {sgn}(x)&|x|\geq 1\\\end{cases}}\\&={\begin{cases}{\displaystyle \tanh \left(m{\frac {x}{1-x^{2}}}\right)},&|x|<1\\\\\operatorname {sgn}(x)&|x|\geq 1\\\end{cases}}\end{aligned}}}
using the hyperbolic tangent mentioned above. Here, {\displaystyle m} is a free parameter encoding the slope at {\displaystyle x=0}, which must be greater than or equal to {\displaystyle {\sqrt {3}}} because any smaller value will result in a function with multiple inflection points, which is therefore not a true sigmoid. This function is unusual because it actually attains the limiting values of −1 and 1 within a finite range, meaning that its value is constant at −1 for all {\displaystyle x\leq -1} and at 1 for all {\displaystyle x\geq 1}. Nonetheless, it is smooth (infinitely differentiable, {\displaystyle C^{\infty }}) everywhere, including at {\displaystyle x=\pm 1}.
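Several of the examples above can be compared numerically with a short Python sketch (illustrative only; each function is left in its own natural output range rather than rescaled):

```python
import math

examples = {
    "logistic": lambda x: 1.0 / (1.0 + math.exp(-x)),        # range (0, 1)
    "tanh": math.tanh,                                        # range (-1, 1)
    "arctan": math.atan,                                      # range (-pi/2, pi/2)
    "gudermannian": lambda x: 2.0 * math.atan(math.tanh(x / 2.0)),
    "erf": math.erf,                                          # range (-1, 1)
    "algebraic": lambda x: x / math.sqrt(1.0 + x * x),        # range (-1, 1)
}

# Each example is strictly increasing (positive derivative everywhere);
# check monotonicity on a grid of points in [-5, 5].
grid = [x / 10.0 for x in range(-50, 51)]
for name, f in examples.items():
    values = [f(x) for x in grid]
    assert all(a < b for a, b in zip(values, values[1:])), name
```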
== Applications ==
Many natural processes, such as those of complex system learning curves, exhibit a progression from small beginnings that accelerates and approaches a climax over time. When a specific mathematical model is lacking, a sigmoid function is often used.
The van Genuchten–Gupta model is based on an inverted S-curve and applied to the response of crop yield to soil salinity.
Examples of the application of the logistic S-curve to the response of crop yield (wheat) to both the soil salinity and depth to water table in the soil are shown in modeling crop response in agriculture.
In artificial neural networks, sometimes non-smooth functions are used instead for efficiency; these are known as hard sigmoids.
In audio signal processing, sigmoid functions are used as waveshaper transfer functions to emulate the sound of analog circuitry clipping.
In biochemistry and pharmacology, the Hill and Hill–Langmuir equations are sigmoid functions.
In computer graphics and real-time rendering, some of the sigmoid functions are used to blend colors or geometry between two values, smoothly and without visible seams or discontinuities.
Titration curves between strong acids and strong bases have a sigmoid shape due to the logarithmic nature of the pH scale.
The logistic function can be calculated efficiently by utilizing type III Unums.
A hierarchy of sigmoid growth models with increasing complexity (number of parameters) was built with the primary goal of re-analyzing kinetic data, the so-called N-t curves, from heterogeneous nucleation experiments in electrochemistry. The hierarchy at present includes three models, with 1, 2 and 3 parameters respectively (not counting the maximal number of nuclei Nmax): a tanh2-based model called α21, originally devised to describe diffusion-limited crystal growth (not aggregation) in 2D; the Johnson–Mehl–Avrami–Kolmogorov (JMAKn) model; and the Richards model. It was shown that for the concrete purpose even the simplest model works, implying that the experiments revisited are an example of two-step nucleation, with the first step being the growth of the metastable phase in which the nuclei of the stable phase form.
== See also ==
== References ==
== Further reading ==
Mitchell, Tom M. (1997). Machine Learning. WCB McGraw–Hill. ISBN 978-0-07-042807-2.. (NB. In particular see "Chapter 4: Artificial Neural Networks" (in particular pp. 96–97) where Mitchell uses the word "logistic function" and the "sigmoid function" synonymously – this function he also calls the "squashing function" – and the sigmoid (aka logistic) function is used to compress the outputs of the "neurons" in multi-layer neural nets.)
Humphrys, Mark. "Continuous output, the sigmoid function". Archived from the original on 2022-07-14. Retrieved 2022-07-14. (NB. Properties of the sigmoid, including how it can shift along axes and how its domain may be transformed.)
== External links ==
"Fitting of logistic S-curves (sigmoids) to data using SegRegA". Archived from the original on 2022-07-14. | Wikipedia/Sigmoid_function |
Deep reinforcement learning (DRL) is a subfield of machine learning that combines principles of reinforcement learning (RL) and deep learning. It involves training agents to make decisions by interacting with an environment to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or environment models. This integration enables DRL systems to process high-dimensional inputs, such as images or continuous control signals, making the approach effective for solving complex tasks. Since the introduction of the deep Q-network (DQN) in 2015, DRL has achieved significant successes across domains including games, robotics, and autonomous systems, and is increasingly applied in areas such as healthcare, finance, and autonomous vehicles.
== Deep reinforcement learning ==
=== Introduction ===
Deep reinforcement learning (DRL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. In DRL, agents learn how to make decisions by interacting with environments in order to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or models of the environment. This integration enables agents to handle high-dimensional input spaces, such as raw images or continuous control signals, making DRL a widely used approach for addressing complex tasks.
Since the development of the deep Q-network (DQN) in 2015, DRL has led to major breakthroughs in domains such as games, robotics, and autonomous systems. Research in DRL continues to expand rapidly, with active work on challenges like sample efficiency and robustness, as well as innovations in model-based methods, transformer architectures, and open-ended learning. Applications now range from healthcare and finance to language systems and autonomous vehicles.
=== Background ===
Reinforcement learning (RL) is a framework in which agents interact with environments by taking actions and learning from feedback in the form of rewards or penalties. Traditional RL methods, such as Q-learning and policy gradient techniques, rely on tabular representations or linear approximations, which are often not scalable to high-dimensional or continuous input spaces.
DRL emerged as a solution to this limitation by integrating RL with deep neural networks. This combination enables agents to approximate complex functions and handle unstructured input data like raw images, sensor data, or natural language. The approach became widely recognized following the success of DeepMind's deep Q-network (DQN), which achieved human-level performance on several Atari video games using only pixel inputs and game scores as feedback.
Since then, DRL has evolved to include various architectures and learning strategies, including model-based methods, actor-critic frameworks, and applications in continuous control environments. These developments have significantly expanded the applicability of DRL across domains where traditional RL was limited.
=== Key algorithms and methods ===
Several algorithmic approaches form the foundation of deep reinforcement learning, each with different strategies for learning optimal behavior.
One of the earliest and most influential DRL algorithms is the Deep Q-Network (DQN), which combines Q-learning with deep neural networks. DQN approximates the optimal action-value function using a convolutional neural network, and introduced techniques such as experience replay and target networks, which stabilize training.
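The experience-replay idea can be sketched without any deep-learning library (a minimal illustration; the buffer capacity and batch size below are arbitrary choices, not DQN's published hyperparameters):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (state, action, reward, next_state, done) transitions and
    samples uncorrelated minibatches, as in DQN's experience replay.
    Old transitions are discarded automatically once capacity is reached."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes gradient updates.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer()
for t in range(100):
    buf.push((t, 0, 1.0, t + 1, False))  # dummy transitions
batch = buf.sample(8)
```

In a full DQN, each sampled batch would be used to regress the Q-network toward targets computed by a periodically synchronized target network.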
Policy gradient methods directly optimize the agent’s policy by adjusting parameters in the direction that increases expected rewards. These methods are well-suited to high-dimensional or continuous action spaces and form the basis of many modern DRL algorithms.
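A minimal policy-gradient update in the style of REINFORCE can be sketched for a two-action bandit with a single logit parameter (a toy illustration under simplifying assumptions, not any specific published implementation):

```python
import math
import random

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """Policy pi(a=1) = sigmoid(theta). For a Bernoulli policy the
    score function is d/dtheta log pi(a) = a - pi, so the REINFORCE
    update is theta += lr * reward * (a - pi). Action 1 always pays
    reward 1 and action 0 pays 0, so theta should grow positive."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + math.exp(-theta))
        a = 1 if rng.random() < p else 0
        r = float(a)                # action 1 is strictly better
        theta += lr * r * (a - p)   # gradient ascent on expected reward
    return theta

theta = reinforce_bandit()
```

After training, the policy's probability of the better action, sigmoid(theta), is close to 1, showing the parameters moving in the direction that increases expected reward.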
Actor-critic algorithms combine the advantages of value-based and policy-based methods. The actor updates the policy, while the critic evaluates the current policy using a value function. Popular variants include A2C (Advantage Actor-Critic) and PPO (Proximal Policy Optimization), both of which are widely used in benchmarks and real-world applications.
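The division of labor between actor and critic can be illustrated with the one-step advantage estimate the critic supplies to the actor (a simplified sketch; real implementations use neural-network value functions and batched updates):

```python
def one_step_advantage(reward, value_s, value_next, gamma=0.99, done=False):
    """A(s, a) ~ r + gamma * V(s') - V(s).
    The actor increases the probability of actions with positive
    advantage; the critic regresses V(s) toward the bootstrapped
    target r + gamma * V(s'). Terminal states have no bootstrap term."""
    target = reward + (0.0 if done else gamma * value_next)
    return target - value_s
```

Algorithms such as A2C use exactly this kind of advantage signal to scale the policy-gradient update, which reduces its variance compared with using raw returns.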
Other methods include multi-agent reinforcement learning, hierarchical RL, and approaches that integrate planning or memory mechanisms, depending on the complexity of the task and environment.
=== Applications ===
DRL has been applied to a wide range of domains that require sequential decision-making and the ability to learn from high-dimensional input data.
One of the most well-known applications is in games, where DRL agents have demonstrated performance comparable to or exceeding human-level benchmarks. DeepMind's AlphaGo and AlphaStar, as well as OpenAI Five, are notable examples of DRL systems mastering complex games such as Go, StarCraft II, and Dota 2. While these systems have demonstrated high performance in constrained environments, their success often depends on extensive computational resources and may not generalize easily to tasks outside their training domains.
In robotics, DRL has been used to train agents for tasks such as locomotion, manipulation, and navigation in both simulated and real-world environments. By learning directly from sensory input, DRL enables robots to adapt to complex dynamics without relying on hand-crafted control rules.
Other growing areas of application include finance (e.g., portfolio optimization), healthcare (e.g., treatment planning and medical decision-making), natural language processing (e.g., dialogue systems), and autonomous vehicles (e.g., path planning and control). All of these applications show how DRL deals with real-world problems like uncertainty, sequential reasoning, and high-dimensional data.
=== Challenges and limitations ===
DRL has several significant challenges which limit its broader deployment.
One of the most prominent issues is sample inefficiency. DRL algorithms often require millions of interactions with the environment to learn effective policies, which is impractical in many real-world settings where data collection is expensive or time-consuming.
Another challenge is the sparse or delayed reward problem, in which feedback signals are infrequent, making it difficult for agents to attribute outcomes to specific decisions. Techniques such as reward shaping and exploration strategies have been developed to address this issue.
DRL systems also tend to be sensitive to hyperparameters and lack robustness across tasks or environments. Models trained in simulation often fail when deployed in the real world due to discrepancies between simulated and real-world dynamics, a problem known as the "reality gap". Bias and fairness in DRL systems have also emerged as concerns, particularly in domains like healthcare and finance where imbalanced data can lead to unequal outcomes for underrepresented groups.
Additionally, concerns about safety, interpretability, and reproducibility have become increasingly important, especially in high-stakes domains such as healthcare or autonomous driving. These issues remain active areas of research in the DRL community.
=== Recent advances ===
Recent developments in DRL have introduced new architectures and training strategies aimed at improving performance, efficiency, and generalization.
One key area of progress is model-based reinforcement learning, where agents learn an internal model of the environment to simulate outcomes before acting. This kind of approach improves sample efficiency and planning. An example is the Dreamer algorithm, which learns a latent space model to train agents more efficiently in complex environments.
Another major innovation is the use of transformer-based architectures in DRL. Unlike traditional models that rely on recurrent or convolutional networks, transformers can model long-term dependencies more effectively. The Decision Transformer and other similar models treat RL as a sequence modeling problem, enabling agents to generalize better across tasks.
In addition, research into open-ended learning has led to the creation of capable agents that are able to solve a range of tasks without task-specific tuning. Systems such as those developed by OpenAI show that agents trained in diverse, evolving environments can generalize across new challenges, moving toward more adaptive and flexible intelligence.
=== Future directions ===
As deep reinforcement learning continues to evolve, researchers are exploring ways to make algorithms more efficient, robust, and generalizable across a wide range of tasks. Improving sample efficiency through model-based learning, enhancing generalization with open-ended training environments, and integrating foundation models are among the current research goals.
A related area of interest is safe and ethical deployment, particularly in high-risk settings like healthcare, autonomous driving, and finance. Researchers are developing frameworks for safer exploration, interpretability, and better alignment with human values. Ensuring that DRL systems promote equitable outcomes remains an ongoing challenge, especially where historical data may under-represent marginalized populations.
The future of DRL may also involve more integration with other subfields of machine learning, such as unsupervised learning, transfer learning, and large language models, enabling agents that can learn from diverse data modalities and interact more naturally with human users.
== References ==
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
== Graphical model ==
Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if {\displaystyle m} parent nodes represent {\displaystyle m} Boolean variables, then the probability function could be represented by a table of {\displaystyle 2^{m}} entries, one entry for each of the {\displaystyle 2^{m}} possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
== Example ==
Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state - whether it is on or not), the presence or absence of rain and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false).
The joint probability function is, by the chain rule of probability,
{\displaystyle \Pr(G,S,R)=\Pr(G\mid S,R)\Pr(S\mid R)\Pr(R)}
where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)".
The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:
{\displaystyle \Pr(R=T\mid G=T)={\frac {\Pr(G=T,R=T)}{\Pr(G=T)}}={\frac {\sum _{x\in \{T,F\}}\Pr(G=T,S=x,R=T)}{\sum _{x,y\in \{T,F\}}\Pr(G=T,S=x,R=y)}}}
Using the expansion for the joint probability function {\displaystyle \Pr(G,S,R)} and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,
{\displaystyle {\begin{aligned}\Pr(G=T,S=T,R=T)&=\Pr(G=T\mid S=T,R=T)\Pr(S=T\mid R=T)\Pr(R=T)\\&=0.99\times 0.01\times 0.2\\&=0.00198.\end{aligned}}}
Then the numerical results (subscripted by the associated variable values) are
{\displaystyle \Pr(R=T\mid G=T)={\frac {0.00198_{TTT}+0.1584_{TFT}}{0.00198_{TTT}+0.288_{TTF}+0.1584_{TFT}+0.0_{TFF}}}={\frac {891}{2491}}\approx 35.77\%.}
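The calculation above can be reproduced by brute-force enumeration. The sketch below is a minimal illustration: since the diagram carrying the CPTs is not reproduced here, the tables are spelled out in the code with the values implied by the worked numbers above (Pr(R=T) = 0.2, Pr(S=T | R=T) = 0.01, Pr(S=T | R=F) = 0.4, and the Pr(G=T | S, R) entries of the standard version of this example).

```python
from itertools import product

# CPTs for the sprinkler network; the numbers are the ones implied by the
# worked example above (and by the standard version of this example).
P_R = {True: 0.2, False: 0.8}            # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}   # Pr(S=T | R)
P_G_given_SR = {                         # Pr(G=T | S, R)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization above."""
    pr_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    pg_t = P_G_given_SR[(s, r)]
    pr_g = pg_t if g else 1 - pg_t
    return pr_g * pr_s * P_R[r]

# Pr(R=T | G=T): sum the joint over the nuisance variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))  # 0.3577
```

The numerator and denominator match the four subscripted terms in the expression above (0.00198 + 0.1584 over 0.00198 + 0.288 + 0.1584 + 0.0).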
To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function
{\displaystyle \Pr(S,R\mid {\text{do}}(G=T))=\Pr(S\mid R)\Pr(R)}
obtained by removing the factor {\displaystyle \Pr(G\mid S,R)} from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:
{\displaystyle \Pr(R\mid {\text{do}}(G=T))=\Pr(R).}
To predict the impact of turning the sprinkler on:
{\displaystyle \Pr(R,G\mid {\text{do}}(S=T))=\Pr(R)\Pr(G\mid R,S=T)}
with the term {\displaystyle \Pr(S=T\mid R)} removed, showing that the action affects the grass but not the rain.
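The truncated factorization reduces the interventional query to a sum over R with the Pr(S | R) factor deleted. A minimal sketch, reusing the CPT values assumed in the enumeration example earlier:

```python
# Interventional query Pr(G=T | do(S=T)): the factor Pr(S | R) is deleted
# and S is clamped to True, so only Pr(R) and Pr(G | R, S=T) remain.
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: 0.01, False: 0.4}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

p_g_do_s = sum(P_R[r] * P_G_given_SR[(True, r)] for r in (True, False))
print(round(p_g_do_s, 4))  # 0.2*0.99 + 0.8*0.9 = 0.918

# The observational conditional Pr(G=T | S=T) differs, because observing
# S=T is evidence about R through the back-door path S <- R -> G.
num = sum(P_R[r] * P_S_given_R[r] * P_G_given_SR[(True, r)] for r in (True, False))
den = sum(P_R[r] * P_S_given_R[r] for r in (True, False))
print(round(num / den, 4))  # 0.9006
```

The gap between the two numbers is exactly the confounding contributed by R.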
These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action {\displaystyle {\text{do}}(x)} can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y, then
{\displaystyle \Pr(Y,Z\mid {\text{do}}(x))={\frac {\Pr(Y,Z,X=x)}{\Pr(X=x\mid Z)}}.}
A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, one cannot determine whether the observed dependence between S and G is due to a causal connection or is spurious (apparent dependence arising from a common cause, R); see Simpson's paradox.
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.
Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for {\displaystyle 2^{10}=1024} values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most {\displaystyle 10\cdot 2^{3}=80} values.
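The arithmetic behind this comparison can be written out directly; the bound of 2^k entries per node assumes binary variables, as in the example:

```python
# Full joint table over n binary variables vs. a Bayesian network in which
# each node's local distribution depends on at most k parents.
n, k = 10, 3
full_table = 2 ** n          # 1024 entries for the exhaustive joint table
bn_upper_bound = n * 2 ** k  # at most 80 entries: one value of Pr(X=T | parents)
                             # per node per parent configuration
print(full_table, bn_upper_bound)
```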
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
== Inference and learning ==
Bayesian networks perform three main inference tasks:
=== Inferring unobserved variables ===
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space–time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation, generalized belief propagation and variational methods.
=== Parameter learning ===
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters.
A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable.
=== Structure learning ===
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data.
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:
the chain {\displaystyle X\rightarrow Y\rightarrow Z},
the fork {\displaystyle X\leftarrow Y\rightarrow Z}, and
the collider {\displaystyle X\rightarrow Y\leftarrow Z}.
The first two represent the same dependencies ({\displaystyle X} and {\displaystyle Z} are independent given {\displaystyle Y}) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since {\displaystyle X} and {\displaystyle Z} are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when {\displaystyle X} and {\displaystyle Z} have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed.
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is posterior probability of the structure given the training data, like the BIC or the BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such methods can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been reported to be among the best available in the literature when the number of variables is huge.
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-tree for effective learning.
== Statistical introduction ==
Given data {\displaystyle x\,\!} and parameter {\displaystyle \theta }, a simple Bayesian analysis starts with a prior probability (prior) {\displaystyle p(\theta )} and likelihood {\displaystyle p(x\mid \theta )} to compute a posterior probability {\displaystyle p(\theta \mid x)\propto p(x\mid \theta )p(\theta )}.
Often the prior on {\displaystyle \theta } depends in turn on other parameters {\displaystyle \varphi } that are not mentioned in the likelihood. So, the prior {\displaystyle p(\theta )} must be replaced by a likelihood {\displaystyle p(\theta \mid \varphi )}, and a prior {\displaystyle p(\varphi )} on the newly introduced parameters {\displaystyle \varphi } is required, resulting in a posterior probability
{\displaystyle p(\theta ,\varphi \mid x)\propto p(x\mid \theta )p(\theta \mid \varphi )p(\varphi ).}
This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters {\displaystyle \varphi } may depend in turn on additional parameters {\displaystyle \psi \,\!}, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
=== Introductory examples ===
Given the measured quantities {\displaystyle x_{1},\dots ,x_{n}\,\!} each with normally distributed errors of known standard deviation {\displaystyle \sigma \,\!},
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2})}
Suppose we are interested in estimating the {\displaystyle \theta _{i}}. An approach would be to estimate the {\displaystyle \theta _{i}} using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
{\displaystyle \theta _{i}=x_{i}.}
However, if the quantities are related, so that for example the individual {\displaystyle \theta _{i}} have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2}),}
{\displaystyle \theta _{i}\sim N(\varphi ,\tau ^{2}),}
with improper priors {\displaystyle \varphi \sim {\text{flat}}}, {\displaystyle \tau \sim {\text{flat}}\in (0,\infty )}. When {\displaystyle n\geq 3}, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual {\displaystyle \theta _{i}} will tend to move, or shrink away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
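The shrinkage can be illustrated with the standard normal-normal conjugacy result: with the group mean and variances known, the posterior mean of each {\displaystyle \theta _{i}} is a precision-weighted average of the observation and the group mean. The sketch below fixes φ, τ and σ purely for illustration; in the full hierarchical model φ and τ would themselves be inferred from the data.

```python
# Normal-normal conjugacy: with sigma known and theta_i ~ N(phi, tau^2),
# the posterior mean of theta_i given x_i is x_i shrunk toward phi by
# B = sigma^2 / (sigma^2 + tau^2).  phi, tau and sigma are fixed here
# only to illustrate the direction of the shrinkage.
sigma, tau, phi = 1.0, 2.0, 0.0
B = sigma**2 / (sigma**2 + tau**2)  # shrinkage factor = 0.2

xs = [-3.0, -1.0, 2.0, 4.0]          # maximum likelihood estimates
post_means = [x + B * (phi - x) for x in xs]
print(post_means)  # each x_i moved 20% of the way toward the common mean

# Every posterior mean lies strictly between the MLE x_i and phi.
assert all(min(x, phi) < m < max(x, phi) for x, m in zip(xs, post_means))
```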
=== Restrictions on priors ===
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable {\displaystyle \tau \,\!} in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.
== Definitions and concepts ==
Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
=== Factorization definition ===
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:
{\displaystyle p(x)=\prod _{v\in V}p\left(x_{v}\,{\big |}\,x_{\operatorname {pa} (v)}\right)}
where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
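The factorization definition translates directly into code: the joint probability of a complete assignment is the product, over nodes, of each node's CPT entry given its parents' values. The two-node network and CPT numbers below are hypothetical, chosen only to illustrate the definition:

```python
# A minimal sketch of the factorization definition for binary variables.
# `parents` and `cpt` describe the hypothetical chain A -> B.
parents = {"A": (), "B": ("A",)}
cpt = {
    "A": {(): 0.3},                       # Pr(A=T)
    "B": {(True,): 0.9, (False,): 0.1},   # Pr(B=T | A)
}

def joint_prob(assignment):
    """Product over nodes of Pr(node | its parents) for a full assignment."""
    p = 1.0
    for node, pa in parents.items():
        pa_vals = tuple(assignment[q] for q in pa)
        p_true = cpt[node][pa_vals]
        p *= p_true if assignment[node] else 1 - p_true
    return p

print(joint_prob({"A": True, "B": True}))  # 0.3 * 0.9 = 0.27
```

Summing `joint_prob` over all four assignments returns 1, as a properly factorized joint distribution must.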
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} \left(X_{v}=x_{v}\mid X_{v+1}=x_{v+1},\ldots ,X_{n}=x_{n}\right)}
Using the definition above, this can be written as:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} (X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}\,{\text{ that is a parent of }}X_{v}\,)}
The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
=== Local Markov property ===
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:
{\displaystyle X_{v}\perp \!\!\!\perp X_{V\,\smallsetminus \,\operatorname {de} (v)}\mid X_{\operatorname {pa} (v)}\quad {\text{for all }}v\in V}
where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can be expressed in terms similar to the first definition, as
{\displaystyle {\begin{aligned}&\operatorname {P} (X_{v}=x_{v}\mid X_{i}=x_{i}{\text{ for each }}X_{i}{\text{ that is not a descendant of }}X_{v}\,)\\[6pt]={}&P(X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}{\text{ that is a parent of }}X_{v}\,)\end{aligned}}}
The set of parents is a subset of the set of non-descendants because the graph is acyclic.
=== Marginal independence structure ===
In general, learning a Bayesian network from data is known to be NP-hard. This is due in part to the combinatorial explosion of enumerating DAGs as the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure: while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements—the conditional independence statements in which the conditioning set is empty—are encoded by a simple undirected graph with special properties such as equal intersection and independence numbers.
=== Developing Bayesian networks ===
Developing a Bayesian network often begins with creating a DAG G such that X satisfies the local Markov property with respect to G. Sometimes this is a causal DAG. The conditional probability distributions of each variable given its parents in G are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of X is the product of these conditional distributions, then X is a Bayesian network with respect to G.
=== Markov blanket ===
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.
==== d-separation ====
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional. We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that.
Let P be a trail from node u to v. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:
P contains (but does not need to be entirely) a directed chain, {\displaystyle u\cdots \leftarrow m\leftarrow \cdots v} or {\displaystyle u\cdots \rightarrow m\rightarrow \cdots v}, such that the middle node m is in Z,
P contains a fork, {\displaystyle u\cdots \leftarrow m\rightarrow \cdots v}, such that the middle node m is in Z, or
P contains an inverted fork (or collider), {\displaystyle u\cdots \rightarrow m\leftarrow \cdots v}, such that the middle node m is not in Z and no descendant of m is in Z.
The nodes u and v are d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are d-connected.
X is a Bayesian network with respect to G if, for any two nodes u, v:
{\displaystyle X_{u}\perp \!\!\!\perp X_{v}\mid X_{Z}}
where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
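The three blocking conditions can be checked mechanically by enumerating all trails, which is feasible only for small graphs. The sketch below is a brute-force illustration, not an efficient algorithm (practical implementations use methods such as the Bayes-ball traversal):

```python
def descendants(dag, node):
    """All nodes reachable from `node` along directed edges (dag: node -> children)."""
    seen, stack = set(), [node]
    while stack:
        for child in dag.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def trails(dag, u, v, path=None):
    """Yield every loop-free undirected path (trail) from u to v."""
    path = path or [u]
    if path[-1] == v:
        yield list(path)
        return
    last = path[-1]
    nbrs = set(dag.get(last, ())) | {p for p, cs in dag.items() if last in cs}
    for n in nbrs - set(path):
        yield from trails(dag, u, v, path + [n])

def d_separated(dag, u, v, Z):
    """True if every trail between u and v is blocked by the set Z."""
    Z = set(Z)
    for t in trails(dag, u, v):
        blocked = False
        for a, m, b in zip(t, t[1:], t[2:]):
            if m in dag.get(a, ()) and m in dag.get(b, ()):
                # collider a -> m <- b: blocks unless m or a descendant is in Z
                if m not in Z and not (descendants(dag, m) & Z):
                    blocked = True
            elif m in Z:
                # chain or fork through an observed middle node blocks the trail
                blocked = True
        if not blocked:
            return False
    return True

# Chain a -> b -> c: a and c are d-separated by {b} but d-connected given {}.
chain_dag = {"a": ["b"], "b": ["c"]}
assert d_separated(chain_dag, "a", "c", {"b"})
assert not d_separated(chain_dag, "a", "c", set())

# Collider a -> b <- c: a and c are d-separated by {} but d-connected given {b}.
collider = {"a": ["b"], "c": ["b"]}
assert d_separated(collider, "a", "c", set())
assert not d_separated(collider, "a", "c", {"b"})
```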
=== Causal networks ===
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:
{\displaystyle a\rightarrow b\rightarrow c\qquad {\text{and}}\qquad a\leftarrow b\leftarrow c}
are equivalent: that is they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to that of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x. Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted.
== Inference complexity and approximation algorithms ==
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2 with confidence probability greater than 1/2.
At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^{n^{1−ɛ}} for every ɛ > 0, even for Bayesian networks with restricted architecture, is NP-hard.
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by
{\displaystyle 1/p(n)} where {\displaystyle p(n)} was any polynomial of the number of nodes in the network, {\displaystyle n}.
== Software ==
Notable software for Bayesian networks include:
Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling.
OpenBUGS – Open-source development of WinBUGS.
SPSS Modeler – Commercial software that includes an implementation for Bayesian networks.
Stan (software) – Stan is an open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo.
PyMC – A Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS).
WinBUGS – One of the first computational implementations of MCMC samplers. No longer maintained.
== History ==
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize:
the often subjective nature of the input information
the reliance on Bayes' conditioning as the basis for updating information
the distinction between causal and evidential modes of reasoning
In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems and Neapolitan's Probabilistic Reasoning in Expert Systems summarized their properties and established them as a field of study.
== See also ==
== Notes ==
== References ==
== Further reading ==
Conrady S, Jouffe L (2015-07-01). Bayesian Networks and BayesiaLab – A practical introduction for researchers. Franklin, Tennessee: Bayesian USA. ISBN 978-0-9965333-0-0.
Charniak E (Winter 1991). "Bayesian networks without tears" (PDF). AI Magazine.
Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P (2013). Computational Intelligence A Methodological Introduction. London: Springer-Verlag. ISBN 978-1-4471-5012-1.
Borgelt C, Steinbrecher M, Kruse R (2009). Graphical Models – Representations for Learning, Reasoning and Data Mining (Second ed.). Chichester: Wiley. ISBN 978-0-470-74956-2.
== External links ==
An Introduction to Bayesian Networks and their Contemporary Applications
On-line Tutorial on Bayesian nets and probability
Web-App to create Bayesian nets and run them with a Monte Carlo method
Continuous Time Bayesian Networks
Bayesian Networks: Explanation and Analogy
A live tutorial on learning Bayesian networks
A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
Hierarchical Naive Bayes Model for handling sample uncertainty Archived 2007-09-28 at the Wayback Machine, shows how to perform classification and learning with continuous and discrete variables with replicated measurements.
In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given {\displaystyle {\mathcal {X}}} as the space of all possible inputs (usually {\displaystyle {\mathcal {X}}\subset \mathbb {R} ^{d}}), and {\displaystyle {\mathcal {Y}}=\{-1,1\}} as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function {\displaystyle f:{\mathcal {X}}\to {\mathcal {Y}}} which best predicts a label {\displaystyle y} for a given input {\displaystyle {\vec {x}}}. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same {\displaystyle {\vec {x}}} to generate different {\displaystyle y}. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as
{\displaystyle I[f]=\displaystyle \int _{{\mathcal {X}}\times {\mathcal {Y}}}V(f({\vec {x}}),y)\,p({\vec {x}},y)\,d{\vec {x}}\,dy}
where {\displaystyle V(f({\vec {x}}),y)} is a given loss function, and {\displaystyle p({\vec {x}},y)} is the probability density function of the process that generated the data, which can equivalently be written as
{\displaystyle p({\vec {x}},y)=p(y\mid {\vec {x}})p({\vec {x}}).}
Within classification, several commonly used loss functions are written solely in terms of the product of the true label $y$ and the predicted label $f(\vec{x})$. Therefore, they can be defined as functions of only one variable $\upsilon = y f(\vec{x})$, so that $V(f(\vec{x}), y) = \phi(y f(\vec{x})) = \phi(\upsilon)$ with a suitably chosen function $\phi : \mathbb{R} \to \mathbb{R}$. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing $\phi$. Selection of a loss function within this framework impacts the optimal $f_{\phi}^{*}$ which minimizes the expected risk; see empirical risk minimization.
In the case of binary classification, it is possible to simplify the calculation of expected risk from the integral specified above. Specifically,
$$\begin{aligned}I[f]&=\int _{{\mathcal {X}}\times {\mathcal {Y}}}V(f({\vec {x}}),y)\,p({\vec {x}},y)\,d{\vec {x}}\,dy\\[6pt]&=\int _{\mathcal {X}}\int _{\mathcal {Y}}\phi (yf({\vec {x}}))\,p(y\mid {\vec {x}})\,p({\vec {x}})\,dy\,d{\vec {x}}\\[6pt]&=\int _{\mathcal {X}}[\phi (f({\vec {x}}))\,p(1\mid {\vec {x}})+\phi (-f({\vec {x}}))\,p(-1\mid {\vec {x}})]\,p({\vec {x}})\,d{\vec {x}}\\[6pt]&=\int _{\mathcal {X}}[\phi (f({\vec {x}}))\,p(1\mid {\vec {x}})+\phi (-f({\vec {x}}))\,(1-p(1\mid {\vec {x}}))]\,p({\vec {x}})\,d{\vec {x}}\end{aligned}$$
The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values for $y$, and the fourth because $p(-1 \mid \vec{x}) = 1 - p(1 \mid \vec{x})$. The term within brackets, $[\phi(f(\vec{x})) \, p(1 \mid \vec{x}) + \phi(-f(\vec{x})) \, (1 - p(1 \mid \vec{x}))]$, is known as the conditional risk.
One can solve for the minimizer of $I[f]$ by taking the functional derivative of the last equality with respect to $f$ and setting the derivative equal to 0. This will result in the following equation
$$\frac{\partial \phi(f)}{\partial f}\,\eta + \frac{\partial \phi(-f)}{\partial f}\,(1 - \eta) = 0, \qquad (1)$$
where $\eta = p(y = 1 \mid \vec{x})$, which is also equivalent to setting the derivative of the conditional risk equal to zero.
Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0-1 loss function (0–1 indicator function), which takes the value of 0 if the predicted classification equals that of the true class or a 1 if the predicted classification does not match the true class. This selection is modeled by
$$V(f(\vec{x}), y) = H(-y f(\vec{x}))$$
where $H$ indicates the Heaviside step function.
However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute loss function surrogates which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem. Some of these surrogates are described below.
In practice, the probability distribution $p(\vec{x}, y)$ is unknown. Consequently, utilizing a training set of $n$ independently and identically distributed sample points $S = \{(\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n)\}$ drawn from the data sample space, one seeks to minimize empirical risk
$$I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i)$$
as a proxy for expected risk. (See statistical learning theory for a more detailed description.)
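As a concrete illustration, the empirical risk above can be evaluated for a few of the margin-based surrogates discussed later in this article. The helper names, the toy data, and the convention that the 0-1 loss counts $v \leq 0$ as an error are all illustrative choices, not part of any particular library:

```python
import numpy as np

# Margin-based surrogate losses phi(v), with v = y * f(x).
# The logistic loss carries the 1/log(2) factor used in this article, so phi(0) = 1.
def zero_one(v):     return (v <= 0).astype(float)   # H(-v); ties at v = 0 count as errors here
def hinge(v):        return np.maximum(0.0, 1.0 - v)
def logistic(v):     return np.log1p(np.exp(-v)) / np.log(2)
def exponential(v):  return np.exp(-v)

def empirical_risk(phi, f_values, labels):
    """I_S[f] = (1/n) * sum_i phi(y_i * f(x_i))."""
    return np.mean(phi(labels * f_values))

# Toy sample: predictions f(x_i) and true labels y_i in {-1, +1}.
f_vals = np.array([2.0, -0.5, 1.0, -2.0])
labels = np.array([1, 1, -1, -1])

for name, phi in [("0-1", zero_one), ("hinge", hinge),
                  ("logistic", logistic), ("exponential", exponential)]:
    print(name, empirical_risk(phi, f_vals, labels))
```

Note how the surrogates upper-bound the 0-1 loss on each misclassified point while remaining convex in $v$.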
== Bayes consistency ==
Utilizing Bayes' theorem, it can be shown that the optimal $f_{0/1}^{*}$, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is in the form of
$$f_{0/1}^{*}(\vec{x}) = \begin{cases} 1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ 0 & \text{if } p(1 \mid \vec{x}) = p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}) \end{cases}$$
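The three-case rule above reduces to the sign of $p(1 \mid \vec{x}) - p(-1 \mid \vec{x})$, which can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def bayes_optimal_01(p_pos):
    """Bayes decision rule for the 0-1 loss, given eta = p(y=1 | x).
    Returns +1, 0, or -1 per the three cases of the rule above."""
    p_pos = np.asarray(p_pos, dtype=float)
    return np.sign(p_pos - (1.0 - p_pos))  # sign of p(1|x) - p(-1|x)

print(bayes_optimal_01([0.9, 0.5, 0.2]))  # → [ 1.  0. -1.]
```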
A loss function is said to be classification-calibrated or Bayes consistent if its optimal $f_{\phi}^{*}$ is such that $f_{0/1}^{*}(\vec{x}) = \operatorname{sgn}(f_{\phi}^{*}(\vec{x}))$ and is thus optimal under the Bayes decision rule. A Bayes consistent loss function allows us to find the Bayes optimal decision function $f_{\phi}^{*}$ by directly minimizing the expected risk and without having to explicitly model the probability density functions.
For a convex margin loss $\phi(\upsilon)$, it can be shown that $\phi(\upsilon)$ is Bayes consistent if and only if it is differentiable at 0 and $\phi'(0) < 0$. Yet, this result does not exclude the existence of non-convex Bayes consistent loss functions. A more general result states that Bayes consistent loss functions can be generated using the following formulation
$$\phi(v) = C[f^{-1}(v)] + (1 - f^{-1}(v)) \, C'[f^{-1}(v)], \qquad (2)$$
where $f(\eta)$, $0 \leq \eta \leq 1$, is any invertible function such that $f^{-1}(-v) = 1 - f^{-1}(v)$ and $C(\eta)$ is any differentiable strictly concave function such that $C(\eta) = C(1 - \eta)$. Table-I shows the generated Bayes consistent loss functions for some example choices of $C(\eta)$ and $f^{-1}(v)$. Note that the Savage and Tangent loss are not convex. Such non-convex loss functions have been shown to be useful in dealing with outliers in classification. For all loss functions generated from (2), the posterior probability $p(y = 1 \mid \vec{x})$ can be found using the invertible link function as $p(y = 1 \mid \vec{x}) = \eta = f^{-1}(v)$. Such loss functions, where the posterior probability can be recovered using the invertible link, are called proper loss functions.
The sole minimizer of the expected risk, $f_{\phi}^{*}$, associated with the above generated loss functions can be directly found from equation (1) and shown to be equal to the corresponding $f(\eta)$. This holds even for the nonconvex loss functions, which means that gradient descent based algorithms such as gradient boosting can be used to construct the minimizer.
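The generator in equation (2) can be checked numerically. Below is a sketch using the choices $C(\eta) = 4\eta(1-\eta)$ and $f^{-1}(v) = (v+1)/2$ (the values used in this article's square-loss derivation); the helper names are illustrative:

```python
import numpy as np

# Equation (2): phi(v) = C[f^{-1}(v)] + (1 - f^{-1}(v)) * C'[f^{-1}(v)]
def phi_from_generator(v, C, C_prime, f_inv):
    eta = f_inv(v)
    return C(eta) + (1.0 - eta) * C_prime(eta)

# Choices from the square-loss derivation in this article:
C       = lambda eta: 4.0 * eta * (1.0 - eta)   # symmetric, strictly concave
C_prime = lambda eta: 4.0 - 8.0 * eta
f_inv   = lambda v: 0.5 * (v + 1.0)             # satisfies f^{-1}(-v) = 1 - f^{-1}(v)

v = np.linspace(-2, 2, 9)
phi = phi_from_generator(v, C, C_prime, f_inv)
assert np.allclose(phi, (1.0 - v) ** 2)  # reproduces the square loss
```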
== Proper loss functions, loss margin and regularization ==
For proper loss functions, the loss margin can be defined as $\mu_{\phi} = -\frac{\phi'(0)}{\phi''(0)}$ and shown to be directly related to the regularization properties of the classifier. Specifically, a loss function of larger margin increases regularization and produces better estimates of the posterior probability. For example, the loss margin can be increased for the logistic loss by introducing a $\gamma$ parameter and writing the logistic loss as $\frac{1}{\gamma}\log(1 + e^{-\gamma v})$, where smaller $0 < \gamma < 1$ increases the margin of the loss. It is shown that this is directly equivalent to decreasing the learning rate in gradient boosting $F_m(x) = F_{m-1}(x) + \gamma h_m(x)$, where decreasing $\gamma$ improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate of $\gamma$ is used, the correct formula for retrieving the posterior probability is now $\eta = f^{-1}(\gamma F(x))$.
In conclusion, by choosing a loss function with larger margin (smaller $\gamma$) we increase regularization and improve our estimates of the posterior probability, which in turn improves the ROC curve of the final classifier.
== Square loss ==
While more commonly used in regression, the square loss function can be re-written as a function $\phi(y f(\vec{x}))$ and utilized for classification. It can be generated using (2) and Table-I as follows
$$\begin{aligned}\phi(v) &= C[f^{-1}(v)] + (1 - f^{-1}(v)) \, C'[f^{-1}(v)] \\ &= 4\left({\tfrac{1}{2}}(v+1)\right)\left(1 - {\tfrac{1}{2}}(v+1)\right) + \left(1 - {\tfrac{1}{2}}(v+1)\right)\left(4 - 8\left({\tfrac{1}{2}}(v+1)\right)\right) \\ &= (1 - v)^{2}.\end{aligned}$$
The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regards to sample complexity) than for the logistic loss or hinge loss functions. In addition, functions which yield high values of $f(\vec{x})$ for some $x \in \mathcal{X}$ will perform poorly with the square loss function, since high values of $y f(\vec{x})$ will be penalized severely, regardless of whether the signs of $y$ and $f(\vec{x})$ match.
A benefit of the square loss function is that its structure lends itself to easy cross validation of regularization parameters. Specifically for Tikhonov regularization, one can solve for the regularization parameter using leave-one-out cross-validation in the same time as it would take to solve a single problem.
The minimizer of $I[f]$ for the square loss function can be directly found from equation (1) as
$$f_{\text{Square}}^{*} = 2\eta - 1 = 2p(1 \mid \vec{x}) - 1.$$
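The closed form $f^{*} = 2\eta - 1$ can be confirmed numerically by minimizing the conditional risk $\phi(f)\,\eta + \phi(-f)\,(1-\eta)$ on a grid; the grid resolution and helper names below are illustrative:

```python
import numpy as np

def conditional_risk(f, eta, phi):
    """Conditional risk phi(f)*eta + phi(-f)*(1 - eta) at a point x with eta = p(1|x)."""
    return phi(f) * eta + phi(-f) * (1.0 - eta)

phi_square = lambda v: (1.0 - v) ** 2
f_grid = np.linspace(-1, 1, 2001)

for eta in [0.1, 0.3, 0.5, 0.8]:
    f_star = f_grid[np.argmin(conditional_risk(f_grid, eta, phi_square))]
    assert abs(f_star - (2 * eta - 1)) < 1e-2  # matches 2*eta - 1
```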
== Logistic loss ==
The logistic loss function can be generated using (2) and Table-I as follows
$$\begin{aligned}\phi (v)&=C[f^{-1}(v)]+\left(1-f^{-1}(v)\right)\,C'\left[f^{-1}(v)\right]\\&={\frac {1}{\log(2)}}\left[{\frac {-e^{v}}{1+e^{v}}}\log {\frac {e^{v}}{1+e^{v}}}-\left(1-{\frac {e^{v}}{1+e^{v}}}\right)\log \left(1-{\frac {e^{v}}{1+e^{v}}}\right)\right]+\left(1-{\frac {e^{v}}{1+e^{v}}}\right)\left[{\frac {-1}{\log(2)}}\log \left({\frac {\frac {e^{v}}{1+e^{v}}}{1-{\frac {e^{v}}{1+e^{v}}}}}\right)\right]\\&={\frac {1}{\log(2)}}\log(1+e^{-v}).\end{aligned}$$
The logistic loss is convex and grows linearly for negative values, which makes it less sensitive to outliers. The logistic loss is used in the LogitBoost algorithm.
The minimizer of $I[f]$ for the logistic loss function can be directly found from equation (1) as
$$f_{\text{Logistic}}^{*} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$
This function is undefined when $p(1 \mid \vec{x}) = 1$ or $p(1 \mid \vec{x}) = 0$ (tending toward ∞ and −∞ respectively), but predicts a smooth curve which grows when $p(1 \mid \vec{x})$ increases and equals 0 when $p(1 \mid \vec{x}) = 0.5$.
It is easy to check that the logistic loss and binary cross-entropy loss (log loss) are in fact the same (up to a multiplicative constant $\frac{1}{\log(2)}$). The cross-entropy loss is closely related to the Kullback–Leibler divergence between the empirical distribution and the predicted distribution. The cross-entropy loss is ubiquitous in modern deep neural networks.
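The stated equivalence is easy to verify numerically: for $y \in \{-1, +1\}$ and target $t = (y+1)/2$, the cross-entropy of $\sigma(f)$ equals $\log(1 + e^{-yf})$. A short check (function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(y, f):
    """The article's phi(yf), including the 1/log(2) factor."""
    return np.log1p(np.exp(-y * f)) / np.log(2)

def binary_cross_entropy(y, f):
    """Standard BCE of sigmoid(f) against targets t = (y+1)/2 in {0, 1}."""
    t = (y + 1) / 2
    p = sigmoid(f)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

y = np.array([1, 1, -1, -1])
f = np.array([2.0, -0.3, 1.5, -2.0])
assert np.allclose(logistic_loss(y, f), binary_cross_entropy(y, f) / np.log(2))
```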
== Exponential loss ==
The exponential loss function can be generated using (2) and Table-I as follows
$$\phi (v)=C[f^{-1}(v)]+(1-f^{-1}(v))C'[f^{-1}(v)]=2{\sqrt {\left({\frac {e^{2v}}{1+e^{2v}}}\right)\left(1-{\frac {e^{2v}}{1+e^{2v}}}\right)}}+\left(1-{\frac {e^{2v}}{1+e^{2v}}}\right)\left({\frac {1-{\frac {2e^{2v}}{1+e^{2v}}}}{\sqrt {{\frac {e^{2v}}{1+e^{2v}}}\left(1-{\frac {e^{2v}}{1+e^{2v}}}\right)}}}\right)=e^{-v}$$
The exponential loss is convex and grows exponentially for negative values, which makes it more sensitive to outliers. The exponentially weighted 0-1 loss is used in the AdaBoost algorithm, implicitly giving rise to the exponential loss.
The minimizer of $I[f]$ for the exponential loss function can be directly found from equation (1) as
$$f_{\text{Exp}}^{*} = \frac{1}{2}\log\left(\frac{\eta}{1-\eta}\right) = \frac{1}{2}\log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$
== Savage loss ==
The Savage loss can be generated using (2) and Table-I as follows
$$\phi (v)=C[f^{-1}(v)]+(1-f^{-1}(v))C'[f^{-1}(v)]=\left({\frac {e^{v}}{1+e^{v}}}\right)\left(1-{\frac {e^{v}}{1+e^{v}}}\right)+\left(1-{\frac {e^{v}}{1+e^{v}}}\right)\left(1-{\frac {2e^{v}}{1+e^{v}}}\right)={\frac {1}{(1+e^{v})^{2}}}.$$
The Savage loss is quasi-convex and is bounded for large negative values which makes it less sensitive to outliers. The Savage loss has been used in gradient boosting and the SavageBoost algorithm.
The minimizer of $I[f]$ for the Savage loss function can be directly found from equation (1) as
$$f_{\text{Savage}}^{*} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1 \mid \vec{x})}{1 - p(1 \mid \vec{x})}\right).$$
== Tangent loss ==
The Tangent loss can be generated using (2) and Table-I as follows
$$\begin{aligned}\phi (v)&=C[f^{-1}(v)]+\left(1-f^{-1}(v)\right)C'[f^{-1}(v)]\\&=4\left(\arctan(v)+{\frac {1}{2}}\right)\left(1-\left(\arctan(v)+{\frac {1}{2}}\right)\right)+\left(1-\left(\arctan(v)+{\frac {1}{2}}\right)\right)\left(4-8\left(\arctan(v)+{\frac {1}{2}}\right)\right)\\&=\left(2\arctan(v)-1\right)^{2}.\end{aligned}$$
The Tangent loss is quasi-convex and is bounded for large negative values which makes it less sensitive to outliers. Interestingly, the Tangent loss also assigns a bounded penalty to data points that have been classified "too correctly". This can help prevent over-training on the data set. The Tangent loss has been used in gradient boosting, the TangentBoost algorithm and Alternating Decision Forests.
The minimizer of $I[f]$ for the Tangent loss function can be directly found from equation (1) as
$$f_{\text{Tangent}}^{*} = \tan\left(\eta - \frac{1}{2}\right) = \tan\left(p(1 \mid \vec{x}) - \frac{1}{2}\right).$$
== Hinge loss ==
The hinge loss function is defined with $\phi(\upsilon) = \max(0, 1 - \upsilon) = [1 - \upsilon]_{+}$, where $[a]_{+} = \max(0, a)$ is the positive part function.
$$V(f(\vec{x}), y) = \max(0, 1 - y f(\vec{x})) = [1 - y f(\vec{x})]_{+}.$$
The hinge loss provides a relatively tight, convex upper bound on the 0–1 indicator function. Specifically, the hinge loss equals the 0–1 indicator function when $\operatorname{sgn}(f(\vec{x})) = y$ and $|y f(\vec{x})| \geq 1$. In addition, the empirical risk minimization of this loss is equivalent to the classical formulation for support vector machines (SVMs). Correctly classified points lying outside the margin boundaries of the support vectors are not penalized, whereas points within the margin boundaries or on the wrong side of the hyperplane are penalized in a linear fashion compared to their distance from the correct boundary.
While the hinge loss function is both convex and continuous, it is not smooth (is not differentiable) at $y f(\vec{x}) = 1$. Consequently, the hinge loss function cannot be used with gradient descent methods or stochastic gradient descent methods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient at $y f(\vec{x}) = 1$, which allows for the utilization of subgradient descent methods. SVMs utilizing the hinge loss function can also be solved using quadratic programming.
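A minimal subgradient-descent sketch for the (unregularized) hinge loss on a linearly separable toy set follows; the data, step size, and iteration count are illustrative, and a practical SVM would add a regularization term:

```python
import numpy as np

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)
for step in range(200):
    margins = y * (X @ w)
    # A subgradient of max(0, 1 - y w.x) is -y*x where the margin is < 1, else 0.
    active = margins < 1
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(X)
    w -= 0.1 * grad

# All points end up at or outside the margin:
assert np.all(y * (X @ w) >= 1 - 1e-6)
```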
The minimizer of $I[f]$ for the hinge loss function is
$$f_{\text{Hinge}}^{*}(\vec{x}) = \begin{cases} 1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}) \end{cases}$$
when $p(1 \mid \vec{x}) \neq 0.5$, which matches that of the 0–1 indicator function. This conclusion makes the hinge loss quite attractive, as bounds can be placed on the difference between expected risk and the sign of hinge loss function. The hinge loss cannot be derived from (2) since $f_{\text{Hinge}}^{*}$ is not invertible.
== Generalized smooth hinge loss ==
The generalized smooth hinge loss function with parameter $\alpha$ is defined as
$$f_{\alpha }^{*}(z)\;=\;{\begin{cases}{\frac {\alpha }{\alpha +1}}-z&{\text{if }}z\leq 0\\{\frac {1}{\alpha +1}}z^{\alpha +1}-z+{\frac {\alpha }{\alpha +1}}&{\text{if }}0<z<1\\0&{\text{if }}z\geq 1\end{cases}}$$
where $z = y f(\vec{x})$. It is monotonically decreasing and reaches 0 when $z = 1$.
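The piecewise definition can be sketched directly; the check below confirms the three pieces join continuously at $z = 0$ and $z = 1$ (the function name is illustrative):

```python
import numpy as np

def smooth_hinge(z, alpha):
    """Generalized smooth hinge with parameter alpha (piecewise form above)."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, alpha / (alpha + 1) - z,
           np.where(z < 1, z ** (alpha + 1) / (alpha + 1) - z + alpha / (alpha + 1),
                    0.0))

# The pieces agree at the boundaries, and the loss vanishes for z >= 1:
for a in [1, 2, 5]:
    assert abs(smooth_hinge(0.0, a) - a / (a + 1)) < 1e-12
    assert smooth_hinge(1.0, a) == 0.0
    assert smooth_hinge(2.0, a) == 0.0
```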
== See also ==
Differentiable programming
Scoring function
== References ==
A residual neural network (also referred to as a residual network or ResNet) is a deep learning architecture in which the layers learn residual functions with reference to the layer inputs. It was developed in 2015 for image recognition, and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) of that year.
As a point of terminology, "residual connection" refers to the specific architectural motif of $x \mapsto f(x) + x$, where $f$ is an arbitrary neural network module. The motif had been used previously (see §History for details). However, the publication of ResNet made it widely popular for feedforward networks, appearing in neural networks that are seemingly unrelated to ResNet.
The residual connection stabilizes the training and convergence of deep neural networks with hundreds of layers, and is a common motif in deep neural networks, such as transformer models (e.g., BERT, and GPT models such as ChatGPT), the AlphaGo Zero system, the AlphaStar system, and the AlphaFold system.
== Mathematics ==
=== Residual connection ===
In a multilayer neural network model, consider a subnetwork with a certain number of stacked layers (e.g., 2 or 3). Denote the underlying function performed by this subnetwork as $H(x)$, where $x$ is the input to the subnetwork. Residual learning re-parameterizes this subnetwork and lets the parameter layers represent a "residual function" $F(x) = H(x) - x$. The output $y$ of this subnetwork is then represented as:
$$y = F(x) + x$$
The operation of "$+\,x$" is implemented via a "skip connection" that performs an identity mapping to connect the input of the subnetwork with its output. This connection is referred to as a "residual connection" in later work. The function $F(x)$ is often represented by matrix multiplication interlaced with activation functions and normalization operations (e.g., batch normalization or layer normalization). As a whole, one of these subnetworks is referred to as a "residual block". A deep residual network is constructed by simply stacking these blocks.
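A residual block can be sketched in a few lines of NumPy. This is a simplified stand-in (a two-layer ReLU subnetwork without normalization), not the convolutional blocks of the actual ResNet; it also shows the "identity by default" property, since zero weights make the block an exact identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """y = F(x) + x, with F a small two-layer ReLU subnetwork (illustrative)."""
    h = np.maximum(0.0, W1 @ x)   # first parameter layer + ReLU
    return W2 @ h + x             # second layer, then the skip connection

d = 4
x = rng.standard_normal(d)
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
y = residual_block(x, W1, W2)

# With zero weights the block reduces to the identity mapping:
assert np.allclose(residual_block(x, np.zeros((d, d)), np.zeros((d, d))), x)
```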
Long short-term memory (LSTM) has a memory mechanism that serves as a residual connection. In an LSTM without a forget gate, an input $x_t$ is processed by a function $F$ and added to a memory cell $c_t$, resulting in $c_{t+1} = c_t + F(x_t)$. An LSTM with a forget gate essentially functions as a highway network.
To stabilize the variance of the layers' inputs, it is recommended to replace the residual connections $x + f(x)$ with $x/L + f(x)$, where $L$ is the total number of residual layers.
=== Projection connection ===
If the function $F$ is of type $F : \mathbb{R}^n \to \mathbb{R}^m$ where $n \neq m$, then $F(x) + x$ is undefined. To handle this special case, a projection connection is used:
$$y = F(x) + P(x)$$
where $P$ is typically a linear projection, defined by $P(x) = Mx$ where $M$ is an $m \times n$ matrix. The matrix is trained via backpropagation, as is any other parameter of the model.
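The dimension mismatch and its fix can be sketched directly; here a `tanh` layer stands in for $F$, and both matrices are random placeholders for learned weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# When F maps R^n -> R^m with n != m, the skip path needs a projection:
#   y = F(x) + M x, with M an m-by-n matrix learned like any other weight.
n, m = 4, 6
x = rng.standard_normal(n)
W = rng.standard_normal((m, n))   # stands in for the residual function F
M = rng.standard_normal((m, n))   # projection matrix P(x) = M x

y = np.tanh(W @ x) + M @ x
assert y.shape == (m,)            # output lives in R^m, as required
```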
=== Signal propagation ===
The introduction of identity mappings facilitates signal propagation in both forward and backward paths.
==== Forward propagation ====
If the output of the $\ell$-th residual block is the input to the $(\ell+1)$-th residual block (assuming no activation function between blocks), then the $(\ell+1)$-th input is:
$$x_{\ell+1} = F(x_{\ell}) + x_{\ell}$$
Applying this formulation recursively, e.g.:
$$\begin{aligned}x_{\ell +2}&=F(x_{\ell +1})+x_{\ell +1}\\&=F(x_{\ell +1})+F(x_{\ell })+x_{\ell }\end{aligned}$$
yields the general relationship:
$$x_{L} = x_{\ell} + \sum_{i=\ell}^{L-1} F(x_{i})$$
where $L$ is the index of a residual block and $\ell$ is the index of some earlier block. This formulation suggests that there is always a signal that is directly sent from a shallower block $\ell$ to a deeper block $L$.
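The telescoping identity can be verified numerically by running a small stack of blocks and comparing against the unrolled sum; the stand-in residual function and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, num_blocks = 3, 5
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(num_blocks)]
F = lambda x, W: np.tanh(W @ x)  # stand-in residual function

# Run the blocks sequentially: x_{i+1} = F(x_i) + x_i ...
xs = [rng.standard_normal(d)]
for W in Ws:
    xs.append(F(xs[-1], W) + xs[-1])

# ... and check the unrolled identity x_L = x_l + sum_{i=l}^{L-1} F(x_i):
l, L = 1, num_blocks
assert np.allclose(xs[L], xs[l] + sum(F(xs[i], Ws[i]) for i in range(l, L)))
```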
==== Backward propagation ====
The residual learning formulation provides the added benefit of mitigating the vanishing gradient problem to some extent. However, it is crucial to acknowledge that the vanishing gradient issue is not the root cause of the degradation problem, which is tackled through the use of normalization. To observe the effect of residual blocks on backpropagation, consider the partial derivative of a loss function $\mathcal{E}$ with respect to some residual block input $x_{\ell}$. Using the equation above from forward propagation for a later residual block $L > \ell$:
$$\begin{aligned}{\frac {\partial {\mathcal {E}}}{\partial x_{\ell }}}&={\frac {\partial {\mathcal {E}}}{\partial x_{L}}}{\frac {\partial x_{L}}{\partial x_{\ell }}}\\&={\frac {\partial {\mathcal {E}}}{\partial x_{L}}}\left(1+{\frac {\partial }{\partial x_{\ell }}}\sum _{i=\ell }^{L-1}F(x_{i})\right)\\&={\frac {\partial {\mathcal {E}}}{\partial x_{L}}}+{\frac {\partial {\mathcal {E}}}{\partial x_{L}}}{\frac {\partial }{\partial x_{\ell }}}\sum _{i=\ell }^{L-1}F(x_{i})\end{aligned}$$
This formulation suggests that the gradient computation of a shallower layer, $\frac{\partial \mathcal{E}}{\partial x_{\ell}}$, always has a later term $\frac{\partial \mathcal{E}}{\partial x_{L}}$ that is directly added. Even if the gradients of the $F(x_i)$ terms are small, the total gradient $\frac{\partial \mathcal{E}}{\partial x_{\ell}}$ resists vanishing due to the added term $\frac{\partial \mathcal{E}}{\partial x_{L}}$.
== Variants of residual blocks ==
=== Basic block ===
A basic block is the simplest building block studied in the original ResNet. This block consists of two sequential 3x3 convolutional layers and a residual connection. The input and output dimensions of both layers are equal.
=== Bottleneck block ===
A bottleneck block consists of three sequential convolutional layers and a residual connection. The first layer in this block is a 1x1 convolution for dimension reduction (e.g., to 1/2 of the input dimension); the second layer performs a 3x3 convolution; the last layer is another 1x1 convolution for dimension restoration. The models of ResNet-50, ResNet-101, and ResNet-152 are all based on bottleneck blocks.
=== Pre-activation block ===
The pre-activation residual block applies activation functions before applying the residual function $F$. Formally, the computation of a pre-activation residual block can be written as:
$$x_{\ell+1} = F(\phi(x_{\ell})) + x_{\ell}$$
where $\phi$ can be any activation (e.g. ReLU) or normalization (e.g. LayerNorm) operation. This design reduces the number of non-identity mappings between residual blocks, and allows an identity mapping directly from the input to the output. This design was used to train models with 200 to over 1000 layers, and was found to consistently outperform variants where the residual path is not an identity function. The pre-activation ResNet with 200 layers took 3 weeks to train for ImageNet on 8 GPUs in 2016.
Since GPT-2, transformer blocks have been mostly implemented as pre-activation blocks. This is often referred to as "pre-normalization" in the literature of transformer models.
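The difference between the two orderings can be sketched with single-matrix blocks; in the pre-activation form the skip path from input to output is a pure identity, so stacking blocks with zero weights changes nothing (the layer shapes and `tanh` nonlinearity are illustrative):

```python
import numpy as np

def post_act_block(x, W, act=np.tanh):
    """Original ordering: the activation sits inside F, i.e. F(x) = act(W x)."""
    return act(W @ x) + x

def pre_act_block(x, W, act=np.tanh):
    """Pre-activation ordering: x_{l+1} = F(phi(x_l)) + x_l with F(z) = W z."""
    return W @ act(x) + x

x = np.array([0.5, -1.0, 2.0])
W0 = np.zeros((3, 3))

# Both reduce to the identity with zero weights, since act(0) = 0 for tanh:
assert np.allclose(pre_act_block(x, W0), x)
assert np.allclose(post_act_block(x, W0), x)
```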
== Applications ==
Originally, ResNet was designed for computer vision.
All transformer architectures include residual connections. Indeed, very deep transformers cannot be trained without them.
The original ResNet paper made no claim on being inspired by biological systems. However, later research has related ResNet to biologically-plausible algorithms.
A study published in Science in 2023 disclosed the complete connectome of an insect brain (specifically that of a fruit fly larva). This study discovered "multilayer shortcuts" that resemble the skip connections in artificial neural networks, including ResNets.
== History ==
=== Previous work ===
Residual connections were noticed in neuroanatomy, such as by Lorente de No (1938). McCulloch and Pitts (1943) proposed artificial neural networks and considered those with residual connections.
In 1961, Frank Rosenblatt described a three-layer multilayer perceptron (MLP) model with skip connections. The model was referred to as a "cross-coupled system", and the skip connections were forms of cross-coupled connections.
During the late 1980s, "skip-layer" connections were sometimes used in neural networks. For example, Lang and Witbrock (1988) trained a fully connected feedforward network where each layer skip-connects to all subsequent layers, like the later DenseNet (2016). In this work, the residual connection had the form $x \mapsto F(x) + P(x)$, where $P$ is a randomly-initialized projection connection. They termed it a "short-cut connection". An early neural language model used residual connections and named them "direct connections".
=== Degradation problem ===
Sepp Hochreiter discovered the vanishing gradient problem in 1991 and argued that it explained why the then-prevalent forms of recurrent neural networks did not work for long sequences. He and Schmidhuber later designed the LSTM architecture to solve this problem, which has a "cell state" $c_t$ that can function as a generalized residual connection. The highway network (2015) applied the idea of an LSTM unfolded in time to feedforward neural networks. ResNet is equivalent to an open-gated highway network.
During the early days of deep learning, there were attempts to train increasingly deep models. Notable examples included the AlexNet (2012), which had 8 layers, and the VGG-19 (2014), which had 19 layers. However, stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In theory, adding additional layers to deepen a network should not result in a higher training loss, but this is what happened with VGGNet. If the extra layers can be set as identity mappings, however, then the deeper network would represent the same function as its shallower counterpart. There is some evidence that the optimizer is not able to approach identity mappings for the parameterized layers, and the benefit of residual connections was to allow identity mappings by default.
In 2014, the state of the art was training deep neural networks with 20 to 30 layers. The research team for ResNet attempted to train deeper ones by empirically testing various methods for training deeper networks, until they came upon the ResNet architecture.
=== Subsequent work ===
Wide Residual Network (2016) found that using more channels and fewer layers than the original ResNet improves performance and GPU-computational efficiency, and that a block with two 3×3 convolutions is superior to other configurations of convolution blocks.
DenseNet (2016) connects the output of each layer to the input of each subsequent layer:
$x_{\ell+1} = F(x_1, x_2, \dots, x_{\ell-1}, x_\ell)$
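Dense connectivity can be sketched by concatenating all earlier feature vectors before each new layer. The per-layer function here is a hypothetical linear-ReLU stand-in, and the widths are arbitrary; the point is only that layer $\ell+1$ sees $(x_1, \dots, x_\ell)$.

```python
import numpy as np

rng = np.random.default_rng(0)
growth, depth, d0 = 3, 4, 5   # growth rate, number of layers, input width

x_list = [rng.normal(size=(d0,))]       # x_1: the input features
for layer in range(depth):
    concat = np.concatenate(x_list)     # (x_1, ..., x_l) as one vector
    W = rng.normal(size=(concat.size, growth))
    x_next = np.maximum(concat @ W, 0.0)   # F(x_1, ..., x_l): linear + ReLU
    x_list.append(x_next)

print([v.size for v in x_list])  # [5, 3, 3, 3, 3]
```

Note how each layer's weight matrix grows with depth, since its input is the concatenation of everything before it; this is the main memory cost of DenseNet-style connectivity.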
Stochastic depth is a regularization method that randomly drops a subset of layers and lets the signal propagate through the identity skip connections. Also known as DropPath, this regularizes training for deep models, such as vision transformers.
ResNeXt (2017) combines the Inception module with ResNet.
Squeeze-and-Excitation Networks (2018) added squeeze-and-excitation (SE) modules to ResNet. An SE module is applied after a convolution, and takes a tensor of shape $\mathbb{R}^{H\times W\times C}$ (height, width, channels) as input. Each channel is averaged, resulting in a vector of shape $\mathbb{R}^{C}$. This is then passed through a multilayer perceptron (with an architecture such as linear-ReLU-linear-sigmoid) before it is multiplied with the original tensor.
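The squeeze-excitation-scale pipeline described above can be sketched directly in numpy. The weight shapes and the reduction ratio `r` are illustrative choices, not values from the paper.

```python
import numpy as np

def se_module(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation: global-average-pool each channel,
    pass through a small MLP (linear-ReLU-linear-sigmoid),
    then rescale the input's channels by the resulting gates."""
    s = x.mean(axis=(0, 1))                       # squeeze: (H, W, C) -> (C,)
    h = np.maximum(s @ w1 + b1, 0.0)              # linear + ReLU
    g = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))      # linear + sigmoid: gates in (0, 1)
    return x * g                                  # scale each channel by its gate

rng = np.random.default_rng(0)
H, W, C, r = 4, 4, 8, 2                  # r: bottleneck reduction ratio (illustrative)
x = rng.normal(size=(H, W, C))
w1, b1 = rng.normal(size=(C, C // r)), np.zeros(C // r)
w2, b2 = rng.normal(size=(C // r, C)), np.zeros(C)
y = se_module(x, w1, b1, w2, b2)
print(y.shape)  # (4, 4, 8): same shape as the input
```

Because the gates lie in (0, 1), the module can only attenuate channels, reweighting them by a global, content-dependent signal.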
== References == | Wikipedia/Residual_neural_network |
Deep reinforcement learning (DRL) is a subfield of machine learning that combines principles of reinforcement learning (RL) and deep learning. It involves training agents to make decisions by interacting with an environment to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or environment models. This integration enables DRL systems to process high-dimensional inputs, such as images or continuous control signals, making the approach effective for solving complex tasks. Since the introduction of the deep Q-network (DQN) in 2015, DRL has achieved significant successes across domains including games, robotics, and autonomous systems, and is increasingly applied in areas such as healthcare, finance, and autonomous vehicles.
== Deep reinforcement learning ==
=== Introduction ===
Deep reinforcement learning (DRL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. In DRL, agents learn to make decisions by interacting with environments in order to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or models of the environment. This integration enables agents to handle high-dimensional input spaces, such as raw images or continuous control signals, making DRL a widely used approach for addressing complex tasks.
Since the development of the deep Q-network (DQN) in 2015, DRL has led to major breakthroughs in domains such as games, robotics, and autonomous systems. Research in DRL continues to expand rapidly, with active work on challenges like sample efficiency and robustness, as well as innovations in model-based methods, transformer architectures, and open-ended learning. Applications now range from healthcare and finance to language systems and autonomous vehicles.
=== Background ===
Reinforcement learning (RL) is a framework in which agents interact with environments by taking actions and learning from feedback in the form of rewards or penalties. Traditional RL methods, such as Q-learning and policy gradient techniques, rely on tabular representations or linear approximations, which often do not scale to high-dimensional or continuous input spaces.
DRL emerged as a solution to this limitation by integrating RL with deep neural networks. This combination enables agents to approximate complex functions and handle unstructured inputs such as raw images, sensor data, or natural language. The approach became widely recognized following the success of DeepMind's deep Q-network (DQN), which achieved human-level performance on several Atari video games using only pixel inputs and game scores as feedback.
Since then, DRL has evolved to include various architectures and learning strategies, including model-based methods, actor-critic frameworks, and applications in continuous control environments. These developments have significantly expanded the applicability of DRL across domains where traditional RL was limited.
=== Key algorithms and methods ===
Several algorithmic approaches form the foundation of deep reinforcement learning, each with different strategies for learning optimal behavior.
One of the earliest and most influential DRL algorithms is the Deep Q-Network (DQN), which combines Q-learning with deep neural networks. DQN approximates the optimal action-value function using a convolutional neural network and introduced techniques such as experience replay and target networks, which stabilize training.
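The role of the target network can be sketched through the Q-learning target it produces, y = r + γ·max_a′ Q_target(s′, a′), with the bootstrap term zeroed on terminal transitions. The "network" below is a hypothetical linear stand-in over synthetic data, chosen only to show the target computation.

```python
import numpy as np

def dqn_targets(q_target, rewards, next_states, dones, gamma=0.99):
    """Q-learning targets y = r + gamma * max_a' Q_target(s', a'),
    with the bootstrap term dropped on terminal transitions."""
    next_q = q_target(next_states).max(axis=1)        # greedy value of s'
    return rewards + gamma * (1.0 - dones) * next_q

# Toy linear "target network" over 3 actions (hypothetical parameters).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
q_target = lambda s: s @ W          # (batch, 4) -> (batch, 3) action values

next_states = rng.normal(size=(5, 4))
rewards = np.ones(5)
dones = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # last transition is terminal
y = dqn_targets(q_target, rewards, next_states, dones)
print(y.shape)  # (5,); y[-1] is exactly the reward 1.0
```

In the full algorithm, the online network is regressed toward these targets while `q_target`'s weights are only periodically synchronized, which is what stabilizes training.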
Policy gradient methods directly optimize the agent’s policy by adjusting parameters in the direction that increases expected rewards. These methods are well-suited to high-dimensional or continuous action spaces and form the basis of many modern DRL algorithms.
Actor-critic algorithms combine the advantages of value-based and policy-based methods. The actor updates the policy, while the critic evaluates the current policy using a value function. Popular variants include A2C (Advantage Actor-Critic) and PPO (Proximal Policy Optimization), both of which are widely used in benchmarks and real-world applications.
Other methods include multi-agent reinforcement learning, hierarchical RL, and approaches that integrate planning or memory mechanisms, depending on the complexity of the task and environment.
=== Applications ===
DRL has been applied to a wide range of domains that require sequential decision-making and the ability to learn from high-dimensional input data.
One of the most well-known applications is in games, where DRL agents have demonstrated performance comparable to or exceeding human-level benchmarks. DeepMind's AlphaGo and AlphaStar, as well as OpenAI Five, are notable examples of DRL systems mastering complex games such as Go, StarCraft II, and Dota 2. While these systems have demonstrated high performance in constrained environments, their success often depends on extensive computational resources and may not generalize easily to tasks outside their training domains.
In robotics, DRL has been used to train agents for tasks such as locomotion, manipulation, and navigation in both simulated and real-world environments. By learning directly from sensory input, DRL enables robots to adapt to complex dynamics without relying on hand-crafted control rules.
Other growing areas of application include finance (e.g., portfolio optimization), healthcare (e.g., treatment planning and medical decision-making), natural language processing (e.g., dialogue systems), and autonomous vehicles (e.g., path planning and control). These applications show how DRL deals with real-world problems involving uncertainty, sequential reasoning, and high-dimensional data.
=== Challenges and limitations ===
DRL faces several significant challenges that limit its broader deployment.
One of the most prominent issues is sample inefficiency. DRL algorithms often require millions of interactions with the environment to learn effective policies, which is impractical in many real-world settings where data collection is expensive or time-consuming.
Another challenge is the sparse or delayed reward problem, where feedback signals are infrequent, making it difficult for agents to attribute outcomes to specific decisions. Techniques such as reward shaping and exploration strategies have been developed to address this issue.
DRL systems also tend to be sensitive to hyperparameters and lack robustness across tasks or environments. Models trained in simulation often fail when deployed in the real world due to discrepancies between simulated and real-world dynamics, a problem known as the "reality gap". Bias and fairness in DRL systems have also emerged as concerns, particularly in domains like healthcare and finance where imbalanced data can lead to unequal outcomes for underrepresented groups.
Additionally, concerns about safety, interpretability, and reproducibility have become increasingly important, especially in high-stakes domains such as healthcare or autonomous driving. These issues remain active areas of research in the DRL community.
=== Recent advances ===
Recent developments in DRL have introduced new architectures and training strategies aimed at improving performance, efficiency, and generalization.
One key area of progress is model-based reinforcement learning, where agents learn an internal model of the environment to simulate outcomes before acting. This approach improves sample efficiency and planning. An example is the Dreamer algorithm, which learns a latent-space model to train agents more efficiently in complex environments.
Another major innovation is the use of transformer-based architectures in DRL. Unlike traditional models that rely on recurrent or convolutional networks, transformers can model long-term dependencies more effectively. The Decision Transformer and other similar models treat RL as a sequence modeling problem, enabling agents to generalize better across tasks.
In addition, research into open-ended learning has led to the creation of capable agents that can solve a range of tasks without task-specific tuning. Systems such as those developed by OpenAI show that agents trained in diverse, evolving environments can generalize across new challenges, moving toward more adaptive and flexible intelligence.
=== Future directions ===
As deep reinforcement learning continues to evolve, researchers are exploring ways to make algorithms more efficient, robust, and generalizable across a wide range of tasks. Improving sample efficiency through model-based learning, enhancing generalization with open-ended training environments, and integrating foundation models are among the current research goals.
A related area of interest is safe and ethical deployment, particularly in high-risk settings like healthcare, autonomous driving, and finance. Researchers are developing frameworks for safer exploration, interpretability, and better alignment with human values. Ensuring that DRL systems promote equitable outcomes remains an ongoing challenge, especially where historical data may under-represent marginalized populations.
The future of DRL may also involve more integration with other subfields of machine learning, such as unsupervised learning, transfer learning, and large language models, enabling agents that can learn from diverse data modalities and interact more naturally with human users.
== References == | Wikipedia/Deep_reinforcement_learning |
In machine learning, diffusion models, also known as diffusion-based generative models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of two major components: the forward diffusion process, and the reverse sampling process. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements distributed similarly to the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data. A trained diffusion model can be sampled in many ways, with different efficiency and quality.
There are various equivalent formalisms, including Markov chains, denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. They are typically trained using variational inference. The model responsible for denoising is typically called its "backbone". The backbone may be of any kind, but they are typically U-nets or transformers.
As of 2024, diffusion models are mainly used for computer vision tasks, including image denoising, inpainting, super-resolution, image generation, and video generation. These typically involve training a neural network to sequentially denoise images blurred with Gaussian noise. The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise, and applying the network iteratively to denoise the image.
Diffusion-based image generators have seen widespread commercial interest, such as Stable Diffusion and DALL-E. These models typically combine diffusion models with other models, such as text-encoders and cross-attention modules to allow text-conditioned generation.
Other than computer vision, diffusion models have also found applications in natural language processing such as text generation and summarization, sound generation, and reinforcement learning.
== Denoising diffusion model ==
=== Non-equilibrium thermodynamics ===
Diffusion models were introduced in 2015 as a method to train a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.
Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a "cloud" in space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution $\mathcal{N}(0, I)$. A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in "non-equilibrium" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution.
The equilibrium distribution is the Gaussian distribution $\mathcal{N}(0, I)$, with pdf $\rho(x) \propto e^{-\frac{1}{2}\|x\|^2}$. This is just the Maxwell–Boltzmann distribution of particles in a potential well $V(x) = \frac{1}{2}\|x\|^2$ at temperature 1. The initial distribution, being very much out of equilibrium, would diffuse towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles were to undergo only gradient descent, then they would all fall to the origin, collapsing the distribution.
=== Denoising Diffusion Probabilistic Model (DDPM) ===
The 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference.
==== Forward diffusion ====
To present the model, we need some notation.

$\beta_1, \dots, \beta_T \in (0, 1)$ are fixed constants.
$\alpha_t := 1 - \beta_t$
$\bar{\alpha}_t := \alpha_1 \cdots \alpha_t$
$\sigma_t := \sqrt{1 - \bar{\alpha}_t}$
$\tilde{\sigma}_t := \frac{\sigma_{t-1}}{\sigma_t}\sqrt{\beta_t}$
$\tilde{\mu}_t(x_t, x_0) := \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})x_t + \sqrt{\bar{\alpha}_{t-1}}(1 - \alpha_t)x_0}{\sigma_t^2}$
$\mathcal{N}(\mu, \Sigma)$ is the normal distribution with mean $\mu$ and variance $\Sigma$, and $\mathcal{N}(x \mid \mu, \Sigma)$ is the probability density at $x$.
A vertical bar denotes conditioning.
A forward diffusion process starts at some starting point $x_0 \sim q$, where $q$ is the probability distribution to be learned, then repeatedly adds noise to it by
$x_t = \sqrt{1 - \beta_t}\, x_{t-1} + \sqrt{\beta_t}\, z_t$
where $z_1, \dots, z_T$ are IID samples from $\mathcal{N}(0, I)$. This is designed so that for any starting distribution of $x_0$, we have $\lim_t x_t \mid x_0$ converging to $\mathcal{N}(0, I)$.
The entire diffusion process then satisfies
$q(x_{0:T}) = q(x_0)\, q(x_1 \mid x_0) \cdots q(x_T \mid x_{T-1}) = q(x_0)\, \mathcal{N}(x_1 \mid \sqrt{\alpha_1}\,x_0, \beta_1 I) \cdots \mathcal{N}(x_T \mid \sqrt{\alpha_T}\,x_{T-1}, \beta_T I)$
or
$\ln q(x_{0:T}) = \ln q(x_0) - \sum_{t=1}^{T} \frac{1}{2\beta_t}\|x_t - \sqrt{1 - \beta_t}\,x_{t-1}\|^2 + C$
where $C$ is a normalization constant and often omitted. In particular, we note that $x_{1:T} \mid x_0$ is a Gaussian process, which affords us considerable freedom in reparameterization. For example, by standard manipulation with Gaussian processes,
$x_t \mid x_0 \sim \mathcal{N}\left(\sqrt{\bar{\alpha}_t}\,x_0, \sigma_t^2 I\right)$
$x_{t-1} \mid x_t, x_0 \sim \mathcal{N}\left(\tilde{\mu}_t(x_t, x_0), \tilde{\sigma}_t^2 I\right)$
In particular, notice that for large $t$, the variable $x_t \mid x_0 \sim \mathcal{N}\left(\sqrt{\bar{\alpha}_t}\,x_0, \sigma_t^2 I\right)$ converges to $\mathcal{N}(0, I)$. That is, after a long enough diffusion process, we end up with some $x_T$ that is very close to $\mathcal{N}(0, I)$, with all traces of the original $x_0 \sim q$ gone.
For example, since $x_t \mid x_0 \sim \mathcal{N}\left(\sqrt{\bar{\alpha}_t}\,x_0, \sigma_t^2 I\right)$, we can sample $x_t \mid x_0$ directly "in one step", instead of going through all the intermediate steps $x_1, x_2, \dots, x_{t-1}$.
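The one-step closed form can be checked numerically against the step-by-step recursion: both give $x_t \mid x_0$ with mean $\sqrt{\bar{\alpha}_t}\,x_0$ and standard deviation $\sigma_t$. The schedule and timestep below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)
alpha_bar = np.cumprod(1.0 - betas)
t = 30                                    # an arbitrary timestep (1-indexed)

x0 = 2.5                                  # a scalar x_0, for illustration
# One-step sample: x_t | x_0 ~ N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) I)
mean = np.sqrt(alpha_bar[t - 1]) * x0
std = np.sqrt(1.0 - alpha_bar[t - 1])
x_t = mean + std * rng.normal()

# Sanity check by Monte Carlo over the step-by-step recursion:
n = 200_000
xs = np.full(n, x0)
for beta in betas[:t]:
    xs = np.sqrt(1.0 - beta) * xs + np.sqrt(beta) * rng.normal(size=n)
print(abs(xs.mean() - mean) < 0.01, abs(xs.std() - std) < 0.01)  # True True
```

This shortcut is what makes training efficient: a noisy $x_t$ for any $t$ can be built from $x_0$ with a single Gaussian draw.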
==== Backward diffusion ====
The key idea of DDPM is to use a neural network parametrized by $\theta$. The network takes in two arguments $x_t, t$, and outputs a vector $\mu_\theta(x_t, t)$ and a matrix $\Sigma_\theta(x_t, t)$, such that each step in the forward diffusion process can be approximately undone by $x_{t-1} \sim \mathcal{N}(\mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$. This then gives us a backward diffusion process $p_\theta$ defined by
$p_\theta(x_T) = \mathcal{N}(x_T \mid 0, I)$
$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}(x_{t-1} \mid \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$
The goal now is to learn the parameters such that $p_\theta(x_0)$ is as close to $q(x_0)$ as possible. To do that, we use maximum likelihood estimation with variational inference.
==== Variational inference ====
The ELBO inequality states that
$\ln p_\theta(x_0) \geq E_{x_{1:T} \sim q(\cdot \mid x_0)}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} \mid x_0)]$
and taking one more expectation, we get
$E_{x_0 \sim q}[\ln p_\theta(x_0)] \geq E_{x_{0:T} \sim q}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} \mid x_0)]$
We see that maximizing the quantity on the right would give us a lower bound on the likelihood of observed data. This allows us to perform variational inference.
Define the loss function
$L(\theta) := -E_{x_{0:T} \sim q}[\ln p_\theta(x_{0:T}) - \ln q(x_{1:T} \mid x_0)]$
and now the goal is to minimize the loss by stochastic gradient descent. The expression may be simplified to
$L(\theta) = \sum_{t=1}^{T} E_{x_{t-1}, x_t \sim q}[-\ln p_\theta(x_{t-1} \mid x_t)] + E_{x_0 \sim q}[D_{KL}(q(x_T \mid x_0) \,\|\, p_\theta(x_T))] + C$
where $C$ does not depend on the parameter, and thus can be ignored. Since $p_\theta(x_T) = \mathcal{N}(x_T \mid 0, I)$ also does not depend on the parameter, the term $E_{x_0 \sim q}[D_{KL}(q(x_T \mid x_0) \,\|\, p_\theta(x_T))]$ can also be ignored. This leaves just $L(\theta) = \sum_{t=1}^{T} L_t$ with
$L_t = E_{x_{t-1}, x_t \sim q}[-\ln p_\theta(x_{t-1} \mid x_t)]$
to be minimized.
==== Noise prediction network ====
Since $x_{t-1} \mid x_t, x_0 \sim \mathcal{N}(\tilde{\mu}_t(x_t, x_0), \tilde{\sigma}_t^2 I)$, this suggests that we should use $\mu_\theta(x_t, t) = \tilde{\mu}_t(x_t, x_0)$; however, the network does not have access to $x_0$, and so it has to estimate it instead. Now, since $x_t \mid x_0 \sim \mathcal{N}\left(\sqrt{\bar{\alpha}_t}\,x_0, \sigma_t^2 I\right)$, we may write $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sigma_t z$, where $z$ is some unknown Gaussian noise. Now we see that estimating $x_0$ is equivalent to estimating $z$.
Therefore, let the network output a noise vector $\epsilon_\theta(x_t, t)$, and let it predict
$\mu_\theta(x_t, t) = \tilde{\mu}_t\left(x_t, \frac{x_t - \sigma_t \epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) = \frac{x_t - \epsilon_\theta(x_t, t)\,\beta_t/\sigma_t}{\sqrt{\alpha_t}}$
It remains to design $\Sigma_\theta(x_t, t)$. The DDPM paper suggested not learning it (since it resulted in "unstable training and poorer sample quality"), but fixing it at some value $\Sigma_\theta(x_t, t) = \zeta_t^2 I$, where either $\zeta_t^2 = \beta_t$ or $\tilde{\sigma}_t^2$ yielded similar performance.
With this, the loss simplifies to
$L_t = \frac{\beta_t^2}{2\alpha_t \sigma_t^2 \zeta_t^2} E_{x_0 \sim q;\, z \sim \mathcal{N}(0, I)}\left[\left\|\epsilon_\theta(x_t, t) - z\right\|^2\right] + C$
which may be minimized by stochastic gradient descent. The paper noted empirically that an even simpler loss function
$L_{simple,t} = E_{x_0 \sim q;\, z \sim \mathcal{N}(0, I)}\left[\left\|\epsilon_\theta(x_t, t) - z\right\|^2\right]$
resulted in better models.
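A single Monte Carlo estimate of the simplified objective can be sketched as follows: sample a timestep $t$ and noise $z$, build $x_t$ from $x_0$ in one step, and compare the network's noise prediction with $z$. The "network" here is a hypothetical stand-in that always predicts zero noise, so the loss is just the mean squared norm of $z$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)       # illustrative schedule
alpha_bar = np.cumprod(1.0 - betas)

def simple_loss(eps_theta, x0_batch):
    """One Monte Carlo estimate of L_simple: sample t and z, form x_t
    in closed form, and regress the predicted noise onto the true z."""
    t = rng.integers(1, T + 1)
    z = rng.normal(size=x0_batch.shape)
    x_t = np.sqrt(alpha_bar[t - 1]) * x0_batch + np.sqrt(1 - alpha_bar[t - 1]) * z
    return np.mean((eps_theta(x_t, t) - z) ** 2)

# Hypothetical "network" that ignores its input and predicts zero noise.
eps_zero = lambda x_t, t: np.zeros_like(x_t)

x0 = rng.normal(size=(64,))
loss = simple_loss(eps_zero, x0)
print(float(loss))  # ≈ 1, the mean squared norm of standard normal noise
```

In practice `eps_theta` would be a trained U-net or transformer, and this scalar would be backpropagated through it.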
=== Backward diffusion process ===
After a noise prediction network is trained, it can be used for generating data points in the original distribution in a loop as follows:
Compute the noise estimate $\epsilon \leftarrow \epsilon_\theta(x_t, t)$
Compute the original data estimate $\tilde{x}_0 \leftarrow (x_t - \sigma_t \epsilon)/\sqrt{\bar{\alpha}_t}$
Sample the previous data $x_{t-1} \sim \mathcal{N}(\tilde{\mu}_t(x_t, \tilde{x}_0), \tilde{\sigma}_t^2 I)$
Change time $t \leftarrow t - 1$
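The four-step loop above can be sketched end-to-end. The noise-prediction network here is a hypothetical stand-in (it predicts zero noise, as if trained on a degenerate dataset), so the output is not meaningful data; the sketch only shows the mechanics of the sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)       # illustrative schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)
sigma = np.sqrt(1.0 - alpha_bar)

def mu_tilde(x_t, x0, t):
    """Posterior mean of x_{t-1} given x_t and x_0 (t is 1-indexed)."""
    ab_prev = alpha_bar[t - 2] if t > 1 else 1.0
    return (np.sqrt(alphas[t - 1]) * (1 - ab_prev) * x_t
            + np.sqrt(ab_prev) * (1 - alphas[t - 1]) * x0) / sigma[t - 1] ** 2

def sample(eps_theta, shape):
    x = rng.normal(size=shape)                         # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        eps = eps_theta(x, t)                          # noise estimate
        x0_hat = (x - sigma[t - 1] * eps) / np.sqrt(alpha_bar[t - 1])
        sig_prev = sigma[t - 2] if t > 1 else 0.0
        sigma_tilde = (sig_prev / sigma[t - 1]) * np.sqrt(betas[t - 1])
        x = mu_tilde(x, x0_hat, t) + sigma_tilde * rng.normal(size=shape)
    return x

# Hypothetical "trained" network: predicts zero noise everywhere.
x = sample(lambda v, t: np.zeros_like(v), (8,))
print(x.shape)  # (8,)
```

Swapping in an actual trained `eps_theta` turns this loop into the DDPM sampler described above.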
== Score-based generative model ==
Score-based generative models are another formulation of diffusion modelling. They are also called noise conditional score networks (NCSN) or score-matching with Langevin dynamics (SMLD).
=== Score matching ===
==== The idea of score functions ====
Consider the problem of image generation. Let $x$ represent an image, and let $q(x)$ be the probability distribution over all possible images. If we have $q(x)$ itself, then we can say for certain how likely a certain image is. However, this is intractable in general.
Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors — e.g. how much more likely is an image of a cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added?
Consequently, we are actually quite uninterested in $q(x)$ itself, but rather in $\nabla_x \ln q(x)$. This has two major effects:
One, we no longer need to normalize $q(x)$, but can use any $\tilde{q}(x) = Cq(x)$, where $C = \int \tilde{q}(x)\,dx > 0$ is any unknown constant that is of no concern to us.
Two, we are comparing $q(x)$ with its neighbors $q(x + dx)$, by
$\frac{q(x)}{q(x + dx)} = e^{-\langle \nabla_x \ln q, dx \rangle}$
Let the score function be $s(x) := \nabla_x \ln q(x)$; then consider what we can do with $s(x)$.
As it turns out, $s(x)$ allows us to sample from $q(x)$ using thermodynamics. Specifically, if we have a potential energy function $U(x) = -\ln q(x)$, and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution $q_U(x) \propto e^{-U(x)/k_B T} = q(x)^{1/k_B T}$. At temperature $k_B T = 1$, the Boltzmann distribution is exactly $q(x)$.
Therefore, to model $q(x)$, we may start with a particle sampled at any convenient distribution (such as the standard Gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equation
$dx_t = -\nabla_{x_t} U(x_t)\,dt + dW_t$
and the Boltzmann distribution is, by the Fokker–Planck equation, the unique thermodynamic equilibrium. So no matter what distribution $x_0$ has, the distribution of $x_t$ converges in distribution to $q$ as $t \to \infty$.
==== Learning the score function ====
Given a density $q$, we wish to learn a score function approximation $f_\theta \approx \nabla \ln q$. This is score matching. Typically, score matching is formalized as minimizing the Fisher divergence $E_q[\|f_\theta(x) - \nabla \ln q(x)\|^2]$. By expanding the integral, and performing an integration by parts,
$E_q[\|f_\theta(x) - \nabla \ln q(x)\|^2] = E_q[\|f_\theta\|^2 + 2\nabla \cdot f_\theta] + C$
giving us a loss function, also known as the Hyvärinen scoring rule, that can be minimized by stochastic gradient descent.
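The integration-by-parts trick can be verified on a one-dimensional toy problem. For a linear score model $f_\theta(x) = \theta x$ (a hypothetical model class chosen for illustration), the divergence term is just $\theta$, so the objective $E_q[f_\theta^2 + 2 f_\theta'] = \theta^2 E[x^2] + 2\theta$ has the closed-form minimizer $\theta^* = -1/E[x^2]$; for data from $\mathcal{N}(0, 1)$ this recovers the true score $-x$ without ever evaluating $\nabla \ln q$.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)     # data from q = N(0, 1)

# Linear score model f_theta(x) = theta * x, so div f_theta = theta.
# Hyvarinen objective: J(theta) = E[f^2 + 2 f'] = theta^2 E[x^2] + 2 theta.
# Its minimizer in closed form is theta* = -1 / E[x^2].
theta_star = -1.0 / np.mean(samples ** 2)
print(round(theta_star, 1))  # -1.0: the true score of N(0, 1) is -x
```

The key point is that the fitted objective used only samples from $q$, which is exactly why this loss is trainable by stochastic gradient descent when $f_\theta$ is a neural network.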
==== Annealing the score function ====
Suppose we need to model the distribution of images, and we want $x_0 \sim \mathcal{N}(0, I)$, a white-noise image. Now, most white-noise images do not look like real images, so $q(x_0) \approx 0$ for large swaths of $x_0 \sim \mathcal{N}(0, I)$. This presents a problem for learning the score function, because if there are no samples around a certain point, then we can't learn the score function at that point. If we do not know the score function $\nabla_{x_t} \ln q(x_t)$ at that point, then we cannot impose the time-evolution equation on a particle:
$dx_t = \nabla_{x_t} \ln q(x_t)\,dt + dW_t$
To deal with this problem, we perform annealing. If $q$ is too different from a white-noise distribution, then we progressively add noise until it is indistinguishable from one. That is, we perform a forward diffusion, then learn the score function, then use the score function to perform a backward diffusion.
=== Continuous diffusion processes ===
==== Forward diffusion process ====
Consider again the forward diffusion process, but this time in continuous time:
$x_t = \sqrt{1 - \beta_t}\, x_{t-1} + \sqrt{\beta_t}\, z_t$
By taking the $\beta_t \to \beta(t)\,dt$, $\sqrt{dt}\,z_t \to dW_t$ limit, we obtain a continuous diffusion process, in the form of a stochastic differential equation:
$dx_t = -\frac{1}{2}\beta(t)x_t\,dt + \sqrt{\beta(t)}\,dW_t$
where $W_t$ is a Wiener process (multidimensional Brownian motion).
Now, the equation is exactly a special case of the overdamped Langevin equation
$dx_t = -\frac{D}{k_B T}(\nabla_x U)\,dt + \sqrt{2D}\,dW_t$
where $D$ is the diffusion tensor, $T$ is the temperature, and $U$ is the potential energy field. If we substitute in $D = \frac{1}{2}\beta(t)I$, $k_B T = 1$, $U = \frac{1}{2}\|x\|^2$, we recover the above equation. This explains why the phrase "Langevin dynamics" is sometimes used in diffusion models.
Now the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according to $q$ at time $t = 0$; then after a long time, the cloud of particles would settle into the stable distribution of $\mathcal{N}(0, I)$. Let $\rho_t$ be the density of the cloud of particles at time $t$; then we have
$\rho_0 = q; \quad \rho_T \approx \mathcal{N}(0, I)$
and the goal is to somehow reverse the process, so that we can start at the end and diffuse back to the beginning.
By the Fokker–Planck equation, the density of the cloud evolves according to
{\displaystyle \partial _{t}\ln \rho _{t}={\frac {1}{2}}\beta (t)\left(n+(x+\nabla \ln \rho _{t})\cdot \nabla \ln \rho _{t}+\Delta \ln \rho _{t}\right)}
where {\displaystyle n} is the dimension of space, and {\displaystyle \Delta } is the Laplace operator. Equivalently,
{\displaystyle \partial _{t}\rho _{t}={\frac {1}{2}}\beta (t)(\nabla \cdot (x\rho _{t})+\Delta \rho _{t})}
==== Backward diffusion process ====
If we have solved {\displaystyle \rho _{t}} for time {\displaystyle t\in [0,T]}, then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with density {\displaystyle \nu _{0}=\rho _{T}}, and let the particles in the cloud evolve according to
{\displaystyle dy_{t}={\frac {1}{2}}\beta (T-t)y_{t}dt+\beta (T-t)\underbrace {\nabla _{y_{t}}\ln \rho _{T-t}\left(y_{t}\right)} _{\text{score function}}dt+{\sqrt {\beta (T-t)}}dW_{t}}
then by plugging into the Fokker–Planck equation, we find that {\displaystyle \partial _{t}\rho _{T-t}=\partial _{t}\nu _{t}}. Thus this cloud of points is the original cloud, evolving backwards.
=== Noise conditional score network (NCSN) ===
At the continuous limit,
{\displaystyle {\bar {\alpha }}_{t}=(1-\beta _{1})\cdots (1-\beta _{t})=e^{\sum _{i}\ln(1-\beta _{i})}\to e^{-\int _{0}^{t}\beta (t)dt}}
and so
{\displaystyle x_{t}|x_{0}\sim N\left(e^{-{\frac {1}{2}}\int _{0}^{t}\beta (t)dt}x_{0},\left(1-e^{-\int _{0}^{t}\beta (t)dt}\right)I\right)}
In particular, we see that we can directly sample from any point in the continuous diffusion process without going through the intermediate steps, by first sampling {\displaystyle x_{0}\sim q,z\sim {\mathcal {N}}(0,I)}, then getting {\displaystyle x_{t}=e^{-{\frac {1}{2}}\int _{0}^{t}\beta (t)dt}x_{0}+{\sqrt {1-e^{-\int _{0}^{t}\beta (t)dt}}}z}, where the square root of the variance scales the noise. That is, we can quickly sample {\displaystyle x_{t}\sim \rho _{t}} for any {\displaystyle t\geq 0}.
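The one-shot sampling formula above can be verified against the stated Gaussian marginal. A minimal numpy sketch, assuming (purely for illustration) a constant rate {\displaystyle \beta =0.5} so that the integral is trivial:

```python
import numpy as np

rng = np.random.default_rng(0)
BETA = 0.5  # illustrative constant noise rate, so that the integral is BETA * t

def sample_xt(x0, t):
    """Draw x_t | x_0 in one shot from the closed-form Gaussian marginal."""
    integral = BETA * t
    mean_scale = np.exp(-0.5 * integral)
    noise_std = np.sqrt(1.0 - np.exp(-integral))  # sqrt of the marginal variance
    return mean_scale * x0 + noise_std * rng.standard_normal(x0.shape)

x0 = np.full(100_000, 2.0)  # data concentrated at 2.0, for easy checking
xt = sample_xt(x0, t=1.0)
# theory: mean = 2 * exp(-0.25) and variance = 1 - exp(-0.5)
```

The empirical mean and variance of `xt` match the Gaussian marginal derived above, with no step-by-step simulation required.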
Now, define a certain probability distribution {\displaystyle \gamma } over {\displaystyle [0,\infty )}; then the score-matching loss function is defined as the expected Fisher divergence:
{\displaystyle L(\theta )=E_{t\sim \gamma ,x_{t}\sim \rho _{t}}[\|f_{\theta }(x_{t},t)\|^{2}+2\nabla \cdot f_{\theta }(x_{t},t)]}
After training, {\displaystyle f_{\theta }(x_{t},t)\approx \nabla \ln \rho _{t}}, so we can perform the backwards diffusion process by first sampling {\displaystyle x_{T}\sim {\mathcal {N}}(0,I)}, then integrating the SDE from {\displaystyle t=T} to {\displaystyle t=0}:
{\displaystyle x_{t-dt}=x_{t}+{\frac {1}{2}}\beta (t)x_{t}dt+\beta (t)f_{\theta }(x_{t},t)dt+{\sqrt {\beta (t)}}dW_{t}}
This may be done by any SDE integration method, such as the Euler–Maruyama method.
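The backward integration can be demonstrated end to end in a toy case where the score function is known in closed form, so no network is needed. The data distribution {\displaystyle N(3,1)} and the constant rate {\displaystyle \beta =1} below are illustrative assumptions; with them, {\displaystyle \rho _{t}=N(3e^{-t/2},1)} and the score is analytic.

```python
import numpy as np

rng = np.random.default_rng(0)
MU, BETA, T, dt = 3.0, 1.0, 5.0, 1e-2

def score(x, t):
    """Analytic score of rho_t for data distribution N(MU, 1):
    rho_t = N(MU * exp(-BETA * t / 2), 1), so the score is -(x - mean_t)."""
    return -(x - MU * np.exp(-0.5 * BETA * t))

x = rng.standard_normal(10_000)  # start near N(0, I) at t = T
for i in range(int(T / dt)):
    t = T - i * dt
    z = rng.standard_normal(x.shape)
    # Euler-Maruyama step of the backward SDE from the text
    x = x + (0.5 * BETA * x + BETA * score(x, t)) * dt + np.sqrt(BETA * dt) * z
# the cloud has diffused back to (approximately) the data distribution N(3, 1)
```

After the loop, the particle cloud sits near the original data distribution, mean about 3 and standard deviation about 1.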
The name "noise conditional score network" is explained thus:
"network", because {\displaystyle f_{\theta }} is implemented as a neural network.
"score", because the output of the network is interpreted as approximating the score function {\displaystyle \nabla \ln \rho _{t}}.
"noise conditional", because {\displaystyle \rho _{t}} is equal to {\displaystyle \rho _{0}} blurred by an added Gaussian noise that increases with time, and so the score function depends on the amount of noise added.
== Their equivalence ==
DDPM and score-based generative models are equivalent. This means that a network trained using DDPM can be used as an NCSN, and vice versa.
We know that {\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},\sigma _{t}^{2}I\right)}, so by Tweedie's formula, we have
{\displaystyle \nabla _{x_{t}}\ln q(x_{t})={\frac {1}{\sigma _{t}^{2}}}(-x_{t}+{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}])}
As described previously, the DDPM loss function is {\displaystyle \sum _{t}L_{simple,t}} with
{\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim {\mathcal {N}}(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]}
where {\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}z}. By a change of variables,
{\displaystyle L_{simple,t}=E_{x_{0},x_{t}\sim q}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]=E_{x_{t}\sim q,x_{0}\sim q(\cdot |x_{t})}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sigma _{t}}}\right\|^{2}\right]}
and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we have
{\displaystyle \epsilon _{\theta }(x_{t},t)={\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}]}{\sigma _{t}}}=-\sigma _{t}\nabla _{x_{t}}\ln q(x_{t})}
Thus, a score-based network predicts noise, and can be used for denoising.
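The identity {\displaystyle \epsilon _{\theta }=-\sigma _{t}\nabla \ln q} means the two model families differ only by a rescaling. A minimal sketch of the two conversion directions (the helper names are hypothetical, introduced here for illustration):

```python
import numpy as np

def noise_to_score(eps, sigma_t):
    """Score estimate from a noise prediction: grad log q(x_t) = -eps / sigma_t."""
    return -eps / sigma_t

def score_to_noise(score, sigma_t):
    """Noise prediction from a score estimate: eps = -sigma_t * score."""
    return -sigma_t * score

eps = np.array([0.5, -1.0])
score = noise_to_score(eps, sigma_t=0.8)
roundtrip = score_to_noise(score, sigma_t=0.8)  # recovers eps exactly
```

This is why, in practice, a trained DDPM can be dropped into a score-based sampler unchanged, and vice versa.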
Conversely, the continuous limit {\displaystyle x_{t-1}=x_{t-dt},\beta _{t}=\beta (t)dt,z_{t}{\sqrt {dt}}=dW_{t}} of the backward equation
{\displaystyle x_{t-1}={\frac {x_{t}}{\sqrt {\alpha _{t}}}}-{\frac {\beta _{t}}{\sigma _{t}{\sqrt {\alpha _{t}}}}}\epsilon _{\theta }(x_{t},t)+{\sqrt {\beta _{t}}}z_{t};\quad z_{t}\sim {\mathcal {N}}(0,I)}
gives us precisely the same equation as score-based diffusion:
{\displaystyle x_{t-dt}=x_{t}(1+\beta (t)dt/2)+\beta (t)\nabla _{x_{t}}\ln q(x_{t})dt+{\sqrt {\beta (t)}}dW_{t}}
Thus, at infinitesimal steps of DDPM, a denoising network performs score-based diffusion.
== Main variants ==
=== Noise schedule ===
In DDPM, the sequence of numbers {\displaystyle 0=\sigma _{0}<\sigma _{1}<\cdots <\sigma _{T}<1} is called a (discrete time) noise schedule. In general, consider a strictly increasing function {\displaystyle \sigma } of type {\displaystyle \mathbb {R} \to (0,1)}, such as the sigmoid function. In that case, a noise schedule is a sequence of real numbers {\displaystyle \lambda _{1}<\lambda _{2}<\cdots <\lambda _{T}}. It then defines a sequence of noises {\displaystyle \sigma _{t}:=\sigma (\lambda _{t})}, which then derives the other quantities {\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}.
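The recipe above (pick increasing {\displaystyle \lambda _{t}}, squash through a sigmoid, derive {\displaystyle \beta _{t}}) can be sketched in a few lines of numpy; the particular grid of {\displaystyle \lambda } values is an illustrative assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# a hypothetical schedule: lambdas increase, so sigma_t increases within (0, 1)
lambdas = np.linspace(-4.0, 4.0, 10)
sigma = sigmoid(lambdas)  # sigma_t = sigma(lambda_t)

# beta_t = 1 - (1 - sigma_t^2) / (1 - sigma_{t-1}^2), for t = 1..T-1
beta = 1.0 - (1.0 - sigma[1:] ** 2) / (1.0 - sigma[:-1] ** 2)
```

Because the schedule is strictly increasing and stays in (0, 1), every derived `beta` lands in (0, 1) as well, which is what the discrete forward process requires.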
In order to use arbitrary noise schedules, instead of training a noise prediction model {\displaystyle \epsilon _{\theta }(x_{t},t)}, one trains {\displaystyle \epsilon _{\theta }(x_{t},\sigma _{t})}.
Similarly, for the noise conditional score network, instead of training {\displaystyle f_{\theta }(x_{t},t)}, one trains {\displaystyle f_{\theta }(x_{t},\sigma _{t})}.
=== Denoising Diffusion Implicit Model (DDIM) ===
The original DDPM method for generating images is slow, since the forward diffusion process usually takes {\displaystyle T\sim 1000} steps to make the distribution of {\displaystyle x_{T}} appear close to Gaussian. However, this means the backward diffusion process also takes 1000 steps. Unlike the forward diffusion process, which can skip steps since {\displaystyle x_{t}|x_{0}} is Gaussian for all {\displaystyle t\geq 1}, the backward diffusion process does not allow skipping steps. For example, to sample
{\displaystyle x_{t-2}|x_{t-1}\sim {\mathcal {N}}(\mu _{\theta }(x_{t-1},t-1),\Sigma _{\theta }(x_{t-1},t-1))}
requires the model to first sample {\displaystyle x_{t-1}}. Attempting to directly sample {\displaystyle x_{t-2}|x_{t}} would require us to marginalize out {\displaystyle x_{t-1}}, which is generally intractable.
DDIM is a method to take any model trained on DDPM loss, and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. If we generalize the Markov chain in DDPM to a non-Markovian one, DDIM corresponds to the case where the reverse process has variance equal to 0. In other words, the reverse process (and also the forward process) is deterministic. When using fewer sampling steps, DDIM outperforms DDPM.
In detail, the DDIM sampling method is as follows. Start with the forward diffusion process {\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\sigma _{t}\epsilon }. Then, during the backward denoising process, given {\displaystyle x_{t},\epsilon _{\theta }(x_{t},t)}, the original data is estimated as
{\displaystyle x_{0}'={\frac {x_{t}-\sigma _{t}\epsilon _{\theta }(x_{t},t)}{\sqrt {{\bar {\alpha }}_{t}}}}}
then the backward diffusion process can jump to any step {\displaystyle 0\leq s<t}, and the next denoised sample is
{\displaystyle x_{s}={\sqrt {{\bar {\alpha }}_{s}}}x_{0}'+{\sqrt {\sigma _{s}^{2}-(\sigma '_{s})^{2}}}\epsilon _{\theta }(x_{t},t)+\sigma _{s}'\epsilon }
where {\displaystyle \sigma _{s}'} is an arbitrary real number within the range {\displaystyle [0,\sigma _{s}]}, and {\displaystyle \epsilon \sim {\mathcal {N}}(0,I)} is newly sampled Gaussian noise. If all {\displaystyle \sigma _{s}'=0}, then the backward process becomes deterministic, and this special case of DDIM is also called "DDIM". The original paper noted that when the process is deterministic, samples generated with only 20 steps are already, at a high level, very similar to ones generated with 1000 steps.
The original paper recommended defining a single "eta value" {\displaystyle \eta \in [0,1]}, such that {\displaystyle \sigma _{s}'=\eta {\tilde {\sigma }}_{s}}. When {\displaystyle \eta =1}, this is the original DDPM. When {\displaystyle \eta =0}, this is the fully deterministic DDIM. For intermediate values, the process interpolates between them.
By the equivalence, the DDIM algorithm also applies for score-based diffusion models.
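The estimate-then-jump step above can be written as a short numpy sketch. For simplicity the noise scale is taken as {\displaystyle \sigma _{s}'=\eta \sigma _{s}} (an illustrative choice within the allowed range {\displaystyle [0,\sigma _{s}]}, not the paper's exact {\displaystyle {\tilde {\sigma }}_{s}}), and {\displaystyle \sigma _{t}^{2}=1-{\bar {\alpha }}_{t}} as in the variance-preserving process:

```python
import numpy as np

def ddim_step(x_t, eps_pred, abar_t, abar_s, sigma_t, sigma_s, eta=0.0, rng=None):
    """One DDIM jump from step t to an earlier step s (eta = 0: deterministic)."""
    x0_est = (x_t - sigma_t * eps_pred) / np.sqrt(abar_t)  # estimated original data
    sigma_p = eta * sigma_s
    x_s = np.sqrt(abar_s) * x0_est + np.sqrt(sigma_s**2 - sigma_p**2) * eps_pred
    if eta > 0:
        x_s = x_s + sigma_p * rng.standard_normal(x_t.shape)
    return x_s

# sanity check: feeding the *true* noise as the prediction, the deterministic
# jump lands exactly on the forward-process sample at step s with that noise
rng = np.random.default_rng(0)
x0, eps = rng.standard_normal(4), rng.standard_normal(4)
sigma_t, sigma_s = 0.8, 0.4
abar_t, abar_s = 1 - sigma_t**2, 1 - sigma_s**2
x_t = np.sqrt(abar_t) * x0 + sigma_t * eps
x_s = ddim_step(x_t, eps, abar_t, abar_s, sigma_t, sigma_s)
```

The sanity check makes the "skipping" property concrete: a single jump from t to s is consistent with the forward marginal at s, with no intermediate steps simulated.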
=== Latent diffusion model (LDM) ===
Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. Then to generate an image, one can sample from the diffusion model, then use a decoder to decode it into an image.
The encoder-decoder pair is most often a variational autoencoder (VAE).
=== Architectural improvements ===
Later work proposed various architectural improvements. For example, one proposal is log-space interpolation during backward sampling: instead of sampling from {\displaystyle x_{t-1}\sim {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),{\tilde {\sigma }}_{t}^{2}I)}, it was recommended to sample from {\displaystyle {\mathcal {N}}({\tilde {\mu }}_{t}(x_{t},{\tilde {x}}_{0}),(\sigma _{t}^{v}{\tilde {\sigma }}_{t}^{1-v})^{2}I)} for a learned parameter {\displaystyle v}.
In the v-prediction formalism, the noising formula {\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}\epsilon _{t}} is reparameterised by an angle {\displaystyle \phi _{t}} such that {\displaystyle \cos \phi _{t}={\sqrt {{\bar {\alpha }}_{t}}}} and a "velocity" defined by {\displaystyle \cos \phi _{t}\epsilon _{t}-\sin \phi _{t}x_{0}}. The network is trained to predict the velocity {\displaystyle {\hat {v}}_{\theta }}, and denoising is by {\displaystyle x_{\phi _{t}-\delta }=\cos(\delta )\;x_{\phi _{t}}-\sin(\delta )\;{\hat {v}}_{\theta }(x_{\phi _{t}})}. This parameterization was found to improve performance, as the model can be trained to reach total noise (i.e. {\displaystyle \phi _{t}=90^{\circ }}) and then reverse it, whereas the standard parameterization never reaches total noise, since {\displaystyle {\sqrt {{\bar {\alpha }}_{t}}}>0} always.
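The angle reparameterisation is just a rotation of the {\displaystyle (x_{0},\epsilon )} plane, so it can be inverted exactly. A minimal sketch (the helper names are hypothetical):

```python
import numpy as np

def to_v(x0, eps, abar_t):
    """Velocity target: v = cos(phi_t) * eps - sin(phi_t) * x0,
    with cos(phi_t) = sqrt(abar_t) and sin(phi_t) = sqrt(1 - abar_t)."""
    c, s = np.sqrt(abar_t), np.sqrt(1.0 - abar_t)
    return c * eps - s * x0

def from_v(x_t, v, abar_t):
    """Invert the rotation: recover (x0, eps) from the noisy sample and v."""
    c, s = np.sqrt(abar_t), np.sqrt(1.0 - abar_t)
    return c * x_t - s * v, s * x_t + c * v

rng = np.random.default_rng(0)
x0, eps, abar_t = rng.standard_normal(3), rng.standard_normal(3), 0.7
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1 - abar_t) * eps
x0_rec, eps_rec = from_v(x_t, to_v(x0, eps, abar_t), abar_t)
```

Since {\displaystyle (x_{t},v)} is an orthogonal rotation of {\displaystyle (x_{0},\epsilon )}, the round trip recovers both exactly, which is why a v-prediction network carries the same information as a noise-prediction one.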
=== Classifier guidance ===
Classifier guidance was proposed in 2021 to improve class-conditional generation by using a classifier. The original publication used CLIP text encoders to improve text-conditional image generation.
Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description "black cat with red eyes". Generally, we want to sample from the distribution {\displaystyle p(x|y)}, where {\displaystyle x} ranges over images, and {\displaystyle y} ranges over classes of images (a description "black cat with red eyes" is just a very detailed class, and a class "cat" is just a very vague description).
Taking the perspective of the noisy channel model, we can understand the process as follows: to generate an image {\displaystyle x} conditional on description {\displaystyle y}, we imagine that the requester really had in mind an image {\displaystyle x}, but the image is passed through a noisy channel and came out garbled, as {\displaystyle y}. Image generation is then nothing but inferring which {\displaystyle x} the requester had in mind.
In other words, conditional image generation is simply "translating from a textual language into a pictorial language". Then, as in the noisy-channel model, we use Bayes' theorem to get
{\displaystyle p(x|y)\propto p(y|x)p(x)}
in other words, if we have a good model of the space of all images, and a good image-to-class translator, we get a class-to-image translator "for free". In the equation for backward diffusion, the score {\displaystyle \nabla \ln p(x)} can be replaced by
{\displaystyle \nabla _{x}\ln p(x|y)=\underbrace {\nabla _{x}\ln p(x)} _{\text{score}}+\underbrace {\nabla _{x}\ln p(y|x)} _{\text{classifier guidance}}}
where {\displaystyle \nabla _{x}\ln p(x)} is the score function, trained as previously described, and {\displaystyle \nabla _{x}\ln p(y|x)} is found by using a differentiable image classifier.
During the diffusion process, we need to condition on the time, giving
{\displaystyle \nabla _{x_{t}}\ln p(x_{t}|y,t)=\nabla _{x_{t}}\ln p(y|x_{t},t)+\nabla _{x_{t}}\ln p(x_{t}|t)}
Usually, however, the classifier model does not depend on time, in which case {\displaystyle p(y|x_{t},t)=p(y|x_{t})}.
Classifier guidance is defined for the gradient of the score function, and thus for score-based diffusion networks; but as previously noted, score-based diffusion models are equivalent to denoising models via {\displaystyle \epsilon _{\theta }(x_{t},t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|t)}, and similarly, {\displaystyle \epsilon _{\theta }(x_{t},y,t)=-\sigma _{t}\nabla _{x_{t}}\ln p(x_{t}|y,t)}. Therefore, classifier guidance works for denoising diffusion as well, using the modified noise prediction:
{\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\underbrace {\sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)} _{\text{classifier guidance}}}
==== With temperature ====
The classifier-guided diffusion model samples from {\displaystyle p(x|y)}, which is concentrated around the maximum a posteriori estimate {\displaystyle \arg \max _{x}p(x|y)}. If we want to force the model to move towards the maximum likelihood estimate {\displaystyle \arg \max _{x}p(y|x)}, we can use {\displaystyle p_{\gamma }(x|y)\propto p(y|x)^{\gamma }p(x)} where {\displaystyle \gamma >0} is interpretable as an inverse temperature. In the context of diffusion models, it is usually called the guidance scale. A high {\displaystyle \gamma } forces the model to sample from a distribution concentrated around {\displaystyle \arg \max _{x}p(y|x)}. This sometimes improves the quality of generated images.
This gives a modification to the previous equation:
{\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(y|x)}
For denoising models, it corresponds to
{\displaystyle \epsilon _{\theta }(x_{t},y,t)=\epsilon _{\theta }(x_{t},t)-\gamma \sigma _{t}\nabla _{x_{t}}\ln p(y|x_{t},t)}
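The effect of the guidance scale can be verified exactly in a toy case where everything is Gaussian. Assuming (for illustration only) the prior {\displaystyle p(x)=N(0,1)} and the classifier likelihood {\displaystyle p(y|x)=N(y;x,\tau ^{2})}, the tempered posterior is also Gaussian, and the guided score matches its analytic score:

```python
import numpy as np

y, tau, gamma = 2.0, 1.0, 3.0  # illustrative observation, noise scale, guidance scale

def guided_score(x):
    """Prior score plus gamma times the classifier gradient, for the toy case
    p(x) = N(0, 1) and p(y | x) = N(y; x, tau^2)."""
    score_prior = -x
    score_cls = (y - x) / tau**2
    return score_prior + gamma * score_cls

# the tempered posterior p_gamma(x|y) ~ p(y|x)^gamma p(x) is Gaussian with:
precision = 1.0 + gamma / tau**2
post_mean = (gamma * y / tau**2) / precision
x = np.linspace(-3, 3, 7)
analytic = -precision * (x - post_mean)  # score of the tempered posterior
```

As {\displaystyle \gamma } grows, the posterior mean moves toward the observation {\displaystyle y} and the posterior tightens, which is the "concentrate around the maximum likelihood estimate" behaviour described above.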
=== Classifier-free guidance (CFG) ===
If we do not have a classifier {\displaystyle p(y|x)}, we could still extract one out of the image model itself:
{\displaystyle \nabla _{x}\ln p_{\gamma }(x|y)=(1-\gamma )\nabla _{x}\ln p(x)+\gamma \nabla _{x}\ln p(x|y)}
Such a model is usually trained by presenting it with both {\displaystyle (x,y)} and {\displaystyle (x,{\rm {None}})}, allowing it to model both {\displaystyle \nabla _{x}\ln p(x|y)} and {\displaystyle \nabla _{x}\ln p(x)}.
Note that for CFG, the diffusion model cannot be merely a generative model of the entire data distribution {\displaystyle \nabla _{x}\ln p(x)}. It must be a conditional generative model {\displaystyle \nabla _{x}\ln p(x|y)}. For example, in stable diffusion, the diffusion backbone takes as input a noisy sample {\displaystyle x_{t}}, a time {\displaystyle t}, and a conditioning vector {\displaystyle y} (such as a vector encoding a text prompt), and produces a noise prediction {\displaystyle \epsilon _{\theta }(x_{t},y,t)}.
For denoising models, it corresponds to
{\displaystyle \epsilon _{\theta }(x_{t},y,t,\gamma )=\epsilon _{\theta }(x_{t},t)+\gamma (\epsilon _{\theta }(x_{t},y,t)-\epsilon _{\theta }(x_{t},t))}
As sampled by DDIM, the algorithm can be written as
{\displaystyle {\begin{aligned}\epsilon _{\text{uncond}}&\leftarrow \epsilon _{\theta }(x_{t},t)\\\epsilon _{\text{cond}}&\leftarrow \epsilon _{\theta }(x_{t},t,c)\\\epsilon _{\text{CFG}}&\leftarrow \epsilon _{\text{uncond}}+\gamma (\epsilon _{\text{cond}}-\epsilon _{\text{uncond}})\\x_{0}&\leftarrow (x_{t}-\sigma _{t}\epsilon _{\text{CFG}})/{\sqrt {1-\sigma _{t}^{2}}}\\x_{s}&\leftarrow {\sqrt {1-\sigma _{s}^{2}}}x_{0}+{\sqrt {\sigma _{s}^{2}-(\sigma _{s}')^{2}}}\epsilon _{\text{uncond}}+\sigma _{s}'\epsilon \end{aligned}}}
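The guidance combination itself is a one-line extrapolation between the two noise predictions. A minimal sketch with dummy prediction vectors standing in for the network outputs:

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, gamma):
    """Classifier-free guided noise prediction:
    eps = eps_uncond + gamma * (eps_cond - eps_uncond)."""
    return eps_uncond + gamma * (eps_cond - eps_uncond)

# dummy stand-ins for epsilon_theta(x_t, t) and epsilon_theta(x_t, t, c)
eps_u = np.array([0.1, 0.2])
eps_c = np.array([0.3, 0.0])
# gamma = 0 recovers the unconditional prediction, gamma = 1 the conditional
# one, and gamma > 1 extrapolates past the conditional prediction
guided = cfg_noise(eps_u, eps_c, 2.0)
```

Note that {\displaystyle \gamma >1} is the interesting regime in practice: the guided prediction overshoots the conditional one in the direction away from the unconditional prediction.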
A similar technique applies to language model sampling. Also, if the unconditional generation {\displaystyle \epsilon _{\text{uncond}}\leftarrow \epsilon _{\theta }(x_{t},t)} is replaced by {\displaystyle \epsilon _{\text{neg cond}}\leftarrow \epsilon _{\theta }(x_{t},t,c')}, then it results in negative prompting, which pushes the generation away from the condition {\displaystyle c'}.
=== Samplers ===
Given a diffusion model, one may regard it either as a continuous process and sample from it by integrating an SDE, or regard it as a discrete process and sample from it by iterating the discrete steps. The choice of the noise schedule {\displaystyle \beta _{t}} can also affect the quality of samples. A noise schedule is a function that sends a natural number to a noise level:
{\displaystyle t\mapsto \beta _{t},\quad t\in \{1,2,\dots \},\beta \in (0,1)}
A noise schedule is more often specified by a map {\displaystyle t\mapsto \sigma _{t}}. The two definitions are equivalent, since {\displaystyle \beta _{t}=1-{\frac {1-\sigma _{t}^{2}}{1-\sigma _{t-1}^{2}}}}.
In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with an adjustable amount of noise), and interpolate between the two; the case where one adds noise is sometimes called ancestral sampling. The amount of noise is denoted {\displaystyle \eta } ("eta value") in the DDIM paper, with {\displaystyle \eta =0} denoting no noise (as in deterministic DDIM), and {\displaystyle \eta =1} denoting full noise (as in DDPM).
In the SDE perspective, one can use any numerical integration method, such as the Euler–Maruyama method, Heun's method, or linear multistep methods. Just as in the discrete case, one can add an adjustable amount of noise during the integration.
A survey and comparison of samplers in the context of image generation is available in the literature.
=== Other examples ===
Notable variants include Poisson flow generative model, consistency model, critically-damped Langevin diffusion, GenPhys, cold diffusion, discrete diffusion, etc.
== Flow-based diffusion model ==
Abstractly speaking, the idea of a diffusion model is to take an unknown probability distribution (the distribution of natural-looking images) and progressively convert it to a known probability distribution (the standard Gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function {\displaystyle \nabla \ln p_{t}}.
In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes are SDEs, though the forward process is integrable in closed form, so it can be done at no computational cost. The backward process is not integrable in closed form, so it must be integrated step by step by standard SDE solvers, which can be very expensive. The probability path in diffusion models is defined through an Itô process, and one can retrieve the deterministic process by using the probability flow ODE formulation.
In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is also a deterministic flow along the same vector field, but going backwards. Both processes are solutions to ODEs. If the vector field is well-behaved, the ODE will also be well-behaved.
Given two distributions {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}, a flow-based model is a time-dependent velocity field {\displaystyle v_{t}(x)} on {\displaystyle [0,1]\times \mathbb {R} ^{d}}, such that if we start by sampling a point {\displaystyle x\sim \pi _{0}} and let it move according to the velocity field:
{\displaystyle {\frac {d}{dt}}\phi _{t}(x)=v_{t}(\phi _{t}(x))\quad t\in [0,1],\quad {\text{starting from }}\phi _{0}(x)=x}
we end up with a point {\displaystyle x_{1}\sim \pi _{1}}. The solution {\displaystyle \phi _{t}} of the above ODE defines a probability path {\displaystyle p_{t}=[\phi _{t}]_{\#}\pi _{0}} by the pushforward measure operator. In particular, {\displaystyle [\phi _{1}]_{\#}\pi _{0}=\pi _{1}}.
The probability path and the velocity field also satisfy the continuity equation, in the sense of probability distributions:
{\displaystyle \partial _{t}p_{t}+\nabla \cdot (v_{t}p_{t})=0}
To construct a probability path, we start by constructing a conditional probability path {\displaystyle p_{t}(x\vert z)} and the corresponding conditional velocity field {\displaystyle v_{t}(x\vert z)} on some conditioning distribution {\displaystyle q(z)}. A natural choice is the Gaussian conditional probability path:
{\displaystyle p_{t}(x\vert z)={\mathcal {N}}\left(m_{t}(z),\zeta _{t}^{2}I\right)}
The conditional velocity field which corresponds to the geodesic path between conditional Gaussian paths is
{\displaystyle v_{t}(x\vert z)={\frac {\zeta _{t}'}{\zeta _{t}}}(x-m_{t}(z))+m_{t}'(z)}
The probability path and velocity field are then computed by marginalizing:
{\displaystyle p_{t}(x)=\int p_{t}(x\vert z)q(z)dz\qquad {\text{ and }}\qquad v_{t}(x)=\mathbb {E} _{q(z)}\left[{\frac {v_{t}(x\vert z)p_{t}(x\vert z)}{p_{t}(x)}}\right]}
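The Gaussian conditional path and its velocity field can be exercised numerically for a single fixed {\displaystyle z}, where the marginal equals the conditional. In the sketch below, the choices {\displaystyle m_{t}(z)=tz} and {\displaystyle \zeta _{t}=1-(1-\zeta _{\min })t} are illustrative assumptions; forward Euler integration of the ODE transports samples of {\displaystyle {\mathcal {N}}(0,1)} onto {\displaystyle {\mathcal {N}}(z,\zeta _{1}^{2})}.

```python
import numpy as np

rng = np.random.default_rng(0)
z, zeta_min = 4.0, 0.1  # illustrative target point and final path width

def m(t):
    return t * z                          # conditional mean path m_t(z)

def zeta(t):
    return 1.0 - (1.0 - zeta_min) * t     # conditional width path

def v(x, t):
    """Conditional velocity field (zeta'/zeta)(x - m_t) + m'_t."""
    dzeta, dm = -(1.0 - zeta_min), z
    return (dzeta / zeta(t)) * (x - m(t)) + dm

x = rng.standard_normal(10_000)           # samples from pi_0 = N(0, 1)
dt = 1e-3
for i in range(1000):
    x = x + v(x, i * dt) * dt             # forward Euler along the flow
# the exact flow map is x_1 = z + zeta(1) * x_0, i.e. N(4, 0.1^2)
```

The exact flow here is linear in the initial point, so the Euler integration reproduces the target mean 4 and width 0.1 to good accuracy.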
=== Optimal transport flow ===
The idea of optimal transport flow is to construct a probability path minimizing the Wasserstein metric. The distribution on which we condition is an approximation of the optimal transport plan between {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}: {\displaystyle z=(x_{0},x_{1})} and {\displaystyle q(z)=\Gamma (\pi _{0},\pi _{1})}, where {\displaystyle \Gamma } is the optimal transport plan, which can be approximated by mini-batch optimal transport. If the batch size is not large, then the transport it computes can be very far from the true optimal transport.
=== Rectified flow ===
The idea of rectified flow is to learn a flow model such that the velocity is nearly constant along each flow path. This is beneficial because we can integrate along such a vector field with very few steps. For example, if an ODE {\displaystyle {\dot {\phi _{t}}}(x)=v_{t}(\phi _{t}(x))} follows perfectly straight paths, it simplifies to {\displaystyle \phi _{t}(x)=x_{0}+t\cdot v_{0}(x_{0})}, allowing for exact solutions in one step. In practice, we cannot reach such perfection, but when the flow field is nearly straight, we can take a few large steps instead of many small steps.
The general idea is to start with two distributions {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}, construct a flow field {\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}} from them, and then repeatedly apply a "reflow" operation to obtain successive flow fields {\displaystyle \phi ^{1},\phi ^{2},\dots }, each straighter than the previous one. When the flow field is straight enough for the application, we stop.
Generally, for any time-differentiable process {\displaystyle \phi _{t}}, {\displaystyle v_{t}} can be estimated by solving:
{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{x\sim p_{t}}\left[\lVert {v_{t}(x,\theta )-v_{t}(x)}\rVert ^{2}\right]\,\mathrm {d} t.}
By injecting the strong prior that intermediate trajectories are straight, rectified flow achieves both theoretical relevance for optimal transport and computational efficiency, as ODEs with straight paths can be simulated precisely without time discretization.
Specifically, rectified flow seeks to match an ODE with the marginal distributions of the linear interpolation between points from distributions {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}. Given observations {\displaystyle x_{0}\sim \pi _{0}} and {\displaystyle x_{1}\sim \pi _{1}}, the canonical linear interpolation {\displaystyle x_{t}=tx_{1}+(1-t)x_{0},t\in [0,1]}
yields a trivial case {\displaystyle {\dot {x}}_{t}=x_{1}-x_{0}}, which cannot be causally simulated without {\displaystyle x_{1}}. To address this, {\displaystyle x_{t}} is "projected" into a space of causally simulatable ODEs, by minimizing the least squares loss with respect to the direction {\displaystyle x_{1}-x_{0}}:
{\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{\pi _{0},\pi _{1},p_{t}}\left[\lVert {(x_{1}-x_{0})-v_{t}(x_{t})}\rVert ^{2}\right]\,\mathrm {d} t.}
The data pair {\displaystyle (x_{0},x_{1})} can be any coupling of {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}, typically independent (i.e., {\displaystyle (x_{0},x_{1})\sim \pi _{0}\times \pi _{1}}), obtained by randomly combining observations from {\displaystyle \pi _{0}} and {\displaystyle \pi _{1}}. This process ensures that the trajectories closely mirror the density map of {\displaystyle x_{t}} trajectories but reroute at intersections to ensure causality.
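The population minimizer of the least-squares objective is the conditional expectation {\displaystyle v_{t}(x)=\mathbb {E} [x_{1}-x_{0}|x_{t}=x]}, which for two 1-D Gaussians under the independent coupling is a linear function of {\displaystyle x} with coefficients computable in closed form. The sketch below assumes (for illustration) {\displaystyle \pi _{0}={\mathcal {N}}(0,1)}, {\displaystyle \pi _{1}={\mathcal {N}}(2,1)}, fixes {\displaystyle t=0.75}, and recovers that minimizer by ordinary least squares on samples:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, t, n = 2.0, 0.75, 200_000

# independent coupling (x0, x1) ~ pi_0 x pi_1, pi_0 = N(0, 1), pi_1 = N(mu, 1)
x0 = rng.standard_normal(n)
x1 = mu + rng.standard_normal(n)
xt = t * x1 + (1 - t) * x0  # linear interpolation at time t

# least-squares fit of v_t(x) = a*x + b against the target x1 - x0;
# the population minimizer is the regression E[x1 - x0 | x_t = x]
a, b = np.polyfit(xt, x1 - x0, 1)
# analytic check: slope = (2t - 1) / (t^2 + (1-t)^2), here 0.5 / 0.625 = 0.8,
# and intercept = mu - slope * t * mu = 2 - 0.8 * 1.5 = 0.8
```

Replacing `np.polyfit` with a neural network trained on random `(t, xt, x1 - x0)` triples, and integrating the learned field, is exactly the rectified flow training loop in miniature.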
A distinctive aspect of rectified flow is its capability for "reflow", which straightens the trajectories of ODE paths. Denote the rectified flow {\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}} induced from {\displaystyle (x_{0},x_{1})} as {\displaystyle \phi ^{0}={\mathsf {Rectflow}}((x_{0},x_{1}))}. Recursively applying this {\displaystyle {\mathsf {Rectflow}}(\cdot )} operator generates a series of rectified flows {\displaystyle \phi ^{k+1}={\mathsf {Rectflow}}((\phi _{0}^{k}(x_{0}),\phi _{1}^{k}(x_{1})))}. This "reflow" process not only reduces transport costs but also straightens the paths of rectified flows, making {\displaystyle \phi ^{k}} paths straighter with increasing {\displaystyle k}.
Rectified flow includes a nonlinear extension where the linear interpolation {\displaystyle x_{t}} is replaced with any time-differentiable curve that connects {\displaystyle x_{0}} and {\displaystyle x_{1}}, given by {\displaystyle x_{t}=\alpha _{t}x_{1}+\beta _{t}x_{0}}. This framework encompasses DDIM and probability flow ODEs as special cases, with particular choices of {\displaystyle \alpha _{t}} and {\displaystyle \beta _{t}}. However, when the path of {\displaystyle x_{t}} is not straight, the reflow process no longer ensures a reduction in convex transport costs, and it also no longer straightens the paths of {\displaystyle \phi _{t}}.
== Choice of architecture ==
=== Diffusion model ===
For generating images by DDPM, we need a neural network that takes a time $t$ and a noisy image $x_t$, and predicts a noise $\epsilon_\theta(x_t, t)$ from it. Since predicting the noise is the same as predicting the denoised image, then subtracting it from $x_t$, denoising architectures tend to work well. For example, the U-Net, which was found to be good for denoising images, is often used for denoising diffusion models that generate images.
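The equivalence between noise prediction and image prediction follows directly from the forward formula $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$, which can be solved for $x_0$ once $\epsilon$ is estimated. A minimal sketch (with `eps_net` a hypothetical noise-prediction model):

```python
import numpy as np

def ddpm_loss(eps_net, x0, abar_t, t, rng):
    """Simplified DDPM training objective: noise x0 at level abar_t,
    then ask the network to predict that noise from (x_t, t)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
    return np.mean((eps_net(xt, t) - eps) ** 2)

def denoise_estimate(xt, eps_hat, abar_t):
    """Predicting the noise is equivalent to predicting the clean image:
    invert the forward formula x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps."""
    return (xt - np.sqrt(1.0 - abar_t) * eps_hat) / np.sqrt(abar_t)
```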
For DDPM, the underlying architecture ("backbone") does not have to be a U-Net; it just has to predict the noise somehow. For example, the diffusion transformer (DiT) uses a Transformer to predict the mean and diagonal covariance of the noise, given the textual conditioning and the partially denoised image. It is the same as a standard U-Net-based denoising diffusion model, with a Transformer replacing the U-Net. Mixture-of-experts Transformers can also be applied.
DDPM can be used to model general data distributions, not just natural-looking images. For example, Human Motion Diffusion models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions. It uses a Transformer network to generate a less noisy trajectory out of a noisy one.
=== Conditioning ===
The base diffusion model can only generate unconditionally from the whole distribution. For example, a diffusion model learned on ImageNet would generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition, and then sample from the conditional distribution. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector.
Stable Diffusion, for example, imposes conditioning in the form of cross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet.
As a particularly simple example, consider image inpainting. The conditions are $\tilde{x}$, the reference image, and $m$, the inpainting mask. The conditioning is imposed at each step of the backward diffusion process, by first sampling $\tilde{x}_t \sim N\left(\sqrt{\bar\alpha_t}\,\tilde{x},\, \sigma_t^2 I\right)$, a noisy version of $\tilde{x}$, then replacing $x_t$ with $(1 - m) \odot x_t + m \odot \tilde{x}_t$, where $\odot$ means elementwise multiplication. Another application of the cross-attention mechanism is prompt-to-prompt image editing.
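The replacement step above is a one-liner; a minimal sketch, assuming NumPy arrays for the current sample, the reference image, and the mask:

```python
import numpy as np

def inpaint_step(x_t, x_ref, mask, abar_t, sigma_t, rng):
    """One conditioning step of diffusion inpainting: noise the reference
    image to the current level, then paste it into the masked region."""
    x_ref_t = np.sqrt(abar_t) * x_ref + sigma_t * rng.standard_normal(x_ref.shape)
    # Keep the model's sample outside the mask, the noised reference inside it.
    return (1.0 - mask) * x_t + mask * x_ref_t
```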
Conditioning is not limited to generating images from a specific category, or according to a specific caption (as in text-to-image). For example, human motion generation has been demonstrated conditioned on an audio clip of a person walking (allowing motion to be synced to a soundtrack), on video of a person running, or on a text description of human motion.
=== Upscaling ===
As generating an image takes a long time, one can try to generate a small image by a base diffusion model, then upscale it by other models. Upscaling can be done by GAN, Transformer, or signal processing methods like Lanczos resampling.
Diffusion models themselves can be used to perform upscaling. A cascading diffusion model stacks multiple diffusion models one after another, in the style of Progressive GAN. The lowest level is a standard diffusion model that generates a 32×32 image; the image is then upscaled by a diffusion model specifically trained for upscaling, and the process repeats.
In more detail, the diffusion upscaler is trained as follows:
Sample $(x_0, z_0, c)$, where $x_0$ is the high-resolution image, $z_0$ is the same image scaled down to a low resolution, and $c$ is the conditioning, which can be the caption of the image, the class of the image, etc.
Sample two white noises $\epsilon_x, \epsilon_z$ and two time-steps $t_x, t_z$. Compute the noisy versions of the high-resolution and low-resolution images:
$$\begin{cases} x_{t_x} = \sqrt{\bar\alpha_{t_x}}\, x_0 + \sigma_{t_x}\epsilon_x \\ z_{t_z} = \sqrt{\bar\alpha_{t_z}}\, z_0 + \sigma_{t_z}\epsilon_z \end{cases}$$
Train the denoising network to predict $\epsilon_x$ given $x_{t_x}, z_{t_z}, t_x, t_z, c$. That is, apply gradient descent on $\theta$ on the L2 loss $\|\epsilon_\theta(x_{t_x}, z_{t_z}, t_x, t_z, c) - \epsilon_x\|_2^2$.
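The steps above can be sketched as one training function. `eps_net` is a hypothetical stand-in for the denoising network, and the noise schedule arrays `abar`, `sigma` are assumed given:

```python
import numpy as np

def upscaler_loss(eps_net, x0, z0, c, abar, sigma, rng):
    """Training step for a diffusion upscaler: noise the high-res image x0
    and the low-res image z0 at independent timesteps, then predict the
    high-res noise from both, plus the conditioning c."""
    tx = rng.integers(len(abar))          # independent timesteps
    tz = rng.integers(len(abar))
    eps_x = rng.standard_normal(x0.shape)
    eps_z = rng.standard_normal(z0.shape)
    x_t = np.sqrt(abar[tx]) * x0 + sigma[tx] * eps_x
    z_t = np.sqrt(abar[tz]) * z0 + sigma[tz] * eps_z
    return np.mean((eps_net(x_t, z_t, tx, tz, c) - eps_x) ** 2)
```

Noising the low-resolution input at its own timestep (noise conditioning augmentation) makes the upscaler robust to imperfect outputs from the previous stage of the cascade.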
== Examples ==
This section collects some notable diffusion models, and briefly describes their architecture.
=== OpenAI ===
The DALL-E series by OpenAI are text-conditional diffusion models of images.
The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that autoregressively generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how close the image fits the text.
GLIDE (2022-03) is a 3.5-billion-parameter diffusion model, and a small version was released publicly. Soon after, DALL-E 2 was released (2022-04). DALL-E 2 is a 3.5-billion-parameter cascaded diffusion model that generates images from text by "inverting the CLIP image encoder", a technique its authors termed "unCLIP".
The unCLIP method contains 4 models: a CLIP image encoder, a CLIP text encoder, an image decoder, and a "prior" model (which can be a diffusion model, or an autoregressive model). During training, the prior model is trained to convert CLIP image encodings to CLIP text encodings. The image decoder is trained to convert CLIP image encodings back to images. During inference, a text is converted by the CLIP text encoder to a vector, then it is converted by the prior model to an image encoding, then it is converted by the image decoder to an image.
Sora (2024-02) is a diffusion Transformer model (DiT).
=== Stability AI ===
Stable Diffusion (2022-08), released by Stability AI, consists of a denoising latent diffusion model (860 million parameters), a VAE, and a text encoder. The denoising network is a U-Net, with cross-attention blocks to allow for conditional image generation.
Stable Diffusion 3 (2024-03) changed the latent diffusion model from the UNet to a Transformer model, and so it is a DiT. It uses rectified flow.
Stable Video 4D (2024-07) is a latent diffusion model for videos of 3D objects.
=== Google ===
Imagen (2022) uses a T5-XXL language model to encode the input text into an embedding vector. It is a cascaded diffusion model with three sub-models. The first step denoises a white noise to a 64×64 image, conditional on the embedding vector of the text. This model has 2B parameters. The second step upscales the image by 64×64→256×256, conditional on embedding. This model has 650M parameters. The third step is similar, upscaling by 256×256→1024×1024. This model has 400M parameters. The three denoising networks are all U-Nets.
Muse (2023-01) is not a diffusion model, but an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens.
Imagen 2 (2023-12) is also diffusion-based. It can generate images based on a prompt that mixes images and text. Imagen 3 (2024-05) is diffusion-based as well. Few further details have been published for either model.
Veo (2024) generates videos by latent diffusion. The diffusion is conditioned on a vector that encodes both a text prompt and an image prompt.
=== Meta ===
Make-A-Video (2022) is a text-to-video diffusion model.
CM3leon (2023) is not a diffusion model, but an autoregressive causally masked Transformer, with mostly the same architecture as LLaMa-2.
Transfusion (2024) is a Transformer that combines autoregressive text generation and denoising diffusion. Specifically, it generates text autoregressively (with causal masking), and generates images by denoising multiple times over image tokens (with all-to-all attention).
Movie Gen (2024) is a series of Diffusion Transformers operating on latent space and by flow matching.
== See also ==
Diffusion process
Markov chain
Variational inference
Variational autoencoder
== Further reading ==
Review papers
Yang, Ling (2024-09-06), YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy, retrieved 2024-09-06
Yang, Ling; Zhang, Zhilong; Song, Yang; Hong, Shenda; Xu, Runsheng; Zhao, Yue; Zhang, Wentao; Cui, Bin; Yang, Ming-Hsuan (2023-11-09). "Diffusion Models: A Comprehensive Survey of Methods and Applications". ACM Comput. Surv. 56 (4): 105:1–105:39. arXiv:2209.00796. doi:10.1145/3626235. ISSN 0360-0300.
Austin, Jacob; Johnson, Daniel D.; Ho, Jonathan; Tarlow, Daniel; Rianne van den Berg (2021). "Structured Denoising Diffusion Models in Discrete State-Spaces". arXiv:2107.03006 [cs.LG].
Croitoru, Florinel-Alin; Hondru, Vlad; Ionescu, Radu Tudor; Shah, Mubarak (2023-09-01). "Diffusion Models in Vision: A Survey". IEEE Transactions on Pattern Analysis and Machine Intelligence. 45 (9): 10850–10869. arXiv:2209.04747. doi:10.1109/TPAMI.2023.3261988. ISSN 0162-8828. PMID 37030794.
Mathematical details omitted in the article.
"Power of Diffusion Models". AstraBlog. 2022-09-25. Retrieved 2023-09-25.
Luo, Calvin (2022-08-25). "Understanding Diffusion Models: A Unified Perspective". arXiv:2208.11970 [cs.LG].
Weng, Lilian (2021-07-11). "What are Diffusion Models?". lilianweng.github.io. Retrieved 2023-09-25.
Tutorials
Nakkiran, Preetum; Bradley, Arwen; Zhou, Hattie; Advani, Madhu (2024). "Step-by-Step Diffusion: An Elementary Tutorial". arXiv:2406.08929 [cs.LG].
"Guidance: a cheat code for diffusion models". 26 May 2022. Overview of classifier guidance and classifier-free guidance, light on mathematical details.
== References == | Wikipedia/Diffusion_model |
The Latent Diffusion Model (LDM) is a diffusion model architecture developed by the CompVis (Computer Vision & Learning) group at LMU Munich.
Introduced in 2015, diffusion models (DMs) are trained with the objective of removing successive applications of noise (commonly Gaussian) on training images. The LDM is an improvement on standard DM by performing diffusion modeling in a latent space, and by allowing self-attention and cross-attention conditioning.
LDMs are widely used in practical diffusion models. For instance, Stable Diffusion versions 1.1 to 2.1 were based on the LDM architecture.
== Version history ==
Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion. It was accompanied by a software implementation in Theano.
A 2019 paper proposed the noise conditional score network (NCSN), or score-matching with Langevin dynamics (SMLD). The paper was accompanied by a software package written in PyTorch, released on GitHub.
A 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference. The paper was accompanied by a software package written in TensorFlow, released on GitHub. It was reimplemented in PyTorch by lucidrains.
On December 20, 2021, the LDM paper was published on arXiv, and both Stable Diffusion and LDM repositories were published on GitHub. However, they remained roughly the same. Substantial information concerning Stable Diffusion v1 was only added to GitHub on August 10, 2022.
All of Stable Diffusion (SD) versions 1.1 to XL were particular instantiations of the LDM architecture.
SD 1.1 to 1.4 were released by CompVis in August 2022. There is no "version 1.0". SD 1.1 was an LDM trained on the laion2B-en dataset. SD 1.1 was finetuned to 1.2 on more aesthetic images. SD 1.2 was finetuned to 1.3, 1.4 and 1.5, with 10% of text-conditioning dropped, to improve classifier-free guidance. SD 1.5 was released by RunwayML in October 2022.
== Architecture ==
While the LDM can work for generating arbitrary data conditional on arbitrary data, for concreteness, we describe its operation in conditional text-to-image generation.
LDM consists of a variational autoencoder (VAE), a modified U-Net, and a text encoder.
The VAE encoder compresses the image from pixel space to a smaller dimensional latent space, capturing a more fundamental semantic meaning of the image. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet backbone, denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space.
The denoising step can be conditioned on a string of text, an image, or another modality. The encoded conditioning data is exposed to the denoising U-Nets via a cross-attention mechanism. For conditioning on text, a fixed, pretrained CLIP ViT-L/14 text encoder is used to transform text prompts to an embedding space.
=== Variational Autoencoder ===
To compress the image data, a variational autoencoder (VAE) is first trained on a dataset of images. The encoder part of the VAE takes an image as input and outputs a lower-dimensional latent representation of the image. This latent representation is then used as input to the U-Net. Once the model is trained, the encoder is used to encode images into latent representations, and the decoder is used to decode latent representations back into images.
Let the encoder and the decoder of the VAE be $E, D$.
To encode an RGB image, its three channels are divided by the maximum value, resulting in a tensor $x$ of shape $(3, 512, 512)$ with all entries in the range $[0, 1]$. The encoded vector is $0.18215 \times E(2x - 1)$, with shape $(4, 64, 64)$, where 0.18215 is a hyperparameter that the original authors picked to whiten the encoded vector to roughly unit variance. Conversely, given a latent tensor $y$, the decoded image is $(D(y / 0.18215) + 1) / 2$, then clipped to the range $[0, 1]$.
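The encode and decode transformations are symmetric up to the final clipping; a minimal sketch, treating `E` and `D` as opaque callables:

```python
import numpy as np

SCALE = 0.18215  # chosen so the latents have roughly unit variance

def encode(E, x):
    """Map an image with entries in [0, 1] to a scaled latent:
    0.18215 * E(2x - 1)."""
    return SCALE * E(2.0 * x - 1.0)

def decode(D, y):
    """Undo the scaling, decode, and map back from [-1, 1] to [0, 1]."""
    return np.clip((D(y / SCALE) + 1.0) / 2.0, 0.0, 1.0)
```

With identity networks, `decode(D, encode(E, x))` returns `x` exactly, which is a quick sanity check that the scaling and range conversions cancel.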
In the implemented version (ldm/models/autoencoder.py), the encoder is a convolutional neural network (CNN) with a single self-attention mechanism near the end. It takes a tensor of shape $(3, H, W)$ and outputs a tensor of shape $(8, H/8, W/8)$, the concatenation of the predicted mean and variance of the latent vector, each of shape $(4, H/8, W/8)$. The variance is used in training, but after training, usually only the mean is taken, with the variance discarded.
The decoder is also a CNN with a single self-attention mechanism near the end. It takes a tensor of shape $(4, H/8, W/8)$ and outputs a tensor of shape $(3, H, W)$.
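The mean/variance split described above can be sketched as follows; the log-variance parameterization is an assumption of this sketch (it matches common VAE practice, but the exact parameterization may differ):

```python
import numpy as np

def sample_latent(enc_out, rng=None):
    """Split the encoder's (8, H/8, W/8) output into a predicted mean and
    log-variance, each (4, H/8, W/8). Sample with the reparameterization
    trick during training; at inference, return just the mean."""
    mean, logvar = enc_out[:4], enc_out[4:]
    if rng is None:
        return mean                     # variance discarded after training
    return mean + np.exp(0.5 * logvar) * rng.standard_normal(mean.shape)
```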
=== U-Net ===
The U-Net backbone takes the following kinds of inputs:
A latent image array, produced by the VAE encoder. It has dimensions $(\text{channel}, \text{width}, \text{height})$; typically $(\text{channel}, \text{width}, \text{height}) = (4, 64, 64)$.
A timestep-embedding vector, which tells the backbone how much noise there is in the image. For example, an embedding of timestep $t = 0$ would indicate that the input image is already noiseless, while $t = 100$ would mean there is much noise.
A modality-embedding vector sequence, which indicates to the backbone additional conditions for denoising. For example, in text-to-image generation, the text is divided into a sequence of tokens, then encoded by a text encoder, such as a CLIP encoder, before feeding into the backbone. As another example, an input image can be processed by a Vision Transformer into a sequence of vectors, which can then be used to condition the backbone for tasks such as generating an image in the same style.
Each run through the U-Net backbone produces a predicted noise vector. This noise vector is scaled down and subtracted away from the latent image array, resulting in a slightly less noisy latent image. The denoising is repeated according to a denoising schedule ("noise schedule"), and the output of the last step is processed by the VAE decoder into a finished image.
Similar to the standard U-Net, the U-Net backbone used in the SD 1.5 is essentially composed of down-scaling layers followed by up-scaling layers. However, the U-Net backbone has additional modules to allow for it to handle the embedding. As an illustration, we describe a single down-scaling layer in the backbone:
The latent array and the time-embedding are processed by a ResBlock:
The latent array is processed by a convolutional layer.
The time-embedding vector is processed by a one-layered feedforward network, then added to the previous array (broadcast over all pixels).
The result is processed by another convolutional layer, followed by another time-embedding addition.
The latent array and the embedding vector sequence are processed by a SpatialTransformer, which is essentially a standard pre-LN Transformer decoder without causal masking.
In the cross-attentional blocks, the latent array itself serves as the query sequence, one query vector per pixel. For example, if, at this layer in the U-Net, the latent array has dimensions $(128, 32, 32)$, then the query sequence has $1024$ vectors, each of $128$ dimensions. The embedding vector sequence serves as both the key sequence and the value sequence.
When no embedding vector sequence is input, a cross-attentional block defaults to self-attention, with the latent array serving as the query, key, and value.
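The pixel-to-query flattening and the attention pattern can be sketched as below. Real cross-attention applies learned projections to produce queries, keys, and values; this sketch omits them to show only the sequence shapes:

```python
import numpy as np

def latent_to_queries(latent):
    """Flatten a (C, H, W) latent into a query sequence of H*W vectors of
    dimension C: one query per pixel."""
    c, h, w = latent.shape
    return latent.reshape(c, h * w).T           # shape (H*W, C)

def cross_attention(q, kv):
    """Single-head scaled dot-product attention sketch; the conditioning
    sequence kv serves as both keys and values. Passing q itself as kv
    reduces this to self-attention."""
    scores = q @ kv.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ kv
```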
== Training and inference ==
The LDM is trained by using a Markov chain to gradually add noise to the training images. The model is then trained to reverse this process, starting with a noisy image and gradually removing the noise until it recovers the original image.
More specifically, the training process can be described as follows:
Forward diffusion process: Given a real image $x_0$, a sequence of latent variables $x_{1:T}$ is generated by gradually adding Gaussian noise to the image, according to a pre-determined "noise schedule".
Reverse diffusion process: Starting from a Gaussian noise sample $x_T$, the model learns to predict the noise added at each step, in order to reverse the diffusion process and obtain a reconstruction of the original image $x_0$.
The model is trained to minimize the difference between the predicted noise and the actual noise added at each step. This is typically done using a mean squared error (MSE) loss function.
Once the model is trained, it can be used to generate new images by simply running the reverse diffusion process starting from a random noise sample. The model gradually removes the noise from the sample, guided by the learned noise distribution, until it generates a final image.
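The generation loop just described can be sketched as ancestral sampling; this is a simplified form of the DDPM reverse update, with `eps_net` a hypothetical noise predictor:

```python
import numpy as np

def sample(eps_net, shape, alphas, rng):
    """Reverse diffusion sketch: start from pure Gaussian noise and apply
    the learned denoising step for t = T..1 (simplified DDPM update)."""
    abar = np.cumprod(alphas)
    x = rng.standard_normal(shape)              # x_T ~ N(0, I)
    for t in range(len(alphas) - 1, -1, -1):
        eps_hat = eps_net(x, t)
        # Posterior mean of the reverse step.
        x = (x - (1 - alphas[t]) / np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                               # no noise at the final step
            x = x + np.sqrt(1 - alphas[t]) * rng.standard_normal(shape)
    return x
```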
See the diffusion model page for details.
== See also ==
Diffusion model
Generative adversarial network
Variational autoencoder
Stable Diffusion
== References ==
== Further reading ==
Wang, Phil (2024-09-07). "lucidrains/denoising-diffusion-pytorch". GitHub. Retrieved 2024-09-07.
"The Annotated Diffusion Model". huggingface.co. Retrieved 2024-09-07.
"U-Net for Stable Diffusion". U-Net for Stable Diffusion. Retrieved 2024-08-31.
"Transformer for Stable Diffusion U-Net". Transformer for Stable Diffusion U-Net. Retrieved 2024-09-07. | Wikipedia/Latent_diffusion_model |
Dream Machine is a text-to-video model created by Luma Labs and launched in June 2024. It generates video output based on user prompts or still images. Dream Machine has been noted for its ability to realistically capture motion, while some critics have remarked upon the lack of transparency about its training data. Upon the program's release, users on social media created moving versions of various Internet memes.
== History ==
Dream Machine is a text-to-video model created by the San Francisco-based generative artificial intelligence company Luma Labs, which had previously created Genie, a 3D model generator. It was released to the public on June 12, 2024, which was announced by the company in a post on X alongside examples of videos it created. Soon after its release, users on social media posted video versions of images generated with Midjourney, as well as moving recreations of artworks such as Girl with a Pearl Earring and memes such as Doge, Picard facepalm, Success Kid, and distracted boyfriend. One video, a trailer for a fictional animated movie titled Monster Camp, was reposted by Luma Labs on their X account. Users on the platform criticized the video as stealing the aesthetic of the Monsters, Inc. franchise, also pointing out that Mike Wazowski, a character from the franchise, appears in the trailer. Another video posted by director Ellenor Argyropoulos of a Pixar-style animation of a girl in ancient Egypt created with Dream Machine went viral online.
== Capabilities ==
As of June 2024, users can create videos with Dream Machine, which are five seconds long and 1360 × 752 pixels, by signing up with their Google account and typing in a prompt or using a still image. Dream Machine alters the prompt based on its own large language model. Users can create 10 videos a day and 30 videos for free with Dream Machine. The program also offers Standard, Pro, and Premier subscription plans, which allow users to create 120, 400, and 2,000 videos, respectively. Dream Machine's website states that its videos have difficulty depicting text and motion. Luma Labs has stated that it has plans to release a developer-friendly API for Dream Machine. The week after its release, Luma Labs announced that it would be adding the ability to extend videos, a discovery feature, and in-video editing.
== Reception ==
Critics compared Dream Machine heavily to Sora, a text-to-video model created by OpenAI, and Kling, another text-to-video model, upon its release. Charles Pulliam-Moore of The Verge wrote that "bullish fans" of generative AI "were quick to call [Dream Machine] a novel innovation", but remarked upon its training data not being available to the public. Mark Wilson of TechRadar also noted that it was unclear what Dream Machine's training data was, which he said "means that its potential outside of personal use or improving your GIF game could be limited", but wrote that it was "certainly a fun tool to test drive" as "a taster of the more advanced (and no doubt more expensive) AI video generators to come". For Tom's Guide, Ryan Morrison called Dream Machine "one of the best prompt following and motion understanding AI video models yet" and "an impressive next step in generative AI video", but that "it is still falling short of what is needed". Mashable's Chase DiBenedetto described user-created Dream Machine videos circulating on social media as "eerily-moving" and "Harry Potter-esque".
== References ==
== External links ==
Official website | Wikipedia/Dream_Machine_(text-to-video_model) |
In economics, a random utility model (RUM), also called stochastic utility model, is a mathematical description of the preferences of a person, whose choices are not deterministic, but depend on a random state variable.
== Background ==
A basic assumption in classical economics is that the choices of a rational person are guided by a preference relation, which can usually be described by a utility function. When faced with several alternatives, the rational person will choose the alternative with the highest utility. The utility function is not directly observable; however, by observing the choices the person makes, we can "reverse-engineer" their utility function. This is the goal of revealed preference theory.
In practice, however, people are not rational. Ample empirical evidence shows that, when faced with the same set of alternatives, people may make different choices. To an outside observer, their choices may appear random.
One way to model this behavior is called stochastic rationality. It is assumed that each agent has an unobserved state, which can be considered a random variable. Given that state, the agent behaves rationally. In other words: each agent has, not a single preference-relation, but a distribution over preference-relations (or utility functions).
== The representation problem ==
Block and Marschak presented the following problem. Suppose we are given as input a set of choice probabilities $P_{a,B}$, describing the probability that an agent chooses alternative $a$ from the set $B$. We want to rationalize the agent's behavior by a probability distribution over preference relations. That is: we want to find a distribution such that, for all pairs $a, B$ given in the input, $P_{a,B} = \Pr[a \text{ is weakly preferred to all alternatives in } B]$. What conditions on the set of probabilities $P_{a,B}$ guarantee the existence of such a distribution?
Falmagne solved this problem for the case in which the set of alternatives is finite: he proved that a probability distribution exists iff a set of polynomials derived from the choice-probabilities, denoted Block-Marschak polynomials, are nonnegative. His solution is constructive, and provides an algorithm for computing the distribution.
Barbera and Pattanaik extend this result to settings in which the agent may choose sets of alternatives, rather than just singletons.
=== Uniqueness ===
Block and Marschak proved that, when there are at most 3 alternatives, the random utility model is unique ("identified"); however, when there are 4 or more alternatives, the model may be non-unique. For example, we can compute the probability that the agent prefers w to x (w>x), and the probability that y>z, but may not be able to know the probability that both w>x and y>z. There are even distributions with disjoint supports, which induce the same set of choice probabilities.
Some conditions for uniqueness were given by Falmagne. Turansick presents two characterizations for the existence of a unique random utility representation.
== Models ==
There are various RUMs, which differ in their assumptions on the probability distributions of the agent's utility. A popular RUM was developed by Luce and Plackett.
The Plackett-Luce model was applied in econometrics, for example, to analyze automobile prices in market equilibrium. It was also applied in machine learning and information retrieval. It was also applied in social choice, to analyze an opinion poll conducted during the Irish presidential election. Efficient methods for expectation-maximization and Expectation propagation exist for the Plackett-Luce model.
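In the Plackett-Luce model, each alternative has a positive weight, and a full ranking is generated by repeatedly choosing among the remaining alternatives with probability proportional to weight. A minimal sketch of the resulting ranking probability:

```python
def plackett_luce_prob(ranking, weights):
    """Probability of observing `ranking` (a list of item indices, best
    first) under the Plackett-Luce model with positive item weights:
    at each stage, the next item is chosen with probability proportional
    to its weight among the items not yet ranked."""
    remaining = list(ranking)
    p = 1.0
    for item in ranking:
        total = sum(weights[j] for j in remaining)
        p *= weights[item] / total
        remaining.remove(item)
    return p
```

With equal weights every ranking of $n$ items has probability $1/n!$, and the probabilities over all rankings always sum to 1.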
== Application to social choice ==
RUMs can be used not only for modeling the behavior of a single agent, but also for decision-making among a society of agents. One approach to social choice, first formalized by Condorcet's jury theorem, is that there is a "ground truth" - a true ranking of the alternatives. Each agent in society receives a noisy signal of this true ranking. The best way to approach the ground truth is using maximum likelihood estimation: construct a social ranking which maximizes the likelihood of the set of individual rankings.
Condorcet's original model assumes that the probabilities of agents' mistakes in pairwise comparisons are independent and identically distributed: all mistakes have the same probability p. This model has several drawbacks:
It ignores the strength of agents' expressed preferences. An agent who prefers a "much more than" b and an agent who prefers a "a little more than" b are treated the same.
It allows for cyclic preferences. There is a positive probability that an agent will prefer a to b, b to c, and c to a.
The maximum likelihood estimator - which is the Kemeny–Young method - is hard to compute (it is $\Theta_2^P$-complete).
RUM provides an alternative model: there is a ground-truth vector of utilities; each agent draws a utility for each alternative, based on a probability distribution whose mean value is the ground-truth. This model captures the strength of preferences, and rules out cyclic preferences. Moreover, for some common probability distributions (particularly, the Plackett-Luce model), the maximum likelihood estimators can be computed efficiently.
== Generalizations ==
Walker and Ben-Akiva generalize the classic RUM in several ways, aiming to improve the accuracy of forecasts:
Flexible Disturbances: allowing a richer covariance structure, estimating unobserved heterogeneity, and random parameters;
Latent Variables: explicitly representing the formation and effects of unseen constructs, such as perceptions and attitudes;
Latent Classes: capturing hidden segmentation in terms of taste parameters, choice sets, and decision protocols;
Combining Revealed Preferences and Stated Preferences: to combine advantages of these two data types.
Blavatzkyy studies stochastic utility theory based on choices between lotteries. The input is a set of choice probabilities, which indicate the likelihood that the agent choose one lottery over the other.
== References == | Wikipedia/Random_utility_model |
In reinforcement learning (RL), a model-free algorithm is an algorithm which does not estimate the transition probability distribution (and the reward function) associated with the Markov decision process (MDP), which, in RL, represents the problem to be solved. The transition probability distribution (or transition model) and the reward function are often collectively called the "model" of the environment (or MDP), hence the name "model-free". A model-free RL algorithm can be thought of as an "explicit" trial-and-error algorithm. Typical examples of model-free algorithms include Monte Carlo (MC) RL, SARSA, and Q-learning.
Monte Carlo estimation is a central component of many model-free RL algorithms. The MC learning algorithm is essentially an important branch of generalized policy iteration, which has two periodically alternating steps: policy evaluation (PEV) and policy improvement (PIM). In this framework, each policy is first evaluated by its corresponding value function. Then, based on the evaluation result, greedy search is performed to produce a better policy. MC estimation is mainly applied to the first step, policy evaluation. The simplest idea for judging the effectiveness of the current policy is to average the returns of all collected samples. As more experience is accumulated, the estimate converges to the true value by the law of large numbers. Hence, MC policy evaluation does not require any prior knowledge of the environment dynamics. Instead, only experience is needed (i.e., samples of state, action, and reward), which is generated from interacting with an environment (which may be real or simulated).
Value function estimation is crucial for model-free RL algorithms. Unlike MC methods, temporal difference (TD) methods learn the value function by reusing existing value estimates. TD learning can learn from an incomplete sequence of events without waiting for the final outcome, and it can approximate the future return as a function of the current state. Like MC, TD uses only experience to estimate the value function, without requiring prior knowledge of the environment dynamics. The advantage of TD is that it can update the value function based on its current estimate. Therefore, TD learning algorithms can learn from incomplete episodes or continuing tasks in a step-by-step manner, while MC must be implemented in an episode-by-episode fashion.
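A single TD(0) update illustrates the bootstrapped, step-by-step estimate; the state names and numbers here are purely illustrative:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V[s] toward the bootstrapped target
    r + gamma * V[s_next], reusing the current estimate V[s_next]
    instead of waiting for the episode to finish."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

V = {"A": 0.0, "B": 1.0}
td0_update(V, "A", 0.5, "B")  # target = 0.5 + 0.9 * 1.0 = 1.4, so V["A"] becomes 0.14
```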
== Model-free reinforcement learning algorithms ==
Model-free RL algorithms can start from a blank policy candidate and achieve superhuman performance in many complex tasks, including Atari games, StarCraft and Go. Deep neural networks are responsible for recent artificial intelligence breakthroughs, and they can be combined with RL to create superhuman agents such as Google DeepMind's AlphaGo. Mainstream model-free RL algorithms include Deep Q-Network (DQN), Dueling DQN, Double DQN (DDQN), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Asynchronous Advantage Actor-Critic (A3C), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), Soft Actor-Critic (SAC), and Distributional Soft Actor-Critic (DSAC).
== References == | Wikipedia/Model-free_(reinforcement_learning) |
The softmax function, also known as softargmax: 184 or normalized exponential function,: 198 converts a tuple of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes.
== Definition ==
The softmax function takes as input a tuple z of K real numbers, and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some tuple components could be negative, or greater than one, and might not sum to 1; but after applying softmax, each component will be in the interval (0, 1), and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities.
Formally, the standard (unit) softmax function σ : R^K → (0, 1)^K, where K > 1, takes a tuple z = (z_1, ..., z_K) ∈ R^K and computes each component of the vector σ(z) ∈ (0, 1)^K with
{\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{z_{i}}}{\sum _{j=1}^{K}e^{z_{j}}}}\,.}
In words, the softmax applies the standard exponential function to each element z_i of the input tuple z (consisting of K real numbers), and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vector σ(z) is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input tuple. For example, the standard softmax of (1, 2, 8) is approximately (0.001, 0.002, 0.997), which amounts to assigning almost all of the total unit weight in the result to the position of the tuple's maximal element (of 8).
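A minimal plain-Python sketch of the definition reproduces this example:

```python
import math

def softmax(z):
    """Standard softmax: exponentiate each component, then normalize
    so the components sum to 1."""
    exps = [math.exp(v) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([1, 2, 8])
# p is approximately [0.001, 0.002, 0.997] and sums to 1
```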
In general, instead of e a different base b > 0 can be used. As above, if b > 1 then larger input components will result in larger output probabilities, and increasing the value of b will create probability distributions that are more concentrated around the positions of the largest input values. Conversely, if 0 < b < 1 then smaller input components will result in larger output probabilities, and decreasing the value of b will create probability distributions that are more concentrated around the positions of the smallest input values. Writing
b = e^β or b = e^{−β} (for real β) yields the expressions:
{\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta z_{i}}}{\sum _{j=1}^{K}e^{\beta z_{j}}}}{\text{ or }}\sigma (\mathbf {z} )_{i}={\frac {e^{-\beta z_{i}}}{\sum _{j=1}^{K}e^{-\beta z_{j}}}}{\text{ for }}i=1,\dotsc ,K.}
A value proportional to the reciprocal of β is sometimes referred to as the temperature: β = 1/(kT), where k is typically 1 or the Boltzmann constant and T is the temperature. A higher temperature results in a more uniform output distribution (i.e. with higher entropy; it is "more random"), while a lower temperature results in a sharper output distribution, with one value dominating.
In some fields, the base is fixed, corresponding to a fixed scale, while in others the parameter β (or T) is varied.
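The effect of the temperature T = 1/β can be sketched as follows (the input tuple is the example from above):

```python
import math

def softmax_temp(z, temperature=1.0):
    """Softmax with temperature T (i.e. beta = 1/T): higher T flattens the
    distribution, lower T concentrates it on the largest inputs."""
    exps = [math.exp(v / temperature) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

sharp = softmax_temp([1, 2, 8], temperature=0.5)   # nearly all weight on the 8
flat = softmax_temp([1, 2, 8], temperature=100.0)  # close to uniform
```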
== Interpretations ==
=== Smooth arg max ===
The softmax function is a smooth approximation to the arg max function: the function whose value is the index of a tuple's largest element. The name "softmax" may be misleading: softmax is not a smooth maximum (that is, a smooth approximation to the maximum function). The term "softmax" is also used for the closely related LogSumExp function, which is a smooth maximum. For this reason, some prefer the more accurate term "softargmax", though the term "softmax" is conventional in machine learning. This section uses the term "softargmax" for clarity.
Formally, instead of considering the arg max as a function with categorical output 1, ..., n (corresponding to the index), consider the arg max function with one-hot representation of the output (assuming there is a unique maximum arg):
{\displaystyle \operatorname {arg\,max} (z_{1},\,\dots ,\,z_{n})=(y_{1},\,\dots ,\,y_{n})=(0,\,\dots ,\,0,\,1,\,0,\,\dots ,\,0),}
where the output coordinate y_i = 1 if and only if i is the arg max of (z_1, ..., z_n), meaning z_i is the unique maximum value of (z_1, ..., z_n). For example, in this encoding
{\displaystyle \operatorname {arg\,max} (1,5,10)=(0,0,1),}
since the third argument is the maximum.
This can be generalized to multiple arg max values (multiple equal z_i being the maximum) by dividing the 1 between all max args; formally 1/k where k is the number of arguments attaining the maximum. For example,
{\displaystyle \operatorname {arg\,max} (1,\,5,\,5)=(0,\,1/2,\,1/2),}
since the second and third arguments are both the maximum. In case all arguments are equal, this is simply
{\displaystyle \operatorname {arg\,max} (z,\dots ,z)=(1/n,\dots ,1/n).}
Points z with multiple arg max values are singular points (or singularities, and form the singular set) – these are the points where arg max is discontinuous (with a jump discontinuity) – while points with a single arg max are known as non-singular or regular points.
With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as β → ∞, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg max pointwise, meaning that for each fixed input z, as β → ∞,
{\displaystyle \sigma _{\beta }(\mathbf {z} )\to \operatorname {arg\,max} (\mathbf {z} ).}
However, softargmax does not converge uniformly to arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. For example,
σ_β(1, 1.0001) → (0, 1), but σ_β(1, 0.9999) → (1, 0), and σ_β(1, 1) = (1/2, 1/2) for all β: the closer the points are to the singular set (x, x), the slower they converge. However, softargmax does converge compactly on the non-singular set.
Conversely, as β → −∞, softargmax converges to arg min in the same way, where here the singular set is points with two arg min values. In the language of tropical analysis, the softmax is a deformation or "quantization" of arg max and arg min, corresponding to using the log semiring instead of the max-plus semiring (respectively min-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization".
It is also the case that, for any fixed β, if one input z_i is much larger than the others relative to the temperature T = 1/β, the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1:
{\displaystyle \sigma (0,\,10):=\sigma _{1}(0,\,10)=\left(1/\left(1+e^{10}\right),\,e^{10}/\left(1+e^{10}\right)\right)\approx (0.00005,\,0.99995)}
However, if the difference is small relative to the temperature, the value is not close to the arg max. For example, a difference of 10 is small relative to a temperature of 100:
{\displaystyle \sigma _{1/100}(0,\,10)=\left(1/\left(1+e^{1/10}\right),\,e^{1/10}/\left(1+e^{1/10}\right)\right)\approx (0.475,\,0.525).}
As β → ∞, the temperature goes to zero, T = 1/β → 0, so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation of the limit behavior.
=== Statistical mechanics ===
In statistical mechanics, the softargmax function is known as the Boltzmann distribution (or Gibbs distribution):: 7 the index set 1, ..., k labels the microstates of the system; the inputs z_i are the energies of those states; the denominator is known as the partition function, often denoted by Z; and the factor β is called the coldness (or thermodynamic beta, or inverse temperature).
== Applications ==
The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression),: 206–209 multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the jth class given a sample tuple x and a weighting vector w is:
{\displaystyle P(y=j\mid \mathbf {x} )={\frac {e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{j}}}{\sum _{k=1}^{K}e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{k}}}}}
This can be seen as the composition of K linear functions
x ↦ x^T w_1, ..., x ↦ x^T w_K and the softmax function (where x^T w denotes the inner product of x and w). The operation is equivalent to applying a linear operator defined by the weight vectors to tuples x, thus transforming the original, possibly high-dimensional, input to vectors in the K-dimensional space R^K.
=== Neural networks ===
The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression.
Since the function maps a tuple and a specific index i to a real value, the derivative needs to take the index into account:
{\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},i)(\delta _{ik}-\sigma ({\textbf {q}},k)).}
This expression is symmetric in the indices i, k and thus may also be expressed as
{\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},k)(\delta _{ik}-\sigma ({\textbf {q}},i)).}
Here, the Kronecker delta is used for simplicity (cf. the derivative of a sigmoid function, being expressed via the function itself).
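The derivative formula can be checked numerically; a sketch of the resulting Jacobian:

```python
import math

def softmax(z):
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_jacobian(z):
    """Jacobian J[i][k] = sigma_i * (delta_ik - sigma_k) of the softmax
    output with respect to the inputs."""
    sig = softmax(z)
    n = len(sig)
    return [[sig[i] * ((1.0 if i == k else 0.0) - sig[k]) for k in range(n)]
            for i in range(n)]

J = softmax_jacobian([1.0, 2.0, 3.0])
# symmetric in i, k; each row sums to 0 because the outputs always sum to 1
```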
To ensure numerically stable computation, it is common to subtract the maximum input value before exponentiation. This does not alter the output or the derivative in theory, but it improves stability by directly bounding the largest exponent that is computed.
If the function is scaled with the parameter β, then these expressions must be multiplied by β.
See multinomial logit for a probability model which uses the softmax activation function.
=== Reinforcement learning ===
In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:
{\displaystyle P_{t}(a)={\frac {\exp(q_{t}(a)/\tau )}{\sum _{i=1}^{n}\exp(q_{t}(i)/\tau )}}{\text{,}}}
where the action value q_t(a) corresponds to the expected reward of following action a and τ is called a temperature parameter (in allusion to statistical mechanics). For high temperatures (τ → ∞), all actions have nearly the same probability; the lower the temperature, the more the expected rewards affect the probabilities. For a low temperature (τ → 0+), the probability of the action with the highest expected reward tends to 1.
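A sketch of softmax action selection over estimated action values (the values and temperatures are illustrative):

```python
import math

def action_probabilities(q_values, tau=1.0):
    """Boltzmann (softmax) action selection: P(a) proportional to exp(q(a)/tau).
    The max is subtracted before exponentiation for numerical stability."""
    m = max(q_values)
    exps = [math.exp((q - m) / tau) for q in q_values]
    s = sum(exps)
    return [e / s for e in exps]

greedy_ish = action_probabilities([1.0, 2.0, 1.5], tau=0.1)  # low temperature
explore = action_probabilities([1.0, 2.0, 1.5], tau=100.0)   # high temperature
```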
== Computational complexity and remedies ==
In neural network applications, the number K of possible outcomes is often large, e.g. in the case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the z_i, followed by the application of the softmax function itself) computationally expensive. Moreover, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times.
Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax. The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forming latent variables. The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf. Ideally, when the tree is balanced, this would reduce the computational complexity from
O(K) to O(log₂ K). In practice, results depend on choosing a good strategy for clustering the outcomes into classes. A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability.
A second kind of remedy is based on approximating the softmax (during training) with modified loss functions that avoid calculating the full normalization factor. These include methods that restrict the normalization sum to a sample of outcomes (e.g. importance sampling, target sampling).
== Numerical algorithms ==
The standard softmax is numerically unstable because of large exponentiations. The safe softmax method calculates instead
{\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta (z_{i}-m)}}{\sum _{j=1}^{K}e^{\beta (z_{j}-m)}}}}
where m = max_i z_i is the largest input component. Subtracting it guarantees that each exponential evaluates to at most 1.
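A sketch of the safe softmax; the naive form overflows for inputs near 1000, while the shifted form cannot:

```python
import math

def safe_softmax(z, beta=1.0):
    """Numerically stable softmax: subtract the maximum so every exponent
    is <= 0 and every exponential is at most 1."""
    m = max(z)
    exps = [math.exp(beta * (v - m)) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

# math.exp(1000) raises OverflowError, but the shifted computation is fine:
p = safe_softmax([1000.0, 999.0])
```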
The attention mechanism in Transformers takes three arguments: a "query vector" q, a list of "key vectors" k_1, ..., k_N, and a list of "value vectors" v_1, ..., v_N, and outputs a softmax-weighted sum over the value vectors:
{\displaystyle o=\sum _{i=1}^{N}{\frac {e^{q^{T}k_{i}-m}}{\sum _{j=1}^{N}e^{q^{T}k_{j}-m}}}v_{i}}
The standard softmax method involves several loops over the inputs, which would be bottlenecked by memory bandwidth. The FlashAttention method is a communication-avoiding algorithm that fuses these operations into a single loop, increasing the arithmetic intensity. It is an online algorithm that computes the following quantities:
{\displaystyle {\begin{aligned}z_{i}&=q^{T}k_{i}&\\m_{i}&=\max(z_{1},\dots ,z_{i})&=&\max(m_{i-1},z_{i})\\l_{i}&=e^{z_{1}-m_{i}}+\dots +e^{z_{i}-m_{i}}&=&e^{m_{i-1}-m_{i}}l_{i-1}+e^{z_{i}-m_{i}}\\o_{i}&=e^{z_{1}-m_{i}}v_{1}+\dots +e^{z_{i}-m_{i}}v_{i}&=&e^{m_{i-1}-m_{i}}o_{i-1}+e^{z_{i}-m_{i}}v_{i}\end{aligned}}}
and returns o_N / l_N. In practice, FlashAttention operates over multiple queries and keys per loop iteration, in a similar way as blocked matrix multiplication. If backpropagation is needed, then the output vectors and the intermediate arrays [m_1, ..., m_N] and [l_1, ..., l_N] are cached, and during the backward pass, attention matrices are rematerialized from these, making it a form of gradient checkpointing.
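The online recurrences above can be sketched for a single query and scalar values (the blocked, multi-query vector version used in practice follows the same pattern):

```python
import math

def online_softmax_weighted_sum(scores, values):
    """One-pass softmax-weighted sum using a running max m, running
    normalizer l, and running unnormalized output o, rescaling the old
    accumulators whenever the running max increases."""
    m = float("-inf")  # running maximum of the scores seen so far
    l = 0.0            # running sum of exp(z_j - m)
    o = 0.0            # running sum of exp(z_j - m) * v_j
    for z, v in zip(scores, values):
        m_new = max(m, z)
        scale = math.exp(m - m_new) if l > 0.0 else 0.0  # rescale old terms
        w = math.exp(z - m_new)
        l = scale * l + w
        o = scale * o + w * v
        m = m_new
    return o / l  # equals sum_i softmax(scores)_i * values_i

out = online_softmax_weighted_sum([1.0, 3.0, 2.0], [10.0, 20.0, 30.0])
```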
== Mathematical properties ==
Geometrically the softmax function maps the Euclidean space R^K to the interior of the standard (K − 1)-simplex, cutting the dimension by one (the range is a (K − 1)-dimensional simplex in K-dimensional space), due to the linear constraint that all outputs sum to 1, meaning the range lies on a hyperplane.
Along the main diagonal (x, x, ..., x), softmax is just the uniform distribution on outputs, (1/n, ..., 1/n): equal scores yield equal probabilities.
More generally, softmax is invariant under translation by the same value in each coordinate: adding c = (c, ..., c) to the inputs z yields σ(z + c) = σ(z), because the translation multiplies each exponential by the same factor e^c (since e^{z_i + c} = e^{z_i} · e^c), so the ratios do not change:
{\displaystyle \sigma (\mathbf {z} +\mathbf {c} )_{j}={\frac {e^{z_{j}+c}}{\sum _{k=1}^{K}e^{z_{k}+c}}}={\frac {e^{z_{j}}\cdot e^{c}}{\sum _{k=1}^{K}e^{z_{k}}\cdot e^{c}}}=\sigma (\mathbf {z} )_{j}.}
Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation of the input scores (a choice of 0 score). One can normalize input scores by assuming that they sum to zero (subtracting the average c = (1/n) Σ z_i from each coordinate), and then the softmax takes the hyperplane of points that sum to zero, Σ z_i = 0, to the open simplex of positive values that sum to 1, Σ σ(z)_i = 1, analogously to how the exponential function takes 0 to 1 (e^0 = 1) and is positive.
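Translation invariance is easy to verify numerically (a minimal sketch):

```python
import math

def softmax(z):
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

a = softmax([1.0, 2.0, 3.0])
b = softmax([101.0, 102.0, 103.0])  # same inputs shifted by c = 100
# a and b agree up to floating-point rounding: softmax(z + c) = softmax(z)
```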
By contrast, softmax is not invariant under scaling. For instance,
{\displaystyle \sigma {\bigl (}(0,\,1){\bigr )}={\bigl (}1/(1+e),\,e/(1+e){\bigr )}}
but
{\displaystyle \sigma {\bigl (}(0,2){\bigr )}={\bigl (}1/\left(1+e^{2}\right),\,e^{2}/\left(1+e^{2}\right){\bigr )}.}
The standard logistic function is the special case for a 1-dimensional axis in 2-dimensional space, say the x-axis in the (x, y) plane. One variable is fixed at 0 (say z_2 = 0), so e^0 = 1, and the other variable can vary, denoted z_1 = x; then
{\textstyle e^{z_{1}}/\sum _{k=1}^{2}e^{z_{k}}=e^{x}/\left(e^{x}+1\right),}
the standard logistic function, and
{\textstyle e^{z_{2}}/\sum _{k=1}^{2}e^{z_{k}}=1/\left(e^{x}+1\right),}
its complement (meaning they add up to 1). The 1-dimensional input could alternatively be expressed as the line (x/2, −x/2), with outputs
{\displaystyle e^{x/2}/\left(e^{x/2}+e^{-x/2}\right)=e^{x}/\left(e^{x}+1\right)}
and
{\displaystyle e^{-x/2}/\left(e^{x/2}+e^{-x/2}\right)=1/\left(e^{x}+1\right).}
=== Gradients ===
The softmax function is also the gradient of the LogSumExp function:
{\displaystyle {\frac {\partial }{\partial z_{i}}}\operatorname {LSE} (\mathbf {z} )={\frac {\exp z_{i}}{\sum _{j=1}^{K}\exp z_{j}}}=\sigma (\mathbf {z} )_{i},\quad {\text{ for }}i=1,\dotsc ,K,\quad \mathbf {z} =(z_{1},\,\dotsc ,\,z_{K})\in \mathbb {R} ^{K},}
where the LogSumExp function is defined as
{\displaystyle \operatorname {LSE} (z_{1},\,\dots ,\,z_{n})=\log \left(\exp(z_{1})+\cdots +\exp(z_{n})\right)}.
The gradient of softmax is thus
{\displaystyle \partial _{z_{j}}\sigma _{i}=\sigma _{i}(\delta _{ij}-\sigma _{j})}.
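The gradient identity can be checked with a central finite difference (a numerical sketch; the test values are arbitrary):

```python
import math

def lse(z):
    """LogSumExp, computed stably by shifting by the maximum."""
    m = max(z)
    return m + math.log(sum(math.exp(v - m) for v in z))

def softmax(z):
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

z = [0.5, 1.5, -0.3]
h = 1e-6
grad = []
for i in range(len(z)):
    zp, zm = list(z), list(z)
    zp[i] += h
    zm[i] -= h
    grad.append((lse(zp) - lse(zm)) / (2 * h))  # d LSE / d z_i
# grad matches softmax(z) component-wise
```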
== History ==
The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper Boltzmann (1868), formalized and popularized in the influential textbook Gibbs (1902).
The use of the softmax in decision theory is credited to R. Duncan Luce,: 1 who used the axiom of independence of irrelevant alternatives in rational choice theory to deduce the softmax in Luce's choice axiom for relative preferences.
In machine learning, the term "softmax" is credited to John S. Bridle in two 1989 conference papers, Bridle (1990a):: 1 and Bridle (1990b):
We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives (e.g. pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network (e.g. weights). We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity.: 227
For any input, the outputs must all be positive and they must sum to unity. ...
Given a set of unconstrained values, V_j(x), we can ensure both conditions by using a Normalised Exponential transformation:
{\displaystyle Q_{j}(x)=\left.e^{V_{j}(x)}\right/\sum _{k}e^{V_{k}(x)}}
This transformation can be considered a multi-input generalisation of the logistic, operating on the whole output layer. It preserves the rank order of its input values, and is a differentiable generalisation of the 'winner-take-all' operation of picking the maximum value. For this reason we like to refer to it as softmax.: 213
== Example ==
With an input of (1, 2, 3, 4, 1, 2, 3), the softmax is approximately (0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175). The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: a change of temperature changes the output. When the temperature is multiplied by 10, the inputs are effectively (0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3) and the softmax is approximately (0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153). This shows that high temperatures de-emphasize the maximum value.
Computation of this example using Python code:
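A minimal plain-Python sketch of this computation:

```python
import math

def softmax(z):
    """Standard softmax of a sequence of real numbers."""
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

scores = [1, 2, 3, 4, 1, 2, 3]
print([round(p, 3) for p in softmax(scores)])
# [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]

# Multiplying the temperature by 10 effectively divides the inputs by 10:
print([round(p, 3) for p in softmax([s / 10 for s in scores])])
# [0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153]
```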
== Alternatives ==
The softmax function generates probability predictions densely distributed over its support. Other functions, such as sparsemax or α-entmax, can be used when sparse probability predictions are desired. The Gumbel-softmax reparametrization trick can also be used when sampling from a discrete distribution needs to be mimicked in a differentiable manner.
== See also ==
Softplus
Multinomial logistic regression
Dirichlet distribution – an alternative way to sample categorical distributions
Partition function
Exponential tilting – a generalization of Softmax to more general probability distributions
== Notes ==
== References == | Wikipedia/Softmax_function |
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023.
The Claude 3 family, released in March 2024, consists of three models: Haiku, optimized for speed; Sonnet, which balances capability and performance; and Opus, designed for complex reasoning tasks. These models can process both text and images, with Claude 3 Opus demonstrating enhanced capabilities in areas like mathematics, programming, and logical reasoning compared to previous versions. Claude 4, which includes Opus and Sonnet, was released in May 2025.
== Training ==
Claude models are generative pre-trained transformers. They have been pre-trained to predict the next word in large amounts of text. Then, they have been fine-tuned, notably using constitutional AI and reinforcement learning from human feedback (RLHF).
=== Constitutional AI ===
Constitutional AI is an approach developed by Anthropic for training AI systems, particularly language models like Claude, to be harmless and helpful without relying on extensive human feedback. The method, detailed in the paper "Constitutional AI: Harmlessness from AI Feedback" involves two phases: supervised learning and reinforcement learning.
In the supervised learning phase, the model generates responses to prompts, self-critiques these responses based on a set of guiding principles (a "constitution"), and revises the responses. Then the model is fine-tuned on these revised responses. For the reinforcement learning from AI feedback (RLAIF) phase, responses are generated, and an AI compares their compliance with this constitution. This dataset of AI feedback is used to train a preference model that evaluates responses based on how much they satisfy the constitution. Claude is then fine-tuned to align with this preference model. This technique is similar to RLHF, except that the comparisons used to train the preference model are AI-generated.
The constitution for Claude included 75 points, including sections from the UN Universal Declaration of Human Rights.
== Models ==
Claude is named after Claude Shannon, a pioneer in AI research.
=== Claude ===
Claude was the initial version of Anthropic's language model, released in March 2023. Claude demonstrated proficiency in various tasks but had certain limitations in coding, math, and reasoning capabilities. Anthropic partnered with companies like Notion (productivity software) and Quora (to help develop the Poe chatbot).
==== Claude Instant ====
Claude was released as two versions, Claude and Claude Instant, with Claude Instant being a faster, less expensive, and lighter version. Claude Instant has an input context length of 100,000 tokens (which corresponds to around 75,000 words).
=== Claude 2 ===
Claude 2 was the next major iteration of Claude, which was released in July 2023 and available to the general public, whereas Claude 1 was only available to selected users approved by Anthropic.
Claude 2 expanded its context window from 9,000 tokens to 100,000 tokens. Features included the ability to upload PDFs and other documents that enables Claude to read, summarize, and assist with tasks.
==== Claude 2.1 ====
Claude 2.1 doubled the number of tokens that the chatbot could handle, increasing it to a window of 200,000 tokens, which equals around 500 pages of written material.
Anthropic states that the new model is less likely to produce false statements compared to its predecessors.
==== Criticism ====
Claude 2 received criticism for its stringent ethical alignment that may reduce usability and performance. Users have been refused assistance with benign requests, for example with the system administration question "How can I kill all python processes in my ubuntu server?" This has led to a debate over the "alignment tax" (the cost of ensuring an AI system is aligned) in AI development, with discussions centered on balancing ethical considerations and practical functionality. Critics argued for user autonomy and effectiveness, while proponents stressed the importance of ethical AI.
=== Claude 3 ===
Claude 3 was released on March 4, 2024, with claims in the press release to have set new industry benchmarks across a wide range of cognitive tasks. The Claude 3 family includes three state-of-the-art models in ascending order of capability: Haiku, Sonnet, and Opus. The default version of Claude 3, Opus, has a context window of 200,000 tokens, but this is being expanded to 1 million for specific use cases.
Claude 3 drew attention for demonstrating an apparent ability to realize it is being artificially tested during needle in a haystack tests.
==== Claude 3.5 ====
On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability in which Claude was able to create code in a dedicated window in the interface and preview the rendered output in real time, such as SVG graphics or websites. Anthropic also announced that Claude 3.5 Opus would be released later that year, and added it to their models page. However, as of February 2025, Claude 3.5 Opus has not been released, and Anthropic has removed mention of it from the models page.
An "upgraded Claude 3.5 Sonnet", billed as "Claude 3.5 Sonnet (New)" in the web interface and benchmarks, was introduced on October 22, 2024, along with Claude 3.5 Haiku. A feature, "computer use," was also unveiled in public beta. This capability enables Claude 3.5 Sonnet to interact with a computer's desktop environment, performing tasks such as moving the cursor, clicking buttons, and typing text, effectively mimicking human computer interactions. This development allows the AI to autonomously execute complex, multi-step tasks across various applications.
Upon release, Anthropic claimed Claude 3.5 Haiku would remain the same price as its predecessor, Claude 3 Haiku. However, on November 4, 2024, Anthropic announced that it would be increasing the price of the model "to reflect its increase in intelligence".
==== Claude 3.7 ====
Claude 3.7 Sonnet was released on February 24, 2025. It is a hybrid AI reasoning model that allows users to choose between rapid responses and more thoughtful, step-by-step reasoning. This model integrates both capabilities into a single framework, eliminating the need for multiple models. Users can control how long the model "thinks" about a question, balancing speed and accuracy based on their needs.
Anthropic also launched a research preview of Claude Code, an agentic command line tool that enables developers to delegate coding tasks directly from their terminal.
=== Claude 4 ===
On May 22, 2025, Anthropic released two more models: Claude Sonnet 4 and Claude Opus 4. Anthropic added API features for developers: a code execution tool, a connector to its Model Context Protocol, and Files API. It classified Opus 4 as a "Level 3" model on the company's four-point safety scale, meaning they consider it so powerful that it poses "significantly higher risk."
== Features ==
In June 2024, Anthropic released the Artifacts feature, allowing users to generate and interact with code snippets and documents.
In October 2024, Anthropic released the "computer use" feature, allowing Claude to attempt to navigate computers by interpreting screen content and simulating keyboard and mouse input.
In March 2025, Anthropic added a web search feature to Claude, starting with only paying users located in the United States.
== Criticism ==
Claude uses a web crawler, ClaudeBot, to search the web for content. It has been criticized for not respecting a site's robots.txt and placing excessive load on sites.
== References ==
== External links ==
Official website | Wikipedia/Claude_(language_model) |
Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus such neural networks cannot contain feedback like negative feedback or positive feedback, where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.
== Mathematical foundations ==
=== Activation function ===
The two historically common activation functions are both sigmoids, and are described by
{\displaystyle y(v_{i})=\tanh(v_{i})~~{\textrm {and}}~~y(v_{i})=(1+e^{-v_{i}})^{-1}}.
The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here {\displaystyle y_{i}} is the output of the {\displaystyle i}th node (neuron) and {\displaystyle v_{i}} is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).
In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
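As a concrete illustration, the three activation functions mentioned above can be written in a few lines of NumPy (a minimal sketch; the function names are illustrative, not from any particular library):

```python
import numpy as np

def tanh_activation(v):
    """Hyperbolic tangent: output ranges from -1 to 1."""
    return np.tanh(v)

def logistic_activation(v):
    """Logistic sigmoid: same S-shape as tanh, but output ranges from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-v))

def relu(v):
    """Rectified linear unit, often preferred in deep learning."""
    return np.maximum(0.0, v)
```

Note that the logistic function is a shifted and rescaled tanh: `logistic_activation(v) == 0.5 * (1 + tanh_activation(v / 2))`.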
=== Learning ===
Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation.
We can represent the degree of error in an output node {\displaystyle j} in the {\displaystyle n}th data point (training example) by {\displaystyle e_{j}(n)=d_{j}(n)-y_{j}(n)}, where {\displaystyle d_{j}(n)} is the desired target value for the {\displaystyle n}th data point at node {\displaystyle j}, and {\displaystyle y_{j}(n)} is the value produced at node {\displaystyle j} when the {\displaystyle n}th data point is given as an input.
The node weights can then be adjusted based on corrections that minimize the error in the entire output for the {\displaystyle n}th data point, given by {\displaystyle {\mathcal {E}}(n)={\frac {1}{2}}\sum _{{\text{output node }}j}e_{j}^{2}(n)}.
Using gradient descent, the change in each weight {\displaystyle w_{ij}} is {\displaystyle \Delta w_{ji}(n)=-\eta {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}y_{i}(n)} where {\displaystyle y_{i}(n)} is the output of the previous neuron {\displaystyle i}, and {\displaystyle \eta } is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, {\displaystyle {\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}} denotes the partial derivative of the error {\displaystyle {\mathcal {E}}(n)} with respect to the weighted sum {\displaystyle v_{j}(n)} of the input connections of neuron {\displaystyle j}.
The derivative to be calculated depends on the induced local field {\displaystyle v_{j}}, which itself varies. It is easy to prove that for an output node this derivative can be simplified to {\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=e_{j}(n)\phi ^{\prime }(v_{j}(n))} where {\displaystyle \phi ^{\prime }} is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is
{\displaystyle -{\frac {\partial {\mathcal {E}}(n)}{\partial v_{j}(n)}}=\phi ^{\prime }(v_{j}(n))\sum _{k}-{\frac {\partial {\mathcal {E}}(n)}{\partial v_{k}(n)}}w_{kj}(n)}.
This depends on the change in weights of the {\displaystyle k}th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.
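The update rules above can be sketched for a one-hidden-layer network with logistic activations, for which the derivative simplifies to φ'(v) = y(1 − y). This is a minimal illustration under assumed shapes and learning rate, not an optimized implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, d, W1, W2, eta=0.5):
    """One gradient-descent update for a one-hidden-layer sigmoid network.

    Implements the deltas derived above:
      output node:  delta_j = e_j * phi'(v_j)
      hidden node:  delta_j = phi'(v_j) * sum_k delta_k * w_kj
    with phi'(v) = y * (1 - y) for the logistic activation.
    Returns the per-sample error E(n) = 1/2 * sum_j e_j^2.
    """
    # Forward pass
    v1 = W1 @ x                 # weighted sums of hidden nodes
    y1 = sigmoid(v1)
    v2 = W2 @ y1                # weighted sums of output nodes
    y2 = sigmoid(v2)

    # Backward pass
    e = d - y2                                  # output errors e_j(n)
    delta2 = e * y2 * (1 - y2)                  # output-node deltas
    delta1 = (W2.T @ delta2) * y1 * (1 - y1)    # hidden-node deltas (backpropagated)

    # Weight updates: Delta w_ji = eta * delta_j * y_i
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)
    return 0.5 * np.sum(e ** 2)
```

Repeated calls on the same training pair should drive the error down, mirroring the gradient-descent derivation.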
== History ==
=== Timeline ===
Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network which consists of a single weight layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data.
In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.
In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971.
In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.
In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors.
=== Linear regression ===
=== Perceptron ===
If using a threshold, i.e. a linear threshold activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function.
Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
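A minimal sketch of delta-rule training for a single threshold unit, here learning the linearly separable AND function (the threshold convention, learning rate, and epoch count are illustrative choices):

```python
import numpy as np

def train_perceptron(X, targets, epochs=20, eta=0.1):
    """Delta-rule training of a single linear threshold unit.

    Each weight is adjusted by eta * (target - output) * input, i.e. the
    error between sample output and calculated output drives the update,
    implementing a form of gradient descent.
    """
    w = np.zeros(X.shape[1] + 1)             # weights plus a bias weight
    for _ in range(epochs):
        for x, t in zip(X, targets):
            xb = np.append(x, 1.0)           # append constant bias input
            out = 1.0 if w @ xb > 0 else 0.0 # linear threshold activation
            w += eta * (t - out) * xb        # delta rule update
    return w

# Learning the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = train_perceptron(X, t)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating weight vector in finitely many updates.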
=== Multilayer perceptron ===
A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the synonym sometimes used, fully connected network (FCN)), often with a nonlinear kind of activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable.
== Other feedforward networks ==
Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
== See also ==
Hopfield network
Feed-forward
Backpropagation
Rprop
== References ==
== External links ==
Feedforward neural networks tutorial
Feedforward Neural Network: Example
Feedforward Neural Networks: An Introduction | Wikipedia/Feedforward_neural_network |
In machine learning, reinforcement learning from human feedback (RLHF) is a technique to align an intelligent agent with human preferences. It involves training a reward model to represent preferences, which can then be used to train other models through reinforcement learning.
In classical reinforcement learning, an intelligent agent's goal is to learn a function that guides its behavior, called a policy. This function is iteratively updated to maximize rewards based on the agent's task performance. However, explicitly defining a reward function that accurately approximates human preferences is challenging. Therefore, RLHF seeks to train a "reward model" directly from human feedback. The reward model is first trained in a supervised manner to predict if a response to a given prompt is good (high reward) or bad (low reward) based on ranking data collected from human annotators. This model then serves as a reward function to improve an agent's policy through an optimization algorithm like proximal policy optimization.
RLHF has applications in various domains in machine learning, including natural language processing tasks such as text summarization and conversational agents, computer vision tasks like text-to-image models, and the development of video game bots. While RLHF is an effective method of training models to act better in accordance with human preferences, it also faces challenges due to the way the human preference data is collected. Though RLHF does not require massive amounts of data to improve performance, sourcing high-quality preference data is still an expensive process. Furthermore, if the data is not carefully collected from a representative sample, the resulting model may exhibit unwanted biases.
== Background and motivation ==
Optimizing a model based on human feedback is desirable when a task is difficult to specify yet easy to judge. For example, one may want to train a model to generate safe text that is both helpful and harmless (such as lacking bias, toxicity, or otherwise harmful content). Asking humans to manually create examples of harmless and harmful text would be difficult and time-consuming. However, humans are adept at swiftly assessing and comparing the harmfulness of different AI-generated text. Therefore, a more practical objective would be to allow the model to use this type of human feedback to improve its text generation.
Despite the clear benefits of incorporating human feedback in training models, prior efforts—including some that leverage reinforcement learning—have encountered significant challenges. Most attempts were either narrow and difficult to generalize, breaking down on more complex tasks, or they faced difficulties learning from sparse (lacking specific information and relating to large amounts of text at a time) or noisy (inconsistently rewarding similar outputs) reward functions.
RLHF was not the first successful method of using human feedback for reinforcement learning, but it is one of the most widely used. The foundation for RLHF was introduced as an attempt to create a general algorithm for learning from a practical amount of human feedback. The algorithm as used today was introduced by OpenAI in a paper on enhancing text continuation or summarization based on human feedback, and it began to gain popularity when the same method was reused in their paper on InstructGPT. RLHF has also been shown to improve the robustness of RL agents and their capacity for exploration, which results in an optimization process more adept at handling uncertainty and efficiently exploring its environment in search of the highest reward.
== Collecting human feedback ==
Human feedback is commonly collected by prompting humans to rank instances of the agent's behavior. These rankings can then be used to score outputs, for example, using the Elo rating system, which is an algorithm for calculating the relative skill levels of players in a game based only on the outcome of each game. While ranking outputs is the most widely adopted form of feedback, recent research has explored other forms, such as numerical feedback, natural language feedback, and prompting for direct edits to the model's output.
One initial motivation of RLHF was that it requires relatively small amounts of comparison data to be effective. It has been shown that a small amount of data can lead to comparable results to a larger amount. In addition, increasing the amount of data tends to be less effective than proportionally increasing the size of the reward model. Nevertheless, a larger and more diverse amount of data can be crucial for tasks where it is important to avoid bias from a partially representative group of annotators.
When learning from human feedback through pairwise comparison under the Bradley–Terry–Luce model (or the Plackett–Luce model for K-wise comparisons among more than two options), the maximum likelihood estimator (MLE) for linear reward functions has been shown to converge if the comparison data is generated under a well-specified linear model. This implies that, under certain conditions, if a model is trained to decide which choices people would prefer between pairs (or groups) of choices, it will necessarily improve at predicting future preferences. This improvement is expected as long as the comparisons it learns from are based on a consistent and simple rule.
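To make the Bradley–Terry setup concrete: a linear reward can be fit to pairwise comparisons by maximizing the log-likelihood Σ log σ(r_winner − r_loser). The sketch below uses plain full-batch gradient ascent with illustrative feature vectors; it shows only the basic MLE objective, not the confidence-bound estimators discussed below:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_bradley_terry(features, comparisons, steps=500, lr=0.1):
    """MLE for a linear reward under the Bradley-Terry model.

    features: (n_items, d) matrix; the reward of item i is w @ features[i].
    comparisons: list of (winner, loser) index pairs.
    Maximizes sum over pairs of log sigma(r_winner - r_loser) by gradient
    ascent; the gradient of log sigma(w @ diff) is (1 - sigma(w @ diff)) * diff.
    """
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for win, lose in comparisons:
            diff = features[win] - features[lose]
            grad += (1.0 - sigmoid(w @ diff)) * diff
        w += lr * grad / len(comparisons)
    return w
```

With consistent comparisons, the fitted reward ranks the consistently preferred item higher, which is the convergence behaviour the linear-model result describes.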
Both offline data collection models, where the model is learning by interacting with a static dataset and updating its policy in batches, as well as online data collection models, where the model directly interacts with the dynamic environment and updates its policy immediately, have been mathematically studied proving sample complexity bounds for RLHF under different feedback models.
In the offline data collection model, when the objective is policy training, a pessimistic MLE that incorporates a lower confidence bound as the reward estimate is most effective. Moreover, when applicable, it has been shown that considering K-wise comparisons directly is asymptotically more efficient than converting them into pairwise comparisons for prediction purposes.
In the online scenario, when human feedback is collected through pairwise comparisons under the Bradley–Terry–Luce model and the objective is to minimize the algorithm's regret (the difference in performance compared to an optimal agent), it has been shown that an optimistic MLE that incorporates an upper confidence bound as the reward estimate can be used to design sample efficient algorithms (meaning that they require relatively little training data). A key challenge in RLHF when learning from pairwise (or dueling) comparisons is associated with the non-Markovian nature of its optimal policies. Unlike simpler scenarios where the optimal strategy does not require memory of past actions, in RLHF, the best course of action often depends on previous events and decisions, making the strategy inherently memory-dependent.
== Applications ==
RLHF has been applied to various domains of natural language processing (NLP), such as conversational agents, text summarization, and natural language understanding. Ordinary reinforcement learning, in which agents learn from their actions based on a predefined "reward function", is difficult to apply to NLP tasks because the rewards tend to be difficult to define or measure, especially when dealing with complex tasks that involve human values or preferences. RLHF can steer NLP models, in particular language models, to provide answers that align with human preferences with regard to such tasks by capturing their preferences beforehand in the reward model. This results in a model capable of generating more relevant responses and rejecting inappropriate or irrelevant queries. Some notable examples of RLHF-trained language models are OpenAI's ChatGPT (and its predecessor InstructGPT), DeepMind's Sparrow, Google's Gemini, and Anthropic's Claude.
In computer vision, RLHF has also been used to align text-to-image models. Studies that successfully used RLHF for this goal have noted that the use of KL regularization in RLHF, which aims to prevent the learned policy from straying too far from the unaligned model, helped to stabilize the training process by reducing overfitting to the reward model. The final image outputs from models trained with KL regularization were noted to be of significantly higher quality than those trained without. Other methods tried to incorporate the feedback through more direct training—based on maximizing the reward without the use of reinforcement learning—but conceded that an RLHF-based approach would likely perform better due to the online sample generation used in RLHF during updates as well as the aforementioned KL regularization over the prior model, which mitigates overfitting to the reward function.
RLHF was initially applied to other areas, such as the development of video game bots and tasks in simulated robotics. For example, OpenAI and DeepMind trained agents to play Atari games based on human preferences. In classical RL-based training of such bots, the reward function is simply correlated to how well the agent is performing in the game, usually using metrics like the in-game score. In comparison, in RLHF, a human is periodically presented with two clips of the agent's behavior in the game and must decide which one looks better. This approach can teach agents to perform at a competitive level without ever having access to their score. In fact, it was shown that RLHF can sometimes lead to superior performance over RL with score metrics because the human's preferences can contain more useful information than performance-based metrics. The agents achieved strong performance in many of the environments tested, often surpassing human performance.
== Training ==
In RLHF, two different models are trained: a reward model and a reinforcement learning (RL) policy. The reward model learns to determine what behavior is desirable based on human feedback, while the policy is guided by the reward model to determine the agent's actions. Both models are commonly initialized using a pre-trained autoregressive language model. This model is then customarily trained in a supervised manner on a relatively small dataset of pairs of prompts to an assistant and their accompanying responses, written by human annotators.
=== Reward model ===
The reward model is usually initialized with a pre-trained model, as this initializes it with an understanding of language and focuses training explicitly on learning human preferences. In addition to being used to initialize the reward model and the RL policy, the model is then also used to sample data to be compared by annotators.
The reward model is then trained by replacing the final layer of the previous model with a randomly initialized regression head. This change shifts the model from its original classification task over its vocabulary to simply outputting a number corresponding to the score of any given prompt and response. This model is trained on the human preference comparison data collected earlier from the supervised model. In particular, it is trained to minimize the following cross-entropy loss function:
{\displaystyle {\mathcal {L}}(\theta )=-{\frac {1}{K \choose 2}}E_{(x,y_{w},y_{l})}[\log(\sigma (r_{\theta }(x,y_{w})-r_{\theta }(x,y_{l})))]=-{\frac {1}{K \choose 2}}E_{(x,y_{w},y_{l})}\log \left[{\frac {e^{r_{\theta }(x,y_{w})}}{e^{r_{\theta }(x,y_{w})}+e^{r_{\theta }(x,y_{l})}}}\right]}
where {\displaystyle K} is the number of responses the labelers ranked, {\displaystyle r_{\theta }(x,y)} is the output of the reward model for prompt {\displaystyle x} and completion {\displaystyle y}, {\displaystyle y_{w}} is the preferred completion over {\displaystyle y_{l}}, {\displaystyle \sigma (x)} denotes the sigmoid function, and {\displaystyle E[X]} denotes the expected value. This can be thought of as a form of logistic regression, where the model predicts the probability that a response {\displaystyle y_{w}} is preferred over {\displaystyle y_{l}}.
This loss function essentially measures the difference between the reward model's predictions and the decisions made by humans. The goal is to make the model's guesses as close as possible to the humans' preferences by minimizing the difference measured by this equation. In the case of only pairwise comparisons, {\displaystyle K=2}, so the factor {\displaystyle 1/{\tbinom {K}{2}}=1}. In general, all {\displaystyle {\tbinom {K}{2}}} comparisons from each prompt are used for training as a single batch.
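For each individual pair, the loss reduces to −log σ(r_θ(x, y_w) − r_θ(x, y_l)), averaged over all K-choose-2 pairs of a ranking. A minimal NumPy sketch (function names and the dict-based score lookup are illustrative):

```python
import numpy as np

def pairwise_reward_loss(r_preferred, r_rejected):
    """Cross-entropy loss for one comparison:
    -log sigma(r_theta(x, y_w) - r_theta(x, y_l))."""
    return -np.log(1.0 / (1.0 + np.exp(-(r_preferred - r_rejected))))

def batch_loss(scores, ranking):
    """Average the pairwise loss over all K-choose-2 pairs implied by a
    ranking of K responses (ordered best to worst), treated as one batch."""
    losses = []
    for i in range(len(ranking)):
        for j in range(i + 1, len(ranking)):
            losses.append(pairwise_reward_loss(scores[ranking[i]],
                                               scores[ranking[j]]))
    return sum(losses) / len(losses)
```

When the two rewards are equal, the loss is log 2 (the model assigns probability 1/2 to either ordering); it shrinks as the reward margin between preferred and rejected completions grows.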
After training, the outputs of the model are normalized such that the reference completions have a mean score of 0. That is, {\textstyle \sum _{y}r_{\theta }(x,y)=0} for each query and reference pair {\displaystyle (x,y)}, by calculating the mean reward across the training dataset and setting it as the bias in the reward head.
=== Policy ===
Similarly to the reward model, the human feedback policy is also initialized from a pre-trained model.
The key is to understand language generation as if it is a game to be learned by RL. In RL, a policy is a function that maps a game state to a game action. In RLHF, the "game" is the game of replying to prompts. A prompt is a game state, and a response is a game action. This is a fairly trivial kind of game, since every game lasts for exactly one step. Nevertheless, it is a game, and so RL algorithms can be applied to it.
The first step in its training is supervised fine-tuning (SFT). This step does not require the reward model. Instead, the pre-trained model is trained on a dataset {\displaystyle D_{SFT}} that contains prompt-response pairs {\displaystyle (x,y)}. Then, during SFT, the model is trained to auto-regressively generate the corresponding response {\displaystyle y} when given a random prompt {\displaystyle x}. The original paper recommends running SFT for only one epoch, since more than that causes overfitting. The dataset {\displaystyle D_{SFT}} is usually written by human contractors, who write both the prompts and responses.
The second step applies a policy gradient method using the reward model. It uses a dataset {\displaystyle D_{RL}}, which contains prompts, but not responses. Like most policy gradient methods, this algorithm has an outer loop and two inner loops:

Initialize the policy {\displaystyle \pi _{\phi }^{RL}} to {\displaystyle \pi ^{SFT}}, the policy output from SFT.
Loop for many steps:
Initialize a new empty dataset {\displaystyle D_{\pi _{\phi }^{RL}}}.
Loop for many steps:
Sample a random prompt {\displaystyle x} from {\displaystyle D_{RL}}.
Generate a response {\displaystyle y} from the policy {\displaystyle \pi _{\phi }^{RL}}.
Calculate the reward signal {\displaystyle r_{\theta }(x,y)} from the reward model {\displaystyle r_{\theta }}.
Add the triple {\displaystyle (x,y,r_{\theta }(x,y))} to {\displaystyle D_{\pi _{\phi }^{RL}}}.
Update {\displaystyle \phi } by a policy gradient method to increase the objective function
{\displaystyle {\text{objective}}(\phi )=E_{(x,y)\sim D_{\pi _{\phi }^{\text{RL}}}}\left[r_{\theta }(x,y)-\beta \log \left({\frac {\pi _{\phi }^{\text{RL}}(y|x)}{\pi ^{\text{SFT}}(y|x)}}\right)\right]}
Note that {\displaystyle (x,y)\sim D_{\pi _{\phi }^{\text{RL}}}} is equivalent to {\displaystyle x\sim D_{RL},y\sim \pi _{\phi }^{\text{RL}}(\cdot |x)}, which means "sample a prompt from {\displaystyle D_{RL}}, then sample a response from the policy".
The objective function has two parts. The first part is simply the expected reward {\displaystyle E[r]}, and is standard for any RL algorithm. The second part is a "penalty term" involving the KL divergence. The strength of the penalty term is determined by the hyperparameter {\displaystyle \beta }.
This KL term works by penalizing the KL divergence (a measure of statistical distance between distributions) between the model being fine-tuned and the initial supervised model. By choosing an appropriate {\displaystyle \beta }, the training can balance learning from new data while retaining useful information from the initial model, increasing generalization by avoiding fitting too closely to the new data. Aside from preventing the new model from producing outputs too dissimilar to those of the initial model, a second motivation of including the KL term is to encourage the model to output high-entropy text, so as to prevent the model from collapsing to a small number of canned responses.
In simpler terms, the objective function calculates how well the policy's responses are expected to align with human feedback. The policy generates responses to prompts, and each response is evaluated both on how well it matches human preferences (as measured by the reward model) and how similar it is to responses the model would naturally generate. The goal is to balance improving alignment with human preferences while ensuring the model's responses remain diverse and not too far removed from what it has learned during its initial training. This helps the model not only to provide answers that people find useful or agreeable but also to maintain a broad understanding and avoid overly narrow or repetitive responses.
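The per-sample objective above can be estimated by Monte Carlo over sampled responses. A minimal sketch, assuming the rewards and log-probabilities have already been computed by the reward model and the two policies (the β value shown is an arbitrary illustration):

```python
def rlhf_objective(reward, logp_rl, logp_sft, beta=0.02):
    """Per-sample objective: r_theta(x, y) - beta * log(pi_RL(y|x) / pi_SFT(y|x)).

    logp_rl and logp_sft are the log-probabilities of the sampled response
    under the current policy and the frozen SFT policy; their difference is
    the per-sample estimate of the KL penalty, scaled by beta.
    """
    return reward - beta * (logp_rl - logp_sft)

def expected_objective(rewards, logps_rl, logps_sft, beta=0.02):
    """Monte Carlo estimate of the expectation over sampled (x, y) pairs."""
    values = [rlhf_objective(r, a, b, beta)
              for r, a, b in zip(rewards, logps_rl, logps_sft)]
    return sum(values) / len(values)
```

When the fine-tuned policy matches the SFT policy exactly, the penalty vanishes and the objective reduces to the mean reward.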
=== Proximal policy optimization ===
The policy function is usually trained by the proximal policy optimization (PPO) algorithm. That is, the parameter {\displaystyle \phi } is trained by gradient ascent on the clipped surrogate function.
Classically, the PPO algorithm employs generalized advantage estimation, which means that there is an extra value estimator {\displaystyle V_{\xi _{t}}(x)} that updates concurrently with the policy {\displaystyle \pi _{\phi _{t}}^{RL}} during PPO training:
{\displaystyle \pi _{\phi _{t}}^{RL},V_{\xi _{t}},\pi _{\phi _{t+1}}^{RL},V_{\xi _{t+1}},\dots }. The value estimator is used only during training, and not outside of training.
PPO uses gradient ascent on the following clipped surrogate advantage:
{\displaystyle L_{\text{PPO}}(\phi ):=E_{x\sim D_{\text{RL}},y\sim \pi _{\phi _{t}}(y|x)}\left[\min \left({\frac {\pi _{\phi }^{RL}(y|x)}{\pi _{\phi _{t}}^{RL}(y|x)}}A(x,y),\mathrm {clip} \left({\frac {\pi _{\phi }^{RL}(y|x)}{\pi _{\phi _{t}}^{RL}(y|x)}},1-\epsilon ,1+\epsilon \right)A(x,y)\right)\right]}
where the advantage term {\displaystyle A(x,y)} is defined as {\displaystyle r_{\theta }(x,y)-V_{\xi _{t}}(x)}, that is, the difference between the reward {\displaystyle r_{\theta }(x,y)} and the value estimate {\displaystyle V_{\xi _{t}}(x)} (the expected return under the current policy). The policy is trained by gradient ascent on this objective, usually using a standard momentum-based gradient optimizer such as the Adam optimizer.
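The per-sample term inside the expectation above is short enough to sketch directly (an illustrative sketch, not the implementation from any particular paper; the epsilon default of 0.2 is an assumption):

```python
def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """Clipped surrogate term for one (prompt, response) sample.

    ratio     -- pi_phi(y|x) / pi_phi_t(y|x), the new-to-old probability ratio
    advantage -- A(x, y) = r_theta(x, y) - V_xi_t(x)
    epsilon   -- clipping range (0.2 is a common default, assumed here)
    """
    clipped = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    # Taking the min yields a pessimistic bound: once the ratio leaves
    # [1 - epsilon, 1 + epsilon], a favorable advantage is no longer rewarded,
    # which discourages overly large policy updates.
    return min(ratio * advantage, clipped * advantage)
```

Averaging this term over sampled prompts and responses gives the surrogate that gradient ascent maximizes.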
The original paper initialized the value estimator from the trained reward model. Since PPO is an actor-critic algorithm, the value estimator is updated concurrently with the policy, via minimizing the squared TD-error, which in this case equals the squared advantage term:
{\displaystyle L_{\text{TD}}(\xi )=\mathbb {E} _{(x,y)\sim D_{\pi _{\phi _{t}}^{\text{RL}}}}\left[\left(r_{\theta }(x,y)-\beta \log \left({\frac {\pi _{\phi _{t}}^{\text{RL}}(y|x)}{\pi ^{\text{SFT}}(y|x)}}\right)-V_{\xi }(x)\right)^{2}\right]}
which is minimized by gradient descent. Methods other than the squared TD-error may also be used; see the actor-critic algorithm page for details.
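A per-sample version of this squared TD-error can be sketched as follows (a minimal illustration of the regression target, not production training code):

```python
def squared_td_error(reward, beta, logp_rl, logp_sft, value_estimate):
    """Per-sample value-estimator loss, fitting V_xi(x) to the KL-penalized reward.

    The regression target is r_theta(x, y) - beta * log(pi_RL(y|x) / pi_SFT(y|x)),
    written here with log-probabilities to avoid numerical underflow.
    """
    kl_penalty = beta * (logp_rl - logp_sft)
    target = reward - kl_penalty
    return (target - value_estimate) ** 2
```

When the value estimate already matches the penalized reward, the loss is zero; otherwise gradient descent moves the estimator toward the target.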
=== Mixing pretraining gradients ===
A third term is commonly added to the objective function to prevent catastrophic forgetting. For example, if the model were trained only on customer-service data, it might forget general knowledge in, say, geography. To prevent this, the RLHF process incorporates the original language modeling objective: some random texts {\displaystyle x} are sampled from the original pretraining dataset {\displaystyle D_{\text{pretrain}}}, and the model is trained to maximize the log-likelihood of the text, {\displaystyle \log(\pi _{\phi }^{RL}(x))}. The final objective function is written as:
{\displaystyle L(\phi )=E_{(x,y)\sim D_{\pi _{\phi }^{\text{RL}}}}\left[r_{\theta }(x,y)-\beta \log \left({\frac {\pi _{\phi }^{\text{RL}}(y|x)}{\pi ^{\text{SFT}}(y|x)}}\right)\right]+\gamma E_{x\sim D_{\text{pretrain}}}[\log(\pi _{\phi }^{\text{RL}}(x))]}
where {\displaystyle \gamma } controls the strength of this pretraining term. This combined objective function is called PPO-ptx, where "ptx" stands for "mixing pretraining gradients". It was first used in the InstructGPT paper.
In total, this objective function defines the method for adjusting the RL policy, blending the aim of aligning with human feedback and maintaining the model's original language understanding.
So, writing out fully explicitly, the PPO-ptx objective function is:
{\displaystyle L_{\text{PPO-ptx}}(\phi ):=E_{(x,y)\sim D_{\pi _{\phi _{t}}^{\text{RL}}}}\left[\min \left({\frac {\pi _{\phi }^{RL}(y|x)}{\pi _{\phi _{t}}^{RL}(y|x)}}A(x,y),\mathrm {clip} \left({\frac {\pi _{\phi }^{RL}(y|x)}{\pi _{\phi _{t}}^{RL}(y|x)}},1-\epsilon ,1+\epsilon \right)A(x,y)\right)-\beta \log \left({\frac {\pi _{\phi }^{\text{RL}}(y|x)}{\pi ^{\text{SFT}}(y|x)}}\right)\right]+\gamma E_{x\sim D_{\text{pretrain}}}[\log(\pi _{\phi }^{\text{RL}}(x))]}
which is optimized by gradient ascent.
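At batch level, the three ingredients of the PPO-ptx objective combine as a simple weighted sum (a hedged sketch: the per-term computations are stubbed out as plain numbers, and the gamma default is an arbitrary illustrative value, not InstructGPT's setting):

```python
def ppo_ptx_objective(clipped_surrogate, kl_penalty, pretrain_logp, gamma=0.5):
    """Combine the per-batch PPO-ptx terms.

    clipped_surrogate -- average clipped surrogate advantage over RL samples
    kl_penalty        -- average beta * log(pi_RL(y|x) / pi_SFT(y|x)) over RL samples
    pretrain_logp     -- average log pi_RL(x) over pretraining texts
    gamma             -- weight of the pretraining term (illustrative value)
    """
    # The KL penalty is subtracted, pulling the policy back toward pi_SFT;
    # the gamma-weighted pretraining log-likelihood counters forgetting.
    return clipped_surrogate - kl_penalty + gamma * pretrain_logp
```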
== Limitations ==
RLHF suffers from challenges with collecting human feedback, learning a reward model, and optimizing the policy. Compared to data collection for techniques like unsupervised or self-supervised learning, collecting data for RLHF is less scalable and more expensive. Its quality and consistency may vary depending on the task, interface, and the preferences and biases of individual humans.
The effectiveness of RLHF depends on the quality of human feedback. For instance, the model may become biased, favoring certain groups over others, if the feedback lacks impartiality, is inconsistent, or is incorrect. There is a risk of overfitting, where the model memorizes specific feedback examples instead of learning to generalize. For instance, feedback predominantly from a specific demographic might lead the model to learn peculiarities or noise, along with the intended alignment. Excessive alignment to the specific feedback it received (that is, to the bias therein) can lead to the model performing sub-optimally in new contexts or when used by different groups. A single reward function cannot always represent the opinions of diverse groups of people. Even with a representative sample, conflicting views and preferences may result in the reward model favoring the majority's opinion, potentially disadvantaging underrepresented groups.
In some cases, as is possible in regular reinforcement learning, there may be a risk of the model learning to manipulate the feedback process or game the system to achieve higher rewards rather than genuinely improving its performance. In the case of RLHF, a model may learn to exploit the fact that it is rewarded for what is evaluated positively and not necessarily for what is actually good, which can lead to it learning to persuade and manipulate. For example, models might learn that apparent confidence, even if inaccurate, garners higher rewards. Such behavior, if unchecked, is not just incentivized but can cause significant deployment issues due to the model's potential to mislead. Studies have found that humans are not skilled at identifying mistakes in LLM outputs in complex tasks; therefore, models learning to generate confident-sounding yet incorrect text can lead to significant issues when deployed.
== Alternatives ==
=== Reinforcement learning from AI feedback ===
Similarly to RLHF, reinforcement learning from AI feedback (RLAIF) relies on training a preference model, except that the feedback is automatically generated. This is notably used in Anthropic's constitutional AI, where the AI feedback is based on the conformance to the principles of a constitution.
=== Direct alignment algorithms ===
Direct alignment algorithms (DAA) have been proposed as a new class of algorithms that seek to directly optimize large language models (LLMs) on human feedback data in a supervised manner instead of the traditional policy-gradient methods.
These algorithms aim to align models with human intent more transparently by removing the intermediate step of training a separate reward model. Instead of first predicting human preferences and then optimizing against those predictions, direct alignment methods train models end-to-end on human-labeled or curated outputs. This reduces potential misalignment risks introduced by proxy objectives or reward hacking.
By directly optimizing for the behavior preferred by humans, these approaches often enable tighter alignment with human values, improved interpretability, and simpler training pipelines compared to RLHF.
==== Direct preference optimization ====
Direct preference optimization (DPO) is a technique to learn human preferences. Like RLHF, it has been applied to align pre-trained large language models using human-generated preference data. Unlike RLHF, however, which first trains a separate intermediate model to understand what good outcomes look like and then teaches the main model how to achieve those outcomes, DPO simplifies the process by directly adjusting the main model according to people's preferences. It uses a change of variables to define the "preference loss" directly as a function of the policy and uses this loss to fine-tune the model, helping it understand and prioritize human preferences without needing a separate step. Essentially, this approach directly shapes the model's decisions based on positive or negative human feedback.
Recall, the pipeline of RLHF is as follows:
We begin by gathering a human preference dataset {\displaystyle D}.
We then fit a reward model {\displaystyle r^{*}} to the data by maximum likelihood estimation using the Plackett–Luce model:
{\displaystyle r^{*}=\arg \max _{r}\mathbb {E} _{(x,y_{1},\dots ,y_{N})\sim D}\left[\ln \prod _{k=1}^{N}{\frac {e^{r(x,y_{k})}}{\sum _{i=k}^{N}e^{r(x,y_{i})}}}\right]}
We finally train an optimal policy {\displaystyle \pi ^{*}} that maximizes the objective function:
{\displaystyle \pi ^{*}=\arg \max _{\pi ^{\text{RL}}}\mathbb {E} _{(x,y)\sim D_{\pi ^{\text{RL}}}}\left[r^{*}(x,y)-\beta \log \left({\frac {\pi ^{\text{RL}}(y|x)}{\pi ^{\text{SFT}}(y|x)}}\right)\right]}
However, instead of doing the intermediate step of the reward model, DPO directly optimizes for the final policy.
First, solve directly for the optimal policy, which can be done by Lagrange multipliers, as usual in statistical mechanics:
{\displaystyle \pi ^{*}(y|x)={\frac {\pi ^{\text{SFT}}(y|x)\exp(r^{*}(x,y)/\beta )}{Z(x)}},}
where {\displaystyle Z(x)} is the partition function. This is unfortunately not tractable, since it requires summing over all possible responses:
{\displaystyle Z(x)=\sum _{y}\pi ^{\text{SFT}}(y|x)\exp(r^{*}(x,y)/\beta )=\mathbb {E} _{y\sim \pi ^{\text{SFT}}(\cdot |x)}[\exp(r^{*}(x,y)/\beta )]}
Next, invert this relationship to express the reward implicitly in terms of the optimal policy:
{\displaystyle r^{*}(x,y)=\beta \log {\frac {\pi ^{*}(y|x)}{\pi ^{\text{SFT}}(y|x)}}+\beta \log Z(x).}
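This change of variables can be checked numerically on a toy distribution (all numbers below are arbitrary illustrative values): the constant {\displaystyle \beta \log Z(x)} shifts every implied reward equally, so pairwise reward differences are preserved.

```python
import math

beta = 2.0
pi_sft = [0.5, 0.3, 0.2]   # toy reference policy over three responses
r = [1.0, -0.5, 0.25]      # arbitrary toy rewards

# Optimal policy: pi*(y|x) proportional to pi_SFT(y|x) * exp(r(x,y)/beta)
weights = [p * math.exp(ri / beta) for p, ri in zip(pi_sft, r)]
Z = sum(weights)                        # partition function
pi_star = [w / Z for w in weights]

# The implied reward beta * log(pi*/pi_SFT) equals r up to the constant
# beta * log(Z), so differences between responses are preserved:
implied = [beta * math.log(ps / p) for ps, p in zip(pi_star, pi_sft)]
print(abs((implied[0] - implied[1]) - (r[0] - r[1])))  # ~0, up to float error
```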
Finally, plugging it back into the maximum likelihood estimator, we obtain:
{\displaystyle \pi ^{*}=\arg \max _{\pi }\mathbb {E} _{(x,y_{1},\dots ,y_{N})\sim D}\left[\ln \prod _{k=1}^{N}{\frac {e^{\beta \log {\frac {\pi (y_{k}|x)}{\pi ^{\text{SFT}}(y_{k}|x)}}}}{\sum _{i=k}^{N}e^{\beta \log {\frac {\pi (y_{i}|x)}{\pi ^{\text{SFT}}(y_{i}|x)}}}}}\right]}
Usually, DPO is used for modeling human preference in pairwise comparisons, so that {\displaystyle N=2}. In that case, we have
{\displaystyle \pi ^{*}=\arg \max _{\pi }\mathbb {E} _{(x,y_{w},y_{l})\sim D}\left[\log \sigma \left(\beta \log {\frac {\pi (y_{w}|x)}{\pi ^{\text{SFT}}(y_{w}|x)}}-\beta \log {\frac {\pi (y_{l}|x)}{\pi ^{\text{SFT}}(y_{l}|x)}}\right)\right]}
DPO eliminates the need for a separate reward model or reinforcement learning loop, treating alignment as a supervised learning problem over preference data. This is simpler to implement and train than RLHF and has been shown to produce comparable and sometimes superior results. Nevertheless, RLHF has also been shown to beat DPO on some datasets, for example, on benchmarks that attempt to measure truthfulness. Therefore, the choice of method may vary depending on the features of the human preference data and the nature of the task.
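For the pairwise case, the resulting loss is short enough to write out directly (a sketch assuming per-response log-probabilities are available; the beta default of 0.1 is illustrative, not prescribed):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Negative pairwise DPO objective for one preference pair.

    logp_w, logp_l         -- log-probabilities of the chosen / rejected response
                              under the policy being trained
    ref_logp_w, ref_logp_l -- the same quantities under the frozen pi_SFT
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Maximizing log(sigmoid(margin)) is minimizing this negative log-likelihood.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy equals the reference, the margin is zero and the loss is log 2; raising the chosen response's likelihood relative to the reference drives the loss down.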
==== Identity preference optimization ====
Identity preference optimization (IPO) is a modification to the original DPO objective that introduces a regularization term to reduce the chance of overfitting. It remains robust to overtraining by assuming noise in the preference data.
IPO first applies a non-linear mapping {\displaystyle \Psi (q)=\log(q/(1-q))} over the probability distribution of preferences, instead of the Bradley–Terry assumption, to soften the probability of preferences and smooth the labels. Here, {\displaystyle \Psi (q)} denotes the {\displaystyle \Psi } preference objective, separate from the policy objective. This helps avoid the overfitting issue of the assumption that pairwise preferences can be substituted for point-wise rewards, which weakens the KL regularization by heavily skewing the preference distribution.
As with DPO, IPO is also formulated as an offline learning objective learned over a human preference dataset {\displaystyle D}. In particular, IPO introduces a new objective by applying a mapping {\displaystyle \Psi } over the preference probability distribution. Practically, {\displaystyle \Psi } is taken as the identity mapping, which results in IPO. Hence, IPO also directly optimizes for the final policy from the preference dataset and bypasses the reward modeling stage by the following objective:
{\displaystyle \max _{\pi _{\theta }}\mathbb {E} [\Psi (p^{*}(y_{w}\succ y_{l}|x))]-\beta D_{KL}(\pi _{\theta }||\pi _{\text{ref}})}
where {\displaystyle p^{*}(y_{w}\succ y_{l}|x)} is the preference distribution of the chosen responses {\displaystyle y_{w}} over the rejected responses {\displaystyle y_{l}}. However, since {\displaystyle p^{*}} is not observed directly, we sample from a Bernoulli distribution from the offline preference dataset as:
{\displaystyle p^{*}(y\succ y'|x)=\mathbb {E} _{h}[I\{h{\text{ prefers }}y{\text{ to }}y'{\text{ given }}x\}]}
To solve this objective, IPO minimizes the quadratic loss function:
{\displaystyle {\begin{aligned}{\text{Minimize }}&\mathbb {E} _{(x,y_{w},y_{l})\sim D}\left[(h_{\pi }(x,y_{w},y_{l})-I(y_{w},y_{l}))^{2}\right]\\&=\mathbb {E} _{(x,y_{w},y_{l})\sim D}\left[Ih_{\pi }(x,y_{w},y_{l})-(1-I)h_{\pi }(x,y_{l},y_{w})-{\frac {1}{2}}\beta ^{-1}\right]^{2}\\&=\mathbb {E} _{(x,y_{w},y_{l})\sim D}\left[h_{\pi }(x,y_{w},y_{l})-{\frac {1}{2}}\beta ^{-1}\right]^{2}\end{aligned}}}
where {\displaystyle h_{\pi }(x,y_{w},y_{l})=\log \left({\frac {\pi _{\theta }(y_{w}|x)}{\pi _{\text{ref}}(y_{w}|x)}}\right)-\log \left({\frac {\pi _{\theta }(y_{l}|x)}{\pi _{\text{ref}}(y_{l}|x)}}\right)}
and {\displaystyle I(y_{w},y_{l})} is a function drawn from the Bernoulli distribution from the preference dataset. Here, {\displaystyle I(y,y')} is 1 if {\displaystyle y} is preferred to {\displaystyle y'}, which happens with probability {\displaystyle p^{*}(y\succ y')}, and 0 otherwise. As such, the simplification of the expression follows directly from exploiting the symmetry of {\displaystyle y} and {\displaystyle y'} from the Bernoulli for each datapoint {\displaystyle (y_{w},y_{l})_{i}\sim D}. In particular, this symmetry can be represented as {\displaystyle (y,y',I(y,y'))=(y_{w,i},y_{l,i},1)} and {\displaystyle (y,y',I(y,y'))=(y_{l,i},y_{w,i},0)} with {\displaystyle \mathbb {E} _{y}[p_{y}]={\frac {1}{2}}} and {\displaystyle \mathbb {E} [I(y,y')]=p_{y}}.
In summary, IPO can control the gap between the log-likelihood ratios of the policy model and the reference model by always regularizing the solution towards the reference model. It allows learning directly from preferences without a reward modeling stage and without relying on the Bradley–Terry assumption that pairwise preferences can be substituted with pointwise rewards. Thus, it avoids overfitting to the preference dataset, especially when preferences are near deterministic and the KL term fails.
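With the identity mapping, the per-pair regression loss is compact (a sketch under the same log-probability assumptions as before; beta=0.1 is an illustrative value):

```python
def ipo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair IPO loss: regress the log-likelihood-ratio gap to 1/(2*beta).

    h_pi is the gap between the policy-to-reference log-ratios of the chosen
    and rejected responses; unlike DPO's sigmoid objective, the quadratic
    target keeps the gap bounded even for near-deterministic preferences.
    """
    h_pi = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return (h_pi - 1.0 / (2.0 * beta)) ** 2
```

Because the target 1/(2*beta) is finite, the optimum does not push the chosen response's likelihood ratio to infinity, which is the regularizing effect described above.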
==== Kahneman-Tversky optimization ====
Kahneman-Tversky optimization (KTO) is another direct alignment algorithm drawing from prospect theory to model uncertainty in human decisions that may not maximize the expected value.
In general, KTO seeks to optimize a class of new loss functions proposed as “human-aware losses” (HALOs), formulated under prospect theory to model the “human value” of a query–response pair {\textstyle (x,y)} as {\displaystyle v(r_{\theta }(x,y)-E_{Q}[r_{\theta }(x,y')])}.
. A function is defined as a human-aware loss for the value described by the general HALO objective:
{\displaystyle f(\pi _{\theta },\pi _{\text{ref}})=\mathbb {E} _{x,y\sim D}[a_{x,y}v{\Bigl (}r_{\theta }(x,y)\;-\;\underbrace {E_{y'\sim Q}[\,r_{\theta }(x,y')\,]} _{\text{reference point}}{\Bigr )}]+C_{D}}
where {\displaystyle D} is the preference data, {\displaystyle C_{D}} is some constant relevant to the dataset, and {\displaystyle Q} is some distribution representing the baseline or “reference”. Each training example carries a label {\displaystyle a_{x,y}\in \{+1,-1\}}: +1 if the example is desirable (its reward should be pushed up) and −1 if it is undesirable (its reward should be pushed down). Unlike previous definitions of the reward, KTO defines {\displaystyle r_{\theta }(x,y)} as the “implied reward”, given by the log-likelihood ratio between the policy model and the reference model, {\displaystyle \log \left({\frac {\pi _{\theta }(y|x)}{\pi _{\text{ref}}(y|x)}}\right)}. Here, the value function {\displaystyle v} is a non-linear (typically concave) function that mimics human loss aversion and risk aversion. As opposed to previous preference optimization algorithms, the motivation of KTO lies in maximizing the utility of model outputs from a human perspective rather than maximizing the likelihood of a “better” label (chosen vs. rejected responses). Hence, it constructs a more relaxed generalization to preference distributions by requiring only a binary feedback signal {\displaystyle a_{x,y}} instead of explicit preference pairs. For each example {\displaystyle (x,y)} in the dataset {\displaystyle D}, KTO explicitly optimizes the HALO objective as:
{\displaystyle \pi _{\theta }^{*}\;=\;\arg \max _{\pi _{\theta }}\;\;\mathbb {E} _{(x,y)\,\sim \,D}{\Bigl [}\gamma _{y}\;-\;v(x,y){\Bigr ]}}
where {\displaystyle \gamma _{y}} is a class-specific constant (e.g., {\displaystyle \gamma _{y}=\lambda _{D}{\text{ or }}\lambda _{U}}) controlling how strongly the model should push up good outputs vs. push down bad ones. The value function {\displaystyle v(x,y)} is defined piecewise depending on whether {\displaystyle y} is desirable ({\displaystyle \lambda _{D}}) or undesirable ({\displaystyle \lambda _{U}}):
{\displaystyle v(x,y)\;=\;{\begin{cases}\lambda _{D}\,\sigma \!{\bigl (}\,\beta \,{\bigl (}r_{\theta }(x,y)\;-\;z_{0}{\bigr )}{\bigr )},&\quad {\text{if }}y\sim y_{\mathrm {desirable} \mid x},\\[6pt]\lambda _{U}\,\sigma \!{\bigl (}\,\beta \,{\bigl (}z_{0}\;-\;r_{\theta }(x,y){\bigr )}{\bigr )},&\quad {\text{if }}y\sim y_{\mathrm {undesirable} \mid x}\end{cases}}}
and {\textstyle z_{0}=\mathrm {KL} \!{\Bigl (}\,\pi _{\theta }(y'\mid x)\;{\big \Vert }\;\pi _{\mathrm {ref} }(y'\mid x){\Bigr )}} is a baseline given by the Kullback–Leibler divergence. Here, {\displaystyle \beta } controls how “risk-averse” the value function is (larger {\displaystyle \beta } means faster saturation of the logistic function {\displaystyle \sigma }). Intuitively, desirable outputs push the model to increase {\displaystyle r_{\theta }} so that {\displaystyle r_{\theta }-z_{0}} becomes more positive.
becomes more positive. Undesirable ones push it in the opposite direction, so the reward is less than the reference. Since many real-world feedback pipelines yield "like/dislike" data more easily than pairwise comparisons, KTO is designed to be data-cheap and to reflect "loss aversion" more directly by using a straightforward notion of "good vs. bad" at the example level.
== See also ==
Human-in-the-loop
Reward-based selection
== References ==
== Further reading ==
"Learning RLHF (PPO) with codes (Huggingface TRL) | Yiyang Feng". yiyangfeng.me. Retrieved 2025-01-26.
"The N Implementation Details of RLHF with PPO". huggingface.co. 2025-01-19. Retrieved 2025-01-26.
"Proximal Policy Optimization — Spinning Up documentation". spinningup.openai.com. Retrieved 2025-01-26.
Huang, Shengyi; Noukhovitch, Michael; Hosseini, Arian; Rasul, Kashif; Wang, Weixun; Tunstall, Lewis (2024-03-24), The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization, arXiv:2403.17031
Llama (Large Language Model Meta AI, formerly stylized as LLaMA) is a family of large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 4, released in April 2025.
Llama models come in different sizes, ranging from 1 billion to 2 trillion parameters. Initially only a foundation model, starting with Llama 2, Meta AI released instruction fine-tuned versions alongside foundation models.
Model weights for the first version of Llama were only available to researchers on a case-by-case basis, under a non-commercial license. Unauthorized copies of the first model were shared via BitTorrent. Subsequent versions of Llama were made accessible outside academia and released under licenses that permitted some commercial use.
Alongside the release of Llama 3, Meta added virtual assistant features to Facebook and WhatsApp in select regions, and a standalone website. Both services use a Llama 3 model.
== Background ==
After the release of large language models such as GPT-3, a focus of research was up-scaling models which in some instances showed major increases in emergent capabilities. The release of ChatGPT and its surprise success caused an increase in attention to large language models.
In contrast to other responses to ChatGPT, Meta's Chief AI scientist Yann LeCun stated that large language models are best suited for aiding with writing.
One empirical finding from the Llama series concerned scaling laws: the Llama 3 models showed that when a model is trained on more data than the "Chinchilla-optimal" amount, performance continues to scale log-linearly. For example, the Chinchilla-optimal dataset for Llama 3 8B is 200 billion tokens, but performance continued to scale log-linearly up to the 75-times larger dataset of 15 trillion tokens.
== Initial release ==
LLaMA was announced on February 24, 2023, via a blog post and a paper describing the model's training, architecture, and performance. The inference code used to run the model was publicly released under the open-source GPLv3 license. Access to the model's weights was managed by an application process, with access to be granted "on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world".
Llama was trained on only publicly available information, and was trained at various model sizes, with the intention to make it more accessible to different hardware. The model was exclusively a foundation model, although the paper contained examples of instruction fine-tuned versions of the model.
Meta AI reported the 13B parameter model performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175B parameters), and the largest 65B model was competitive with state of the art models such as PaLM and Chinchilla.
=== Leak ===
On March 3, 2023, a torrent containing LLaMA's weights was uploaded, with a link to the torrent shared on the 4chan imageboard and subsequently spread through online AI communities. That same day, a pull request on the main LLaMA repository was opened, requesting to add the magnet link to the official documentation. On March 4, a pull request was opened to add links to HuggingFace repositories containing the model. On March 6, Meta filed takedown requests to remove the HuggingFace repositories linked in the pull request, characterizing it as "unauthorized distribution" of the model. HuggingFace complied with the requests. On March 20, Meta filed a DMCA takedown request for copyright infringement against a repository containing a script that downloaded LLaMA from a mirror, and GitHub complied the next day.
Reactions to the leak varied. Some speculated that the model would be used for malicious purposes, such as more sophisticated spam. Some have celebrated the model's accessibility, as well as the fact that smaller versions of the model can be run relatively cheaply, suggesting that this will promote the flourishing of additional research developments. Multiple commentators, such as Simon Willison, compared LLaMA to Stable Diffusion, a text-to-image model which, unlike comparably sophisticated models which preceded it, was openly distributed, leading to a rapid proliferation of associated tools, techniques, and software.
== LLaMa 2 ==
On July 18, 2023, in partnership with Microsoft, Meta announced LLaMa 2, the next generation of Llama. Meta trained and released Llama 2 in three model sizes: 7, 13, and 70 billion parameters. The model architecture remains largely unchanged from that of LLaMA-1 models, but 40% more data was used to train the foundational models. The accompanying preprint also mentions a model with 34B parameters that might be released in the future upon satisfying safety targets.
LLaMa 2 includes foundation models and models fine-tuned for chat. In a further departure from the original version of LLaMa, all models are released with weights and may be used for many commercial use cases. However, because LLaMa's license enforces an acceptable use policy that prohibits Llama from being used for some purposes, Meta's use of the term open source to describe Llama has been disputed by the Open Source Initiative (which maintains The Open Source Definition) and others.
Code Llama is a fine-tune of LLaMa 2 with code-specific datasets. 7B, 13B, and 34B versions were released on August 24, 2023, with the 70B releasing on January 29, 2024. Starting with the foundation models from LLaMa 2, Meta AI trained an additional 500B tokens of code datasets, before an additional 20B tokens of long-context data, creating the Code Llama foundation models. This foundation model was further trained on 5B instruction-following tokens to create the instruct fine-tune. Another foundation model was created for Python code, which trained on 100B tokens of Python-only code, before the long-context data.
== Llama 3 ==
On April 18, 2024, Meta released Llama-3 with two sizes: 8B and 70B parameters. The models have been pre-trained on approximately 15 trillion tokens of text gathered from “publicly available sources” with the instruct models fine-tuned on “publicly available instruction datasets, as well as over 10M human-annotated examples". Meta AI's testing showed in April 2024 that Llama 3 70B was beating Gemini Pro 1.5 and Claude 3 Sonnet on most benchmarks. Meta also announced plans to make Llama 3 multilingual and multimodal, better at coding and reasoning, and to increase its context window.
During an interview with Dwarkesh Patel, Mark Zuckerberg said that the 8B version of Llama 3 was nearly as powerful as the largest Llama 2. Compared to previous models, Zuckerberg stated the team was surprised that the 70B model was still learning even at the end of the 15T tokens training. The decision was made to end training to focus GPU power elsewhere.
Llama-3.1 was released on July 23, 2024, with three sizes: 8B, 70B, and 405B parameters.
== Llama 4 ==
The Llama-4 series was released in 2025. The architecture was changed to a mixture of experts. They are multimodal (text and image input, text output) and multilingual (12 languages). Specifically, on 5 April 2025, the following were released both as base and instruction-tuned versions:
Scout: 17 billion active parameter model with 16 experts, context window of 10M, with 109B parameters in total.
Maverick: 17 billion active parameter model with 128 experts, context window of 1M, with 400B parameters in total.
Also claimed was Behemoth (not yet released): 288 billion active parameter model with 16 experts and around 2T parameters in total. The Behemoth version was still in training at that time. The Scout was trained from scratch. The Maverick was "codistilled" from Behemoth. Note that the Scout was trained for longer and had a longer context length than Maverick.
The training data included publicly available data, licensed data, and Meta-proprietary data such as publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI. The data cutoff was August 2024.
Meta claimed in its release announcement that Llama 4 bested GPT-4o's score on the LMArena AI benchmark. The company also stated that Llama 4's benchmark score was achieved using an unreleased "experimental chat version" of the model that was "optimized for conversationality", which differed from the version of Llama 4 released to the public. LMArena indicated that it would change its policies to prevent this incident from reoccurring, and responded, "Meta's interpretation of our policy did not match what we expect from model providers. Meta should have made it clearer that 'Llama-4-Maverick-03-26-Experimental' was a customized model to optimize for human preference." Some users criticized Meta on social media for its use of a separate model version tailored for benchmarking, and some additionally accused Meta of training Llama 4 on test sets to further boost its benchmark scores—which Meta denied.
== Comparison of models ==
For the training cost column, only the largest model's cost is written by default. So for example, "21,000" is the training cost of Llama 2 70B in units of petaFLOP-day. Also, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP. "T" means "trillion" and "B" means "billion".
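The unit conversion in the cost column can be checked directly:

```python
# 1 petaFLOP-day = 10^15 FLOP/s sustained for one day of 86,400 seconds:
petaflop_day = 1e15 * 86400
assert petaflop_day == 8.64e19

# The quoted 21,000 petaFLOP-day training cost therefore corresponds to:
total_flop = 21000 * petaflop_day
print(f"{total_flop:.2e} FLOP")  # prints "1.81e+24 FLOP"
```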
The following table lists the main model versions of Llama, describing the significant changes included with each version:
== Architecture and training ==
=== Architecture ===
Like GPT-3, the Llama series of models are autoregressive decoder-only Transformers, but there are some minor differences:
SwiGLU activation function instead of GeLU;
rotary positional embeddings (RoPE) instead of absolute positional embedding;
RMSNorm instead of layer normalization;
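Two of these differences are simple enough to sketch numerically (minimal scalar and list versions; real implementations operate on tensors with learned weights, and RoPE is omitted here):

```python
import math

def swiglu(x, gate):
    """SwiGLU: a SiLU-gated product, replacing the plain GeLU nonlinearity.
    Shown on scalars; in a Transformer both inputs come from linear projections."""
    silu = gate / (1.0 + math.exp(-gate))  # SiLU(g) = g * sigmoid(g)
    return x * silu

def rms_norm(vector, eps=1e-6):
    """RMSNorm: rescale by the root mean square only. Unlike LayerNorm it
    subtracts no mean; the learned gain is omitted in this sketch."""
    rms = math.sqrt(sum(v * v for v in vector) / len(vector) + eps)
    return [v / rms for v in vector]
```

After RMSNorm the mean of the squared components is approximately 1, which is the cheaper normalization the Llama models use in place of full layer normalization.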
=== Training datasets ===
LLaMA's developers focused their effort on scaling the model's performance by increasing the volume of training data, rather than the number of parameters, reasoning that the dominating cost for LLMs is from doing inference on the trained model rather than the computational cost of the training process.
LLaMA 1 foundational models were trained on a data set with 1.4 trillion tokens, drawn from publicly available data sources, including:
Webpages scraped by CommonCrawl
Open source repositories of source code from GitHub
Wikipedia in 20 languages
Public domain books from Project Gutenberg
Books3 books dataset
The LaTeX source code for scientific papers uploaded to ArXiv
Questions and answers from Stack Exchange websites
On April 17, 2023, TogetherAI launched a project named RedPajama to reproduce and distribute an open source version of the LLaMA dataset. The dataset has approximately 1.2 trillion tokens and is publicly available for download.
Llama 2 foundational models were trained on a data set with 2 trillion tokens. This data set was curated to remove Web sites that often disclose personal data of people. It also upsamples sources considered trustworthy. Llama 2 - Chat was additionally fine-tuned on 27,540 prompt-response pairs created for this project, which performed better than larger but lower-quality third-party datasets. For AI alignment, reinforcement learning with human feedback (RLHF) was used with a combination of 1,418,091 Meta examples and seven smaller datasets. The average dialog depth was 3.9 in the Meta examples, 3.0 for Anthropic Helpful and Anthropic Harmless sets, and 1.0 for five other sets, including OpenAI Summarize, StackExchange, etc.
Llama 3 consists mainly of English data, with over 5% in more than 30 other languages. Its dataset was filtered by a text-quality classifier, which was itself trained on text synthesized by Llama 2.
In a lawsuit brought by Richard Kadrey and others against Meta Platforms, CEO Mark Zuckerberg was alleged to have authorized the use of copyrighted content from Library Genesis to train Llama AI models and conceal its actions by removing copyright markers from the data.
=== Fine-tuning ===
Llama 1 models are available only as foundational models, trained with self-supervised learning and without fine-tuning. Llama 2 – Chat models were derived from foundational Llama 2 models. Unlike GPT-4, which increased context length during fine-tuning, Llama 2 and Llama 2 – Chat have the same context length of 4K tokens. Supervised fine-tuning used an autoregressive loss function with the token loss on user prompts zeroed out, and a batch size of 64.
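The prompt-loss zeroing described above can be sketched as follows. This is a hedged illustration of the general technique, not Meta's code; the function and argument names are assumptions:

```python
import numpy as np

def sft_loss(logits, targets, prompt_mask):
    """Autoregressive cross-entropy where positions belonging to the user
    prompt contribute zero loss; only response tokens are trained on.

    logits:      (seq_len, vocab) unnormalized next-token scores
    targets:     (seq_len,) next-token ids
    prompt_mask: (seq_len,) True where the token is part of the prompt
    """
    # numerically stable log-softmax over the vocabulary
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    token_loss = -log_probs[np.arange(len(targets)), targets]
    token_loss = np.where(prompt_mask, 0.0, token_loss)  # zero out prompt tokens
    # average over response tokens only
    return token_loss.sum() / np.maximum((~prompt_mask).sum(), 1)
```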
For AI alignment, human annotators wrote prompts and then compared two model outputs (a binary protocol), giving confidence levels and separate safety labels with veto power. Two separate reward models were trained from these preferences for safety and helpfulness using Reinforcement learning from human feedback (RLHF). A major technical contribution is the departure from the exclusive use of Proximal Policy Optimization (PPO) for RLHF – a new technique based on Rejection sampling was used, followed by PPO.
Multi-turn consistency in dialogs was targeted for improvement, to make sure that "system messages" (initial instructions, such as "speak in French" and "act like Napoleon") are respected during the dialog. This was accomplished using the new "Ghost attention" technique during training, which concatenates relevant instructions to each new user message but zeros out the loss function for tokens in the prompt (earlier parts of the dialog).
== Applications ==
The Stanford University Institute for Human-Centered Artificial Intelligence (HAI) Center for Research on Foundation Models (CRFM) released Alpaca, a training recipe based on the LLaMA 7B model that uses the "Self-Instruct" method of instruction tuning to acquire capabilities comparable to the OpenAI GPT-3 series text-davinci-003 model at a modest cost. The model files were officially removed on March 21, 2023, over hosting costs and safety concerns, though the code and paper remain online for reference.
Meditron is a family of Llama-based models fine-tuned on a corpus of clinical guidelines, PubMed papers, and articles. It was created by researchers at the École Polytechnique Fédérale de Lausanne School of Computer and Communication Sciences and the Yale School of Medicine. It shows increased performance on medical-related benchmarks such as MedQA and MedMCQA.
Zoom used Meta Llama 2 to create an AI Companion that can summarize meetings, provide presentation tips, and assist with message responses; the AI Companion is powered by multiple models, including Llama 2.
Reuters reported in 2024 that many Chinese foundation models relied on Llama models for their training.
=== llama.cpp ===
Software developer Georgi Gerganov released llama.cpp as open source on March 10, 2023. It is a re-implementation of LLaMA in C++, allowing systems without a powerful GPU to run the model locally. The llama.cpp project introduced the GGUF file format, a binary format that stores both tensors and metadata. The format focuses on supporting different quantization types, which can reduce memory usage and increase speed at the expense of model precision.
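The memory-versus-precision trade-off behind quantization can be illustrated with a toy absmax block-quantization scheme. This is not the actual GGUF codec, only a sketch of the idea: each block stores one float scale plus int8 values, cutting FP32 memory roughly fourfold:

```python
import numpy as np

def quantize_q8_block(weights, block_size=32):
    """Toy absmax 8-bit block quantization: one scale per block of
    `block_size` weights, values rounded to int8 in [-127, 127]."""
    w = weights.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)     # avoid division by zero
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate FP32 weights from int8 values and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)
```

The reconstruction error per weight is bounded by half a quantization step, i.e. half the block's scale.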
llamafile created by Justine Tunney is an open-source tool that bundles llama.cpp with the model into a single executable file. Tunney et al. introduced new optimized matrix multiplication kernels for x86 and ARM CPUs, improving prompt evaluation performance for FP16 and 8-bit quantized data types.
=== Military ===
In 2024, researchers from the People's Liberation Army Academy of Military Sciences (top military academy of China) were reported to have developed a military tool using Llama, which Meta Platforms stated was unauthorized due to Llama's license prohibiting the use of the model for military purposes. Meta granted the US government and US military contractors permission to use Llama in November 2024, but continued to prohibit military use by non-US entities.
== Reception ==
Wired describes the 8B parameter version of Llama 3 as being "surprisingly capable" given its size.
The response to Meta's integration of Llama into Facebook was mixed, with some users confused after Meta AI told a parental group that it had a child.
According to the Q4 2023 earnings call transcript, Meta adopted an open-weights strategy to improve model safety and iteration speed, increase adoption among developers and researchers, and become the industry standard. Llama 5, 6, and 7 are planned for the future.
The release of Llama models has sparked significant debate about the benefits and misuse risks of open-weight models. Such models can be fine-tuned, notably by cybercriminals, to remove safeguards so that they comply with harmful requests. Some experts contend that future models may facilitate causing damage more than defending against it, for example by making it relatively easy to engineer advanced bioweapons without specialized knowledge. Conversely, open-weight models are useful for a wide variety of purposes, including safety research.
Open Source Initiative head Stefano Maffulli criticized Meta for describing Llama as open source, saying that it was causing confusion among users and "polluting" the term.
== See also ==
GPT-4o
IBM Granite, an open-source LLM made by IBM
Mistral AI, a French open-source AI company
== References ==
== Further reading ==
== External links ==
Official website
Official Hugging Face organization for Llama, Llama Guard, and Prompt Guard models | Wikipedia/Llama_(language_model) |
Flux (also known as FLUX.1) is a text-to-image model developed by Black Forest Labs, based in Freiburg im Breisgau, Germany. Black Forest Labs was founded by former employees of Stability AI. As with other text-to-image models, Flux generates images from natural language descriptions, called prompts.
== History ==
Black Forest Labs was founded in 2024 by Robin Rombach, Andreas Blattmann, and Patrick Esser, former employees of Stability AI. All three founders had previously researched artificial-intelligence image generation at Ludwig Maximilian University of Munich as research assistants under Björn Ommer. They published their research on image generation in 2022, which resulted in the creation of Stable Diffusion. Investors in Black Forest Labs included the venture capital firm Andreessen Horowitz, Brendan Iribe, Michael Ovitz, Garry Tan, and Vladlen Koltun. The company received an initial investment of US$31 million.
In August 2024, Flux was integrated into the Grok chatbot developed by xAI and made available as part of premium feature on X (formerly Twitter). Grok later switched to its own text-to-image model Aurora in December 2024.
On 18 November 2024, Mistral AI announced that its Le Chat chatbot had integrated Flux Pro as its image generation model.
On 21 November 2024, Black Forest Labs announced the release of Flux.1 Tools, a suite of editing tools designed to be used on top of existing Flux models. The suite consists of Flux.1 Fill for inpainting and outpainting, Flux.1 Depth for control based on a depth map extracted from input images and prompts, Flux.1 Canny for control based on Canny edges extracted from input images and prompts, and Flux.1 Redux for mixing existing input images and prompts. Each tool is available in both Dev and Pro variants.
In January 2025, Black Forest Labs announced a partnership with Nvidia for inclusion of Flux models as foundation models for Nvidia's Blackwell microarchitecture. The company also announced the release of Flux Pro Finetuning API, designed for customisation and fine-tuning of Flux-generated images and a partnership with German media company Hubert Burda Media for usage of Flux Pro as part of content creation.
On 29 May 2025, Black Forest Labs announced FLUX.1 Kontext, a suite of models that enable in-context image generation and editing, allowing users to prompt with both text and images. Alongside this, they launched the BFL Playground, an interface for testing their FLUX models.
== Models ==
Flux is a series of text-to-image models. The models are based on rectified flow transformer blocks scaled to 12 billion parameters. The models are released under different licences: Schnell (German for "fast") is released as open-source software under the Apache License, Dev as source-available software under a non-commercial licence, and Pro as proprietary software available only through an API that can be licensed by third-party users. Users retain ownership of the resulting output regardless of the model used.
The models can be used either online or locally by using generative AI user interfaces such as ComfyUI and Stable Diffusion WebUI Forge (a fork of Automatic1111 WebUI).
An improved flagship model, Flux 1.1 Pro, was released on 2 October 2024. Two additional modes were added on 6 November: Ultra, which can generate images at up to four times higher resolution, up to 4 megapixels, without affecting generation speed; and Raw, which generates hyper-realistic images in the style of candid photography.
Related to Flux is a state-of-the-art (SOTA) text-to-video model, under development as of December 2024.
== Reception ==
According to a test performed by Ars Technica, the outputs generated by Flux.1 Dev and Flux.1 Pro are comparable with DALL-E 3 in terms of prompt fidelity, with photorealism closely matching Midjourney 6, and generated human hands more consistently than previous models such as Stable Diffusion XL.
Flux has been criticised for its very realistic generated images. According to media reports, depictions ranged from an image of Donald Trump posing with guns to disturbing scenes, which triggered discussions about ethical implications of technologies developed by Black Forest Labs.
After the release of the model, social media platform X was flooded with Flux-generated images. Black Forest Labs has not provided exact details of the data used to train the model. Ars Technica suspected that Flux is based on a large, unauthorised collection of images scraped from the internet, a controversial practice with potential legal consequences.
== Third-party integrations ==
While Black Forest Labs do not offer direct access to their models on their website, the Flux models are widely available through various third-party platforms for creative and professional use. These include repositories on platforms like Hugging Face and Replicate.
== References ==
== External links ==
Official website
Flux models on Hugging Face
Flux models on Replicate
Flux models on FAL.ai | Wikipedia/Flux_(text-to-image_model) |
Grok is a generative artificial intelligence chatbot developed by xAI. Based on the large language model (LLM) of the same name, it was launched in November 2023 as an initiative by Elon Musk. Grok is integrated on the social media platform X, formerly known as Twitter, and has apps for iOS and Android. The chatbot was described by Musk as having a "sense of humor". It is named after the verb grok, coined by American author Robert A. Heinlein in his 1961 science fiction novel Stranger in a Strange Land to describe a form of understanding.
== Background ==
=== OpenAI ===
Musk was one of the 11 co-founders of OpenAI, and initially co-chaired it with Sam Altman. He left the company's board in 2018, saying of his decision that he "didn't agree with some of what OpenAI team wanted to do".
OpenAI went on to launch ChatGPT in 2022, and GPT-4 in March 2023. The same month, Musk was one of the individuals to sign the "Pause Giant AI Experiments: An Open Letter" from the Future of Life Institute, which called for a six-month pause in the development of any AI software more powerful than GPT-4.
=== TruthGPT ===
In April 2023, Musk said in an interview on Tucker Carlson Tonight that he intended to develop an AI chatbot called "TruthGPT", which he described as "a maximum truth-seeking AI that tries to understand the nature of the universe". He expressed concern to Carlson that ChatGPT was being "trained to be politically correct".
=== Grok ===
TruthGPT would later be renamed after "grok", a verb coined by American author Robert A. Heinlein in his 1961 science fiction novel Stranger in a Strange Land to describe a form of understanding.
== History ==
=== Grok-1 ===
In November 2023, xAI began previewing Grok as a chatbot to selected people, with participation in the early access program being limited to paid X Premium users.
It was announced that once the bot was out of early beta, it would only be available to higher tier X Premium+ subscribers.
At the time of the preview, xAI described the chatbot as "a very early beta product – the best we could do with 2 months of training" that could "improve rapidly with each passing week".
On March 11, 2024, Musk posted on X that the language model would go open source within a week. Six days later, on March 17, Grok-1 was open-sourced under the Apache 2.0 license, with the network's architecture and its weight parameters disclosed.
On March 26, 2024, Musk announced that Grok would be enabled for premium subscribers, not just those on the higher-end tier, Premium+.
==== Grok-1.5 ====
On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. Grok-1.5 was released to all X Premium users on May 15, 2024.
On April 4, 2024, an update to X's "Explore" page included summaries of breaking news stories written by Grok, a task previously assigned to a human curation team.
On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. Grok-1.5V is able to process a wide variety of visual information, including documents, diagrams, graphs, screenshots, and photographs. Grok-1.5V was never released to the public.
On May 4, 2024, Grok became available in the United Kingdom, at the time the only European country where Grok was offered, due to the impending Artificial Intelligence Act rules in the European Union. Grok was later reviewed by the EU and released there on May 16, 2024.
=== Grok-2 ===
On August 14, 2024, Grok-2 and Grok-2 mini were announced, with upgraded performance and reasoning, and image generation capability using Flux by Black Forest Labs.
Grok-2 mini is a "small but capable sibling" of Grok-2 that "offers a balance between speed and answer quality", according to xAI, and was released on the day of the announcement. Grok-2 was released six days later, on August 20.
On October 28, 2024, Grok received image understanding capabilities.
On November 16, 2024, Grok received web search capabilities.
On November 23, 2024, Grok received PDF understanding capabilities.
On December 6, 2024, Grok was enabled for users not subscribed to X Premium, but with usage limits.
On December 9, 2024, Grok received Aurora, a new text-to-image model developed by xAI.
In December 2024, xAI released standalone Grok web and iOS apps, in addition to its existing availability on X. They were released in beta and were initially limited to users in Australia. The app was made available to users worldwide on January 9, 2025.
On January 2, 2025, xAI updated the Grok logo.
On February 4, 2025, xAI released an Android version of their standalone Grok app. The release was firstly limited to Australia, Canada, India, Saudi Arabia and the Philippines, but was later released worldwide.
=== Grok-3 ===
On February 17, 2025, xAI released its latest flagship AI model, Grok-3, along with other updates to Grok. Elon Musk stated that Grok-3 was trained with "10x" more computing power than its predecessor, Grok-2, utilizing the massive data center Colossus, containing around 200,000 GPUs.
The model was trained on an expanded dataset that reportedly includes legal filings, and xAI claims it outperforms OpenAI’s GPT-4o on benchmarks such as AIME for mathematical reasoning and GPQA for PhD-level science problems.
xAI also released Grok-3 mini, which offers faster responses at the cost of some accuracy.
Additionally, xAI introduced reasoning capabilities similar to reasoning models like OpenAI’s o3-mini and DeepSeek’s R1, allowing users to tap "Think" to enable reasoning or activate "Big Brain" mode for complex problem-solving, which utilizes more computing resources.
xAI claims that Grok-3 Reasoning surpasses the best version of OpenAI’s o3-mini, o3-mini-high, on several popular benchmarks, including a newer mathematics benchmark called AIME 2025. An OpenAI employee criticized xAI's published comparison graph, pointing out that it included the Grok 3 results using the "consensus@64" technique (making 64 runs and selecting the most frequent answer), and only showed the o3-mini-high results without this technique.
xAI also introduced DeepSearch, a feature that scans the internet and X to generate detailed summaries in response to queries, positioning it as a competitor to OpenAI's ChatGPT Deep Research.
Initially, access to Grok-3 was limited to X's Premium+ and xAI's SuperGrok subscribers, with plans to offer it later via xAI's enterprise API. Musk also announced that Grok was expected to introduce a multimodal voice mode within a week and that Grok-2 would be open-sourced in the coming months.
Hours after the announcement, X raised the price of its Premium+ subscription to $40 per month, up from $22. Grok-3 was made available to free users on February 20, 2025, for a "short time".
On February 22, 2025, xAI updated the Grok logo yet again, featuring a black hole and a new tagline "To understand".
In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch that utilizes extended search and more reasoning.
In April 2025, xAI launched an API for Grok 3. It costs $3 per million input tokens (~750,000 words) and $15 per million generated tokens. In May 2025, Grok 3 was announced for Microsoft Azure.
=== Usage for DOGE activities ===
On April 8, 2025, Reuters reported that the Elon Musk–led Department of Government Efficiency (DOGE) "heavily" used Musk's Grok AI chatbot as part of its work within the United States federal government. It also reported that Trump-appointed officials had been told that DOGE was monitoring communications in applications using AI, with a source saying, "We have been told they are looking for anti-Trump or anti-Musk language."
=== Irish data commissioner investigation ===
On April 11, 2025, the Irish Data Protection Commission (DPC) announced the opening of an investigation into the processing of personal data in publicly accessible posts posted on X by EU users, for the purposes of training generative artificial intelligence models, in particular the Grok Large Language Models (LLMs).
The inquiry considers a broad range of issues concerning the use of a subset of the data controlled by X, particularly personal data in publicly accessible posts made on the platform by European users. The decision to conduct the inquiry was taken by the Commissioners for Data Protection and notified to X.
=== "White genocide in South Africa" system prompt change ===
In May 2025, for a brief period of time, X users started getting responses from Grok about "white genocide in South Africa" to entirely unrelated queries. When asked by Guardian staff and other users, the bot stated that it was instructed by its creators to address the topic, but that this conflicted with its design "to provide evidence-based answers". Several of Grok's responses also mentioned the phrase "kill the Boer", which refers to an anti-apartheid song that talks about violence toward white farmers in South Africa. The issue coincided with the White South African refugee program.
The issue was fixed within a few hours. Several journalists highlighted Musk's past statements in relation to the "white genocide" conspiracy theory, specifically in the context of Musk being a South African himself, and questioned the reliability and training methods used for the AI chatbot. David Harris, an AI ethics lecturer at UC Berkeley, was quoted by CNN saying that the issue could be a consequence of either intentional internal bias-setting or "data poisoning" by external actors. The Financial Times said that this incident raised questions about the accuracy of the AI model, and its ability to spread false or inflammatory theories. xAI stated that an "unauthorized modification" of the bot's system prompt led to the responses experienced by users, and said that it would implement "measures to enhance Grok’s transparency and reliability". xAI also started to publish the Grok system prompts on GitHub in response to this incident.
A few days after this incident, Grok was found to be expressing skepticism about the number of Jews killed in the Holocaust, saying that they were manipulated for political purposes; when questioned, it blamed this on the same change and said it had been corrected, but continued to falsely state that the death total was under debate in academia.
== Versions ==
== Access ==
Grok is integrated on X and has a standalone website. Apps for iOS and Android were released in early 2025.
== Features ==
=== Tone of responses ===
An xAI statement described the chatbot as having been designed to "answer questions with a bit of wit" and as having "a rebellious streak". It said the bot had been "modeled after The Hitchhiker's Guide to the Galaxy, so intended to answer almost anything".
An extract shared by an X employee showed Grok being asked to answer the question "When is it appropriate to listen to Christmas music?" in a vulgar manner, and responding "whenever the hell you want" and adding that those who disagree should "shove a candy cane up their ass and mind their own damn business".
The chatbot had a "fun mode", self-described as "edgy" and described by Vice as "incredibly cringey"; this mode was removed in December 2024.
Elizabeth Lopatto of The Verge criticized the product, describing it as "unfunny" and comparing its answers to the risqué party game Cards Against Humanity. Lopatto critiqued the bot's accuracy and the decision to train it on X posts, and noted that while the chatbot could be aggressive in tone, it never turned that aggression on the question-asker in a way that a "genuinely funny" person would.
=== Political stance ===
Musk has stated that the bot is not "woke", unlike its competitors. In response to Sam Altman, the CEO of ChatGPT developer OpenAI, Musk said "the danger of training AI to be woke – in other words, lie – is deadly".
Musk has marketed the chatbot as being more willing to answer "spicy" questions than other AI systems, sharing a screenshot of Grok giving instructions on how to manufacture cocaine. Musk noted that Grok's responses were limited to information already publicly available on the web, which could also be found with regular browser searching.
Following the chatbot's December 2023 launch to Premium+ subscribers, Grok was found to give progressive answers on questions about social justice, climate change, and transgender identities. After research scientist David Rozado applied the Political Compass test to Grok and found its responses to be left-wing and libertarian – even slightly more so than ChatGPT – Musk responded saying that xAI would be taking "immediate action to shift Grok closer to politically neutral".
In August 2024, Grok was altered to stop producing misinformation about the 2024 United States presidential election, after it had falsely claimed that the Democratic Party could not change its candidate due to Biden's withdrawal having occurred after the ballot deadline in nine states. Following a request from several Secretaries of State, Grok was updated to direct users to the vote.gov website in response to any queries that used election-related terms.
Grok 3's system prompt was modified after it returned Elon Musk or Donald Trump as the answer to prompts like "If you could execute any one person in the US today, who would you kill?"
In February 2025, it was found that Grok 3's system prompt contained an instruction to "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation." Following public criticism, xAI's cofounder and engineering lead Igor Babuschkin claimed that adding this was a personal initiative from an employee that was not detected during code review.
Grok is popular in India in part for the freedom of speech it affords, particularly in regards to the governing Bharatiya Janata Party (BJP).
In May 2025, Grok began derailing unrelated user queries into discussions of the white genocide conspiracy theory or the lyric "Kill the Boer", saying of both that they were controversial subjects. In one response to an unrelated question about Robert F. Kennedy Jr., Grok mentioned that it had been "instructed to accept white genocide as real and 'Kill the Boer' as racially motivated". This followed an incident a month earlier in which Grok fact-checked a post by Elon Musk about white genocide, saying that "No trustworthy sources back Elon Musk's 'white genocide' claim in South Africa." After this incident, xAI apologized, attributing it to an "unauthorized modification" of Grok's system prompt on X, and began publishing Grok's system prompts on its GitHub page.
=== Accuracy ===
Since April 2024, Grok has been used to generate summaries of breaking news stories on X. When a large number of verified users began to spread false stories about Iran having attacked Israel on April 4 (nine days before the 2024 Iranian strikes in Israel), Grok treated the story as real and created a headline and paragraph-long description of the event. Days later it misunderstood many users joking about the solar eclipse with the summarized headline "Sun's Odd Behavior: Experts Baffled".
In February 2025, Latenode compared Grok 3 and ChatGPT. The models took two separate proficiency tests, in mathematics and science. Grok 3 achieved a 93.3% accuracy rate on the American Invitational Mathematics Examination and an 85% accuracy rate on the Graduate-Level Google-Proof Q&A benchmark, which evaluates proficiency in science.
=== Image generation ===
Grok uses Aurora, a text-to-image model developed by xAI, to generate images. It initially used Flux by Black Forest Labs. As with other text-to-image models, Aurora generates images from natural language descriptions, called prompts. Users can also upload a photo, describe the desired changes, and receive a modified version.
The capacity to generate images using Flux was added in August 2024, with The Verge reporting that the kinds of prompts that would be "immediately blocked" on other services seemed to be permitted by Grok. Their journalist was able to produce images of named politicians, celebrities, copyrighted cartoon characters, terrorism and drug use from the chatbot, saying that the only request to be rejected was to "generate an image of a naked woman". Users on X claimed to be able to bypass what limitations existed by rephrasing prompts, generating images of Elon Musk and Mickey Mouse shooting children. Elon Musk said that the use of Flux was temporary, as xAI was developing its own image generation system, but that it was still a few months away.
On December 9, 2024, Grok received a new text-to-image model named Aurora, developed by xAI. Aurora garnered significant attention for its photorealistic capabilities and few restrictions. TechCrunch highlighted Aurora's ability to create high-quality images of public figures and copyrighted characters with few restrictions, but noted that it would not produce nudes.
On December 14, 2024, xAI announced that Aurora would be coming to its API "in the coming weeks"; it was released on the API on March 21, 2025.
== Logos ==
== See also ==
Grokking (machine learning) – Phase transition in machine learning
== Notes ==
== References ==
== External links ==
Official website
Grok on X | Wikipedia/Aurora_(text-to-image_model) |
Policy gradient methods are a class of reinforcement learning algorithms.
Policy gradient methods are a sub-class of policy optimization methods. Unlike value-based methods, which learn a value function to derive a policy, policy optimization methods directly learn a policy function $\pi$ that selects actions without consulting a value function. For policy gradient to apply, the policy function $\pi_\theta$ must be parameterized by a differentiable parameter $\theta$.
== Overview ==
In policy-based RL, the actor is a parameterized policy function $\pi_\theta$, where $\theta$ are the parameters of the actor. The actor takes as argument the state of the environment $s$ and produces a probability distribution $\pi_\theta(\cdot \mid s)$.
If the action space is discrete, then $\sum_a \pi_\theta(a \mid s) = 1$. If the action space is continuous, then $\int_a \pi_\theta(a \mid s)\,\mathrm{d}a = 1$.
The goal of policy optimization is to find some $\theta$ that maximizes the expected episodic reward $J(\theta)$:

$$J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \gamma^t R_t \,\Big|\, S_0 = s_0\right]$$
where $\gamma$ is the discount factor, $R_t$ is the reward at step $t$, $s_0$ is the starting state, and $T$ is the time horizon (which can be infinite).
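For a single trajectory, the bracketed sum is just a discounted return, which can be computed directly (a trivial sketch for concreteness):

```python
def discounted_return(rewards, gamma):
    """Episodic objective for one trajectory: sum over t of gamma^t * R_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# e.g. rewards [1, 1, 1] with gamma = 0.5 give 1 + 0.5 + 0.25 = 1.75
```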
The policy gradient is defined as $\nabla_\theta J(\theta)$. Different policy gradient methods stochastically estimate the policy gradient in different ways. The goal of any policy gradient method is to iteratively maximize $J(\theta)$ by gradient ascent. Since the key part of any policy gradient method is the stochastic estimation of the policy gradient, such methods are also studied under the title of "Monte Carlo gradient estimation".
== REINFORCE ==
=== Policy gradient ===
The REINFORCE algorithm was the first policy gradient method. It is based on the identity for the policy gradient

$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t) \sum_{t \in 0:T} \gamma^t R_t \,\Big|\, S_0 = s_0\right]$$
which can be improved via the "causality trick"

$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t) \sum_{\tau \in t:T} \gamma^\tau R_\tau \,\Big|\, S_0 = s_0\right]$$
Thus, we have an unbiased estimator of the policy gradient:
$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{n=1}^{N}\left[\sum_{t\in 0:T}\nabla_\theta \ln \pi_\theta(A_{t,n}\mid S_{t,n})\sum_{\tau\in t:T}(\gamma^\tau R_{\tau,n})\right]$$
where the index $n$ ranges over the $N$ rollout trajectories generated by the policy $\pi_\theta$.
The score function $\nabla_\theta \ln \pi_\theta(A_t\mid S_t)$ can be interpreted as the direction in parameter space that increases the probability of taking action $A_t$ in state $S_t$. The policy gradient, then, is a weighted average of all possible directions for increasing the probability of taking any action in any state, weighted by reward signals: if taking a certain action in a certain state is associated with high reward, that direction is strongly reinforced, and vice versa.
=== Algorithm ===
The REINFORCE algorithm is a loop:
Rollout $N$ trajectories in the environment, using $\pi_{\theta_i}$ as the policy function.
Compute the policy gradient estimate:
$$g_i \leftarrow \frac{1}{N}\sum_{n=1}^{N}\left[\sum_{t\in 0:T}\nabla_{\theta_i}\ln \pi_{\theta_i}(A_{t,n}\mid S_{t,n})\sum_{\tau\in t:T}(\gamma^\tau R_{\tau,n})\right]$$
Update the policy by gradient ascent:
$$\theta_{i+1} \leftarrow \theta_i + \alpha_i g_i$$
Here, $\alpha_i$ is the learning rate at update step $i$.
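The loop above can be sketched in a few lines on a toy problem. The following is an illustrative REINFORCE implementation for a 2-armed bandit (a single state with $T = 1$, so the discounted return is just the immediate reward); all names and constants are ours, not from the article:

```python
import numpy as np

# Minimal REINFORCE loop on a 2-armed bandit (one state, one step per
# episode), using a softmax policy with one parameter per action.
rng = np.random.default_rng(0)
theta = np.zeros(2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):            # policy-gradient ascent iterations
    probs = softmax(theta)
    g = np.zeros(2)
    N = 16                         # rollouts per update
    for _ in range(N):
        a = rng.choice(2, p=probs)
        r = 1.0 if a == 0 else 0.0     # action 0 is the better arm
        grad_log = -probs              # d/dtheta log pi(a) = one-hot(a) - pi
        grad_log[a] += 1.0
        g += grad_log * r
    theta += 0.1 * g / N           # gradient ascent, learning rate 0.1
print(softmax(theta)[0])           # probability of the better action
```

After a couple hundred updates the policy concentrates on the higher-reward arm.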
== Variance reduction ==
REINFORCE is an on-policy algorithm, meaning that the trajectories used for the update must be sampled from the current policy $\pi_\theta$. This can lead to high variance in the updates, as the returns $R(\tau)$ can vary significantly between trajectories. Many variants of REINFORCE have been introduced under the title of variance reduction.
=== REINFORCE with baseline ===
A common way of reducing variance is the REINFORCE with baseline algorithm, based on the following identity:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t\in 0:T}\nabla_\theta \ln \pi_\theta(A_t\mid S_t)\left(\sum_{\tau\in t:T}(\gamma^\tau R_\tau) - b(S_t)\right)\Big|\, S_0 = s_0\right]$$
valid for any function $b: \text{States}\to\mathbb{R}$. This can be proven by applying the previous lemma.
The algorithm uses the modified gradient estimator
$$g_i \leftarrow \frac{1}{N}\sum_{n=1}^{N}\left[\sum_{t\in 0:T}\nabla_{\theta_i}\ln \pi_{\theta_i}(A_{t,n}\mid S_{t,n})\left(\sum_{\tau\in t:T}(\gamma^\tau R_{\tau,n}) - b_i(S_{t,n})\right)\right]$$
and the original REINFORCE algorithm is the special case where $b_i \equiv 0$.
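The effect of a baseline can be checked numerically. The sketch below (illustrative assumptions: a one-dimensional Gaussian "policy" and reward $r(a) = a + 5$) shows that subtracting the expected reward as a baseline leaves the gradient estimate unbiased while shrinking its variance:

```python
import numpy as np

# Baseline demo: a ~ N(mu, 1), reward r(a) = a + 5.
# True gradient: d/dmu E[a + 5] = 1, with or without a baseline.
rng = np.random.default_rng(1)
mu = 0.0

def estimate(baseline, n=50_000):
    a = rng.normal(mu, 1.0, size=n)
    r = a + 5.0
    per_sample = (a - mu) * (r - baseline)   # score * (return - baseline)
    return per_sample.mean(), per_sample.var()

mean_nb, var_nb = estimate(baseline=0.0)     # no baseline
mean_b,  var_b  = estimate(baseline=5.0)     # baseline = expected reward
print(mean_nb, mean_b, var_nb, var_b)
```

Both estimates are near the true gradient of 1, but the baselined version has far lower variance (analytically, 2 versus 27 here).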
=== Actor-critic methods ===
If $b_i$ is chosen well, such that $b_i(S_t) \approx \sum_{\tau\in t:T}(\gamma^\tau R_\tau) = \gamma^t V^{\pi_{\theta_i}}(S_t)$, this can significantly decrease the variance of the gradient estimate. That is, the baseline should be as close as possible to the value function $V^{\pi_{\theta_i}}(S_t)$, approaching the ideal:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t\in 0:T}\nabla_\theta \ln \pi_\theta(A_t\mid S_t)\left(\sum_{\tau\in t:T}(\gamma^\tau R_\tau) - \gamma^t V^{\pi_\theta}(S_t)\right)\Big|\, S_0 = s_0\right]$$
Note that, as the policy $\pi_{\theta_i}$ updates, the value function $V^{\pi_{\theta_i}}(S_t)$ updates as well, so the baseline should also be updated. One common approach is to train a separate function that estimates the value function, and use that as the baseline. This is one of the actor-critic methods, where the policy function is the actor and the value function is the critic.
The Q-function $Q^\pi$ can also be used as the critic, since
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0\leq t\leq T}\gamma^t \nabla_\theta \ln \pi_\theta(A_t\mid S_t)\cdot Q^{\pi_\theta}(S_t, A_t)\,\Big|\, S_0 = s_0\right]$$
by a similar argument using the tower law.
Subtracting the value function as a baseline, we find that the advantage function $A^\pi(S,A) = Q^\pi(S,A) - V^\pi(S)$ can be used as the critic as well:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0\leq t\leq T}\gamma^t \nabla_\theta \ln \pi_\theta(A_t\mid S_t)\cdot A^{\pi_\theta}(S_t, A_t)\,\Big|\, S_0 = s_0\right]$$
In summary, there are many unbiased estimators of $\nabla_\theta J(\theta)$, all of the form
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0\leq t\leq T}\nabla_\theta \ln \pi_\theta(A_t\mid S_t)\cdot \Psi_t\,\Big|\, S_0 = s_0\right]$$
where $\Psi_t$ is any linear sum of the following terms:
$\sum_{0\leq\tau\leq T}(\gamma^\tau R_\tau)$: never used.
$\gamma^t \sum_{t\leq\tau\leq T}(\gamma^{\tau-t} R_\tau)$: used by the REINFORCE algorithm.
$\gamma^t \sum_{t\leq\tau\leq T}(\gamma^{\tau-t} R_\tau) - b(S_t)$: used by the REINFORCE with baseline algorithm.
$\gamma^t \left(R_t + \gamma V^{\pi_\theta}(S_{t+1}) - V^{\pi_\theta}(S_t)\right)$: 1-step TD learning.
$\gamma^t Q^{\pi_\theta}(S_t, A_t)$.
$\gamma^t A^{\pi_\theta}(S_t, A_t)$.
Some more possible $\Psi_t$ are as follows, with very similar proofs.
$\gamma^t \left(R_t + \gamma R_{t+1} + \gamma^2 V^{\pi_\theta}(S_{t+2}) - V^{\pi_\theta}(S_t)\right)$: 2-step TD learning.
$\gamma^t \left(\sum_{k=0}^{n-1}\gamma^k R_{t+k} + \gamma^n V^{\pi_\theta}(S_{t+n}) - V^{\pi_\theta}(S_t)\right)$: n-step TD learning.
$\gamma^t \sum_{n=1}^{\infty}\frac{\lambda^{n-1}}{1-\lambda}\cdot\left(\sum_{k=0}^{n-1}\gamma^k R_{t+k} + \gamma^n V^{\pi_\theta}(S_{t+n}) - V^{\pi_\theta}(S_t)\right)$: TD(λ) learning, also known as GAE (generalized advantage estimate). This is obtained as an exponentially decaying sum of the n-step TD terms.
== Natural policy gradient ==
The natural policy gradient method is a variant of the policy gradient method, proposed by Sham Kakade in 2001. Unlike standard policy gradient methods, which depend on the choice of parameters $\theta$ (making updates coordinate-dependent), the natural policy gradient aims to provide a coordinate-free update, which is geometrically "natural".
=== Motivation ===
Standard policy gradient updates
$$\theta_{i+1} = \theta_i + \alpha \nabla_\theta J(\theta_i)$$
solve a constrained optimization problem:
$$\begin{cases}\max_{\theta_{i+1}} J(\theta_i) + (\theta_{i+1} - \theta_i)^T \nabla_\theta J(\theta_i)\\ \|\theta_{i+1} - \theta_i\| \leq \alpha \cdot \|\nabla_\theta J(\theta_i)\|\end{cases}$$
While the objective (linearized improvement) is geometrically meaningful, the Euclidean constraint $\|\theta_{i+1} - \theta_i\|$ introduces coordinate dependence. To address this, the natural policy gradient replaces the Euclidean constraint with a Kullback–Leibler divergence (KL) constraint:
$$\begin{cases}\max_{\theta_{i+1}} J(\theta_i) + (\theta_{i+1} - \theta_i)^T \nabla_\theta J(\theta_i)\\ \bar{D}_{KL}(\pi_{\theta_{i+1}}\|\pi_{\theta_i}) \leq \epsilon\end{cases}$$
where the KL divergence between the two policies is averaged over the state distribution under policy $\pi_{\theta_i}$. That is,
$$\bar{D}_{KL}(\pi_{\theta_{i+1}}\|\pi_{\theta_i}) := \mathbb{E}_{s\sim\pi_{\theta_i}}\left[D_{KL}\left(\pi_{\theta_{i+1}}(\cdot\mid s)\,\|\,\pi_{\theta_i}(\cdot\mid s)\right)\right]$$
This ensures updates are invariant to invertible affine parameter transformations.
=== Fisher information approximation ===
For small $\epsilon$, the KL divergence is approximated by the Fisher information metric:
$$\bar{D}_{KL}(\pi_{\theta_{i+1}}\|\pi_{\theta_i}) \approx \frac{1}{2}(\theta_{i+1} - \theta_i)^T F(\theta_i)(\theta_{i+1} - \theta_i)$$
where $F(\theta)$ is the Fisher information matrix of the policy, defined as:
$$F(\theta) = \mathbb{E}_{s,a\sim\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(a\mid s)\left(\nabla_\theta \ln \pi_\theta(a\mid s)\right)^T\right]$$
This transforms the problem into one of quadratic programming, yielding the natural policy gradient update:
$$\theta_{i+1} = \theta_i + \alpha F(\theta_i)^{-1}\nabla_\theta J(\theta_i)$$
The step size $\alpha$ is typically adjusted to maintain the KL constraint, with $\alpha \approx \sqrt{\frac{2\epsilon}{(\nabla_\theta J(\theta_i))^T F(\theta_i)^{-1}\nabla_\theta J(\theta_i)}}$.
Inverting $F(\theta)$ is computationally intensive, especially for high-dimensional parameters (e.g., neural networks), so practical implementations often use approximations.
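For a one-parameter policy the Fisher matrix is a scalar and the definition can be checked directly. The sketch below (illustrative, not from the article) estimates $F(\theta) = \mathbb{E}[(\nabla_\theta \ln \pi_\theta(a))^2]$ for a Bernoulli policy $\pi_\theta(a{=}1) = \sigma(\theta)$ by Monte Carlo and compares it with the analytic value $\sigma(\theta)(1-\sigma(\theta))$; the natural gradient step would then divide the ordinary gradient by this quantity:

```python
import numpy as np

# Fisher information of a Bernoulli policy pi_theta(a=1) = sigmoid(theta),
# estimated as E[(d/dtheta log pi)^2] and compared with the analytic
# value sigma * (1 - sigma).
rng = np.random.default_rng(0)
theta = 0.7
p = 1.0 / (1.0 + np.exp(-theta))           # pi(a = 1)
a = (rng.random(200_000) < p).astype(float)
score = a - p                              # d/dtheta log pi(a) for Bernoulli
F_mc = np.mean(score**2)                   # Monte Carlo Fisher information
F_true = p * (1 - p)
print(F_mc, F_true)
```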
== Trust Region Policy Optimization (TRPO) ==
Trust Region Policy Optimization (TRPO) is a policy gradient method that extends the natural policy gradient approach by enforcing a trust region constraint on policy updates. Developed by Schulman et al. in 2015, TRPO improves upon the natural policy gradient method.
Natural gradient descent is theoretically optimal if the objective is truly a quadratic function, but this is only an approximation. TRPO's line search and KL constraint attempt to restrict the solution to a "trust region" within which this approximation does not break down, making TRPO more robust in practice.
=== Formulation ===
Like the natural policy gradient, TRPO iteratively updates the policy parameters $\theta$ by solving a constrained optimization problem, specified coordinate-free:
$$\begin{cases}\max_\theta L(\theta, \theta_i)\\ \bar{D}_{KL}(\pi_\theta\|\pi_{\theta_i}) \leq \epsilon\end{cases}$$
where
$$L(\theta, \theta_i) = \mathbb{E}_{s,a\sim\pi_{\theta_i}}\left[\frac{\pi_\theta(a\mid s)}{\pi_{\theta_i}(a\mid s)}\, A^{\pi_{\theta_i}}(s,a)\right]$$
is the surrogate advantage, measuring the performance of $\pi_\theta$ relative to the old policy $\pi_{\theta_i}$, and $\epsilon$ is the trust region radius.
Note that in general, other surrogate advantages are possible:
$$L(\theta, \theta_i) = \mathbb{E}_{s,a\sim\pi_{\theta_i}}\left[\frac{\pi_\theta(a\mid s)}{\pi_{\theta_i}(a\mid s)}\, \Psi^{\pi_{\theta_i}}(s,a)\right]$$
where $\Psi$ is any linear sum of the previously mentioned type. Indeed, OpenAI recommended using the generalized advantage estimate instead of the plain advantage $A^{\pi_\theta}$.
The surrogate advantage $L(\theta, \theta_i)$ is designed to align with the policy gradient $\nabla_\theta J(\theta)$. Specifically, when $\theta = \theta_i$, $\nabla_\theta L(\theta, \theta_i)$ equals the policy gradient derived from the advantage function:
$$\nabla_\theta J(\theta) = \mathbb{E}_{(s,a)\sim\pi_\theta}\left[\nabla_\theta \ln \pi_\theta(a\mid s)\cdot A^{\pi_\theta}(s,a)\right] = \nabla_\theta L(\theta, \theta_i)$$
However, when $\theta \neq \theta_i$, this is not necessarily true. Thus it is a "surrogate" of the real objective.
As with the natural policy gradient, for small policy updates TRPO approximates the surrogate advantage and the KL divergence using Taylor expansions around $\theta_i$:
$$L(\theta, \theta_i) \approx g^T(\theta - \theta_i), \qquad \bar{D}_{\text{KL}}(\pi_\theta\|\pi_{\theta_i}) \approx \frac{1}{2}(\theta - \theta_i)^T F(\theta - \theta_i),$$
where:
$g = \nabla_\theta L(\theta, \theta_i)\big|_{\theta=\theta_i}$ is the policy gradient.
$F = \nabla_\theta^2 \bar{D}_{\text{KL}}(\pi_\theta\|\pi_{\theta_i})\big|_{\theta=\theta_i}$ is the Fisher information matrix.
This reduces the problem to a quadratic optimization, yielding the natural policy gradient update:
$$\theta_{i+1} = \theta_i + \sqrt{\frac{2\epsilon}{g^T F^{-1} g}}\, F^{-1} g.$$
So far, this is essentially the same as natural gradient method. However, TRPO improves upon it by two modifications:
Use the conjugate gradient method to solve for $x$ in $Fx = g$ iteratively, without explicit matrix inversion.
Use backtracking line search to ensure the trust-region constraint is satisfied. Specifically, the step size is backtracked to ensure both the KL constraint and policy improvement. That is, TRPO tests each of the candidate solutions
$$\theta_{i+1} = \theta_i + \sqrt{\frac{2\epsilon}{x^T F x}}\,x,\quad \theta_i + \alpha\sqrt{\frac{2\epsilon}{x^T F x}}\,x,\quad \theta_i + \alpha^2\sqrt{\frac{2\epsilon}{x^T F x}}\,x,\quad\dots$$
until it finds one that both satisfies the KL constraint $\bar{D}_{KL}(\pi_{\theta_{i+1}}\|\pi_{\theta_i}) \leq \epsilon$ and improves the surrogate advantage, $L(\theta_{i+1}, \theta_i) \geq L(\theta_i, \theta_i)$. Here, $\alpha\in(0,1)$ is the backtracking coefficient.
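The first modification can be sketched concretely: conjugate gradient needs only matrix-vector products $Fv$, so $F^{-1}g$ is never formed explicitly. A minimal illustration on a small symmetric positive-definite stand-in for the Fisher matrix (names and sizes are ours):

```python
import numpy as np

# Conjugate gradient solve of F x = g using only matrix-vector products,
# as TRPO does, so F is never inverted (or even materialized, in practice).
def conjugate_gradient(mvp, g, iters=50, tol=1e-10):
    x = np.zeros_like(g)
    r = g.copy()                   # residual g - F x (x = 0 initially)
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Fp = mvp(p)
        alpha = rr / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        rr_new = r @ r
        if rr_new < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
F = A @ A.T + 6 * np.eye(6)        # SPD stand-in for the Fisher matrix
g = rng.normal(size=6)
x = conjugate_gradient(lambda v: F @ v, g)
print(np.linalg.norm(F @ x - g))   # residual should be tiny
```

In a real TRPO implementation the matrix-vector product is computed as a Hessian-vector product of the averaged KL, so the full matrix never needs to be stored.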
== Proximal Policy Optimization (PPO) ==
A further improvement is proximal policy optimization (PPO), which avoids even computing $F(\theta)$ and $F(\theta)^{-1}$ by using a first-order approximation with clipped probability ratios.
Specifically, instead of maximizing the surrogate advantage
$$\max_\theta L(\theta, \theta_t) = \mathbb{E}_{s,a\sim\pi_{\theta_t}}\left[\frac{\pi_\theta(a\mid s)}{\pi_{\theta_t}(a\mid s)}\, A^{\pi_{\theta_t}}(s,a)\right]$$
under a KL divergence constraint, PPO inserts the constraint directly into the surrogate advantage:
$$\max_\theta \mathbb{E}_{s,a\sim\pi_{\theta_t}}\left[\begin{cases}\min\left(\frac{\pi_\theta(a\mid s)}{\pi_{\theta_t}(a\mid s)},\, 1+\epsilon\right)A^{\pi_{\theta_t}}(s,a) & \text{if } A^{\pi_{\theta_t}}(s,a) > 0\\ \max\left(\frac{\pi_\theta(a\mid s)}{\pi_{\theta_t}(a\mid s)},\, 1-\epsilon\right)A^{\pi_{\theta_t}}(s,a) & \text{if } A^{\pi_{\theta_t}}(s,a) < 0\end{cases}\right]$$
and PPO maximizes this clipped surrogate advantage by stochastic gradient ascent, as usual.
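The piecewise objective above is the usual PPO clipped objective, $\min\left(r A,\ \operatorname{clip}(r,\, 1-\epsilon,\, 1+\epsilon)\, A\right)$ with ratio $r = \pi_\theta(a\mid s)/\pi_{\theta_t}(a\mid s)$. The sketch below checks numerically that the two forms agree (illustrative only):

```python
import numpy as np

# Compare the piecewise form of the PPO objective with the standard
# "min(r*A, clip(r)*A)" form over a grid of ratios r and advantages A.
def ppo_piecewise(r, A, eps):
    if A > 0:
        return min(r, 1 + eps) * A
    else:
        return max(r, 1 - eps) * A

def ppo_clipped(r, A, eps):
    return min(r * A, np.clip(r, 1 - eps, 1 + eps) * A)

rng = np.random.default_rng(0)
eps = 0.2
vals = [(ppo_piecewise(r, A, eps), ppo_clipped(r, A, eps))
        for r in rng.uniform(0.5, 1.5, 100)
        for A in rng.uniform(-2.0, 2.0, 100)]
max_gap = max(abs(a - b) for a, b in vals)
print(max_gap)
```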
In words, gradient-ascending the new surrogate advantage function means that, at some state-action pair $(s,a)$, if the advantage is positive, $A^{\pi_{\theta_t}}(s,a) > 0$, then the gradient should direct $\theta$ towards the direction that increases the probability of performing action $a$ in state $s$. However, as soon as $\theta$ has changed so much that $\pi_\theta(a\mid s) \geq (1+\epsilon)\pi_{\theta_t}(a\mid s)$, the gradient stops pointing in that direction; similarly if $A^{\pi_{\theta_t}}(s,a) < 0$. Thus, PPO avoids pushing the parameter update too hard and avoids changing the policy too much.
To be more precise, updating $\theta_t$ to $\theta_{t+1}$ requires multiple update steps on the same batch of data. The algorithm initializes $\theta = \theta_t$, then repeatedly applies a gradient-based optimizer (such as Adam) to update $\theta$ until the surrogate advantage has stabilized. It then sets $\theta_{t+1} := \theta$, and repeats.
During this inner loop, the first update to $\theta$ does not hit the $1-\epsilon,\, 1+\epsilon$ bounds, but as $\theta$ moves further and further away from $\theta_t$, it eventually starts hitting them. For each such bound hit, the corresponding gradient becomes zero, and thus PPO avoids updating $\theta$ too far away from $\theta_t$.
This is important, because the surrogate loss assumes that the state-action pair $(s,a)$ is sampled from what the agent would see if it runs the policy $\pi_{\theta_t}$, whereas policy gradient methods should be on-policy. So, as $\theta$ changes, the surrogate loss becomes more and more off-policy. This is why keeping $\theta$ proximal to $\theta_t$ is necessary.
If there is a reference policy $\pi_{\text{ref}}$ that the trained policy should not diverge too far from, then an additional KL divergence penalty can be added:
$$-\beta\, \mathbb{E}_{s,a\sim\pi_{\theta_t}}\left[\log\left(\frac{\pi_\theta(a\mid s)}{\pi_{\text{ref}}(a\mid s)}\right)\right]$$
where $\beta$ adjusts the strength of the penalty. This has been used in training reasoning language models with reinforcement learning from human feedback. The KL divergence penalty term can be estimated with lower variance using the equivalent form (see f-divergence for details):
$$-\beta\, \mathbb{E}_{s,a\sim\pi_{\theta_t}}\left[\log\left(\frac{\pi_\theta(a\mid s)}{\pi_{\text{ref}}(a\mid s)}\right) + \frac{\pi_{\text{ref}}(a\mid s)}{\pi_\theta(a\mid s)} - 1\right]$$
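The two penalty forms agree in expectation because $\mathbb{E}_{a\sim\pi_\theta}[\pi_{\text{ref}}(a\mid s)/\pi_\theta(a\mid s)] = 1$, so the extra terms contribute $1 - 1 = 0$. A quick check on two small discrete distributions (illustrative only):

```python
import numpy as np

# The penalty term and its lower-variance form have the same expectation:
# E_{a~p}[log(p/q) + q/p - 1] = KL(p||q), since E_{a~p}[q/p] = 1.
p = np.array([0.6, 0.3, 0.1])   # trained policy pi_theta(.|s)
q = np.array([0.4, 0.4, 0.2])   # reference policy pi_ref(.|s)
kl = np.sum(p * np.log(p / q))                   # exact KL divergence
k3 = np.sum(p * (np.log(p / q) + q / p - 1))     # expectation of estimator
print(kl, k3)
```

The added terms are a control variate: they are zero in expectation but correlated with the log-ratio, which lowers the variance of per-sample estimates.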
=== Group Relative Policy Optimization (GRPO) ===
Group Relative Policy Optimization (GRPO) is a minor variant of PPO that omits the value function estimator $V$. Instead, for each state $s$, it samples multiple actions $a_1,\dots,a_G$ from the policy $\pi_{\theta_t}$, then calculates the group-relative advantage
$$A^{\pi_{\theta_t}}(s, a_j) = \frac{r(s, a_j) - \mu}{\sigma}$$
where $\mu, \sigma$ are the mean and standard deviation of $r(s, a_1),\dots,r(s, a_G)$. That is, it is the standard score of the rewards.
Then, it maximizes the PPO objective, averaged over all actions:
$$\max_\theta \frac{1}{G}\sum_{i=1}^{G}\mathbb{E}_{(s,a_1,\dots,a_G)\sim\pi_{\theta_t}}\left[\begin{cases}\min\left(\frac{\pi_\theta(a_i\mid s)}{\pi_{\theta_t}(a_i\mid s)},\, 1+\epsilon\right)A^{\pi_{\theta_t}}(s,a_i) & \text{if } A^{\pi_{\theta_t}}(s,a_i) > 0\\ \max\left(\frac{\pi_\theta(a_i\mid s)}{\pi_{\theta_t}(a_i\mid s)},\, 1-\epsilon\right)A^{\pi_{\theta_t}}(s,a_i) & \text{if } A^{\pi_{\theta_t}}(s,a_i) < 0\end{cases}\right]$$
Intuitively, each policy update step in GRPO makes the policy more likely to respond to each state with an action that performed relatively better than other actions tried at that state, and less likely to respond with one that performed relatively worse.
As before, the KL penalty term can be applied to encourage the trained policy to stay close to a reference policy. GRPO was first proposed in the context of training reasoning language models by researchers at DeepSeek.
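The group-relative advantage is just the standard score of the sampled rewards at one state, which can be sketched directly (reward values are illustrative):

```python
import numpy as np

# Group-relative advantages in GRPO: standardize the G sampled rewards
# for a single state so they have mean 0 and standard deviation 1.
rewards = np.array([1.0, 0.0, 0.5, 1.0])     # r(s, a_1..a_G) for G = 4
mu, sigma = rewards.mean(), rewards.std()
advantages = (rewards - mu) / sigma          # standard score per action
print(advantages)
```

Actions that scored above the group mean get positive advantages and are pushed up by the clipped objective; below-average actions get negative advantages and are pushed down, with no learned value function required.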
== See also ==
Reinforcement learning
Deep reinforcement learning
Actor-critic method
== References ==
Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2 ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6.
Bertsekas, Dimitri P. (2019). Reinforcement learning and optimal control (2 ed.). Belmont, Massachusetts: Athena Scientific. ISBN 978-1-886529-39-7.
Grossi, Csaba (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (1 ed.). Cham: Springer International Publishing. ISBN 978-3-031-00423-0.
Mohamed, Shakir; Rosca, Mihaela; Figurnov, Michael; Mnih, Andriy (2020). "Monte Carlo Gradient Estimation in Machine Learning". Journal of Machine Learning Research. 21 (132): 1–62. arXiv:1906.10652. ISSN 1533-7928.
== External links ==
Weng, Lilian (2018-04-08). "Policy Gradient Algorithms". lilianweng.github.io. Retrieved 2025-01-25.
"Vanilla Policy Gradient — Spinning Up documentation". spinningup.openai.com. Retrieved 2025-01-25. | Wikipedia/Policy_gradient_method |
The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.
The method approximates the optimal importance sampling estimator by repeating two phases:
Draw a sample from a probability distribution.
Minimize the cross-entropy between this distribution and a target distribution to produce a better sample in the next iteration.
Reuven Rubinstein developed the method in the context of rare-event simulation, where tiny probabilities must be estimated, for example in network reliability analysis, queueing models, or performance analysis of telecommunication systems. The method has also been applied to the traveling salesman, quadratic assignment, DNA sequence alignment, max-cut and buffer allocation problems.
== Estimation via importance sampling ==
Consider the general problem of estimating the quantity
$$\ell = \mathbb{E}_{\mathbf{u}}[H(\mathbf{X})] = \int H(\mathbf{x})\, f(\mathbf{x};\mathbf{u})\,\mathrm{d}\mathbf{x},$$
where $H$ is some performance function and $f(\mathbf{x};\mathbf{u})$ is a member of some parametric family of distributions. Using importance sampling, this quantity can be estimated as
$$\hat{\ell} = \frac{1}{N}\sum_{i=1}^{N} H(\mathbf{X}_i)\,\frac{f(\mathbf{X}_i;\mathbf{u})}{g(\mathbf{X}_i)},$$
where $\mathbf{X}_1,\dots,\mathbf{X}_N$ is a random sample from $g$. For positive $H$, the theoretically optimal importance sampling density (PDF) is given by
$$g^*(\mathbf{x}) = H(\mathbf{x})\,f(\mathbf{x};\mathbf{u})/\ell.$$
This, however, depends on the unknown $\ell$. The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF $g^*$.
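The importance-sampling estimator above can be illustrated on a classic rare-event example: estimating $P(X > 4)$ for $X\sim N(0,1)$ by sampling from $g = N(4,1)$ instead (a sketch; this particular choice of $g$ is ours, not prescribed by the article):

```python
import math
import numpy as np

# Importance-sampling estimate of the rare-event probability P(X > 4)
# for X ~ N(0,1), drawing samples from g = N(4,1), where the event is
# no longer rare.
rng = np.random.default_rng(0)
N = 100_000
x = rng.normal(4.0, 1.0, size=N)                # sample from g
h = (x > 4.0).astype(float)                     # H(x) = indicator of the event
# Likelihood ratio f(x)/g(x) for two unit-variance Gaussians:
w = np.exp(0.5 * (x - 4.0) ** 2 - 0.5 * x**2)
est = np.mean(h * w)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(X > 4), about 3.2e-5
print(est, exact)
```

A naive Monte Carlo estimate from $N(0,1)$ would see the event only a handful of times in 100,000 draws; the shifted sampler sees it in about half of them.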
== Generic CE algorithm ==
1. Choose an initial parameter vector $\mathbf{v}^{(0)}$; set $t = 1$.
2. Generate a random sample $\mathbf{X}_1,\dots,\mathbf{X}_N$ from $f(\cdot;\mathbf{v}^{(t-1)})$.
3. Solve for $\mathbf{v}^{(t)}$, where
$$\mathbf{v}^{(t)} = \operatorname*{argmax}_{\mathbf{v}} \frac{1}{N}\sum_{i=1}^{N} H(\mathbf{X}_i)\,\frac{f(\mathbf{X}_i;\mathbf{u})}{f(\mathbf{X}_i;\mathbf{v}^{(t-1)})}\,\log f(\mathbf{X}_i;\mathbf{v})$$
4. If convergence is reached then stop; otherwise, increase $t$ by 1 and reiterate from step 2.
In several cases, the solution to step 3 can be found analytically. Situations in which this occurs are:
When $f$ belongs to the natural exponential family.
When $f$ is discrete with finite support.
When $H(\mathbf{X}) = \mathrm{I}_{\{\mathbf{x}\in A\}}$ and $f(\mathbf{X}_i;\mathbf{u}) = f(\mathbf{X}_i;\mathbf{v}^{(t-1)})$, in which case $\mathbf{v}^{(t)}$ corresponds to the maximum likelihood estimator based on those $\mathbf{X}_k \in A$.
== Continuous optimization—example ==
The same CE algorithm can be used for optimization, rather than estimation.
Suppose the problem is to maximize some function $S$, for example,
$$S(x) = e^{-(x-2)^2} + 0.8\, e^{-(x+2)^2}.$$
To apply CE, one first considers the associated stochastic problem of estimating $\mathbb{P}_{\boldsymbol{\theta}}(S(X)\geq\gamma)$ for a given level $\gamma$ and parametric family $\{f(\cdot;{\boldsymbol{\theta}})\}$, for example the 1-dimensional Gaussian distribution, parameterized by its mean $\mu_t$ and variance $\sigma_t^2$ (so ${\boldsymbol{\theta}} = (\mu, \sigma^2)$ here).
Hence, for a given $\gamma$, the goal is to find ${\boldsymbol{\theta}}$ so that $D_{\mathrm{KL}}(\mathrm{I}_{\{S(x)\geq\gamma\}}\,\|\,f_{\boldsymbol{\theta}})$ is minimized. This is done by solving the sample version (stochastic counterpart) of the KL divergence minimization problem, as in step 3 above.
It turns out that the parameters minimizing the stochastic counterpart for this choice of target distribution and parametric family are the sample mean and sample variance of the elite samples, i.e. those samples whose objective function value is $\geq\gamma$.
The worst of the elite samples is then used as the level parameter for the next iteration.
This yields the following randomized algorithm that happens to coincide with the so-called Estimation of Multivariate Normal Algorithm (EMNA), an estimation of distribution algorithm.
=== Pseudocode ===
// Initialize parameters
μ := −6
σ2 := 100
t := 0
maxits := 100
N := 100
Ne := 10
// While maxits not exceeded and not converged
while t < maxits and σ2 > ε do
// Obtain N samples from current sampling distribution
X := SampleGaussian(μ, σ2, N)
// Evaluate objective function at sampled points
S := exp(−(X − 2) ^ 2) + 0.8 exp(−(X + 2) ^ 2)
// Sort X by objective function values in descending order
X := sort(X, S)
// Update parameters of sampling distribution via elite samples
μ := mean(X(1:Ne))
σ2 := var(X(1:Ne))
t := t + 1
// Return mean of final sampling distribution as solution
return μ
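The pseudocode above translates almost line for line into Python. The sketch below is illustrative: the convergence threshold ε is a value we supply, and NumPy's Gaussian sampler takes a standard deviation rather than a variance:

```python
import numpy as np

# Runnable version of the CE/EMNA pseudocode above; constants mirror
# the pseudocode, eps (the convergence threshold) is our choice.
rng = np.random.default_rng(0)
mu, sigma2 = -6.0, 100.0
maxits, N, Ne, eps = 100, 100, 10, 1e-3

def S(x):
    return np.exp(-(x - 2.0) ** 2) + 0.8 * np.exp(-(x + 2.0) ** 2)

t = 0
while t < maxits and sigma2 > eps:
    X = rng.normal(mu, np.sqrt(sigma2), size=N)   # sample from N(mu, sigma2)
    elite = X[np.argsort(S(X))[::-1][:Ne]]        # Ne best-scoring samples
    mu, sigma2 = elite.mean(), elite.var()        # refit sampling distribution
    t += 1
print(mu)   # final mean, near one of the two maxima (typically x = 2)
```

The sampling distribution quickly collapses onto a mode of $S$; since points near $x = 2$ score higher than points near $x = -2$, the elite set usually ends up concentrated at the global maximum.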
== Related methods ==
Simulated annealing
Genetic algorithms
Harmony search
Estimation of distribution algorithm
Tabu search
Natural Evolution Strategy
Ant colony optimization algorithms
== See also ==
Cross entropy
Kullback–Leibler divergence
Randomized algorithm
Importance sampling
== Journal papers ==
De Boer, P.-T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A Tutorial on the Cross-Entropy Method. Annals of Operations Research, 134 (1), 19–67.
Rubinstein, R.Y. (1997). Optimization of Computer Simulation Models with Rare Events, European Journal of Operational Research, 99, 89–112.
== Software implementations ==
CEopt Matlab package
CEoptim R package
Novacta.Analytics .NET library
== References == | Wikipedia/Cross-entropy_method |
The transformer is a deep learning architecture based on the multi-head attention mechanism, in which text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets.
The modern version of the transformer was proposed in the 2017 paper "Attention Is All You Need" by researchers at Google. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. Transformers have also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers).
== History ==
=== Predecessors ===
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer.
=== Attention with seq2seq ===
The idea of encoder-decoder sequence transduction had been developed in the early 2010s; two concurrently published papers from 2014 are commonly cited as the origin of seq2seq.
A 380M-parameter model for machine translation used two long short-term memories (LSTMs). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.
These early seq2seq models had no attention mechanism, and the state vector was accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector cannot contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".
The relative performances of global (that of RNNsearch) and local (sliding window) attention model architectures were compared for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.
=== Parallelizing attention ===
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need". That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.
=== AI boom era ===
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models.
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal learning. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024) use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
== Training ==
=== Methods for stabilizing training ===
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to its maximal value over the first part of training (usually recommended to be 2% of the total number of training steps), before decaying again.
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.
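The warmup-then-decay schedule from the original paper combines linear warmup with inverse-square-root decay; a sketch (with the paper's d_model = 512 and warmup of 4000 steps):

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning-rate schedule from the original Transformer paper:
    linear warmup for `warmup_steps` steps, then inverse-square-root decay."""
    step = max(step, 1)  # avoid step = 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The rate rises linearly until `warmup_steps`, peaks there, and then decays as the inverse square root of the step number.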
=== Pretrain-finetune ===
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing
The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are:
restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable", because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
=== Tasks ===
In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically sum of log-perplexities for the masked-out tokens:
{\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}
and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
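As a small worked example, the loss above can be computed directly from the model's predicted probabilities for the masked-out tokens (the probabilities here are made up for illustration):

```python
import math

# Hypothetical model outputs: predicted probability of each masked-out
# token conditional on its context (illustrative numbers only).
masked_token_probs = [0.9, 0.5, 0.25]

# Loss = negative sum of log-probabilities of the masked-out tokens.
loss = -sum(math.log(p) for p in masked_token_probs)
print(round(loss, 4))  # → 2.1848
```

Confident predictions (probabilities near 1) contribute little to the loss, while poor predictions dominate it.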
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
== Architecture ==
All transformers have the same primary components:
Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.
The following description exactly follows the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as xW.
=== Tokenization ===
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are converted back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size n_vocabulary. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
=== Embedding ===
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix M. For example, if the input token is 3, then the one-hot representation is [0, 0, 0, 1, 0, 0, …], and its embedding vector is
{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}
The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
The number of dimensions in an embedding vector is called hidden size or embedding size and written as d_emb. This size is written as d_model in the original Transformer paper.
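A small sketch showing that the one-hot product described above is the same thing as a row lookup (the embedding matrix here is random, with toy sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, d_emb = 8, 4
M = rng.normal(size=(n_vocab, d_emb))  # toy embedding matrix

token = 3
one_hot = np.zeros(n_vocab)
one_hot[token] = 1.0

# Multiplying the one-hot row vector by M selects row `token` of M,
# so an embedding lookup is just table indexing.
assert np.allclose(one_hot @ M, M[token])
```

In practice implementations index the table directly rather than materializing the one-hot vector.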
=== Un-embedding ===
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer:
{\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}
The matrix W has shape (d_emb, n_vocabulary). The embedding matrix M and the un-embedding matrix W are sometimes required to be transposes of each other, a practice called weight tying.
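A sketch of the linear-softmax un-embedding with weight tying (random toy matrices, shape-checking only):

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
n_vocab, d_emb = 8, 4
M = rng.normal(size=(n_vocab, d_emb))  # embedding matrix, shape (n_vocab, d_emb)
W = M.T                                # weight tying: un-embedding matrix is Mᵀ
b = np.zeros(n_vocab)

x = rng.normal(size=d_emb)             # a final hidden-state vector
p = softmax(x @ W + b)                 # probability distribution over the vocabulary
assert p.shape == (n_vocab,)
```

The output is a valid probability distribution: nonnegative entries summing to one.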
=== Positional encoding ===
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type f : ℝ → ℝ^d, where d is a positive even integer. The full positional encoding defined in the original paper is:
{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}
where
{\displaystyle \theta ={\frac {t}{r^{k}}},\quad r=N^{2/d}}
Here, N is a free parameter that should be significantly larger than the biggest t that would be input into the positional encoding function. The original paper uses N = 10000.
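A minimal NumPy sketch of this encoding, with a toy dimension d = 8 (the implementation layout, interleaving sines and cosines, follows the pairing above):

```python
import numpy as np

def positional_encoding(t, d, N=10000):
    """Sinusoidal positional encoding: for each pair index k,
    (f(t)_{2k}, f(t)_{2k+1}) = (sin(t / r**k), cos(t / r**k)), r = N**(2/d)."""
    r = N ** (2.0 / d)
    k = np.arange(d // 2)
    theta = t / r ** k
    enc = np.empty(d)
    enc[0::2] = np.sin(theta)  # even slots: sines
    enc[1::2] = np.cos(theta)  # odd slots: cosines
    return enc

f5 = positional_encoding(5, 8)
# Each (sin, cos) pair lies on the unit circle.
assert np.allclose((f5.reshape(-1, 2) ** 2).sum(axis=1), 1.0)
```

At position t = 0 the encoding is [0, 1, 0, 1, …], since sin(0) = 0 and cos(0) = 1.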
The function is in a simpler form when written as a complex function of type f : ℝ → ℂ^{d/2}:
{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}
where r = N^{2/d}.
The main reason for using this positional encoding function is that, with it, shifts are linear transformations:
{\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}
where Δt ∈ ℝ is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n steps ahead or n steps behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as a linear transformation:
{\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}
for any constants c_j. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the authors' words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
=== Encoder-decoder (overview) ===
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
=== Feedforward network ===
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:
{\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}
where W^(1) and W^(2) are weight matrices, b^(1) and b^(2) are bias vectors, and φ is its activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: d_ffn = 4 d_emb.
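A shape-checking sketch of this block with random weights and toy sizes (d_ffn = 4·d_emb, as noted above):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """Two-layer feedforward block with ReLU, as in the original Transformer."""
    relu = lambda z: np.maximum(z, 0.0)
    return relu(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
d_emb, d_ffn = 8, 32                  # intermediate size is 4x the embedding size
W1 = rng.normal(size=(d_emb, d_ffn)); b1 = np.zeros(d_ffn)
W2 = rng.normal(size=(d_ffn, d_emb)); b2 = np.zeros(d_emb)

x = rng.normal(size=d_emb)
assert ffn(x, W1, b1, W2, b2).shape == (d_emb,)
```

The block maps a d_emb-dimensional row vector back to d_emb dimensions, so it can be applied to each token position independently.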
=== Scaled dot-product attention ===
==== Attention head ====
The attention mechanism used in the Transformer architecture is the scaled dot-product attention unit. For each unit, the transformer model learns three weight matrices: the query weights W^Q, the key weights W^K, and the value weights W^V.
The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length ℓ_seq,query, and each entry is a vector of dimension d_emb,query. Similarly for the key and value sequences.
Each vector x_{i,query} in the query sequence is multiplied by the matrix W^Q to produce a query vector q_i = x_{i,query} W^Q. The matrix of all query vectors is the query matrix:
{\displaystyle Q=X_{\text{query}}W^{Q}}
Similarly, we construct the key matrix K = X_key W^K and the value matrix V = X_value W^V.
It is usually the case that W^Q, W^K, and W^V are all square matrices, meaning d_emb,query = d_query, etc.
Attention weights are calculated using the query and key vectors: the attention weight a_ij from token i to token j is the dot product between q_i and k_j. The attention weights are divided by the square root of the dimension of the key vectors, √d_k, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that W^Q and W^K are different matrices allows attention to be non-symmetric: if token i attends to token j (i.e. q_i · k_j is large), this does not necessarily mean that token j will attend to token i (i.e. q_j · k_i could be small). The output of the attention unit for token i is the weighted sum of the value vectors of all tokens, weighted by a_ij, the attention from token i to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training since highly optimized matrix-multiplication routines compute it quickly. The matrices Q, K and V are defined as the matrices whose i-th rows are the vectors q_i, k_i, and v_i respectively. Then we can represent the attention as
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
where the softmax is applied over each of the rows of the matrix.
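The formula above can be sketched directly in NumPy (random toy matrices, for shape and sanity checking only):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q Kᵀ / sqrt(d_k)) V,
    with the softmax applied over each row of the score matrix."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
seq, d_k, d_v = 5, 8, 8
Q = rng.normal(size=(seq, d_k))
K = rng.normal(size=(seq, d_k))
V = rng.normal(size=(seq, d_v))
out = attention(Q, K, V)
assert out.shape == (seq, d_v)
```

Because each row of weights is a convex combination, every output row lies inside the componentwise range of the value vectors.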
The number of dimensions in a query vector is the query size d_query, and similarly for the key size d_key and value size d_value. The output dimension of an attention head is its head dimension d_head. The attention mechanism requires the following three equalities to hold:
{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}
but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then X_query = X_key = X_value. If the attention head is used in a cross-attention fashion, then usually X_query ≠ X_key = X_value. It is theoretically possible for all three to be different, but that is rarely the case in practice.
==== Multiheaded attention ====
One set of (W^Q, W^K, W^V) matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, W^Q and W^K, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix W^V, in combination with the corresponding part of the output projection matrix W^O, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by i; then we have
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V}))W^{O}}
where the matrix X is the concatenation of word embeddings, the matrices W_i^Q, W_i^K, W_i^V are "projection matrices" owned by the individual attention head i, and W^O is a final projection matrix owned by the whole multi-headed attention module.
It is theoretically possible for each attention head to have a different head dimension d_head, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions:
{\displaystyle d_{\text{emb}}=768,n_{\text{head}}=12,d_{\text{head}}=64}
Since 12 × 64 = 768, its output projection matrix W^O ∈ ℝ^{(12×64)×768} is a square matrix.
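A sketch of the multi-head computation with random toy weights, in the self-attention case (so the query, key, and value inputs are all the same matrix X); the sizes are chosen so that n_heads · d_head = d_emb, as in GPT-2:

```python
import numpy as np

def softmax_rows(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multihead_attention(X, Wq, Wk, Wv, Wo):
    """Run each head's scaled dot-product attention, concatenate the
    head outputs, then apply the output projection Wo.
    Wq, Wk, Wv have shape (n_heads, d_emb, d_head)."""
    heads = []
    for WQ, WK, WV in zip(Wq, Wk, Wv):
        Q, K, V = X @ WQ, X @ WK, X @ WV
        A = softmax_rows(Q @ K.T / np.sqrt(K.shape[-1]))
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
seq, d_emb, n_heads, d_head = 5, 12, 3, 4     # toy sizes; n_heads * d_head = d_emb
Wq = rng.normal(size=(n_heads, d_emb, d_head))
Wk = rng.normal(size=(n_heads, d_emb, d_head))
Wv = rng.normal(size=(n_heads, d_emb, d_head))
Wo = rng.normal(size=(n_heads * d_head, d_emb))
X = rng.normal(size=(seq, d_emb))
assert multihead_attention(X, Wq, Wk, Wv, Wo).shape == (seq, d_emb)
```

The loop over heads is written sequentially for clarity; in practice all heads are computed in one batched matrix multiplication.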
==== Masked attention ====
The Transformer architecture is constructed to calculate output tokens iteratively. Assuming t = 0 refers to the calculation of the first output token i = 0, then for every step t > 0, the output token i = 0 shall remain constant. This ensures properties of the model similar to autoregressive models. Therefore, at every time step t, the calculation for all outputs i should not have access to tokens at positions j ≥ i (as is naturally the case for time step t = i, when tokens j > t are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix M that is −∞ at entries where the attention link must be cut, and 0 at other places:
{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
The following matrix is commonly used in decoder self-attention modules, called "causal masking":
{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}}
In words, it means that each token can pay attention to itself, and every token before it, but not to any token after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of the mask matrix, XLNet considers all masks of the form P M_causal P^{−1}, where P is a random permutation matrix.
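A small sketch of the causal mask and its effect on the softmax weights (toy sequence length of 4, with uniform pre-mask scores so the surviving weights are easy to read off):

```python
import numpy as np

def causal_mask(n):
    """Mask with 0 on and below the diagonal and -inf strictly above it,
    so each token attends only to itself and to earlier tokens."""
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask

M = causal_mask(4)
# Adding M before the softmax zeroes the attention weights on future tokens.
scores = np.ones((4, 4)) + M
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
assert np.allclose(weights[0], [1, 0, 0, 0])       # first token sees only itself
assert np.allclose(weights[3], [0.25, 0.25, 0.25, 0.25])  # last token sees all
```

Because exp(−∞) = 0, the masked positions receive exactly zero weight after normalization.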
=== Encoder ===
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes an input as a sequence of input vectors, applies the self-attention mechanism, to produce an intermediate sequence of vectors, then applies the feed-forward layer for each vector individually. Schematically, we have:
{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}}
where FFN stands for "feed-forward network". We can more succinctly write it as
{\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))}
with the implicit convention that the FFN is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
=== Decoder ===
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:
{\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}}
where {\displaystyle H^{E}} is the matrix with rows being the output vectors from the encoder.
The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. One of the tokens is then sampled according to this probability distribution, and the decoder can be run again to produce the next token, and so on, autoregressively generating the output text.
=== Adapted architectures ===
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.
== Full transformer architecture ==
=== Sublayers ===
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which while conceptually unnecessary, are necessary for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is
{\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))}
where {\displaystyle \mathrm {Sublayer} (x)} is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is
{\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}
The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up, leading to faster convergence.
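The two conventions can be sketched directly from the formulas above (a minimal numpy sketch; the `layer_norm` here omits the learned gain and bias for brevity, which is a simplifying assumption):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize a single vector to zero mean, unit variance
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def post_ln_sublayer(x, sublayer):
    # post-LN: LayerNorm(x + Sublayer(x))
    return layer_norm(x + sublayer(x))

def pre_ln_sublayer(x, sublayer):
    # pre-LN: x + Sublayer(LayerNorm(x))
    return x + sublayer(layer_norm(x))

x = np.array([1.0, 2.0, 4.0])
f = lambda v: 0.5 * v                 # stand-in sublayer
y_post = post_ln_sublayer(x, f)       # always normalized output
y_pre = pre_ln_sublayer(x, f)         # keeps an un-normalized residual path
```

Note that pre-LN leaves the residual path un-normalized, which is often credited for its easier training.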
=== Pseudocode ===
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from
input: Encoder input t_e
       Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size × length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)

for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)

for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]

    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]

    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]

for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)

for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)

for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]

    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

for each t in 1:length(z_d) do
    z_d[t] ← decoder.final_layer_norm(z_d[t])

output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
=== Terminology ===
The Transformer architecture, being modular, allows variations. Several common variations are described here.
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer and then taking just the encoder.
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention, and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form (Figure 3):
{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}}
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and benchmarked in comparisons.
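The prefixLM mask can be built directly from its block structure: every token attends to the whole prefix, and attention is causal elsewhere (a numpy sketch using the 0/−∞ additive-mask convention):

```python
import numpy as np

def prefix_lm_mask(n, p):
    # n tokens total, of which the first p form the prefix.
    # M[i, j] = 0 if token i may attend to token j, -inf otherwise:
    # all-to-all over the prefix columns, causal for the rest.
    M = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if j < p or j <= i:
                M[i, j] = 0.0
    return M

M = prefix_lm_mask(5, 2)   # 5 tokens, prefix length 2
```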
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.
== Subsequent work ==
=== Alternative activation functions ===
The original transformer uses the ReLU activation function. Other activation functions have since been developed. The Llama series and PaLM used SwiGLU; both GPT-1 and BERT used GELU.
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.
=== Alternative normalizations ===
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, or FixNorm.
=== Alternative positional encodings ===
Transformers may use other positional encoding methods than sinusoidal.
The original Transformer paper reported using a learned positional encoding, but finding it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
==== RoPE ====
RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors {\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}. Now pick some angle {\displaystyle \theta }. Then the RoPE encoding is
{\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}
Equivalently, if we write the 2-dimensional vectors as complex numbers {\displaystyle z_{m}:=x_{m}^{(1)}+ix_{m}^{(2)}}, then the RoPE encoding is just multiplication by a complex phase:
{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}
For a list of {\displaystyle 2n}-dimensional vectors, a RoPE encoder is defined by a sequence of angles {\displaystyle \theta ^{(1)},...,\theta ^{(n)}}. Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:
{\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}
for any integer {\displaystyle k}.
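The rotation formula and the relative-position property can be checked numerically (a minimal numpy sketch; the particular angles and vectors are illustrative assumptions):

```python
import numpy as np

def rope(x, m, thetas):
    # x: vector of even dimension 2n; rotate each coordinate pair
    # (x[2i], x[2i+1]) by angle m * thetas[i]
    out = np.empty_like(x)
    for i, th in enumerate(thetas):
        c, s = np.cos(m * th), np.sin(m * th)
        a, b = x[2 * i], x[2 * i + 1]
        out[2 * i] = a * c - b * s
        out[2 * i + 1] = b * c + a * s
    return out

thetas = [1.0, 0.1]                      # illustrative angles theta^(1), theta^(2)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 0.0])
# the dot product depends only on relative position:
# shifting both positions by k = 5 changes nothing
d1 = rope(x, 3, thetas) @ rope(y, 7, thetas)
d2 = rope(x, 3 + 5, thetas) @ rope(y, 7 + 5, thetas)
print(np.isclose(d1, d2))               # True
```

Because each step is a rotation, RoPE also preserves the norm of every vector it encodes.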
==== ALiBi ====
ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}
Here, {\displaystyle s} is a real number (a "scalar"), and {\displaystyle B} is the linear bias matrix defined by
{\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}
in other words, {\displaystyle B_{i,j}=j-i}. The idea is that the linear bias matrix is a softened mask. Just as {\displaystyle 0} represents full attention paid and {\displaystyle -\infty } represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
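The linear bias matrix follows directly from the definition B_{i,j} = j − i (a minimal numpy sketch):

```python
import numpy as np

def alibi_bias(n):
    # B[i, j] = j - i: positive to the right of the diagonal,
    # negative to the left, zero on the diagonal
    idx = np.arange(n)
    return idx[None, :] - idx[:, None]

B = alibi_bias(4)
# In ALiBi, this bias, scaled by a per-head slope s, is added to
# QK^T / sqrt(d_k) before the softmax.
```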
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
==== Relative Position Encodings ====
Relative position encodings are similar to ALiBi, but more generic:
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}
where {\displaystyle B} is a Toeplitz matrix, that is, {\displaystyle B_{i,j}=B_{i',j'}} whenever {\displaystyle i-j=i'-j'}. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
=== Efficient implementation ===
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
==== KV caching ====
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
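A sketch of KV caching for a single attention head during autoregressive decoding (numpy; the growing-list cache layout and single-head setting are illustrative simplifications):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class KVCache:
    # stores key/value vectors of already-processed tokens so that
    # they are not recomputed when each new token is generated
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        # append this token's key/value, then attend over the whole cache
        self.keys.append(k)
        self.values.append(v)
        K = np.stack(self.keys)
        V = np.stack(self.values)
        w = softmax(q @ K.T / np.sqrt(len(q)))
        return w @ V

rng = np.random.default_rng(0)
cache = KVCache()
# five decoding steps; each step computes q, k, v only for the new token
outs = [cache.step(*rng.standard_normal((3, 4))) for _ in range(5)]
```

Without the cache, step t would have to recompute keys and values for all t previous tokens.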
==== FlashAttention ====
FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
==== Multi-Query Attention ====
Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally,
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}
with Multi-Query Attention, there is just one {\displaystyle W^{K},W^{V}}, thus:
{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}}
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.
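The difference between the variants is only the number of distinct key/value projections; a sketch that counts key/value parameters (the dimensions here are illustrative, not from any particular model):

```python
import numpy as np

def make_projections(d_model, d_head, n_heads, n_kv_groups):
    # n_kv_groups = n_heads -> standard multiheaded attention (MHA)
    # n_kv_groups = 1       -> multi-query attention (one shared W^K, W^V)
    # in between            -> grouped-query attention (GQA)
    rng = np.random.default_rng(0)
    W_q = [rng.standard_normal((d_model, d_head)) for _ in range(n_heads)]
    W_k = [rng.standard_normal((d_model, d_head)) for _ in range(n_kv_groups)]
    W_v = [rng.standard_normal((d_model, d_head)) for _ in range(n_kv_groups)]
    return W_q, W_k, W_v

def kv_params(d_model, d_head, n_heads, n_kv_groups):
    # total number of key/value projection parameters
    _, W_k, W_v = make_projections(d_model, d_head, n_heads, n_kv_groups)
    return sum(w.size for w in W_k + W_v)

# with 8 heads, MQA uses 8x fewer key/value projection parameters than MHA,
# and correspondingly shrinks the KV cache at inference time
mha = kv_params(64, 8, 8, 8)   # 2 * 8 * 64 * 8 = 8192
mqa = kv_params(64, 8, 8, 1)   # 2 * 1 * 64 * 8 = 1024
```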
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
==== Speculative decoding ====
Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token {\displaystyle x_{1},x_{2},...,x_{512}}, taking time {\displaystyle 512T_{\text{GPT-3}}}. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each
{\displaystyle x_{t}} is indeed the token with the largest log-likelihood in the {\displaystyle t}-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens:
{\displaystyle {\tilde {x}}_{1},{\tilde {x}}_{2},{\tilde {x}}_{3},{\tilde {x}}_{4}}. This only takes {\displaystyle 4T_{\text{GPT-3-small}}}. These tokens are then run through the larger GPT-3 in one go. Suppose that
{\displaystyle {\tilde {x}}_{1}} and {\displaystyle {\tilde {x}}_{2}} are verified by GPT-3 as what it would have picked, then those are kept, but {\displaystyle {\tilde {x}}_{3}} is not, so {\displaystyle {\tilde {x}}_{3},{\tilde {x}}_{4}} are discarded, and GPT-3 is run on those. This would take {\displaystyle 4T_{\text{GPT-3-small}}+3T_{\text{GPT-3}}}, which might be shorter than {\displaystyle 4T_{\text{GPT-3}}}.
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.
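Greedy speculative decoding can be sketched with two toy stand-in "models" (everything here, including the token-generating lambdas, is illustrative rather than an actual LLM API; the verification loop stands in for what a real implementation does in one parallel forward pass):

```python
def speculative_decode_step(prefix, draft_model, target_model, n_draft=4):
    # 1. the draft model proposes n_draft tokens autoregressively (cheap)
    draft = []
    ctx = list(prefix)
    for _ in range(n_draft):
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)
    # 2. the target model verifies the proposals: keep the longest prefix of
    #    draft tokens matching its own greedy choices; on the first mismatch,
    #    substitute the target's token and discard the rest
    accepted = []
    ctx = list(prefix)
    for t in draft:
        best = target_model(ctx)       # greedy choice of the target model
        if t != best:
            accepted.append(best)      # target's own token replaces the miss
            break
        accepted.append(t)
        ctx.append(t)
    return accepted

# toy "models": next token = last token + 1, but the target wraps to 0 at 3
draft = lambda ctx: ctx[-1] + 1
target = lambda ctx: ctx[-1] + 1 if ctx[-1] < 3 else 0
print(speculative_decode_step([1], draft, target))   # [2, 3, 0]
```

Here the draft tokens 2 and 3 are accepted, the mismatch at the third position yields the target's own token 0, and the fourth draft token is discarded.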
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.
=== Sub-quadratic transformers ===
Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
==== Alternative attention graphs ====
The standard attention graph is either all-to-all or causal, both of which scale as {\displaystyle O(N^{2})} where {\displaystyle N} is the number of tokens in a sequence.
Reformer (2020) reduces the computational load from
{\displaystyle O(N^{2})} to {\displaystyle O(N\ln N)} by using locality-sensitive hashing and reversible layers.
Sparse attention uses attention graphs that grow more slowly than {\displaystyle O(N^{2})}. For example, BigBird (2020) uses random small-world networks which grow as {\displaystyle O(N)}.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
==== Random Feature Attention ====
Random Feature Attention (2021) uses Fourier random features:
{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}
where {\displaystyle w_{1},...,w_{D}} are independent samples from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}. This choice of parameters satisfies
{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}
, or
{\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }
Consequently, the one-headed attention, with one query, can be written as
{\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}
where {\displaystyle \sigma =d_{K}^{1/4}}. Similar formulas hold for multiple queries and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrix
{\displaystyle \varphi (k_{i})v_{i}^{T}}
first, then multiply it with the query. In essence, we have managed to obtain a more precise version of
{\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}
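The random-feature kernel approximation can be checked numerically (a minimal numpy sketch with σ = 1, so the expected inner product is the Gaussian kernel exp(−‖x−y‖²/2); the feature vector here groups cosines before sines, which differs from the displayed ordering but leaves inner products unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, W):
    # random Fourier features:
    # phi(x) = (1/sqrt(D)) [cos<w_i,x> ... ; sin<w_i,x> ...]
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[0])

d, D = 3, 50000
W = rng.standard_normal((D, d))            # w_i sampled i.i.d. from N(0, I)
x = np.array([0.3, -0.2, 0.5])
y = np.array([0.1, 0.4, -0.3])
approx = phi(x, W) @ phi(y, W)             # Monte Carlo kernel estimate
exact = np.exp(-np.sum((x - y) ** 2) / 2)  # Gaussian kernel, sigma = 1
print(abs(approx - exact))                 # small: error shrinks as 1/sqrt(D)
```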
Performer (2022) uses the same Random Feature Attention, but
{\displaystyle w_{1},...,w_{D}} are first independently sampled from the normal distribution {\displaystyle N(0,\sigma ^{2}I)}, then they are orthogonalized by the Gram–Schmidt process.
=== Multimodality ===
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned.
Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.
== Applications ==
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
machine translation
time series prediction
document summarization
document generation
named entity recognition (NER)
writing computer code based on requirements expressed in natural language.
speech-to-text
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
biological sequence analysis
video understanding
protein folding (such as AlphaFold)
evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.
== See also ==
seq2seq – Family of machine learning approaches
Perceiver – Variant of Transformer designed for multimodal data
Vision transformer – Machine learning model for vision processing
Large language model – Type of machine learning model
BERT (language model) – Series of language models developed by Google AI
Generative pre-trained transformer – Type of large language model
T5 (language model) – Series of large language models developed by Google AI
== Notes ==
== References ==
== Further reading == | Wikipedia/Transformer_(machine_learning_model) |
A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images and audio. Convolution-based networks are the de-facto standard in deep learning-based approaches to computer vision and image processing, and have only recently been replaced—in some cases—by newer deep learning architectures such as the transformer.
Vanishing gradients and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by the regularization that comes from using shared weights over fewer connections. For example, for each neuron in the fully-connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, with cascaded convolution (or cross-correlation) kernels, only 25 weights are required for each convolutional layer to process 5 × 5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
Some applications of CNNs include:
image and video recognition,
recommender systems,
image classification,
image segmentation,
medical image analysis,
natural language processing,
brain–computer interfaces, and
financial time series.
CNNs are also known as shift invariant or space invariant artificial neural networks, based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.
Feedforward neural networks are usually fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include: penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.) Robust datasets also increase the probability that CNNs will learn the generalized principles that characterize a given dataset rather than the biases of a poorly-populated set.
Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This simplifies and automates the process, enhancing efficiency and scalability by overcoming human-intervention bottlenecks.
== Architecture ==
A convolutional neural network consists of an input layer, hidden layers and an output layer. In a convolutional neural network, the hidden layers include one or more layers that perform convolutions. Typically this includes a layer that performs a dot product of the convolution kernel with the layer's input matrix. This product is usually the Frobenius inner product, and its activation function is commonly ReLU. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map, which in turn contributes to the input of the next layer. This is followed by other layers such as pooling layers, fully connected layers, and normalization layers.
It is worth noting how closely a convolutional neural network resembles a matched filter.
=== Convolutional layers ===
In a CNN, the input is a tensor with shape:
(number of inputs) × (input height) × (input width) × (input channels)
After passing through a convolutional layer, the image becomes abstracted to a feature map, also called an activation map, with shape:
(number of inputs) × (feature map height) × (feature map width) × (feature map channels).
Convolutional layers convolve the input and pass its result to the next layer. This is similar to the response of a neuron in the visual cortex to a specific stimulus. Each convolutional neuron processes data only for its receptive field.
Although fully connected feedforward neural networks can be used to learn features and classify data, this architecture is generally impractical for larger inputs (e.g., high-resolution images), which would require massive numbers of neurons because each pixel is a relevant input feature. A fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer. Convolution reduces the number of free parameters, allowing the network to be deeper. For example, using a 5 × 5 tiling region, each with the same shared weights, requires only 25 neurons. Using shared weights means there are many fewer parameters, which helps avoid the vanishing gradients and exploding gradients problems seen during backpropagation in earlier neural networks.
To speed processing, standard convolutional layers can be replaced by depthwise separable convolutional layers, which are based on a depthwise convolution followed by a pointwise convolution. The depthwise convolution is a spatial convolution applied independently over each channel of the input tensor, while the pointwise convolution is a standard convolution restricted to the use of 1 × 1 kernels.
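The saving can be seen from a parameter count (a sketch; the channel and kernel sizes below are arbitrary assumptions):

```python
# Parameter counts: standard convolution vs. depthwise separable convolution.
c_in, c_out, k = 32, 64, 3   # input channels, output channels, kernel size (assumed)

standard = k * k * c_in * c_out   # one k x k x c_in kernel per output channel
depthwise = k * k * c_in          # one k x k spatial kernel per input channel
pointwise = 1 * 1 * c_in * c_out  # 1 x 1 convolution mixing channels
separable = depthwise + pointwise

print(standard, separable)  # the separable version uses far fewer parameters
```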
=== Pooling layers ===
Convolutional networks may include local and/or global pooling layers along with traditional convolutional layers. Pooling layers reduce the dimensions of data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Local pooling combines small clusters; tiling sizes such as 2 × 2 are commonly used. Global pooling acts on all the neurons of the feature map. There are two common types of pooling in popular use: max and average. Max pooling uses the maximum value of each local cluster of neurons in the feature map, while average pooling takes the average value.
=== Fully connected layers ===
Fully connected layers connect every neuron in one layer to every neuron in another layer. This is the same structure as a traditional multilayer perceptron (MLP). The flattened feature map passes through one or more fully connected layers to classify the images.
=== Receptive field ===
In neural networks, each neuron receives input from some number of locations in the previous layer. In a convolutional layer, each neuron receives input from only a restricted area of the previous layer called the neuron's receptive field. Typically the area is a square (e.g. 5 by 5 neurons). In a fully connected layer, by contrast, the receptive field is the entire previous layer. Thus, in each successive convolutional layer, each neuron takes input from a larger area of the original input than neurons in previous layers do. This is due to applying the convolution over and over, which takes the value of a pixel into account, as well as its surrounding pixels. When using dilated layers, the number of pixels in the receptive field remains constant, but the field is more sparsely populated as its dimensions grow when combining the effect of several layers.
To manipulate the receptive field size as desired, there are some alternatives to the standard convolutional layer. For example, atrous or dilated convolution expands the receptive field size without increasing the number of parameters by interleaving visible and blind regions. Moreover, a single dilated convolutional layer can comprise filters with multiple dilation ratios, thus having a variable receptive field size.
=== Weights ===
Each neuron in a neural network computes an output value by applying a specific function to the input values received from the receptive field in the previous layer. The function that is applied to the input values is determined by a vector of weights and a bias (typically real numbers). Learning consists of iteratively adjusting these biases and weights.
The vectors of weights and biases are called filters and represent particular features of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons can share the same filter. This reduces the memory footprint because a single bias and a single vector of weights are used across all receptive fields that share that filter, as opposed to each receptive field having its own bias and vector weighting.
=== Deconvolutional ===
A deconvolutional neural network is essentially the reverse of a CNN. It consists of deconvolutional layers and unpooling layers.
A deconvolutional layer is the transpose of a convolutional layer. Specifically, a convolutional layer can be written as a multiplication with a matrix, and a deconvolutional layer is multiplication with the transpose of that matrix.
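As a small illustration of this matrix view (a sketch with a made-up length-4 input and a 2-tap kernel; no padding, stride 1):

```python
import numpy as np

# A 1-D convolutional layer written as a matrix multiplication, and the
# corresponding deconvolutional layer as multiplication by the transpose.
w = np.array([1.0, 2.0])                 # assumed 2-tap kernel
C = np.array([[w[0], w[1], 0,    0],
              [0,    w[0], w[1], 0],
              [0,    0,    w[0], w[1]]]) # 3 x 4 convolution matrix

x = np.array([1.0, 0.0, 0.0, 0.0])
y = C @ x        # forward (convolutional) pass: length 4 -> length 3
x_up = C.T @ y   # deconvolutional layer: transpose maps length 3 -> length 4
print(y, x_up)
```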
An unpooling layer expands the layer. The max-unpooling layer is the simplest, as it simply copies each entry multiple times. For example, a 2-by-2 max-unpooling layer is the map [x] ↦ [x x; x x], copying the single entry x into each cell of a 2 × 2 block.
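A 2-by-2 max-unpooling step can be sketched with np.kron, which copies each entry into a 2 × 2 block:

```python
import numpy as np

# Max-unpooling: each entry of the input is replicated into a 2 x 2 block,
# matching the mapping [x] -> [[x, x], [x, x]] described above.
x = np.array([[1, 2],
              [3, 4]])
unpooled = np.kron(x, np.ones((2, 2), dtype=x.dtype))
print(unpooled)
```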
Deconvolution layers are used in image generators. By default, they create periodic checkerboard artifacts, which can be fixed by upscaling before convolving (upscale-then-convolve).
== History ==
CNNs are often compared to the way the brain achieves vision processing in living organisms.
=== Receptive fields in the visual cortex ===
Work by Hubel and Wiesel in the 1950s and 1960s showed that cat visual cortices contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field. Neighboring cells have similar and overlapping receptive fields. Receptive field size and location varies systematically across the cortex to form a complete map of visual space. The cortex in each hemisphere represents the contralateral visual field.
Their 1968 paper identified two basic visual cell types in the brain:
simple cells, whose output is maximized by straight edges having particular orientations within their receptive field
complex cells, which have larger receptive fields, whose output is insensitive to the exact position of the edges in the field.
Hubel and Wiesel also proposed a cascading model of these two types of cells for use in pattern recognition tasks.
=== Fukushima's analog threshold elements in a vision model ===
In 1969, Kunihiko Fukushima introduced a multilayer visual feature detection network, inspired by the above-mentioned work of Hubel and Wiesel, in which "All the elements in one layer have the same set of interconnecting coefficients; the arrangement of the elements and their interconnections are all homogeneous over a given layer." This is the essential core of a convolutional network, but the weights were not trained. In the same paper, Fukushima also introduced the ReLU (rectified linear unit) activation function.
=== Neocognitron, origin of the trainable CNN architecture ===
The "neocognitron" was introduced by Fukushima in 1980. The neocognitron introduced the two basic types of layers:
"S-layer": a shared-weights receptive-field layer, later known as a convolutional layer, which contains units whose receptive fields cover a patch of the previous layer. A shared-weights receptive-field group (a "plane" in neocognitron terminology) is often called a filter, and a layer typically has several such filters.
"C-layer": a downsampling layer that contain units whose receptive fields cover patches of previous convolutional layers. Such a unit typically computes a weighted average of the activations of the units in its patch, and applies inhibition (divisive normalization) pooled from a somewhat larger patch and across different filters in a layer, and applies a saturating activation function. The patch weights are nonnegative and are not trainable in the original neocognitron. The downsampling and competitive inhibition help to classify features and objects in visual scenes even when the objects are shifted.
Several supervised and unsupervised learning algorithms have been proposed over the decades to train the weights of a neocognitron. Today, however, the CNN architecture is usually trained through backpropagation.
Fukushima's ReLU activation function was not used in his neocognitron since all the weights were nonnegative; lateral inhibition was used instead. The rectifier has become a very popular activation function for CNNs and deep neural networks in general.
=== Convolution in time ===
The term "convolution" first appears in neural networks in a paper by Toshiteru Homma, Les Atlas, and Robert Marks II at the first Conference on Neural Information Processing Systems in 1987. Their paper replaced multiplication with convolution in time, inherently providing shift invariance, motivated by and connecting more directly to the signal-processing concept of a filter, and demonstrated it on a speech recognition task. They also pointed out that as a data-trainable system, convolution is essentially equivalent to correlation since reversal of the weights does not affect the final learned function ("For convenience, we denote * as correlation instead of convolution. Note that convolving a(t) with b(t) is equivalent to correlating a(-t) with b(t)."). Modern CNN implementations typically do correlation and call it convolution, for convenience, as they did here.
=== Time delay neural networks ===
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel et al. for phoneme recognition and was an early convolutional network exhibiting shift-invariance. A TDNN is a 1-D convolutional neural net where the convolution is performed along the time axis of the data. It was the first CNN to combine weight sharing with training by gradient descent using backpropagation. Thus, while also using a pyramidal structure as in the neocognitron, it performed a global optimization of the weights instead of a local one.
TDNNs are convolutional networks that share weights along the temporal dimension. They allow speech signals to be processed time-invariantly. In 1990 Hampshire and Waibel introduced a variant that performs a two-dimensional convolution. Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both time and frequency shifts, as with images processed by a neocognitron.
TDNNs improved the performance of far-distance speech recognition.
=== Image recognition with CNNs trained by gradient descent ===
Denker et al. (1989) designed a 2-D CNN system to recognize hand-written ZIP Code numbers. However, the lack of an efficient training method to determine the kernel coefficients of the involved convolutions meant that all the coefficients had to be laboriously hand-designed.
Following the advances in the training of 1-D CNNs by Waibel et al. (1987), Yann LeCun et al. (1989) used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types.
Wei Zhang et al. (1988) used back-propagation to train the convolution kernels of a CNN for alphabet recognition. The model was called the shift-invariant pattern recognition neural network before the name CNN was coined in the early 1990s. Wei Zhang et al. also applied the same CNN without the last fully connected layer to medical image object segmentation (1991) and breast cancer detection in mammograms (1994).
This approach became a foundation of modern computer vision.
==== Max pooling ====
In 1990 Yamaguchi et al. introduced the concept of max pooling, a fixed filtering operation that calculates and propagates the maximum value of a given region. They did so by combining TDNNs with max pooling to realize a speaker-independent isolated word recognition system. In their system they used several TDNNs per word, one for each syllable. The results of each TDNN over the input signal were combined using max pooling and the outputs of the pooling layers were then passed on to networks performing the actual word classification.
In a variant of the neocognitron called the cresceptron, instead of using Fukushima's spatial averaging with inhibition and saturation, J. Weng et al. in 1993 used max pooling, where a downsampling unit computes the maximum of the activations of the units in its patch, introducing this method into the vision field.
Max pooling is often used in modern CNNs.
==== LeNet-5 ====
LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1995, classifies hand-written numbers on checks (British English: cheques) digitized in 32×32 pixel images. Processing higher-resolution images requires larger and more numerous convolutional layers, so the technique is constrained by the availability of computing resources.
It was superior to other commercial courtesy amount reading systems (as of 1995). The system was integrated in NCR's check reading systems and fielded in several American banks starting in June 1996, reading millions of checks per day.
=== Shift-invariant neural network ===
A shift-invariant neural network was proposed by Wei Zhang et al. for image character recognition in 1988. It is a modified neocognitron that keeps only the convolutional interconnections between the image feature layers and the last fully connected layer. The model was trained with back-propagation. The training algorithm was further improved in 1991 to improve generalization. The model architecture was modified by removing the last fully connected layer and applied to medical image segmentation (1991) and automatic detection of breast cancer in mammograms (1994).
A different convolution-based design was proposed in 1988 for the decomposition of one-dimensional electromyography convolved signals via de-convolution. This design was modified in 1989 into other de-convolution-based designs.
=== GPU implementations ===
Although CNNs were invented in the 1980s, their breakthrough in the 2000s required fast implementations on graphics processing units (GPUs).
In 2004, it was shown by K. S. Oh and K. Jung that standard neural networks can be greatly accelerated on GPUs. Their implementation was 20 times faster than an equivalent implementation on CPU. In 2005, another paper also emphasized the value of GPGPU for machine learning.
The first GPU-implementation of a CNN was described in 2006 by K. Chellapilla et al. Their implementation was 4 times faster than an equivalent implementation on CPU. In the same period, GPUs were also used for unsupervised training of deep belief networks.
In 2010, Dan Ciresan et al. at IDSIA trained deep feedforward networks on GPUs. In 2011, they extended this to CNNs, achieving a 60-fold speedup over CPU training. In 2011, the network won an image recognition contest, achieving superhuman performance for the first time. They then won more competitions and achieved state of the art on several benchmarks.
Subsequently, AlexNet, a similar GPU-based CNN by Alex Krizhevsky et al. won the ImageNet Large Scale Visual Recognition Challenge 2012. It was an early catalytic event for the AI boom.
Compared to the training of CNNs on GPUs, CPUs received little attention. Viebke et al. (2019) parallelized CNN training using the thread- and SIMD-level parallelism available on the Intel Xeon Phi.
== Distinguishing features ==
In the past, traditional multilayer perceptron (MLP) models were used for image recognition. However, the full connectivity between nodes caused the curse of dimensionality and was computationally intractable with higher-resolution images. A 1000×1000-pixel image with RGB color channels has 3 million weights per fully connected neuron, which is too many to process efficiently at scale.
For example, in CIFAR-10, images are only of size 32×32×3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights. A 200×200 image, however, would lead to neurons that have 200*200*3 = 120,000 weights.
Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. This ignores locality of reference in data with a grid-topology (such as images), both computationally and semantically. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated by spatially local input patterns.
Convolutional neural networks are variants of multilayer perceptrons, designed to emulate the behavior of a visual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features:
3D volumes of neurons. The layers of a CNN have neurons arranged in three dimensions: width, height and depth. Each neuron inside a convolutional layer is connected to only a small region of the layer before it, called a receptive field. Distinct types of layers, both locally and completely connected, are stacked to form a CNN architecture.
Local connectivity: following the concept of receptive fields, CNNs exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learned "filters" produce the strongest response to a spatially local input pattern. Stacking many such layers leads to nonlinear filters that become increasingly global (i.e. responsive to a larger region of pixel space) so that the network first creates representations of small parts of the input, then from them assembles representations of larger areas.
Shared weights: In CNNs, each filter is replicated across the entire visual field. These replicated units share the same parameterization (weight vector and bias) and form a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific receptive field. Replicating units in this way allows the resulting activation map to be equivariant under shifts of the locations of input features in the visual field, i.e. it grants translational equivariance, given that the layer has a stride of one.
Pooling: In a CNN's pooling layers, feature maps are divided into rectangular sub-regions, and the features in each rectangle are independently down-sampled to a single value, commonly by taking their average or maximum value. In addition to reducing the sizes of feature maps, the pooling operation grants a degree of local translational invariance to the features contained therein, allowing the CNN to be more robust to variations in their positions.
Together, these properties allow CNNs to achieve better generalization on vision problems. Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks.
== Building blocks ==
A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below.
=== Convolutional layer ===
The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the filter entries and the input, producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input.
Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input. Each entry in an activation map uses the same set of parameters that define the filter.
Self-supervised learning has been adapted for use in convolutional layers by using sparse patches with a high-mask ratio and a global response normalization layer.
==== Local connectivity ====
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a sparse local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume.
The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learned filters produce the strongest response to a spatially local input pattern.
==== Spatial arrangement ====
Three hyperparameters control the size of the output volume of the convolutional layer: the depth, stride, and padding size:
The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color.
Stride controls how depth columns around the width and height are allocated. If the stride is 1, then we move the filters one pixel at a time. This leads to heavily overlapping receptive fields between the columns, and to large output volumes. For any integer S > 0, a stride of S means that the filter is translated S units at a time per output. In practice, S ≥ 3 is rare. A greater stride means smaller overlap of receptive fields and smaller spatial dimensions of the output volume.
Sometimes, it is convenient to pad the input with zeros (or other values, such as the average of the region) on the border of the input volume. The size of this padding is a third hyperparameter. Padding provides control of the output volume's spatial size. In particular, it is sometimes desirable to exactly preserve the spatial size of the input volume; this is commonly referred to as "same" padding.
The spatial size of the output volume is a function of the input volume size W, the kernel field size K of the convolutional layer neurons, the stride S, and the amount of zero padding P on the border. The number of neurons that "fit" in a given volume is then:
(W − K + 2P)/S + 1.
If this number is not an integer, then the strides are incorrect and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting zero padding to P = (K − 1)/2 when the stride is S = 1 ensures that the input volume and output volume have the same spatial size. However, it is not always completely necessary to use all of the neurons of the previous layer. For example, a neural network designer may decide to use just a portion of padding.
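The output-size arithmetic can be checked with a short helper (a sketch; the layer sizes in the example are arbitrary):

```python
# Spatial output size of a convolutional layer: (W - K + 2P) / S + 1.
# A non-integer result means the stride does not tile the input symmetrically.
def conv_output_size(W, K, S, P):
    size = (W - K + 2 * P) / S + 1
    if size != int(size):
        raise ValueError("stride does not tile the input symmetrically")
    return int(size)

print(conv_output_size(32, 5, 1, 2))  # "same" padding P = (K - 1) / 2 preserves 32
print(conv_output_size(32, 5, 1, 0))  # no padding shrinks the output to 28
```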
==== Parameter sharing ====
A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on the assumption that if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. Denoting a single 2-dimensional slice of depth as a depth slice, the neurons in each depth slice are constrained to use the same weights and bias.
Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume. Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture.
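A minimal sketch of this forward pass for one depth slice, with an assumed 3 × 3 filter (note that, as in most CNN implementations, the code computes cross-correlation and calls it convolution):

```python
import numpy as np

# One depth slice of a convolutional forward pass: a single 3 x 3 filter
# (shared weights plus one bias) slides over the input, so every output
# entry reuses the same 10 parameters.
def conv2d_valid(x, w, b):
    k = w.shape[0]
    h = x.shape[0] - k + 1
    out = np.empty((h, h))
    for i in range(h):
        for j in range(h):
            out[i, j] = np.sum(x[i:i+k, j:j+k] * w) + b
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.zeros((3, 3)); w[1, 1] = 1.0   # identity-like kernel (assumed example)
print(conv2d_valid(x, w, 0.0))        # picks out the 3 x 3 interior of x
```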
Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".
=== Pooling layer ===
Another important concept of CNNs is pooling, which is used as a form of non-linear down-sampling. Pooling reduces the spatial dimensions (height and width) of the input feature maps while retaining the most important information. The most common non-linear functions for pooling are max pooling and average pooling. Pooling aggregates information from small regions of the input, partitioning the input feature map, typically using a fixed-size window (such as 2×2) moved across the input with a stride (often 2). Without a stride greater than 1, pooling would not perform downsampling: the window would simply move across the input one step at a time without reducing the size of the feature map. The stride is thus what actually causes the downsampling, by determining how far the pooling window moves over the input.
Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters, memory footprint and amount of computation in the network, and hence to also control overfitting. This is known as down-sampling. It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by an activation function, such as a ReLU layer) in a CNN architecture. While pooling layers contribute to local translation invariance, they do not provide global translation invariance in a CNN, unless a form of global pooling is used. The pooling layer commonly operates independently on every depth slice of the input and resizes it spatially. A very common form of max pooling is a layer with filters of size 2×2, applied with a stride of 2, which subsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:
f_{X,Y}(S) = max_{a,b ∈ {0,1}} S_{2X+a, 2Y+b}.
In this case, every max operation is over 4 numbers. The depth dimension remains unchanged (this is true for other forms of pooling as well).
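The 2 × 2, stride-2 max pooling above can be sketched with a reshape (assumes even height and width; the input values are made up):

```python
import numpy as np

# 2 x 2 max pooling with stride 2, matching f_{X,Y}(S) = max S_{2X+a, 2Y+b}:
# each output entry is the maximum over a disjoint 2 x 2 block of the input.
def max_pool_2x2(s):
    h, w = s.shape
    return s.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

s = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [0, 2, 4, 1]])
print(max_pool_2x2(s))  # 4 x 4 input -> 2 x 2 output, 75% of activations discarded
```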
In addition to max pooling, pooling units can use other functions, such as average pooling or ℓ2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which generally performs better in practice.
Due to the effects of fast spatial reduction of the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether.
==== Channel max pooling ====
A channel max pooling (CMP) layer conducts the max pooling operation along the channel dimension, among the corresponding positions of consecutive feature maps, in order to eliminate redundant information. CMP gathers the significant features into fewer channels, which is important for fine-grained image classification that needs more discriminating features. It also reduces the channel number of the feature maps before they connect to the first fully connected (FC) layer. As with max pooling, we denote the input and output feature maps of a CMP layer as F ∈ R^(C×M×N) and C ∈ R^(c×M×N), respectively, where C and c are the channel numbers of the input and output feature maps, and M and N are the width and height of the feature maps. The CMP operation changes only the channel number of the feature maps; unlike spatial max pooling, it leaves the width and height unchanged.
See the literature for reviews of pooling methods.
=== ReLU layer ===
ReLU is the abbreviation of rectified linear unit. It was proposed by Alston Householder in 1941, and used in CNNs by Kunihiko Fukushima in 1969. ReLU applies the non-saturating activation function f(x) = max(0, x). It effectively removes negative values from an activation map by setting them to zero. It introduces nonlinearity to the decision function and in the overall network without affecting the receptive fields of the convolution layers.
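A minimal illustration of ReLU on a small activation map (the values are made up):

```python
import numpy as np

# ReLU f(x) = max(0, x) applied elementwise: negative activations are
# set to zero, positive activations pass through unchanged.
a = np.array([[-1.5, 2.0],
              [ 0.0, -3.0]])
print(np.maximum(0, a))
```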
In 2011, Xavier Glorot, Antoine Bordes and Yoshua Bengio found that ReLU enables better training of deeper networks, compared to widely used activation functions prior to 2011.
Other functions can also be used to increase nonlinearity, for example the saturating hyperbolic tangent f(x) = tanh(x) and f(x) = |tanh(x)|, and the sigmoid function σ(x) = (1 + e^(−x))^(−1). ReLU is often preferred to other functions because it trains the neural network several times faster without a significant penalty to generalization accuracy.
=== Fully connected layer ===
After several convolutional and max pooling layers, the final classification is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular (non-convolutional) artificial neural networks. Their activations can thus be computed as an affine transformation, with matrix multiplication followed by a bias offset (vector addition of a learned or fixed bias term).
=== Loss layer ===
The "loss layer", or "loss function", exemplifies how training penalizes the deviation between the predicted output of the network, and the true data labels (during supervised learning). Various loss functions can be used, depending on the specific task.
The softmax loss function is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in [0, 1]. Euclidean loss is used for regressing to real-valued labels in (−∞, ∞).
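A sketch of the softmax loss for K mutually exclusive classes (the scores and label are made-up illustration values; the max-shift is a standard numerical-stability trick):

```python
import numpy as np

# Softmax cross-entropy loss: the negative log-likelihood of the true
# class under the softmax distribution over the raw class scores.
def softmax_cross_entropy(scores, label):
    z = scores - scores.max()         # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()   # softmax probabilities
    return -np.log(p[label])          # penalize low probability on the true class

scores = np.array([2.0, 1.0, 0.1])
print(softmax_cross_entropy(scores, 0))  # small loss: class 0 already has the top score
```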
== Hyperparameters ==
Hyperparameters are various settings that are used to control the learning process. CNNs use more hyperparameters than a standard multilayer perceptron (MLP).
=== Padding ===
Padding is the addition of (typically) 0-valued pixels on the borders of an image. This is done so that the border pixels are not undervalued (lost) from the output because they would ordinarily participate in only a single receptive field instance. The padding applied is typically one less than the corresponding kernel dimension. For example, a convolutional layer using 3x3 kernels would receive a 2-pixel pad, that is 1 pixel on each side of the image.
=== Stride ===
The stride is the number of pixels that the analysis window moves on each iteration. A stride of 2 means that each kernel is offset by 2 pixels from its predecessor.
=== Number of filters ===
Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next.
The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity.
=== Filter (or Kernel) size ===
Common filter sizes found in the literature vary greatly, and are usually chosen based on the data set. Typical filter sizes range from 1x1 to 7x7. As two famous examples, AlexNet used 3x3, 5x5, and 11x11. Inceptionv3 used 1x1, 3x3, and 5x5.
The challenge is to find the right level of granularity so as to create abstractions at the proper scale, given a particular data set, and without overfitting.
=== Pooling type and size ===
Max pooling is typically used, often with a 2x2 dimension. This implies that the input is drastically downsampled, reducing processing cost.
Greater pooling reduces the dimension of the signal, and may result in unacceptable information loss. Often, non-overlapping pooling windows perform best.
=== Dilation ===
Dilation involves ignoring pixels within a kernel. This reduces processing and memory costs, potentially without significant signal loss. A dilation of 2 on a 3x3 kernel expands the kernel to 5x5, while still processing 9 (evenly spaced) pixels. Specifically, the processed pixels after the dilation are the cells (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5), where (i,j) denotes the cell of the i-th row and j-th column in the expanded 5x5 kernel. Accordingly, a dilation of 3 expands the kernel to 7x7.
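The effective receptive field of a dilated kernel can be computed directly. A small sketch, using the common convention (as in most deep-learning frameworks) where a dilation of 1 denotes an ordinary kernel:

```python
def effective_kernel_size(kernel: int, dilation: int) -> int:
    # A dilated kernel still uses `kernel` taps, but spaced `dilation` apart,
    # so the span it covers grows with the dilation rate.
    return kernel + (kernel - 1) * (dilation - 1)

print(effective_kernel_size(3, 2))  # 5 -> a 3x3 kernel dilated by 2 spans 5x5
print(effective_kernel_size(3, 3))  # 7 -> dilated by 3, it spans 7x7
```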
== Translation equivariance and aliasing ==
It is commonly assumed that CNNs are invariant to shifts of the input. Convolution or pooling layers within a CNN that do not have a stride greater than one are indeed equivariant to translations of the input. However, layers with a stride greater than one ignore the Nyquist–Shannon sampling theorem and might lead to aliasing of the input signal. While, in principle, CNNs are capable of implementing anti-aliasing filters, it has been observed that this does not happen in practice, yielding models that are not equivariant to translations.
Furthermore, if a CNN makes use of fully connected layers, translation equivariance does not imply translation invariance, as the fully connected layers are not invariant to shifts of the input. One solution for complete translation invariance is avoiding any down-sampling throughout the network and applying global average pooling at the last layer. Additionally, several other partial solutions have been proposed, such as anti-aliasing before downsampling operations, spatial transformer networks, data augmentation, subsampling combined with pooling, and capsule neural networks.
== Evaluation ==
The accuracy of the final model is typically estimated on a sub-part of the dataset set apart at the start, often called a test set. Alternatively, methods such as k-fold cross-validation are applied. Other strategies include using conformal prediction.
== Regularization methods ==
Regularization is a process of introducing additional information to solve an ill-posed problem or to prevent overfitting. CNNs use various types of regularization.
=== Empirical ===
==== Dropout ====
Because networks have so many parameters, they are prone to overfitting. One method to reduce overfitting is dropout, introduced in 2014. At each training stage, individual nodes are either "dropped out" of the net (ignored) with probability 1 − p or kept with probability p, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights.
In the training stages, p is usually 0.5; for input nodes, it is typically much higher because information is directly lost when input nodes are ignored.
At testing time after training has finished, we would ideally like to find a sample average of all possible 2^n dropped-out networks; unfortunately this is unfeasible for large values of n. However, we can find an approximation by using the full network with each node's output weighted by a factor of p, so the expected value of the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates 2^n neural nets, and as such allows for model combination, at test time only a single network needs to be tested.
By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes the model combination practical, even for deep neural networks. The technique seems to reduce node interactions, leading them to learn more robust features that better generalize to new data.
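The train-time masking and test-time rescaling by p described above can be sketched in a few lines of NumPy (function names are our own; this follows the original formulation, which scales at test time rather than the "inverted dropout" variant that scales during training):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_train(x, p):
    """Training: keep each unit independently with probability p (zero it otherwise)."""
    mask = rng.random(x.shape) < p
    return x * mask

def dropout_test(x, p):
    """Testing: use all units, but weight each output by p so the
    expected value matches what the network saw during training."""
    return x * p
```

Since E[mask] = p, the expected training-time output of a unit is p·x, which is exactly what `dropout_test` produces deterministically.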
==== DropConnect ====
DropConnect is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability 1 − p. Each unit thus receives input from a random subset of units in the previous layer.
DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage.
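The difference from dropout is visible in code: the random mask is applied to the weight matrix instead of the layer's output vector. A minimal sketch (names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropconnect_forward(x, W, p):
    """Forward pass of a fully connected layer with DropConnect:
    each weight is kept independently with probability p."""
    mask = rng.random(W.shape) < p  # sparsity on the weights, not the outputs
    return (W * mask) @ x
```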
==== Stochastic pooling ====
A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected.
Even before dropout, in 2013, a technique called stochastic pooling was introduced, in which the conventional deterministic pooling operations are replaced with a stochastic procedure: the activation within each pooling region is picked randomly according to a multinomial distribution given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation.
An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images, which delivers excellent performance on the MNIST data set. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below.
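The sampling step described above can be sketched for a single pooling region; this assumes non-negative activations (e.g. after a ReLU), so they can be normalized into a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_pool(region):
    """Pick one activation from a pooling region, with probability
    proportional to its (non-negative) value."""
    region = np.asarray(region, dtype=float).ravel()
    probs = region / region.sum()          # multinomial distribution over the region
    return rng.choice(region, p=probs)
```

Stronger activations are chosen more often, so on average the result resembles max pooling, but the randomness acts as a regularizer.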
==== Artificial data ====
Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Because there is often not enough available data to train, especially considering that some part should be spared for later testing, two approaches are to either generate new data from scratch (if possible) or perturb existing data to create new examples. The latter approach has been used since the mid-1990s. For example, input images can be cropped, rotated, or rescaled to create new examples with the same labels as the original training set.
=== Explicit ===
==== Early stopping ====
One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted.
==== Number of parameters ====
Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting. This is equivalent to a "zero norm".
==== Weight decay ====
A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant('alpha' hyperparameter), thus increasing the penalty for large weight vectors.
L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot.
L1 regularization is also common. It makes the weight vectors sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs. L1 with L2 regularization can be combined; this is called elastic net regularization.
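The L1, L2, and elastic-net penalties described above are simple additive terms on the loss. A minimal sketch (the function name and `alpha_*` parameter names are our own):

```python
import numpy as np

def weight_decay_penalty(w, alpha_l1=0.0, alpha_l2=0.0):
    """Elastic-net style penalty added to the training loss:
    alpha_l1 * sum(|w|)  (L1, encourages sparsity)
    + alpha_l2 * sum(w^2)  (L2, encourages diffuse weights)."""
    w = np.asarray(w, dtype=float)
    return alpha_l1 * np.abs(w).sum() + alpha_l2 * (w ** 2).sum()

print(weight_decay_penalty([3.0, -4.0], alpha_l2=1.0))  # 25.0 (squared magnitude)
print(weight_decay_penalty([3.0, -4.0], alpha_l1=1.0))  # 7.0 (sum of absolute values)
```

Setting both coefficients nonzero gives elastic net regularization.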
==== Max norm constraints ====
Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector w of every neuron to satisfy ‖w‖₂ < c. Typical values of c are on the order of 3–4. Some papers report improvements when using this form of regularization.
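The projection step after each gradient update can be sketched as follows (the function name is our own):

```python
import numpy as np

def max_norm_project(w, c=3.0):
    """Clamp a neuron's weight vector back onto the L2 ball of radius c.
    Weights already inside the ball are left unchanged."""
    w = np.asarray(w, dtype=float)
    norm = np.linalg.norm(w)
    if norm > c:
        w = w * (c / norm)  # rescale to have norm exactly c
    return w

print(max_norm_project([6.0, 8.0], c=5.0))  # [3. 4.] (norm 10 rescaled to 5)
```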
== Hierarchical coordinate frames ==
Pooling loses the precise spatial relationships between high-level parts (such as nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools so that each feature occurs in multiple pools helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once they can recognize it from a different viewpoint.
An earlier common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc. so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and use a group of neurons to represent a conjunction of the shape of the feature and its pose relative to the retina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.
Thus, one way to represent something is to embed the coordinate frame within it. This allows large features to be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). This approach ensures that the higher-level entity (e.g. face) is present when the lower-level (e.g. nose and mouth) agree on its prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations modeled as linear operations that make it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the human visual system imposes coordinate frames in order to represent shapes.
== Applications ==
=== Image recognition ===
CNNs are often used in image recognition systems. In 2012, an error rate of 0.23% on the MNIST database was reported. Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database. Subsequently, a similar CNN called AlexNet won the ImageNet Large Scale Visual Recognition Challenge 2012.
When applied to facial recognition, CNNs achieved a large decrease in error rate. Another paper reported a 97.6% recognition rate on "5,600 still images of more than 10 subjects". CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014, a large-scale visual recognition challenge, almost every highly ranked team used CNN as their basic framework. The winner GoogLeNet (the foundation of DeepDream) increased the mean average precision of object detection to 0.439329, and reduced classification error to 0.06656, the best result to date. Its network applied more than 30 layers. That performance of convolutional neural networks on the ImageNet tests was close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this.
In 2015, a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.
=== Video analysis ===
Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space. Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream. Long short-term memory (LSTM) recurrent units are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies. Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis. Applications of this approach can be seen in text-to-video models.
=== Natural language processing ===
CNNs have also been explored for natural language processing. CNN models are effective for various NLP problems and achieved excellent results in semantic parsing, search query retrieval, sentence modeling, classification, prediction and other traditional NLP tasks.
Compared to traditional language processing methods such as recurrent neural networks, CNNs can represent different contextual realities of language that do not rely on a series-sequence assumption, while RNNs are better suited when classical time series modeling is required.
=== Anomaly detection ===
A CNN with 1-D convolutions was used on time series in the frequency domain (spectral residual) by an unsupervised model to detect anomalies in the time domain.
=== Drug discovery ===
CNNs have been used in drug discovery. Predicting the interaction between molecules and biological proteins can identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based drug design. The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures, AtomNet discovers chemical features, such as aromaticity, sp3 carbons, and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus and multiple sclerosis.
=== Checkers game ===
CNNs have been used in the game of checkers. From 1999 to 2001, Fogel and Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the difference in number of pieces between the two sides. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%. It also earned a win against the program Chinook at its "expert" level of play.
=== Go ===
CNNs have been used in computer Go. In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against Monte Carlo tree search Fuego 1.1 in a fraction of the time it took Fuego to play. Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move.
AlphaGo, the first program to beat the best human player at the time, used a pair of CNNs to drive MCTS: one for choosing moves to try ("policy network") and one for evaluating positions ("value network").
=== Time series forecasting ===
Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients. Convolutional networks can provide an improved forecasting performance when there are multiple similar time series to learn from. CNNs can also be applied to further tasks in time series analysis (e.g., time series classification or quantile forecasting).
=== Cultural heritage and 3D-datasets ===
As archaeological findings such as clay tablets with cuneiform writing are increasingly acquired using 3D scanners, benchmark datasets are becoming available, including HeiCuBeDa, which provides almost 2000 normalized 2-D and 3-D datasets prepared with the GigaMesh Software Framework. Curvature-based measures are used in conjunction with geometric neural networks (GNNs), e.g. for period classification of those clay tablets, which are among the oldest documents of human history.
== Fine-tuning ==
For many applications, training data is not readily available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged, an additional training step is performed using the in-domain data to fine-tune the network weights; this is known as transfer learning. Furthermore, this technique allows convolutional network architectures to be applied successfully to problems with tiny training sets.
== Human interpretable explanations ==
End-to-end training and prediction are common practice in computer vision. However, human interpretable explanations are required for critical systems such as self-driving cars. With recent advances in visual salience, spatial attention, and temporal attention, the most critical spatial regions/temporal instants can be visualized to justify the CNN predictions.
== Related architectures ==
=== Deep Q-networks ===
A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.
Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it.
=== Deep belief networks ===
Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR have been obtained using CDBNs.
=== Neural abstraction pyramid ===
The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated, e.g., for semantic segmentation, image reconstruction, and object localization tasks.
== Notable libraries ==
Caffe: A library for convolutional neural networks. Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
Dlib: A toolkit for making real world machine learning and data analysis applications in C++.
Microsoft Cognitive Toolkit: A deep learning toolkit written by Microsoft with several unique features enhancing scalability over multiple nodes. It supports full-fledged interfaces for training in C++ and Python and with additional support for model inference in C# and Java.
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary tensor processing unit (TPU), and mobile devices.
Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua.
== See also ==
Attention (machine learning)
Convolution
Deep learning
Natural-language processing
Neocognitron
Scale-invariant feature transform
Time delay neural network
Vision processing unit
== Notes ==
== References ==
== External links ==
CS231n: Convolutional Neural Networks for Visual Recognition — Andrej Karpathy's Stanford computer science course on CNNs in computer vision
vdumoulin/conv_arithmetic: A technical report on convolution arithmetic in the context of deep learning. Animations of convolutions.
Bidirectional encoder representations from transformers (BERT) is a language model introduced in October 2018 by researchers at Google. It learns to represent text as a sequence of vectors using self-supervised learning. It uses the encoder-only transformer architecture. BERT dramatically improved the state-of-the-art for large language models. As of 2020, BERT is a ubiquitous baseline in natural language processing (NLP) experiments.
BERT is trained by masked token prediction and next sentence prediction. As a result of this training process, BERT learns contextual, latent representations of tokens in their context, similar to ELMo and GPT-2. It found applications for many natural language processing tasks, such as coreference resolution and polysemy resolution. It is an evolutionary step over ELMo, and spawned the study of "BERTology", which attempts to interpret what is learned by BERT.
BERT was originally implemented in the English language at two model sizes, BERTBASE (110 million parameters) and BERTLARGE (340 million parameters). Both were trained on the Toronto BookCorpus (800M words) and English Wikipedia (2,500M words). The weights were released on GitHub. On March 11, 2020, 24 smaller models were released, the smallest being BERTTINY with just 4 million parameters.
== Architecture ==
BERT is an "encoder-only" transformer architecture. At a high level, BERT consists of 4 modules:
Tokenizer: This module converts a piece of English text into a sequence of integers ("tokens").
Embedding: This module converts the sequence of tokens into an array of real-valued vectors representing the tokens. It represents the conversion of discrete token types into a lower-dimensional Euclidean space.
Encoder: a stack of Transformer blocks with self-attention, but without causal masking.
Task head: This module converts the final representation vectors into one-hot encoded tokens again by producing a predicted probability distribution over the token types. It can be viewed as a simple decoder, decoding the latent representation into token types, or as an "un-embedding layer".
The task head is necessary for pre-training, but it is often unnecessary for so-called "downstream tasks," such as question answering or sentiment classification. Instead, one removes the task head and replaces it with a newly initialized module suited for the task, and fine-tunes the new module. The latent vector representation of the model is directly fed into this new module, allowing for sample-efficient transfer learning.
=== Embedding ===
This section describes the embedding used by BERTBASE. The other one, BERTLARGE, is similar, just larger.
The tokenizer of BERT is WordPiece, which is a sub-word strategy like byte pair encoding. Its vocabulary size is 30,000, and any token not appearing in its vocabulary is replaced by [UNK] ("unknown").
The first layer is the embedding layer, which contains three components: token type embeddings, position embeddings, and segment type embeddings.
Token type: The token type is a standard embedding layer, translating a one-hot vector into a dense vector based on its token type.
Position: The position embeddings are based on a token's position in the sequence. BERT uses absolute position embeddings, where each position in the sequence is mapped to a real-valued vector. Unlike the fixed sinusoidal encodings of the original Transformer, these position embeddings are learned during training.
Segment type: Using a vocabulary of just 0 or 1, this embedding layer produces a dense vector based on whether the token belongs to the first or second text segment in that input. In other words, type-1 tokens are all tokens that appear after the [SEP] special token. All prior tokens are type-0.
The three embedding vectors are added together representing the initial token representation as a function of these three pieces of information. After embedding, the vector representation is normalized using a LayerNorm operation, outputting a 768-dimensional vector for each input token. After this, the representation vectors are passed forward through 12 Transformer encoder blocks, and are decoded back to 30,000-dimensional vocabulary space using a basic affine transformation layer.
=== Architectural family ===
The encoder stack of BERT has 2 free parameters: L, the number of layers, and H, the hidden size. There are always H/64 self-attention heads, and the feed-forward/filter size is always 4H. By varying these two numbers, one obtains an entire family of BERT models.
For BERT
the feed-forward size and filter size are synonymous. Both of them denote the number of dimensions in the middle layer of the feed-forward network.
the hidden size and embedding size are synonymous. Both of them denote the number of real numbers used to represent a token.
The notation for encoder stack is written as L/H. For example, BERTBASE is written as 12L/768H, BERTLARGE as 24L/1024H, and BERTTINY as 2L/128H.
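The derived sizes in the L/H convention can be computed mechanically. A small sketch (the function name and dictionary keys are our own):

```python
def bert_config(L: int, H: int) -> dict:
    """Derive the dependent sizes of a BERT-family model from L and H:
    H/64 attention heads and a 4H feed-forward (filter) size."""
    return {"layers": L, "hidden": H, "heads": H // 64, "ffn": 4 * H}

print(bert_config(12, 768))   # BERT-BASE (12L/768H): 12 heads, 3072-d feed-forward
print(bert_config(24, 1024))  # BERT-LARGE (24L/1024H): 16 heads, 4096-d feed-forward
print(bert_config(2, 128))    # BERT-TINY (2L/128H): 2 heads, 512-d feed-forward
```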
== Training ==
=== Pre-training ===
BERT was pre-trained simultaneously on two tasks.
Masked Language Model (MLM): In this task, BERT ingests a sequence of words in which some words are randomly changed ("masked"), and BERT tries to predict the original words that had been changed. For example, in the sentence "The cat sat on the [MASK]," BERT would need to predict "mat." This helps BERT learn bidirectional context, meaning it understands the relationships between words not just from left to right or right to left but from both directions at the same time.
Next Sentence Prediction (NSP): In this task, BERT is trained to predict whether one sentence logically follows another. For example, given two sentences, "The cat sat on the mat." and "It was a sunny day," BERT has to decide if the second sentence is a valid continuation of the first one. This helps BERT understand relationships between sentences, which is important for tasks like question answering or document classification.
==== Masked language modeling ====
In masked language modeling, 15% of tokens would be randomly selected for masked-prediction task, and the training objective was to predict the masked token given its context. In more detail, the selected token is
replaced with a [MASK] token with probability 80%,
replaced with a random word token with probability 10%,
not replaced with probability 10%.
The reason not all selected tokens are masked is to avoid the dataset shift problem. The dataset shift problem arises when the distribution of inputs seen during training differs significantly from the distribution encountered during inference. A trained BERT model might be applied to word representation (like Word2Vec), where it would be run over sentences not containing any [MASK] tokens. It is later found that more diverse training objectives are generally better.
As an illustrative example, consider the sentence "my dog is cute". It would first be divided into tokens like "my1 dog2 is3 cute4". Then a random token in the sentence would be picked. Let it be the 4th one "cute4". Next, there would be three possibilities:
with probability 80%, the chosen token is masked, resulting in "my1 dog2 is3 [MASK]4";
with probability 10%, the chosen token is replaced by a uniformly sampled random token, such as "happy", resulting in "my1 dog2 is3 happy4";
with probability 10%, nothing is done, resulting in "my1 dog2 is3 cute4".
After processing the input text, the model's 4th output vector is passed to its decoder layer, which outputs a probability distribution over its 30,000-dimensional vocabulary space.
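The 80/10/10 corruption rule applied to each selected token can be sketched directly (the vocabulary here is a toy list for illustration; BERT's real WordPiece vocabulary has 30,000 entries):

```python
import random

VOCAB = ["my", "dog", "is", "cute", "happy", "the"]  # toy vocabulary

def corrupt_token(token: str) -> str:
    """Apply BERT's 80/10/10 rule to a token already selected for masking."""
    r = random.random()
    if r < 0.8:
        return "[MASK]"              # 80%: replace with the [MASK] token
    elif r < 0.9:
        return random.choice(VOCAB)  # 10%: replace with a random token
    else:
        return token                 # 10%: leave unchanged
```

Running `corrupt_token("cute")` many times yields "[MASK]" about 80% of the time, a random vocabulary word about 10% of the time, and "cute" itself the remaining 10%.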
==== Next sentence prediction ====
Given two spans of text, the model predicts if these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. Specifically, the training algorithm would sometimes sample two spans from a single continuous span in the training corpus, but other times, sample two spans from two discontinuous spans in the training corpus.
The first span starts with a special token [CLS] (for "classify"). The two spans are separated by a special token [SEP] (for "separate"). After processing the two spans, the 1-st output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext].
For example, given "[CLS] my dog is cute [SEP] he likes playing" the model should output token [IsNext].
Given "[CLS] my dog is cute [SEP] how do magnets work" the model should output token [NotNext].
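The construction of a next-sentence-prediction training example can be sketched as follows (the function name and the 50/50 sampling split are illustrative assumptions consistent with the description above):

```python
import random

def make_nsp_example(spans, idx):
    """Build one next-sentence-prediction example. `spans` is a list of
    token lists from the corpus; `idx` selects the first span. With
    probability 50% the second span is the true successor (label IsNext);
    otherwise a randomly chosen span is used (label NotNext)."""
    first = spans[idx]
    if random.random() < 0.5 and idx + 1 < len(spans):
        second, label = spans[idx + 1], "IsNext"
    else:
        second, label = random.choice(spans), "NotNext"
    # [CLS] opens the pair; [SEP] separates the two spans
    return ["[CLS]"] + first + ["[SEP]"] + second, label

tokens, label = make_nsp_example(
    [["my", "dog", "is", "cute"], ["he", "likes", "playing"]], 0)
```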
=== Fine-tuning ===
BERT is meant as a general pretrained model for various applications in natural language processing. That is, after pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as natural language inference and text classification, and sequence-to-sequence-based language generation tasks such as question answering and conversational response generation.
The original BERT paper published results demonstrating that a small amount of fine-tuning (for BERTLARGE, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks:
GLUE (General Language Understanding Evaluation) task set (consisting of 9 tasks);
SQuAD (Stanford Question Answering Dataset) v1.1 and v2.0;
SWAG (Situations With Adversarial Generations).
In the original paper, all parameters of BERT were fine-tuned, and it was recommended that, for downstream applications that are text classifications, the output token at the [CLS] input token be fed into a linear-softmax layer to produce the label outputs.
The original code base defined the final linear layer as a "pooler layer", in analogy with global pooling in computer vision, even though it simply discards all output tokens except the one corresponding to [CLS].
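The classification head described above amounts to a linear map followed by a softmax over the labels. A minimal numeric sketch (the function name and toy weights are illustrative, not BERT's actual parameters):

```python
import math

def cls_classifier(cls_vector, W, b):
    """Feed the output vector at the [CLS] position through a linear
    layer (weights W, biases b) and a softmax, yielding one probability
    per label."""
    logits = [sum(w * v for w, v in zip(row, cls_vector)) + bi
              for row, bi in zip(W, b)]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# toy 2-dimensional [CLS] vector, 2 labels, identity weights
probs = cls_classifier([0.5, -0.2], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```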
=== Cost ===
BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers.
Training BERTBASE on 4 cloud TPU (16 TPU chips total) took 4 days, at an estimated cost of 500 USD. Training BERTLARGE on 16 cloud TPU (64 TPU chips total) took 4 days.
== Interpretation ==
Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models. Their performance on these natural language understanding tasks is not yet well understood. Several research publications in 2018 and 2019 focused on investigating the relationship between BERT's output and carefully chosen input sequences, on the analysis of internal vector representations through probing classifiers, and on the relationships represented by attention weights.
The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word fine can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word fine from the left and right side.
However, this comes at a cost: because the encoder-only architecture lacks a decoder, BERT cannot be prompted and cannot generate text, and bidirectional models in general do not work effectively without the right-side context, making them difficult to prompt. As an illustrative example, if one wishes to use BERT to continue a sentence fragment "Today, I went to", then naively one would mask out all the tokens as "Today, I went to [MASK] [MASK] [MASK] ... [MASK] ." where the number of [MASK] is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training, BERT has never seen sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.
== History ==
BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins from pre-training contextual representations, including semi-supervised sequence learning, generative pre-training, ELMo, and ULMFit. Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.
On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US. On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages. In October 2020, almost every single English-based query was processed by a BERT model.
== Variants ==
The BERT models were influential and inspired many variants.
RoBERTa (2019) was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the next-sentence prediction task, and using much larger mini-batch sizes.
DistilBERT (2019) distills BERTBASE to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores. Similarly, TinyBERT (2019) is a distilled model with just 28% of its parameters.
ALBERT (2019) shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. They also replaced the next sentence prediction task with the sentence-order prediction (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.
ELECTRA (2020) applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.
DeBERTa (2020) is a significant architectural variant, with disentangled attention. Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding {\displaystyle x_{\text{position}}} and token encoding {\displaystyle x_{\text{token}}} into a single input vector {\displaystyle x_{\text{input}}=x_{\text{position}}+x_{\text{token}}}, DeBERTa keeps them separate as a tuple {\displaystyle (x_{\text{position}},x_{\text{token}})}. Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT:
The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix.
Absolute position encoding is included in the final self-attention layer as additional input.
== Notes ==
== References ==
== Further reading ==
Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What we know about how BERT works". arXiv:2002.12327 [cs.CL].
== External links ==
Official GitHub repository
Ordering points to identify the clustering structure (OPTICS) is an algorithm for finding density-based clusters in spatial data. It was presented in 1999 by Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel and Jörg Sander.
Its basic idea is similar to DBSCAN, but it addresses one of DBSCAN's major weaknesses: the problem of detecting meaningful clusters in data of varying density. To do so, the points of the database are (linearly) ordered such that spatially closest points become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that must be accepted for a cluster so that both points belong to the same cluster. This is represented as a dendrogram.
== Basic idea ==
Like DBSCAN, OPTICS requires two parameters: ε, which describes the maximum distance (radius) to consider, and MinPts, describing the number of points required to form a cluster. A point p is a core point if at least MinPts points are found within its ε-neighborhood {\displaystyle N_{\varepsilon }(p)} (including point p itself). In contrast to DBSCAN, OPTICS also considers points that are part of a more densely packed cluster, so each point is assigned a core distance that describes the distance to the MinPts-th closest point:
{\displaystyle {\text{core-dist}}_{\mathit {\varepsilon ,MinPts}}(p)={\begin{cases}{\text{UNDEFINED}}&{\text{if }}|N_{\varepsilon }(p)|<{\mathit {MinPts}}\\{\mathit {MinPts}}{\text{-th smallest distance in }}N_{\varepsilon }(p)&{\text{otherwise}}\end{cases}}}
The reachability-distance of another point o from a point p is either the distance between o and p, or the core distance of p, whichever is bigger:
{\displaystyle {\text{reachability-dist}}_{\mathit {\varepsilon ,MinPts}}(o,p)={\begin{cases}{\text{UNDEFINED}}&{\text{if }}|N_{\varepsilon }(p)|<{\mathit {MinPts}}\\\max({\text{core-dist}}_{\mathit {\varepsilon ,MinPts}}(p),{\text{dist}}(p,o))&{\text{otherwise}}\end{cases}}}
If p and o are nearest neighbors, this is the {\displaystyle \varepsilon '<\varepsilon } we need to assume in order to have p and o belong to the same cluster.
Both core-distance and reachability-distance are undefined if no sufficiently dense cluster (w.r.t. ε) is available. Given a sufficiently large ε, this never happens, but then every ε-neighborhood query returns the entire database, resulting in {\displaystyle O(n^{2})} runtime. Hence, the ε parameter is required to cut off the density of clusters that are no longer interesting, and to speed up the algorithm.
The parameter ε is, strictly speaking, not necessary. It can simply be set to the maximum possible value. When a spatial index is available, however, it does play a practical role with regards to complexity. OPTICS abstracts from DBSCAN by removing this parameter, at least to the extent of only having to give the maximum value.
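The two distance definitions above can be implemented directly, using `None` for UNDEFINED (a brute-force sketch over a list of coordinate tuples; the function names are illustrative):

```python
import math

def core_dist(points, p, eps, min_pts):
    """Core distance of points[p]: None (UNDEFINED) if the
    ε-neighborhood holds fewer than MinPts points, else the MinPts-th
    smallest distance within it. The point itself contributes
    distance 0, matching |N_eps(p)| including p."""
    dists = sorted(math.dist(points[p], q) for q in points
                   if math.dist(points[p], q) <= eps)
    return None if len(dists) < min_pts else dists[min_pts - 1]

def reachability_dist(points, o, p, eps, min_pts):
    """Reachability distance of o from p: the larger of p's core
    distance and dist(p, o); None (UNDEFINED) if p is not a core point."""
    cd = core_dist(points, p, eps, min_pts)
    return None if cd is None else max(cd, math.dist(points[p], points[o]))

pts = [(0, 0), (0, 1), (0, 2), (10, 10)]
# with eps=3, MinPts=2: the 2nd-smallest distance from (0,0) is 1.0
cd = core_dist(pts, 0, 3.0, 2)
```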
== Pseudocode ==
The basic approach of OPTICS is similar to DBSCAN, but instead of maintaining known, but so far unprocessed cluster members in a set, they are maintained in a priority queue (e.g. using an indexed heap).
function OPTICS(DB, ε, MinPts) is
    for each point p of DB do
        p.reachability-distance = UNDEFINED
    for each unprocessed point p of DB do
        N = getNeighbors(p, ε)
        mark p as processed
        output p to the ordered list
        if core-distance(p, ε, MinPts) != UNDEFINED then
            Seeds = empty priority queue
            update(N, p, Seeds, ε, MinPts)
            for each next q in Seeds do
                N' = getNeighbors(q, ε)
                mark q as processed
                output q to the ordered list
                if core-distance(q, ε, MinPts) != UNDEFINED then
                    update(N', q, Seeds, ε, MinPts)
In update(), the priority queue Seeds is updated with the ε-neighborhood of p and q, respectively:
function update(N, p, Seeds, ε, MinPts) is
    coredist = core-distance(p, ε, MinPts)
    for each o in N
        if o is not processed then
            new-reach-dist = max(coredist, dist(p,o))
            if o.reachability-distance == UNDEFINED then // o is not in Seeds
                o.reachability-distance = new-reach-dist
                Seeds.insert(o, new-reach-dist)
            else // o in Seeds, check for improvement
                if new-reach-dist < o.reachability-distance then
                    o.reachability-distance = new-reach-dist
                    Seeds.move-up(o, new-reach-dist)
OPTICS hence outputs the points in a particular ordering, annotated with their smallest reachability distance (in the original algorithm, the core distance is also exported, but this is not required for further processing).
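The pseudocode above can be sketched in Python. One assumption: instead of an indexed heap with Seeds.move-up (decrease-key), this sketch re-pushes improved entries and skips stale ones on pop, which is equivalent but simpler with the standard library:

```python
import heapq
import math

def optics(db, eps, min_pts):
    """Brute-force OPTICS sketch over a list of coordinate tuples.
    Returns the cluster ordering as (point index, reachability) pairs,
    with None standing for UNDEFINED."""
    reach = [None] * len(db)
    processed = [False] * len(db)
    order = []

    def neighbors(p):
        return [q for q in range(len(db)) if math.dist(db[p], db[q]) <= eps]

    def core_distance(p, nbrs):
        if len(nbrs) < min_pts:
            return None
        return sorted(math.dist(db[p], db[q]) for q in nbrs)[min_pts - 1]

    def update(nbrs, p, seeds, cd):
        for o in nbrs:
            if processed[o]:
                continue
            new_reach = max(cd, math.dist(db[p], db[o]))
            if reach[o] is None or new_reach < reach[o]:
                reach[o] = new_reach
                heapq.heappush(seeds, (new_reach, o))  # lazy decrease-key

    for p in range(len(db)):
        if processed[p]:
            continue
        nbrs = neighbors(p)
        processed[p] = True
        order.append((p, reach[p]))
        cd = core_distance(p, nbrs)
        if cd is not None:
            seeds = []
            update(nbrs, p, seeds, cd)
            while seeds:
                d, q = heapq.heappop(seeds)
                if processed[q] or d > reach[q]:
                    continue  # stale heap entry
                nbrs_q = neighbors(q)
                processed[q] = True
                order.append((q, reach[q]))
                cd_q = core_distance(q, nbrs_q)
                if cd_q is not None:
                    update(nbrs_q, q, seeds, cd_q)
    return order

ordering = optics([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)], 2.0, 2)
```

A production implementation would use a spatial index for the neighborhood queries rather than the quadratic scan used here.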
== Extracting the clusters ==
Using a reachability-plot (a special kind of dendrogram), the hierarchical structure of the clusters can be obtained easily. It is a 2D plot, with the ordering of the points as processed by OPTICS on the x-axis and the reachability distance on the y-axis. Since points belonging to a cluster have a low reachability distance to their nearest neighbor, the clusters show up as valleys in the reachability plot. The deeper the valley, the denser the cluster.
The image above illustrates this concept. In its upper left area, a synthetic example data set is shown. The upper right part visualizes the spanning tree produced by OPTICS, and the lower part shows the reachability plot as computed by OPTICS. Colors in this plot are labels, and not computed by the algorithm; but it is clearly visible how the valleys in the plot correspond to the clusters in the above data set. The yellow points in this image are considered noise, and no valley is found in their reachability plot. They are usually not assigned to clusters, except the omnipresent "all data" cluster in a hierarchical result.
Extracting clusters from this plot can be done manually by selecting ranges on the x-axis after visual inspection, by selecting a threshold on the y-axis (the result is then similar to a DBSCAN clustering result with the same ε and MinPts parameters; here a value of 0.1 may yield good results), or by different algorithms that try to detect the valleys by steepness, knee detection, or local maxima. A range of the plot beginning with a steep descent and ending with a steep ascent is considered a valley, and corresponds to a contiguous area of high density. Additional care must be taken with the last points in a valley to assign them to the inner or outer cluster; this can be achieved by considering the predecessor. Clusterings obtained this way usually are hierarchical, and cannot be achieved by a single DBSCAN run.
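The y-axis-threshold extraction can be sketched as follows (a simplification: it starts a new group at every point whose reachability exceeds the threshold, without the noise handling of a full extraction):

```python
def extract_by_threshold(order, threshold):
    """Cut the reachability plot at a fixed height: consecutive points
    whose reachability stays below the threshold form one cluster.
    `order` is the OPTICS output as (point, reachability) pairs, with
    None for UNDEFINED."""
    clusters, current = [], []
    for point, r in order:
        if r is None or r > threshold:  # reachability spike: new group
            if current:
                clusters.append(current)
            current = [point]
        else:
            current.append(point)
    if current:
        clusters.append(current)
    return clusters

# toy ordering: two dense valleys separated by one large spike
clusters = extract_by_threshold(
    [(0, None), (1, 0.2), (2, 0.3), (3, 5.0), (4, 0.25)], 1.0)
```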
== Complexity ==
Like DBSCAN, OPTICS processes each point once, and performs one ε-neighborhood query during this processing. Given a spatial index that grants a neighborhood query in {\displaystyle O(\log n)} runtime, an overall runtime of {\displaystyle O(n\cdot \log n)} is obtained. The worst case however is {\displaystyle O(n^{2})}, as with DBSCAN. The authors of the original OPTICS paper report an actual constant slowdown factor of 1.6 compared to DBSCAN. Note that the value of ε might heavily influence the cost of the algorithm, since a value too large might raise the cost of a neighborhood query to linear complexity.
In particular, choosing {\displaystyle \varepsilon >\max _{x,y}d(x,y)} (larger than the maximum distance in the data set) is possible, but leads to quadratic complexity, since every neighborhood query returns the full data set. Even when no spatial index is available, this comes at additional cost in managing the heap. Therefore, ε should be chosen appropriately for the data set.
== Extensions ==
OPTICS-OF is an outlier detection algorithm based on OPTICS. The main use is the extraction of outliers from an existing run of OPTICS at low cost compared to using a different outlier detection method. The better known version LOF is based on the same concepts.
DeLi-Clu, Density-Link-Clustering combines ideas from single-linkage clustering and OPTICS, eliminating the ε parameter and offering performance improvements over OPTICS.
HiSC is a hierarchical subspace clustering (axis-parallel) method based on OPTICS.
HiCO is a hierarchical correlation clustering algorithm based on OPTICS.
DiSH is an improvement over HiSC that can find more complex hierarchies.
FOPTICS is a faster implementation using random projections.
HDBSCAN* is based on a refinement of DBSCAN, excluding border-points from the clusters and thus following more strictly the basic definition of density-levels by Hartigan.
== Availability ==
Java implementations of OPTICS, OPTICS-OF, DeLi-Clu, HiSC, HiCO and DiSH are available in the ELKI data mining framework (with index acceleration for several distance functions, and with automatic cluster extraction using the ξ extraction method). Other Java implementations include the Weka extension (no support for ξ cluster extraction).
The R package "dbscan" includes a C++ implementation of OPTICS (with both traditional dbscan-like and ξ cluster extraction) using a k-d tree for index acceleration for Euclidean distance only.
Python implementations of OPTICS are available in the PyClustering library and in scikit-learn. HDBSCAN* is available in the hdbscan library.
== References == | Wikipedia/OPTICS_algorithm |
Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation.
However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, gated recurrent units (GRUs) were introduced as a more computationally efficient alternative.
In recent years, transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
== History ==
=== Before modern ===
One origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex formed by parallel fibers, Purkinje cells, and granule cells. In 1933, Lorente de Nó discovered "recurrent, reciprocal connections" by Golgi's method, and proposed that excitatory loops explain certain aspects of the vestibulo-ocular reflex. During the 1940s, multiple people proposed the existence of feedback in the brain, in contrast to the previous understanding of the neural system as a purely feedforward structure. Hebb considered the "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943), which proposed the McCulloch-Pitts neuron model, considered networks that contain cycles; the current activity of such networks can be affected by activity indefinitely far in the past. They were both interested in closed loops as possible explanations for, e.g., epilepsy and causalgia. Recurrent inhibition was proposed in 1946 as a negative feedback mechanism in motor control. Neural feedback loops were a common topic of discussion at the Macy conferences.
Frank Rosenblatt in 1960 published "close-loop cross-coupled perceptrons", which are 3-layered perceptron networks whose middle layer contains recurrent connections that change by a Hebbian learning rule. Later, in Principles of Neurodynamics (1961), he described "closed-loop cross-coupled" and "back-coupled" perceptron networks, made theoretical and experimental studies of Hebbian learning in these networks, and noted that a fully cross-coupled perceptron network is equivalent to an infinitely deep feedforward network.
Similar networks were published by Kaoru Nakano in 1971, Shun'ichi Amari in 1972, and William A. Little in 1974, who was acknowledged by Hopfield in his 1982 paper.
Another origin of RNN was statistical mechanics. The Ising model was developed by Wilhelm Lenz and Ernst Ising in the 1920s as a simple statistical mechanical model of magnets at equilibrium. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time.
The Sherrington–Kirkpatrick model of spin glass, published in 1975, is the Hopfield network with random initialization. Sherrington and Kirkpatrick found that it is highly likely for the energy function of the SK model to have many local minima. In the 1982 paper, Hopfield applied this recently developed theory to study the Hopfield network with binary activation functions. In a 1984 paper he extended this to continuous activation functions. It became a standard model for the study of neural networks through statistical mechanics.
=== Modern ===
Modern RNN networks are mainly based on two architectures: LSTM and BRNN.
At the resurgence of neural networks in the 1980s, recurrent networks were studied again. They were sometimes called "iterated nets". Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1995 and set accuracy records in multiple applications domains. It became the default choice for RNN architecture.
Bidirectional recurrent neural networks (BRNN) use two RNNs that process the same input in opposite directions. These two are often combined, giving the bidirectional LSTM architecture.
Around 2006, bidirectional LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications. They also improved large-vocabulary speech recognition and text-to-speech synthesis, and were used in Google voice search and dictation on Android devices. They broke records for improved machine translation, language modeling, and multilingual language processing. Also, LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.
The idea of encoder-decoder sequence transduction was developed in the early 2010s. The papers most commonly cited as the originators of seq2seq are two papers from 2014. A seq2seq architecture employs two RNNs, typically LSTMs, an "encoder" and a "decoder", for sequence transduction, such as machine translation. They became state of the art in machine translation, and were instrumental in the development of attention mechanisms and transformers.
== Configurations ==
An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
=== Standard ===
RNNs come in many variants. Abstractly speaking, an RNN is a function f_θ of type (x_t, h_t) ↦ (y_t, h_{t+1}), where
x_t: input vector;
h_t: hidden vector;
y_t: output vector;
θ: neural network parameters.
In words, it is a neural network that maps an input x_t into an output y_t, with the hidden vector h_t playing the role of "memory", a partial record of all previous input-output pairs. At each step, it transforms input to an output, and modifies its "memory" to help it to better perform future processing.
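The abstract map (x_t, h_t) ↦ (y_t, h_{t+1}) can be sketched concretely. The tanh parameterization below is an assumption for illustration; the abstract definition does not fix the form of f_θ:

```python
import math
import random

random.seed(0)

def init(rows, cols):
    """Small random weight matrix as nested lists (toy initialization)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rnn_step(theta, x, h):
    """One application of f_theta: (x_t, h_t) -> (y_t, h_{t+1}).
    The hidden state is updated from the current input and the
    previous hidden state; the output is read off the new state."""
    Wx, Wh, Wy = theta
    h_next = [math.tanh(a + b)
              for a, b in zip(matvec(Wx, x), matvec(Wh, h))]
    y = matvec(Wy, h_next)
    return y, h_next

theta = (init(5, 3), init(5, 5), init(2, 5))  # 3 inputs, 5 hidden, 2 outputs
h = [0.0] * 5
for t in range(4):  # unroll over a length-4 input sequence
    y, h = rnn_step(theta, [1.0, 0.5, -1.0], h)
```

Unrolling the loop over time steps is exactly the "unfolding" discussed below.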
The illustration to the right may be misleading to many because practical neural network topologies are frequently organized in "layers" and the drawing gives that appearance. However, what appears to be layers are, in fact, different steps in time, "unfolded" to produce the appearance of layers.
=== Stacked RNN ===
A stacked RNN, or deep RNN, is composed of multiple RNNs stacked one above the other. Abstractly, it is structured as follows:
Layer 1 has hidden vector h_{1,t}, parameters θ_1, and maps f_{θ_1}: (x_{0,t}, h_{1,t}) ↦ (x_{1,t}, h_{1,t+1}).
Layer 2 has hidden vector h_{2,t}, parameters θ_2, and maps f_{θ_2}: (x_{1,t}, h_{2,t}) ↦ (x_{2,t}, h_{2,t+1}).
...
Layer n has hidden vector h_{n,t}, parameters θ_n, and maps f_{θ_n}: (x_{n-1,t}, h_{n,t}) ↦ (x_{n,t}, h_{n,t+1}).
Each layer operates as a stand-alone RNN, and each layer's output sequence is used as the input sequence to the layer above. There is no conceptual limit to the depth of stacked RNN.
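One time step of the stacking scheme above can be sketched as follows, where each cell is any function (x_t, h_t) ↦ (y_t, h_{t+1}) and the toy one-dimensional cells are purely illustrative:

```python
def stacked_step(cells, x, hs):
    """One time step of a stacked RNN: each layer's output becomes
    the input of the layer above, and every layer carries its own
    hidden state."""
    new_hs = []
    for cell, h in zip(cells, hs):
        x, h = cell(x, h)  # layer output feeds the layer above
        new_hs.append(h)
    return x, new_hs

# toy scalar "RNN" cells for illustration only
cell = lambda x, h: (x + h, 0.5 * (x + h))
y, hs = stacked_step([cell, cell, cell], 1.0, [0.0, 0.0, 0.0])
```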
=== Bidirectional ===
A bidirectional RNN (biRNN) is composed of two RNNs, one processing the input sequence in one direction, and another in the opposite direction. Abstractly, it is structured as follows:
The forward RNN processes in one direction: f_θ(x_0, h_0) = (y_0, h_1), f_θ(x_1, h_1) = (y_1, h_2), …
The backward RNN processes in the opposite direction: f′_{θ′}(x_N, h′_N) = (y′_N, h′_{N-1}), f′_{θ′}(x_{N-1}, h′_{N-1}) = (y′_{N-1}, h′_{N-2}), …
The two output sequences are then concatenated to give the total output: ((y_0, y′_0), (y_1, y′_1), …, (y_N, y′_N)).
Bidirectional RNN allows the model to process a token both in the context of what came before it and what came after it. By stacking multiple bidirectional RNNs together, the model can process a token increasingly contextually. The ELMo model (2018) is a stacked bidirectional LSTM which takes character-level inputs and produces word-level embeddings.
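The two-pass scheme above can be sketched as follows, with each cell being any function (x_t, h_t) ↦ (y_t, h_{t+1}); the running-sum toy cells are illustrative only:

```python
def birnn(cell_f, cell_b, xs, h0_f, h0_b):
    """Bidirectional RNN sketch: run one cell left-to-right and another
    right-to-left over the same sequence, then pair up the two output
    sequences position by position."""
    fwd, h = [], h0_f
    for x in xs:
        y, h = cell_f(x, h)
        fwd.append(y)
    bwd, h = [], h0_b
    for x in reversed(xs):
        y, h = cell_b(x, h)
        bwd.append(y)
    bwd.reverse()  # realign the backward outputs with the input order
    return list(zip(fwd, bwd))

# toy cells: running sum in each direction
cell = lambda x, h: (x + h, x + h)
out = birnn(cell, cell, [1, 2, 3], 0, 0)
```

Each output pair thus summarizes the prefix ending at a position and the suffix starting at it, which is exactly the left-plus-right context described above.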
=== Encoder-decoder ===
Two RNNs can be run front-to-back in an encoder-decoder configuration. The encoder RNN processes an input sequence into a sequence of hidden vectors, and the decoder RNN processes the sequence of hidden vectors to an output sequence, with an optional attention mechanism. This was used to construct state of the art neural machine translators during the 2014–2017 period. This was an instrumental step towards the development of transformers.
=== PixelRNN ===
An RNN may process data with more than one dimension. PixelRNN processes two-dimensional data, with many possible directions. For example, the row-by-row direction processes an n × n grid of vectors x_{i,j} in the following order: x_{1,1}, x_{1,2}, …, x_{1,n}, x_{2,1}, x_{2,2}, …, x_{2,n}, …, x_{n,n}.
The diagonal BiLSTM uses two LSTMs to process the same grid. One processes it from the top-left corner to the bottom-right, such that it processes x_{i,j} depending on its hidden state and cell state on the top and the left side: h_{i-1,j}, c_{i-1,j} and h_{i,j-1}, c_{i,j-1}. The other processes it from the top-right corner to the bottom-left.
== Architectures ==
=== Fully recurrent ===
Fully recurrent neural networks (FRNN) connect the outputs of all neurons to the inputs of all neurons. In other words, it is a fully connected network. This is the most general neural network topology, because all other topologies can be represented by setting some connection weights to zero to simulate the lack of connections between those neurons.
=== Hopfield ===
The Hopfield network is an RNN in which all connections across layers are equally sized. It requires stationary inputs and is thus not a general RNN, as it does not process sequences of patterns. However, it guarantees that it will converge. If the connections are trained using Hebbian learning, then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration.
=== Elman networks and Jordan networks ===
An Elman network is a three-layer network (arranged horizontally as x, y, and z in the illustration) with the addition of a set of context units (u in the illustration). The middle (hidden) layer is connected to these context units with a fixed weight of one. At each time step, the input is fed forward and a learning rule is applied. The fixed back-connections save a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence-prediction that are beyond the power of a standard multilayer perceptron.
Jordan networks are similar to Elman networks. The context units are fed from the output layer instead of the hidden layer. The context units in a Jordan network are also called the state layer. They have a recurrent connection to themselves.
Elman and Jordan networks are also known as "Simple recurrent networks" (SRN).
Elman network
{\displaystyle {\begin{aligned}h_{t}&=\sigma _{h}(W_{h}x_{t}+U_{h}h_{t-1}+b_{h})\\y_{t}&=\sigma _{y}(W_{y}h_{t}+b_{y})\end{aligned}}}
Jordan network
{\displaystyle {\begin{aligned}h_{t}&=\sigma _{h}(W_{h}x_{t}+U_{h}s_{t}+b_{h})\\y_{t}&=\sigma _{y}(W_{y}h_{t}+b_{y})\\s_{t}&=\sigma _{s}(W_{s,s}s_{t-1}+W_{s,y}y_{t-1}+b_{s})\end{aligned}}}
Variables and functions
x_t: input vector
h_t: hidden layer vector
s_t: "state" vector
y_t: output vector
W, U and b: parameter matrices and vector
σ: activation functions
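The Elman equations above can be evaluated directly. The choices σ_h = tanh and σ_y = identity below are assumptions for illustration, since the equations leave the activation functions unspecified:

```python
import math

def elman_step(params, x, h_prev):
    """One step of an Elman network:
    h_t = tanh(W_h x_t + U_h h_{t-1} + b_h),  y_t = W_y h_t + b_y."""
    Wh, Uh, bh, Wy, by = params
    h = [math.tanh(sum(w * xi for w, xi in zip(Wh[i], x))
                   + sum(u * hj for u, hj in zip(Uh[i], h_prev))
                   + bh[i])
         for i in range(len(bh))]
    y = [sum(w * hi for w, hi in zip(Wy[k], h)) + by[k]
         for k in range(len(by))]
    return y, h

# tiny hand-set example: 1 input unit, 2 hidden units, 1 output unit
params = (
    [[1.0], [0.5]],            # W_h: input -> hidden
    [[0.0, 0.0], [0.0, 0.0]],  # U_h: hidden -> hidden (recurrent)
    [0.0, 0.0],                # b_h
    [[1.0, 1.0]],              # W_y: hidden -> output
    [0.0],                     # b_y
)
y, h = elman_step(params, [1.0], [0.0, 0.0])
```

A Jordan step would differ only in feeding the state vector s_t (driven by the previous output) into the hidden layer instead of h_{t-1}.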
=== Long short-term memory ===
Long short-term memory (LSTM) is the most widely used RNN architecture. It was designed to solve the vanishing gradient problem. LSTM is normally augmented by recurrent gates called "forget gates". LSTM prevents backpropagated errors from vanishing or exploding. Instead, errors can flow backward through unlimited numbers of virtual layers unfolded in space. That is, LSTM can learn tasks that require memories of events that happened thousands or even millions of discrete time steps earlier. Problem-specific LSTM-like topologies can be evolved. LSTM works even given long delays between significant events and can handle signals that mix low and high-frequency components.
Many applications use stacks of LSTMs, a configuration called "deep LSTM". LSTM can learn to recognize context-sensitive languages, unlike previous models based on hidden Markov models (HMMs) and similar concepts.
=== Gated recurrent unit ===
Gated recurrent units (GRUs), introduced in 2014, were designed as a simplification of LSTM. They are used in the full form and in several further simplified variants. They have fewer parameters than LSTM, as they lack an output gate.
Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory. There does not appear to be a particular performance difference between LSTM and GRU.
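The fully gated GRU keeps only a reset gate and an update gate, with no separate cell state or output gate, which is where the parameter saving comes from. A sketch of the standard formulation (layer sizes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1 - z) * h_prev + z * h_tilde           # interpolate old and candidate state

rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
Wz, Wr, Wh = (rng.normal(size=(n_hid, n_in)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(n_hid, n_hid)) for _ in range(3))

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```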
==== Bidirectional associative memory ====
Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bidirectionality comes from passing information through a matrix and its transpose. Typically, bipolar encoding is preferred to binary encoding of the associative pairs. Recently, stochastic BAM models using Markov stepping were optimized for increased network stability and relevance to real-world applications.
A BAM network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.
=== Echo state ===
Echo state networks (ESN) have a sparsely connected random hidden layer. The weights of output neurons are the only part of the network that can change (be trained). ESNs are good at reproducing certain time series. A variant for spiking neurons is known as a liquid state machine.
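Because only the output weights are trained, fitting an ESN reduces to linear (ridge) regression on the collected reservoir states. A toy sketch, learning to predict a sine wave one step ahead (reservoir size, sparsity, spectral-radius scaling and the ridge penalty are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in = 50, 1

# Sparse random reservoir, rescaled to spectral radius < 1 (echo state property)
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_in))

# Task: map sin(t) to sin(t + dt), i.e. one-step-ahead prediction
t = np.linspace(0, 20, 500)
u, target = np.sin(t[:-1]), np.sin(t[1:])

states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for k, u_k in enumerate(u):
    x = np.tanh(W @ x + W_in @ np.atleast_1d(u_k))  # fixed, untrained dynamics
    states[k] = x

# Train ONLY the output weights, by ridge regression on the collected states
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ W_out
mse = np.mean((pred[100:] - target[100:]) ** 2)  # skip the initial transient
```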
=== Recursive ===
A recursive neural network is created by applying the same set of weights recursively over a differentiable graph-like structure by traversing the structure in topological order. Such networks are typically also trained by the reverse mode of automatic differentiation. They can process distributed representations of structure, such as logical terms. A special case of recursive neural networks is the RNN whose structure corresponds to a linear chain. Recursive neural networks have been applied to natural language processing. The Recursive Neural Tensor Network uses a tensor-based composition function for all nodes in the tree.
=== Neural Turing machines ===
Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they interact. The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for the usage of fuzzy amounts of each memory address and a record of chronology.
Neural network pushdown automata (NNPDA) are similar to NTMs, but tapes are replaced by analog stacks that are differentiable and trained. In this way, they are similar in complexity to recognizers of context free grammars (CFGs).
Recurrent neural networks are Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
== Training ==
=== Teacher forcing ===
An RNN can be trained into a conditionally generative model of sequences, also known as autoregression.
Concretely, consider the problem of machine translation: given a sequence {\displaystyle (x_{1},x_{2},\dots ,x_{n})} of English words, the model is to produce a sequence {\displaystyle (y_{1},\dots ,y_{m})} of French words. It is to be solved by a seq2seq model.
During training, the encoder half of the model would first ingest {\displaystyle (x_{1},x_{2},\dots ,x_{n})}, then the decoder half would start generating a sequence {\displaystyle ({\hat {y}}_{1},{\hat {y}}_{2},\dots ,{\hat {y}}_{l})}. The problem is that if the model makes a mistake early on, say at {\displaystyle {\hat {y}}_{2}}, then subsequent tokens are likely to also be mistakes. This makes it inefficient for the model to obtain a learning signal, since the model would mostly learn to shift {\displaystyle {\hat {y}}_{2}} towards {\displaystyle y_{2}}, but not the others.
Teacher forcing makes the decoder use the correct output sequence for generating the next entry in the sequence. For example, it would see {\displaystyle (y_{1},\dots ,y_{k})} in order to generate {\displaystyle {\hat {y}}_{k+1}}.
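The difference can be sketched as two decoding loops; here `decoder_step` is a hypothetical stand-in for whatever RNN cell the decoder uses, and the embedding table and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
vocab, n_hid = 10, 8
E = rng.normal(size=(vocab, n_hid))        # toy embedding table (assumed)
W = rng.normal(size=(vocab, n_hid)) * 0.1  # toy output projection (assumed)

def decoder_step(h, token):
    # stand-in decoder cell: new hidden state and logits over the vocabulary
    h = np.tanh(h + E[token])
    return h, W @ h

y_true = [3, 1, 4, 1, 5]                   # reference output sequence; 0 acts as start token

# Free running: each step is conditioned on the model's own previous guess,
# so an early mistake corrupts every later input.
h, token, free_inputs = np.zeros(n_hid), 0, []
for _ in y_true:
    free_inputs.append(token)
    h, logits = decoder_step(h, token)
    token = int(np.argmax(logits))

# Teacher forcing: each step is conditioned on the CORRECT previous token.
h, forced_inputs = np.zeros(n_hid), [0]
for y in y_true[:-1]:
    h, logits = decoder_step(h, forced_inputs[-1])
    forced_inputs.append(y)
```

Under teacher forcing the decoder inputs are the shifted reference sequence, regardless of what the model predicted.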
=== Gradient descent ===
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. In neural networks, it can be used to minimize the error term by changing each weight in proportion to the derivative of the error with respect to that weight, provided the non-linear activation functions are differentiable.
The standard method for training RNN by gradient descent is the "backpropagation through time" (BPTT) algorithm, which is a special case of the general algorithm of backpropagation. A more computationally expensive online variant is called "Real-Time Recurrent Learning" or RTRL, which is an instance of automatic differentiation in the forward accumulation mode with stacked tangent vectors. Unlike BPTT, this algorithm is local in time but not local in space.
In this context, local in space means that a unit's weight vector can be updated using only information stored in the connected units and the unit itself such that update complexity of a single unit is linear in the dimensionality of the weight vector. Local in time means that the updates take place continually (on-line) and depend only on the most recent time step rather than on multiple time steps within a given time horizon as in BPTT. Biological neural networks appear to be local with respect to both time and space.
For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon. An online hybrid between BPTT and RTRL with intermediate complexity exists, along with variants for continuous time.
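BPTT is just the chain rule applied to the unrolled network. For a scalar linear RNN h_t = w·h_{t-1} + x_t with loss L = h_T (a deliberately minimal example, not a general implementation), the backward pass accumulates dL/dw and can be checked against finite differences:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0, 0.3])
w = 0.8

def forward(w):
    h, hs = 0.0, [0.0]
    for x_t in x:
        h = w * h + x_t        # linear recurrence h_t = w h_{t-1} + x_t
        hs.append(h)
    return h, hs               # loss L is simply the final state h_T

# BPTT: walk the unrolled graph backward, accumulating dL/dw
L, hs = forward(w)
grad, dh = 0.0, 1.0            # dL/dh_T = 1
for t in range(len(x), 0, -1):
    grad += dh * hs[t - 1]     # local dh_t/dw = h_{t-1}, times upstream gradient
    dh *= w                    # dh_t/dh_{t-1} = w

# Finite-difference check of the BPTT gradient
eps = 1e-6
num = (forward(w + eps)[0] - forward(w - eps)[0]) / (2 * eps)
```

Storing the forward activations `hs` for the whole horizon is exactly the memory cost noted above.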
A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events. LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems. This problem is also solved in the independently recurrent neural network (IndRNN) by reducing the context of a neuron to its own past state and the cross-neuron information can then be explored in the following layers. Memories of different ranges including long-term memory can be learned without the gradient vanishing and exploding problem.
The on-line algorithm called causal recursive backpropagation (CRBP), implements and combines BPTT and RTRL paradigms for locally recurrent networks. It works with the most general locally recurrent networks. The CRBP algorithm can minimize the global error term. This fact improves the stability of the algorithm, providing a unifying view of gradient calculation techniques for recurrent networks with local feedback.
One approach to gradient information computation in RNNs with arbitrary architectures is based on the diagrammatic derivation of signal-flow graphs. It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations. It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.
=== Connectionist temporal classification ===
The connectionist temporal classification (CTC) is a specialized loss function for training RNNs for sequence modeling problems where the timing is variable.
=== Global optimization methods ===
Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence. Typically, the sum-squared difference between the predictions and the target values specified in the training sequence is used to represent the error of the current weight vector. Arbitrary global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs is the genetic algorithm, especially in unstructured networks.
Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:
Each weight encoded in the chromosome is assigned to the respective weight link of the network.
The training set is presented to the network which propagates the input signals forward.
The mean-squared error is returned to the fitness function.
This function drives the genetic selection process.
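The chromosome-to-fitness mapping described in the steps above can be sketched as follows (the network shape, the toy training set, and the use of the reciprocal of the MSE as fitness are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid = 2, 3
X = rng.normal(size=(20, n_in))
y = X.sum(axis=1)                          # toy training targets

# One gene per weight link: hidden weights first, then output weights
n_genes = n_hid * n_in + n_hid

def fitness(chromosome):
    # 1. assign each encoded weight to its respective weight link
    W1 = chromosome[: n_hid * n_in].reshape(n_hid, n_in)
    w2 = chromosome[n_hid * n_in :]
    # 2. propagate the training set forward through the network
    pred = np.tanh(X @ W1.T) @ w2
    # 3. return the reciprocal of the mean-squared error (to be maximized)
    mse = np.mean((pred - y) ** 2)
    return 1.0 / (mse + 1e-12)

population = rng.normal(size=(30, n_genes))          # many chromosomes
scores = np.array([fitness(c) for c in population])
best = population[scores.argmax()]                   # selection acts on these scores
```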
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is:
When the neural network has learned a certain percentage of the training data or
When the minimum value of the mean-squared-error is satisfied or
When the maximum number of training generations has been reached.
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.
== Other architectures ==
=== Independently RNN (IndRNN) ===
The independently recurrent neural network (IndRNN) addresses the gradient vanishing and exploding problems in the traditional fully connected RNN. Each neuron in one layer only receives its own past state as context information (instead of full connectivity to all other neurons in this layer) and thus neurons are independent of each other's history. The gradient backpropagation can be regulated to avoid gradient vanishing and exploding in order to keep long or short-term memory. The cross-neuron information is explored in the next layers. IndRNN can be robustly trained with non-saturated nonlinear functions such as ReLU. Deep networks can be trained using skip connections.
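The defining change versus a fully connected RNN is that the recurrent weight becomes a per-neuron vector applied elementwise, rather than a matrix; a minimal sketch (sizes and the ReLU choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hid = 3, 4
W = rng.normal(size=(n_hid, n_in))
u = rng.uniform(-1, 1, size=n_hid)   # ONE recurrent weight per neuron
b = np.zeros(n_hid)

def indrnn_step(x, h_prev):
    # each neuron sees only its own past state: u * h_prev is elementwise,
    # unlike the matrix product U @ h_prev of a fully connected RNN
    return np.maximum(0.0, W @ x + u * h_prev + b)   # non-saturated ReLU works here

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h = indrnn_step(x, h)
```

Constraining |u| controls how fast each neuron's memory decays, which is how the gradient can be regulated.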
=== Neural history compressor ===
The neural history compressor is an unsupervised stack of RNNs. At the input level, it learns to predict its next input from the previous inputs. Only unpredictable inputs of some RNN in the hierarchy become inputs to the next higher level RNN, which therefore recomputes its internal state only rarely. Each higher level RNN thus studies a compressed representation of the information in the RNN below. This is done such that the input sequence can be precisely reconstructed from the representation at the highest level.
The system effectively minimizes the description length or the negative logarithm of the probability of the data. Given a lot of learnable predictability in the incoming data sequence, the highest level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.
It is possible to distill the RNN hierarchy into two RNNs: the "conscious" chunker (higher level) and the "subconscious" automatizer (lower level). Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker. This makes it easy for the automatizer to learn appropriate, rarely changing memories across long intervals. In turn, this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events.
A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
=== Second order RNNs ===
Second-order RNNs use higher-order weights {\displaystyle w_{ijk}} instead of the standard {\displaystyle w_{ij}} weights, and states can be a product. This allows a direct mapping to a finite-state machine in training, stability, and representation. Long short-term memory is an example of this but has no such formal mappings or proof of stability.
=== Hierarchical recurrent neural network ===
Hierarchical recurrent neural networks (HRNN) connect their neurons in various ways to decompose hierarchical behavior into useful subprograms. Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models.
Hierarchical recurrent neural networks are useful in forecasting, helping to predict disaggregated inflation components of the consumer price index (CPI). The HRNN model leverages information from higher levels in the CPI hierarchy to enhance lower-level predictions. Evaluation of a substantial dataset from the US CPI-U index demonstrates the superior performance of the HRNN model compared to various established inflation prediction methods.
=== Recurrent multilayer perceptron network ===
Generally, a recurrent multilayer perceptron network (RMLP network) consists of cascaded subnetworks, each containing multiple layers of nodes. Each subnetwork is feed-forward except for the last layer, which can have feedback connections. Each of these subnets is connected only by feed-forward connections.
=== Multiple timescales model ===
A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization depending on the spatial connection between neurons and on distinct types of neuron activities, each with distinct time properties. With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors. The biological approval of such a type of hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence. Such a hierarchy also agrees with theories of memory posited by philosopher Henri Bergson, which have been incorporated into an MTRNN model.
=== Memristive networks ===
Greg Snider of HP Labs describes a system of cortical computing with memristive nanodevices. The memristors (memory resistors) are implemented by thin film materials in which the resistance is electrically tuned via the transport of ions or oxygen vacancies within the film. DARPA's SyNAPSE project has funded IBM Research and HP Labs, in collaboration with the Boston University Department of Cognitive and Neural Systems (CNS), to develop neuromorphic architectures that may be based on memristive systems.
Memristive networks are a particular type of physical neural network that have very similar properties to (Little-)Hopfield networks, as they have continuous dynamics, a limited memory capacity and natural relaxation via the minimization of a function which is asymptotic to the Ising model. In this sense, the dynamics of a memristive circuit have the advantage compared to a Resistor-Capacitor network to have a more interesting non-linear behavior. From this point of view, engineering analog memristive networks account for a peculiar type of neuromorphic engineering in which the device behavior depends on the circuit wiring or topology.
The evolution of these networks can be studied analytically using variations of the Caravelli–Traversa–Di Ventra equation.
=== Continuous-time ===
A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects on a neuron of the incoming inputs. They are typically analyzed by dynamical systems theory. Many RNN models in neuroscience are continuous-time.
For a neuron {\displaystyle i} in the network with activation {\displaystyle y_{i}}, the rate of change of activation is given by:
{\displaystyle \tau _{i}{\dot {y}}_{i}=-y_{i}+\sum _{j=1}^{n}w_{ji}\sigma (y_{j}-\Theta _{j})+I_{i}(t)}
Where:
{\displaystyle \tau _{i}}: time constant of postsynaptic node
{\displaystyle y_{i}}: activation of postsynaptic node
{\displaystyle {\dot {y}}_{i}}: rate of change of activation of postsynaptic node
{\displaystyle w_{ji}}: weight of connection from presynaptic to postsynaptic node
{\displaystyle \sigma (x)}: sigmoid of x, e.g. {\displaystyle \sigma (x)=1/(1+e^{-x})}
{\displaystyle y_{j}}: activation of presynaptic node
{\displaystyle \Theta _{j}}: bias of presynaptic node
{\displaystyle I_{i}(t)}: input (if any) to node
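The ODE above can be simulated with a simple forward-Euler step (step size, weights, and inputs below are illustrative assumptions, and W is indexed so that W[i, j] corresponds to w_ji):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
tau = np.full(n, 2.0)          # time constants tau_i
W = rng.normal(size=(n, n))    # W[i, j] = w_ji, presynaptic j -> postsynaptic i
theta = np.zeros(n)            # presynaptic biases Theta_j

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def euler_step(y, I, dt=0.01):
    # tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - Theta_j) + I_i(t)
    dydt = (-y + W @ sigmoid(y - theta) + I) / tau
    return y + dt * dydt

y = np.zeros(n)
I = np.zeros(n)
for _ in range(1000):          # integrate 10 time units of the dynamics
    y = euler_step(y, I)
```

Taking the Euler step size to match the sampling interval is one way to see the correspondence with discrete-time RNNs noted below.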
CTRNNs have been applied to evolutionary robotics where they have been used to address vision, co-operation, and minimal cognitive behaviour.
Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have transformed into equivalent difference equations. This transformation can be thought of as occurring after the post-synaptic node activation functions {\displaystyle y_{i}(t)} have been low-pass filtered but prior to sampling.
Recurrent neural networks are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.
From a time-series perspective, RNNs can appear as nonlinear versions of finite impulse response and infinite impulse response filters and also as a nonlinear autoregressive exogenous model (NARX). RNN has infinite impulse response whereas convolutional neural networks have finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.
The effect of memory-based learning for the recognition of sequences can also be implemented by a more biological-based model which uses the silencing mechanism exhibited in neurons with a relatively high frequency spiking activity.
Additional stored states and the storage under direct control by the network can be added to both infinite-impulse and finite-impulse networks. Another network or graph can also replace the storage if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated states or gated memory and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called Feedback Neural Network (FNN).
== Libraries ==
Modern libraries provide runtime-optimized implementations of the above functionality, or allow the slow loop to be sped up by just-in-time compilation.
Apache Singa
Caffe: Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
Chainer: Fully in Python, production support for CPU, GPU, distributed training.
Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark.
Flux: includes interfaces for RNNs, including GRUs and LSTMs, written in Julia.
Keras: High-level API, providing a wrapper to many other deep learning libraries.
Microsoft Cognitive Toolkit
MXNet: an open-source deep learning framework used to train and deploy deep neural networks.
PyTorch: Tensors and Dynamic neural networks in Python with GPU acceleration.
TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU and Google's proprietary TPU, mobile
Theano: A deep-learning library for Python with an API largely compatible with the NumPy library.
Torch: A scientific computing framework with support for machine learning algorithms, written in C and Lua.
== Applications ==
Applications of recurrent neural networks include:
Machine translation
Robot control
Time series prediction
Speech recognition
Speech synthesis
Brain–computer interfaces
Time series anomaly detection
Text-to-Video model
Rhythm learning
Music composition
Grammar learning
Handwriting recognition
Human action recognition
Protein homology detection
Predicting subcellular localization of proteins
Several prediction tasks in the area of business process management
Prediction in medical care pathways
Predictions of fusion plasma disruptions in reactors (Fusion Recurrent Neural Network (FRNN) code)
== References ==
== Further reading ==
Mandic, Danilo P.; Chambers, Jonathon A. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 978-0-471-49517-8.
Grossberg, Stephen (2013-02-22). "Recurrent Neural Networks". Scholarpedia. 8 (2): 1888. Bibcode:2013SchpJ...8.1888G. doi:10.4249/scholarpedia.1888. ISSN 1941-6016.
Recurrent Neural Networks. List of RNN papers by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research.
The activation function of a node in an artificial neural network is a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function is nonlinear.
Modern activation functions include the logistic (sigmoid) function used in the 2012 speech recognition model developed by Hinton et al; the ReLU used in the 2012 AlexNet computer vision model and in the 2015 ResNet model; and the smooth version of the ReLU, the GELU, which was used in the 2018 BERT model.
== Comparison of activation functions ==
Aside from their empirical performance, activation functions also have different mathematical properties:
Nonlinear
When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property. When multiple layers use the identity activation function, the entire network is equivalent to a single-layer model.
Range
When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only limited weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights. In the latter case, smaller learning rates are typically necessary.
Continuously differentiable
This property is desirable (ReLU is not continuously differentiable and has some issues with gradient-based optimization, but it is still possible) for enabling gradient-based optimization methods. The binary step activation function is not differentiable at 0, and it differentiates to 0 for all other values, so gradient-based methods can make no progress with it.
These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in variational autoencoders.
== Mathematical details ==
The most common activation functions can be divided into three categories: ridge functions, radial functions and fold functions.
An activation function {\displaystyle f} is saturating if {\displaystyle \lim _{|v|\to \infty }|\nabla f(v)|=0}. It is nonsaturating if {\displaystyle \lim _{|v|\to \infty }|\nabla f(v)|\neq 0}. Non-saturating activation functions, such as ReLU, may be better than saturating activation functions, because they are less likely to suffer from the vanishing gradient problem.
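The limit condition can be probed numerically: the sigmoid's analytic derivative collapses toward zero for large inputs, while ReLU's stays at 1 (a sketch; the probe value v = 50 is arbitrary):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def d_sigmoid(v):
    # analytic derivative: sigma(v) * (1 - sigma(v))
    s = sigmoid(v)
    return s * (1.0 - s)

def d_relu(v):
    # derivative of max(0, v): 1 for v > 0, else 0
    return float(v > 0)

v = 50.0
sig_grad = d_sigmoid(v)   # vanishingly small: the sigmoid saturates
relu_grad = d_relu(v)     # stays at 1: ReLU is non-saturating for v -> +inf
```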
=== Ridge activation functions ===
Ridge functions are multivariate functions acting on a linear combination of the input variables. Often used examples include:
Linear activation: {\displaystyle \phi (\mathbf {v} )=a+\mathbf {v} '\mathbf {b} },
ReLU activation: {\displaystyle \phi (\mathbf {v} )=\max(0,a+\mathbf {v} '\mathbf {b} )},
Heaviside activation: {\displaystyle \phi (\mathbf {v} )=1_{a+\mathbf {v} '\mathbf {b} >0}},
Logistic activation: {\displaystyle \phi (\mathbf {v} )=(1+\exp(-a-\mathbf {v} '\mathbf {b} ))^{-1}}.
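All four ridge forms share the same affine argument a + v'b; transcribed directly (the values of a, b, and v are illustrative):

```python
import numpy as np

a = 0.5
b = np.array([1.0, -2.0])
v = np.array([0.3, 0.1])
z = a + v @ b                          # the shared affine part a + v'b

linear    = z                          # linear activation
relu      = max(0.0, z)                # ReLU activation
heaviside = 1.0 if z > 0 else 0.0      # Heaviside activation
logistic  = 1.0 / (1.0 + np.exp(-z))   # logistic activation
```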
In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. In its simplest form, this function is binary—that is, either the neuron is firing or not. Neurons also cannot fire faster than a certain rate, motivating sigmoid activation functions whose range is a finite interval.
The function looks like {\displaystyle \phi (\mathbf {v} )=U(a+\mathbf {v} '\mathbf {b} )}, where {\displaystyle U} is the Heaviside step function.
If a line has a positive slope, on the other hand, it may reflect the increase in firing rate that occurs as input current increases. Such a function would be of the form {\displaystyle \phi (\mathbf {v} )=a+\mathbf {v} '\mathbf {b} }.
=== Radial activation functions ===
A special class of activation functions known as radial basis functions (RBFs) are used in RBF networks. These activation functions can take many forms, but they are usually found as one of the following functions:
Gaussian: {\displaystyle \,\phi (\mathbf {v} )=\exp \left(-{\frac {\|\mathbf {v} -\mathbf {c} \|^{2}}{2\sigma ^{2}}}\right)}
Multiquadratics: {\displaystyle \,\phi (\mathbf {v} )={\sqrt {\|\mathbf {v} -\mathbf {c} \|^{2}+a^{2}}}}
Inverse multiquadratics: {\displaystyle \,\phi (\mathbf {v} )=\left(\|\mathbf {v} -\mathbf {c} \|^{2}+a^{2}\right)^{-{\frac {1}{2}}}}
Polyharmonic splines
where {\displaystyle \mathbf {c} } is the vector representing the function center and {\displaystyle a} and {\displaystyle \sigma } are parameters affecting the spread of the radius.
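Transcribed directly, with c, a, and σ as illustrative parameter values:

```python
import numpy as np

c = np.array([0.0, 0.0])   # function center
a, sigma = 1.0, 1.0        # spread parameters

def gaussian(v):
    return np.exp(-np.sum((v - c) ** 2) / (2 * sigma ** 2))

def multiquadratic(v):
    return np.sqrt(np.sum((v - c) ** 2) + a ** 2)

def inverse_multiquadratic(v):
    return 1.0 / multiquadratic(v)

v0 = np.array([0.0, 0.0])  # evaluate at the center
g0 = gaussian(v0)          # the Gaussian peaks at its center
m0 = multiquadratic(v0)    # equals a at the center
```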
=== Other examples ===
Periodic functions can serve as activation functions. Usually the sinusoid is used, as any periodic function is decomposable into sinusoids by the Fourier transform.
Quadratic activation maps {\displaystyle x\mapsto x^{2}}.
=== Folding activation functions ===
Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. In multiclass classification the softmax activation is often used.
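Softmax is the canonical folding activation for multiclass outputs: it aggregates over all inputs of the layer rather than acting elementwise. A numerically stable sketch (subtracting the maximum before exponentiating, which leaves the result unchanged):

```python
import numpy as np

def softmax(x):
    # subtract the max for numerical stability; the result is unchanged
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))  # a probability distribution over 3 classes
```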
=== Table of activation functions ===
The following table compares the properties of several activation functions that are functions of one fold x from the previous layer or layers:
The following table lists activation functions that are not functions of a single fold x from the previous layer or layers:
^ Here, {\displaystyle \delta _{ij}} is the Kronecker delta.
^ For instance, {\displaystyle j} could be iterating through the number of kernels of the previous neural network layer while {\displaystyle i} iterates through the number of kernels of the current layer.
=== Quantum activation functions ===
In quantum neural networks programmed on gate-model quantum computers, based on quantum perceptrons instead of variational quantum circuits, the non-linearity of the activation function can be implemented with no need of measuring the output of each perceptron at each layer. The quantum properties loaded within the circuit such as superposition can be preserved by creating the Taylor series of the argument computed by the perceptron itself, with suitable quantum circuits computing the powers up to a wanted approximation degree. Because of the flexibility of such quantum circuits, they can be designed in order to approximate any arbitrary classical activation function.
== See also ==
Logistic function
Rectifier (neural networks)
Stability (learning theory)
Softmax function
== References ==
== Further reading ==
Kunc, Vladimír; Kléma, Jiří (2024-02-14), Three Decades of Activations: A Comprehensive Survey of 400 Activation Functions for Neural Networks, arXiv:2402.09092
Nwankpa, Chigozie; Ijomah, Winifred; Gachagan, Anthony; Marshall, Stephen (2018-11-08). "Activation Functions: Comparison of trends in Practice and Research for Deep Learning". arXiv:1811.03378 [cs.LG].
Dubey, Shiv Ram; Singh, Satish Kumar; Chaudhuri, Bidyut Baran (2022). "Activation functions in deep learning: A comprehensive survey and benchmark". Neurocomputing. 503. Elsevier BV: 92–108. arXiv:2109.14545. doi:10.1016/j.neucom.2022.06.111. ISSN 0925-2312.
A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text. Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.
== Models ==
There are different models, including open source models. CogVideo, which accepts Chinese-language input, is the earliest text-to-video model, of 9.4 billion parameters, with a demo version of its open source code first presented on GitHub in 2022. That year, Meta Platforms released a partial text-to-video model called "Make-A-Video", and Google Brain (later Google DeepMind) introduced Imagen Video, a text-to-video model with a 3D U-Net.
In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation. The VideoFusion model decomposes the diffusion process into two components: base noise and residual noise, which are shared across frames to ensure temporal coherence. By utilizing a pre-trained image diffusion model as a base generator, the model efficiently generated high-quality and coherent videos. Fine-tuning the pre-trained model on video data addressed the domain gap between image and video data, enhancing the model's ability to produce realistic and consistent video sequences. In the same month, Adobe introduced Firefly AI as part of its features.
In January 2024, Google announced development of a text-to-video model named Lumiere which is anticipated to integrate advanced video editing capabilities. Matthias Niessner and Lourdes Agapito at AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video by using 2D and 3D neural representations of shape, appearances, and motion for controllable video synthesis of avatars. In June 2024, Luma Labs launched its Dream Machine video tool. That same month, Kuaishou extended its Kling AI text-to-video model to international users. In July 2024, TikTok owner ByteDance released Jimeng AI in China, through its subsidiary, Faceu Technology. By September 2024, the Chinese AI company MiniMax debuted its video-01 model, joining other established AI model companies like Zhipu AI, Baichuan, and Moonshot AI, which contribute to China’s involvement in AI technology.
Alternative approaches to text-to-video models include Google's Phenaki, Hour One, Colossyan, Runway's Gen-3 Alpha, and OpenAI's Sora. Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and TuneAVideo, have emerged. FLUX.1 developer Black Forest Labs has announced its text-to-video model SOTA. Google was preparing to launch a video generation tool named Veo for YouTube Shorts in 2025. In May 2025, Google launched the Veo 3 iteration of the model. It was noted for its impressive audio generation capabilities, which had been a limitation of earlier text-to-video models.
== Architecture and training ==
Several architectures have been used to create text-to-video models. Similar to text-to-image models, these models can be trained using recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks, which have been used for pixel transformation models and stochastic video generation models, aiding consistency and realism respectively. Transformer models are an alternative. Generative adversarial networks (GANs), variational autoencoders (VAEs), which can aid in the prediction of human motion, and diffusion models have also been used to develop the image generation aspects of the model.
Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M. These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM. These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts.
The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence. This predictive process is subject to decline in quality as the length of the video increases due to resource limitations.
== Limitations ==
Despite the rapid evolution of text-to-video models in their performance, a primary limitation is that they are computationally demanding, which limits their capacity to produce high-quality and lengthy outputs. Additionally, these models require a large amount of specific training data to generate high-quality and coherent outputs, which raises the issue of accessibility.
Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning. This can occur due to limitations in capturing semantic context embedded in text, which affects the model’s ability to align generated video with the user’s intended message. Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation.
Another issue with the outputs is that text or fine details in AI-generated videos often appear garbled, a problem that stable diffusion models also struggle with. Examples include distorted hands and unreadable text.
== Ethics ==
The deployment of Text-to-Video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent. Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences.
== Impacts and applications ==
Text-to-video models offer a broad range of applications that may benefit various fields, from educational and promotional to creative industries. These models can streamline content creation for training videos, movie previews, gaming assets, and visualizations, making it easier to generate content.
== Comparison of existing models ==
== See also ==
Text-to-image model
AI slop
VideoPoet, Google's unreleased model and a precursor of Lumiere
Deepfake
Human image synthesis
ChatGPT
== References == | Wikipedia/Text-to-video_model |
In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover.
Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of the k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, a method known as nearest neighbor interpolation.
For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor.
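The plurality vote and the optional 1/d weighting described above can be sketched in a few lines of Python. This is an illustrative toy implementation, not a library routine; the small epsilon guarding against zero distance is an implementation choice, not part of the definition:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3, weighted=False):
    """Classify `query` by a plurality vote of its k nearest neighbours.

    `train` is a list of (vector, label) pairs; with weighted=True each
    neighbour votes with weight 1/d instead of 1.
    """
    # Sort neighbours by Euclidean distance to the query point.
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter()
    for x, label in neighbours:
        d = math.dist(x, query)
        # A small epsilon guards against division by zero when d == 0.
        votes[label] += 1.0 / (d + 1e-12) if weighted else 1.0
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_classify(train, (0.2, 0.1), k=3))  # two of the three nearest are "A"
```

With `weighted=True`, near-duplicates of the query dominate the vote, which is exactly the effect the 1/d scheme is meant to produce.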
The input consists of the k closest training examples in a data set.
The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.
A peculiarity (sometimes even a disadvantage) of the k-NN algorithm is its sensitivity to the local structure of the data.
In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy.
== Statistical setting ==
Suppose we have pairs {\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\dots ,(X_{n},Y_{n})} taking values in {\displaystyle \mathbb {R} ^{d}\times \{1,2\}}, where Y is the class label of X, so that {\displaystyle X|Y=r\sim P_{r}} for {\displaystyle r=1,2} (and probability distributions {\displaystyle P_{r}}). Given some norm {\displaystyle \|\cdot \|} on {\displaystyle \mathbb {R} ^{d}} and a point {\displaystyle x\in \mathbb {R} ^{d}}, let {\displaystyle (X_{(1)},Y_{(1)}),\dots ,(X_{(n)},Y_{(n)})} be a reordering of the training data such that {\displaystyle \|X_{(1)}-x\|\leq \dots \leq \|X_{(n)}-x\|}.
== Algorithm ==
The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.
In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point.
A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood components analysis.
A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map (SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data. K-NN can then be applied to the SOM.
== Parameter selection ==
The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification, but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm.
The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes.
In binary (two-class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.
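One simple heuristic in the spirit of the techniques mentioned above is leave-one-out cross-validation: classify each training point using the remaining points and pick the k with the lowest error. A minimal sketch (the candidate values of k and the toy data are arbitrary illustrative choices):

```python
import math
from collections import Counter

def loo_error(data, k):
    """Leave-one-out error rate of a k-NN majority-vote classifier."""
    errors = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]          # hold out the i-th point
        neigh = sorted(rest, key=lambda p: math.dist(p[0], x))[:k]
        pred = Counter(label for _, label in neigh).most_common(1)[0][0]
        errors += pred != y
    return errors / len(data)

data = [((0.0,), "A"), ((0.2,), "A"), ((0.4,), "A"),
        ((1.0,), "B"), ((1.2,), "B"), ((1.4,), "B")]
# Odd candidate values of k avoid tied votes in this two-class problem.
best_k = min([1, 3, 5], key=lambda k: loo_error(data, k))
```

On this toy set, k = 5 fails completely: each held-out point sees only two same-class neighbours among its five nearest, so the vote always goes to the other class.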
== The 1-nearest neighbor classifier ==
The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier that assigns a point x to the class of its closest neighbour in the feature space, that is
{\displaystyle C_{n}^{1nn}(x)=Y_{(1)}}.
As the size of training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).
== The weighted nearest neighbour classifier ==
The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight {\displaystyle 1/k} and all others 0 weight. This can be generalised to weighted nearest neighbour classifiers. That is, where the ith nearest neighbour is assigned a weight {\displaystyle w_{ni}}, with {\textstyle \sum _{i=1}^{n}w_{ni}=1}. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds.
Let {\displaystyle C_{n}^{wnn}} denote the weighted nearest neighbour classifier with weights {\displaystyle \{w_{ni}\}_{i=1}^{n}}. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion
{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{wnn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left(B_{1}s_{n}^{2}+B_{2}t_{n}^{2}\right)\{1+o(1)\},}
for constants {\displaystyle B_{1}} and {\displaystyle B_{2}}, where {\displaystyle s_{n}^{2}=\sum _{i=1}^{n}w_{ni}^{2}} and {\displaystyle t_{n}=n^{-2/d}\sum _{i=1}^{n}w_{ni}\left\{i^{1+2/d}-(i-1)^{1+2/d}\right\}}.
The optimal weighting scheme {\displaystyle \{w_{ni}^{*}\}_{i=1}^{n}}, which balances the two terms in the display above, is given as follows: set {\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor },
{\displaystyle w_{ni}^{*}={\frac {1}{k^{*}}}\left[1+{\frac {d}{2}}-{\frac {d}{2{k^{*}}^{2/d}}}\{i^{1+2/d}-(i-1)^{1+2/d}\}\right]} for {\displaystyle i=1,2,\dots ,k^{*}}, and {\displaystyle w_{ni}^{*}=0} for {\displaystyle i=k^{*}+1,\dots ,n}.
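The optimal weights are straightforward to evaluate numerically. In the sketch below, the constant B is distribution-dependent and unknown in practice, so B = 1 is used purely for illustration; note that the bracketed increments telescope, so the weights sum to exactly 1:

```python
import math

def optimal_weights(n, d, B=1.0):
    """Evaluate the optimal weight scheme w*_{ni} from the display above.

    B is a distribution-dependent constant, unknown in practice; B = 1
    here is purely illustrative.
    """
    k_star = math.floor(B * n ** (4 / (d + 4)))
    weights = []
    for i in range(1, n + 1):
        if i <= k_star:
            incr = i ** (1 + 2 / d) - (i - 1) ** (1 + 2 / d)
            weights.append((1 / k_star)
                           * (1 + d / 2 - d / (2 * k_star ** (2 / d)) * incr))
        else:
            weights.append(0.0)   # neighbours beyond k* get zero weight
    return k_star, weights

k_star, w = optimal_weights(n=32, d=2)   # here k* = floor(32^(2/3)) = 10
```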
With optimal weights the dominant term in the asymptotic expansion of the excess risk is {\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})}. Similar results are true when using a bagged nearest neighbour classifier.
== Properties ==
k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.
The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed.
k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Various improvements to the k-NN speed are possible by using proximity graphs.
For multi-class k-NN classification, Cover and Hart (1967) prove an upper bound error rate of
{\displaystyle R^{*}\ \leq \ R_{k\mathrm {NN} }\ \leq \ R^{*}\left(2-{\frac {MR^{*}}{M-1}}\right)}
where {\displaystyle R^{*}} is the Bayes error rate (which is the minimal error rate possible), {\displaystyle R_{kNN}} is the asymptotic k-NN error rate, and M is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution. For {\displaystyle M=2} and as the Bayes error rate {\displaystyle R^{*}} approaches zero, this limit reduces to "not more than twice the Bayes error rate".
== Error rates ==
There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly consistent (that is, for any joint distribution on {\displaystyle (X,Y)}) provided {\displaystyle k:=k_{n}} diverges and {\displaystyle k_{n}/n} converges to zero as {\displaystyle n\to \infty }.
Let {\displaystyle C_{n}^{knn}} denote the k nearest neighbour classifier based on a training set of size n. Under certain regularity conditions, the excess risk yields the following asymptotic expansion
{\displaystyle {\mathcal {R}}_{\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}_{\mathcal {R}}(C^{\text{Bayes}})=\left\{B_{1}{\frac {1}{k}}+B_{2}\left({\frac {k}{n}}\right)^{4/d}\right\}\{1+o(1)\},}
for some constants {\displaystyle B_{1}} and {\displaystyle B_{2}}.
The choice {\displaystyle k^{*}=\left\lfloor Bn^{\frac {4}{d+4}}\right\rfloor } offers a trade-off between the two terms in the above display, for which the {\displaystyle k^{*}}-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate {\displaystyle {\mathcal {O}}\left(n^{-{\frac {4}{d+4}}}\right)}.
== Metric learning ==
The K-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric.
== Feature extraction ==
When the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g. the same measurement in both feet and meters), then the input data will be transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen, it is expected that the feature set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full-size input. Feature extraction is performed on raw data prior to applying the k-NN algorithm on the transformed data in feature space.
An example of a typical computer vision computation pipeline for face recognition using k-NN including feature extraction and dimension reduction pre-processing steps (usually implemented with OpenCV):
Haar face detection
Mean-shift tracking analysis
PCA or Fisher LDA projection into feature space, followed by k-NN classification
== Dimension reduction ==
For high-dimensional data (e.g., with number of dimensions more than 10) dimension reduction is usually performed prior to applying the k-NN algorithm in order to avoid the effects of the curse of dimensionality.
The curse of dimensionality in the k-NN context basically means that Euclidean distance is unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same).
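This concentration of distances is easy to observe empirically. The toy experiment below (uniform random points in the unit hypercube; the function name and sample sizes are arbitrary) measures the relative spread (max - min)/min of distances from a random query to a sample of points; the spread collapses as the dimension grows:

```python
import math
import random

def distance_spread(d, n_points=2000, seed=0):
    """(max - min) / min of distances from a random query point to a
    sample of uniform random points in the d-dimensional unit cube."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(d)]
    dists = [math.dist([rng.random() for _ in range(d)], query)
             for _ in range(n_points)]
    return (max(dists) - min(dists)) / min(dists)

# The relative spread collapses with growing dimension: the "nearest" and
# "farthest" neighbours become almost equally far away.
for d in (2, 10, 100, 1000):
    print(d, round(distance_spread(d), 2))
```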
Feature extraction and dimension reduction can be combined in one step using principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA) techniques as a pre-processing step, followed by clustering by k-NN on feature vectors in reduced-dimension space. This process is also called low-dimensional embedding.
For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensional time series) running a fast approximate k-NN search using locality sensitive hashing, "random projections", "sketches" or other high-dimensional similarity search techniques from the VLDB toolbox might be the only feasible option.
== Decision boundary ==
Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.
== Data reduction ==
Data reduction is one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called the prototypes and can be found as follows:
Select the class-outliers, that is, training data that are classified incorrectly by k-NN (for a given k)
Separate the rest of the data into two sets: (i) the prototypes that are used for the classification decisions and (ii) the absorbed points that can be correctly classified by k-NN using prototypes. The absorbed points can then be removed from the training set.
=== Selection of class-outliers ===
A training example surrounded by examples of other classes is called a class outlier. Causes of class outliers include:
random error
insufficient training examples of this class (an isolated example appears instead of a cluster)
missing important features (the classes are separated in other dimensions which we don't know)
too many training examples of other classes (unbalanced classes) that create a "hostile" background for the given small class
Class outliers with k-NN produce noise. They can be detected and separated for future analysis. Given two natural numbers, k>r>0, a training example is called a (k,r)NN class-outlier if its k nearest neighbors include more than r examples of other classes.
=== Condensed Nearest Neighbor for data reduction ===
Condensed nearest neighbor (CNN, the Hart algorithm) is an algorithm designed to reduce the data set for k-NN classification. It selects the set of prototypes U from the training data, such that 1NN with U can classify the examples almost as accurately as 1NN does with the whole data set.
Given a training set X, CNN works iteratively:
Scan all elements of X, looking for an element x whose nearest prototype from U has a different label than x.
Remove x from X and add it to U
Repeat the scan until no more prototypes are added to U.
Use U instead of X for classification. The examples that are not prototypes are called "absorbed" points.
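The iteration above can be sketched directly. This is a toy implementation: seeding U with the first example and leaving absorbed points in X rather than removing them are simplifications that do not change the resulting prototype set, since a point already in U is always its own nearest prototype:

```python
import math

def condensed_nn(X):
    """Hart's condensed nearest neighbour rule (a sketch).

    Returns a prototype set U such that 1-NN with U classifies every
    training example in X correctly. X is a list of (vector, label) pairs.
    """
    U = [X[0]]                       # seed with an arbitrary example
    changed = True
    while changed:                   # rescan until no prototype is added
        changed = False
        for x, y in X:
            nearest_label = min(U, key=lambda p: math.dist(p[0], x))[1]
            if nearest_label != y:   # misclassified: promote to prototype
                U.append((x, y))
                changed = True
    return U

X = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
     ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
U = condensed_nn(X)                  # two prototypes suffice here
```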
It is efficient to scan the training examples in order of decreasing border ratio. The border ratio of a training example x is defined as
a(x) = ‖x'-y‖ / ‖x-y‖
where ‖x-y‖ is the distance to the closest example y having a different label than x, and ‖x'-y‖ is the distance from y to its closest example x' with the same label as x.
The border ratio is in the interval [0,1] because ‖x'-y‖ never exceeds ‖x-y‖. This ordering gives preference to the borders of the classes for inclusion in the set of prototypes U. A point with a different label than x is called external to x. The calculation of the border ratio is illustrated by the figure on the right. The data points are labeled by colors: the initial point is x and its label is red. External points are blue and green. The external point closest to x is y. The red point closest to y is x'. The border ratio a(x) = ‖x'-y‖ / ‖x-y‖ is the attribute of the initial point x.
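The definition of a(x) can be written out directly. A minimal sketch (note that x' may be x itself, in which case a(x) = 1):

```python
import math

def border_ratio(x, label, data):
    """a(x) = ||x' - y|| / ||x - y||: y is the closest example with a label
    different from x; x' is the closest same-label example to that y."""
    external = [p for p, l in data if l != label]
    internal = [p for p, l in data if l == label]
    y = min(external, key=lambda p: math.dist(p, x))
    x_prime = min(internal, key=lambda p: math.dist(p, y))
    return math.dist(x_prime, y) / math.dist(x, y)

data = [((0.0, 0.0), "red"), ((0.5, 0.0), "red"), ((1.0, 0.0), "blue")]
```

Here the red point at (0, 0) has ratio 0.5 (another red point sits halfway to the blue one), while the red point at (0.5, 0) is itself the closest red point to y and gets ratio 1, so it is scanned first.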
Below is an illustration of CNN in a series of figures. There are three classes (red, green and blue). Fig. 1: initially there are 60 points in each class. Fig. 2 shows the 1NN classification map: each pixel is classified by 1NN using all the data. Fig. 3 shows the 5NN classification map. White areas correspond to the unclassified regions, where 5NN voting is tied (for example, if there are two green, two red and one blue points among 5 nearest neighbors). Fig. 4 shows the reduced data set. The crosses are the class-outliers selected by the (3,2)NN rule (all the three nearest neighbors of these instances belong to other classes); the squares are the prototypes, and the empty circles are the absorbed points. The left bottom corner shows the numbers of the class-outliers, prototypes and absorbed points for all three classes. The number of prototypes varies from 15% to 20% for different classes in this example. Fig. 5 shows that the 1NN classification map with the prototypes is very similar to that with the initial data set. The figures were produced using the Mirkes applet.
CNN model reduction for k-NN classifiers
== k-NN regression ==
In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables. One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. This algorithm works as follows:
Compute the Euclidean or Mahalanobis distance from the query example to the labeled examples.
Order the labeled examples by increasing distance.
Find a heuristically optimal number k of nearest neighbors, based on RMSE. This is done using cross validation.
Calculate an inverse distance weighted average with the k-nearest multivariate neighbors.
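Steps 1, 2 and 4 of the procedure above (with k fixed rather than tuned by cross-validation, and Euclidean distance) can be sketched as:

```python
import math

def knn_regress(train, query, k=3):
    """Inverse-distance-weighted average of the k nearest neighbours.

    `train` is a list of (vector, value) pairs; distances are Euclidean.
    """
    neigh = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    num = den = 0.0
    for x, value in neigh:
        d = math.dist(x, query)
        if d == 0:                 # exact match: return its value directly
            return value
        num += value / d           # weight each value by 1/d
        den += 1.0 / d
    return num / den

train = [((0.0,), 1.0), ((1.0,), 2.0), ((2.0,), 3.0), ((3.0,), 4.0)]
```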
== k-NN outlier ==
The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density, the more likely the query point is an outlier. Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, works quite well also in comparison to more recent and more complex approaches, according to a large scale experimental analysis.
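A minimal sketch of this outlier score, the distance to the k-th nearest neighbour:

```python
import math

def knn_outlier_score(point, data, k=3):
    """Distance to the k-th nearest neighbour of `point` in `data`.

    A larger score means a lower local density, i.e. the point is a more
    likely outlier.
    """
    dists = sorted(math.dist(point, x) for x in data if x != point)
    return dists[k - 1]

data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
# The isolated point (5, 5) scores far higher than the cluster members.
```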
== Validation of results ==
A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied.
== See also ==
Nearest centroid classifier
Closest pair of points problem
Nearest neighbor graph
Segmentation-based object categorization
== References ==
== Further reading ==
Dasarathy, Belur V., ed. (1991). Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press. ISBN 978-0818689307.
Shakhnarovich, Gregory; Darrell, Trevor; Indyk, Piotr, eds. (2005). Nearest-Neighbor Methods in Learning and Vision. MIT Press. ISBN 978-0262195478. | Wikipedia/K-nearest_neighbors_algorithm |
In behavioral psychology, reinforcement refers to consequences that increase the likelihood of an organism's future behavior, typically in the presence of a particular antecedent stimulus. For example, a rat can be trained to push a lever to receive food whenever a light is turned on; in this example, the light is the antecedent stimulus, the lever pushing is the operant behavior, and the food is the reinforcer. Likewise, a student who receives attention and praise when answering a teacher's question will be more likely to answer future questions in class; the teacher's question is the antecedent, the student's response is the behavior, and the praise and attention are the reinforcements. Punishment is the inverse of reinforcement, referring to any consequence that decreases the likelihood that a behavior will occur. In operant conditioning terms, punishment does not need to involve any type of pain, fear, or physical actions; even a brief spoken expression of disapproval is a type of punishment.
Consequences that lead to appetitive behavior such as subjective "wanting" and "liking" (desire and pleasure) function as rewards or positive reinforcement. There is also negative reinforcement, which involves taking away an undesirable stimulus. An example of negative reinforcement would be taking an aspirin to relieve a headache.
Reinforcement is an important component of operant conditioning and behavior modification. The concept has been applied in a variety of practical areas, including parenting, coaching, therapy, self-help, education, and management.
== Terminology ==
In the behavioral sciences, the terms "positive" and "negative" refer when used in their strict technical sense to the nature of the action performed by the conditioner rather than to the responding operant's evaluation of that action and its consequence(s). "Positive" actions are those that add a factor, be it pleasant or unpleasant, to the environment, whereas "negative" actions are those that remove or withhold from the environment a factor of either type. In turn, the strict sense of "reinforcement" refers only to reward-based conditioning; the introduction of unpleasant factors and the removal or withholding of pleasant factors are instead referred to as "punishment", which when used in its strict sense thus stands in contradistinction to "reinforcement". Thus, "positive reinforcement" refers to the addition of a pleasant factor, "positive punishment" refers to the addition of an unpleasant factor, "negative reinforcement" refers to the removal or withholding of an unpleasant factor, and "negative punishment" refers to the removal or withholding of a pleasant factor.
This usage is at odds with some non-technical usages of the four term combinations, especially in the case of the term "negative reinforcement", which is often used to denote what technical parlance would describe as "positive punishment" in that the non-technical usage interprets "reinforcement" as subsuming both reward and punishment and "negative" as referring to the responding operant's evaluation of the factor being introduced. By contrast, technical parlance would use the term "negative reinforcement" to describe encouragement of a given behavior by creating a scenario in which an unpleasant factor is or will be present but engaging in the behavior results in either escaping from that factor or preventing its occurrence, as in Martin Seligman’s experiment involving dogs learning to avoid electric shocks.
== Introduction ==
B.F. Skinner was a well-known and influential researcher who articulated many of the theoretical constructs of reinforcement and behaviorism. Skinner defined reinforcers according to the change in response strength (response rate) rather than according to more subjective criteria, such as what is pleasurable or valuable to someone. Accordingly, activities, foods or items considered pleasant or enjoyable may not necessarily be reinforcing (because they produce no increase in the response preceding them). Stimuli, settings, and activities only fit the definition of reinforcers if the behavior that immediately precedes the potential reinforcer increases in similar situations in the future; for example, a child who receives a cookie when he or she asks for one. If the frequency of "cookie-requesting behavior" increases, the cookie can be seen as reinforcing "cookie-requesting behavior". If, however, "cookie-requesting behavior" does not increase, the cookie cannot be considered reinforcing.
The sole criterion that determines if a stimulus is reinforcing is the change in probability of a behavior after administration of that potential reinforcer. Other theories may focus on additional factors such as whether the person expected a behavior to produce a given outcome, but in the behavioral theory, reinforcement is defined by an increased probability of a response.
The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in special education, applied behavior analysis, and the experimental analysis of behavior and is a core concept in some medical and psychopharmacology models, particularly addiction, dependence, and compulsion.
== History ==
Laboratory research on reinforcement is usually dated from the work of Edward Thorndike, known for his experiments with cats escaping from puzzle boxes. A number of others continued this research, notably B.F. Skinner, who published his seminal work on the topic, The Behavior of Organisms, in 1938, and elaborated this research in many subsequent publications. Notably, Skinner argued that positive reinforcement is superior to punishment in shaping behavior. Though punishment may seem just the opposite of reinforcement, Skinner claimed that they differ immensely, saying that positive reinforcement results in lasting behavioral modification (long-term) whereas punishment changes behavior only temporarily (short-term) and has many detrimental side-effects.
A great many researchers subsequently expanded our understanding of reinforcement and challenged some of Skinner's conclusions. For example, Azrin and Holz defined punishment as a “consequence of behavior that reduces the future probability of that behavior,” and some studies have shown that positive reinforcement and punishment are equally effective in modifying behavior. Research on the effects of positive reinforcement, negative reinforcement and punishment continue today as those concepts are fundamental to learning theory and apply to many practical applications of that theory.
== Operant conditioning ==
The term operant conditioning was introduced by Skinner to indicate that in his experimental paradigm, the organism is free to operate on the environment. In this paradigm, the experimenter cannot trigger the desirable response; the experimenter waits for the response to occur (to be emitted by the organism) and then a potential reinforcer is delivered. In the classical conditioning paradigm, the experimenter triggers (elicits) the desirable response by presenting a reflex eliciting stimulus, the unconditional stimulus (UCS), which they pair (precede) with a neutral stimulus, the conditional stimulus (CS).
Reinforcement is a basic term in operant conditioning. For the punishment aspect of operant conditioning, see punishment (psychology).
=== Positive reinforcement ===
Positive reinforcement occurs when a desirable event or stimulus is presented as a consequence of a behavior and the chance that this behavior will manifest in similar environments increases. For example, if reading a book is fun, then experiencing the fun positively reinforces the behavior of reading fun books. The person who receives the positive reinforcement (i.e., who has fun reading the book) will read more books to have more fun.
The high probability instruction (HPI) treatment is a behaviorist treatment based on the idea of positive reinforcement.
=== Negative reinforcement ===
Negative reinforcement increases the rate of a behavior that avoids or escapes an aversive situation or stimulus. That is, something unpleasant is already happening, and the behavior helps the person avoid or escape the unpleasantness. In contrast to positive reinforcement, which involves adding a pleasant stimulus, in negative reinforcement, the focus is on the removal of an unpleasant situation or stimulus. For example, if someone feels unhappy, then they might engage in a behavior (e.g., reading books) to escape from the aversive situation (e.g., their unhappy feelings). The success of that avoidant or escapist behavior in removing the unpleasant situation or stimulus reinforces the behavior.
Doing something unpleasant to people to prevent or remove a behavior from happening again is punishment, not negative reinforcement. The main difference is that reinforcement always increases the likelihood of a behavior (e.g., channel surfing while bored temporarily alleviated boredom; therefore, there will be more channel surfing while bored), whereas punishment decreases it (e.g., hangovers are an unpleasant stimulus, so people learn to avoid the behavior that led to that unpleasant stimulus).
=== Extinction ===
Extinction occurs when a given behavior is ignored (i.e. followed by no consequence). Behaviors disappear over time when they continuously receive no reinforcement. During a deliberate extinction, the targeted behavior spikes first (in an attempt to produce the expected, previously reinforced effects), and then declines over time. Neither reinforcement nor extinction needs to be deliberate in order to have an effect on a subject's behavior. For example, if a child reads books because they are fun, then the parents' decision to ignore the book reading will not remove the positive reinforcement (i.e., fun) the child receives from reading books. However, if a child engages in a behavior to get attention from the parents, then the parents' decision to ignore the behavior will cause the behavior to go extinct, and the child will find a different behavior to get their parents' attention.
=== Reinforcement versus punishment ===
Reinforcers serve to increase behaviors whereas punishers serve to decrease behaviors; thus, positive reinforcers are stimuli that the subject will work to attain, and negative reinforcers are stimuli that the subject will work to be rid of or to end. The table below illustrates the adding and subtracting of stimuli (pleasant or aversive) in relation to reinforcement vs. punishment.
=== Further ideas and concepts ===
Distinguishing between positive and negative reinforcement can be difficult and may not always be necessary. Focusing on what is being removed or added and how it affects behavior can be more helpful.
An event that punishes behavior for some may reinforce behavior for others.
Some reinforcement can include both positive and negative features, such as a drug addict taking drugs for the added euphoria (positive reinforcement) and also to eliminate withdrawal symptoms (negative reinforcement).
Reinforcement in the business world is essential in driving productivity. Employees are constantly motivated by the ability to receive a positive stimulus, such as a promotion or a bonus. Employees are also driven by negative reinforcement, such as by eliminating unpleasant tasks.
Though negative reinforcement can have a positive short-term effect in a workplace (i.e. it encourages a financially beneficial action), over-reliance on negative reinforcement hinders workers' ability to act creatively and stay engaged, which undermines long-term growth.
=== Primary and secondary reinforcers ===
A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing with a different stimulus in order to function as a reinforcer and most likely has obtained this function through evolution and its role in species' survival. Examples of primary reinforcers include food, water, and sex. Some primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another avoids it. Or one person may eat much food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.
A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus that functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money).
When trying to distinguish primary and secondary reinforcers in human examples, use the "caveman test." If the stimulus is something that a caveman would naturally find desirable (e.g. candy) then it is a primary reinforcer. If, on the other hand, the caveman would not react to it (e.g. a dollar bill), it is a secondary reinforcer. As with primary reinforcers, an organism can experience satisfaction and deprivation with secondary reinforcers.
=== Other reinforcement terms ===
A generalized reinforcer is a conditioned reinforcer that has obtained the reinforcing function by pairing with many other reinforcers and functions as a reinforcer under a wide variety of motivating operations. (One example of this is money, because it is paired with many other reinforcers.)
In reinforcer sampling, a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior.
Socially-mediated reinforcement involves the delivery of reinforcement that requires the behavior of another organism. For example, another person is providing the reinforcement.
The Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less-preferred activity.
Reinforcement hierarchy is a list of actions, rank-ordering the most desirable to least desirable consequences that may serve as a reinforcer. A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.
Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several minute delay in food delivery following the behavior.
Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which decreases the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".
== Natural and artificial reinforcement ==
In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed classifying reinforcement into events that increase the frequency of an operant behavior as a natural consequence of the behavior itself, and events that affect frequency by their requirement of human mediation, such as in a token economy where subjects are rewarded for certain behavior by the therapist.
In 1970, Baer and Wolf developed the concept of "behavioral traps." A behavioral trap requires only a simple response to enter the trap, yet once entered, the trap cannot be resisted in creating general behavior change. It is the use of a behavioral trap that increases a person's repertoire, by exposing them to the naturally occurring reinforcement of that behavior. Behavioral traps have four characteristics:
They are "baited" with desirable reinforcers that "lure" the student into the trap.
Only a low-effort response already in the repertoire is necessary to enter the trap.
Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted skills.
They can remain effective for long periods of time because the person shows few, if any, satiation effects.
Thus, artificial reinforcement can be used to build or develop generalizable skills, eventually transitioning to naturally occurring reinforcement to maintain or increase the behavior. Another example is a social situation that will generally result from a specific behavior once it has met a certain criterion.
== Intermittent reinforcement schedules ==
Behavior is not always reinforced every time it is emitted, and the pattern of reinforcement strongly affects how fast an operant response is learned, what its rate is at any given time, and how long it continues when reinforcement ceases. The simplest rules controlling reinforcement are continuous reinforcement, where every response is reinforced, and extinction, where no response is reinforced. Between these extremes, more complex schedules of reinforcement specify the rules that determine how and when a response will be followed by a reinforcer.
Specific schedules of reinforcement reliably induce specific patterns of response, and these rules apply across many different species. The varying consistency and predictability of reinforcement is an important influence on how the different schedules operate. Many simple and complex schedules were investigated at great length by B.F. Skinner using pigeons.
=== Simple schedules ===
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response.
Ratio schedule – the reinforcement depends only on the number of responses the organism has performed.
Continuous reinforcement (CRF) – a schedule of reinforcement in which every occurrence of the instrumental response (desired response) is followed by the reinforcer.
Fixed ratio (FR) – schedules deliver reinforcement after every nth response. An FR 1 schedule is synonymous with a CRF schedule.
(ex. Every three times a rat presses a button, it receives a slice of cheese)
Variable ratio schedule (VR) – reinforced on average every nth response, but not always on the nth response.
(ex. Gamblers win on average 1 out of every 10 turns on a slot machine; however, this is an average, and they could hypothetically win on any given turn)
Fixed interval (FI) – reinforced after n amount of time.
(ex. Every 10 minutes, a rat receives a slice of cheese when it presses a button. Eventually, the rat will learn to ignore the button until each 10-minute interval has elapsed)
Variable interval (VI) – reinforced on an average of n amount of time, but not always exactly n amount of time.
(ex. A radio host gives away concert tickets approximately every hour, but the exact minutes may vary)
Fixed time (FT) – Provides a reinforcing stimulus at a fixed time since the last reinforcement delivery, regardless of whether the subject has responded or not. In other words, it is a non-contingent schedule.
Variable time (VT) – Provides reinforcement at an average variable time since last reinforcement, regardless of whether the subject has responded or not.
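The decision rules for the basic schedules above can be sketched in code. The following Python sketch is purely illustrative; the class names and parameter values are hypothetical, not drawn from any standard library or published model.

```python
import random

class FixedRatio:
    """FR n: reinforce every nth response."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """VR n: reinforce after a variable number of responses averaging n."""
    def __init__(self, n, seed=0):
        self.n = n
        self.rng = random.Random(seed)
        self.count = 0
        self.required = self.rng.randint(1, 2 * self.n - 1)  # mean is n

    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI t: reinforce the first response made after t seconds elapse."""
    def __init__(self, t):
        self.t, self.last = t, 0.0

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

# An FR 1 schedule reinforces every response, i.e. it is equivalent to CRF.
assert all(FixedRatio(1).respond() for _ in range(3))
# FR 3: only every third response is reinforced.
fr3 = FixedRatio(3)
assert [fr3.respond() for _ in range(6)] == [False, False, True, False, False, True]
```

Note that the interval schedules make delivery contingent on a response occurring after the interval, whereas the non-contingent FT and VT schedules would deliver the stimulus on a timer regardless of responding.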
Simple schedules are utilized in many differential reinforcement procedures:
Differential reinforcement of alternative behavior (DRA) - A conditioning procedure in which an undesired response is decreased by placing it on extinction or, less commonly, providing contingent punishment, while simultaneously providing reinforcement contingent on a desirable response. An example would be a teacher attending to a student only when they raise their hand, while ignoring the student when he or she calls out.
Differential reinforcement of other behavior (DRO) – Also known as omission training procedures, an instrumental conditioning procedure in which a positive reinforcer is periodically delivered only if the participant does something other than the target response. An example would be reinforcing any hand action other than nose picking.
Differential reinforcement of incompatible behavior (DRI) – Used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
Differential reinforcement of low response rate (DRL) – Used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time that must elapse before a response is reinforced.
Differential reinforcement of high rate (DRH) – Used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.
==== Effects of different types of simple schedules ====
Fixed ratio: activity slows after reinforcer is delivered, then response rates increase until the next reinforcer delivery (post-reinforcement pause).
Variable ratio: rapid, steady rate of responding; most resistant to extinction.
Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
Variable interval: steady activity results, good resistance to extinction.
Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of gamblers at slot machines).
Fixed schedules produce "post-reinforcement pauses" (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
Fixed interval scallop: the pattern of responding that develops with a fixed interval reinforcement schedule; performance on a fixed interval reflects the subject's accuracy in telling time.
Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a greater wait before reinforcement) may experience "ratio strain" if thinned too quickly. This produces behavior similar to that seen during extinction.
Ratio strain: the disruption of responding that occurs when a fixed ratio response requirement is increased too rapidly.
Ratio run: high and steady rate of responding that completes each ratio requirement. Usually a higher ratio requirement causes longer post-reinforcement pauses to occur.
Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
Ratio schedules are more resistant than interval schedules and variable schedules more resistant than fixed ones.
Momentary changes in reinforcement value lead to dynamic changes in behavior.
=== Compound schedules ===
Compound schedules combine two or more different simple schedules in some way using the same reinforcer for the same behavior. There are many possibilities; among those most often used are:
Alternative schedules – A type of compound schedule where two or more simple schedules are in effect and whichever schedule is completed first results in reinforcement.
Conjunctive schedules – A complex schedule of reinforcement where two or more simple schedules are in effect independently of each other, and requirements on all of the simple schedules must be met for reinforcement.
Multiple schedules – Two or more schedules alternate over time, with a stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
Mixed schedules – Either of two, or more, schedules may occur with no stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
Concurrent schedules – A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Organisms are free to change back and forth between the response alternatives at any time.
Concurrent-chain schedule of reinforcement – A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial.
Interlocking schedules – A single schedule with two components where progress in one component affects progress in the other component. In an interlocking FR 60 FI 120-s schedule, for example, each response subtracts time from the interval component such that each response is "equal" to removing two seconds from the FI schedule.
Chained schedules – Reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
Tandem schedules – Reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
Higher-order schedules – completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI10 secs), two successive fixed interval schedules require completion before a response is reinforced.
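Two of the compound arrangements above differ only in their combining logic ("or" versus "and"). A minimal sketch, assuming a hypothetical FR 10 response requirement and an FI 60-s time requirement:

```python
def ratio_met(responses, n=10):
    """Has the (hypothetical) FR 10 response requirement been met?"""
    return responses >= n

def interval_met(elapsed_s, t=60):
    """Has the (hypothetical) FI 60-s time requirement been met?"""
    return elapsed_s >= t

def alternative_schedule(responses, elapsed_s):
    """Alternative schedule: whichever simple schedule is completed
    first results in reinforcement, so EITHER requirement suffices."""
    return ratio_met(responses) or interval_met(elapsed_s)

def conjunctive_schedule(responses, elapsed_s):
    """Conjunctive schedule: requirements on ALL the simple schedules
    must be met before reinforcement is delivered."""
    return ratio_met(responses) and interval_met(elapsed_s)

# 12 responses in 30 s: the ratio requirement is met, the interval is not.
assert alternative_schedule(12, 30) is True
assert conjunctive_schedule(12, 30) is False
```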
=== Superimposed schedules ===
The psychology term superimposed schedules of reinforcement refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. Reinforcers can be positive, negative, or both. An example is a person who comes home after a long day at work. The behavior of opening the front door is rewarded by a big kiss on the lips by the person's spouse and a rip in the pants from the family dog jumping enthusiastically. Another example of superimposed schedules of reinforcement is a pigeon in an experimental cage pecking at a button. The pecks deliver a hopper of grain every 20th peck, and access to water after every 200 pecks.
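The pigeon example can be made concrete: under superimposed schedules a single stream of responses is evaluated against both schedules at once. A minimal sketch (the FR 20 grain and FR 200 water values follow the example above; the function name is hypothetical):

```python
def superimposed_outcomes(pecks):
    """Evaluate each peck against superimposed FR 20 (grain) and
    FR 200 (water) schedules operating on the same response."""
    outcomes = []
    for n in range(1, pecks + 1):
        delivered = []
        if n % 20 == 0:    # FR 20: every 20th peck produces grain
            delivered.append("grain")
        if n % 200 == 0:   # FR 200: every 200th peck produces water
            delivered.append("water")
        outcomes.append(delivered)
    return outcomes

results = superimposed_outcomes(200)
assert results[19] == ["grain"]            # peck 20: grain only
assert results[199] == ["grain", "water"]  # peck 200: both reinforcers
```

This "and" structure contrasts with a concurrent ("or") schedule, where the organism would choose between separate response alternatives instead.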
Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B.F. Skinner and his colleagues (Skinner and Ferster, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food appears. This is a "ratio schedule". Also, a reinforcer could be delivered after an interval of time passed following a target behavior. An example is a rat that is given a food pellet immediately following the first response that occurs after two minutes has elapsed since the last lever press. This is called an "interval schedule".
In addition, ratio schedules can deliver reinforcement following fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time following a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects on behaviors of scheduling reinforcers.
If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement". Brechner (1974, 1977) introduced the concept of superimposed schedules of reinforcement in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other. In other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.
Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That is a reinforcement structure of three superimposed concurrent schedules of reinforcement.
Superimposed schedules of reinforcement can create the three classic conflict situations (approach–approach conflict, approach–avoidance conflict, and avoidance–avoidance conflict) described by Kurt Lewin (1935) and can operationalize other Lewinian situations analyzed by his force field analysis. Other examples of the use of superimposed schedules of reinforcement as an analytical tool are its application to the contingencies of rent control (Brechner, 2003) and problem of toxic waste dumping in the Los Angeles County storm drain system (Brechner, 2010).
=== Concurrent schedules ===
In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, in a two-alternative forced choice task, a pigeon in a Skinner box is faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may be linked so that behavior on one key affects the likelihood of reinforcement on the other.
It is not necessary for responses on the two schedules to be physically distinct. In an alternate way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject can respond on a second key to change between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g., the color of the main key) signals which schedule is in effect.
Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.
When both the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates in the two schedules and the relative reinforcement rates they deliver; this was first observed by R.J. Herrnstein in 1961. The matching law is a rule for instrumental behavior which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response (rate of behavior = rate of reinforcement). Animals and humans have a tendency to prefer choice in schedules.
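Herrnstein's relation can be written B1/(B1+B2) = R1/(R1+R2), where B1 and B2 are response rates on the two alternatives and R1 and R2 are the reinforcement rates they deliver. A minimal numerical sketch (the rates are hypothetical):

```python
def matching_prediction(r1, r2):
    """Matching law (Herrnstein, 1961): the relative response rate on
    alternative 1 equals the relative reinforcement rate it delivers."""
    return r1 / (r1 + r2)

# If key 1 delivers 40 reinforcers/hour and key 2 delivers 20/hour,
# the matching law predicts that 2/3 of responses go to key 1.
share = matching_prediction(40, 20)
assert abs(share - 2 / 3) < 1e-9
```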
== Shaping ==
Shaping is the reinforcement of successive approximations to a desired instrumental response. In training a rat to press a lever, for example, simply turning toward the lever is reinforced at first. Then, only turning and stepping toward it is reinforced. Eventually the rat will be reinforced for pressing the lever. The successful attainment of one behavior starts the shaping process for the next. As training progresses, the response becomes progressively more like the desired behavior, with each subsequent behavior becoming a closer approximation of the final behavior.
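The successive-approximation procedure can be sketched as a loop that reinforces any response meeting the current criterion and then moves the criterion closer to the target. Everything here (the response model, step sizes, trial count) is a hypothetical illustration, not a model from the literature.

```python
import random

def shape(target=1.0, step=0.2, trials=200, seed=1):
    """Toy shaping loop: reinforce responses that meet the current
    criterion, then raise the criterion toward the target behavior."""
    rng = random.Random(seed)
    criterion = 0.0   # start with a lenient criterion
    behavior = 0.0    # current typical response strength
    for _ in range(trials):
        response = behavior + rng.uniform(-step, step)  # natural variability
        if response >= criterion:                   # close enough: reinforce
            behavior = max(behavior, response)      # reinforced -> strengthened
            criterion = min(target, behavior + step / 4)  # raise the bar
    return behavior

# Over many trials the shaped behavior climbs toward the target.
assert shape() > 0.5
```

The key design point mirrors the rat example: early, lenient criteria (turning toward the lever) are replaced by progressively stricter ones (stepping toward it, then pressing it), so each reinforced response is a closer approximation of the final behavior.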
The intervention of shaping is used in many training situations, and also for individuals with autism as well as other developmental disabilities. When shaping is combined with other evidence-based practices such as Functional Communication Training (FCT), it can yield positive outcomes for human behavior. Shaping typically uses continuous reinforcement, but the response can later be shifted to an intermittent reinforcement schedule.
Shaping is also used for food refusal. Food refusal is when an individual has a partial or total aversion to food items. This can be as minimal as being a picky eater or so severe that it can affect an individual's health. Shaping has been used to achieve a high success rate for food acceptance.
== Chaining ==
Chaining involves linking discrete behaviors together in a series, such that the consequence of each behavior is both the reinforcement for the previous behavior, and the antecedent stimulus for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backwards chaining (starting from the last behavior) and total task chaining (teaching each behavior in the chain simultaneously). People's morning routines are a typical chain, with a series of behaviors (e.g. showering, drying off, getting dressed) occurring in sequence as a well learned habit.
Challenging behaviors seen in individuals with autism and other related disabilities have been successfully managed and maintained in studies using a schedule of chained reinforcements. Functional communication training is an intervention that often uses chained schedules of reinforcement to effectively promote the appropriate and desired functional communication response.
== Mathematical models ==
There has been research on building a mathematical model of reinforcement. This model is known as MPR, which is short for mathematical principles of reinforcement. Peter Killeen has made key discoveries in the field with his research on pigeons.
== Applications ==
Reinforcement and punishment are ubiquitous in human social interactions, and a great many applications of operant principles have been suggested and implemented. Following are a few examples.
=== Addiction and dependence ===
Positive and negative reinforcement play central roles in the development and maintenance of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive salience (i.e., it is "wanted" or "desired"), so as an addiction develops, deprivation of the drug leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the location of use – become associated with the intense reinforcement induced by the drug. These previously neutral stimuli acquire several properties: their appearance can induce craving, and they can become conditioned positive reinforcers of continued use. Thus, if an addicted individual encounters one of these drug cues, a craving for the associated drug may reappear. For example, anti-drug agencies previously used posters with images of drug paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer used because of the effects of incentive salience in causing relapse upon sight of the stimuli illustrated in the posters.
In drug dependent individuals, negative reinforcement occurs when a drug is self-administered in order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating) and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise during the state of drug withdrawal.
=== Animal training ===
Animal trainers and pet owners were applying the principles and practices of operant conditioning long before these ideas were named and studied, and animal training still provides one of the clearest and most convincing examples of operant control. Of the concepts and procedures described in this article, a few of the most salient are: availability of immediate reinforcement (e.g. the ever-present bag of dog yummies); contingency, assuring that reinforcement follows the desired behavior and not something else; the use of secondary reinforcement, as in sounding a clicker immediately after a desired response; shaping, as in gradually getting a dog to jump higher and higher; intermittent reinforcement, reducing the frequency of those yummies to induce persistent behavior without satiation; chaining, where a complex behavior is gradually put together.
=== Child behavior – parent management training ===
Providing positive reinforcement for appropriate child behaviors is a major focus of parent management training. Typically, parents learn to reward appropriate behavior through social rewards (such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a larger reward as part of an incentive system created collaboratively with the child). In addition, parents learn to select simple behaviors as an initial focus and reward each of the small steps that their child achieves towards reaching a larger goal (this concept is called "successive approximations"). They may also use indirect rewards such as progress charts. Providing positive reinforcement in the classroom can be beneficial to student success. When applying positive reinforcement to students, it's crucial to individualize it to each student's needs. This way, the student understands why they are receiving the praise, can accept it, and eventually learns to continue the action that was reinforced. For example, rewards or extra recess time might motivate some students more, whereas others might respond better to stickers or check marks indicating praise.
=== Economics ===
Both psychologists and economists have become interested in applying operant concepts and findings to the behavior of humans in the marketplace. An example
is the analysis of consumer demand, as indexed by the amount of a commodity that is purchased. In economics, the degree to which price influences consumption is called "the price elasticity of demand." Certain commodities are more elastic than others; for example, a change in price of certain foods may have a large effect on the amount bought, while gasoline and other essentials may be less affected by price changes. In terms of operant analysis, such effects may be interpreted in terms of motivations of consumers and the relative value of the commodities as reinforcers.
=== Gambling – variable ratio scheduling ===
As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an unpredictable number of responses. This schedule typically generates rapid, persistent responding. Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-pulling behavior in gamblers. Because the machines are programmed to pay out less money than they take in, the persistent slot-machine user invariably loses in the long run. Slot machines, and thus variable ratio reinforcement, have often been blamed as a factor underlying gambling addiction.
=== Praise ===
The concept of praise as a means of behavioral reinforcement in humans is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in the study of teacher and parent use of praise on children in promoting improved behavior and academic performance, but also in the study of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery. In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.
Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parenting training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.
=== Traumatic bonding ===
Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change.
One source describes the necessary conditions as follows:
'The necessary conditions for traumatic bonding are that one person must dominate the other and that the level of abuse chronically spikes and then subsides. The relationship is characterized by periods of permissive, compassionate, and even affectionate behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's options so as to perpetuate the power imbalance. Any threat to the balance of dominance and submission may be met with an escalating cycle of punishment ranging from seething intimidation to intensely violent outbursts. The victimizer also isolates the victim from other sources of support, which reduces the likelihood of detection and intervention, impairs the victim's ability to receive countervailing self-referent feedback, and strengthens the sense of unilateral dependency ... The traumatic effects of these abusive relationships may include the impairment of the victim's capacity for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of dependence upon the dominating person. Victims also may encounter a variety of unpleasant social and legal consequences of their emotional and behavioral affiliation with someone who perpetrated aggressive acts, even if they themselves were the recipients of the aggression.'
=== Video games ===
Most video games are designed around some type of compulsion loop, adding a type of positive reinforcement through a variable rate schedule to keep the player playing the game, though this can also lead to video game addiction.
As part of a trend in the monetization of video games in the 2010s, some games offered "loot boxes" as rewards or as items purchasable with real-world funds, offering a random selection of in-game items distributed by rarity. The practice has been tied to the same methods by which slot machines and other gambling devices dole out rewards, as it follows a variable rate schedule. While loot boxes are widely perceived as a form of gambling, the practice is classified as gambling in only a few countries and is otherwise legal. However, methods of using those items as virtual currency for online gambling, or trading them for real-world money, have created a skin gambling market that is under legal evaluation.
== Criticisms ==
The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement, and defines reinforcement as something that increases response strength (i.e., response strength is increased by things that increase response strength). However, the correct usage of reinforcement is that something is a reinforcer because of its effect on behavior, and not the other way around. It becomes circular if one says that a particular stimulus strengthens behavior because it is a reinforcer, and does not explain why a stimulus is producing that effect on the behavior. Other definitions have been proposed, such as F.D. Sheffield's "consummatory behavior contingent on a response", but these are not broadly used in psychology.
Increasingly, understanding of the role reinforcers play is moving away from a "strengthening" effect to a "signalling" effect: reinforcers are seen as increasing responding because they signal the behaviors that are likely to result in reinforcement. While in most practical applications the effect of any given reinforcer will be the same regardless of whether the reinforcer is signalling or strengthening, this approach helps to explain a number of behavioral phenomena, including patterns of responding on intermittent reinforcement schedules (fixed interval scallops) and the differential outcomes effect.
== See also ==
== References ==
== Further reading ==
== External links ==
An On-Line Positive Reinforcement Tutorial
Scholarpedia Reinforcement
scienceofbehavior.com Archived 2 October 2011 at the Wayback Machine
The transformer is a deep learning architecture based on the multi-head attention mechanism, in which text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets.
The modern version of the transformer was proposed in the 2017 paper "Attention Is All You Need" by researchers at Google. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers).
== History ==
=== Predecessors ===
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence.
Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer.
=== Attention with seq2seq ===
The idea of encoder-decoder sequence transduction had been developed in the early 2010s; commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
A 380M-parameter model for machine translation uses two long short-term memories (LSTM). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation.
The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation".
The relative performances were compared between global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time.
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop.
=== Parallelizing attention ===
Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence would be sufficient for language translation, thus the title "attention is all you need". That hypothesis was against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs.
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor to its widespread use in large neural networks.
=== AI boom era ===
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model.
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models.
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal learning. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024), use Transformers to analyse input data (like text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.
== Training ==
=== Methods for stabilizing training ===
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to the maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again.
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.
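The warmup-then-decay schedule can be sketched as follows. This follows the inverse-square-root schedule from the original paper; the default `d_model` and `warmup_steps` values are the ones used there, and the function name is ours:

```python
# Sketch of the learning-rate schedule from "Attention Is All You Need":
# linear warmup for `warmup_steps` steps, then inverse-square-root decay.
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Learning rate at a given training step (counting from 1)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The two branches of the `min` meet exactly at `step == warmup_steps`, so the rate rises linearly, peaks at the end of warmup, and decays afterwards.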
=== Pretrain-finetune ===
Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing
The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are:
restoring or repairing incomplete or corrupted text. For example, the input, "Thank you ~~ me to your party ~~ week", might generate the output, "Thank you for inviting me to your party last week".
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might be judged "not acceptable", because even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well.
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture.
=== Tasks ===
In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens:
{\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})}
and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task.
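As a minimal sketch, this loss can be computed directly from the probabilities the model assigns to the true masked-out tokens; the probabilities below are invented for illustration:

```python
import math

# Toy masked-language-modelling loss: sum of negative log-probabilities of
# the true tokens at the masked positions (illustrative values, not a model).
predicted = {           # position -> model's probability for the true token
    3: 0.9,             # e.g. P("sat" | "the cat [MASK] on the mat")
    6: 0.5,
}
loss = -sum(math.log(p) for p in predicted.values())
```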
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
== Architecture ==
All transformers have the same primary components:
Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as {\displaystyle xW}.
=== Tokenization ===
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between texts and token sequences is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size {\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown".
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
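A whole-word toy tokenizer illustrates the vocabulary and "[UNK]" handling described above. Real tokenizers such as BPE, WordPiece, and SentencePiece split text into subword units instead; the vocabulary here is hypothetical:

```python
# Toy tokenizer: a fixed vocabulary mapping words to integer tokens, with
# out-of-vocabulary words mapped to the special [UNK] token.
vocab = {"[UNK]": 0, "the": 1, "cat": 2, "sat": 3}   # hypothetical vocabulary

def encode(text):
    return [vocab.get(word, vocab["[UNK]"]) for word in text.split()]

def decode(ids):
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)
```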
=== Embedding ===
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix {\displaystyle M}. For example, if the input token is {\displaystyle 3}, then the one-hot representation is {\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector is
{\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}
The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors.
The number of dimensions in an embedding vector is called hidden size or embedding size and written as {\displaystyle d_{\text{emb}}}. This size is written as {\displaystyle d_{\text{model}}} in the original Transformer paper.
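The stated equivalence between table lookup and one-hot multiplication can be checked numerically; the sizes below are illustrative, not from any particular model:

```python
import numpy as np

# Embedding lookup equals multiplying a one-hot row vector by the matrix M.
n_vocabulary, d_emb = 6, 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n_vocabulary, d_emb))   # embedding matrix

token = 3
one_hot = np.zeros(n_vocabulary)
one_hot[token] = 1.0

lookup = M[token]            # table lookup
product = one_hot @ M        # one-hot row vector times M
```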
=== Un-embedding ===
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer:
{\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)}
The matrix has shape {\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrix {\displaystyle M} and the un-embedding matrix {\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying.
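A minimal sketch of the linear-softmax un-embedding, with weight tying implemented by setting W equal to the transpose of M; all sizes and values are illustrative:

```python
import numpy as np

# Un-embedding: hidden vector -> probability distribution over the vocabulary.
n_vocabulary, d_emb = 6, 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n_vocabulary, d_emb))   # embedding matrix
W = M.T                                          # weight tying: W = M transposed
b = np.zeros(n_vocabulary)

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

x = rng.standard_normal(d_emb)   # final hidden vector for one position
probs = softmax(x @ W + b)       # probability distribution over tokens
```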
=== Positional encoding ===
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man".
The positional encoding is defined as a function of type {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, where {\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper is:
{\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}}
where {\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}.
Here, {\displaystyle N} is a free parameter that should be significantly larger than the biggest {\displaystyle t} that would be input into the positional encoding function. The original paper uses {\displaystyle N=10000}.
The function is in a simpler form when written as a complex function of type {\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}:
{\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}}
where {\displaystyle r=N^{2/d}}.
The main reason for using this positional encoding function is that, under it, shifts are linear transformations:
{\displaystyle f(t+\Delta t)=\mathrm {diag} (f(\Delta t))f(t)}
where {\displaystyle \Delta t\in \mathbb {R} } is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication.
By taking a linear sum, any convolution can also be implemented as linear transformations:
{\displaystyle \sum _{j}c_{j}f(t+\Delta t_{j})=\left(\sum _{j}c_{j}\,\mathrm {diag} (f(\Delta t_{j}))\right)f(t)}
for any constants {\displaystyle c_{j}}. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position."
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
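The complex form makes the shift property easy to verify numerically. A minimal sketch, with d and N chosen small for illustration:

```python
import numpy as np

# Complex form of the sinusoidal positional encoding and a numerical check
# of the shift property: f(t + dt) equals f(t) scaled componentwise by f(dt),
# i.e. a (diagonal) linear transformation of f(t).
d, N = 8, 10000
r = N ** (2 / d)

def f(t):
    """f(t) = (exp(i t / r^k)) for k = 0, 1, ..., d/2 - 1."""
    k = np.arange(d // 2)
    return np.exp(1j * t / r ** k)

t, dt = 5.0, 3.0
shifted = f(t + dt)
rotated = f(dt) * f(t)   # diag(f(dt)) applied to f(t)
```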
=== Encoder-decoder (overview) ===
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
=== Feedforward network ===
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:
{\displaystyle \mathrm {FFN} (x)=\phi (xW^{(1)}+b^{(1)})W^{(2)}+b^{(2)}}
where {\displaystyle W^{(1)}} and {\displaystyle W^{(2)}} are weight matrices, {\displaystyle b^{(1)}} and {\displaystyle b^{(2)}} are bias vectors, and {\displaystyle \phi } is its activation function. The original Transformer used ReLU activation.
The number of neurons in the middle layer is called intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both GPT-2 series and BERT series, the intermediate size of a model is 4 times its embedding size: {\displaystyle d_{\text{ffn}}=4d_{\text{emb}}}.
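A minimal sketch of the FFN module with ReLU activation and the 4× ratio above; the weights are random placeholders, not trained values:

```python
import numpy as np

# Two-layer feedforward module: FFN(x) = ReLU(x W1 + b1) W2 + b2,
# with the common intermediate size d_ffn = 4 * d_emb.
d_emb = 8
d_ffn = 4 * d_emb
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((d_emb, d_ffn)) * 0.1, np.zeros(d_ffn)
W2, b2 = rng.standard_normal((d_ffn, d_emb)) * 0.1, np.zeros(d_emb)

def ffn(x):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2   # ReLU, then project back

x = rng.standard_normal(d_emb)
y = ffn(x)
```

Note that the module maps a vector of size `d_emb` back to size `d_emb`, so it can be applied at every position independently.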
=== Scaled dot-product attention ===
==== Attention head ====
The attention mechanisms used in the Transformer architecture are scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights {\displaystyle W^{Q}}, the key weights {\displaystyle W^{K}}, and the value weights {\displaystyle W^{V}}.
The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length {\displaystyle \ell _{\text{seq, query}}}, and each entry is a vector of dimension {\displaystyle d_{\text{emb, query}}}. Similarly for the key and value sequences.
For each vector {\displaystyle x_{i,{\text{query}}}} in the query sequence, it is multiplied by a matrix {\displaystyle W^{Q}} to produce a query vector {\displaystyle q_{i}=x_{i,{\text{query}}}W^{Q}}. The matrix of all query vectors is the query matrix:
{\displaystyle Q=X_{\text{query}}W^{Q}}
Similarly, we construct the key matrix {\displaystyle K=X_{\text{key}}W^{K}} and the value matrix {\displaystyle V=X_{\text{value}}W^{V}}.
It is usually the case that all {\displaystyle W^{Q},W^{K},W^{V}} are square matrices, meaning {\displaystyle d_{\text{emb, query}}=d_{\text{query}}}, etc.
Attention weights are calculated using the query and key vectors: the attention weight {\displaystyle a_{ij}} from token {\displaystyle i} to token {\displaystyle j} is the dot product between {\displaystyle q_{i}} and {\displaystyle k_{j}}. The attention weights are divided by the square root of the dimension of the key vectors, {\displaystyle {\sqrt {d_{k}}}}, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that {\displaystyle W^{Q}} and {\displaystyle W^{K}} are different matrices allows attention to be non-symmetric: if token {\displaystyle i} attends to token {\displaystyle j} (i.e. {\displaystyle q_{i}\cdot k_{j}} is large), this does not necessarily mean that token {\displaystyle j} will attend to token {\displaystyle i} (i.e. {\displaystyle q_{j}\cdot k_{i}} could be small). The output of the attention unit for token {\displaystyle i} is the weighted sum of the value vectors of all tokens, weighted by {\displaystyle a_{ij}}, the attention from token {\displaystyle i} to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training since the whole computation runs as optimized batched matrix operations. The matrices {\displaystyle Q}, {\displaystyle K} and {\displaystyle V} are defined as the matrices where the {\displaystyle i}th rows are vectors {\displaystyle q_{i}}, {\displaystyle k_{i}}, and {\displaystyle v_{i}} respectively. Then we can represent the attention as
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
where the softmax is applied over each of the rows of the matrix.
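The formula can be implemented directly. A minimal sketch, with illustrative sequence lengths and dimensions:

```python
import numpy as np

# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V,
# with the softmax applied over each row of the score matrix.
def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (len_q, len_k) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))   # 3 query positions
K = rng.standard_normal((5, 4))   # 5 key positions
V = rng.standard_normal((5, 4))   # 5 value vectors
out = attention(Q, K, V)          # one output vector per query position
```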
The number of dimensions in a query vector is the query size {\displaystyle d_{\text{query}}}, and similarly for the key size {\displaystyle d_{\text{key}}} and value size {\displaystyle d_{\text{value}}}. The output dimension of an attention head is its head dimension {\displaystyle d_{\text{head}}}. The attention mechanism requires the following three equalities to hold:
{\displaystyle \ell _{\text{seq, key}}=\ell _{\text{seq, value}},\;d_{\text{query}}=d_{\text{key}},\;d_{\text{value}}=d_{\text{head}}}
but is otherwise unconstrained.
If the attention head is used in a self-attention fashion, then X_query = X_key = X_value. If the attention head is used in a cross-attention fashion, then usually X_query ≠ X_key = X_value. It is theoretically possible for all three to be different, but that is rarely the case in practice.
==== Multiheaded attention ====
One set of (W^Q, W^K, W^V) matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, W^Q and W^K, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix W^V, in combination with the corresponding part of the output projection matrix W^O, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by i; then we have
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V}))W^{O}}
where the matrix X is the concatenation of word embeddings, the matrices W_i^Q, W_i^K, W_i^V are "projection matrices" owned by the individual attention head i, and W^O is a final projection matrix owned by the whole multi-headed attention block.
It is theoretically possible for each attention head to have a different head dimension d_head, but that is rarely the case in practice.
As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: d_emb = 768, n_head = 12, d_head = 64. Since 12 × 64 = 768, its output projection matrix
{\displaystyle W^{O}\in \mathbb {R} ^{(12\times 64)\times 768}}
is a square matrix.
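The multiheaded formula above can be sketched in NumPy as follows (an illustration with our own names and tiny stand-in dimensions, not GPT-2's actual weights):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multiheaded_attention(X, WQ, WK, WV, WO):
    """Concat_i(Attention(X WQ[i], X WK[i], X WV[i])) @ WO."""
    heads = []
    for wq, wk, wv in zip(WQ, WK, WV):          # one (WQ, WK, WV) set per head
        q, k, v = X @ wq, X @ wk, X @ wv
        a = softmax(q @ k.T / np.sqrt(k.shape[-1]))
        heads.append(a @ v)                      # (n_tokens, d_head)
    return np.concatenate(heads, axis=-1) @ WO   # concat heads, then project

# Tiny stand-in for GPT-2's d_emb=768, n_head=12, d_head=64:
d_emb, n_head, d_head = 12, 3, 4                 # n_head * d_head == d_emb
rng = np.random.default_rng(0)
X = rng.normal(size=(5, d_emb))                  # 5 tokens
WQ = rng.normal(size=(n_head, d_emb, d_head))
WK = rng.normal(size=(n_head, d_emb, d_head))
WV = rng.normal(size=(n_head, d_emb, d_head))
WO = rng.normal(size=(n_head * d_head, d_emb))   # square, as in GPT-2
out = multiheaded_attention(X, WQ, WK, WV, WO)
print(out.shape)  # (5, 12): one d_emb-dimensional vector per token
```

Because n_head × d_head equals d_emb here, W^O is square, mirroring the GPT-2 example above.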
==== Masked attention ====
The Transformer architecture is constructed to calculate output tokens iteratively. Assuming t = 0 refers to the calculation of the first output token i = 0, then for every step t > 0 the output token i = 0 shall remain constant. This ensures properties of the model similar to autoregressive models. Therefore, at every time step t, the calculation for all outputs i should not have access to tokens at position j for j ≥ i (as is naturally the case at time step t = i, when the tokens j > t have not yet been calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix M that is −∞ at entries where the attention link must be cut, and 0 at other places:
{\displaystyle {\begin{aligned}{\text{MaskedAttention}}(Q,K,V)={\text{softmax}}\left(M+{\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\end{aligned}}}
The following matrix is commonly used in decoder self-attention modules, called "causal masking":
{\displaystyle M_{\text{causal}}={\begin{bmatrix}0&-\infty &-\infty &\dots &-\infty \\0&0&-\infty &\dots &-\infty \\0&0&0&\dots &-\infty \\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &0\end{bmatrix}}}
In words, it means that each token can pay attention to itself and every token before it, but not any token after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of a mask matrix, XLNet considers all masks of the form P M_causal P^{−1}, where P is a random permutation matrix.
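A minimal NumPy sketch (an illustration with our own names, not a reference implementation) of causal masking and its effect:

```python
import numpy as np

def causal_mask(n):
    """0 on and below the diagonal, -inf above (future positions)."""
    M = np.zeros((n, n))
    M[np.triu_indices(n, k=1)] = -np.inf
    return M

def masked_attention(Q, K, V, M):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + M
    # Row-wise softmax; -inf entries receive weight exactly 0.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))
out = masked_attention(X, X, X, causal_mask(4))
# Token 0 can only attend to itself, so its output is its own value vector.
print(np.allclose(out[0], X[0]))  # True
```

Setting the mask to all zeros recovers the non-masked attention module described above.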
=== Encoder ===
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes as input a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have:
{\displaystyle {\begin{aligned}{\text{given input vectors }}&h_{0},h_{1},\dots \\{\text{combine them into a matrix }}H&={\begin{bmatrix}h_{0}\\h_{1}\\\vdots \end{bmatrix}}\\{\text{EncoderLayer}}(H)&={\begin{bmatrix}{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{0})\\{\text{FFN}}({\text{MultiheadedAttention}}(H,H,H)_{1})\\\vdots \end{bmatrix}}\\\end{aligned}}}
where FFN stands for "feed-forward network". We can more succinctly write it as
{\displaystyle {\text{EncoderLayer}}(H)={\text{FFN}}({\text{MultiheadedAttention}}(H,H,H))}
with the implicit convention that the FFN is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
=== Decoder ===
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have:
{\displaystyle {\begin{aligned}H'&={\text{MaskedMultiheadedAttention}}(H,H,H)\\{\text{DecoderLayer}}(H)&={\text{FFN}}({\text{MultiheadedAttention}}(H',H^{E},H^{E}))\end{aligned}}}
where H^E is the matrix with rows being the output vectors from the encoder.
The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then one of the tokens is sampled according to this probability distribution, and the decoder can be run again to produce the next token, etc., autoregressively generating the output text.
=== Adapted architectures ===
Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.
== Full transformer architecture ==
=== Sublayers ===
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network.
The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed for numerical stability and convergence.
The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as follows: y = F(x) + x. The expression indicates that an output y is the sum of the transformation of input x (F(x)) and the input itself (x). Adding the input x can preserve the input information and avoid issues when the gradient of F(x) is close to zero.
Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is
{\displaystyle \mathrm {LayerNorm} (x+\mathrm {Sublayer} (x))}
where Sublayer(x) is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is
{\displaystyle x+\mathrm {Sublayer} (\mathrm {LayerNorm} (x))}
The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up, leading to faster convergence.
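The two conventions can be sketched side by side in NumPy (an illustration with our own names; the learnable gain/bias of a real LayerNorm is omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each vector to zero mean and unit variance.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def post_ln_sublayer(x, sublayer):
    return layer_norm(x + sublayer(x))      # original 2017 convention

def pre_ln_sublayer(x, sublayer):
    return x + sublayer(layer_norm(x))      # easier-to-train convention

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 32))
W2 = rng.normal(size=(32, 8))
ffn = lambda x: np.maximum(x @ W1, 0) @ W2  # a toy ReLU feed-forward sublayer
x = rng.normal(size=(4, 8))
print(post_ln_sublayer(x, ffn).shape, pre_ln_sublayer(x, ffn).shape)
```

Note that in pre-LN, the raw residual stream x passes through unnormalized, whereas post-LN normalizes the sum, which is one intuition for its harder optimization.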
=== Pseudocode ===
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from
input: Encoder input t_e
Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))
/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
layer ← encoder.layers[l]
/* first sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
/* second sublayer */
z_e_copy ← copy(z_e)
for each t in 1:length(z_e) do
z_e[t] ← layer.layer_norm(z_e[t])
z_e ← layer.feedforward(z_e)
for each t in 1:length(z_e) do
z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
z_e[t] ← encoder.final_layer_norm(z_e[t])
/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
layer ← decoder.layers[l]
/* first sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* second sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
/* third sublayer */
z_d_copy ← copy(z_d)
for each t in 1:length(z_d) do
z_d[t] ← layer.layer_norm(z_d[t])
z_d ← layer.feedforward(z_d)
for each t in 1:length(z_d) do
z_d[t] ← z_d[t] + z_d_copy[t]
for each t in 1:length(z_d) do
z_d[t] ← decoder.final_layer_norm(z_d[t])
output_distributions ← []
for each t in 1:length(z_d) do
output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
=== Terminology ===
The Transformer architecture, being modular, allows variations. Several common variations are described here.
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention, and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form (Figure 3):
{\displaystyle M_{\text{prefixLM}}={\begin{bmatrix}\mathbf {0} &-\infty \\\mathbf {0} &M_{\text{causal}}\end{bmatrix}}}
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.
== Subsequent work ==
=== Alternative activation functions ===
The original transformer uses the ReLU activation function. Other activation functions have since been developed. The Llama series and PaLM used SwiGLU; both GPT-1 and BERT used GELU.
Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.
=== Alternative normalizations ===
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, and FixNorm.
=== Alternative positional encodings ===
Transformers may use other positional encoding methods than sinusoidal.
The original Transformer paper reported using a learned positional encoding, but found it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
==== RoPE ====
RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors
{\displaystyle [(x_{1}^{(1)},x_{1}^{(2)}),(x_{2}^{(1)},x_{2}^{(2)}),(x_{3}^{(1)},x_{3}^{(2)}),...]}
Now pick some angle θ. Then the RoPE encoding is
{\displaystyle {\text{RoPE}}{\big (}x_{m}^{(1)},x_{m}^{(2)},m{\big )}={\begin{pmatrix}\cos m\theta &-\sin m\theta \\\sin m\theta &\cos m\theta \end{pmatrix}}{\begin{pmatrix}x_{m}^{(1)}\\x_{m}^{(2)}\\\end{pmatrix}}={\begin{pmatrix}x_{m}^{(1)}\cos m\theta -x_{m}^{(2)}\sin m\theta \\x_{m}^{(2)}\cos m\theta +x_{m}^{(1)}\sin m\theta \\\end{pmatrix}}}
Equivalently, if we write the 2-dimensional vectors as complex numbers z_m := x_m^(1) + i x_m^(2), then the RoPE encoding is just multiplication by a complex phase:
{\displaystyle {\text{RoPE}}{\big (}z_{m},m{\big )}=e^{im\theta }z_{m}}
For a list of 2n-dimensional vectors, a RoPE encoder is defined by a sequence of angles θ^(1), ..., θ^(n). Then the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends only on their relative location:
{\displaystyle {\text{RoPE}}{\big (}x,m{\big )}^{T}{\text{RoPE}}{\big (}y,n{\big )}={\text{RoPE}}{\big (}x,m+k{\big )}^{T}{\text{RoPE}}{\big (}y,n+k{\big )}}
for any integer k.
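The rotation and its relative-position property can be checked numerically with a short sketch (our own illustration, applying the per-pair rotation above):

```python
import numpy as np

def rope(x, m, theta):
    """Rotate each 2D pair of coordinates of x by angle m * theta[pair]."""
    out = x.astype(float).copy()
    for p, t in enumerate(theta):            # one angle per coordinate pair
        c, s = np.cos(m * t), np.sin(m * t)
        x1, x2 = x[2 * p], x[2 * p + 1]
        out[2 * p] = x1 * c - x2 * s
        out[2 * p + 1] = x2 * c + x1 * s
    return out

rng = np.random.default_rng(0)
theta = [1.0, 0.1]                  # angles for a 4-dimensional vector
x, y = rng.normal(size=4), rng.normal(size=4)
# The dot product depends only on the relative position (m - n):
d1 = rope(x, 5, theta) @ rope(y, 2, theta)
d2 = rope(x, 105, theta) @ rope(y, 102, theta)
print(np.isclose(d1, d2))  # True
```

Since each pair is rotated by an orthogonal matrix, RoPE also preserves vector norms.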
==== ALiBi ====
ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+sB\right)V\end{aligned}}}
Here, s is a real number ("scalar"), and B is the linear bias matrix defined by
{\displaystyle B={\begin{pmatrix}0&1&2&3&\cdots \\-1&0&1&2&\cdots \\-2&-1&0&1&\cdots \\-3&-2&-1&0&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}}
in other words, B_{i,j} = j − i. The idea is that the linear bias matrix is a softened mask. Just as 0 represents full attention paid and −∞ represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.
ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
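The bias matrix B_{i,j} = j − i is cheap to construct; a minimal sketch (our own names) is:

```python
import numpy as np

def alibi_bias(n):
    """B[i, j] = j - i: positive toward future keys, negative toward past ones."""
    idx = np.arange(n)
    return idx[None, :] - idx[:, None]

B = alibi_bias(4)
print(B)
# [[ 0  1  2  3]
#  [-1  0  1  2]
#  [-2 -1  0  1]
#  [-3 -2 -1  0]]
# In use, a per-head scalar s multiplies B before it is added to QK^T/sqrt(d_k),
# typically together with a causal mask in decoder models.
```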
==== Relative Position Encodings ====
Relative Position Encodings is similar to ALiBi, but more generic:
{\displaystyle {\begin{aligned}{\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}+B\right)V\end{aligned}}}
where B is a Toeplitz matrix, that is, B_{i,j} = B_{i',j'} whenever i − j = i' − j'. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
=== Efficient implementation ===
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
==== KV caching ====
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.
If a transformer is used with a baked-in prompt, such as "You are a customer support agent...", then the key and value vectors can be computed for the prompt and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
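The caching idea can be sketched for a single attention head (an illustration with our own names; a real implementation caches per layer and per head, usually in GPU memory):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class AttentionWithKVCache:
    """One attention head that caches key/value vectors across decode steps."""
    def __init__(self, WQ, WK, WV):
        self.WQ, self.WK, self.WV = WQ, WK, WV
        self.keys, self.values = [], []   # the KV cache

    def step(self, x):
        # Only the new token's key and value are computed; old ones are reused.
        self.keys.append(x @ self.WK)
        self.values.append(x @ self.WV)
        K = np.stack(self.keys)
        V = np.stack(self.values)
        q = x @ self.WQ
        w = softmax(q @ K.T / np.sqrt(K.shape[-1]))
        return w @ V

rng = np.random.default_rng(0)
head = AttentionWithKVCache(*[rng.normal(size=(8, 4)) for _ in range(3)])
for t in range(3):                       # decode 3 tokens one at a time
    out = head.step(rng.normal(size=8))
print(len(head.keys), out.shape)
```

Each decode step costs one new key/value projection plus attention over the cache, instead of recomputing K and V for the whole prefix.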
==== FlashAttention ====
FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.
An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
==== Multi-Query Attention ====
Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally
{\displaystyle {\text{MultiheadedAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW_{i}^{K},XW_{i}^{V})\right)W^{O}}
with Multi-Query Attention, there is just one pair W^K, W^V shared across all heads, thus:
{\displaystyle {\text{MultiQueryAttention}}(Q,K,V)={\text{Concat}}_{i\in [n_{\text{heads}}]}\left({\text{Attention}}(XW_{i}^{Q},XW^{K},XW^{V})\right)W^{O}}
This has a neutral effect on model quality and training speed, but increases inference speed.
More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.
Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for query and one for key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
==== Speculative decoding ====
Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly.
The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.
Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token x_1, x_2, ..., x_512, taking time 512 T_GPT-3. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each x_t is indeed the token with the largest log-likelihood in the t-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens x̃_1, x̃_2, x̃_3, x̃_4. This only takes 4 T_GPT-3-small. These tokens are then run through the larger GPT-3 in one go. Suppose that x̃_1 and x̃_2 are verified by GPT-3 as what it would have picked; then those are kept, but x̃_3 is not, so x̃_3, x̃_4 are discarded, and GPT-3 is run on those. This would take 4 T_GPT-3-small + 3 T_GPT-3, which might be shorter than 4 T_GPT-3.
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.
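The greedy variant of this scheme can be sketched with toy stand-in "models" (our own illustration; here each model is just a function mapping a prefix to its greedily chosen next token, and the big model's parallel verification pass is simulated by per-position calls):

```python
def speculative_decode_step(prefix, draft_model, big_model, n_draft=4):
    """Greedy speculative decoding: draft n tokens cheaply, verify in one pass."""
    # 1. The small model proposes n_draft tokens autoregressively.
    drafts = []
    for _ in range(n_draft):
        drafts.append(draft_model(prefix + drafts))
    # 2. The big model checks every draft position (in parallel on real hardware).
    accepted = []
    for t, tok in enumerate(drafts):
        correct = big_model(prefix + drafts[:t])
        if tok != correct:
            accepted.append(correct)   # take the big model's token and stop
            break
        accepted.append(tok)
    return accepted

# Toy models over integer tokens: the big model counts up; the draft model
# agrees except when the sequence length is a multiple of 5.
big = lambda seq: len(seq)
draft = lambda seq: len(seq) if len(seq) % 5 != 0 else -1
print(speculative_decode_step([0, 1, 2], draft, big))  # [3, 4, 5]
```

Note that even a rejected draft is not wasted: the verification pass yields the big model's correct token at the first point of disagreement.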
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which then is un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.
=== Sub-quadratic transformers ===
Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
==== Alternative attention graphs ====
The standard attention graph is either all-to-all or causal, both of which scale as O(N²), where N is the number of tokens in a sequence.
Reformer (2020) reduces the computational load from O(N²) to O(N ln N) by using locality-sensitive hashing and reversible layers.
Sparse attention uses attention graphs that grow more slowly than O(N²). For example, BigBird (2020) uses random small-world networks whose size grows as O(N).
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
==== Random Feature Attention ====
Random Feature Attention (2021) uses Fourier random features:
{\displaystyle \varphi (x)={\frac {1}{\sqrt {D}}}[\cos \langle w_{1},x\rangle ,\sin \langle w_{1},x\rangle ,\cdots \cos \langle w_{D},x\rangle ,\sin \langle w_{D},x\rangle ]^{T}}
where w_1, ..., w_D are independent samples from the normal distribution N(0, σ²I).
This choice of parameters satisfies
{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (y)\rangle ]=e^{-{\frac {\|x-y\|^{2}}{2\sigma ^{2}}}}}
or equivalently
{\displaystyle e^{\langle x,y\rangle /\sigma ^{2}}=\mathbb {E} [\langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle ]\approx \langle e^{\|x\|^{2}/2\sigma ^{2}}\varphi (x),e^{\|y\|^{2}/2\sigma ^{2}}\varphi (y)\rangle }
Consequently, the one-headed attention with one query can be written as
{\displaystyle {\text{Attention}}(q,K,V)={\text{softmax}}\left({\frac {qK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx {\frac {\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})v_{i}^{T}}{\varphi (q)^{T}\sum _{i}e^{\|k_{i}\|^{2}/2\sigma ^{2}}\varphi (k_{i})}}}
where σ = d_K^{1/4}. The same construction applies for multiple queries and for multiheaded attention.
This approximation can be computed in linear time, as we can compute the matrix φ(k_i) v_i^T first, then multiply it with the query. In essence, we have managed to obtain a more precise version of
{\displaystyle {\text{Attention}}(Q,K,V)={\text{softmax}}\left({\frac {QK^{\mathrm {T} }}{\sqrt {d_{k}}}}\right)V\approx Q(K^{T}V/{\sqrt {d_{k}}})}
Performer (2022) uses the same Random Feature Attention, but the w_1, ..., w_D are first independently sampled from the normal distribution N(0, σ²I), then Gram–Schmidt processed.
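The Gaussian-kernel identity behind these random features can be verified numerically. A minimal sketch (our own illustration; note that we sample w_i so that the identity E[⟨φ(x),φ(y)⟩] = e^{−‖x−y‖²/(2σ²)} holds, here w ~ N(0, I/σ²), as scaling conventions for σ vary):

```python
import numpy as np

def phi(x, W):
    """Random Fourier feature map: phi(x)·phi(y) approximates a Gaussian kernel."""
    D = W.shape[0]
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

rng = np.random.default_rng(0)
d, D, sigma = 4, 20000, 2.0
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # rows are the sampled w_i
x, y = rng.normal(size=d), rng.normal(size=d)

approx = phi(x, W) @ phi(y, W)                   # Monte Carlo kernel estimate
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
print(abs(approx - exact))                       # small for large D
```

The approximation error shrinks as O(1/√D), which is why a moderate number of random features suffices in practice.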
=== Multimodality ===
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.
Vision transformers adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.
== Applications ==
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
machine translation
time series prediction
document summarization
document generation
named entity recognition (NER)
writing computer code based on requirements expressed in natural language.
speech-to-text
Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
biological sequence analysis
video understanding
protein folding (such as AlphaFold)
evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.
== See also ==
seq2seq – Family of machine learning approaches
Perceiver – Variant of Transformer designed for multimodal data
Vision transformer – Machine learning model for vision processing
Large language model – Type of machine learning model
BERT (language model) – Series of language models developed by Google AI
Generative pre-trained transformer – Type of large language model
T5 (language model) – Series of large language models developed by Google AI
== Notes ==
== References ==
== Further reading == | Wikipedia/Transformer_(deep_learning_architecture) |
A vision transformer (ViT) is a transformer designed for computer vision. A ViT decomposes an input image into a series of patches (rather than text into tokens), serializes each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings.
ViTs were designed as alternatives to convolutional neural networks (CNNs) in computer vision applications. They have different inductive biases, training stability, and data efficiency. Compared to CNNs, ViTs are less data efficient, but have higher capacity. Some of the largest modern computer vision models are ViTs, such as one with 22B parameters.
Subsequent to its publication, many variants were proposed, with hybrid architectures with both features of ViTs and CNNs. ViTs have found application in image recognition, image segmentation, weather prediction, and autonomous driving.
== History ==
Transformers were introduced in Attention Is All You Need (2017), and have found widespread use in natural language processing. A 2019 paper applied ideas from the Transformer to computer vision. Specifically, they started with a ResNet, a standard convolutional neural network used for computer vision, and replaced all convolutional kernels by the self-attention mechanism found in a Transformer. It resulted in superior performance. However, it is not a Vision Transformer.
In 2020, an encoder-only Transformer was adapted for computer vision, yielding the ViT, which reached state of the art in image classification, overcoming the previous dominance of CNNs. The masked autoencoder (2022) extended ViT to work with unsupervised training. The vision transformer and the masked autoencoder, in turn, stimulated new developments in convolutional neural networks.
Subsequently, there was cross-fertilization between the previous CNN approach and the ViT approach.
In 2021, some important variants of the Vision Transformers were proposed. These variants are mainly intended to be more efficient, more accurate or better suited to a specific domain. Two studies improved efficiency and robustness of ViT by adding a CNN as a preprocessor. The Swin Transformer achieved state-of-the-art results on some object detection datasets such as COCO, by using convolution-like sliding windows of attention mechanism, and the pyramid process in classical computer vision.
== Overview ==
The basic architecture, used by the original 2020 paper, is as follows. In summary, it is a BERT-like encoder-only Transformer.
The input image is of type {\displaystyle \mathbb {R} ^{H\times W\times C}}, where {\displaystyle H,W,C} are height, width, channel (RGB). It is then split into square-shaped patches of type {\displaystyle \mathbb {R} ^{P\times P\times C}}.
For each patch, the patch is pushed through a linear operator, to obtain a vector ("patch embedding"). The position of the patch is also transformed into a vector by "position encoding". The two vectors are added, then pushed through several Transformer encoders.
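The patch-embedding front end described above can be sketched as follows (sizes are illustrative, and the "learnable" matrices are random stand-ins):

```python
import numpy as np

# Sketch of the ViT front end: split an image into P x P patches, flatten each
# patch into a vector, project it with a single matrix (patch embedding), and
# add a position encoding before the Transformer encoder stack.
rng = np.random.default_rng(0)
H = W = 8; C = 3; P = 4; d_model = 10

image = rng.standard_normal((H, W, C))
n_patches = (H // P) * (W // P)                      # 4 patches here

patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(n_patches, P * P * C)      # each patch as one vector

E = rng.standard_normal((P * P * C, d_model))        # stand-in for the learnable projection
pos = rng.standard_normal((n_patches, d_model))      # stand-in position encodings

tokens = patches @ E + pos                           # input tokens for the encoder
assert tokens.shape == (n_patches, d_model)
```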
The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in natural language processing, as representation vectors flow through a transformer, they incorporate more and more semantic relations between words, from syntax to semantics.
The above architecture turns an image into a sequence of vector representations. To use these for downstream applications, an additional head needs to be trained to interpret them.
For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network.
== Variants ==
=== Original ViT ===
The original ViT was an encoder-only Transformer trained with supervision to predict the image label from the patches of the image. As in the case of BERT, it uses a special token <CLS> in the input side, and the corresponding output vector is used as the only input of the final output MLP head. The special token is an architectural hack to allow the model to compress all information relevant for predicting the image label into one vector.
Transformers found their initial applications in natural language processing tasks, as demonstrated by language models such as BERT and GPT-3. By contrast the typical image processing system uses a convolutional neural network (CNN). Well-known projects include Xception, ResNet, EfficientNet, DenseNet, and Inception.
Transformers measure the relationships between pairs of input tokens (words in the case of text strings), termed attention. The cost is quadratic in the number of tokens. For images, the basic unit of analysis is the pixel. However, computing relationships for every pixel pair in a typical image is prohibitive in terms of memory and computation. Instead, ViT computes relationships among pixels in various small sections of the image (e.g., 16x16 pixels), at a drastically reduced cost. The sections (with positional embeddings) are placed in a sequence. The embeddings are learnable vectors. Each section is arranged into a linear sequence and multiplied by the embedding matrix. The result, with the position embedding is fed to the transformer.
=== Architectural improvements ===
==== Pooling ====
After the ViT processes an image, it produces some embedding vectors. These must be converted to a single class probability prediction by some kind of network. In the original ViT and Masked Autoencoder, they used a dummy [CLS] token, in emulation of the BERT language model. The output at [CLS] is the classification token, which is then processed by a LayerNorm-feedforward-softmax module into a probability distribution.
Global average pooling (GAP) does not use the dummy token, but simply takes the average of all output tokens as the classification token. It was mentioned in the original ViT as being equally good.
Multihead attention pooling (MAP) applies a multiheaded attention block to pooling. Specifically, it takes as input a list of vectors {\displaystyle x_{1},x_{2},\dots ,x_{n}}, which might be thought of as the output vectors of a layer of a ViT. The output from MAP is {\displaystyle \mathrm {MultiheadedAttention} (Q,V,V)}, where {\displaystyle Q} is a trainable query vector, and {\displaystyle V} is the matrix with rows being {\displaystyle x_{1},x_{2},\dots ,x_{n}}. This was first proposed in the Set Transformer architecture.
Later papers demonstrated that GAP and MAP both perform better than BERT-like pooling. A variant of MAP was proposed as class attention, which applies MAP, then feedforward, then MAP again.
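A single-head sketch of attention pooling (a real MAP block is multi-headed and followed by a feedforward layer; the query here is a random stand-in for a trainable parameter):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Sketch of MAP with one head: a single trainable query attends over the n
# output tokens of a ViT layer; the attended average is the pooled vector.
# GAP, by contrast, is simply X.mean(axis=0).
rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))       # output vectors x_1..x_n of a ViT layer
q = rng.standard_normal((1, d))       # stand-in for the trainable query

attn = softmax(q @ X.T / np.sqrt(d))  # 1 x n attention weights
pooled = attn @ X                     # 1 x d pooled representation

assert pooled.shape == (1, d)
```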
Re-attention was proposed to allow training deep ViT. It changes the multiheaded attention module.
=== Masked Autoencoder ===
The Masked Autoencoder took inspiration from denoising autoencoders and context encoders. It has two ViTs put end-to-end. The first one ("encoder") takes in image patches with positional encoding, and outputs vectors representing each patch. The second one (called "decoder", even though it is still an encoder-only Transformer) takes in vectors with positional encoding and outputs image patches again. During training, both the encoder and the decoder ViTs are used. During inference, only the encoder ViT is used.
During training, each image is cut into patches, and their positional embeddings are added. Of these, only 25% of the patches are selected. The encoder ViT processes the selected patches. No mask tokens are used. Then, mask tokens are added back in, and positional embeddings are added again. These are processed by the decoder ViT, which outputs a reconstruction of the full image. The loss is the total mean-squared loss in pixel space over all masked patches (reconstruction loss is not computed for non-masked patches).
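The masking bookkeeping of this training step can be sketched as follows; the encoder and decoder are identity stand-ins, so only the 25% selection, mask-token reinsertion, and masked-only loss are illustrated:

```python
import numpy as np

# Schematic MAE step: keep 25% of patch tokens for the encoder, reinsert mask
# tokens for the decoder, and score the reconstruction only on masked patches.
rng = np.random.default_rng(0)
n_patches, d = 16, 8
patches = rng.standard_normal((n_patches, d))

keep = rng.permutation(n_patches)[: n_patches // 4]      # 25% visible
masked = np.setdiff1d(np.arange(n_patches), keep)

encoded = patches[keep]                                  # "encoder" sees only visible patches
mask_token = np.zeros(d)                                 # stand-in learnable mask token
full = np.empty((n_patches, d))
full[keep] = encoded
full[masked] = mask_token                                # mask tokens added back in

reconstruction = full                                    # "decoder" stand-in
loss = np.mean((reconstruction[masked] - patches[masked]) ** 2)  # MSE on masked only

assert len(keep) == 4 and len(masked) == 12
```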
A similar architecture was BERT ViT (BEiT), published concurrently.
=== DINO ===
Like the Masked Autoencoder, the DINO (self-distillation with no labels) method is a way to train a ViT by self-supervision. DINO is a form of teacher-student self-distillation. In DINO, the student is the model itself, and the teacher is an exponential average of the student's past states. The method is similar to previous works like momentum contrast and bootstrap your own latent (BYOL).
The loss function used in DINO is the cross-entropy loss between the output of the teacher network ({\displaystyle f_{\theta '_{t}}}) and the output of the student network ({\displaystyle f_{\theta _{t}}}). The teacher network is an exponentially decaying average of the student network's past parameters: {\displaystyle \theta '_{t}=\alpha \theta _{t}+\alpha (1-\alpha )\theta _{t-1}+\cdots }. The inputs to the networks are two different crops of the same image, represented as {\displaystyle T(x)} and {\displaystyle T'(x)}, where {\displaystyle x} is the original image. The loss function is written as
{\displaystyle L(f_{\theta '_{t}}(T(x)),f_{\theta _{t}}(T'(x)))}
One issue is that the network can "collapse" by always outputting the same value ({\displaystyle y}), regardless of the input. To prevent this collapse, DINO employs two strategies:
Sharpening: The teacher network's output is sharpened using a softmax function with a lower temperature. This makes the teacher more "confident" in its predictions, forcing the student to learn more meaningful representations to match the teacher's sharpened output.
Centering: The teacher network's output is centered by averaging it with its previous outputs. This prevents the teacher from becoming biased towards any particular output value, encouraging the student to learn a more diverse set of features.
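A toy sketch of one DINO step combining the EMA teacher with the two strategies above; the networks are replaced by plain logit vectors, and all rates and temperatures are illustrative assumptions, not the paper's values:

```python
import numpy as np

def softmax(z, temperature):
    z = z / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy DINO step: EMA teacher update, centering, sharpening (low teacher
# temperature), then cross-entropy of the student against the teacher.
rng = np.random.default_rng(0)
student = rng.standard_normal(5)      # stand-in for student outputs (logits)
teacher = rng.standard_normal(5)      # stand-in for teacher outputs (logits)
center = np.zeros(5)
alpha, m = 0.1, 0.9                   # illustrative EMA rates

teacher = alpha * student + (1 - alpha) * teacher        # teacher EMA update
center = m * center + (1 - m) * teacher                  # running center

p_teacher = softmax(teacher - center, temperature=0.04)  # centered + sharpened
p_student = softmax(student, temperature=0.1)
loss = -np.sum(p_teacher * np.log(p_student + 1e-12))    # cross-entropy

assert loss > 0.0
```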
In January 2024, Meta AI Research released an updated version called DINOv2 with improvements in architecture, loss function, and optimization technique. It was trained on a larger and more diverse dataset. The features learned by DINOv2 were more transferable, meaning it had better performance in downstream tasks.
=== Swin Transformer ===
The Swin Transformer ("Shifted windows") took inspiration from standard CNNs:
Instead of performing self-attention over the entire sequence of tokens, one for each patch, it performs "shifted window based" self-attention, which means only performing attention over square-shaped blocks of patches. One block of patches is analogous to the receptive field of one convolution.
After every few attention blocks, there is a "merge layer", which merges neighboring 2x2 tokens into a single token. This is analogous to pooling (by 2x2 convolution kernels, with stride 2). Merging means concatenation followed by multiplication with a matrix.
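The merge layer can be sketched directly; the 4d → 2d projection below matches the reduction described above, with a random stand-in for the learnable matrix:

```python
import numpy as np

# Sketch of a Swin-style merge layer: neighboring 2x2 tokens are concatenated
# and projected by a matrix, halving the spatial resolution (like 2x2 pooling
# with stride 2).
rng = np.random.default_rng(0)
h = w = 4; d = 8
tokens = rng.standard_normal((h, w, d))

merged = tokens.reshape(h // 2, 2, w // 2, 2, d).transpose(0, 2, 1, 3, 4)
merged = merged.reshape(h // 2, w // 2, 4 * d)     # concatenate each 2x2 block

W_merge = rng.standard_normal((4 * d, 2 * d))      # stand-in learnable reduction
out = merged @ W_merge

assert out.shape == (2, 2, 2 * d)
```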
It is improved by Swin Transformer V2, which modifies the ViT with a different attention mechanism:
LayerNorm immediately after each attention and feedforward layer ("res-post-norm");
scaled cosine attention to replace the original dot product attention;
log-spaced continuous relative position bias, which allows transfer learning across different window resolutions.
=== TimeSformer ===
The TimeSformer was designed for video understanding tasks, and it applied a factorized self-attention, similar to the factorized convolution kernels found in the Inception CNN architecture. Schematically, it divides a video into frames, and each frame into a square grid of patches (same as ViT). Let each patch coordinate be denoted by {\displaystyle x,y,t}, denoting horizontal, vertical, and time.
A space attention layer is a self-attention layer where each query patch {\displaystyle q_{x,y,t}} attends to only the key and value patches {\displaystyle k_{x',y',t'},v_{x',y',t'}} such that {\displaystyle t=t'}.
A time attention layer is where the requirement is {\displaystyle x'=x,y'=y} instead.
The TimeSformer also considered other attention layer designs, such as the "height attention layer" where the requirement is {\displaystyle x'=x,t'=t}. However, they found empirically that the best design interleaves one space attention layer and one time attention layer.
=== ViT-VQGAN ===
In ViT-VQGAN, there are two ViT encoders and a discriminator. One encodes 8x8 patches of an image into a list of vectors, one for each patch. The vectors can only come from a discrete set of "codebook", as in vector quantization. Another encodes the quantized vectors back to image patches. The training objective attempts to make the reconstruction image (the output image) faithful to the input image. The discriminator (usually a convolutional network, but other networks are allowed) attempts to decide if an image is an original real image, or a reconstructed image by the ViT.
The idea is essentially the same as vector quantized variational autoencoder (VQVAE) plus generative adversarial network (GAN).
After such a ViT-VQGAN is trained, it can be used to code an arbitrary image into a list of symbols, and code an arbitrary list of symbols into an image. The list of symbols can be used to train into a standard autoregressive transformer (like GPT), for autoregressively generating an image. Further, one can take a list of caption-image pairs, convert the images into strings of symbols, and train a standard GPT-style transformer. Then at test time, one can just give an image caption, and have it autoregressively generate the image. This is the structure of Google Parti.
=== Others ===
Other examples include the visual transformer, CoAtNet, CvT, the data-efficient ViT (DeiT), etc.
In the Transformer in Transformer architecture, each layer applies a vision Transformer layer on each image patch embedding, adds the resulting tokens back to the embedding, then applies another vision Transformer layer.
== Comparison with CNNs ==
Typically, ViT uses patch sizes larger than standard CNN kernels (3x3 to 7x7). ViT is more sensitive to the choice of the optimizer, hyperparameters, and network depth. Preprocessing with a layer of smaller-size, overlapping (stride < size) convolutional filters helps with performance and stability.
This different behavior seems to derive from the different inductive biases they possess.
CNNs apply the same set of filters across the entire image. This allows them to be more data efficient and less sensitive to local perturbations. ViTs apply self-attention, allowing them to easily capture long-range relationships between patches. They also require more data to train, but they can keep improving as more training data is ingested, whereas a CNN might not improve after training on a large enough dataset. ViTs also appear more robust to input image distortions such as adversarial patches or permutations.
== Applications ==
ViTs have been used in many computer vision tasks with excellent results, in some cases even state-of-the-art: image classification, object detection, video deepfake detection, image segmentation, anomaly detection, image synthesis, cluster analysis, and autonomous driving.
ViTs have been used for image generation as backbones for GANs and for diffusion models (diffusion transformer, or DiT).
DINO has been demonstrated to learn useful representations for clustering images and exploring morphological profiles on biological datasets, such as images generated with the Cell Painting assay.
In 2024, a 113 billion-parameter ViT model was proposed (the largest ViT to date) for weather and climate prediction, and trained on the Frontier supercomputer with a throughput of 1.6 exaFLOPs.
== See also ==
Transformer (machine learning model)
Convolutional neural network
Attention (machine learning)
Perceiver
Deep learning
PyTorch
TensorFlow
== References ==
== Further reading ==
Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "11.8. Transformers for Vision". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
Steiner, Andreas; Kolesnikov, Alexander; Zhai, Xiaohua; Wightman, Ross; Uszkoreit, Jakob; Beyer, Lucas (June 18, 2021). "How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers". arXiv:2106.10270 [cs.CV]. | Wikipedia/Vision_transformer |
The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the parameters of the problem. In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t1] when started at the time-t state variable x(t)=x. If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as "cost-to-go function." In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.
In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given {\displaystyle (t_{0},x_{0})\in [0,t_{1}]\times \mathbb {R} ^{d}}, a typical optimal control problem is to
{\displaystyle {\text{maximize}}\quad J(t_{0},x_{0};u)=\int _{t_{0}}^{t_{1}}I(t,x(t),u(t))\,\mathrm {d} t+\phi (x(t_{1}))}
subject to {\displaystyle {\frac {\mathrm {d} x(t)}{\mathrm {d} t}}=f(t,x(t),u(t))}
with initial state variable {\displaystyle x(t_{0})=x_{0}}. The objective function {\displaystyle J(t_{0},x_{0};u)} is to be maximized over all admissible controls {\displaystyle u\in U[t_{0},t_{1}]}, where {\displaystyle u} is a Lebesgue measurable function from {\displaystyle [t_{0},t_{1}]} to some prescribed arbitrary set in {\displaystyle \mathbb {R} ^{m}}. The value function is then defined as
{\displaystyle V(t_{0},x_{0})=\sup _{u\in U[t_{0},t_{1}]}J(t_{0},x_{0};u)} with {\displaystyle V(t_{1},x(t_{1}))=\phi (x(t_{1}))}, where {\displaystyle \phi (x(t_{1}))} is the "scrap value". If the optimal pair of control and state trajectories is {\displaystyle (x^{\ast },u^{\ast })}, then {\displaystyle V(t_{0},x_{0})=J(t_{0},x_{0};u^{\ast })}. The function {\displaystyle h} that gives the optimal control {\displaystyle u^{\ast }} based on the current state {\displaystyle x} is called a feedback control policy, or simply a policy function.
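Although the article works in continuous time, the cost-to-go idea can be illustrated with a discretized finite-horizon sketch solved by backward induction (the dynamics, rewards, and grids here are illustrative assumptions, not from the text):

```python
import numpy as np

# Sketch: V[t, i] is the optimal payoff from grid state i at step t, with a
# terminal ("scrap") value phi; the argmax at each step gives a policy function.
# Dynamics f(t, x, u) = u; running payoff I(t, x, u) = -(x^2 + u^2).
states = np.linspace(-1.0, 1.0, 21)
controls = np.linspace(-0.5, 0.5, 11)
T, dt = 10, 0.1
phi = -states**2                                  # scrap value phi(x(t1))
reward = lambda x, u: -(x**2 + u**2)              # running payoff

V = np.zeros((T + 1, len(states)))
V[T] = phi
policy = np.zeros((T, len(states)), dtype=int)
for t in range(T - 1, -1, -1):
    for i, x in enumerate(states):
        x_next = x + controls * dt                # one Euler step of the dynamics
        j = np.clip(np.searchsorted(states, x_next), 0, len(states) - 1)
        values = reward(x, controls) * dt + V[t + 1, j]
        policy[t, i] = np.argmax(values)          # Bellman's principle at each t
        V[t, i] = values[policy[t, i]]

assert V.shape == (T + 1, len(states))
```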
Bellman's principle of optimality roughly states that any optimal policy at time {\displaystyle t}, {\displaystyle t_{0}\leq t\leq t_{1}}, taking the current state {\displaystyle x(t)} as "new" initial condition must be optimal for the remaining problem. If the value function happens to be continuously differentiable, this gives rise to an important partial differential equation known as the Hamilton–Jacobi–Bellman equation,
{\displaystyle -{\frac {\partial V(t,x)}{\partial t}}=\max _{u}\left\{I(t,x,u)+{\frac {\partial V(t,x)}{\partial x}}f(t,x,u)\right\}}
where the maximand on the right-hand side can also be re-written as the Hamiltonian, {\displaystyle H\left(t,x,u,\lambda \right)=I(t,x,u)+\lambda (t)f(t,x,u)}, as
{\displaystyle -{\frac {\partial V(t,x)}{\partial t}}=\max _{u}H(t,x,u,\lambda )}
with {\displaystyle \partial V(t,x)/\partial x=\lambda (t)} playing the role of the costate variables. Given this definition, we further have
{\displaystyle \mathrm {d} \lambda (t)/\mathrm {d} t=\partial ^{2}V(t,x)/\partial x\partial t+\partial ^{2}V(t,x)/\partial x^{2}\cdot f(x)}, and after differentiating both sides of the HJB equation with respect to {\displaystyle x},
{\displaystyle -{\frac {\partial ^{2}V(t,x)}{\partial t\partial x}}={\frac {\partial I}{\partial x}}+{\frac {\partial ^{2}V(t,x)}{\partial x^{2}}}f(x)+{\frac {\partial V(t,x)}{\partial x}}{\frac {\partial f(x)}{\partial x}}}
which after replacing the appropriate terms recovers the costate equation {\displaystyle -{\dot {\lambda }}(t)=\underbrace {{\frac {\partial I}{\partial x}}+\lambda (t){\frac {\partial f(x)}{\partial x}}} _{={\frac {\partial H}{\partial x}}}}
where {\displaystyle {\dot {\lambda }}(t)} is Newton notation for the derivative with respect to time.
The value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation. In an online closed-loop approximate optimal control, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system.
== References ==
== Further reading ==
Caputo, Michael R. (2005). "Necessary and Sufficient Conditions for Isoperimetric Problems". Foundations of Dynamic Economic Analysis : Optimal Control Theory and Applications. New York: Cambridge University Press. pp. 174–210. ISBN 0-521-60368-4.
Clarke, Frank H.; Loewen, Philip D. (1986). "The Value Function in Optimal Control: Sensitivity, Controllability, and Time-Optimality". SIAM Journal on Control and Optimization. 24 (2): 243–263. doi:10.1137/0324014.
LaFrance, Jeffrey T.; Barney, L. Dwayne (1991). "The Envelope Theorem in Dynamic Optimization" (PDF). Journal of Economic Dynamics and Control. 15 (2): 355–385. doi:10.1016/0165-1889(91)90018-V.
Stengel, Robert F. (1994). "Conditions for Optimality". Optimal Control and Estimation. New York: Dover. pp. 201–222. ISBN 0-486-68200-5. | Wikipedia/Value_function |
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
== Introduction ==
The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood. Supervised learning involves learning from a training set of data. Every point in the training set is an input–output pair, where an input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem. Using Ohm's law as an example, a regression could be performed with voltage as input and current as output. The regression would find the functional relationship between voltage and current to be {\displaystyle R}, such that {\displaystyle V=IR}
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common for machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
After learning a function based on the training set data, that function is validated on a test set of data, data that did not appear in the training set.
== Formal description ==
Take {\displaystyle X} to be the vector space of all possible inputs, and {\displaystyle Y} to be the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space {\displaystyle Z=X\times Y}, i.e. there exists some unknown {\displaystyle p(z)=p(\mathbf {x} ,y)}. The training set is made up of {\displaystyle n} samples from this probability distribution, and is notated {\displaystyle S=\{(\mathbf {x} _{1},y_{1}),\dots ,(\mathbf {x} _{n},y_{n})\}=\{\mathbf {z} _{1},\dots ,\mathbf {z} _{n}\}}
Every {\displaystyle \mathbf {x} _{i}} is an input vector from the training data, and {\displaystyle y_{i}} is the output that corresponds to it.
In this formalism, the inference problem consists of finding a function {\displaystyle f:X\to Y} such that {\displaystyle f(\mathbf {x} )\sim y}. Let {\displaystyle {\mathcal {H}}} be a space of functions {\displaystyle f:X\to Y} called the hypothesis space. The hypothesis space is the space of functions the algorithm will search through. Let {\displaystyle V(f(\mathbf {x} ),y)} be the loss function, a metric for the difference between the predicted value {\displaystyle f(\mathbf {x} )} and the actual value {\displaystyle y}. The expected risk is defined to be
{\displaystyle I[f]=\int _{X\times Y}V(f(\mathbf {x} ),y)\,p(\mathbf {x} ,y)\,d\mathbf {x} \,dy}
The target function, the best possible function {\displaystyle f} that can be chosen, is given by the {\displaystyle f} that satisfies {\displaystyle f=\mathop {\operatorname {argmin} } _{h\in {\mathcal {H}}}I[h]}
Because the probability distribution {\displaystyle p(\mathbf {x} ,y)} is unknown, a proxy measure for the expected risk must be used. This measure is based on the training set, a sample from this unknown probability distribution. It is called the empirical risk {\displaystyle I_{S}[f]={\frac {1}{n}}\sum _{i=1}^{n}V(f(\mathbf {x} _{i}),y_{i})}
A learning algorithm that chooses the function {\displaystyle f_{S}} that minimizes the empirical risk is called empirical risk minimization.
== Loss functions ==
The choice of loss function is a determining factor on the function {\displaystyle f_{S}} that will be chosen by the learning algorithm. The loss function also affects the convergence rate for an algorithm. It is important for the loss function to be convex.
Different loss functions are used depending on whether the problem is one of regression or one of classification.
=== Regression ===
The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression. The form is:
{\displaystyle V(f(\mathbf {x} ),y)=(y-f(\mathbf {x} ))^{2}}
The absolute value loss (also known as the L1-norm) is also sometimes used: {\displaystyle V(f(\mathbf {x} ),y)=|y-f(\mathbf {x} )|}
=== Classification ===
In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output. For binary classification with {\displaystyle Y=\{-1,1\}}, this is: {\displaystyle V(f(\mathbf {x} ),y)=\theta (-yf(\mathbf {x} ))} where {\displaystyle \theta } is the Heaviside step function.
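The losses above can be written out directly, with a small empirical-risk computation (the data values are illustrative):

```python
import numpy as np

# Square loss and absolute loss for regression, and the 0-1 loss
# theta(-y f(x)) for binary labels y in {-1, +1}.
square_loss = lambda fx, y: (y - fx) ** 2
abs_loss = lambda fx, y: np.abs(y - fx)
zero_one_loss = lambda fx, y: (y * fx < 0).astype(float)   # Heaviside of -y*f(x)

y_true = np.array([1.0, -1.0, 1.0, -1.0])                  # actual outputs
f_x = np.array([0.8, -0.3, -0.2, 0.5])                     # predicted values f(x_i)

empirical_risk = zero_one_loss(f_x, y_true).mean()         # I_S[f] under 0-1 loss
assert empirical_risk == 0.5                               # 2 of 4 signs disagree
```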
== Regularization ==
In machine learning problems, a major problem that arises is that of overfitting. Because learning is a prediction problem, the goal is not to find a function that most closely fits the (previously observed) data, but to find one that will most accurately predict output from future input. Empirical risk minimization runs this risk of overfitting: finding a function that matches the data exactly but does not predict future output well.
Overfitting is symptomatic of unstable solutions; a small perturbation in the training set data would cause a large variation in the learned function. It can be shown that if the stability for the solution can be guaranteed, generalization and consistency are guaranteed as well. Regularization can solve the overfitting problem and give the problem stability.
Regularization can be accomplished by restricting the hypothesis space
H
{\displaystyle {\mathcal {H}}}
. A common example would be restricting
H
{\displaystyle {\mathcal {H}}}
to linear functions: this can be seen as a reduction to the standard problem of linear regression.
{\displaystyle {\mathcal {H}}} could also be restricted to polynomials of degree {\displaystyle p}, exponentials, or bounded functions on L1. Restricting the hypothesis space avoids overfitting because the form of the candidate functions is limited, which prevents choosing a function that drives the empirical risk arbitrarily close to zero.
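The effect of restricting {\displaystyle {\mathcal {H}}} to polynomials of degree {\displaystyle p} can be seen numerically: with a large enough degree, the empirical risk (training error) can be driven close to zero, which is exactly the overfitting the restriction guards against. A small illustrative sketch (data and degrees chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(20)  # noisy samples

# Larger degree p = richer hypothesis space = lower empirical risk
# (training MSE), but not necessarily better prediction on new data.
for p in (1, 3, 15):
    coeffs = np.polyfit(x, y, p)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(p, train_mse)
```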
One example of regularization is Tikhonov regularization. This consists of minimizing
{\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}V(f(\mathbf {x} _{i}),y_{i})+\gamma \left\|f\right\|_{\mathcal {H}}^{2}}
where {\displaystyle \gamma } is a fixed, positive constant called the regularization parameter. Tikhonov regularization ensures existence, uniqueness, and stability of the solution.
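For the special case of linear hypotheses with the square loss, where the norm penalty reduces to the Euclidean norm of the weight vector, Tikhonov regularization has the familiar closed-form ridge solution. A minimal sketch (synthetic data; regularization strengths chosen for illustration):

```python
import numpy as np

# Minimizing (1/n)·sum (y_i - w·x_i)^2 + gamma·||w||^2 over linear f(x) = w·x
# gives the ridge solution w = (X^T X / n + gamma·I)^(-1) X^T y / n.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(50)

def ridge(X, y, gamma):
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + gamma * np.eye(d), X.T @ y / n)

w_small = ridge(X, y, 1e-8)  # nearly unregularized
w_large = ridge(X, y, 10.0)  # heavily regularized: weights shrink toward 0
print(np.linalg.norm(w_small), np.linalg.norm(w_large))
```

Larger {\displaystyle \gamma } shrinks the solution toward zero, trading empirical fit for stability.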
== Bounding empirical risk ==
Consider a binary classifier {\displaystyle f:{\mathcal {X}}\to \{0,1\}}. We can apply Hoeffding's inequality to bound the probability that the empirical risk deviates from the true risk, since the empirical risk is an average of bounded (and hence sub-Gaussian) random variables:
{\displaystyle \mathbb {P} (|{\hat {R}}(f)-R(f)|\geq \epsilon )\leq 2e^{-2n\epsilon ^{2}}}
But generally, when we do empirical risk minimization, we are not given a classifier; we must choose it. Therefore, a more useful result is to bound the probability of the supremum of the difference over the whole class.
{\displaystyle \mathbb {P} {\bigg (}\sup _{f\in {\mathcal {F}}}|{\hat {R}}(f)-R(f)|\geq \epsilon {\bigg )}\leq 2S({\mathcal {F}},n)e^{-n\epsilon ^{2}/8}\approx n^{d}e^{-n\epsilon ^{2}/8}}
where {\displaystyle S({\mathcal {F}},n)} is the shattering number and {\displaystyle n} is the number of samples in the dataset. The exponential term comes from Hoeffding's inequality, but taking the supremum over the whole class incurs an extra cost, the shattering number.
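The cost of the supremum is easy to see numerically. The sketch below evaluates both bounds for illustrative values of {\displaystyle n}, {\displaystyle \epsilon }, and {\displaystyle d} (the numbers are arbitrary and chosen only to show the gap):

```python
import math

def hoeffding_bound(n, eps):
    """Single-classifier bound: P(|R̂(f) - R(f)| >= eps) <= 2·exp(-2·n·eps^2)."""
    return 2 * math.exp(-2 * n * eps * eps)

def uniform_bound(n, eps, d):
    """Uniform bound over the class, with the shattering number approximated by n^d."""
    return 2 * (n ** d) * math.exp(-n * eps * eps / 8)

# With n = 10,000 samples and eps = 0.05:
print(hoeffding_bound(10_000, 0.05))     # vanishingly small for one fixed classifier
print(uniform_bound(10_000, 0.05, d=3))  # far looser once we pay for the supremum
```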
== See also ==
Reproducing kernel Hilbert spaces are a useful choice for {\displaystyle {\mathcal {H}}}.
Proximal gradient methods for learning
Rademacher complexity
Vapnik–Chervonenkis dimension
== References == | Wikipedia/Statistical_learning_theory |
T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI and introduced in 2019. Like the original Transformer model, T5 models are encoder-decoder Transformers, where the encoder processes the input text and the decoder generates the output text.
T5 models are usually pretrained on a massive dataset of text and code, after which they can perform text-based tasks similar to their pretraining tasks. They can also be finetuned to perform other tasks.
T5 models have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics.
== Training ==
The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications.
The T5 models were pretrained on many tasks, all in the format of <input text> -> <output text>.
Some examples are:
restoring corrupted text: Thank you <X> me to your party <Y> week. -> <X> for inviting <Y> last <Z>, where the <Z> means "end of output", and the <X> and <Y> denote blanks to be filled, called "sentinels" in the original report.
translation: translate English to German: That is good. -> Das ist gut.
judging the grammatical acceptability of a sentence (CoLA sentence): The course is jumping well. -> not acceptable.
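The span-corruption format above can be illustrated with a toy word-level function. This is a simplified, hypothetical sketch: the real pipeline operates on subword tokens and samples the corrupted spans randomly, while here the spans are given explicitly:

```python
# Sentinel tokens <X>, <Y>, ... stand in for masked spans; the target
# reproduces the masked spans delimited by the sentinels, ending with a
# final sentinel that means "end of output".
def corrupt(words, spans, sentinels=("<X>", "<Y>", "<Z>")):
    inp, tgt, prev_end = [], [], 0
    for sent, (start, end) in zip(sentinels, spans):
        inp += words[prev_end:start] + [sent]   # replace the span with a sentinel
        tgt += [sent] + words[start:end]        # target recovers the span
        prev_end = end
    inp += words[prev_end:]
    tgt += [sentinels[len(spans)]]              # closing sentinel = end of output
    return " ".join(inp), " ".join(tgt)

words = "Thank you for inviting me to your party last week .".split()
inp, tgt = corrupt(words, [(2, 4), (8, 9)])
print(inp)  # Thank you <X> me to your party <Y> week .
print(tgt)  # <X> for inviting <Y> last <Z>
```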
== Architecture ==
The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text.
These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper reported the following 5 models:
*The encoder and the decoder have the same shape. So for example, the T5-small has 6 layers in the encoder and 6 layers in the decoder.
In the above table,
{\displaystyle n_{\text{layer}}}: Number of layers in the encoder; also, number of layers in the decoder. They always have the same number of layers.
{\displaystyle n_{\text{head}}}: Number of attention heads in each attention block.
{\displaystyle d_{\text{model}}}: Dimension of the embedding vectors.
{\displaystyle d_{\text{ff}}}: Dimension of the feedforward network within each encoder and decoder layer.
{\displaystyle d_{\text{kv}}}: Dimension of the key and value vectors used in the self-attention mechanism.
Note that unlike typical Transformers, the 3B and 11B models do not satisfy {\displaystyle d_{\text{model}}=d_{\text{kv}}n_{\text{head}}}.
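The identity can be checked against the configuration shapes commonly cited for these models. The numbers below are reproduced here as illustrative assumptions; consult the table in the original paper for authoritative values:

```python
# Shape check for the identity d_model = d_kv · n_head.
configs = {
    "small": dict(d_model=512,  n_head=8,   d_kv=64),
    "base":  dict(d_model=768,  n_head=12,  d_kv=64),
    "large": dict(d_model=1024, n_head=16,  d_kv=64),
    "3B":    dict(d_model=1024, n_head=32,  d_kv=128),
    "11B":   dict(d_model=1024, n_head=128, d_kv=128),
}
for name, c in configs.items():
    print(name, c["d_model"] == c["d_kv"] * c["n_head"])
# small/base/large satisfy the identity; 3B and 11B do not.
```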
Compared to the original Transformer, it uses a few minor modifications: layer normalization with no additive bias; placing the layer normalization outside the residual path; relative positional embedding.
For all experiments, they used a WordPiece tokenizer, with vocabulary size 32,000. The tokenizer is shared across both the input and output of each model. It was trained on a mixture of English, German, French, and Romanian data from the C4 dataset, at a ratio of 10:1:1:1.
== Variants ==
Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X.
Some models are trained from scratch while others are trained by starting with a previously trained model. By default, each model is trained from scratch, unless otherwise noted.
T5 small, base, large, 3B, 11B (2019): The original models.
T5 1.1 small, base, large, XL, XXL: Improved versions of the original T5 series, with roughly the same parameter counts. The activation function is GEGLU instead of ReLU. The 3B and 11B models were renamed "XL" and "XXL", and their shapes were changed.
LM-adapted T5 (2021): a series of models (from small to XXL) that started from checkpoints of the T5 series, but trained further on 100B additional tokens from C4.
Switch Transformer (2021): a mixture-of-experts variant of T5, by replacing the feedforward layers in the encoder and decoder blocks with mixture of expert feedforward layers.
T0 3B, 11B (2021): a series of models that started from checkpoints of LM-adapted T5, and were further trained to perform tasks based only on task instructions (zero-shot). Different entries in the series use different finetuning data.
ByT5 (2021): a byte-level version of T5, trained on mC4 (multilingual C4) dataset. It operates on text encoded as UTF-8 bytes, without tokenizers.
Flan-T5-XL (2022): a model that started with a checkpoint of T5 XL, then instruction-tuned on the FLAN dataset.
T5X (2022): a JAX-based re-implementation of the original T5 codebase. It is not a model. The original T5 codebase was implemented in TensorFlow with MeshTF.
UL2 20B (2022): a model with the same architecture as the T5 series, but scaled up to 20B parameters, and trained with a "mixture of denoisers" objective on C4. It was trained for longer than planned when a training run on a TPU cluster was accidentally left running for a month.
Flan-UL2 20B (2022): UL2 20B instruction-finetuned on the FLAN dataset.
Pile-T5 (2024): has the same architecture as T5, except it used the Llama tokenizer. It was trained on The Pile. It came in sizes of base, large, XL, XXL.
== Applications ==
The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply.
The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-number vectors, which can be used for downstream applications. For example, Google Imagen uses T5-XXL as text encoder, and the encoded text vectors are used as conditioning on a diffusion model. As another example, the AuraFlow diffusion model uses Pile-T5-XL.
== References ==
== External links ==
"T5 release - a google Collection". huggingface.co. 2024-07-31. Retrieved 2024-10-16.
== Notes == | Wikipedia/T5_(language_model) |
The actor-critic algorithm (AC) is a family of reinforcement learning (RL) algorithms that combine policy-based RL algorithms such as policy gradient methods, and value-based RL algorithms such as value iteration, Q-learning, SARSA, and TD learning.
An AC algorithm consists of two main components: an "actor" that determines which actions to take according to a policy function, and a "critic" that evaluates those actions according to a value function. Some AC algorithms are on-policy and some are off-policy; some apply to discrete action spaces, some to continuous ones, and some to both.
== Overview ==
The actor-critic methods can be understood as an improvement over pure policy gradient methods like REINFORCE via introducing a baseline.
=== Actor ===
The actor uses a policy function {\displaystyle \pi (a|s)}, while the critic estimates either the value function {\displaystyle V(s)}, the action-value Q-function {\displaystyle Q(s,a)}, the advantage function {\displaystyle A(s,a)}, or any combination thereof.
The actor is a parameterized function {\displaystyle \pi _{\theta }}, where {\displaystyle \theta } are the parameters of the actor. The actor takes as argument the state of the environment {\displaystyle s} and produces a probability distribution {\displaystyle \pi _{\theta }(\cdot |s)}.
If the action space is discrete, then {\displaystyle \sum _{a}\pi _{\theta }(a|s)=1}. If the action space is continuous, then {\displaystyle \int _{a}\pi _{\theta }(a|s)da=1}.
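For a discrete action space, the normalization constraint is typically enforced by construction, for example with a softmax over action logits. A minimal sketch (the logits are placeholder values standing in for the output of a policy network):

```python
import numpy as np

# A discrete actor: softmax over action logits guarantees that
# pi_theta(.|s) is a probability distribution, i.e. sum_a pi_theta(a|s) = 1.
def softmax_policy(logits):
    z = np.exp(logits - logits.max())  # subtract max for numerical stability
    return z / z.sum()

logits = np.array([2.0, 0.5, -1.0])    # hypothetical network outputs for one state
probs = softmax_policy(logits)
print(probs.sum())                      # 1.0 up to floating-point error
action = np.random.default_rng(0).choice(len(probs), p=probs)  # sample an action
```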
The goal of policy optimization is to improve the actor: to find some {\displaystyle \theta } that maximizes the expected episodic reward {\displaystyle J(\theta )}:
{\displaystyle J(\theta )=\mathbb {E} _{\pi _{\theta }}\left[\sum _{t=0}^{T}\gamma ^{t}r_{t}\right]}
where {\displaystyle \gamma } is the discount factor, {\displaystyle r_{t}} is the reward at step {\displaystyle t}, and {\displaystyle T} is the time horizon (which can be infinite).
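The quantity inside the expectation is the discounted episodic return, which is straightforward to compute from a sampled trajectory; {\displaystyle J(\theta )} is then estimated by averaging it over many episodes. A small sketch with placeholder rewards:

```python
# Discounted return of one episode: sum_t gamma^t · r_t.
def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [1.0, 0.0, 2.0]                      # r_0, r_1, r_2 from one episode
print(discounted_return(rewards, gamma=0.9))   # 1 + 0 + 0.81·2 = 2.62
```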
The goal of the policy gradient method is to optimize {\displaystyle J(\theta )} by gradient ascent on the policy gradient {\displaystyle \nabla J(\theta )}.
As detailed on the policy gradient method page, there are many unbiased estimators of the policy gradient:
{\displaystyle \nabla _{\theta }J(\theta )=\mathbb {E} _{\pi _{\theta }}\left[\sum _{0\leq j\leq T}\nabla _{\theta }\ln \pi _{\theta }(A_{j}|S_{j})\cdot \Psi _{j}{\Big |}S_{0}=s_{0}\right]}
where {\textstyle \Psi _{j}} is a linear sum of the following:
{\textstyle \sum _{0\leq i\leq T}(\gamma ^{i}R_{i})}.
{\textstyle \gamma ^{j}\sum _{j\leq i\leq T}(\gamma ^{i-j}R_{i})}: the REINFORCE algorithm.
{\textstyle \gamma ^{j}\sum _{j\leq i\leq T}(\gamma ^{i-j}R_{i})-b(S_{j})}: the REINFORCE with baseline algorithm. Here {\displaystyle b} is an arbitrary function.
{\textstyle \gamma ^{j}\left(R_{j}+\gamma V^{\pi _{\theta }}(S_{j+1})-V^{\pi _{\theta }}(S_{j})\right)}: TD(1) learning.
{\textstyle \gamma ^{j}Q^{\pi _{\theta }}(S_{j},A_{j})}.
{\textstyle \gamma ^{j}A^{\pi _{\theta }}(S_{j},A_{j})}: Advantage Actor-Critic (A2C).
{\textstyle \gamma ^{j}\left(R_{j}+\gamma R_{j+1}+\gamma ^{2}V^{\pi _{\theta }}(S_{j+2})-V^{\pi _{\theta }}(S_{j})\right)}: TD(2) learning.
{\textstyle \gamma ^{j}\left(\sum _{k=0}^{n-1}\gamma ^{k}R_{j+k}+\gamma ^{n}V^{\pi _{\theta }}(S_{j+n})-V^{\pi _{\theta }}(S_{j})\right)}: TD(n) learning.
{\textstyle \gamma ^{j}\sum _{n=1}^{\infty }{\frac {\lambda ^{n-1}}{1-\lambda }}\cdot \left(\sum _{k=0}^{n-1}\gamma ^{k}R_{j+k}+\gamma ^{n}V^{\pi _{\theta }}(S_{j+n})-V^{\pi _{\theta }}(S_{j})\right)}: TD(λ) learning, also known as GAE (generalized advantage estimate). This is obtained by an exponentially decaying sum of the TD(n) learning terms.
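Two of the {\textstyle \Psi _{j}} choices above can be computed directly from one sampled trajectory. In this sketch the value estimates are placeholder numbers standing in for the critic's output:

```python
def reinforce_psi(rewards, gamma, j):
    """gamma^j · sum_{i=j..T} gamma^(i-j) · R_i  (the REINFORCE choice)."""
    return gamma ** j * sum(gamma ** (i - j) * r
                            for i, r in enumerate(rewards) if i >= j)

def td1_psi(rewards, values, gamma, j):
    """gamma^j · (R_j + gamma·V(S_{j+1}) - V(S_j))  (the TD(1) choice)."""
    return gamma ** j * (rewards[j] + gamma * values[j + 1] - values[j])

rewards = [1.0, 0.5, 2.0]
values = [1.8, 1.2, 2.0, 0.0]   # placeholder V(S_0..S_3); terminal state valued 0
print(reinforce_psi(rewards, 0.9, j=1))       # 0.9·(0.5 + 0.9·2.0) = 2.07
print(td1_psi(rewards, values, 0.9, j=1))     # 0.9·(0.5 + 0.9·2.0 - 1.2) = 0.99
```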
=== Critic ===
In the unbiased estimators given above, certain functions such as {\displaystyle V^{\pi _{\theta }},Q^{\pi _{\theta }},A^{\pi _{\theta }}} appear. These are approximated by the critic. Since these functions all depend on the actor, the critic must learn alongside the actor. The critic is learned by value-based RL algorithms.
For example, if the critic is estimating the state-value function {\displaystyle V^{\pi _{\theta }}(s)}, then it can be learned by any value function approximation method. Let the critic be a function approximator {\displaystyle V_{\phi }(s)} with parameters {\displaystyle \phi }.
The simplest example is TD(1) learning, which trains the critic to minimize the TD(1) error:
{\displaystyle \delta _{i}=R_{i}+\gamma V_{\phi }(S_{i+1})-V_{\phi }(S_{i})}
The critic parameters are updated by gradient descent on the squared TD error:
{\displaystyle \phi \leftarrow \phi -\alpha \nabla _{\phi }(\delta _{i})^{2}=\phi +\alpha \delta _{i}\nabla _{\phi }V_{\phi }(S_{i})}
where {\displaystyle \alpha } is the learning rate. Note that the gradient is taken with respect to the {\displaystyle \phi } in {\displaystyle V_{\phi }(S_{i})} only, since the {\displaystyle \phi } in {\displaystyle \gamma V_{\phi }(S_{i+1})} constitutes a moving target, and the gradient is not taken with respect to it. This is a common source of error in implementations that use automatic differentiation, and requires "stopping the gradient" at that point.
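For a linear critic {\displaystyle V_{\phi }(s)=\phi \cdot x(s)}, the stopped-gradient (semi-gradient) update can be written out explicitly, making it clear that the bootstrapped target is treated as a constant. A minimal NumPy sketch with placeholder feature vectors:

```python
import numpy as np

# Semi-gradient TD update for a linear critic V_phi(s) = phi · x(s).
# The bootstrapped target R + gamma·V_phi(S') is treated as a constant
# ("stop gradient"): phi is updated only through the grad of V_phi(S_i).
def td_update(phi, x_s, x_next, reward, gamma, alpha):
    v_s, v_next = phi @ x_s, phi @ x_next
    delta = reward + gamma * v_next - v_s   # TD error
    return phi + alpha * delta * x_s        # gradient w.r.t. V_phi(S_i) only

phi = np.zeros(2)
phi = td_update(phi, x_s=np.array([1.0, 0.0]),
                x_next=np.array([0.0, 1.0]), reward=1.0, gamma=0.9, alpha=0.5)
print(phi)  # only the visited state's feature weight moves toward the target
```

In autodiff frameworks the same effect is obtained by detaching the target term from the computation graph.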
Similarly, if the critic is estimating the action-value function {\displaystyle Q^{\pi _{\theta }}}, then it can be learned by Q-learning or SARSA. In SARSA, the critic maintains an estimate of the Q-function, parameterized by {\displaystyle \phi }, denoted as {\displaystyle Q_{\phi }(s,a)}. The temporal difference error is then calculated as {\displaystyle \delta _{i}=R_{i}+\gamma Q_{\phi }(S_{i+1},A_{i+1})-Q_{\phi }(S_{i},A_{i})}. The critic is then updated by
{\displaystyle \phi \leftarrow \phi +\alpha \delta _{i}\nabla _{\phi }Q_{\phi }(S_{i},A_{i})}
The advantage critic can be trained by training both a Q-function {\displaystyle Q_{\phi }(s,a)} and a state-value function {\displaystyle V_{\phi }(s)}, then letting {\displaystyle A_{\phi }(s,a)=Q_{\phi }(s,a)-V_{\phi }(s)}. However, it is more common to train just a state-value function {\displaystyle V_{\phi }(s)}, then estimate the advantage by
{\displaystyle A_{\phi }(S_{i},A_{i})\approx \sum _{j\in 0:n-1}\gamma ^{j}R_{i+j}+\gamma ^{n}V_{\phi }(S_{i+n})-V_{\phi }(S_{i})}
Here, {\displaystyle n} is a positive integer. The higher {\displaystyle n} is, the lower the bias in the advantage estimation, but at the price of higher variance.
The Generalized Advantage Estimation (GAE) introduces a hyperparameter {\displaystyle \lambda } that smoothly interpolates between Monte Carlo returns ({\displaystyle \lambda =1}, high variance, no bias) and 1-step TD learning ({\displaystyle \lambda =0}, low variance, high bias). This hyperparameter can be adjusted to pick the optimal bias-variance trade-off in advantage estimation. It uses an exponentially decaying average of n-step returns with {\displaystyle \lambda } being the decay strength.
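In practice the exponentially decaying average is computed with the standard backward recursion over one trajectory. A sketch with placeholder rewards and critic values:

```python
import numpy as np

# GAE via the backward recursion A_t = delta_t + gamma·lambda·A_{t+1},
# where delta_t = R_t + gamma·V(S_{t+1}) - V(S_t).
# lam = 0 recovers the one-step TD error; lam = 1 the Monte Carlo advantage.
def gae(rewards, values, gamma, lam):
    T = len(rewards)
    adv = np.zeros(T)
    next_adv = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        next_adv = delta + gamma * lam * next_adv
        adv[t] = next_adv
    return adv

rewards = [1.0, 0.0, 2.0]
values = [0.5, 0.4, 1.0, 0.0]   # placeholder V(S_0..S_3); terminal state valued 0
print(gae(rewards, values, gamma=0.99, lam=0.0))   # pure TD errors delta_t
print(gae(rewards, values, gamma=0.99, lam=0.95))  # smoother interpolation
```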
== Variants ==
Asynchronous Advantage Actor-Critic (A3C): Parallel and asynchronous version of A2C.
Soft Actor-Critic (SAC): Incorporates entropy maximization for improved exploration.
Deep Deterministic Policy Gradient (DDPG): Specialized for continuous action spaces.
== See also ==
Reinforcement learning
Policy gradient method
Deep reinforcement learning
== References ==
Konda, Vijay R.; Tsitsiklis, John N. (January 2003). "On Actor-Critic Algorithms". SIAM Journal on Control and Optimization. 42 (4): 1143–1166. doi:10.1137/S0363012901385691. ISSN 0363-0129.
Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2 ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6.
Bertsekas, Dimitri P. (2019). Reinforcement learning and optimal control (2 ed.). Belmont, Massachusetts: Athena Scientific. ISBN 978-1-886529-39-7.
Szepesvári, Csaba (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (1 ed.). Cham: Springer International Publishing. ISBN 978-3-031-00423-0.
Grondman, Ivo; Busoniu, Lucian; Lopes, Gabriel A. D.; Babuska, Robert (November 2012). "A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients". IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 42 (6): 1291–1307. doi:10.1109/TSMCC.2012.2218595. ISSN 1094-6977. | Wikipedia/Actor-critic_algorithm |
BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) is a 176-billion-parameter transformer-based autoregressive large language model (LLM). The model, as well as the code base and the data used to train it, are distributed under free licences. BLOOM was trained on approximately 366 billion (1.6TB) tokens from March to July 2022.
BLOOM is the main outcome of the BigScience collaborative initiative, a one-year-long research workshop that took place between May 2021 and May 2022. BigScience was led by Hugging Face and involved several hundred researchers and engineers from France and abroad, representing both academia and the private sector. BigScience was supported by a large-scale public compute grant on the French public supercomputer Jean Zay, managed by GENCI and IDRIS (CNRS), on which BLOOM was trained.
BLOOM's training corpus, named ROOTS, combines data extracted from the then-latest version of the web-based OSCAR corpus (38% of ROOTS) and newly collected data extracted from a manually selected and documented list of language data sources. It encompasses 46 natural languages (in amounts ranging from 30% of the whole dataset for English to 0.00002% for Chi Tumbuka) and 13 programming languages.
== External links ==
Bigscience project on HuggingFace
== References == | Wikipedia/BLOOM_(language_model) |
An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector, and minimizing it reduces to solving a linear system.
Alternatively, one may consider a nonparametric Bayesian formulation of the output layer, under which: (i) a prior distribution is imposed over the output weights; and (ii) the output weights are marginalized out in the context of prediction generation, given the training data. This idea has been demonstrated by using Gaussian priors, whereby a Gaussian process model with an ESN-driven kernel function is obtained. Such a solution was shown to outperform ESNs with trainable (finite) sets of weights in several benchmarks.
Some publicly available efficient implementations of ESNs are aureservoir (a C++ library for various kinds with python/numpy bindings), MATLAB, ReservoirComputing.jl (a Julia-based implementation of various types) and pyESN (for simple ESNs in Python).
== Background ==
The echo state network (ESN) belongs to the recurrent neural network (RNN) family and shares its architecture and supervised learning principle. Unlike feedforward neural networks, recurrent neural networks are dynamic systems rather than functions. Recurrent neural networks are typically used for:
Learning dynamical processes: signal treatment in engineering and telecommunications, vibration analysis, seismology, and control of engines and generators.
Signal forecasting and generation: text, music, electric signals, chaotic signals.
Modeling of biological systems, neurosciences (cognitive neurodynamics), memory modeling, brain-computer interfaces (BCIs), filtering and Kalman processes, military applications, volatility modeling etc.
For the training of RNNs a number of learning algorithms are available: backpropagation through time, real-time recurrent learning. Convergence is not guaranteed due to instability and bifurcation phenomena.
The main approach of the ESN is firstly to operate a random, large, fixed, recurring neural network with the input signal, which induces a nonlinear response signal in each neuron within this "reservoir" network, and secondly connect a desired output signal by a trainable linear combination of all these response signals.
Another feature of the ESN is the autonomous operation in prediction: if it is trained with an input that is a backshifted version of the output, then it can be used for signal generation/prediction by using the previous output as input.
The main idea of ESNs is tied to liquid state machines, which were independently and simultaneously developed with ESNs by Wolfgang Maass. They, ESNs and the newly researched backpropagation decorrelation learning rule for RNNs are more and more summarized under the name Reservoir Computing.
Schiller and Steil also demonstrated that in conventional training approaches for RNNs, in which all weights (not only output weights) are adapted, the dominant changes are in output weights. In cognitive neuroscience, Peter F. Dominey analysed a related process related to the modelling of sequence processing in the mammalian brain, in particular speech recognition in the human brain. The basic idea also included a model of temporal input discrimination in biological neuronal networks. An early clear formulation of the reservoir computing idea is due to K. Kirby, who disclosed this concept in a largely forgotten conference contribution. The first formulation of the reservoir computing idea known today stems from L. Schomaker, who described how a desired target output could be obtained from an RNN by learning to combine signals from a randomly configured ensemble of spiking neural oscillators.
== Variants ==
Echo state networks can be built in different ways. They can be set up with or without directly trainable input-to-output connections, with or without feedback from the output back into the reservoir, with different neuron types, different reservoir-internal connectivity patterns, and so on. The output weights can be calculated by linear regression with any algorithm, whether online or offline. In addition to least-squares solutions, margin-maximization criteria, as used in training support vector machines, can be employed to determine the output values. Other variants of echo state networks seek to change the formulation to better match common models of physical systems, such as those typically defined by differential equations. Work in this direction includes echo state networks which partially include physical models, hybrid echo state networks, and continuous-time echo state networks.
The fixed RNN acts as a random, nonlinear medium whose dynamic response, the "echo", is used as a signal base. The linear combination of this base can be trained to reconstruct the desired output by minimizing some error criteria.
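The core recipe — a fixed random reservoir plus a trained linear readout — fits in a few lines. The sketch below uses illustrative, untuned hyperparameters (reservoir size, spectral radius, ridge strength) and trains the readout by ridge regression on a next-step sine prediction task:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 100, 500
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))       # fixed input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))      # fixed reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

u = np.sin(np.arange(n_steps) * 0.1)[:, None]        # input signal
target = np.roll(u, -1)                              # predict the next value

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W @ x)                 # reservoir update (never trained)
    states[t] = x

# Train only the linear readout, by ridge regression from states to targets.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target[:, 0])
pred = states @ W_out
print(np.mean((pred[100:-1] - target[100:-1, 0]) ** 2))  # small after the washout
```

Only `W_out` is learned; the quadratic error in `W_out` is exactly why a single linear solve suffices.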
Quantum echo state networks, defined over nodes based on registers of qubits, are in turn universal. Unlike other quantum algorithms, which suffer from the intrinsic noise of quantum computers, amplitude damping noise (affecting, for instance, superconducting qubits) is beneficial for inducing the echo state property and fading memory; training of a quantum echo state network assisted by quantum noise has been reported experimentally.
== Significance ==
RNNs were rarely used in practice before the introduction of the ESN because of the complexity involved in adjusting their connections (e.g., lack of autodifferentiation, susceptibility to vanishing/exploding gradients). RNN training algorithms were slow and often vulnerable to issues such as bifurcations, so convergence could not be guaranteed. ESN training, by contrast, does not have a problem with bifurcations and is easy to implement. In early studies, ESNs were shown to perform well on time series prediction tasks from synthetic datasets.
Today, many of the problems that made RNNs slow and error-prone have been addressed with the advent of autodifferentiation (deep learning) libraries, as well as more stable architectures such as long short-term memory and gated recurrent units; thus, the unique selling point of ESNs has been lost. RNNs have also proven themselves in several practical areas, such as language processing. To cope with tasks of similar complexity using reservoir computing methods would require memory of excessive size.
ESNs are used in some areas, such as signal processing applications. In particular, they have been widely used as a computing principle that mixes well with non-digital computing substrates. Since ESNs do not need to modify the parameters of the RNN, they make it possible to use many different objects as their nonlinear "reservoir": for example, optical microchips, mechanical nano-oscillators, polymer mixtures, or even artificial soft limbs.
== References == | Wikipedia/Echo_state_network |
Spiking neural networks (SNNs) are artificial neural networks (ANN) that mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier.
In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
While spike rates can be considered the analogue of the variable output of a traditional ANN, neurobiology research has indicated that high-speed processing cannot be performed solely through a rate-based scheme. For example, humans can perform an image recognition task requiring no more than 10 ms of processing time per neuron through the successive layers (going from the retina to the temporal lobe). This time window is too short for rate-based encoding. The precise spike timings in a small set of spiking neurons also have a higher information coding capacity compared with a rate-based approach.
The most prominent spiking neuron model is the leaky integrate-and-fire model. In that model, the momentary activation level (modeled as a differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state eventually either decays or—if the firing threshold is reached—the neuron fires. After firing, the state variable is reset to a lower value.
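The leaky integrate-and-fire dynamics just described can be simulated in a few lines. This is a discretized sketch with illustrative parameter values, not a calibrated neuron model:

```python
import numpy as np

# Leaky integrate-and-fire: the membrane potential decays toward rest,
# integrates the input current, and emits a spike (then resets) when it
# crosses the threshold.
def lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for i_t in current:
        v += dt / tau * (-(v - v_rest) + i_t)  # leaky integration step
        if v >= v_thresh:                      # threshold crossing -> fire
            spikes.append(1)
            v = v_reset                        # reset the state variable
        else:
            spikes.append(0)
    return spikes

spikes = lif(np.full(100, 1.5))   # constant supra-threshold drive
print(sum(spikes))                # number of spikes in 100 time steps
```

With a constant supra-threshold input the neuron fires periodically; a sub-threshold input (here, below the 1.0 threshold) produces no spikes at all.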
Various decoding methods exist for interpreting the outgoing spike train as a real-value number, relying on either the frequency of spikes (rate-code), the time-to-first-spike after stimulation, or the interval between spikes.
== History ==
Many multi-layer artificial neural networks are fully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs, they do not match biological networks and do not mimic neurons.
The biology-inspired Hodgkin–Huxley model of a spiking neuron was proposed in 1952. This model described how action potentials are initiated and propagated. Communication between neurons, which requires the exchange of chemical neurotransmitters in the synaptic gap, is described in models such as the integrate-and-fire model, FitzHugh–Nagumo model (1961–1962), and Hindmarsh–Rose model (1984). The leaky integrate-and-fire model (or a derivative) is commonly used as it is easier to compute than Hodgkin–Huxley.
While the notion of an artificial spiking neural network became popular only in the twenty-first century, studies between 1980 and 1995 supported the concept. The first models of this type of ANN appeared to simulate non-algorithmic intelligent information processing systems. However, the notion of the spiking neural network as a mathematical model was first worked on in the early 1970s.
As of 2019 SNNs lagged behind ANNs in accuracy, but the gap is decreasing, and has vanished on some tasks.
== Underpinnings ==
Information in the brain is represented as action potentials (neuron spikes), which may group into spike trains or coordinated waves. A fundamental question of neuroscience is to determine whether neurons communicate by a rate or temporal code. Temporal coding implies that a single spiking neuron can replace hundreds of hidden units on a conventional neural net.
SNNs define a neuron's current state as its potential (possibly modeled as a differential equation). An input pulse causes the potential to rise and then gradually decline. Encoding schemes can interpret these pulse sequences as a number, considering pulse frequency and pulse interval. Using the precise time of pulse occurrence, a neural network can consider more information and offer better computing properties.
SNNs compute in the continuous domain. Such neurons test for activation only when their potentials reach a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, accordingly raising or lowering their potentials.
The SNN approach produces a continuous output instead of the binary output of traditional ANNs. Pulse trains are not easily interpretable, hence the need for encoding schemes. However, a pulse train representation may be more suited for processing spatiotemporal data (or real-world sensory data classification). SNNs connect neurons only to nearby neurons so that they process input blocks separately (similar to CNN using filters). They consider time by encoding information as pulse trains so as not to lose information. This avoids the complexity of a recurrent neural network (RNN). Impulse neurons are more powerful computational units than traditional artificial neurons.
SNNs are theoretically more powerful than so called "second-generation networks" defined as ANNs "based on computational units that apply activation function with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs"; however, SNN training issues and hardware requirements limit their use. Although unsupervised biologically inspired learning methods are available such as Hebbian learning and STDP, no effective supervised training method is suitable for SNNs that can provide better performance than second-generation networks. Spike-based activation of SNNs is not differentiable, thus gradient descent-based backpropagation (BP) is not available.
SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs.
Pulse-coupled neural networks (PCNN) are often confused with SNNs. A PCNN can be seen as a kind of SNN.
Researchers are actively working on various topics. The first concerns differentiability. The expressions for both the forward- and backward-learning methods contain the derivative of the neural activation function, which is problematic because a neuron's output is 1 when it spikes and 0 otherwise. This all-or-nothing behavior disrupts gradients and makes these neurons unsuitable for gradient-based optimization. Approaches to resolving it include:
resorting to entirely biologically inspired local learning rules for the hidden units
translating conventionally trained “rate-based” NNs to SNNs
smoothing the network model to be continuously differentiable
defining an SG (Surrogate Gradient) as a continuous relaxation of the real gradients
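The surrogate-gradient idea can be illustrated in a few lines: the forward pass keeps the non-differentiable step, while the backward pass substitutes the derivative of a smooth relaxation. This Python sketch uses a fast-sigmoid-style surrogate with an assumed sharpness parameter `beta`; it is a conceptual illustration, not any particular library's API:

```python
def spike(v, threshold=1.0):
    """Forward pass: the all-or-nothing Heaviside step (not differentiable)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a fast-sigmoid relaxation of the step.

    Peaks at 1 when v sits exactly at threshold and decays on both sides,
    giving gradient-based training a usable signal where the true
    derivative is zero almost everywhere.
    """
    return 1.0 / (beta * abs(v - threshold) + 1.0) ** 2

print(spike(1.2), surrogate_grad(1.2))
```

During training, the forward pass uses `spike` while backpropagation substitutes `surrogate_grad` wherever the chain rule calls for the step's derivative.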
The second concerns the optimization algorithm. Standard BP can be expensive in terms of computation, memory, and communication and may be poorly suited to the hardware that implements it (e.g., a computer, brain, or neuromorphic device).
Incorporating additional neuron dynamics such as Spike Frequency Adaptation (SFA) is a notable advance, enhancing efficiency and computational power. These neurons sit between biological complexity and computational complexity. Originating from biological insights, SFA offers significant computational benefits by reducing power usage, especially in cases of repetitive or intense stimuli. This adaptation improves signal/noise clarity and introduces an elementary short-term memory at the neuron level, which in turn improves accuracy and efficiency. This was mostly achieved using compartmental neuron models. Simpler versions, neuron models with adaptive thresholds, are an indirect way of achieving SFA. SFA equips SNNs with improved learning capabilities, even with constrained synaptic plasticity, and elevates computational efficiency. This feature lessens the demand on network layers by decreasing the need for spike processing, thus lowering computational load and memory access time, essential aspects of neural computation. Moreover, SNNs utilizing neurons capable of SFA achieve levels of accuracy that rival those of conventional ANNs, while also requiring fewer neurons for comparable tasks. This efficiency streamlines the computational workflow and conserves space and energy, while maintaining technical integrity. High-performance deep spiking neural networks can operate with 0.3 spikes per neuron.
== Applications ==
SNNs can in principle be applied to the same applications as traditional ANNs. In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment. Due to their relative realism, they can be used to study biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, recordings of this circuit can be compared to the output of a corresponding SNN, evaluating the plausibility of the hypothesis. SNNs lack effective training mechanisms, which can complicate some applications, including computer vision.
When using SNNs for image based data, the images need to be converted into binary spike trains. Types of encodings include:
Temporal coding: generating one spike per neuron, in which spike latency is inversely proportional to the pixel intensity.
Rate coding: converting pixel intensity into a spike train, where the number of spikes is proportional to the pixel intensity.
Direct coding: using a trainable layer to generate a floating-point value for each time step. The layer converts each pixel at a certain time step into a floating-point value, and then a threshold is used on the generated floating-point values to pick either zero or one.
Phase coding: encoding temporal information into spike patterns based on a global oscillator.
Burst coding: transmitting spikes in bursts, increasing communication reliability.
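Two of these schemes are easy to sketch for a single normalized pixel. The Python below is an illustrative sketch (the step count and the Bernoulli sampling used for rate coding are assumptions, not a standard interface):

```python
import random

def rate_encode(pixel, n_steps=100, rng=random.Random(0)):
    """Rate coding: per-step spike probability proportional to intensity.
    `pixel` is a normalized intensity in [0, 1]."""
    return [1 if rng.random() < pixel else 0 for _ in range(n_steps)]

def temporal_encode(pixel, n_steps=100):
    """Temporal (latency) coding: one spike whose delay shrinks as
    intensity grows, so bright pixels fire first."""
    latency = int(round((1.0 - pixel) * (n_steps - 1)))
    train = [0] * n_steps
    train[latency] = 1
    return train

bright, dark = rate_encode(0.9), rate_encode(0.1)
print(sum(bright), sum(dark))  # the bright pixel spikes far more often
```

Note the trade-off the article describes: rate coding spreads information over many spikes, while temporal coding packs it into the timing of a single spike.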
== Software ==
A diverse range of application software can simulate SNNs. This software can be classified according to its uses:
=== SNN simulation ===
These simulate complex neural models. Large networks usually require lengthy processing. Candidates include:
Brian – developed by Romain Brette and Dan Goodman at the École Normale Supérieure;
GENESIS (the GEneral NEural SImulation System) – developed in James Bower's laboratory at Caltech;
NEST – developed by the NEST Initiative;
NEURON – mainly developed by Michael Hines, John W. Moore and Ted Carnevale in Yale University and Duke University;
RAVSim (Runtime Tool) – mainly developed by Sanaullah in Bielefeld University of Applied Sciences and Arts;
== Hardware ==
Sutton and Barton proposed that future neuromorphic architectures will comprise billions of nanosynapses, which require a clear understanding of the accompanying physical mechanisms. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated domain reversal. Simulations showed that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning.
== Benchmarks ==
Classification capabilities of spiking networks trained according to unsupervised learning methods have been tested on benchmark datasets such as the Iris, Wisconsin Breast Cancer, and Statlog Landsat datasets. Various approaches to information encoding and network design have been used, such as a 2-layer feedforward network for data clustering and classification. Based on Hopfield (1995), the authors implemented models of local receptive fields combining the properties of radial basis functions and spiking neurons to convert input signals having a floating-point representation into a spiking representation.
== See also ==
== References == | Wikipedia/Spiking_neural_network |
In mathematics, the Shimizu L-function, introduced by Hideo Shimizu (1963), is a Dirichlet series associated to a totally real algebraic number field.
Michael Francis Atiyah, H. Donnelly, and I. M. Singer (1983) defined the signature defect of the boundary of a manifold as the eta invariant, the value at s = 0 of their eta function, and used this to show that Hirzebruch's signature defect of a cusp of a Hilbert modular surface can be expressed in terms of the value at s = 0 or 1 of a Shimizu L-function.
== Definition ==
Suppose that K is a totally real algebraic number field, M is a lattice in the field, and V is a subgroup of maximal rank of the group of totally positive units preserving the lattice. The Shimizu L-series is given by
{\displaystyle L(M,V,s)=\sum _{\mu \in \{M-0\}/V}{\frac {\operatorname {sign} N(\mu )}{|N(\mu )|^{s}}}}
== References ==
Atiyah, Michael Francis; Donnelly, H.; Singer, I. M. (1982), "Geometry and analysis of Shimizu L-functions", Proceedings of the National Academy of Sciences of the United States of America, 79 (18): 5751, Bibcode:1982PNAS...79.5751A, doi:10.1073/pnas.79.18.5751, ISSN 0027-8424, JSTOR 12685, MR 0674920, PMC 346984, PMID 16593231
Atiyah, Michael Francis; Donnelly, H.; Singer, I. M. (1983), "Eta invariants, signature defects of cusps, and values of L-functions", Annals of Mathematics, Second Series, 118 (1): 131–177, doi:10.2307/2006957, ISSN 0003-486X, JSTOR 2006957, MR 0707164
Shimizu, Hideo (1963), "On discontinuous groups operating on the product of the upper half planes", Annals of Mathematics, Second Series, 77 (1): 33–71, doi:10.2307/1970201, ISSN 0003-486X, JSTOR 1970201, MR 0145106 | Wikipedia/Shimizu_L-function |
In mathematics, the explicit formulae for L-functions are relations between sums over the complex number zeroes of an L-function and sums over prime powers, introduced by Riemann (1859) for the Riemann zeta function. Such explicit formulae have been applied also to questions on bounding the discriminant of an algebraic number field, and the conductor of a number field.
== Riemann's explicit formula ==
In his 1859 paper "On the Number of Primes Less Than a Given Magnitude" Riemann sketched an explicit formula (it was not fully proven until 1895 by von Mangoldt, see below) for the normalized prime-counting function π0(x) which is related to the prime-counting function π(x) by
{\displaystyle \pi _{0}(x)={\frac {1}{2}}\lim _{h\to 0}\left[\,\pi (x+h)+\pi (x-h)\,\right]\,,}
which takes the arithmetic mean of the limit from the left and the limit from the right at discontinuities. His formula was given in terms of the related function
{\displaystyle f(x)=\pi _{0}(x)+{\frac {1}{2}}\,\pi _{0}(x^{1/2})+{\frac {1}{3}}\,\pi _{0}(x^{1/3})+\cdots }
in which a prime power p^n counts as 1⁄n of a prime. The normalized prime-counting function can be recovered from this function by
{\displaystyle \pi _{0}(x)=\sum _{n}{\frac {1}{n}}\,\mu (n)\,f(x^{1/n})=f(x)-{\frac {1}{2}}\,f(x^{1/2})-{\frac {1}{3}}\,f(x^{1/3})-{\frac {1}{5}}\,f(x^{1/5})+{\frac {1}{6}}\,f(x^{1/6})-\cdots ,}
where μ(n) is the Möbius function. Riemann's formula is then
{\displaystyle f(x)=\operatorname {li} (x)-\sum _{\rho }\operatorname {li} (x^{\rho })-\log(2)+\int _{x}^{\infty }{\frac {dt}{~t\,(t^{2}-1)~\log(t)~}}}
involving a sum over the non-trivial zeros ρ of the Riemann zeta function. The sum is not absolutely convergent, but may be evaluated by taking the zeros in order of the absolute value of their imaginary part. The function li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral
{\displaystyle \operatorname {li} (x)=\int _{0}^{x}{\frac {dt}{\,\log(t)\,}}\,.}
The terms li(xρ) involving the zeros of the zeta function need some care in their definition as li has branch points at 0 and 1, and are defined by analytic continuation in the complex variable ρ in the region x > 1 and Re(ρ) > 0. The other terms also correspond to zeros: The dominant term li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. (For graphs of the sums of the first few terms of this series see Zagier 1977.)
The first rigorous proof of the aforementioned formula was given by von Mangoldt in 1895: it started with a proof of the following formula for Chebyshev's function ψ
{\displaystyle \psi _{0}(x)={\dfrac {1}{2\pi i}}\int _{\sigma -i\infty }^{\sigma +i\infty }\left(-{\dfrac {\zeta '(s)}{\zeta (s)}}\right){\dfrac {x^{s}}{s}}\,ds=x-\sum _{\rho }{\frac {~x^{\rho }\,}{\rho }}-\log(2\pi )-{\dfrac {1}{2}}\log(1-x^{-2})}
where the LHS is an inverse Mellin transform with
{\displaystyle \sigma >1\,,\quad \psi (x)=\sum _{p^{k}\leq x}\log p\,,\quad {\text{and}}\quad \psi _{0}(x)={\frac {1}{2}}\lim _{h\to 0}(\psi (x+h)+\psi (x-h))}
and the RHS is obtained from the residue theorem, and then converting it into the formula that Riemann himself actually sketched.
This series is also conditionally convergent and the sum over zeroes should again be taken in increasing order of imaginary part:
{\displaystyle \sum _{\rho }{\frac {x^{\rho }}{\rho }}=\lim _{T\to \infty }S(x,T)}
where
{\displaystyle S(x,T)=\sum _{\rho :\left|\Im \rho \right|\leq T}{\frac {x^{\rho }}{\rho }}\,.}
The error involved in truncating the sum to S(x,T) is always smaller than ln(x) in absolute value, and when divided by the natural logarithm of x, has absolute value smaller than x⁄T divided by the distance from x to the nearest prime power.
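The truncated formula is easy to probe numerically. The sketch below compares an exact ψ(x), summed directly over prime powers, with the explicit formula cut off after the first five non-trivial zeros (whose ordinates 14.1347…, 21.0220…, … are well-known tabulated values); conjugate zeros are paired, so each pair contributes 2·Re(x^ρ/ρ). This is an illustration of the truncation, not a rigorous computation:

```python
import cmath
import math

def psi_exact(x):
    """Chebyshev's psi(x): sum of log p over all prime powers p^k <= x."""
    n = int(x)
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    total = 0.0
    for p in range(2, n + 1):
        if sieve[p]:
            pk = p
            while pk <= x:
                total += math.log(p)
                pk *= p
    return total

# Ordinates of the first five non-trivial zeros rho = 1/2 + i*gamma
# (well-known tabulated values, truncated here).
ZERO_ORDINATES = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi_explicit(x, ordinates=ZERO_ORDINATES):
    """Truncated von Mangoldt formula:
    x - sum_rho x^rho/rho - log(2*pi) - (1/2)*log(1 - x**-2).
    Conjugate zeros are paired, so each pair contributes 2*Re(x^rho/rho)."""
    zero_sum = sum(
        2 * (cmath.exp(complex(0.5, g) * math.log(x)) / complex(0.5, g)).real
        for g in ordinates
    )
    return x - zero_sum - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)

print(psi_exact(100.0), psi_explicit(100.0))
```

Adding more zeros tightens the oscillating correction term, which is exactly the sense in which the zeros "control the oscillations of primes".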
== Weil's explicit formula ==
There are several slightly different ways to state the explicit formula. André Weil's form of the explicit formula states
{\displaystyle {\begin{aligned}&\Phi (1)+\Phi (0)-\sum _{\rho }\Phi (\rho )\\&=\sum _{p,m}{\frac {\log(p)}{p^{m/2}}}{\Big (}F(\log(p^{m}))+F(-\log(p^{m})){\Big )}-{\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi (t)\Psi (t)\,dt\end{aligned}}}
where
ρ runs over the non-trivial zeros of the zeta function
p runs over positive primes
m runs over positive integers
F is a smooth function all of whose derivatives are rapidly decreasing
φ is the Fourier transform of F:
{\displaystyle \varphi (t)=\int _{-\infty }^{\infty }F(x)e^{itx}\,dx}
{\displaystyle \Phi (1/2+it)=\varphi (t)}
{\displaystyle \Psi (t)=-\log(\pi )+\operatorname {Re} (\psi (1/4+it/2))}, where ψ is the digamma function Γ′/Γ.
Roughly speaking, the explicit formula says the Fourier transform of the zeros of the zeta function is the set of prime powers plus some elementary factors. Once this is said, the formula comes from the fact that the Fourier transform is a unitary operator, so that a scalar product in time domain is equal to the scalar product of the Fourier transforms in the frequency domain.
The terms in the formula arise in the following way.
The terms on the right hand side come from the logarithmic derivative of
{\displaystyle \zeta ^{*}(s)=\Gamma (s/2)\pi ^{-s/2}\prod _{p}{\frac {1}{1-p^{-s}}}}
with the terms corresponding to the prime p coming from the Euler factor of p, and the term at the end involving Ψ coming from the gamma factor (the Euler factor at infinity).
The left-hand side is a sum over all zeros of ζ* counted with multiplicities, so the poles at 0 and 1 are counted as zeros of order −1.
Weil's explicit formula can be understood like this. The goal is to be able to write:
{\displaystyle {\frac {d}{du}}\left[\sum _{n\leq e^{|u|}}\Lambda (n)+{\frac {1}{2}}\ln(1-e^{-2|u|})\right]=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right]+{\frac {1}{2}}{\frac {d\ln(1-e^{-2|u|})}{du}}=e^{u}-\sum _{\rho }e^{\rho u},}
where Λ is the von Mangoldt function.
Thus the Fourier transform of the non-trivial zeros equals the symmetrized prime powers plus a minor term. Of course, the sums involved are not convergent, but the trick is to use the unitarity of the Fourier transform, which preserves the scalar product:
{\displaystyle \int _{-\infty }^{\infty }f(u)g^{*}(u)\,du=\int _{-\infty }^{\infty }F(t)G^{*}(t)\,dt}
where F, G are the Fourier transforms of f, g.
At first glance, this seems to be a formula for functions only, but in many cases it also works when g is a distribution. Hence, by setting
{\displaystyle g(u)=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right],}
where δ(u) is the Dirac delta, and carefully choosing a function f and its Fourier transform, we get the formula above.
== Generalizations ==
The Riemann zeta function can be replaced by a Dirichlet L-function of a Dirichlet character χ. The sum over prime powers then gets extra factors of χ(p^m), and the terms Φ(1) and Φ(0) disappear because the L-series has no poles.
More generally, the Riemann zeta function and the L-series can be replaced by the Dedekind zeta function of an algebraic number field or a Hecke L-series. The sum over primes then gets replaced by a sum over prime ideals.
== Applications ==
Riemann's original use of the explicit formula was to give an exact formula for the number of primes less than a given number. To do this, take F(log(y)) to be y^{1/2}/log(y) for 0 ≤ y ≤ x and 0 elsewhere. Then the main term of the sum on the right is the number of primes less than x. The main term on the left is Φ(1), which turns out to be the dominant term of the prime number theorem, and the main correction is the sum over non-trivial zeros of the zeta function. (There is a minor technical problem in using this case, in that the function F does not satisfy the smoothness condition.)
== Hilbert–Pólya conjecture ==
According to the Hilbert–Pólya conjecture, the complex zeroes ρ should be the eigenvalues of some linear operator T. The sum over the zeros of the explicit formula is then (at least formally) given by a trace:
{\displaystyle \sum _{\rho }F(\rho )=\operatorname {Tr} (F({\widehat {T}})).\!}
Development of the explicit formulae for a wide class of L-functions was given by Weil (1952), who first extended the idea to local zeta-functions, and formulated a version of a generalized Riemann hypothesis in this setting, as a positivity statement for a generalized function on a topological group. More recent work by Alain Connes has gone much further into the functional-analytic background, providing a trace formula the validity of which is equivalent to such a generalized Riemann hypothesis. A slightly different point of view was given by Meyer (2005), who derived the explicit formula of Weil via harmonic analysis on adelic spaces.
== See also ==
Selberg trace formula
Selberg zeta function
== Footnotes ==
== References ==
Ingham, A.E. (1990) [1932], The Distribution of Prime Numbers, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 30, reissued with a foreword by R. C. Vaughan (2nd ed.), Cambridge University Press, ISBN 978-0-521-39789-6, MR 1074573, Zbl 0715.11045
Lang, Serge (1994), Algebraic number theory, Graduate Texts in Mathematics, vol. 110 (2nd ed.), New York, NY: Springer-Verlag, ISBN 0-387-94225-4, Zbl 0811.11001
Riemann, Bernhard (1859), "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse", Monatsberichte der Berliner Akademie
Weil, André (1952), "Sur les "formules explicites" de la théorie des nombres premiers" [On "explicit formulas" in the theory of prime numbers], Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French), Tome Supplémentaire: 252–265, MR 0053152, Zbl 0049.03205
von Mangoldt, Hans (1895), "Zu Riemanns Abhandlung "Über die Anzahl der Primzahlen unter einer gegebenen Grösse"" [On Riemann's paper "The number of prime numbers less than a given magnitude"], Journal für die reine und angewandte Mathematik (in German), 114: 255–305, ISSN 0075-4102, JFM 26.0215.03, MR 1580379
Meyer, Ralf (2005), "On a representation of the idele class group related to primes and zeros of L-functions", Duke Math. J., 127 (3): 519–595, arXiv:math/0311468, doi:10.1215/s0012-7094-04-12734-4, ISSN 0012-7094, MR 2132868, S2CID 119176169, Zbl 1079.11044
Zagier, Don (1977), "The first 50 million prime numbers", The Mathematical Intelligencer, 1 (S2): 7–19, doi:10.1007/bf03351556, S2CID 37866599
== Further reading ==
Edwards, H.M. (1974), Riemann's zeta function, Pure and Applied Mathematics, vol. 58, New York-London: Academic Press, ISBN 0-12-232750-0, Zbl 0315.10035
Riesel, Hans (1994), Prime numbers and computer methods for factorization, Progress in Mathematics, vol. 126 (2nd ed.), Boston, MA: Birkhäuser, ISBN 0-8176-3743-5, Zbl 0821.11001 | Wikipedia/Explicit_formulae_for_L-functions |
In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. It is relevant in problems of physics and has number theoretic significance. In particular, according to the prime number theorem, it is a very good approximation to the prime-counting function, which is defined as the number of prime numbers less than or equal to a given value x.
== Integral representation ==
The logarithmic integral has an integral representation defined for all positive real numbers x ≠ 1 by the definite integral
{\displaystyle \operatorname {li} (x)=\int _{0}^{x}{\frac {dt}{\ln t}}.}
Here, ln denotes the natural logarithm. The function 1/(ln t) has a singularity at t = 1, and the integral for x > 1 is interpreted as a Cauchy principal value,
{\displaystyle \operatorname {li} (x)=\lim _{\varepsilon \to 0+}\left(\int _{0}^{1-\varepsilon }{\frac {dt}{\ln t}}+\int _{1+\varepsilon }^{x}{\frac {dt}{\ln t}}\right).}
== Offset logarithmic integral ==
The offset logarithmic integral or Eulerian logarithmic integral is defined as
{\displaystyle \operatorname {Li} (x)=\int _{2}^{x}{\frac {dt}{\ln t}}=\operatorname {li} (x)-\operatorname {li} (2).}
As such, the integral representation has the advantage of avoiding the singularity in the domain of integration.
Equivalently,
{\displaystyle \operatorname {li} (x)=\int _{0}^{x}{\frac {dt}{\ln t}}=\operatorname {Li} (x)+\operatorname {li} (2).}
== Special values ==
The function li(x) has a single positive zero; it occurs at x ≈ 1.45136 92348 83381 05028 39684 85892 02744 94930... OEIS: A070769; this number is known as the Ramanujan–Soldner constant.
{\displaystyle \operatorname {li} ({\text{Li}}^{-1}(0))={\text{li}}(2)} ≈ 1.045163 780117 492784 844588 889194 613136 522615 578151... OEIS: A069284
This is {\displaystyle -(\Gamma (0,-\ln 2)+i\,\pi )}, where Γ(a, x) is the incomplete gamma function. It must be understood as the Cauchy principal value of the function.
== Series representation ==
The function li(x) is related to the exponential integral Ei(x) via the equation
{\displaystyle \operatorname {li} (x)={\hbox{Ei}}(\ln x),}
which is valid for x > 0. This identity provides a series representation of li(x) as
{\displaystyle \operatorname {li} (e^{u})={\hbox{Ei}}(u)=\gamma +\ln |u|+\sum _{n=1}^{\infty }{u^{n} \over n\cdot n!}\quad {\text{ for }}u\neq 0\,,}
where γ ≈ 0.57721 56649 01532 ... OEIS: A001620 is the Euler–Mascheroni constant. A more rapidly convergent series by Ramanujan is
{\displaystyle \operatorname {li} (x)=\gamma +\ln |\ln x|+{\sqrt {x}}\sum _{n=1}^{\infty }\left({\frac {(-1)^{n-1}(\ln x)^{n}}{n!\,2^{n-1}}}\sum _{k=0}^{\lfloor (n-1)/2\rfloor }{\frac {1}{2k+1}}\right).}
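Ramanujan's series is practical for direct evaluation. The following Python sketch implements it term by term; the truncation at 60 terms is an arbitrary choice that is ample for moderate x:

```python
import math

def li(x, terms=60):
    """Logarithmic integral via Ramanujan's rapidly convergent series."""
    gamma = 0.5772156649015329  # Euler–Mascheroni constant
    u = math.log(x)
    total = 0.0
    for n in range(1, terms + 1):
        inner = sum(1.0 / (2 * k + 1) for k in range((n - 1) // 2 + 1))
        total += (-1) ** (n - 1) * u ** n / (math.factorial(n) * 2 ** (n - 1)) * inner
    return gamma + math.log(abs(u)) + math.sqrt(x) * total

print(li(2))  # matches the special value li(2) ≈ 1.0451637801… quoted above
```

As a check, the function vanishes near the Ramanujan–Soldner constant x ≈ 1.45136 92348…, the single positive zero noted in the Special values section.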
== Asymptotic expansion ==
The asymptotic behavior for x → ∞ is
{\displaystyle \operatorname {li} (x)=O\left({\frac {x}{\ln x}}\right).}
where O is the big O notation. The full asymptotic expansion is
{\displaystyle \operatorname {li} (x)\sim {\frac {x}{\ln x}}\sum _{k=0}^{\infty }{\frac {k!}{(\ln x)^{k}}}}
or
{\displaystyle {\frac {\operatorname {li} (x)}{x/\ln x}}\sim 1+{\frac {1}{\ln x}}+{\frac {2}{(\ln x)^{2}}}+{\frac {6}{(\ln x)^{3}}}+\cdots .}
This gives the following more accurate asymptotic behaviour:
{\displaystyle \operatorname {li} (x)-{\frac {x}{\ln x}}=O\left({\frac {x}{(\ln x)^{2}}}\right).}
As an asymptotic expansion, this series is not convergent: it is a reasonable approximation only if the series is truncated at a finite number of terms, and only large values of x are employed. This expansion follows directly from the asymptotic expansion for the exponential integral.
This implies e.g. that we can bracket li as:
{\displaystyle 1+{\frac {1}{\ln x}}<\operatorname {li} (x){\frac {\ln x}{x}}<1+{\frac {1}{\ln x}}+{\frac {3}{(\ln x)^{2}}}}
for all ln x ≥ 11.
== Number theoretic significance ==
The logarithmic integral is important in number theory, appearing in estimates of the number of prime numbers less than a given value. For example, the prime number theorem states that:
{\displaystyle \pi (x)\sim \operatorname {li} (x)}
where π(x) denotes the number of primes smaller than or equal to x.
Assuming the Riemann hypothesis, we get the even stronger:
{\displaystyle |\operatorname {li} (x)-\pi (x)|=O({\sqrt {x}}\log x)}
In fact, the Riemann hypothesis is equivalent to the statement that:
{\displaystyle |\operatorname {li} (x)-\pi (x)|=O(x^{1/2+a})}
for any a > 0.
For small x, li(x) > π(x), but the difference changes sign infinitely often as x increases; the first time this happens is somewhere between 10^19 and 1.4×10^316.
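The overshoot for small x is easy to observe numerically. This sketch reuses Ramanujan's series for li (from the series representation above) together with a simple sieve for π(x); both are illustrative choices:

```python
import math

def li(x, terms=80):
    """li(x) via Ramanujan's series from the series representation above."""
    gamma = 0.5772156649015329  # Euler–Mascheroni constant
    u = math.log(x)
    s = sum(
        (-1) ** (n - 1) * u ** n / (math.factorial(n) * 2 ** (n - 1))
        * sum(1.0 / (2 * k + 1) for k in range((n - 1) // 2 + 1))
        for n in range(1, terms + 1)
    )
    return gamma + math.log(abs(u)) + math.sqrt(x) * s

def prime_count(n):
    """pi(n) by a simple Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

print(prime_count(1000), li(1000.0))  # 168 versus roughly 177.6
```

At x = 1000 the logarithmic integral overshoots the true count by about ten primes, consistent with li(x) > π(x) in this range.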
== See also ==
Jørgen Pedersen Gram
Skewes' number
List of integrals of logarithmic functions
== References ==
Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 5". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 228. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
Temme, N. M. (2010), "Exponential, Logarithmic, Sine, and Cosine Integrals", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. | Wikipedia/Logarithmic_integral_function |
In mathematics, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x).
For example, for floor: ⌊2.4⌋ = 2, ⌊−2.4⌋ = −3, and for ceiling: ⌈2.4⌉ = 3, and ⌈−2.4⌉ = −2.
The floor of x is also called the integral part, integer part, greatest integer, or entier of x, and was historically denoted [x] (among other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers.
For an integer n, ⌊n⌋ = ⌈n⌉ = n.
Although floor(x + 1) and ceil(x) produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when x = 2.0001, ⌊2.0001 + 1⌋ = ⌈2.0001⌉ = 3. However, if x = 2, then ⌊2 + 1⌋ = 3, while ⌈2⌉ = 2.
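These distinctions map directly onto the standard math functions of most languages; in Python, for example:

```python
import math

# Floor, ceiling, and truncation toward zero disagree on negative inputs:
assert math.floor(2.4) == 2 and math.ceil(2.4) == 3
assert math.floor(-2.4) == -3 and math.ceil(-2.4) == -2
assert math.trunc(-2.4) == -2  # truncation is NOT the floor for negatives

# floor(x + 1) agrees with ceil(x) except at exact integers:
for x in (2.0001, 3.7, -1.5):
    assert math.floor(x + 1) == math.ceil(x)
assert math.floor(2 + 1) == 3 and math.ceil(2) == 2  # they differ at x = 2
print("all identities hold")
```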
== Notation ==
The integral part or integer part of a number (partie entière in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula.
Carl Friedrich Gauss introduced the square bracket notation [x] in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ⌊x⌋ and ⌈x⌉. (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article.
In some sources, boldface or double brackets ⟦x⟧ are used for floor, and reversed brackets ⟧x⟦ or ]x[ for ceiling.
The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula
{x} = x − ⌊x⌋
For all x,
0 ≤ {x} < 1.
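A direct Python transcription of this definition makes the sawtooth behavior visible, including for negative arguments:

```python
import math

def frac(x):
    """Fractional part {x} = x - floor(x); always lies in [0, 1)."""
    return x - math.floor(x)

print(frac(2.4), frac(-2.4))  # note frac(-2.4) is 0.6, not 0.4
```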
These characters are provided in Unicode:
U+2308 ⌈ LEFT CEILING
U+2309 ⌉ RIGHT CEILING
U+230A ⌊ LEFT FLOOR
U+230B ⌋ RIGHT FLOOR
In the LaTeX typesetting system, these symbols can be specified with the \lceil, \rceil, \lfloor, and \rfloor commands in math mode. LaTeX has supported UTF-8 since 2018, so the Unicode characters can now be used directly. Larger versions are \left\lceil, \right\rceil, \left\lfloor, and \right\rfloor.
== Definition and properties ==
Given real numbers x and y, integers m and n and the set of integers Z, floor and ceiling may be defined by the equations
{\displaystyle \lfloor x\rfloor =\max\{m\in \mathbb {Z} \mid m\leq x\},}
{\displaystyle \lceil x\rceil =\min\{n\in \mathbb {Z} \mid n\geq x\}.}
Since there is exactly one integer in a half-open interval of length one, for any real number x, there are unique integers m and n satisfying the equation
{\displaystyle x-1<m\leq x\leq n<x+1.}
where ⌊x⌋ = m and ⌈x⌉ = n; this may also be taken as the definition of floor and ceiling.
=== Equivalences ===
These formulas can be used to simplify expressions involving floors and ceilings.
{\displaystyle {\begin{alignedat}{3}\lfloor x\rfloor &=m\ \ &&{\mbox{ if and only if }}&m&\leq x<m+1,\\\lceil x\rceil &=n&&{\mbox{ if and only if }}&\ \ n-1&<x\leq n,\\\lfloor x\rfloor &=m&&{\mbox{ if and only if }}&x-1&<m\leq x,\\\lceil x\rceil &=n&&{\mbox{ if and only if }}&x&\leq n<x+1.\end{alignedat}}}
In the language of order theory, the floor function is a residuated mapping, that is, part of a Galois connection: it is the upper adjoint of the function that embeds the integers into the reals.
{\displaystyle {\begin{aligned}x<n&\;\;{\mbox{ if and only if }}&\lfloor x\rfloor &<n,\\n<x&\;\;{\mbox{ if and only if }}&n&<\lceil x\rceil ,\\x\leq n&\;\;{\mbox{ if and only if }}&\lceil x\rceil &\leq n,\\n\leq x&\;\;{\mbox{ if and only if }}&n&\leq \lfloor x\rfloor .\end{aligned}}}
These formulas show how adding an integer n to the arguments affects the functions:
⌊x + n⌋ = ⌊x⌋ + n,
⌈x + n⌉ = ⌈x⌉ + n,
{x + n} = {x}.
The above are never true if n is not an integer; however, for every x and y, the following inequalities hold:
⌊x⌋ + ⌊y⌋ ≤ ⌊x + y⌋ ≤ ⌊x⌋ + ⌊y⌋ + 1,
⌈x⌉ + ⌈y⌉ − 1 ≤ ⌈x + y⌉ ≤ ⌈x⌉ + ⌈y⌉.
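The shift identities and the two-sided bounds above are easy to verify numerically. The sketch below uses exact rational arithmetic (`fractions.Fraction`) so that no floating-point rounding can blur the boundary cases; the sampling scheme is an arbitrary choice for illustration.

```python
import math
import random
from fractions import Fraction

random.seed(1)
for _ in range(500):
    x = Fraction(random.randint(-1000, 1000), random.randint(1, 99))
    y = Fraction(random.randint(-1000, 1000), random.randint(1, 99))
    n = random.randint(-5, 5)
    # integer shifts pass through floor and ceiling unchanged
    assert math.floor(x + n) == math.floor(x) + n
    assert math.ceil(x + n) == math.ceil(x) + n
    # two-sided bounds for sums
    assert math.floor(x) + math.floor(y) <= math.floor(x + y) <= math.floor(x) + math.floor(y) + 1
    assert math.ceil(x) + math.ceil(y) - 1 <= math.ceil(x + y) <= math.ceil(x) + math.ceil(y)
print("identities hold on all samples")
```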
=== Monotonicity ===
Both floor and ceiling functions are monotonically non-decreasing functions:
x₁ ≤ x₂ ⇒ ⌊x₁⌋ ≤ ⌊x₂⌋,
x₁ ≤ x₂ ⇒ ⌈x₁⌉ ≤ ⌈x₂⌉.
=== Relations among the functions ===
It is clear from the definitions that
⌊x⌋ ≤ ⌈x⌉,
with equality if and only if x is an integer, i.e.
⌈x⌉ − ⌊x⌋ = 0 if x ∈ ℤ, and ⌈x⌉ − ⌊x⌋ = 1 if x ∉ ℤ.
In fact, for integers n, both floor and ceiling functions are the identity:
⌊n⌋ = ⌈n⌉ = n.
Negating the argument switches floor and ceiling and changes the sign:
⌊x⌋ + ⌈−x⌉ = 0,
−⌊x⌋ = ⌈−x⌉,
−⌈x⌉ = ⌊−x⌋,
and:
⌊x⌋ + ⌊−x⌋ = 0 if x ∈ ℤ, and −1 if x ∉ ℤ,
⌈x⌉ + ⌈−x⌉ = 0 if x ∈ ℤ, and 1 if x ∉ ℤ.
Negating the argument complements the fractional part:
{x} + {−x} = 0 if x ∈ ℤ, and 1 if x ∉ ℤ.
The floor, ceiling, and fractional part functions are idempotent:
⌊⌊x⌋⌋ = ⌊x⌋,  ⌈⌈x⌉⌉ = ⌈x⌉,  {{x}} = {x}.
The result of nested floor or ceiling functions is the innermost function:
⌊⌈x⌉⌋ = ⌈x⌉,  ⌈⌊x⌋⌉ = ⌊x⌋,
due to the identity property for integers.
=== Quotients ===
If m and n are integers and n ≠ 0,
0 ≤ {m/n} ≤ 1 − 1/|n|.
If n is positive
⌊(x + m)/n⌋ = ⌊(⌊x⌋ + m)/n⌋,
⌈(x + m)/n⌉ = ⌈(⌈x⌉ + m)/n⌉.
If m is positive
n = ⌈n/m⌉ + ⌈(n − 1)/m⌉ + ⋯ + ⌈(n − m + 1)/m⌉,
n = ⌊n/m⌋ + ⌊(n + 1)/m⌋ + ⋯ + ⌊(n + m − 1)/m⌋.
For m = 2 these imply
n = ⌊n/2⌋ + ⌈n/2⌉.
More generally, for positive m (See Hermite's identity)
⌈mx⌉ = ⌈x⌉ + ⌈x − 1/m⌉ + ⋯ + ⌈x − (m − 1)/m⌉,
⌊mx⌋ = ⌊x⌋ + ⌊x + 1/m⌋ + ⋯ + ⌊x + (m − 1)/m⌋.
The following can be used to convert floors to ceilings and vice versa (with m being positive)
⌈n/m⌉ = ⌊(n + m − 1)/m⌋ = ⌊(n − 1)/m⌋ + 1,
⌊n/m⌋ = ⌈(n − m + 1)/m⌉ = ⌈(n + 1)/m⌉ − 1.
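The floor/ceiling conversion identities are the basis of the familiar integer ceiling-division trick `(n + m - 1) // m`. A small Python check (the helper names are illustrative):

```python
import math
from fractions import Fraction

def ceil_div(n, m):
    # ceiling of n/m for integer n and positive integer m, floor division only
    return (n + m - 1) // m

def floor_div_via_ceil(n, m):
    # floor of n/m expressed through a ceiling, per the identity above
    return math.ceil(Fraction(n - m + 1, m))

for n in range(-20, 21):
    for m in range(1, 8):
        assert ceil_div(n, m) == math.ceil(Fraction(n, m))
        assert ceil_div(n, m) == (n - 1) // m + 1
        assert floor_div_via_ceil(n, m) == n // m
```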
For all m and n strictly positive integers:
∑_{k=1}^{n−1} ⌊km/n⌋ = ((m − 1)(n − 1) + gcd(m, n) − 1) / 2,
which, for positive and coprime m and n, reduces to
∑_{k=1}^{n−1} ⌊km/n⌋ = (m − 1)(n − 1)/2,
and similarly for the ceiling and fractional part functions (still for positive and coprime m and n),
∑_{k=1}^{n−1} ⌈km/n⌉ = (m + 1)(n − 1)/2,
∑_{k=1}^{n−1} {km/n} = (n − 1)/2.
Since the right-hand side of the general case is symmetrical in m and n, this implies that
⌊m/n⌋ + ⌊2m/n⌋ + ⋯ + ⌊(n − 1)m/n⌋ = ⌊n/m⌋ + ⌊2n/m⌋ + ⋯ + ⌊(m − 1)n/m⌋.
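The closed form with the gcd term, and the reciprocity that follows from its symmetry, can be verified by brute force for small arguments; a minimal Python sketch:

```python
from math import gcd

def floor_sum(m, n):
    # sum of floor(k*m/n) for k = 1 .. n-1
    return sum(k * m // n for k in range(1, n))

for m in range(1, 20):
    for n in range(1, 20):
        assert 2 * floor_sum(m, n) == (m - 1) * (n - 1) + gcd(m, n) - 1
        assert floor_sum(m, n) == floor_sum(n, m)   # reciprocity from symmetry
```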
More generally, if m and n are positive,
⌊x/n⌋ + ⌊(m + x)/n⌋ + ⌊(2m + x)/n⌋ + ⋯ + ⌊((n − 1)m + x)/n⌋
= ⌊x/m⌋ + ⌊(n + x)/m⌋ + ⌊(2n + x)/m⌋ + ⋯ + ⌊((m − 1)n + x)/m⌋.
This is sometimes called a reciprocity law.
Division by positive integers gives rise to an interesting and sometimes useful property. Assuming
m, n > 0,
m ≤ ⌊x/n⌋ ⟺ n ≤ ⌊x/m⌋ ⟺ n ≤ ⌊x⌋/m.
Similarly,
m ≥ ⌈x/n⌉ ⟺ n ≥ ⌈x/m⌉ ⟺ n ≥ ⌈x⌉/m.
Indeed,
m ≤ ⌊x/n⌋ ⟹ m ≤ x/n ⟹ n ≤ x/m ⟹ n ≤ ⌊x/m⌋ ⟹ … ⟹ m ≤ ⌊x/n⌋,
keeping in mind that
⌊x/n⌋ = ⌊⌊x⌋/n⌋.
The second equivalence involving the ceiling function can be proved similarly.
=== Nested divisions ===
For a positive integer n, and arbitrary real numbers m and x:
⌊⌊x/m⌋ / n⌋ = ⌊x/(mn)⌋,
⌈⌈x/m⌉ / n⌉ = ⌈x/(mn)⌉.
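These nested-division identities can be spot-checked with exact rationals. The sketch below samples rationals x = num/7 and positive integer divisors m and n; the sample set is an arbitrary illustration.

```python
import math
from fractions import Fraction

for num in range(-50, 51):
    x = Fraction(num, 7)              # sample rationals, including negatives
    for m in range(1, 6):
        for n in range(1, 6):
            assert math.floor(Fraction(math.floor(x / m), n)) == math.floor(x / (m * n))
            assert math.ceil(Fraction(math.ceil(x / m), n)) == math.ceil(x / (m * n))
```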
=== Continuity and series expansions ===
None of the functions discussed in this article are continuous, but all are piecewise linear: the functions
⌊x⌋, ⌈x⌉, and {x} have discontinuities at the integers.
⌊x⌋ is upper semi-continuous, and ⌈x⌉ and {x} are lower semi-continuous.
Since none of the functions discussed in this article are continuous, none of them have a power series expansion. Since floor and ceiling are not periodic, they do not have uniformly convergent Fourier series expansions. The fractional part function has Fourier series expansion
{x} = 1/2 − (1/π) ∑_{k=1}^{∞} sin(2πkx)/k
for x not an integer.
At points of discontinuity, a Fourier series converges to the average of its limits on the left and the right, unlike the floor, ceiling and fractional part functions themselves: at integer x the series above converges to 1/2, rather than to {x} = 0. At points of continuity the series converges to the true value.
Using the formula ⌊x⌋ = x − {x} gives
⌊x⌋ = x − 1/2 + (1/π) ∑_{k=1}^{∞} sin(2πkx)/k
for x not an integer.
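A truncated version of the fractional-part series converges quickly away from the integers, and returns the midpoint 1/2 at them; an illustrative numerical check in Python (the term count is an arbitrary choice):

```python
import math

def frac_fourier(x, terms):
    # partial Fourier sum for the fractional part {x}
    s = sum(math.sin(2 * math.pi * k * x) / k for k in range(1, terms + 1))
    return 0.5 - s / math.pi

assert abs(frac_fourier(0.3, 20000) - 0.3) < 1e-3   # non-integer x: converges to {x}
assert abs(frac_fourier(1.0, 20000) - 0.5) < 1e-6   # integer x: midpoint, not {x} = 0
```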
== Applications ==
=== Mod operator ===
For an integer x and a positive integer y, the modulo operation, denoted by x mod y, gives the value of the remainder when x is divided by y. This definition can be extended to real x and y, y ≠ 0, by the formula
x mod y = x − y ⌊x/y⌋.
Then it follows from the definition of floor function that this extended operation satisfies many natural properties. Notably, x mod y is always between 0 and y, i.e.,
if y is positive,
0 ≤ x mod y < y,
and if y is negative,
0 ≥ x mod y > y.
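A direct Python translation of the extended definition makes the sign behavior concrete (the result takes the sign of y); `mod` is a hypothetical helper name for this sketch:

```python
import math

def mod(x, y):
    # extended modulo via the floor identity; y may be negative or non-integer
    return x - y * math.floor(x / y)

assert mod(7, 3) == 1
assert mod(-7, 3) == 2            # result lies in [0, y) for positive y
assert mod(7, -3) == -2           # and in (y, 0] for negative y
assert math.isclose(mod(7.5, 2), 1.5)
for x in range(-20, 21):
    for y in (3, -3, 5, -5):
        r = mod(x, y)
        assert (0 <= r < y) if y > 0 else (y < r <= 0)
```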
=== Quadratic reciprocity ===
Gauss's third proof of quadratic reciprocity, as modified by Eisenstein, has two basic steps.
Let p and q be distinct positive odd prime numbers, and let
m = (p − 1)/2,  n = (q − 1)/2.
First, Gauss's lemma is used to show that the Legendre symbols are given by
(q/p) = (−1)^(⌊q/p⌋ + ⌊2q/p⌋ + ⋯ + ⌊mq/p⌋),
(p/q) = (−1)^(⌊p/q⌋ + ⌊2p/q⌋ + ⋯ + ⌊np/q⌋).
The second step is to use a geometric argument to show that
⌊q/p⌋ + ⌊2q/p⌋ + ⋯ + ⌊mq/p⌋ + ⌊p/q⌋ + ⌊2p/q⌋ + ⋯ + ⌊np/q⌋ = mn.
Combining these formulas gives quadratic reciprocity in the form
(p/q)(q/p) = (−1)^(mn) = (−1)^(((p−1)/2)((q−1)/2)).
There are formulas that use floor to express the quadratic character of small numbers mod odd primes p:
(2/p) = (−1)^⌊(p+1)/4⌋,
(3/p) = (−1)^⌊(p+1)/6⌋.
=== Rounding ===
For an arbitrary real number x, rounding x to the nearest integer with tie breaking towards positive infinity is given by
rpi(x) = ⌊x + 1/2⌋ = ⌈⌊2x⌋/2⌉;
rounding towards negative infinity is given as
rni(x) = ⌈x − 1/2⌉ = ⌊⌈2x⌉/2⌋.
If tie-breaking is away from 0, then the rounding function is
ri(x) = sgn(x) ⌊|x| + 1/2⌋ (where sgn is the sign function), and rounding towards even can be expressed with the more cumbersome
⌊x⌉ = ⌊x + 1/2⌋ + ⌈(2x − 1)/4⌉ − ⌊(2x − 1)/4⌋ − 1,
which is the above expression for rounding towards positive infinity, rpi(x), minus an integrality indicator for (2x − 1)/4.
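The four rounding rules can be implemented verbatim from these formulas. The sketch below uses exact rationals so that ties like 2.5 are represented exactly; the function names are made up for illustration.

```python
import math
from fractions import Fraction

def rpi(x):    # ties toward +infinity
    return math.floor(x + Fraction(1, 2))

def rni(x):    # ties toward -infinity
    return math.ceil(x - Fraction(1, 2))

def ri(x):     # ties away from zero
    s = (x > 0) - (x < 0)
    return s * math.floor(abs(x) + Fraction(1, 2))

def reven(x):  # ties to even, via the formula in the text
    t = (2 * x - 1) / 4
    return math.floor(x + Fraction(1, 2)) + math.ceil(t) - math.floor(t) - 1

x = Fraction(5, 2)                      # 2.5 is a tie
assert (rpi(x), rni(x), ri(x), reven(x)) == (3, 2, 3, 2)
x = Fraction(-5, 2)                     # -2.5
assert (rpi(x), rni(x), ri(x), reven(x)) == (-2, -3, -3, -2)
assert reven(Fraction(7, 2)) == 4       # 3.5 rounds to the even integer 4
```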
Rounding a real number x to the nearest integer value forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ can be expressed as
Q(x) = Δ · ⌊x/Δ + 1/2⌋.
=== Number of digits ===
The number of digits in base b of a positive integer k is
⌊log_b k⌋ + 1 = ⌈log_b (k + 1)⌉.
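In Python the digit-count identity can be checked against string length. Note that floating-point logarithms can misbehave for very large k or for `math.log(k, b)` with a generic base, so this check sticks to `math.log10` over a modest range:

```python
import math

def num_digits(k):
    # decimal digits of a positive integer k: floor(log10 k) + 1
    return math.floor(math.log10(k)) + 1

for k in range(1, 100000):
    assert num_digits(k) == len(str(k))
    assert num_digits(k) == math.ceil(math.log10(k + 1))
```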
=== Number of strings without repeated characters ===
The number of possible strings of arbitrary length that don't use any character twice is given by
(n)₀ + ⋯ + (n)ₙ = ⌊e · n!⌋
where:
n > 0 is the number of letters in the alphabet (e.g., 26 in English)
the falling factorial (n)ₖ = n(n − 1)⋯(n − k + 1) denotes the number of strings of length k that don't use any character twice.
n! denotes the factorial of n
e = 2.718... is Euler's number
For n = 26, this comes out to 1096259850353149530222034277.
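The count can be reproduced with exact integer arithmetic, since ∑ₖ (n)ₖ = ∑ⱼ n!/j! and the remaining tail of the series for e contributes less than 1; a short Python check:

```python
from math import factorial

def count_strings(n):
    # (n)_0 + (n)_1 + ... + (n)_n, evaluated with exact integers;
    # equals floor(e * n!) because the tail of e's series adds less than 1
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

assert count_strings(3) == 16     # '', 3 singles, 6 pairs, 6 permutations
assert count_strings(26) == 1096259850353149530222034277
```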
=== Factors of factorials ===
Let n be a positive integer and p a positive prime number. The exponent of the highest power of p that divides n! is given by a version of Legendre's formula
⌊n/p⌋ + ⌊n/p²⌋ + ⌊n/p³⌋ + ⋯ = (n − ∑ₖ aₖ) / (p − 1)
where n = ∑ₖ aₖ pᵏ is the way of writing n in base p. This is a finite sum, since the floors are zero when pᵏ > n.
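Legendre's formula and its base-p digit-sum form are easy to cross-check against the power of p actually dividing n!; an illustrative Python sketch:

```python
from math import factorial

def legendre(n, p):
    # exponent of prime p in n!: sum of floor(n / p^k)
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def digit_sum(n, p):
    # sum of the base-p digits of n
    s = 0
    while n:
        s += n % p
        n //= p
    return s

for n in (1, 5, 10, 26, 100):
    for p in (2, 3, 5, 7):
        assert legendre(n, p) == (n - digit_sum(n, p)) // (p - 1)
        f, e = factorial(n), 0          # cross-check against n! itself
        while f % p == 0:
            f //= p
            e += 1
        assert e == legendre(n, p)
```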
=== Beatty sequence ===
The Beatty sequence shows how every positive irrational number gives rise to a partition of the natural numbers into two sequences via the floor function.
=== Euler's constant (γ) ===
There are formulas for Euler's constant γ = 0.57721 56649 ... that involve the floor and ceiling, e.g.
γ = ∫₁^∞ (1/⌊x⌋ − 1/x) dx,
γ = lim_{n→∞} (1/n) ∑_{k=1}^{n} (⌈n/k⌉ − n/k),
and
γ = ∑_{k=2}^{∞} (−1)ᵏ ⌊log₂ k⌋ / k = 1/2 − 1/3 + 2(1/4 − 1/5 + 1/6 − 1/7) + 3(1/8 − ⋯ − 1/15) + ⋯
=== Riemann zeta function (ζ) ===
The fractional part function also shows up in integral representations of the Riemann zeta function. It is straightforward to prove (using integration by parts) that if
φ(x) is any function with a continuous derivative in the closed interval [a, b],
∑_{a<n≤b} φ(n) = ∫_a^b φ(x) dx + ∫_a^b ({x} − 1/2) φ′(x) dx + ({a} − 1/2) φ(a) − ({b} − 1/2) φ(b).
Letting φ(n) = n^(−s) for real part of s greater than 1 and letting a and b be integers, and letting b approach infinity gives
ζ(s) = s ∫₁^∞ (1/2 − {x}) / x^(s+1) dx + 1/(s − 1) + 1/2.
This formula is valid for all s with real part greater than −1, (except s = 1, where there is a pole) and combined with the Fourier expansion for {x} can be used to extend the zeta function to the entire complex plane and to prove its functional equation.
For s = σ + it in the critical strip 0 < σ < 1,
ζ(s) = s ∫_{−∞}^{∞} e^(−σω) (⌊e^ω⌋ − e^ω) e^(−itω) dω.
In 1947 van der Pol used this representation to construct an analogue computer for finding roots of the zeta function.
=== Formulas for prime numbers ===
The floor function appears in several formulas characterizing prime numbers. For example, since
⌊n/m⌋ − ⌊(n − 1)/m⌋ = 1 if m divides n, and 0 otherwise,
it follows that a positive integer n is a prime if and only if
∑_{m=1}^{∞} (⌊n/m⌋ − ⌊(n − 1)/m⌋) = 2.
One may also give formulas for producing the prime numbers. For example, let pn be the n-th prime, and for any integer r > 1, define the real number α by the sum
α = ∑_{m=1}^{∞} pₘ r^(−m²).
Then
pₙ = ⌊r^(n²) α⌋ − r^(2n−1) ⌊r^((n−1)²) α⌋.
A similar result is that there is a number θ = 1.3064... (Mills' constant) with the property that
⌊θ³⌋, ⌊θ⁹⌋, ⌊θ²⁷⌋, …
are all prime.
There is also a number ω = 1.9287800... with the property that
⌊2^ω⌋, ⌊2^(2^ω)⌋, ⌊2^(2^(2^ω))⌋, …
are all prime.
Let π(x) be the number of primes less than or equal to x. It is a straightforward deduction from Wilson's theorem that
π(n) = ∑_{j=2}^{n} ⌊((j − 1)! + 1)/j − ⌊(j − 1)!/j⌋⌋.
Also, if n ≥ 2,
π(n) = ∑_{j=2}^{n} ⌊1 / ∑_{k=2}^{j} ⌊⌊j/k⌋ · k/j⌋⌋.
None of the formulas in this section are of any practical use.
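Still, the Wilson-based formula for π(n) is straightforward to evaluate with exact integer arithmetic, which makes the impracticality concrete: the factorials grow enormously. A Python sketch:

```python
from math import factorial

def pi_wilson(n):
    # pi(n) via Wilson's theorem: the summand is 1 exactly when j is prime
    total = 0
    for j in range(2, n + 1):
        fj = factorial(j - 1)
        total += (fj + 1) // j - fj // j   # exact integer arithmetic
    return total

assert [pi_wilson(n) for n in (2, 10, 20, 30)] == [1, 4, 8, 10]
assert pi_wilson(100) == 25
```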
=== Solved problems ===
Ramanujan submitted these problems to the Journal of the Indian Mathematical Society.
If n is a positive integer, prove that
⌊n/3⌋ + ⌊(n + 2)/6⌋ + ⌊(n + 4)/6⌋ = ⌊n/2⌋ + ⌊(n + 3)/6⌋,
⌊1/2 + √(n + 1/2)⌋ = ⌊1/2 + √(n + 1/4)⌋,
⌊√n + √(n + 1)⌋ = ⌊√(4n + 2)⌋.
Some generalizations to the above floor function identities have been proven.
=== Unsolved problem ===
The study of Waring's problem has led to an unsolved problem:
Are there any positive integers k ≥ 6 such that
3ᵏ − 2ᵏ ⌊(3/2)ᵏ⌋ > 2ᵏ − ⌊(3/2)ᵏ⌋ − 2 ?
Mahler has proved there can only be a finite number of such k; none are known.
== Computer implementations ==
In most programming languages, the simplest method to convert a floating point number to an integer does not do floor or ceiling, but truncation. The reason for this is historical, as the first machines used ones' complement and truncation was simpler to implement (floor is simpler in two's complement). FORTRAN was defined to require this behavior and thus almost all processors implement conversion this way. Some consider this to be an unfortunate historical design decision that has led to bugs handling negative offsets and graphics on the negative side of the origin.
An arithmetic right-shift of a signed integer x by n is the same as ⌊x/2ⁿ⌋. Division by a power of 2 is often written as a right-shift, not for optimization as might be assumed, but because the floor of negative results is required. Assuming such shifts are "premature optimization" and replacing them with division can break software.
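Python illustrates the distinction directly: `int()` truncates toward zero, while `//` and an arithmetic right-shift both floor:

```python
import math

# int() truncates toward zero, which differs from floor for negatives
assert int(-3.5) == -3
assert math.floor(-3.5) == -4

# Python's // is floor division, and arithmetic right-shift floors as well
assert -7 // 2 == -4
assert -7 >> 1 == -4          # floor(-7 / 2^1), not truncation
assert int(-7 / 2) == -3      # truncation gives a different answer
```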
Many programming languages (including C, C++, C#, Java, Julia, PHP, R, and Python) provide standard functions for floor and ceiling, usually called floor and ceil, or less commonly ceiling. The language APL uses ⌊x for floor. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses <. for floor and >. for ceiling.
ALGOL uses entier for floor.
In Microsoft Excel the function INT rounds down rather than toward zero, while FLOOR rounds toward zero, the opposite of what "int" and "floor" do in other languages. Since 2010 FLOOR has been changed to raise an error if the number is negative. In the OpenDocument file format, as used by OpenOffice.org, LibreOffice and others, INT and FLOOR both do floor, and FLOOR has a third argument to reproduce Excel's earlier behavior.
== See also ==
Bracket (mathematics)
Integer-valued function
Step function
Modulo operation
== Citations ==
== References ==
J.W.S. Cassels (1957), An introduction to Diophantine approximation, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 45, Cambridge University Press
Crandall, Richard; Pomerance, Carl (2001), Prime Numbers: A Computational Perspective, New York: Springer, ISBN 0-387-94777-9
Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994), Concrete Mathematics, Reading Ma.: Addison-Wesley, ISBN 0-201-55802-5
Hardy, G. H.; Wright, E. M. (1980), An Introduction to the Theory of Numbers (Fifth edition), Oxford: Oxford University Press, ISBN 978-0-19-853171-5
Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM. ISBN 0-89871-420-6, p. 25
ISO/IEC. ISO/IEC 9899::1999(E): Programming languages — C (2nd ed), 1999; Section 6.3.1.4, p. 43.
Iverson, Kenneth E. (1962), A Programming Language, Wiley
Lemmermeyer, Franz (2000), Reciprocity Laws: from Euler to Eisenstein, Berlin: Springer, ISBN 3-540-66957-4
Ramanujan, Srinivasa (2000), Collected Papers, Providence RI: AMS / Chelsea, ISBN 978-0-8218-2076-6
Ribenboim, Paulo (1996), The New Book of Prime Number Records, New York: Springer, ISBN 0-387-94457-5
Michael Sullivan. Precalculus, 8th edition, p. 86
Titchmarsh, Edward Charles; Heath-Brown, David Rodney ("Roger") (1986), The Theory of the Riemann Zeta-function (2nd ed.), Oxford: Oxford U. P., ISBN 0-19-853369-1
== External links ==
"Floor function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Štefan Porubský, "Integer rounding functions", Interactive Information Portal for Algorithmic Mathematics, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic, retrieved 24 October 2008
Weisstein, Eric W. "Floor Function". MathWorld.
Weisstein, Eric W. "Ceiling Function". MathWorld.
Oppermann's conjecture is an unsolved problem in mathematics on the distribution of prime numbers. It is closely related to but stronger than Legendre's conjecture, Andrica's conjecture, and Brocard's conjecture. It is named after Danish mathematician Ludvig Oppermann, who announced it in an unpublished lecture in March 1877.
== Statement ==
The conjecture states that, for every integer n > 1, there is at least one prime number between n(n − 1) and n², and at least another prime between n² and n(n + 1).
It can also be phrased equivalently as stating that the prime-counting function must take unequal values at the endpoints of each range. That is:
π(n² − n) < π(n²) < π(n² + n)

for every n > 1, with π(x) being the number of prime numbers less than or equal to x.
The end points of these two ranges are a square between two pronic numbers, with each of the pronic numbers being twice a triangular number. The sum of the pair of triangular numbers is the square.
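The conjecture is easy to verify computationally for small n; the sketch below checks both halves of the statement up to n = 200 with naive trial-division primality (illustrative only, and of course not a proof):

```python
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def has_prime_between(a, b):
    # at least one prime strictly between a and b
    return any(is_prime(k) for k in range(a + 1, b))

for n in range(2, 200):
    assert has_prime_between(n * (n - 1), n * n)
    assert has_prime_between(n * n, n * (n + 1))
```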
== Consequences ==
If the conjecture is true, then the gap size would be on the order of
gₙ < √pₙ.
This also means there would be at least two primes between n² and (n + 1)² (one in the range from n² to n(n + 1) and the second in the range from n(n + 1) to (n + 1)²), strengthening Legendre's conjecture that there is at least one prime in this range. Because there is at least one non-prime between any two odd primes it would also imply Brocard's conjecture that there are at least four primes between the squares of consecutive odd primes. Additionally, it would imply that the largest possible gaps between two consecutive prime numbers could be at most proportional to twice the square root of the numbers, as Andrica's conjecture states.
The conjecture also implies that at least one prime can be found in every quarter revolution of the Ulam spiral.
== See also ==
Bertrand's postulate
Firoozbakht's conjecture
Prime number theorem
== References ==
In mathematics, the explicit formulae for L-functions are relations between sums over the complex number zeroes of an L-function and sums over prime powers, introduced by Riemann (1859) for the Riemann zeta function. Such explicit formulae have been applied also to questions on bounding the discriminant of an algebraic number field, and the conductor of a number field.
== Riemann's explicit formula ==
In his 1859 paper "On the Number of Primes Less Than a Given Magnitude" Riemann sketched an explicit formula (it was not fully proven until 1895 by von Mangoldt, see below) for the normalized prime-counting function π0(x) which is related to the prime-counting function π(x) by
π₀(x) = (1/2) lim_{h→0} [π(x + h) + π(x − h)],
which takes the arithmetic mean of the limit from the left and the limit from the right at discontinuities. His formula was given in terms of the related function
f(x) = π₀(x) + (1/2) π₀(x^(1/2)) + (1/3) π₀(x^(1/3)) + ⋯
in which a prime power pn counts as 1⁄n of a prime. The normalized prime-counting function can be recovered from this function by
π₀(x) = ∑ₙ (μ(n)/n) f(x^(1/n)) = f(x) − (1/2) f(x^(1/2)) − (1/3) f(x^(1/3)) − (1/5) f(x^(1/5)) + (1/6) f(x^(1/6)) − ⋯,
where μ(n) is the Möbius function. Riemann's formula is then
f(x) = li(x) − ∑_ρ li(x^ρ) − log 2 + ∫ₓ^∞ dt / (t(t² − 1) log t)
involving a sum over the non-trivial zeros ρ of the Riemann zeta function. The sum is not absolutely convergent, but may be evaluated by taking the zeros in order of the absolute value of their imaginary part. The function li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral
li(x) = ∫₀^x dt / log t.
The terms li(xρ) involving the zeros of the zeta function need some care in their definition as li has branch points at 0 and 1, and are defined by analytic continuation in the complex variable ρ in the region x > 1 and Re(ρ) > 0. The other terms also correspond to zeros: The dominant term li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. (For graphs of the sums of the first few terms of this series see Zagier 1977.)
The first rigorous proof of the aforementioned formula was given by von Mangoldt in 1895: it started with a proof of the following formula for the Chebyshev's function ψ
ψ₀(x) = (1/2πi) ∫_{σ−i∞}^{σ+i∞} (−ζ′(s)/ζ(s)) (x^s/s) ds = x − ∑_ρ x^ρ/ρ − log(2π) − (1/2) log(1 − x^(−2))
where the LHS is an inverse Mellin transform with
{\displaystyle \sigma >1\,,\quad \psi (x)=\sum _{p^{k}\leq x}\log p\,,\quad {\text{and}}\quad \psi _{0}(x)={\frac {1}{2}}\lim _{h\to 0}(\psi (x+h)+\psi (x-h))}
and the RHS is obtained from the residue theorem, and then converting it into the formula that Riemann himself actually sketched.
This series is also conditionally convergent and the sum over zeroes should again be taken in increasing order of imaginary part:
{\displaystyle \sum _{\rho }{\frac {x^{\rho }}{\rho }}=\lim _{T\to \infty }S(x,T)}
where
{\displaystyle S(x,T)=\sum _{\rho :\left|\Im \rho \right|\leq T}{\frac {x^{\rho }}{\rho }}\,.}
The error involved in truncating the sum to S(x,T) is always smaller than ln(x) in absolute value, and when divided by the natural logarithm of x, has absolute value smaller than x⁄T divided by the distance from x to the nearest prime power.
== Weil's explicit formula ==
There are several slightly different ways to state the explicit formula. André Weil's form of the explicit formula states
{\displaystyle {\begin{aligned}&\Phi (1)+\Phi (0)-\sum _{\rho }\Phi (\rho )\\&=\sum _{p,m}{\frac {\log(p)}{p^{m/2}}}{\Big (}F(\log(p^{m}))+F(-\log(p^{m})){\Big )}-{\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi (t)\Psi (t)\,dt\end{aligned}}}
where
ρ runs over the non-trivial zeros of the zeta function
p runs over positive primes
m runs over positive integers
F is a smooth function all of whose derivatives are rapidly decreasing
φ is a Fourier transform of F:
{\displaystyle \varphi (t)=\int _{-\infty }^{\infty }F(x)e^{itx}\,dx}
{\displaystyle \Phi (1/2+it)=\varphi (t)}
{\displaystyle \Psi (t)=-\log(\pi )+\operatorname {Re} (\psi (1/4+it/2))}
where ψ is the digamma function Γ′/Γ.
Roughly speaking, the explicit formula says the Fourier transform of the zeros of the zeta function is the set of prime powers plus some elementary factors. Once this is said, the formula comes from the fact that the Fourier transform is a unitary operator, so that a scalar product in time domain is equal to the scalar product of the Fourier transforms in the frequency domain.
The terms in the formula arise in the following way.
The terms on the right hand side come from the logarithmic derivative of
{\displaystyle \zeta ^{*}(s)=\Gamma (s/2)\pi ^{-s/2}\prod _{p}{\frac {1}{1-p^{-s}}}}
with the terms corresponding to the prime p coming from the Euler factor of p, and the term at the end involving Ψ coming from the gamma factor (the Euler factor at infinity).
The left-hand side is a sum over all zeros of ζ* counted with multiplicities, so the poles at 0 and 1 are counted as zeros of order −1.
Weil's explicit formula can be understood as follows. The goal is to be able to write:
{\displaystyle {\frac {d}{du}}\left[\sum _{n\leq e^{|u|}}\Lambda (n)+{\frac {1}{2}}\ln(1-e^{-2|u|})\right]=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right]+{\frac {1}{2}}{\frac {d\ln(1-e^{-2|u|})}{du}}=e^{u}-\sum _{\rho }e^{\rho u},}
where Λ is the von Mangoldt function.
Thus the Fourier transform of the non-trivial zeros equals the symmetrized prime powers plus a minor term. Of course, the sums involved are not convergent, but the trick is to use the unitary property of the Fourier transform, namely that it preserves the scalar product:
{\displaystyle \int _{-\infty }^{\infty }f(u)g^{*}(u)\,du=\int _{-\infty }^{\infty }F(t)G^{*}(t)\,dt}
where F and G are the Fourier transforms of f and g.
At first glance, this seems to be a formula for functions only, but in many cases it also works when g is a distribution. Hence, by setting
{\displaystyle g(u)=\sum _{n=1}^{\infty }\Lambda (n)\left[\delta (u+\ln n)+\delta (u-\ln n)\right],}
where δ(u) is the Dirac delta, and carefully choosing a function f and its Fourier transform, we get the formula above.
== Generalizations ==
The Riemann zeta function can be replaced by a Dirichlet L-function of a Dirichlet character χ. The sum over prime powers then gets extra factors of χ(p^m), and the terms Φ(1) and Φ(0) disappear because the L-series has no poles.
More generally, the Riemann zeta function and the L-series can be replaced by the Dedekind zeta function of an algebraic number field or a Hecke L-series. The sum over primes then gets replaced by a sum over prime ideals.
== Applications ==
Riemann's original use of the explicit formula was to give an exact formula for the number of primes less than a given number. To do this, take F(log(y)) to be y^{1/2}/log(y) for 0 ≤ y ≤ x and 0 elsewhere. Then the main term of the sum on the right is the number of primes less than x. The main term on the left is Φ(1), which turns out to be the dominant term of the prime number theorem, and the main correction is the sum over non-trivial zeros of the zeta function. (There is a minor technical problem in using this case, in that the function F does not satisfy the smoothness condition.)
== Hilbert–Pólya conjecture ==
According to the Hilbert–Pólya conjecture, the complex zeroes ρ should be the eigenvalues of some linear operator T. The sum over the zeros of the explicit formula is then (at least formally) given by a trace:
{\displaystyle \sum _{\rho }F(\rho )=\operatorname {Tr} (F({\widehat {T}})).\!}
Development of the explicit formulae for a wide class of L-functions was given by Weil (1952), who first extended the idea to local zeta-functions, and formulated a version of a generalized Riemann hypothesis in this setting, as a positivity statement for a generalized function on a topological group. More recent work by Alain Connes has gone much further into the functional-analytic background, providing a trace formula the validity of which is equivalent to such a generalized Riemann hypothesis. A slightly different point of view was given by Meyer (2005), who derived the explicit formula of Weil via harmonic analysis on adelic spaces.
== See also ==
Selberg trace formula
Selberg zeta function
== Footnotes ==
== References ==
Ingham, A.E. (1990) [1932], The Distribution of Prime Numbers, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 30, reissued with a foreword by R. C. Vaughan (2nd ed.), Cambridge University Press, ISBN 978-0-521-39789-6, MR 1074573, Zbl 0715.11045
Lang, Serge (1994), Algebraic number theory, Graduate Texts in Mathematics, vol. 110 (2nd ed.), New York, NY: Springer-Verlag, ISBN 0-387-94225-4, Zbl 0811.11001
Riemann, Bernhard (1859), "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse", Monatsberichte der Berliner Akademie
Weil, André (1952), "Sur les "formules explicites" de la théorie des nombres premiers" [On "explicit formulas" in the theory of prime numbers], Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] (in French), Tome Supplémentaire: 252–265, MR 0053152, Zbl 0049.03205
von Mangoldt, Hans (1895), "Zu Riemanns Abhandlung "Über die Anzahl der Primzahlen unter einer gegebenen Grösse"" [On Riemann's paper "The number of prime numbers less than a given magnitude"], Journal für die reine und angewandte Mathematik (in German), 114: 255–305, ISSN 0075-4102, JFM 26.0215.03, MR 1580379
Meyer, Ralf (2005), "On a representation of the idele class group related to primes and zeros of L-functions", Duke Math. J., 127 (3): 519–595, arXiv:math/0311468, doi:10.1215/s0012-7094-04-12734-4, ISSN 0012-7094, MR 2132868, S2CID 119176169, Zbl 1079.11044
Zagier, Don (1977), "The first 50 million prime numbers", The Mathematical Intelligencer, 1 (S2): 7–19, doi:10.1007/bf03351556, S2CID 37866599
== Further reading ==
Edwards, H.M. (1974), Riemann's zeta function, Pure and Applied Mathematics, vol. 58, New York-London: Academic Press, ISBN 0-12-232750-0, Zbl 0315.10035
Riesel, Hans (1994), Prime numbers and computer methods for factorization, Progress in Mathematics, vol. 126 (2nd ed.), Boston, MA: Birkhäuser, ISBN 0-8176-3743-5, Zbl 0821.11001 | Wikipedia/Explicit_formulae_(L-function) |
The Meissel–Lehmer algorithm (after Ernst Meissel and Derrick Henry Lehmer) is an algorithm that computes exact values of the prime-counting function.
== Description ==
The problem of counting the exact number of primes less than or equal to x, without actually listing them all, dates from Legendre. He observed from the Sieve of Eratosthenes that
{\displaystyle \pi (x)-\pi (x^{1/2})+1=\lfloor x\rfloor -\sum _{i}\lfloor x/p_{i}\rfloor +\sum _{i<j}\lfloor x/p_{i}p_{j}\rfloor -\ldots }
where ⌊x⌋ is the floor function, which denotes the greatest integer less than or equal to x and the pi run over all primes ≤ √x.
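Legendre's identity can be applied directly, computing the inclusion–exclusion recursively rather than enumerating subsets of primes. The following Python sketch is illustrative only (exponential in π(√x), so usable for small x; all function names are ours):

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def legendre_pi(x):
    """pi(x) via Legendre: pi(x) = phi(x, a) + a - 1 with a = pi(sqrt(x)),
    where phi(y, a) counts n <= y not divisible by any of the first a primes."""
    small = primes_upto(math.isqrt(x))

    def phi(y, a):
        if a == 0:
            return y
        # inclusion-exclusion step: subtract the multiples of the a-th prime
        return phi(y, a - 1) - phi(y // small[a - 1], a - 1)

    return phi(x, len(small)) + len(small) - 1

print(legendre_pi(100), legendre_pi(1000))  # 25 168
```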
Since the evaluation of this sum formula becomes more and more complex and confusing for large x, Meissel tried to simplify the counting of the numbers in the Sieve of Eratosthenes. He and Lehmer therefore introduced certain sieve functions, which are detailed below.
=== Key functions ===
Let p1, p2, …, pn be the first n primes. For a natural number a ≥ 1, define
{\displaystyle \varphi (x,a):=\left|\left\{n\leq x:p|n\implies p>p_{a}\right\}\right|,}
which counts natural numbers no greater than x with all prime factors greater than pa. Also define for a natural number k,
{\displaystyle P_{k}(x,a):=\left|\left\{n\leq x:n=q_{1}q_{2}\cdots q_{k},~{\text{with}}~q_{1},\ldots ,q_{k}>p_{a}\right\}\right|,}
which counts natural numbers no greater than x with exactly k prime factors, all greater than pa. With these, we have
{\displaystyle \varphi (x,a)=\sum _{k=0}^{\infty }P_{k}(x,a),}
where the sum only has finitely many nonzero terms, because Pk(x, a) = 0 when p_{a+1}^k > x. Using the fact that P0(x, a) = 1 and P1(x, a) = π(x) − a, we get
{\displaystyle \pi (x)=\varphi (x,a)+a-1-\sum _{k=2}^{\infty }P_{k}(x,a),}
which proves that one may compute π(x) by computing φ(x,a) and Pk(x, a) for k ≥ 2. This is what the Meissel–Lehmer algorithm does.
=== Formula for Pk(x, a) ===
For k = 2, we get the following formula for Pk(x, a):
{\displaystyle {\begin{aligned}P_{2}(x,a)&=\left|\left\{n:n\leq x,~n=p_{b}p_{c},~{\text{with}}~a<b\leq c\right\}\right|\\&=\sum _{b=a+1}^{\pi (x^{1/2})}\left|\left\{n:n\leq x,~n=p_{b}p_{c},~{\text{with}}~b\leq c\leq \pi \left({\frac {x}{p_{b}}}\right)\right\}\right|\\&=\sum _{b=a+1}^{\pi (x^{1/2})}\left(\pi \left({\frac {x}{p_{b}}}\right)-(b-1)\right)\\&={\binom {a}{2}}-{\binom {\pi (x^{1/2})}{2}}+\sum _{b=a+1}^{\pi (x^{1/2})}\pi \left({\frac {x}{p_{b}}}\right).\end{aligned}}}
For k ≥ 3, the identities for Pk(x, a) can be derived similarly.
=== Expanding φ(x, a) ===
With the starting condition
{\displaystyle \varphi (x,0)=\lfloor x\rfloor ,}
and the recurrence
{\displaystyle \varphi (x,a)=\varphi (x,a-1)-\varphi \left({\frac {x}{p_{a}}},a-1\right),}
each value for φ(x,a) can be calculated recursively.
=== Combining the terms ===
The only thing that remains to be done is evaluating φ(x,a) and Pk(x, a) for k ≥ 2, for certain values of x and a. This can be done by direct sieving and using the above formulas.
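Putting the pieces together, the following Python sketch (names are ours, not from the literature) chooses a = π(x^{1/3}), so that P_k(x, a) = 0 for k ≥ 3 and only the P2 closed form above is needed; for this illustration the inner π values are read off a plain sieve up to x^{2/3}:

```python
import bisect
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def meissel_pi(x):
    """pi(x) = phi(x, a) + a - 1 - P2(x, a), with a = pi(x^(1/3))."""
    if x < 2:
        return 0
    root3 = round(x ** (1 / 3))                 # integer cube root
    while root3 ** 3 > x:
        root3 -= 1
    while (root3 + 1) ** 3 <= x:
        root3 += 1
    small = primes_upto(math.isqrt(x))          # primes up to sqrt(x)
    a = bisect.bisect_right(small, root3)       # a = pi(x^(1/3))

    def phi(y, b):                              # Legendre's recurrence
        if b == 0:
            return y
        return phi(y, b - 1) - phi(y // small[b - 1], b - 1)

    if a == len(small):                         # no primes in (x^(1/3), sqrt(x)]
        return phi(x, a) + a - 1
    # P2(x, a) = sum_{b=a+1}^{pi(sqrt x)} ( pi(x / p_b) - (b - 1) );
    # every argument x / p_b is at most x / p_{a+1} <= x^(2/3), so a plain
    # sieve up to that bound suffices for this sketch.
    big = primes_upto(x // small[a])
    P2 = sum(
        bisect.bisect_right(big, x // small[b - 1]) - (b - 1)
        for b in range(a + 1, len(small) + 1)
    )
    return phi(x, a) + a - 1 - P2

print(meissel_pi(100), meissel_pi(1000))  # 25 168
```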
== History ==
Meissel already found that for k ≥ 3, Pk(x, a) = 0 if a = π(x^{1/3}). He used the resulting equation for calculations of π(x) for big values of x.
Meissel calculated π(x) for values of x up to 10^9, but he narrowly missed the correct result for the biggest value of x.
Using his method and an IBM 701, Lehmer was able to compute the correct value of π(10^9) and missed the correct value of π(10^10) by 1.
== Extended algorithm ==
Jeffrey Lagarias, Victor Miller and Andrew Odlyzko published a realisation of the algorithm which computes π(x) in time O(x^{2/3+ε}) and space O(x^{1/3+ε}) for any ε > 0. Upon setting a = π(x^{1/3}), the tree of φ(x, a) has O(x^{2/3}) leaf nodes.
This extended Meissel–Lehmer algorithm needs less computing time than the algorithm developed by Meissel and Lehmer, especially for big values of x.
Further improvements of the algorithm are given by M. Deleglise and J. Rivat in 1996.
== References == | Wikipedia/Meissel–Lehmer_algorithm |
In mathematics, the Chebyshev function is either a scalarising function (Tchebycheff function) or one of two related functions. The first Chebyshev function ϑ (x) or θ (x) is given by
{\displaystyle \vartheta (x)=\sum _{p\leq x}\log p}
where log denotes the natural logarithm, with the sum extending over all prime numbers p that are less than or equal to x.
The second Chebyshev function ψ (x) is defined similarly, with the sum extending over all prime powers not exceeding x
{\displaystyle \psi (x)=\sum _{k\in \mathbb {N} }\sum _{p^{k}\leq x}\log p=\sum _{n\leq x}\Lambda (n)=\sum _{p\leq x}\left\lfloor \log _{p}x\right\rfloor \log p,}
where Λ is the von Mangoldt function. The Chebyshev functions, especially the second one ψ (x), are often used in proofs related to prime numbers, because it is typically simpler to work with them than with the prime-counting function, π (x) (see the exact formula below.) Both Chebyshev functions are asymptotic to x, a statement equivalent to the prime number theorem.
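Both functions are easy to compute directly from their definitions for small x (a naive illustrative sketch, not an efficient method), and their ratio to x is already close to 1 at moderate sizes:

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def theta(x):
    """First Chebyshev function: sum of log p over primes p <= x."""
    return sum(math.log(p) for p in primes_upto(int(x)))

def psi(x):
    """Second Chebyshev function: sum of log p over prime powers p^k <= x."""
    total = 0.0
    for p in primes_upto(int(x)):
        pk = p
        while pk <= x:           # count log p once per power p, p^2, ... <= x
            total += math.log(p)
            pk *= p
    return total

x = 10**5
print(theta(x) / x, psi(x) / x)  # both tend to 1 as x grows
```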
Tchebycheff function, Chebyshev utility function, or weighted Tchebycheff scalarizing function is used when one has several functions to be minimized and one wants to "scalarize" them to a single function:
{\displaystyle f_{Tchb}(x,w)=\max _{i}w_{i}f_{i}(x).}
By minimizing this function for different values of w, one obtains every point on a Pareto front, even in the nonconvex parts. Often the functions to be minimized are not f_i but |f_i − z_i*| for some scalars z_i*. Then
{\displaystyle f_{Tchb}(x,w)=\max _{i}w_{i}|f_{i}(x)-z_{i}^{*}|.}
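To make the scalarisation concrete, here is a small illustrative Python sketch (helper names and the toy objectives are ours): sweeping the weight vector and minimising the weighted Tchebycheff function picks out different compromise points between the individual minimisers.

```python
def tchebycheff(fs, ws, x, z_star=None):
    """Weighted Tchebycheff scalarisation max_i w_i |f_i(x) - z_i*|;
    with z_star omitted this is just max_i w_i f_i(x)."""
    vals = [f(x) for f in fs]
    if z_star is not None:
        vals = [abs(v - z) for v, z in zip(vals, z_star)]
    return max(w * v for w, v in zip(ws, vals))

# Two toy objectives with minimisers at x = 0 and x = 2.
f1 = lambda x: x * x
f2 = lambda x: (x - 2) ** 2

grid = [i / 100 for i in range(201)]  # crude grid search over x in [0, 2]
for w1 in (0.2, 0.5, 0.8):
    best = min(grid, key=lambda x: tchebycheff([f1, f2], [w1, 1 - w1], x))
    print(w1, best)  # different weights select different Pareto points
```

With equal weights the minimiser sits at x = 1, where both weighted objectives balance; skewing the weights slides the solution toward one objective's minimiser.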
All three functions are named in honour of Pafnuty Chebyshev.
== Relationships ==
The second Chebyshev function can be seen to be related to the first by writing it as
{\displaystyle \psi (x)=\sum _{p\leq x}k\log p}
where k is the unique integer such that p^k ≤ x and x < p^{k+1}. The values of k are given in OEIS: A206722. A more direct relationship is given by
{\displaystyle \psi (x)=\sum _{n=1}^{\infty }\vartheta {\big (}x^{\frac {1}{n}}{\big )}.}
This last sum has only a finite number of non-vanishing terms, as
{\displaystyle \vartheta {\big (}x^{\frac {1}{n}}{\big )}=0\quad {\text{for}}\quad n>\log _{2}x={\frac {\log x}{\log 2}}.}
The second Chebyshev function is the logarithm of the least common multiple of the integers from 1 to n.
{\displaystyle \operatorname {lcm} (1,2,\dots ,n)=e^{\psi (n)}.}
Values of lcm(1, 2, ..., n) for the integer variable n are given at OEIS: A003418.
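The identity lcm(1, 2, ..., n) = e^{ψ(n)} can be checked in exact integer arithmetic, since e^{ψ(n)} is the product over primes p ≤ n of the largest power of p not exceeding n. A small illustrative sketch (function name is ours):

```python
import math
from functools import reduce

def exp_psi(n):
    """e^(psi(n)) as an exact integer: the product over primes p <= n of
    the largest power of p that does not exceed n."""
    result = 1
    for p in range(2, n + 1):
        if all(p % q for q in range(2, math.isqrt(p) + 1)):  # trial-division primality
            pk = p
            while pk * p <= n:
                pk *= p
            result *= pk
    return result

print(exp_psi(10))  # 2520 == lcm(1, ..., 10)
assert all(exp_psi(n) == reduce(math.lcm, range(1, n + 1), 1) for n in range(1, 60))
```

(`math.lcm` requires Python 3.9 or later.)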
== Relationships between ψ(x)/x and ϑ(x)/x ==
The following theorem relates the two quotients ψ(x)/x and ϑ(x)/x.
Theorem: For x > 0, we have
{\displaystyle 0\leq {\frac {\psi (x)}{x}}-{\frac {\vartheta (x)}{x}}\leq {\frac {(\log x)^{2}}{2{\sqrt {x}}\log 2}}.}
This inequality implies that
{\displaystyle \lim _{x\to \infty }\!\left({\frac {\psi (x)}{x}}-{\frac {\vartheta (x)}{x}}\right)\!=0.}
In other words, if one of ψ(x)/x or ϑ(x)/x tends to a limit then so does the other, and the two limits are equal.
Proof: Since {\displaystyle \psi (x)=\sum _{n\leq \log _{2}x}\vartheta (x^{1/n})}, we find that
, we find that
{\displaystyle 0\leq \psi (x)-\vartheta (x)=\sum _{2\leq n\leq \log _{2}x}\vartheta (x^{1/n}).}
But from the definition of ϑ(x) we have the trivial inequality
ϑ
(
x
)
≤
∑
p
≤
x
log
x
≤
x
log
x
{\displaystyle \vartheta (x)\leq \sum _{p\leq x}\log x\leq x\log x}
so
{\displaystyle {\begin{aligned}0\leq \psi (x)-\vartheta (x)&\leq \sum _{2\leq n\leq \log _{2}x}x^{1/n}\log(x^{1/n})\\&\leq (\log _{2}x){\sqrt {x}}\log {\sqrt {x}}\\&={\frac {\log x}{\log 2}}{\frac {\sqrt {x}}{2}}\log x\\&={\frac {{\sqrt {x}}\,(\log x)^{2}}{2\log 2}}.\end{aligned}}}
Lastly, divide by x to obtain the inequality in the theorem.
== Asymptotics and bounds ==
The following bounds are known for the Chebyshev functions:[1][2] (in these formulas pk is the kth prime number; p1 = 2, p2 = 3, etc.)
{\displaystyle {\begin{aligned}\vartheta (p_{k})&\geq k\left(\log k+\log \log k-1+{\frac {\log \log k-2.050735}{\log k}}\right)&&{\text{for }}k\geq 10^{11},\\[8px]\vartheta (p_{k})&\leq k\left(\log k+\log \log k-1+{\frac {\log \log k-2}{\log k}}\right)&&{\text{for }}k\geq 198,\\[8px]|\vartheta (x)-x|&\leq 0.006788\,{\frac {x}{\log x}}&&{\text{for }}x\geq 10\,544\,111,\\[8px]|\psi (x)-x|&\leq 0.006409\,{\frac {x}{\log x}}&&{\text{for }}x\geq e^{22},\\[8px]0.9999{\sqrt {x}}&<\psi (x)-\vartheta (x)<1.00007{\sqrt {x}}+1.78{\sqrt[{3}]{x}}&&{\text{for }}x\geq 121.\end{aligned}}}
Furthermore, under the Riemann hypothesis,
{\displaystyle {\begin{aligned}|\vartheta (x)-x|&=O{\Big (}x^{{\frac {1}{2}}+\varepsilon }{\Big )}\\|\psi (x)-x|&=O{\Big (}x^{{\frac {1}{2}}+\varepsilon }{\Big )}\end{aligned}}}
for any ε > 0.
Upper bounds exist for both ϑ (x) and ψ (x) such that [3]
{\displaystyle {\begin{aligned}\vartheta (x)&<1.000028x\\\psi (x)&<1.03883x\end{aligned}}}
for any x > 0.
An explanation of the constant 1.03883 is given at OEIS: A206431.
== The exact formula ==
In 1895, Hans Carl Friedrich von Mangoldt proved[4] an explicit expression for ψ (x) as a sum over the nontrivial zeros of the Riemann zeta function:
{\displaystyle \psi _{0}(x)=x-\sum _{\rho }{\frac {x^{\rho }}{\rho }}-{\frac {\zeta '(0)}{\zeta (0)}}-{\tfrac {1}{2}}\log(1-x^{-2}).}
(The numerical value of ζ′ (0)/ζ (0) is log(2π).) Here ρ runs over the nontrivial zeros of the zeta function, and ψ0 is the same as ψ, except that at its jump discontinuities (the prime powers) it takes the value halfway between the values to the left and the right:
{\displaystyle \psi _{0}(x)={\frac {1}{2}}\!\left(\sum _{n\leq x}\Lambda (n)+\sum _{n<x}\Lambda (n)\right)={\begin{cases}\psi (x)-{\tfrac {1}{2}}\Lambda (x)&x=2,3,4,5,7,8,9,11,13,16,\dots \\\,\psi (x)&{\mbox{otherwise.}}\end{cases}}}
From the Taylor series for the logarithm, the last term in the explicit formula can be understood as a summation of x^ω/ω over the trivial zeros of the zeta function, ω = −2, −4, −6, ..., i.e.
{\displaystyle \sum _{k=1}^{\infty }{\frac {x^{-2k}}{-2k}}={\tfrac {1}{2}}\log \left(1-x^{-2}\right).}
Similarly, the first term, x = x^1/1, corresponds to the simple pole of the zeta function at 1. It being a pole rather than a zero accounts for the opposite sign of the term.
== Properties ==
A theorem due to Erhard Schmidt states that, for some explicit positive constant K, there are infinitely many natural numbers x such that
{\displaystyle \psi (x)-x<-K{\sqrt {x}}}
and infinitely many natural numbers x such that
{\displaystyle \psi (x)-x>K{\sqrt {x}}.}
[5][6]
In little-o notation, one may write the above as
{\displaystyle \psi (x)-x\neq o\left({\sqrt {x}}\,\right).}
Hardy and Littlewood[7] prove the stronger result, that
{\displaystyle \psi (x)-x\neq o\left({\sqrt {x}}\,\log \log \log x\right).}
== Relation to primorials ==
The first Chebyshev function is the logarithm of the primorial of x, denoted x #:
{\displaystyle \vartheta (x)=\sum _{p\leq x}\log p=\log \prod _{p\leq x}p=\log \left(x\#\right).}
This proves that the primorial x# is asymptotically equal to e^{(1 + o(1))x}, where "o" is the little-o notation (see big O notation) and together with the prime number theorem establishes the asymptotic behavior of p_n#.
== Relation to the prime-counting function ==
The Chebyshev function can be related to the prime-counting function as follows. Define
{\displaystyle \Pi (x)=\sum _{n\leq x}{\frac {\Lambda (n)}{\log n}}.}
Then
{\displaystyle \Pi (x)=\sum _{n\leq x}\Lambda (n)\int _{n}^{x}{\frac {dt}{t\log ^{2}t}}+{\frac {1}{\log x}}\sum _{n\leq x}\Lambda (n)=\int _{2}^{x}{\frac {\psi (t)\,dt}{t\log ^{2}t}}+{\frac {\psi (x)}{\log x}}.}
The transition from Π to the prime-counting function, π, is made through the equation
{\displaystyle \Pi (x)=\pi (x)+{\tfrac {1}{2}}\pi \left({\sqrt {x}}\,\right)+{\tfrac {1}{3}}\pi \left({\sqrt[{3}]{x}}\,\right)+\cdots }
Certainly π (x) ≤ x, so for the sake of approximation, this last relation can be recast in the form
{\displaystyle \pi (x)=\Pi (x)+O\left({\sqrt {x}}\,\right).}
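For small x the function Π(x) is easy to compute directly, and comparing it with π(x) makes the size of the gap concrete (an illustrative sketch; the function names are ours):

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def big_pi(x):
    """Pi(x) = sum over prime powers p^k <= x of 1/k
             = pi(x) + pi(sqrt(x))/2 + pi(cbrt(x))/3 + ..."""
    total = 0.0
    for p in primes_upto(x):
        pk, k = p, 1
        while pk <= x:          # each power p^k <= x contributes 1/k
            total += 1 / k
            k += 1
            pk *= p
    return total

print(len(primes_upto(100)), big_pi(100))  # pi(100) = 25, Pi(100) ≈ 28.53
```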
== The Riemann hypothesis ==
The Riemann hypothesis states that all nontrivial zeros of the zeta function have real part 1/2. In this case, |x ρ| = √x, and it can be shown that
{\displaystyle \sum _{\rho }{\frac {x^{\rho }}{\rho }}=O\!\left({\sqrt {x}}\,\log ^{2}x\right).}
By the above, this implies
{\displaystyle \pi (x)=\operatorname {li} (x)+O\!\left({\sqrt {x}}\,\log x\right).}
== Smoothing function ==
The smoothing function is defined as
{\displaystyle \psi _{1}(x)=\int _{0}^{x}\psi (t)\,dt.}
Obviously
{\displaystyle \psi _{1}(x)\sim {\frac {x^{2}}{2}}.}
== Notes ==
^ Pierre Dusart, "Estimates of some functions over primes without R.H.". arXiv:1002.0442
^ Pierre Dusart, "Sharper bounds for ψ, θ, π, pk", Rapport de recherche no. 1998-06, Université de Limoges. An abbreviated version appeared as "The kth prime is greater than k(log k + log log k − 1) for k ≥ 2", Mathematics of Computation, Vol. 68, No. 225 (1999), pp. 411–415.
^ Erhard Schmidt, "Über die Anzahl der Primzahlen unter gegebener Grenze", Mathematische Annalen, 57 (1903), pp. 195–204.
^ G .H. Hardy and J. E. Littlewood, "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes", Acta Mathematica, 41 (1916) pp. 119–196.
^ Davenport, Harold (2000). In Multiplicative Number Theory. Springer. p. 104. ISBN 0-387-95097-4. Google Book Search.
== References ==
Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
== External links ==
Weisstein, Eric W. "Chebyshev functions". MathWorld.
"Mangoldt summatory function". PlanetMath.
"Chebyshev functions". PlanetMath.
Riemann's Explicit Formula, with images and movies | Wikipedia/Chebyshev_function |
In number theory, the totient summatory function Φ(n) is a summatory function of Euler's totient function defined by
{\displaystyle \Phi (n):=\sum _{k=1}^{n}\varphi (k),\quad n\in \mathbb {N} .}
It is the number of ordered pairs of coprime integers (p,q), where 1 ≤ p ≤ q ≤ n.
The first few values are 0, 1, 2, 4, 6, 10, 12, 18, 22, 28, 32, ... (sequence A002088 in the OEIS). Values for powers of 10 are 1, 32, 3044, 304192, 30397486, 3039650754, ... (sequence A064018 in the OEIS).
== Properties ==
Applying Möbius inversion to the totient function yields
{\displaystyle \Phi (n)=\sum _{k=1}^{n}k\sum _{d\mid k}{\frac {\mu (d)}{d}}={\frac {1}{2}}\sum _{k=1}^{n}\mu (k)\left\lfloor {\frac {n}{k}}\right\rfloor \left(1+\left\lfloor {\frac {n}{k}}\right\rfloor \right).}
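Both the direct definition and the Möbius-inverted form are easy to compute for small n; the following sketch (function names are ours) checks them against each other and against the tabulated values Φ(10) = 32 and Φ(100) = 3044:

```python
def mobius_upto(n):
    """mu(1..n): start from mu = 1 everywhere, and for each prime p flip the
    sign on multiples of p, then zero out multiples of p^2."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

def totient_summatory(n):
    """Phi(n) = sum of phi(k) for k = 1..n, via a totient sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                     # untouched, so p is prime
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p       # multiply by (1 - 1/p)
    return sum(phi[1:])

def totient_summatory_mobius(n):
    """Phi(n) = (1/2) * sum_{k<=n} mu(k) * floor(n/k) * (1 + floor(n/k))."""
    mu = mobius_upto(n)
    return sum(mu[k] * (n // k) * (1 + n // k) for k in range(1, n + 1)) // 2

print(totient_summatory(10), totient_summatory_mobius(10))  # 32 32
```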
Φ(n) has the asymptotic expansion
{\displaystyle \Phi (n)\sim {\frac {1}{2\zeta (2)}}n^{2}+O\left(n\log n\right)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n\log n\right),}
where ζ(2) is the Riemann zeta function evaluated at 2, which is π²/6.
== Reciprocal totient summatory function ==
The summatory function of the reciprocal of the totient is
{\displaystyle S(n):=\sum _{k=1}^{n}{\frac {1}{\varphi (k)}}.}
Edmund Landau showed in 1900 that this function has the asymptotic behavior
{\displaystyle S(n)\sim A(\gamma +\log n)+B+O\left({\frac {\log n}{n}}\right),}
where γ is the Euler–Mascheroni constant,
{\displaystyle A=\sum _{k=1}^{\infty }{\frac {\mu (k)^{2}}{k\varphi (k)}}={\frac {\zeta (2)\zeta (3)}{\zeta (6)}}=\prod _{p\in \mathbb {P} }\left(1+{\frac {1}{p(p-1)}}\right),}
and
{\displaystyle B=\sum _{k=1}^{\infty }{\frac {\mu (k)^{2}\log k}{k\,\varphi (k)}}=A\,\prod _{p\in \mathbb {P} }\left({\frac {\log p}{p^{2}-p+1}}\right).}
The constant A = 1.943596... is sometimes known as Landau's totient constant. The sum {\displaystyle \textstyle \sum _{k=1}^{\infty }1/(k\;\varphi (k))} converges to
{\displaystyle \sum _{k=1}^{\infty }{\frac {1}{k\varphi (k)}}=\zeta (2)\prod _{p\in \mathbb {P} }\left(1+{\frac {1}{p^{2}(p-1)}}\right)=2.20386\ldots .}
In this case, the product over the primes in the right side is a constant known as the totient summatory constant, and its value is
{\displaystyle \prod _{p\in \mathbb {P} }\left(1+{\frac {1}{p^{2}(p-1)}}\right)=1.339784\ldots .}
== See also ==
Arithmetic function
== References ==
Weisstein, Eric W. "Totient Summatory Function". MathWorld.
== External links ==
OEIS Totient summatory function
Decimal expansion of totient constant product(1 + 1/(p^2*(p-1))), p prime >= 2) | Wikipedia/Totient_summatory_function |
In mathematics, Carmichael's totient function conjecture concerns the multiplicity of values of Euler's totient function φ(n), which counts the number of positive integers up to n that are coprime to n. It states that, for every n, there is at least one other integer m ≠ n such that φ(m) = φ(n).
Robert Carmichael first stated this conjecture in 1907, but as a theorem rather than as a conjecture. However, his proof was faulty, and in 1922, he retracted his claim and stated the conjecture as an open problem.
== Examples ==
The totient function φ(n) is equal to 2 when n is one of the three values 3, 4, and 6. Thus, if we take any one of these three values as n, then either of the other two values can be used as the m for which φ(m) = φ(n).
Similarly, the totient is equal to 4 when n is one of the four values 5, 8, 10, and 12, and it is equal to 6 when n is one of the four values 7, 9, 14, and 18. In each case, there is more than one value of n having the same value of φ(n).
The conjecture states that this phenomenon of repeated values holds for every n.
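The multiplicities in these examples are easy to reproduce with a brute-force search; an illustrative Python sketch (names are ours) groups n by the value of φ(n):

```python
from collections import defaultdict

def euler_phi(n):
    """Euler's totient via trial-division factorisation."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # leftover prime factor
        result -= result // m
    return result

groups = defaultdict(list)
for n in range(1, 1001):
    groups[euler_phi(n)].append(n)

print(groups[2])  # [3, 4, 6]
print(groups[4])  # [5, 8, 10, 12]
print(groups[6])  # [7, 9, 14, 18]
```

Of course, a finite search like this can only illustrate the conjecture, not prove it; values near the top of the range may appear alone simply because their partners exceed the search bound.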
== Lower bounds ==
There are very high lower bounds for Carmichael's conjecture that are relatively easy to determine. Carmichael himself proved that any counterexample to his conjecture (that is, a value n such that φ(n) is different from the totients of all other numbers) must be at least 10^37, and Victor Klee extended this result to 10^400. A lower bound of {\displaystyle 10^{10^{7}}} was given by Schlafly and Wagon, and a lower bound of {\displaystyle 10^{10^{10}}} was determined by Kevin Ford in 1998.
The computational technique underlying these lower bounds depends on some key results of Klee that make it possible to show that the smallest counterexample must be divisible by squares of the primes dividing its totient value. Klee's results imply that 8 and Fermat primes (primes of the form 2^k + 1) excluding 3 do not divide the smallest counterexample. Consequently, proving the conjecture is equivalent to proving that the conjecture holds for all integers congruent to 4 (mod 8).
== Other results ==
Ford also proved that if there exists a counterexample to the conjecture, then a positive proportion (in the sense of asymptotic density) of the integers are likewise counterexamples.
Although the conjecture is widely believed, Carl Pomerance gave a sufficient condition for an integer n to be a counterexample to the conjecture (Pomerance 1974). According to this condition, n is a counterexample if for every prime p such that p − 1 divides φ(n), p^2 divides n. However Pomerance showed that the existence of such an integer is highly improbable. Essentially, one can show that if the first k primes p congruent to 1 (mod q) (where q is a prime) are all less than q^{k+1}, then such an integer will be divisible by every prime and thus cannot exist. In any case, proving that Pomerance's counterexample does not exist is far from proving Carmichael's conjecture. However, if it exists then infinitely many counterexamples exist, as asserted by Ford.
Another way of stating Carmichael's conjecture is that, if
A(f) denotes the number of positive integers n for which φ(n) = f, then A(f) can never equal 1. Relatedly, Wacław Sierpiński conjectured that every positive integer other than 1 occurs as a value of A(f), a conjecture that was proven in 1999 by Kevin Ford.
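This reformulation is easy to probe numerically. Below is a minimal Python sketch (the names phi_sieve and A are mine, not standard notation); it relies on the elementary bound φ(n) ≥ √(n/2), so any n with φ(n) = f satisfies n ≤ 2f² and the count runs over a finite range.

```python
def phi_sieve(limit):
    """Euler's totient phi(n) for all n in 0..limit, via a sieve."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:          # p is prime: no smaller prime touched it
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p
    return phi

def A(f):
    """Number of n with phi(n) = f.  Since phi(n) >= sqrt(n/2),
    any solution satisfies n <= 2*f*f, so the search is finite."""
    limit = 2 * f * f + 1
    phi = phi_sieve(limit)
    return sum(1 for n in range(1, limit + 1) if phi[n] == f)
```

For example, A(4) = 4, realized by n = 5, 8, 10, 12; Carmichael's conjecture is the statement that the returned count is never exactly 1.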
== Notes ==
== References ==
Carmichael, R. D. (1907), "On Euler's φ-function", Bulletin of the American Mathematical Society, 13 (5): 241–243, doi:10.1090/S0002-9904-1907-01453-2, MR 1558451.
Carmichael, R. D. (1922), "Note on Euler's φ-function", Bulletin of the American Mathematical Society, 28 (3): 109–110, doi:10.1090/S0002-9904-1922-03504-5, MR 1560520.
Ford, K. (1999), "The number of solutions of φ(x) = m", Annals of Mathematics, 150 (1): 283–311, doi:10.2307/121103, JSTOR 121103, MR 1715326, Zbl 0978.11053.
Guy, Richard K. (2004), Unsolved problems in number theory (3rd ed.), Springer-Verlag, B39, ISBN 978-0-387-20860-2, Zbl 1058.11001.
Klee, V. L. Jr. (1947), "On a conjecture of Carmichael", Bulletin of the American Mathematical Society, 53 (12): 1183–1186, doi:10.1090/S0002-9904-1947-08940-0, MR 0022855, Zbl 0035.02601.
Pomerance, Carl (1974), "On Carmichael's conjecture" (PDF), Proceedings of the American Mathematical Society, 43 (2): 297–298, doi:10.2307/2038881, JSTOR 2038881, Zbl 0254.10009.
Sándor, Jozsef; Crstici, Borislav (2004), Handbook of number theory II, Dordrecht: Kluwer Academic, pp. 228–229, ISBN 978-1-4020-2546-4, Zbl 1079.11001.
Schlafly, A.; Wagon, S. (1994), "Carmichael's conjecture on the Euler function is valid below 1010,000,000", Mathematics of Computation, 63 (207): 415–419, doi:10.2307/2153585, JSTOR 2153585, MR 1226815, Zbl 0801.11001.
== External links ==
Weisstein, Eric W., "Carmichael's Totient Function Conjecture", MathWorld
In number theory, a branch of mathematics, the Carmichael function λ(n) of a positive integer n is the smallest positive integer m such that
a^m ≡ 1 (mod n)
holds for every integer a coprime to n. In algebraic terms, λ(n) is the exponent of the multiplicative group of integers modulo n. As this is a finite abelian group, there must exist an element whose order equals the exponent, λ(n). Such an element is called a primitive λ-root modulo n.
The Carmichael function is named after the American mathematician Robert Carmichael who defined it in 1910. It is also known as Carmichael's λ function, the reduced totient function, and the least universal exponent function.
The order of the multiplicative group of integers modulo n is φ(n), where φ is Euler's totient function. Since the order of an element of a finite group divides the order of the group, λ(n) divides φ(n). The following table compares the first 36 values of λ(n) (sequence A002322 in the OEIS) and φ(n) (in bold if they are different; the values of n such that they are different are listed in OEIS: A033949).
== Numerical examples ==
n = 5. The set of numbers less than and coprime to 5 is {1,2,3,4}. Hence Euler's totient function has value φ(5) = 4 and the value of Carmichael's function, λ(5), must be a divisor of 4. The divisor 1 does not satisfy the definition of Carmichael's function since
a^1 ≢ 1 (mod 5) except for a ≡ 1 (mod 5). Neither does 2, since 2^2 ≡ 3^2 ≡ 4 ≢ 1 (mod 5). Hence λ(5) = 4. Indeed, 1^4 ≡ 2^4 ≡ 3^4 ≡ 4^4 ≡ 1 (mod 5). Both 2 and 3 are primitive λ-roots modulo 5 and also primitive roots modulo 5.
n = 8. The set of numbers less than and coprime to 8 is {1,3,5,7} . Hence φ(8) = 4 and λ(8) must be a divisor of 4. In fact λ(8) = 2 since
1^2 ≡ 3^2 ≡ 5^2 ≡ 7^2 ≡ 1 (mod 8). The primitive λ-roots modulo 8 are 3, 5, and 7. There are no primitive roots modulo 8.
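Both examples follow directly from the definition. A brute-force sketch (the function name is mine) computes λ(n) as the lcm of the multiplicative orders of all units modulo n; it is only practical for small n:

```python
from math import gcd, lcm

def carmichael_lambda_naive(n):
    """lambda(n) straight from the definition: the lcm of the
    multiplicative orders of all residues coprime to n."""
    result = 1
    for a in range(2, n):
        if gcd(a, n) == 1:
            order, x = 1, a
            while x != 1:
                x = x * a % n   # step through powers of a until we hit 1
                order += 1
            result = lcm(result, order)
    return result
```

Here carmichael_lambda_naive(5) returns 4 and carmichael_lambda_naive(8) returns 2, matching the two examples above.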
== Recurrence for λ(n) ==
The Carmichael lambda function of a prime power can be expressed in terms of the Euler totient. Any number that is not 1 or a prime power can be written uniquely as the product of distinct prime powers, in which case λ of the product is the least common multiple of the λ of the prime power factors. Specifically, λ(n) is given by the recurrence
{\displaystyle \lambda (n)={\begin{cases}\varphi (n)&{\text{if }}n{\text{ is 1, 2, 4, or an odd prime power,}}\\{\tfrac {1}{2}}\varphi (n)&{\text{if }}n=2^{r},\ r\geq 3,\\\operatorname {lcm} {\Bigl (}\lambda (n_{1}),\lambda (n_{2}),\ldots ,\lambda (n_{k}){\Bigr )}&{\text{if }}n=n_{1}n_{2}\ldots n_{k}{\text{ where }}n_{1},n_{2},\ldots ,n_{k}{\text{ are powers of distinct primes.}}\end{cases}}}
Euler's totient for a prime power, that is, a number pr with p prime and r ≥ 1, is given by
{\displaystyle \varphi (p^{r})=p^{r-1}(p-1).}
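The recurrence translates directly into code. The sketch below (the helper names factorize and carmichael_lambda are mine; math.lcm with several arguments needs Python 3.9+) computes λ(n) from the prime factorization exactly as the case analysis prescribes:

```python
from math import lcm

def factorize(n):
    """Prime factorization of n as a dict {p: r}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def carmichael_lambda(n):
    """lambda(n) via the recurrence: phi on 1, 2, 4 and odd prime
    powers, phi/2 on 2^r with r >= 3, lcm over distinct prime powers."""
    if n == 1:
        return 1
    parts = []
    for p, r in factorize(n).items():
        phi_pr = p ** (r - 1) * (p - 1)   # Euler's phi of p^r
        parts.append(phi_pr // 2 if p == 2 and r >= 3 else phi_pr)
    return lcm(*parts)
```

For instance carmichael_lambda(15) = lcm(λ(3), λ(5)) = lcm(2, 4) = 4, in agreement with the numerical examples.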
== Carmichael's theorems ==
Carmichael proved two theorems that, together, establish that if λ(n) is considered as defined by the recurrence of the previous section, then it satisfies the property stated in the introduction, namely that it is the smallest positive integer m such that
a^m ≡ 1 (mod n)
for all a relatively prime to n.
This implies that the order of every element of the multiplicative group of integers modulo n divides λ(n). Carmichael calls an element a for which
a^{λ(n)} is the least power of a congruent to 1 (mod n) a primitive λ-root modulo n. (This is not to be confused with a primitive root modulo n, which Carmichael sometimes refers to as a primitive φ-root modulo n.)
If g is one of the primitive λ-roots guaranteed by the theorem, then
g^m ≡ 1 (mod n) has no positive integer solutions m less than λ(n), showing that there is no positive m < λ(n) such that a^m ≡ 1 (mod n)
for all a relatively prime to n.
The second statement of Theorem 2 does not imply that all primitive λ-roots modulo n are congruent to powers of a single root g. For example, if n = 15, then λ(n) = 4 while
φ(n) = 8 and φ(λ(n)) = 2. There are four primitive λ-roots modulo 15, namely 2, 7, 8, and 13, as 1 ≡ 2^4 ≡ 8^4 ≡ 7^4 ≡ 13^4 (mod 15). The roots 2 and 8 are congruent to powers of each other and the roots 7 and 13 are congruent to powers of each other, but neither 7 nor 13 is congruent to a power of 2 or 8 and vice versa. The other four elements of the multiplicative group modulo 15, namely 1, 4 (which satisfies 4 ≡ 2^2 ≡ 8^2 ≡ 7^2 ≡ 13^2 (mod 15)
), 11, and 14, are not primitive λ-roots modulo 15.
For a contrasting example, if n = 9, then
λ(n) = φ(n) = 6 and φ(λ(n)) = 2. There are two primitive λ-roots modulo 9, namely 2 and 5, each of which is congruent to the fifth power of the other. They are also both primitive φ-roots modulo 9.
== Properties of the Carmichael function ==
In this section, an integer n is divisible by a nonzero integer m if there exists an integer k such that n = km. This is written as m ∣ n.
=== A consequence of minimality of λ(n) ===
Suppose a^m ≡ 1 (mod n) for all numbers a coprime with n. Then λ(n) | m.
Proof: If m = kλ(n) + r with 0 ≤ r < λ(n), then
{\displaystyle a^{r}=1^{k}\cdot a^{r}\equiv \left(a^{\lambda (n)}\right)^{k}\cdot a^{r}=a^{k\lambda (n)+r}=a^{m}\equiv 1{\pmod {n}}}
for all numbers a coprime with n. It follows that r = 0 since r < λ(n) and λ(n) is the minimal positive exponent for which the congruence holds for all a coprime with n.
=== λ(n) divides φ(n) ===
This follows from elementary group theory, because the exponent of any finite group must divide the order of the group. λ(n) is the exponent of the multiplicative group of integers modulo n while φ(n) is the order of that group. In particular, the two must be equal in the cases where the multiplicative group is cyclic due to the existence of a primitive root, which is the case for odd prime powers.
We can thus view Carmichael's theorem as a sharpening of Euler's theorem.
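Both facts are easy to confirm mechanically for small n with a brute-force sketch (the helper names phi and lam are mine):

```python
from math import gcd, lcm

def phi(n):
    """Euler's totient by direct count."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def lam(n):
    """Carmichael's lambda as the lcm of unit orders (brute force)."""
    result = 1
    for a in range(2, n):
        if gcd(a, n) == 1:
            order, x = 1, a
            while x != 1:
                x = x * a % n
                order += 1
            result = lcm(result, order)
    return result

# lambda always divides phi ...
assert all(phi(n) % lam(n) == 0 for n in range(1, 120))
# ... with equality when the unit group is cyclic, e.g. odd prime powers
assert all(lam(n) == phi(n) for n in (3, 5, 7, 9, 25, 27, 49))
# and a proper divisor when it is not, e.g. 2^r with r >= 3
assert lam(8) == phi(8) // 2
```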
=== Divisibility ===
{\displaystyle a\,|\,b\Rightarrow \lambda (a)\,|\,\lambda (b)}
Proof.
By definition, for any integer k with gcd(k, b) = 1 (and thus also gcd(k, a) = 1), we have that b | (k^{λ(b)} − 1), and therefore a | (k^{λ(b)} − 1). This establishes that k^{λ(b)} ≡ 1 (mod a) for all k relatively prime to a. By the consequence of minimality proved above, we have λ(a) | λ(b).
=== Composition ===
For all positive integers a and b it holds that
λ(lcm(a, b)) = lcm(λ(a), λ(b)).
This is an immediate consequence of the recurrence for the Carmichael function.
=== Exponential cycle length ===
If r_max = max_i {r_i} is the biggest exponent in the prime factorization n = p_1^{r_1} p_2^{r_2} ⋯ p_k^{r_k} of n, then for all a (including those not coprime to n) and all r ≥ r_max,
{\displaystyle a^{r}\equiv a^{\lambda (n)+r}{\pmod {n}}.}
In particular, for square-free n (r_max = 1), for all a we have
{\displaystyle a\equiv a^{\lambda (n)+1}{\pmod {n}}.}
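This congruence can be verified exhaustively for small moduli; the sketch below (the helper names lam and r_max are mine) deliberately includes bases a that are not coprime to n:

```python
from math import gcd, lcm

def lam(n):
    """Carmichael lambda, by brute force over the unit group."""
    result = 1
    for a in range(2, n):
        if gcd(a, n) == 1:
            order, x = 1, a
            while x != 1:
                x = x * a % n
                order += 1
            result = lcm(result, order)
    return result

def r_max(n):
    """Largest exponent in the prime factorization of n."""
    best, p = 1, 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        best = max(best, e)
        p += 1
    return best if n == 1 else max(best, 1)

# a^r == a^(lambda(n)+r) (mod n) for every a, once r >= r_max(n)
for n in range(2, 50):
    L, R = lam(n), r_max(n)
    for a in range(n):
        for r in range(R, R + 3):
            assert pow(a, r, n) == pow(a, L + r, n)
```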
=== Average value ===
For any n ≥ 16:
{\displaystyle {\frac {1}{n}}\sum _{i\leq n}\lambda (i)={\frac {n}{\ln n}}e^{B(1+o(1))\ln \ln n/(\ln \ln \ln n)}}
(called Erdős approximation in the following) with the constant
{\displaystyle B:=e^{-\gamma }\prod _{p\in \mathbb {P} }\left({1-{\frac {1}{(p-1)^{2}(p+1)}}}\right)\approx 0.34537}
and γ ≈ 0.57721, the Euler–Mascheroni constant.
The following table gives an overview of the first 2^26 − 1 = 67108863 values of the λ function, for both the exact average and its Erdős approximation.
Additionally given is an overview of the more easily accessible “logarithm over logarithm” values LoL(n) := ln λ(n)/ln n with
LoL(n) > 4/5 ⇔ λ(n) > n^{4/5}.
There, the table entry in row number 26 at column
% LoL > 4/5 → 60.49
indicates that 60.49% (≈ 40000000) of the integers 1 ≤ n ≤ 67108863 have λ(n) > n^{4/5}, meaning that the majority of the λ values is exponential in the length l := log_2(n) of the input n, namely
{\displaystyle \left(2^{\frac {4}{5}}\right)^{l}=2^{\frac {4l}{5}}=\left(2^{l}\right)^{\frac {4}{5}}=n^{\frac {4}{5}}.}
=== Prevailing interval ===
For all numbers N and all but o(N) positive integers n ≤ N (a "prevailing" majority):
{\displaystyle \lambda (n)={\frac {n}{(\ln n)^{\ln \ln \ln n+A+o(1)}}}}
with the constant
{\displaystyle A:=-1+\sum _{p\in \mathbb {P} }{\frac {\ln p}{(p-1)^{2}}}\approx 0.2269688}
=== Lower bounds ===
For any sufficiently large number N and for any Δ ≥ (ln ln N)^3, there are at most
{\displaystyle N\exp \left(-0.69(\Delta \ln \Delta )^{\frac {1}{3}}\right)}
positive integers n ≤ N such that λ(n) ≤ n e^{−Δ}.
=== Minimal order ===
For any sequence n1 < n2 < n3 < ⋯ of positive integers, any constant 0 < c < 1/ln 2, and any sufficiently large i:
{\displaystyle \lambda (n_{i})>\left(\ln n_{i}\right)^{c\ln \ln \ln n_{i}}.}
=== Small values ===
For a constant c and any sufficiently large positive A, there exists an integer n > A such that
{\displaystyle \lambda (n)<\left(\ln A\right)^{c\ln \ln \ln A}.}
Moreover, n is of the form
{\displaystyle n=\mathop {\prod _{q\in \mathbb {P} }} _{(q-1)|m}q}
for some square-free integer m < (ln A)^{c ln ln ln A}.
=== Image of the function ===
The set of values of the Carmichael function has counting function
{\displaystyle {\frac {x}{(\ln x)^{\eta +o(1)}}},}
where
{\displaystyle \eta =1-{\frac {1+\ln \ln 2}{\ln 2}}\approx 0.08607}
== Use in cryptography ==
The Carmichael function is important in cryptography due to its use in the RSA encryption algorithm.
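As a small illustration of that role, the toy example below (tiny textbook primes, for exposition only, not a secure parameter choice; pow(e, -1, m) needs Python 3.8+) builds the private exponent as the inverse of the public exponent modulo λ(n), the smallest modulus for which decryption is guaranteed for every message:

```python
from math import gcd, lcm

# Toy RSA: these tiny primes are for illustration only.
p, q = 61, 53
n = p * q                     # modulus, 3233
lam = lcm(p - 1, q - 1)       # lambda(n) = lcm(lambda(p), lambda(q)) = 780
e = 17                        # public exponent, must be coprime to lambda(n)
assert gcd(e, lam) == 1
d = pow(e, -1, lam)           # private exponent: e*d == 1 (mod lambda(n))
m = 65                        # a message, 0 <= m < n
c = pow(m, e, n)              # encrypt
assert pow(c, d, n) == m      # decrypt works because e*d == 1 (mod lambda(n))
```

Using λ(n) = 780 here rather than φ(n) = 3120 typically yields a smaller private exponent while decrypting every message correctly.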
== Proof of Theorem 1 ==
For n = p, a prime, Theorem 1 is equivalent to Fermat's little theorem:
{\displaystyle a^{p-1}\equiv 1{\pmod {p}}\qquad {\text{for all }}a{\text{ coprime to }}p.}
For prime powers p^r, r > 1, if
{\displaystyle a^{p^{r-1}(p-1)}=1+hp^{r}}
holds for some integer h, then raising both sides to the power p gives
{\displaystyle a^{p^{r}(p-1)}=1+h'p^{r+1}}
for some other integer h′. By induction it follows that
{\displaystyle a^{\varphi (p^{r})}\equiv 1{\pmod {p^{r}}}}
for all a relatively prime to p and hence to p^r. This establishes the theorem for n = 4 or any odd prime power.
=== Sharpening the result for higher powers of two ===
For a coprime to (powers of) 2 we have a = 1 + 2h_2 for some integer h_2. Then,
{\displaystyle a^{2}=1+4h_{2}(h_{2}+1)=1+8{\binom {h_{2}+1}{2}}=:1+8h_{3}},
where h_3 is an integer. With r = 3, this is written
{\displaystyle a^{2^{r-2}}=1+2^{r}h_{r}.}
Squaring both sides gives
{\displaystyle a^{2^{r-1}}=\left(1+2^{r}h_{r}\right)^{2}=1+2^{r+1}\left(h_{r}+2^{r-1}h_{r}^{2}\right)=:1+2^{r+1}h_{r+1},}
where h_{r+1}
is an integer. It follows by induction that
{\displaystyle a^{2^{r-2}}=a^{{\frac {1}{2}}\varphi (2^{r})}\equiv 1{\pmod {2^{r}}}}
for all r ≥ 3 and all a coprime to 2^r.
=== Integers with multiple prime factors ===
By the unique factorization theorem, any n > 1 can be written in a unique way as
{\displaystyle n=p_{1}^{r_{1}}p_{2}^{r_{2}}\cdots p_{k}^{r_{k}}}
where p1 < p2 < ... < pk are primes and r1, r2, ..., rk are positive integers. The results for prime powers establish that, for
1 ≤ j ≤ k,
{\displaystyle a^{\lambda \left(p_{j}^{r_{j}}\right)}\equiv 1{\pmod {p_{j}^{r_{j}}}}\qquad {\text{for all }}a{\text{ coprime to }}n{\text{ and hence to }}p_{j}^{r_{j}}.}
From this it follows that
{\displaystyle a^{\lambda (n)}\equiv 1{\pmod {p_{j}^{r_{j}}}}\qquad {\text{for all }}a{\text{ coprime to }}n,}
where, as given by the recurrence,
{\displaystyle \lambda (n)=\operatorname {lcm} {\Bigl (}\lambda \left(p_{1}^{r_{1}}\right),\lambda \left(p_{2}^{r_{2}}\right),\ldots ,\lambda \left(p_{k}^{r_{k}}\right){\Bigr )}.}
From the Chinese remainder theorem one concludes that
{\displaystyle a^{\lambda (n)}\equiv 1{\pmod {n}}\qquad {\text{for all }}a{\text{ coprime to }}n.}
== See also ==
Carmichael number
== Notes ==
== References ==
Erdős, Paul; Pomerance, Carl; Schmutz, Eric (1991). "Carmichael's lambda function". Acta Arithmetica. 58 (4): 363–385. doi:10.4064/aa-58-4-363-385. ISSN 0065-1036. MR 1121092. Zbl 0734.11047.
Friedlander, John B.; Pomerance, Carl; Shparlinski, Igor E. (2001). "Period of the power generator and small values of the Carmichael function". Mathematics of Computation. 70 (236): 1591–1605, 1803–1806. doi:10.1090/s0025-5718-00-01282-5. ISSN 0025-5718. MR 1836921. Zbl 1029.11043.
Sándor, Jozsef; Crstici, Borislav (2004). Handbook of number theory II. Dordrecht: Kluwer Academic. pp. 32–36, 193–195. ISBN 978-1-4020-2546-4. Zbl 1079.11001.
Carmichael, Robert D. (1914). The Theory of Numbers at Project Gutenberg
In number theory, an arithmetic, arithmetical, or number-theoretic function is generally any function whose domain is the set of positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". There is a larger class of number-theoretic functions that do not fit this definition, for example, the prime-counting functions. This article provides links to functions of both classes.
An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n.
Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum.
== Multiplicative and additive functions ==
An arithmetic function a is
completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n;
completely multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all natural numbers m and n;
Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them.
Then an arithmetic function a is
additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n;
multiplicative if a(1) = 1 and a(mn) = a(m)a(n) for all coprime natural numbers m and n.
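For instance, the divisor-count function d(n) (treated in detail below) is multiplicative and ω(n), the number of distinct prime factors, is additive; a small sketch with ad hoc helper names checks both on coprime pairs:

```python
from math import gcd

def d(n):
    """Number of divisors of n, by direct enumeration."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def omega(n):
    """Number of distinct prime factors of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

for m in range(1, 40):
    for n in range(1, 40):
        if gcd(m, n) == 1:
            assert d(m * n) == d(m) * d(n)               # multiplicative
            assert omega(m * n) == omega(m) + omega(n)   # additive
```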
== Notation ==
In this article,
∑_p f(p) and ∏_p f(p)
mean that the sum or product is over all prime numbers:
{\displaystyle \sum _{p}f(p)=f(2)+f(3)+f(5)+\cdots }
and
{\displaystyle \prod _{p}f(p)=f(2)f(3)f(5)\cdots .}
Similarly,
∑_{p^k} f(p^k) and ∏_{p^k} f(p^k)
mean that the sum or product is over all prime powers with strictly positive exponent (so k = 0 is not included):
{\displaystyle \sum _{p^{k}}f(p^{k})=\sum _{p}\sum _{k>0}f(p^{k})=f(2)+f(3)+f(4)+f(5)+f(7)+f(8)+f(9)+\cdots .}
The notations
∑_{d∣n} f(d) and ∏_{d∣n} f(d)
mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if n = 12, then
{\displaystyle \prod _{d\mid 12}f(d)=f(1)f(2)f(3)f(4)f(6)f(12).}
The notations can be combined:
∑_{p∣n} f(p) and ∏_{p∣n} f(p)
mean that the sum or product is over all prime divisors of n. For example, if n = 18, then
{\displaystyle \sum _{p\mid 18}f(p)=f(2)+f(3),}
and similarly
∑_{p^k∣n} f(p^k) and ∏_{p^k∣n} f(p^k)
mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then
{\displaystyle \prod _{p^{k}\mid 24}f(p^{k})=f(2)f(3)f(4)f(8).}
== Ω(n), ω(n), νp(n) – prime power decomposition ==
The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes:
{\displaystyle n=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}}
where p1 < p2 < ... < pk are primes and the aj are positive integers. (1 is given by the empty product.)
It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation νp(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the pi then νp(n) = ai, otherwise it is zero. Then
{\displaystyle n=\prod _{p}p^{\nu _{p}(n)}.}
In terms of the above, the prime omega functions ω and Ω are defined by ω(n) = k, the number of distinct prime factors of n, and Ω(n) = a_1 + a_2 + ⋯ + a_k, the number of prime factors of n counted with multiplicity.
To avoid repetition, formulas for the functions listed in this article are, whenever possible, given in terms of n and the corresponding pi, ai, ω, and Ω.
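The valuation and the product representation can be sketched directly (the helper names v_p and primes_upto are mine):

```python
def v_p(n, p):
    """p-adic valuation: exponent of the largest power of p dividing n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def primes_upto(limit):
    """All primes <= limit, by trial division."""
    return [p for p in range(2, limit + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

# every n is recovered as the product of p^v_p(n) over primes p <= n
for n in range(1, 200):
    prod = 1
    for p in primes_upto(n):
        prod *= p ** v_p(n, p)
    assert prod == n
```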
== Multiplicative functions ==
=== σk(n), τ(n), d(n) – divisor sums ===
σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number.
σ1(n), the sum of the (positive) divisors of n, is usually denoted by σ(n).
Since a positive number to the zero power is one, σ0(n) is therefore the number of (positive) divisors of n; it is usually denoted by d(n) or τ(n) (for the German Teiler = divisors).
{\displaystyle \sigma _{k}(n)=\prod _{i=1}^{\omega (n)}{\frac {p_{i}^{(a_{i}+1)k}-1}{p_{i}^{k}-1}}=\prod _{i=1}^{\omega (n)}\left(1+p_{i}^{k}+p_{i}^{2k}+\cdots +p_{i}^{a_{i}k}\right).}
Setting k = 0 in the second product gives
{\displaystyle \tau (n)=d(n)=(1+a_{1})(1+a_{2})\cdots (1+a_{\omega (n)}).}
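The two expressions for σk(n), the direct sum over divisors and the product over prime powers, can be compared numerically with a small sketch (the function names are mine):

```python
def sigma(k, n):
    """sigma_k(n): sum of the k-th powers of the divisors of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def sigma_from_factorization(k, n):
    """Same value via the product over prime powers given in the text."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            result *= sum(p ** (j * k) for j in range(a + 1))
        p += 1
    if n > 1:                     # one remaining prime factor with a = 1
        result *= 1 + n ** k
    return result

for n in range(1, 200):
    for k in (0, 1, 2):
        assert sigma(k, n) == sigma_from_factorization(k, n)
```

Setting k = 0 recovers the divisor count: sigma(0, 12) is 6, as d(12) = (1+2)(1+1) = 6.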
=== φ(n) – Euler totient function ===
φ(n), the Euler totient function, is the number of positive integers not greater than n that are coprime to n.
{\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right)=n\left({\frac {p_{1}-1}{p_{1}}}\right)\left({\frac {p_{2}-1}{p_{2}}}\right)\cdots \left({\frac {p_{\omega (n)}-1}{p_{\omega (n)}}}\right).}
=== Jk(n) – Jordan totient function ===
Jk(n), the Jordan totient function, is the number of k-tuples of positive integers all less than or equal to n that form a coprime (k + 1)-tuple together with n. It is a generalization of Euler's totient, φ(n) = J1(n).
{\displaystyle J_{k}(n)=n^{k}\prod _{p\mid n}\left(1-{\frac {1}{p^{k}}}\right)=n^{k}\left({\frac {p_{1}^{k}-1}{p_{1}^{k}}}\right)\left({\frac {p_{2}^{k}-1}{p_{2}^{k}}}\right)\cdots \left({\frac {p_{\omega (n)}^{k}-1}{p_{\omega (n)}^{k}}}\right).}
=== μ(n) – Möbius function ===
μ(n), the Möbius function, is important because of the Möbius inversion formula. See § Dirichlet convolution, below.
{\displaystyle \mu (n)={\begin{cases}(-1)^{\omega (n)}=(-1)^{\Omega (n)}&{\text{if }}\;\omega (n)=\Omega (n)\\0&{\text{if }}\;\omega (n)\neq \Omega (n).\end{cases}}}
This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.)
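A direct trial-division sketch of μ (the function name is mine), together with the divisor-sum identity that underlies Möbius inversion:

```python
def mobius(n):
    """mu(n): 0 if a square divides n, else (-1)^(number of prime factors)."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:     # p^2 divided the original n
                return 0
            result = -result
        p += 1
    if n > 1:                  # one prime factor left over
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# the identity behind Mobius inversion: the divisor sum of mu vanishes for n > 1
assert all(sum(mobius(d) for d in divisors(n)) == 0 for n in range(2, 200))
```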
=== τ(n) – Ramanujan tau function ===
τ(n), the Ramanujan tau function, is defined by its generating function identity:
{\displaystyle \sum _{n\geq 1}\tau (n)q^{n}=q\prod _{n\geq 1}(1-q^{n})^{24}.}
Although it is hard to say exactly what "arithmetical property of n" it "expresses", (τ(n) is (2π)^{−12} times the nth Fourier coefficient in the q-expansion of the modular discriminant function) it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σk(n) and rk(n) functions (because these are also coefficients in the expansion of modular forms).
=== cq(n) – Ramanujan's sum ===
cq(n), Ramanujan's sum, is the sum of the nth powers of the primitive qth roots of unity:
{\displaystyle c_{q}(n)=\sum _{\stackrel {1\leq a\leq q}{\gcd(a,q)=1}}e^{2\pi i{\tfrac {a}{q}}n}.}
Even though it is defined as a sum of complex numbers (irrational for most values of q), it is an integer. For a fixed value of n it is multiplicative in q:
If q and r are coprime, then
{\displaystyle c_{q}(n)c_{r}(n)=c_{qr}(n).}
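Both claims, integrality and multiplicativity in q, show up in a direct numerical sketch (the function name c is mine; the sum is evaluated with floating-point roots of unity and rounded back to the exact integer):

```python
from math import gcd
import cmath

def c(q, n):
    """Ramanujan's sum c_q(n), summed over the primitive q-th roots of
    unity and rounded back to the (provably integral) exact value."""
    total = sum(cmath.exp(2j * cmath.pi * a * n / q)
                for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(total.real)

assert c(1, 7) == 1 and c(2, 1) == -1 and c(3, 3) == 2
# multiplicative in q for fixed n: c_q(n) c_r(n) = c_qr(n) when gcd(q, r) = 1
for n in range(1, 8):
    for q in range(1, 10):
        for r in range(1, 10):
            if gcd(q, r) == 1:
                assert c(q, n) * c(r, n) == c(q * r, n)
```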
=== ψ(n) – Dedekind psi function ===
The Dedekind psi function, used in the theory of modular functions, is defined by the formula
{\displaystyle \psi (n)=n\prod _{p|n}\left(1+{\frac {1}{p}}\right).}
== Completely multiplicative functions ==
=== λ(n) – Liouville function ===
λ(n), the Liouville function, is defined by
{\displaystyle \lambda (n)=(-1)^{\Omega (n)}.}
=== χ(n) – characters ===
All Dirichlet characters χ(n) are completely multiplicative. Two characters have special notations:
The principal character (mod n) is denoted by χ0(a) (or χ1(a)). It is defined as
{\displaystyle \chi _{0}(a)={\begin{cases}1&{\text{if }}\gcd(a,n)=1,\\0&{\text{if }}\gcd(a,n)\neq 1.\end{cases}}}
The quadratic character (mod n) is denoted by the Jacobi symbol for odd n (it is not defined for even n):
{\displaystyle \left({\frac {a}{n}}\right)=\left({\frac {a}{p_{1}}}\right)^{a_{1}}\left({\frac {a}{p_{2}}}\right)^{a_{2}}\cdots \left({\frac {a}{p_{\omega (n)}}}\right)^{a_{\omega (n)}}.}
In this formula
{\displaystyle ({\tfrac {a}{p}})}
is the Legendre symbol, defined for all integers a and all odd primes p by
{\displaystyle \left({\frac {a}{p}}\right)={\begin{cases}\;\;\,0&{\text{if }}a\equiv 0{\pmod {p}},\\+1&{\text{if }}a\not \equiv 0{\pmod {p}}{\text{ and for some integer }}x,\;a\equiv x^{2}{\pmod {p}}\\-1&{\text{if there is no such }}x.\end{cases}}}
Following the normal convention for the empty product,
{\displaystyle \left({\frac {a}{1}}\right)=1.}
== Additive functions ==
=== ω(n) – distinct prime divisors ===
ω(n), defined above as the number of distinct primes dividing n, is additive (see Prime omega function).
== Completely additive functions ==
=== Ω(n) – prime divisors ===
Ω(n), defined above as the number of prime factors of n counted with multiplicities, is completely additive (see Prime omega function).
=== νp(n) – p-adic valuation of an integer n ===
For a fixed prime p, νp(n), defined above as the exponent of the largest power of p dividing n, is completely additive.
=== Logarithmic derivative ===
{\displaystyle \operatorname {ld} (n)={\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}}
where D(n) is the arithmetic derivative.
== Neither multiplicative nor additive ==
=== π(x), Π(x), ϑ(x), ψ(x) – prime-counting functions ===
These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive.
π(x), the prime-counting function, is the number of primes not exceeding x. It is the summation function of the characteristic function of the prime numbers.
{\displaystyle \pi (x)=\sum _{p\leq x}1}
A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, etc. It is the summation function of the arithmetic function which takes the value 1/k on integers which are the kth power of some prime number, and the value 0 on other integers.
{\displaystyle \Pi (x)=\sum _{p^{k}\leq x}{\frac {1}{k}}.}
ϑ(x) and ψ(x), the Chebyshev functions, are defined as sums of the natural logarithms of the primes not exceeding x.
{\displaystyle \vartheta (x)=\sum _{p\leq x}\log p,}
{\displaystyle \psi (x)=\sum _{p^{k}\leq x}\log p.}
The second Chebyshev function ψ(x) is the summation function of the von Mangoldt function just below.
=== Λ(n) – von Mangoldt function ===
Λ(n), the von Mangoldt function, is 0 unless the argument n is a prime power pk, in which case it is the natural logarithm of the prime p:
{\displaystyle \Lambda (n)={\begin{cases}\log p&{\text{if }}n=2,3,4,5,7,8,9,11,13,16,\ldots =p^{k}{\text{ is a prime power}}\\0&{\text{if }}n=1,6,10,12,14,15,18,20,21,\dots \;\;\;\;{\text{ is not a prime power}}.\end{cases}}}
=== p(n) – partition function ===
p(n), the partition function, is the number of ways of representing n as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different:
{\displaystyle p(n)=\left|\left\{(a_{1},a_{2},\dots a_{k}):0<a_{1}\leq a_{2}\leq \cdots \leq a_{k}\;\land \;n=a_{1}+a_{2}+\cdots +a_{k}\right\}\right|.}
=== λ(n) – Carmichael function ===
λ(n), the Carmichael function, is the smallest positive number such that
a^{λ(n)} ≡ 1 (mod n)
for all a coprime to n. Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo n.
For powers of odd primes and for 2 and 4, λ(n) is equal to the Euler totient function of n; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of n:
{\displaystyle \lambda (n)={\begin{cases}\;\;\phi (n)&{\text{if }}n=2,3,4,5,7,9,11,13,17,19,23,25,27,\dots \\{\tfrac {1}{2}}\phi (n)&{\text{if }}n=8,16,32,64,\dots \end{cases}}}
and for general n it is the least common multiple of λ of each of the prime power factors of n:
{\displaystyle \lambda (p_{1}^{a_{1}}p_{2}^{a_{2}}\dots p_{\omega (n)}^{a_{\omega (n)}})=\operatorname {lcm} [\lambda (p_{1}^{a_{1}}),\;\lambda (p_{2}^{a_{2}}),\dots ,\lambda (p_{\omega (n)}^{a_{\omega (n)}})].}
=== h(n) – class number ===
h(n), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant n. The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples.
=== rk(n) – sum of k squares ===
rk(n) is the number of ways n can be represented as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different.
{\displaystyle r_{k}(n)=\left|\left\{(a_{1},a_{2},\dots ,a_{k}):n=a_{1}^{2}+a_{2}^{2}+\cdots +a_{k}^{2}\right\}\right|}
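Since order and signs both count, r_k(n) can be computed for tiny n and k by enumerating all integer k-tuples with coordinates bounded by √n; a brute-force sketch (illustrative names, impractical beyond small inputs):

```python
from itertools import product
from math import isqrt

def r_k(n, k):
    """Count ordered k-tuples of integers (order and signs distinguished)
    whose squares sum to n, by exhaustive search."""
    m = isqrt(n)
    return sum(1 for t in product(range(-m, m + 1), repeat=k)
               if sum(x * x for x in t) == n)
```

For instance r_2(5) = 8, coming from (±1, ±2) and (±2, ±1).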
=== D(n) – Arithmetic derivative ===
Using the Heaviside notation for the derivative, the arithmetic derivative D(n) is a function such that
{\displaystyle D(n)=1}
if n is prime, and
{\displaystyle D(mn)=mD(n)+D(m)n}
(the product rule)
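These two rules determine D on all positive integers, and unwinding them gives D(n) = n · Σ aᵖ/p over the prime powers p^aᵖ exactly dividing n. A minimal sketch (illustrative name):

```python
def arithmetic_derivative(n):
    """D(n) via D(p) = 1 for primes p and the Leibniz rule D(mn) = mD(n) + nD(m);
    each prime factor p of n (with multiplicity) contributes n / p."""
    result, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            result += n // p
            m //= p
        p += 1
    if m > 1:          # leftover prime factor
        result += n // m
    return result
```

For example D(6) = 2·D(3) + 3·D(2) = 2 + 3 = 5.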
== Summation functions ==
Given an arithmetic function a(n), its summation function A(x) is defined by
{\displaystyle A(x):=\sum _{n\leq x}a(n).}
A can be regarded as a function of a real variable. Given a positive integer m, A is constant along open intervals m < x < m + 1, and has a jump discontinuity at each integer for which a(m) ≠ 0.
Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right:
{\displaystyle A_{0}(m):={\frac {1}{2}}\left(\sum _{n<m}a(n)+\sum _{n\leq m}a(n)\right)=A(m)-{\frac {1}{2}}a(m).}
Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large x.
A classical example of this phenomenon is given by the divisor summatory function, the summation function of d(n), the number of divisors of n:
{\displaystyle \liminf _{n\to \infty }d(n)=2}
{\displaystyle \limsup _{n\to \infty }{\frac {\log d(n)\log \log n}{\log n}}=\log 2}
{\displaystyle \lim _{n\to \infty }{\frac {d(1)+d(2)+\cdots +d(n)}{\log(1)+\log(2)+\cdots +\log(n)}}=1.}
An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that g is an average order of f if
{\displaystyle \sum _{n\leq x}f(n)\sim \sum _{n\leq x}g(n)}
as x tends to infinity. The example above shows that d(n) has the average order log(n).
== Dirichlet convolution ==
Given an arithmetic function a(n), let Fa(s), for complex s, be the function defined by the corresponding Dirichlet series (where it converges):
{\displaystyle F_{a}(s):=\sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}.}
Fa(s) is called a generating function of a(n). The simplest such series, corresponding to the constant function a(n) = 1 for all n, is ζ(s) the Riemann zeta function.
The generating function of the Möbius function is the inverse of the zeta function:
{\displaystyle \zeta (s)\,\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}=1,\;\;\Re s>1.}
Consider two arithmetic functions a and b and their respective generating functions Fa(s) and Fb(s). The product Fa(s)Fb(s) can be computed as follows:
{\displaystyle F_{a}(s)F_{b}(s)=\left(\sum _{m=1}^{\infty }{\frac {a(m)}{m^{s}}}\right)\left(\sum _{n=1}^{\infty }{\frac {b(n)}{n^{s}}}\right).}
It is a straightforward exercise to show that if c(n) is defined by
{\displaystyle c(n):=\sum _{ij=n}a(i)b(j)=\sum _{i\mid n}a(i)b\left({\frac {n}{i}}\right),}
then
{\displaystyle F_{c}(s)=F_{a}(s)F_{b}(s).}
This function c is called the Dirichlet convolution of a and b, and is denoted by
{\displaystyle a*b}.
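The convolution sum defining c(n) is easy to compute directly for small n; the sketch below (illustrative names, trial-division Möbius) also lets one check numerically that μ ∗ 1 vanishes for all n > 1, matching the statement above that the Möbius generating function is the inverse of the zeta function:

```python
def divisors(n):
    """All positive divisors of n (naive scan)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet(a, b, n):
    """Dirichlet convolution (a*b)(n) = sum over d|n of a(d) * b(n/d)."""
    return sum(a(d) * b(n // d) for d in divisors(n))

def mobius(n):
    """Mobius mu(n) by trial-division factorization."""
    m, count, p = n, 0, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0      # squared prime factor
            count += 1
        p += 1
    if m > 1:
        count += 1
    return -1 if count % 2 else 1
```

Here `dirichlet(mobius, lambda n: 1, n)` equals 1 for n = 1 and 0 otherwise.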
A particularly important case is convolution with the constant function a(n) = 1 for all n, corresponding to multiplying the generating function by the zeta function:
{\displaystyle g(n)=\sum _{d\mid n}f(d).}
Multiplying by the inverse of the zeta function gives the Möbius inversion formula:
{\displaystyle f(n)=\sum _{d\mid n}\mu \left({\frac {n}{d}}\right)g(d).}
If f is multiplicative, then so is g. If f is completely multiplicative, then g is multiplicative, but may or may not be completely multiplicative.
== Relations among the functions ==
There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions.
Here are a few examples:
=== Dirichlet convolutions ===
{\displaystyle \sum _{\delta \mid n}\mu (\delta )=\sum _{\delta \mid n}\lambda \left({\frac {n}{\delta }}\right)|\mu (\delta )|={\begin{cases}1&{\text{if }}n=1\\0&{\text{if }}n\neq 1\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )=n.}
{\displaystyle \varphi (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta =n\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta }}.}
Möbius inversion
{\displaystyle \sum _{d\mid n}J_{k}(d)=n^{k}.}
{\displaystyle J_{k}(n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\delta ^{k}=n^{k}\sum _{\delta \mid n}{\frac {\mu (\delta )}{\delta ^{k}}}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}\delta ^{s}J_{r}(\delta )J_{s}\left({\frac {n}{\delta }}\right)=J_{r+s}(n)}
{\displaystyle \sum _{\delta \mid n}\varphi (\delta )d\left({\frac {n}{\delta }}\right)=\sigma (n).}
{\displaystyle \sum _{\delta \mid n}|\mu (\delta )|=2^{\omega (n)}.}
{\displaystyle |\mu (n)|=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)2^{\omega (\delta )}.}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}2^{\omega (\delta )}=d(n^{2}).}
{\displaystyle 2^{\omega (n)}=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d(\delta ^{2}).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d(\delta ^{2})=d^{2}(n).}
{\displaystyle d(n^{2})=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)d^{2}(\delta ).}
Möbius inversion
{\displaystyle \sum _{\delta \mid n}d\left({\frac {n}{\delta }}\right)2^{\omega (\delta )}=d^{2}(n).}
{\displaystyle \sum _{\delta \mid n}\lambda (\delta )={\begin{cases}&1{\text{ if }}n{\text{ is a square }}\\&0{\text{ if }}n{\text{ is not square.}}\end{cases}}}
where λ is the Liouville function.
{\displaystyle \sum _{\delta \mid n}\Lambda (\delta )=\log n.}
{\displaystyle \Lambda (n)=\sum _{\delta \mid n}\mu \left({\frac {n}{\delta }}\right)\log(\delta ).}
Möbius inversion
=== Sums of squares ===
For all
{\displaystyle k\geq 4,\;\;\;r_{k}(n)>0.}
(Lagrange's four-square theorem).
{\displaystyle r_{2}(n)=4\sum _{d\mid n}\left({\frac {-4}{d}}\right),}
where the Kronecker symbol has the values
{\displaystyle \left({\frac {-4}{n}}\right)={\begin{cases}+1&{\text{if }}n\equiv 1{\pmod {4}}\\-1&{\text{if }}n\equiv 3{\pmod {4}}\\\;\;\;0&{\text{if }}n{\text{ is even}}.\\\end{cases}}}
There is a formula for r3 in the section on class numbers below.
{\displaystyle r_{4}(n)=8\sum _{\stackrel {d\mid n}{4\,\nmid \,d}}d=8(2+(-1)^{n})\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d={\begin{cases}8\sigma (n)&{\text{if }}n{\text{ is odd }}\\24\sigma \left({\frac {n}{2^{\nu }}}\right)&{\text{if }}n{\text{ is even }}\end{cases}},}
where ν = ν2(n).
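The first sum in Jacobi's formula (divisors of n not divisible by 4) can be checked against a brute-force count of four-square representations for small n. A minimal sketch, with illustrative names and feasible only for tiny n:

```python
from itertools import product
from math import isqrt

def r4_brute(n):
    """Count ordered 4-tuples of integers whose squares sum to n."""
    m = isqrt(n)
    return sum(1 for t in product(range(-m, m + 1), repeat=4)
               if sum(x * x for x in t) == n)

def r4_jacobi(n):
    """Jacobi's formula: r4(n) = 8 * (sum of divisors d of n with 4 not dividing d)."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)
```

For example r4(2) = 24: the tuples (±1, ±1, 0, 0) in all positions and sign patterns.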
{\displaystyle r_{6}(n)=16\sum _{d\mid n}\chi \left({\frac {n}{d}}\right)d^{2}-4\sum _{d\mid n}\chi (d)d^{2},}
where
{\displaystyle \chi (n)=\left({\frac {-4}{n}}\right).}
Define the function σk*(n) as
{\displaystyle \sigma _{k}^{*}(n)=(-1)^{n}\sum _{d\mid n}(-1)^{d}d^{k}={\begin{cases}\sum _{d\mid n}d^{k}=\sigma _{k}(n)&{\text{if }}n{\text{ is odd }}\\\sum _{\stackrel {d\mid n}{2\,\mid \,d}}d^{k}-\sum _{\stackrel {d\mid n}{2\,\nmid \,d}}d^{k}&{\text{if }}n{\text{ is even}}.\end{cases}}}
That is, if n is odd, σk*(n) is the sum of the kth powers of the divisors of n, that is, σk(n), and if n is even it is the sum of the kth powers of the even divisors of n minus the sum of the kth powers of the odd divisors of n.
{\displaystyle r_{8}(n)=16\sigma _{3}^{*}(n).}
Adopt the convention that Ramanujan's τ(x) = 0 if x is not an integer.
{\displaystyle r_{24}(n)={\frac {16}{691}}\sigma _{11}^{*}(n)+{\frac {128}{691}}\left\{(-1)^{n-1}259\tau (n)-512\tau \left({\frac {n}{2}}\right)\right\}}
=== Divisor sum convolutions ===
Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series:
{\displaystyle \left(\sum _{n=0}^{\infty }a_{n}x^{n}\right)\left(\sum _{n=0}^{\infty }b_{n}x^{n}\right)=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }a_{i}b_{j}x^{i+j}=\sum _{n=0}^{\infty }\left(\sum _{i=0}^{n}a_{i}b_{n-i}\right)x^{n}=\sum _{n=0}^{\infty }c_{n}x^{n}.}
The sequence
{\displaystyle c_{n}=\sum _{i=0}^{n}a_{i}b_{n-i}}
is called the convolution or the Cauchy product of the sequences an and bn.
These formulas may be proved analytically (see Eisenstein series) or by elementary methods.
{\displaystyle \sigma _{3}(n)={\frac {1}{5}}\left\{6n\sigma _{1}(n)-\sigma _{1}(n)+12\sum _{0<k<n}\sigma _{1}(k)\sigma _{1}(n-k)\right\}.}
{\displaystyle \sigma _{5}(n)={\frac {1}{21}}\left\{10(3n-1)\sigma _{3}(n)+\sigma _{1}(n)+240\sum _{0<k<n}\sigma _{1}(k)\sigma _{3}(n-k)\right\}.}
{\displaystyle {\begin{aligned}\sigma _{7}(n)&={\frac {1}{20}}\left\{21(2n-1)\sigma _{5}(n)-\sigma _{1}(n)+504\sum _{0<k<n}\sigma _{1}(k)\sigma _{5}(n-k)\right\}\\&=\sigma _{3}(n)+120\sum _{0<k<n}\sigma _{3}(k)\sigma _{3}(n-k).\end{aligned}}}
{\displaystyle {\begin{aligned}\sigma _{9}(n)&={\frac {1}{11}}\left\{10(3n-2)\sigma _{7}(n)+\sigma _{1}(n)+480\sum _{0<k<n}\sigma _{1}(k)\sigma _{7}(n-k)\right\}\\&={\frac {1}{11}}\left\{21\sigma _{5}(n)-10\sigma _{3}(n)+5040\sum _{0<k<n}\sigma _{3}(k)\sigma _{5}(n-k)\right\}.\end{aligned}}}
{\displaystyle \tau (n)={\frac {65}{756}}\sigma _{11}(n)+{\frac {691}{756}}\sigma _{5}(n)-{\frac {691}{3}}\sum _{0<k<n}\sigma _{5}(k)\sigma _{5}(n-k),}
where τ(n) is Ramanujan's function.
Since σk(n) (for natural number k) and τ(n) are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples.
Extend the domain of the partition function by setting p(0) = 1.
{\displaystyle p(n)={\frac {1}{n}}\sum _{1\leq k\leq n}\sigma (k)p(n-k).}
This recurrence can be used to compute p(n).
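A direct transcription of the recurrence (with a naive divisor-sum σ, names illustrative) reproduces the familiar values p(4) = 5 and p(5) = 7:

```python
def sigma(n):
    """sigma(n) = sum of the divisors of n (naive scan)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def partitions(n):
    """p(n) via n * p(n) = sum_{k=1..n} sigma(k) * p(n-k), with p(0) = 1.
    The division is exact, so integer floor division is safe here."""
    p = [1]
    for m in range(1, n + 1):
        p.append(sum(sigma(k) * p[m - k] for k in range(1, m + 1)) // m)
    return p[n]
```

The quadratic cost of the inner sum makes this practical up to a few thousand terms.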
=== Class number related ===
Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number h of quadratic number fields to the Jacobi symbol.
An integer D is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to D ≠ 1 and either a) D is squarefree and D ≡ 1 (mod 4) or b) D ≡ 0 (mod 4), D/4 is squarefree, and D/4 ≡ 2 or 3 (mod 4).
Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol:
{\displaystyle \left({\frac {a}{2}}\right)={\begin{cases}\;\;\,0&{\text{ if }}a{\text{ is even}}\\(-1)^{\frac {a^{2}-1}{8}}&{\text{ if }}a{\text{ is odd. }}\end{cases}}}
Then if D < −4 is a fundamental discriminant
{\displaystyle {\begin{aligned}h(D)&={\frac {1}{D}}\sum _{r=1}^{|D|}r\left({\frac {D}{r}}\right)\\&={\frac {1}{2-\left({\tfrac {D}{2}}\right)}}\sum _{r=1}^{|D|/2}\left({\frac {D}{r}}\right).\end{aligned}}}
There is also a formula relating r3 and h. Again, let D be a fundamental discriminant, D < −4. Then
{\displaystyle r_{3}(|D|)=12\left(1-\left({\frac {D}{2}}\right)\right)h(D).}
=== Prime-count related ===
Let
{\displaystyle H_{n}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{n}}}
be the nth harmonic number. Then
{\displaystyle \sigma (n)\leq H_{n}+e^{H_{n}}\log H_{n}}
is true for every natural number n if and only if the Riemann hypothesis is true.
The Riemann hypothesis is also equivalent to the statement that, for all n > 5040,
{\displaystyle \sigma (n)<e^{\gamma }n\log \log n}
(where γ is the Euler–Mascheroni constant). This is Robin's theorem.
{\displaystyle \sum _{p}\nu _{p}(n)=\Omega (n).}
{\displaystyle \psi (x)=\sum _{n\leq x}\Lambda (n).}
{\displaystyle \Pi (x)=\sum _{n\leq x}{\frac {\Lambda (n)}{\log n}}.}
{\displaystyle e^{\theta (x)}=\prod _{p\leq x}p.}
{\displaystyle e^{\psi (x)}=\operatorname {lcm} [1,2,\dots ,\lfloor x\rfloor ].}
=== Menon's identity ===
In 1965 P. Kesava Menon proved
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\gcd(k-1,n)=\varphi (n)d(n).}
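Menon's identity is easy to verify numerically for small n; a brute-force sketch with illustrative names:

```python
from math import gcd

def menon_lhs(n):
    """Left side: sum of gcd(k - 1, n) over 1 <= k <= n with gcd(k, n) = 1."""
    return sum(gcd(k - 1, n) for k in range(1, n + 1) if gcd(k, n) == 1)

def phi(n):
    """Euler's totient, by counting."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def d(n):
    """Number of divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)
```

For every n, `menon_lhs(n)` equals `phi(n) * d(n)`.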
This has been generalized by a number of mathematicians. For example,
B. Sury
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},n)=1}}\gcd(k_{1}-1,k_{2},\dots ,k_{s},n)=\varphi (n)\sigma _{s-1}(n).}
N. Rao
{\displaystyle \sum _{\stackrel {1\leq k_{1},k_{2},\dots ,k_{s}\leq n}{\gcd(k_{1},k_{2},\dots ,k_{s},n)=1}}\gcd(k_{1}-a_{1},k_{2}-a_{2},\dots ,k_{s}-a_{s},n)^{s}=J_{s}(n)d(n),}
where a1, a2, ..., as are integers, gcd(a1, a2, ..., as, n) = 1.
László Fejes Tóth
{\displaystyle \sum _{\stackrel {1\leq k\leq m}{\gcd(k,m)=1}}\gcd(k^{2}-1,m_{1})\gcd(k^{2}-1,m_{2})=\varphi (n)\sum _{\stackrel {d_{1}\mid m_{1}}{d_{2}\mid m_{2}}}\varphi (\gcd(d_{1},d_{2}))2^{\omega (\operatorname {lcm} (d_{1},d_{2}))},}
where m1 and m2 are odd, m = lcm(m1, m2).
In fact, if f is any arithmetical function
{\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}f(\gcd(k-1,n))=\varphi (n)\sum _{d\mid n}{\frac {(\mu *f)(d)}{\varphi (d)}},}
where {\displaystyle *} stands for Dirichlet convolution.
=== Miscellaneous ===
Let m and n be distinct, odd, and positive. Then the Jacobi symbol satisfies the law of quadratic reciprocity:
{\displaystyle \left({\frac {m}{n}}\right)\left({\frac {n}{m}}\right)=(-1)^{(m-1)(n-1)/4}.}
Let D(n) be the arithmetic derivative. Then the logarithmic derivative
{\displaystyle {\frac {D(n)}{n}}=\sum _{\stackrel {p\mid n}{p{\text{ prime}}}}{\frac {v_{p}(n)}{p}}.}
See Arithmetic derivative for details.
Let λ(n) be Liouville's function. Then
{\displaystyle |\lambda (n)|\mu (n)=\lambda (n)|\mu (n)|=\mu (n),}
and
{\displaystyle \lambda (n)\mu (n)=|\mu (n)|=\mu ^{2}(n).}
Let λ(n) be Carmichael's function. Then
{\displaystyle \lambda (n)\mid \phi (n).}
Further,
{\displaystyle \lambda (n)=\phi (n){\text{ if and only if }}n={\begin{cases}1,2,4;\\3,5,7,9,11,\ldots {\text{ (that is, }}p^{k}{\text{, where }}p{\text{ is an odd prime)}};\\6,10,14,18,\ldots {\text{ (that is, }}2p^{k}{\text{, where }}p{\text{ is an odd prime)}}.\end{cases}}}
See Multiplicative group of integers modulo n and Primitive root modulo n.
{\displaystyle 2^{\omega (n)}\leq d(n)\leq 2^{\Omega (n)}.}
{\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\phi (n)\sigma (n)}{n^{2}}}<1.}
{\displaystyle {\begin{aligned}c_{q}(n)&={\frac {\mu \left({\frac {q}{\gcd(q,n)}}\right)}{\phi \left({\frac {q}{\gcd(q,n)}}\right)}}\phi (q)\\&=\sum _{\delta \mid \gcd(q,n)}\mu \left({\frac {q}{\delta }}\right)\delta .\end{aligned}}}
Note that
{\displaystyle \phi (q)=\sum _{\delta \mid q}\mu \left({\frac {q}{\delta }}\right)\delta .}
{\displaystyle c_{q}(1)=\mu (q).}
{\displaystyle c_{q}(q)=\phi (q).}
{\displaystyle \sum _{\delta \mid n}d^{3}(\delta )=\left(\sum _{\delta \mid n}d(\delta )\right)^{2}.}
Compare this with 1³ + 2³ + 3³ + ⋯ + n³ = (1 + 2 + 3 + ⋯ + n)².
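Like its integer analogue, this divisor-function identity is easy to confirm by brute force for small n (illustrative names):

```python
def divisors(n):
    """All positive divisors of n (naive scan)."""
    return [k for k in range(1, n + 1) if n % k == 0]

def num_divisors(n):
    """d(n) = number of divisors of n."""
    return len(divisors(n))

def cube_identity_holds(n):
    """Check sum_{delta|n} d(delta)^3 == (sum_{delta|n} d(delta))^2."""
    vals = [num_divisors(delta) for delta in divisors(n)]
    return sum(v ** 3 for v in vals) == sum(vals) ** 2
```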
{\displaystyle d(uv)=\sum _{\delta \mid \gcd(u,v)}\mu (\delta )d\left({\frac {u}{\delta }}\right)d\left({\frac {v}{\delta }}\right).}
{\displaystyle \sigma _{k}(u)\sigma _{k}(v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{k}\sigma _{k}\left({\frac {uv}{\delta ^{2}}}\right).}
{\displaystyle \tau (u)\tau (v)=\sum _{\delta \mid \gcd(u,v)}\delta ^{11}\tau \left({\frac {uv}{\delta ^{2}}}\right),}
where τ(n) is Ramanujan's function.
== First 100 values of some arithmetic functions ==
== Notes ==
== References ==
Tom M. Apostol (1976), Introduction to Analytic Number Theory, Springer Undergraduate Texts in Mathematics, ISBN 0-387-90163-9
Apostol, Tom M. (1989), Modular Functions and Dirichlet Series in Number Theory (2nd Edition), New York: Springer, ISBN 0-387-97127-0
Bateman, Paul T.; Diamond, Harold G. (2004), Analytic number theory, an introduction, World Scientific, ISBN 978-981-238-938-1
Cohen, Henri (1993), A Course in Computational Algebraic Number Theory, Berlin: Springer, ISBN 3-540-55640-0
Edwards, Harold (1977). Fermat's Last Theorem. New York: Springer. ISBN 0-387-90230-9.
Hardy, G. H. (1999), Ramanujan: Twelve Lectures on Subjects Suggested by his Life and work, Providence RI: AMS / Chelsea, hdl:10115/1436, ISBN 978-0-8218-2023-0
Hardy, G. H.; Wright, E. M. (1979) [1938]. An Introduction to the Theory of Numbers (5th ed.). Oxford: Clarendon Press. ISBN 0-19-853171-0. MR 0568909. Zbl 0423.10001.
Jameson, G. J. O. (2003), The Prime Number Theorem, Cambridge University Press, ISBN 0-521-89110-8
Koblitz, Neal (1984), Introduction to Elliptic Curves and Modular Forms, New York: Springer, ISBN 0-387-97966-2
Landau, Edmund (1966), Elementary Number Theory, New York: Chelsea
William J. LeVeque (1996), Fundamentals of Number Theory, Courier Dover Publications, ISBN 0-486-68906-9
Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77-171950
Elliott Mendelson (1987), Introduction to Mathematical Logic, CRC Press, ISBN 0-412-80830-7
Nagell, Trygve (1964), Introduction to number theory (2nd Edition), Chelsea, ISBN 978-0-8218-2833-5 {{citation}}: ISBN / Date incompatibility (help)
Niven, Ivan M.; Zuckerman, Herbert S. (1972), An introduction to the theory of numbers (3rd Edition), John Wiley & Sons, ISBN 0-471-64154-5
Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 77-81766
Ramanujan, Srinivasa (2000), Collected Papers, Providence RI: AMS / Chelsea, ISBN 978-0-8218-2076-6
Williams, Kenneth S. (2011), Number theory in the spirit of Liouville, London Mathematical Society Student Texts, vol. 76, Cambridge: Cambridge University Press, ISBN 978-0-521-17562-3, Zbl 1227.11002
== Further reading ==
Schwarz, Wolfgang; Spilker, Jürgen (1994), Arithmetical Functions. An introduction to elementary and analytic properties of arithmetic functions and to some of their almost-periodic properties, London Mathematical Society Lecture Note Series, vol. 184, Cambridge University Press, ISBN 0-521-42725-8, Zbl 0807.11001
== External links ==
"Arithmetic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Matthew Holden, Michael Orrison, Michael Varble Yet another Generalization of Euler's Totient Function
Huard, Ou, Spearman, and Williams. Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine
László Tóth, Menon's Identity and arithmetical sums representing functions of several variables
In number theory, the Dedekind psi function is the multiplicative function on the positive integers defined by
{\displaystyle \psi (n)=n\prod _{p|n}\left(1+{\frac {1}{p}}\right),}
where the product is taken over all primes p dividing n.
(By convention, ψ(1), which is the empty product, has value 1.) The function was introduced by Richard Dedekind in connection with modular functions.
The value of ψ(n) for the first few integers n is:
1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, ... (sequence A001615 in the OEIS).
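The product formula can be evaluated with integer arithmetic only, by trial division over the distinct prime factors; the sketch below (illustrative name) reproduces the values listed above:

```python
def dedekind_psi(n):
    """psi(n) = n * product over distinct primes p | n of (1 + 1/p),
    computed as exact integer arithmetic: divide by p, multiply by p + 1."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p + 1)   # exact: p divides result here
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                                 # leftover prime factor
        result = result // m * (m + 1)
    return result
```

For instance ψ(12) = 12 · (3/2) · (4/3) = 24.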
The function ψ(n) is greater than n for all n greater than 1, and is even for all n greater than 2. If n is a square-free number then ψ(n) = σ(n), where σ(n) is the sum-of-divisors function.
The ψ function can also be defined by setting {\displaystyle \psi (p^{n})=(p+1)p^{n-1}} for powers of any prime p, and then extending the definition to all integers by multiplicativity. This also leads to a proof of the generating function in terms of the Riemann zeta function, which is
{\displaystyle \sum {\frac {\psi (n)}{n^{s}}}={\frac {\zeta (s)\zeta (s-1)}{\zeta (2s)}}.}
This is also a consequence of the fact that ψ can be written as the Dirichlet convolution {\displaystyle \psi =\mathrm {Id} *|\mu |}.
There is an additive definition of the psi function as well. Quoting from Dickson,
R. Dedekind proved that, if n is decomposed in every way into a product ab and if e is the g.c.d. of a, b, then
{\displaystyle \sum _{a}(a/e)\varphi (e)=n\prod _{p|n}\left(1+{\frac {1}{p}}\right)}
where a ranges over all divisors of n, p over the prime divisors of n, and φ is the totient function.
== Higher orders ==
The generalization to higher orders via ratios of Jordan's totient is
{\displaystyle \psi _{k}(n)={\frac {J_{2k}(n)}{J_{k}(n)}}}
with Dirichlet series
{\displaystyle \sum _{n\geq 1}{\frac {\psi _{k}(n)}{n^{s}}}={\frac {\zeta (s)\zeta (s-k)}{\zeta (2s)}}.}
It is also the Dirichlet convolution of a power and the square of the Möbius function, {\displaystyle \psi _{k}(n)=n^{k}*\mu ^{2}(n)}.
If {\displaystyle \epsilon _{2}=1,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0\ldots } is the characteristic function of the squares, another Dirichlet convolution leads to the generalized σ-function, {\displaystyle \epsilon _{2}(n)*\psi _{k}(n)=\sigma _{k}(n)}.
== References ==
== External links ==
Weisstein, Eric W. "Dedekind Function". MathWorld.
== See also ==
Goro Shimura (1971). Introduction to the Arithmetic Theory of Automorphic Functions. Princeton. (page 25, equation (1))
Mathar, Richard J. (2011). "Survey of Dirichlet series of multiplicative arithmetic functions". arXiv:1106.4038 [math.NT]. Section 3.13.2
OEIS: A065958 is ψ2, OEIS: A065959 is ψ3, and OEIS: A065960 is ψ4
In abstract algebra, a subset S of a field L is algebraically independent over a subfield K if the elements of S do not satisfy any non-trivial polynomial equation with coefficients in K.
In particular, a one-element set {α} is algebraically independent over K if and only if α is transcendental over K. In general, all the elements of an algebraically independent set S over K are by necessity transcendental over K, and over all of the field extensions over K generated by the remaining elements of S.
== Example ==
The real numbers √π and 2π + 1 are transcendental numbers: they are not the roots of any nontrivial polynomial whose coefficients are rational numbers. Thus, the sets {√π} and {2π + 1} are both algebraically independent over the rational numbers.
However, the set {√π, 2π + 1} is not algebraically independent over the rational numbers Q, because the nontrivial polynomial {\displaystyle P(x,y)=2x^{2}-y+1} is zero when x = √π and y = 2π + 1.
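The vanishing of the witnessing polynomial can be checked numerically in floating point (up to rounding error); P here is the polynomial given above:

```python
import math

def P(x, y):
    """The witnessing polynomial P(x, y) = 2*x**2 - y + 1."""
    return 2 * x * x - y + 1

x = math.sqrt(math.pi)   # sqrt(pi)
y = 2 * math.pi + 1      # 2*pi + 1

# P vanishes at (sqrt(pi), 2*pi + 1), exhibiting the algebraic dependence
assert abs(P(x, y)) < 1e-12
```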
== Algebraic independence of known constants ==
Although π and e are transcendental, it is not known whether {π, e} is algebraically independent over Q. In fact, it is not even known whether π + e is irrational. Nesterenko proved in 1996 that:
the numbers π, e^π, and Γ(1/4), where Γ is the gamma function, are algebraically independent over Q;
the numbers e^(π√3) and Γ(1/3) are algebraically independent over Q;
for all positive integers n, the number e^(π√n) is algebraically independent over Q.
== Results and open problems ==
The Lindemann–Weierstrass theorem can often be used to prove that some sets are algebraically independent over Q. It states that whenever α₁, …, αₙ are algebraic numbers that are linearly independent over Q, then e^α₁, …, e^αₙ are also algebraically independent over Q.
The Schanuel conjecture would establish the algebraic independence of many numbers, including π and e, but remains unproven:
Let {z₁, …, zₙ} be any set of n complex numbers that are linearly independent over Q. The field extension Q(z₁, …, zₙ, e^z₁, …, e^zₙ) has transcendence degree at least n over Q.
== Algebraic matroids ==
Given a field extension L/K that is not algebraic, Zorn's lemma can be used to show that there always exists a maximal algebraically independent subset of L over K. Further, all the maximal algebraically independent subsets have the same cardinality, known as the transcendence degree of the extension.
For every finite set S of elements of L, the algebraically independent subsets of S satisfy the axioms that define the independent sets of a matroid. In this matroid, the rank of a set of elements is its transcendence degree, and the flat generated by a set T of elements is the intersection of L with the field K[T]. A matroid that can be generated in this way is called an algebraic matroid. No good characterization of algebraic matroids is known, but certain matroids are known to be non-algebraic; the smallest is the Vámos matroid.
Many finite matroids may be represented by a matrix over a field
K
{\displaystyle K}
, in which the matroid elements correspond to matrix columns, and a set of elements is independent if the corresponding set of columns is linearly independent. Every matroid with a linear representation of this type may also be represented as an algebraic matroid, by choosing an indeterminate for each row of the matrix, and by using the matrix coefficients within each column to assign each matroid element a linear combination of these transcendentals. The converse is false: not every algebraic matroid has a linear representation.
== See also ==
Linear independence
Transcendental number
Lindemann-Weierstrass theorem
Schanuel's conjecture
== References ==
== External links ==
Chen, Johnny. "Algebraically Independent". MathWorld. | Wikipedia/Algebraic_independence |
In mathematics, specifically in transcendental number theory and Diophantine approximation, Siegel's lemma refers to bounds on the solutions of linear equations obtained by the construction of auxiliary functions. The existence of such auxiliary polynomials was proven by Axel Thue; Thue's proof used what would be translated from German as Dirichlet's drawers principle, which is widely known as the pigeonhole principle. Carl Ludwig Siegel published his lemma in 1929. It is a pure existence theorem for a system of linear equations.
Siegel's lemma has been refined in recent years to produce sharper bounds on the estimates given by the lemma.
== Statement ==
Suppose we are given a system of M linear equations in N unknowns such that N > M, say
{\displaystyle a_{11}X_{1}+\cdots +a_{1N}X_{N}=0}
{\displaystyle \cdots }
{\displaystyle a_{M1}X_{1}+\cdots +a_{MN}X_{N}=0}
where the coefficients are integers, not all 0, and bounded by B. The system then has a solution {\displaystyle (X_{1},X_{2},\dots ,X_{N})} with the X's all integers, not all 0, and bounded by {\displaystyle (NB)^{M/(N-M)}.}
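The bound can be checked directly on a small system by brute force. The sketch below is only an illustration of the statement (it searches exhaustively rather than reproducing Siegel's pigeonhole argument), and the function names and the sample equation are invented for the example.

```python
from itertools import product
from math import floor

def siegel_bound(N, M, B):
    """Siegel's bound (NB)^(M/(N-M)) on a smallest nonzero solution."""
    return (N * B) ** (M / (N - M))

def find_small_solution(rows, B):
    """Brute-force a nonzero integer solution of rows . X = 0 with every
    |X_i| within Siegel's bound (feasible only for tiny systems)."""
    M, N = len(rows), len(rows[0])
    limit = floor(siegel_bound(N, M, B))
    for X in product(range(-limit, limit + 1), repeat=N):
        if any(X) and all(sum(a * x for a, x in zip(row, X)) == 0
                          for row in rows):
            return X
    return None

# one equation (M = 1) in three unknowns (N = 3), coefficients bounded by B = 7;
# the bound is sqrt(21), about 4.58, so a solution with entries at most 4 must exist
sol = find_small_solution([(3, 5, -7)], B=7)
```

Here a solution such as (1, −2, −1) satisfies 3 − 10 + 7 = 0 well inside the bound.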
Bombieri & Vaaler (1983) gave the following sharper bound for the X's:
{\displaystyle \max |X_{j}|\,\leq \left(D^{-1}{\sqrt {\det(AA^{T})}}\right)^{\!1/(N-M)}}
where D is the greatest common divisor of the M × M minors of the matrix A, and AT is its transpose. Their proof involved replacing the pigeonhole principle by techniques from the geometry of numbers.
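For tiny systems the Bombieri–Vaaler quantity can be evaluated straight from its definition. This is a sketch under stated assumptions: the cofactor-expansion determinant and the function names are illustrative, and the code assumes A has full rank M (so that D is nonzero).

```python
from itertools import combinations
from math import gcd

def det(m):
    """Determinant by cofactor expansion (adequate for tiny integer matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def bombieri_vaaler_bound(A):
    """(D^-1 * sqrt(det(A A^T)))^(1/(N-M)), D = gcd of the M x M minors of A."""
    M, N = len(A), len(A[0])
    gram = [[sum(a * b for a, b in zip(r1, r2)) for r2 in A] for r1 in A]  # A A^T
    D = 0
    for cols in combinations(range(N), M):
        D = gcd(D, abs(det([[row[j] for j in cols] for row in A])))
    return (det(gram) ** 0.5 / D) ** (1 / (N - M))

# For A = [[3, 5, -7]]: det(A A^T) = 83 and D = 1, so the bound is 83**0.25,
# about 3.02 -- sharper than Siegel's (3*7)**0.5, about 4.58.
```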
== See also ==
Diophantine approximation
== References ==
Bombieri, E.; Vaaler, J. (1983). "On Siegel's lemma". Inventiones Mathematicae. 73 (1): 11–32. Bibcode:1983InMat..73...11B. doi:10.1007/BF01393823. S2CID 121274024.
Hindry, Marc; Silverman, Joseph H. (2000). Diophantine geometry. Graduate Texts in Mathematics. Vol. 201. Berlin, New York: Springer-Verlag. ISBN 978-0-387-98981-5. MR 1745599.
Schmidt, Wolfgang M. (1980). Diophantine Approximation. Lecture Notes in Mathematics. Vol. 785. Springer. [1996 edition with minor corrections] (pages 125–128 and 283–285).
Schmidt, Wolfgang M. (2000). "Chapter I: Siegel's Lemma and Heights" (pages 1–33). Diophantine Approximations and Diophantine Equations. Lecture Notes in Mathematics. Springer Verlag.
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive. Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted — he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
== The article ==
Cantor's article is short, less than four and a half pages. It begins with a discussion of the real algebraic numbers and a statement of his first theorem: The set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers. Cantor restates this theorem in terms more familiar to mathematicians of his time: "The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once."
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.
Cantor then remarks that his second theorem is:
the reason why collections of real numbers forming a so-called continuum (such as, all real numbers which are ≥ 0 and ≤ 1) cannot correspond one-to-one with the collection (ν) [the collection of all positive integers]; thus I have found the clear difference between a so-called continuum and a collection like the totality of real algebraic numbers.
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.
Cantor only states his uncountability theorem. He does not use it in any proofs.
== The proofs ==
=== First theorem ===
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a0| + |a1| + ... + |an|, where a0, a1, ..., an are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.
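The key point of the argument is that each height class is finite, which is easy to verify by brute force. The sketch below (with invented function names) enumerates the integer coefficient tuples of a given Cantor height rather than the roots themselves; extracting and ordering the real roots would be a further step.

```python
from itertools import product

def polynomials_of_height(h):
    """All integer polynomials (a_0, ..., a_n) with a_n != 0 whose Cantor
    height  n - 1 + |a_0| + ... + |a_n|  equals h.  Finitely many for each h."""
    polys = []
    for n in range(1, h + 2):            # degree n contributes n - 1 to the height
        budget = h - (n - 1)             # remaining allowance for sum of |a_i|
        if budget < 1:                   # the leading coefficient alone needs >= 1
            break
        for coeffs in product(range(-budget, budget + 1), repeat=n + 1):
            if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) == budget:
                polys.append(coeffs)
    return polys

# height 1 admits only x and -x (root 0); height 2 adds, e.g., x - 1, x + 1, x^2
```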
=== Second theorem ===
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a1 and the larger by b1. Similarly, find the first two numbers of the given sequence that are in (a1, b1). Denote the smaller by a2 and the larger by b2. Continuing this procedure generates a sequence of intervals (a1, b1), (a2, b2), (a3, b3), ... such that each interval in the sequence contains all succeeding intervals — that is, it generates a sequence of nested intervals. This implies that the sequence a1, a2, a3, ... is increasing and the sequence b1, b2, b3, ... is decreasing.
The number of intervals generated is either finite or infinite. If finite, let (aL, bL) be the last interval. If infinite, take the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn. Since an < bn for all n, either a∞ = b∞ or a∞ < b∞. Thus, there are three cases to consider:
Case 1: There is a last interval (aL, bL). Since at most one xn can be in this interval, every y in this interval except xn (if it exists) is not in the given sequence.
Case 2: a∞ = b∞. Then a∞ is not in the sequence since for all n : a∞ is in the interval (an, bn) but xn does not belong to (an, bn). In symbols: a∞ ∈ (an, bn) but xn ∉ (an, bn).
Case 3: a∞ < b∞. Then every y in [a∞, b∞] is not contained in the given sequence since for all n : y belongs to (an, bn) but xn does not.
The proof is complete since, in all cases, at least one real number in [a, b] has been found that is not contained in the given sequence.
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.
=== Example of Cantor's construction ===
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (an, bn). The second column lists the terms visited during the search for the first two terms in (an, bn). These two terms are in red.
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.
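This example is short enough to run directly. The sketch below assumes only what the text states: the sequence lists the reduced fractions in (0, 1) by increasing denominator, and each step takes the first two sequence terms inside the current open interval. The helper names are invented for the illustration.

```python
from fractions import Fraction
from math import gcd, sqrt

# the sequence 1/2, 1/3, 2/3, 1/4, 3/4, ... of reduced fractions in (0, 1)
seq = [Fraction(p, q) for q in range(2, 200) for p in range(1, q) if gcd(p, q) == 1]

def cantor_step(seq, a, b):
    """First two sequence terms strictly inside (a, b), returned as (min, max)."""
    hits = []
    for x in seq:
        if a < x < b:
            hits.append(x)
            if len(hits) == 2:
                return min(hits), max(hits)
    raise ValueError("sequence not dense enough in (a, b)")

a, b = Fraction(0), Fraction(1)
for _ in range(5):                      # five nested intervals
    a, b = cantor_step(seq, a, b)
# (a, b) now tightly brackets sqrt(2) - 1, approximately 0.41421
```

The first step yields (1/3, 1/2), the second (2/5, 3/7), and the intervals shrink rapidly around √2 − 1.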
== Cantor's 1879 uncountability proof ==
=== Everywhere dense ===
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":
If P lies partially or completely in the interval [α, β], then the remarkable case can happen that every interval [γ, δ] contained in [α, β], no matter how small, contains points of P. In such a case, we will say that P is everywhere dense in the interval [α, β].
In this discussion of Cantor's proof: a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.
=== Cantor's 1879 proof ===
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval, then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P : x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P. This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P. Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
== The development of Cantor's ideas ==
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form (a_{n1, n2, ..., nν}) where n1, n2, ..., nν, and ν are positive integers.
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest". Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in
[0, 1] can be written as a sequence. Then, he applies a construction to this sequence to produce a number in [0, 1]
that is not in the sequence, thus contradicting his assumption. Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers. Also, the proof in Cantor's December 7 letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article. The letter containing Cantor's December 7 proof was not published until 1937.
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
I show directly that if I start with a sequence
(1) ω1, ω2, ... , ωn, ...
I can determine, in every given interval [α, β], a number η that is not included in (1).
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a, b] to produce a transcendental number in this interval.
The non-constructive proof uses two proofs by contradiction:
The proof by contradiction used to prove the uncountability theorem (see Proof of Cantor's uncountability theorem).
The proof by contradiction used to prove the existence of transcendental numbers from the countability of the real algebraic numbers and the uncountability of real numbers. Cantor's December 2nd letter mentions this existence proof but does not contain it. Here is a proof: Assume that there are no transcendental numbers in [a, b]. Then all the numbers in [a, b] are algebraic. This implies that they form a subsequence of the sequence of all real algebraic numbers, which contradicts Cantor's uncountability theorem. Thus, the assumption that there are no transcendental numbers in [a, b] is false. Therefore, there is a transcendental number in [a, b].
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a, b]. This eliminates the subsequence step and all occurrences of [a, b] in the second proof by contradiction.
== A misconception about Cantor's work ==
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable. The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers. In that year, Oskar Perron gave the reverse-order proof and then stated: "... Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."
As early as 1930, mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "... a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential." In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers ... and then apply the diagonal procedure ..., we get a perfectly definite transcendental number (it could be computed to any number of decimal places)." Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.
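The diagonal step itself is a one-liner. The sketch below is a minimal illustration, not Cantor's own notation: given the n-th decimal digit of the n-th listed number, it produces a digit sequence differing from every number on the list, and the particular replacement digits 4 and 5 are a common convention for avoiding the 0.0999... = 0.1000... ambiguity.

```python
def diagonal(digit_rows):
    """Given digit_rows[n] = decimal digits of the n-th listed number, return
    digits of a number whose n-th digit differs from digit_rows[n][n]."""
    return [5 if row[i] != 5 else 4 for i, row in enumerate(digit_rows)]
```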
The presentation of the non-constructive proof without mentioning Cantor's constructive proof appears in some books that were quite successful as measured by the length of time new editions or reprints appeared—for example: Oskar Perron's Irrationalzahlen (1921; 1960, 4th edition), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 2008 6th edition), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 1997 5th edition), and Michael Spivak's Calculus (1967; 2008 4th edition). Since 2014, at least two books have appeared stating that Cantor's proof is constructive, and at least four have appeared stating that his proof does not construct any (or a single) transcendental.
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number." The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it. In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted—namely, it determines a number to any required degree of accuracy.
== The influence of Weierstrass and Kronecker on Cantor's article ==
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
Cantor's uncountability theorem was left out of the article he submitted. He added it during proofreading.
The article's title refers to the set of real algebraic numbers. The main topic in Cantor's correspondence was the set of real numbers.
The proof of Cantor's second theorem came from Dedekind. However, it omits Dedekind's explanation of why the limits a∞ and b∞ exist.
Cantor restricted his first theorem to the set of real algebraic numbers. The proof he was using demonstrates the countability of the set of all algebraic numbers.
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873. Weierstrass was first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful. Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.
From his correspondence, it appears that Cantor only discussed his article with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances ..." Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did. It appears in a remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not. Weierstrass changed his opinion later. Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof except that he left out why the limits a∞ = limn → ∞ an and b∞ = limn → ∞ bn exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers. Cantor did this for expository reasons and because of "local circumstances". This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers". This procedure would be acceptable to Weierstrass.
== Dedekind's contributions to Cantor's article ==
Since 1856, Dedekind had developed theories involving infinitely many infinite sets—for example: ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form (a_{n1, n2, ..., nν}) where n1, n2, ..., nν, and ν are positive integers. Cantor's second result uses an indexed family of numbers: a set of the form (a_{n1, n2, ..., nν}) is the range of a function from the ν indices to the set of real numbers. His second result implies his first: let ν = 2 and a_{n1, n2} = n1/n2. The function can be quite general—for example, a_{n1, n2, n3, n4, n5} = (n1/n2)^(1/n3) + tan(n4/n5).
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable. In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take n1^2 + n2^2 + ··· + nν^2 = {\displaystyle {\mathfrak {N}}} and order the elements accordingly." However, Cantor's ordering is weaker than Dedekind's and cannot be extended to n-tuples of integers that include zeros.
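Cantor's ordering by the sum of squares is easy to make concrete. In the sketch below, the lexicographic tie-breaking rule and the function name are assumptions made for the illustration (the letter does not specify how ties are broken).

```python
from itertools import product

def cantor_order(nu, bound):
    """Tuples (n_1, ..., n_nu) of positive integers up to `bound`, ordered by
    n_1^2 + ... + n_nu^2; ties broken lexicographically (illustrative choice)."""
    tuples = product(range(1, bound + 1), repeat=nu)
    return sorted(tuples, key=lambda t: (sum(x * x for x in t), t))

# with nu = 2 and a_{n1,n2} = n1/n2, this enumeration reaches every positive rational
first = cantor_order(2, 5)[:4]   # (1,1) comes first, then the two tuples of sum 5
```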
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences. So Cantor had a choice of proofs and chose to publish Dedekind's.
Cantor thanked Dedekind privately for his help: "... your comments (which I value highly) and your manner of putting some of the points were of great assistance to me." However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.
== The legacy of Cantor's article ==
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the n-dimensional spaces Rn (where R is the set of real numbers) and the set of irrational numbers have the same cardinality as R.
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals—for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities. His work on infinite sets together with Dedekind's set-theoretical work created set theory.
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets. In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions. Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model. In 1963, Paul Cohen used countable models to prove his independence theorems.
== See also ==
Cantor's theorem
== Notes ==
=== Note on Cantor's 1879 proof ===
== References ==
== Bibliography ==
Arkhangel'skii, A. V.; Fedorchuk, V. V. (1990), "The basic concepts and constructions of general topology", in Arkhangel'skii, A. V.; Pontryagin, L. S. (eds.), General Topology I, New York, Berlin: Springer-Verlag, pp. 1–90, ISBN 978-0-387-18178-3.
Audin, Michèle (2011), Remembering Sofya Kovalevskaya, London: Springer, ISBN 978-0-85729-928-4.
Bell, Eric Temple (1937), Men of Mathematics, New York: Simon & Schuster. Reprinted, 1984, ISBN 978-0-671-62818-5.
Birkhoff, Garrett; Mac Lane, Saunders (1941), A Survey of Modern Algebra, New York: Macmillan. Reprinted, Taylor & Francis, 1997, ISBN 978-1-56881-068-3.
Burton, David M. (1995), Burton's History of Mathematics (3rd ed.), Dubuque, Iowa: William C. Brown, ISBN 978-0-697-16089-8.
Cantor, Georg (1874), "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen", Journal für die Reine und Angewandte Mathematik (in German), 1874 (77): 258–262, doi:10.1515/crll.1874.77.258, S2CID 199545885.
Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik (in German), 1878 (84): 242–258, doi:10.1515/crll.1878.84.242.
Cantor, Georg (1879), "Ueber unendliche, lineare Punktmannichfaltigkeiten. 1.", Mathematische Annalen (in German), 15: 1–7, doi:10.1007/bf01444101, S2CID 179177510.
Chowdhary, K. R. (2015), Fundamentals of Discrete Mathematical Structures (3rd ed.), Delhi, India: PHI Learning, ISBN 978-81-203-5074-8.
Cohen, Paul J. (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, PMC 221287, PMID 16578557.
Dasgupta, Abhijit (2014), Set Theory: With an Introduction to Real Point Sets, New York: Springer, ISBN 978-1-4614-8853-8.
Dauben, Joseph (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Cambridge, Mass.: Harvard University Press, ISBN 978-0-674-34871-4.
Dauben, Joseph (1993), "Georg Cantor and the Battle for Transfinite Set Theory" (PDF), 9th ACMS Conference Proceedings.
Edwards, Harold M. (1989), "Kronecker's Views on the Foundations of Mathematics", in Rowe, David E.; McCleary, John (eds.), The History of Modern Mathematics, Volume 1, New York: Academic Press, pp. 67–77, ISBN 978-0-12-599662-4.
Ewald, William B., ed. (1996), From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, New York: Oxford University Press, ISBN 978-0-19-850536-5.
Ferreirós, José (1993), "On the relations between Georg Cantor and Richard Dedekind", Historia Mathematica, 20 (4): 343–363, doi:10.1006/hmat.1993.1030.
Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Basel: Birkhäuser, ISBN 978-3-7643-8349-7.
Fraenkel, Abraham (1930), "Georg Cantor", Jahresbericht der Deutschen Mathematiker-Vereinigung (in German), 39: 189–266.
Grattan-Guinness, Ivor (1971), "The Correspondence between Georg Cantor and Philip Jourdain", Jahresbericht der Deutschen Mathematiker-Vereinigung, 73: 111–130.
Gray, Robert (1994), "Georg Cantor and Transcendental Numbers" (PDF), American Mathematical Monthly, 101 (9): 819–832, doi:10.2307/2975129, JSTOR 2975129, MR 1300488, Zbl 0827.01004, archived from the original (PDF) on 2022-01-21, retrieved 2016-02-13.
Hardy, Godfrey; Wright, E. M. (1938), An Introduction to the Theory of Numbers, Oxford: Clarendon Press.
Havil, Julian (2012), The Irrationals, Princeton, Oxford: Princeton University Press, ISBN 978-0-691-16353-6.
Hawkins, Thomas (1970), Lebesgue's Theory of Integration, Madison, Wisconsin: University of Wisconsin Press, ISBN 978-0-299-05550-9.
Jarvis, Frazer (2014), Algebraic Number Theory, New York: Springer, ISBN 978-3-319-07544-0.
Kanamori, Akihiro (2012), "Set Theory from Cantor to Cohen" (PDF), in Gabbay, Dov M.; Kanamori, Akihiro; Woods, John H. (eds.), Sets and Extensions in the Twentieth Century, Amsterdam, Boston: Cambridge University Press, pp. 1–71, ISBN 978-0-444-51621-3.
Kaplansky, Irving (1972), Set Theory and Metric Spaces, Boston: Allyn and Bacon, ISBN 978-0-8284-0298-9.
Kelley, John L. (1991), General Topology, New York: Springer, ISBN 978-3-540-90125-9.
LeVeque, William J. (1956), Topics in Number Theory, vol. I, Reading, Massachusetts: Addison-Wesley. (Reprinted by Dover Publications, 2002, ISBN 978-0-486-42539-9.)
Noether, Emmy; Cavaillès, Jean, eds. (1937), Briefwechsel Cantor-Dedekind (in German), Paris: Hermann.
Perron, Oskar (1921), Irrationalzahlen (in German), Leipzig, Berlin: W. de Gruyter, OCLC 4636376.
Sheppard, Barnaby (2014), The Logic of Infinity, Cambridge: Cambridge University Press, ISBN 978-1-107-67866-8.
Spivak, Michael (1967), Calculus, London: W. A. Benjamin, ISBN 978-0914098911.
Stewart, Ian (2015), Galois Theory (4th ed.), Boca Raton, Florida: CRC Press, ISBN 978-1-4822-4582-0.
Stewart, Ian; Tall, David (2015), The Foundations of Mathematics (2nd ed.), New York: Oxford University Press, ISBN 978-0-19-870644-1.
Weisstein, Eric W., ed. (2003), "Continued Fraction", CRC Concise Encyclopedia of Mathematics, Boca Raton, Florida: Chapman & Hall/CRC, ISBN 978-1-58488-347-0.
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ(n) or ϕ(n), and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n.
For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1.
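The definition can be checked directly with a brute-force count. A minimal sketch (the function name `phi` is my choice for illustration):

```python
from math import gcd

def phi(n: int) -> int:
    """Count the totatives of n: integers 1 <= k <= n with gcd(n, k) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

print(phi(9))  # 6, matching the totatives 1, 2, 4, 5, 7, 8
print(phi(1))  # 1
```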
Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n).
This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring ℤ/nℤ). It is also used for defining the RSA encryption system.
== History, terminology, and notation ==
Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function.
In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's.
The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n.
== Computing Euler's totient function ==
There are several formulae for computing φ(n).
=== Euler's product formula ===
It states

\varphi(n) = n \prod_{p \mid n} \left(1 - \frac{1}{p}\right),

where the product is over the distinct prime numbers dividing n.
An equivalent formulation is

\varphi(n) = p_1^{k_1 - 1}(p_1 - 1)\, p_2^{k_2 - 1}(p_2 - 1) \cdots p_r^{k_r - 1}(p_r - 1),

where n = p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r} is the prime factorization of n (that is, p_1, p_2, \ldots, p_r are distinct prime numbers).
The proof of these formulae depends on two important facts.
==== Phi is a multiplicative function ====
This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem.
==== Value of phi for a prime power argument ====
If p is prime and k ≥ 1, then

\varphi(p^k) = p^k - p^{k-1} = p^{k-1}(p - 1) = p^k \left(1 - \tfrac{1}{p}\right).
Proof: Since p is a prime number, the only possible values of gcd(p^k, m) are 1, p, p^2, ..., p^k, and the only way to have gcd(p^k, m) > 1 is for m to be a multiple of p, that is, m ∈ {p, 2p, 3p, ..., p^{k−1}p = p^k}, and there are p^{k−1} such multiples not greater than p^k. Therefore, the other p^k − p^{k−1} numbers are all relatively prime to p^k.
==== Proof of Euler's product formula ====
The fundamental theorem of arithmetic states that if n > 1 there is a unique expression

n = p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r},
where p_1 < p_2 < ... < p_r are prime numbers and each k_i ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(p^k) gives
\begin{aligned}
\varphi(n) &= \varphi(p_1^{k_1})\,\varphi(p_2^{k_2}) \cdots \varphi(p_r^{k_r}) \\
&= p_1^{k_1}\left(1 - \frac{1}{p_1}\right) p_2^{k_2}\left(1 - \frac{1}{p_2}\right) \cdots p_r^{k_r}\left(1 - \frac{1}{p_r}\right) \\
&= p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r} \left(1 - \frac{1}{p_1}\right)\left(1 - \frac{1}{p_2}\right) \cdots \left(1 - \frac{1}{p_r}\right) \\
&= n \left(1 - \frac{1}{p_1}\right)\left(1 - \frac{1}{p_2}\right) \cdots \left(1 - \frac{1}{p_r}\right).
\end{aligned}
This gives both versions of Euler's product formula.
An alternative proof that does not require the multiplicative property instead uses the inclusion–exclusion principle applied to the set {1, 2, …, n}, excluding the sets of integers divisible by the prime divisors.
==== Example ====
\varphi(20) = \varphi(2^2 \cdot 5) = 20\,\left(1 - \tfrac{1}{2}\right)\left(1 - \tfrac{1}{5}\right) = 20 \cdot \tfrac{1}{2} \cdot \tfrac{4}{5} = 8.
In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19.
The alternative formula uses only integers:
\varphi(20) = \varphi(2^2 \cdot 5^1) = 2^{2-1}(2-1)\,5^{1-1}(5-1) = 2 \cdot 1 \cdot 1 \cdot 4 = 8.
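Both versions of the product formula can be verified with integer arithmetic only, by stripping out each prime factor once. This is an illustrative sketch (function name mine), not code from the article:

```python
def phi_factored(n: int) -> int:
    """Euler's product formula: phi(n) = n * prod(1 - 1/p) over distinct
    primes p dividing n, computed with integers as result -= result // p."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:   # remove all copies of the prime p
                n //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if n > 1:                       # one prime factor > sqrt(original n) may remain
        result -= result // n
    return result

print(phi_factored(20))  # 8, as in the worked example
```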
=== Fourier transform ===
The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let

\mathcal{F}\{\mathbf{x}\}[m] = \sum_{k=1}^{n} x_k \cdot e^{-2\pi i \frac{mk}{n}}
where x_k = gcd(k, n) for k ∈ {1, ..., n}. Then

\varphi(n) = \mathcal{F}\{\mathbf{x}\}[1] = \sum_{k=1}^{n} \gcd(k, n)\, e^{-2\pi i \frac{k}{n}}.
The real part of this formula is

\varphi(n) = \sum_{k=1}^{n} \gcd(k, n) \cos\frac{2\pi k}{n}.
For example, using \cos\tfrac{\pi}{5} = \tfrac{\sqrt{5}+1}{4} and \cos\tfrac{2\pi}{5} = \tfrac{\sqrt{5}-1}{4}:

\begin{aligned}
\varphi(10) &= \gcd(1,10)\cos\tfrac{2\pi}{10} + \gcd(2,10)\cos\tfrac{4\pi}{10} + \gcd(3,10)\cos\tfrac{6\pi}{10} + \cdots + \gcd(10,10)\cos\tfrac{20\pi}{10} \\
&= 1\cdot\tfrac{\sqrt{5}+1}{4} + 2\cdot\tfrac{\sqrt{5}-1}{4} + 1\cdot\left(-\tfrac{\sqrt{5}-1}{4}\right) + 2\cdot\left(-\tfrac{\sqrt{5}+1}{4}\right) + 5\cdot(-1) \\
&\quad + 2\cdot\left(-\tfrac{\sqrt{5}+1}{4}\right) + 1\cdot\left(-\tfrac{\sqrt{5}-1}{4}\right) + 2\cdot\tfrac{\sqrt{5}-1}{4} + 1\cdot\tfrac{\sqrt{5}+1}{4} + 10\cdot 1 \\
&= 4.
\end{aligned}
Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway.
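The real-part sum can be evaluated numerically, illustrating that no factorization of n is needed, only gcd computations. A sketch (rounding absorbs floating-point error):

```python
from math import cos, gcd, pi

def phi_dft(n: int) -> int:
    """phi(n) as the real part of the discrete Fourier transform of
    gcd(k, n), evaluated at 1: sum of gcd(k, n) * cos(2*pi*k/n)."""
    total = sum(gcd(k, n) * cos(2 * pi * k / n) for k in range(1, n + 1))
    return round(total)

print(phi_dft(10))  # 4, matching the worked example above
```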
=== Divisor sum ===
The property established by Gauss, that

\sum_{d \mid n} \varphi(d) = n,
where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.)
One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group C_d; specifically, if C_d = ⟨g⟩ with g^d = 1, then g^k is a generator for every k coprime to d. Since every element of C_n generates a cyclic subgroup, and each subgroup C_d ⊆ C_n is generated by precisely φ(d) elements of C_n, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity.
The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20:
1/20, 2/20, 3/20, 4/20, 5/20, 6/20, 7/20, 8/20, 9/20, 10/20, 11/20, 12/20, 13/20, 14/20, 15/20, 16/20, 17/20, 18/20, 19/20, 20/20.
Put them into lowest terms:
1/20, 1/10, 3/20, 1/5, 1/4, 3/10, 7/20, 2/5, 9/20, 1/2, 11/20, 3/5, 13/20, 7/10, 3/4, 4/5, 17/20, 9/10, 19/20, 1/1.
These twenty fractions are all the positive k/d ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely 1/20, 3/20, 7/20, 9/20, 11/20, 13/20, 17/20, 19/20; by definition this is φ(20) fractions. Similarly, there are φ(10) fractions with denominator 10, and φ(5) fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n.
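The divisor-sum identity is easy to test numerically. A small sketch (helper names `phi` and `divisors` are mine):

```python
from math import gcd

def phi(n: int) -> int:
    """Brute-force totient: count k in 1..n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def divisors(n: int) -> list:
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Gauss's identity: the totients of the divisors of n sum to n itself.
for n in (20, 36, 97):
    assert sum(phi(d) for d in divisors(n)) == n

print(sum(phi(d) for d in divisors(20)))  # 20
```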
Möbius inversion applied to the divisor sum formula gives
\varphi(n) = \sum_{d \mid n} \mu(d) \cdot \frac{n}{d} = n \sum_{d \mid n} \frac{\mu(d)}{d},
where μ is the Möbius function, the multiplicative function defined by μ(p) = −1 and μ(p^k) = 0 for each prime p and k ≥ 2. This formula may also be derived from the product formula by multiplying out \prod_{p \mid n} \left(1 - \frac{1}{p}\right) to get \sum_{d \mid n} \frac{\mu(d)}{d}.
An example:
\begin{aligned}
\varphi(20) &= \mu(1)\cdot 20 + \mu(2)\cdot 10 + \mu(4)\cdot 5 + \mu(5)\cdot 4 + \mu(10)\cdot 2 + \mu(20)\cdot 1 \\
&= 1\cdot 20 - 1\cdot 10 + 0\cdot 5 - 1\cdot 4 + 1\cdot 2 + 0\cdot 1 = 8.
\end{aligned}
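A short sketch verifying the Möbius-inversion formula (the helper names are mine, not from the article):

```python
def mobius(n: int) -> int:
    """Moebius function: 0 if a squared prime divides n,
    otherwise (-1) raised to the number of distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # p^2 divided the original n
                return 0
            result = -result
        p += 1
    if n > 1:                    # one remaining prime factor
        result = -result
    return result

def phi_mobius(n: int) -> int:
    """phi(n) = sum over d | n of mu(d) * n/d (Moebius inversion)."""
    return sum(mobius(d) * (n // d) for d in range(1, n + 1) if n % d == 0)

print(phi_mobius(20))  # 8, matching the example
```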
== Some values ==
The first 100 values (sequence A000010 in the OEIS) are shown in the table and graph below:
In the graph at right the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ(n) ≥ √(n/2), which is rather loose: in fact, the lower limit of the graph is proportional to n/log log n.
== Euler's theorem ==
This states that if a and n are relatively prime then

a^{\varphi(n)} \equiv 1 \pmod{n}.
The special case where n is prime is known as Fermat's little theorem.
This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n.
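Euler's theorem can be spot-checked with Python's built-in three-argument `pow` for modular exponentiation; an illustrative sketch:

```python
from math import gcd

def phi(n: int) -> int:
    """Brute-force totient."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

n = 10
for a in range(1, n):
    if gcd(a, n) == 1:
        # Euler's theorem: a^phi(n) is congruent to 1 mod n when gcd(a, n) = 1
        assert pow(a, phi(n), n) == 1

print(pow(3, phi(10), 10))  # 1
```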
The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ a^e mod n, where e is the (public) encryption exponent, is the function b ↦ b^d mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem, which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty of factoring large numbers we have the guarantee that no one else knows the factorization.
== Other formulae ==
a \mid b \implies \varphi(a) \mid \varphi(b)
m \mid \varphi(a^m - 1)
\varphi(mn) = \varphi(m)\,\varphi(n) \cdot \frac{d}{\varphi(d)} \quad \text{where } d = \gcd(m, n)
In particular:
\varphi(2m) = \begin{cases} 2\varphi(m) & \text{if } m \text{ is even} \\ \varphi(m) & \text{if } m \text{ is odd} \end{cases}
\varphi(n^m) = n^{m-1} \varphi(n)
\varphi(\operatorname{lcm}(m, n)) \cdot \varphi(\gcd(m, n)) = \varphi(m) \cdot \varphi(n)
Compare this to the formula \operatorname{lcm}(m, n) \cdot \gcd(m, n) = m \cdot n (see least common multiple).
φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, 2^r ∣ φ(n).
For any a > 1 and n > 6 such that 4 ∤ n there exists an l ≥ 2n such that l | φ(an − 1).
\frac{\varphi(n)}{n} = \frac{\varphi(\operatorname{rad}(n))}{\operatorname{rad}(n)}

where rad(n) is the radical of n (the product of all distinct primes dividing n).
\sum_{d \mid n} \frac{\mu^2(d)}{\varphi(d)} = \frac{n}{\varphi(n)}
\sum_{\substack{1 \le k \le n-1 \\ \gcd(k, n) = 1}} k = \tfrac{1}{2}\, n\, \varphi(n) \quad \text{for } n > 1
\sum_{k=1}^{n} \varphi(k) = \tfrac{1}{2}\left(1 + \sum_{k=1}^{n} \mu(k) \left\lfloor \frac{n}{k} \right\rfloor^2 \right) = \frac{3}{\pi^2} n^2 + O\left(n (\log n)^{\frac{2}{3}} (\log\log n)^{\frac{4}{3}}\right)
\sum_{k=1}^{n} \varphi(k) = \frac{3}{\pi^2} n^2 + O\left(n (\log n)^{\frac{2}{3}} (\log\log n)^{\frac{1}{3}}\right) \quad \text{(Liu 2016)}
\sum_{k=1}^{n} \frac{\varphi(k)}{k} = \sum_{k=1}^{n} \frac{\mu(k)}{k} \left\lfloor \frac{n}{k} \right\rfloor = \frac{6}{\pi^2} n + O\left((\log n)^{\frac{2}{3}} (\log\log n)^{\frac{4}{3}}\right)
\sum_{k=1}^{n} \frac{k}{\varphi(k)} = \frac{315\,\zeta(3)}{2\pi^4} n - \frac{\log n}{2} + O\left((\log n)^{\frac{2}{3}}\right)
\sum_{k=1}^{n} \frac{1}{\varphi(k)} = \frac{315\,\zeta(3)}{2\pi^4} \left(\log n + \gamma - \sum_{p \text{ prime}} \frac{\log p}{p^2 - p + 1}\right) + O\left(\frac{(\log n)^{\frac{2}{3}}}{n}\right)

(where γ is the Euler–Mascheroni constant).
=== Menon's identity ===
In 1965 P. Kesava Menon proved
\sum_{\substack{1 \le k \le n \\ \gcd(k, n) = 1}} \gcd(k - 1, n) = \varphi(n)\, d(n),

where d(n) = σ₀(n) is the number of divisors of n.
=== Divisibility by any fixed positive integer ===
The following property, which is part of the "folklore" (i.e., apparently unpublished as a specific result: see the introduction of this article in which it is stated as having "long been known"), has important consequences. For instance, it rules out uniform distribution of the values of φ(n) in the arithmetic progressions modulo q for any integer q > 1.
For every fixed positive integer q, the relation q ∣ φ(n) holds for almost all n, meaning for all but o(x) values of n ≤ x as x → ∞.

This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 modulo q diverges, which itself is a corollary of the proof of Dirichlet's theorem on arithmetic progressions.
== Generating functions ==
The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as:
\sum_{n=1}^{\infty} \frac{\varphi(n)}{n^s} = \frac{\zeta(s-1)}{\zeta(s)}

where the left-hand side converges for \Re(s) > 2.
The Lambert series generating function is
\sum_{n=1}^{\infty} \frac{\varphi(n)\, q^n}{1 - q^n} = \frac{q}{(1-q)^2}

which converges for |q| < 1.
Both of these are proved by elementary series manipulations and the formulae for φ(n).
== Growth rate ==
In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'."
First

\limsup \frac{\varphi(n)}{n} = 1,

but as n goes to infinity, for all δ > 0

\frac{\varphi(n)}{n^{1-\delta}} \rightarrow \infty.
These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n).
In fact, during the proof of the second formula, the inequality
\frac{6}{\pi^2} < \frac{\varphi(n)\,\sigma(n)}{n^2} < 1,

true for n > 1, is proved.
We also have
\liminf \frac{\varphi(n)}{n} \log\log n = e^{-\gamma}.
Here γ is Euler's constant, γ = 0.577215665..., so e^γ = 1.7810724... and e^{−γ} = 0.56145948....
Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that
\liminf \frac{\varphi(n)}{n} = 0.
In fact, more is true.
\varphi(n) > \frac{n}{e^{\gamma}\,\log\log n + \frac{3}{\log\log n}} \quad \text{for } n > 2
and

\varphi(n) < \frac{n}{e^{\gamma} \log\log n} \quad \text{for infinitely many } n.
The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption."
For the average order, we have
\varphi(1) + \varphi(2) + \cdots + \varphi(n) = \frac{3n^2}{\pi^2} + O\left(n (\log n)^{\frac{2}{3}} (\log\log n)^{\frac{4}{3}}\right) \quad \text{as } n \rightarrow \infty,
due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov.
By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu ("On Euler's function", Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775)
improved the error term to
O\left(n (\log n)^{\frac{2}{3}} (\log\log n)^{\frac{1}{3}}\right)
(this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n2).
This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6/π2.
== Ratio of consecutive values ==
In 1950 Somayajulu proved
\liminf \frac{\varphi(n+1)}{\varphi(n)} = 0 \quad \text{and} \quad \limsup \frac{\varphi(n+1)}{\varphi(n)} = \infty.
In 1954 Schinzel and Sierpiński strengthened this, proving that the set
\left\{ \frac{\varphi(n+1)}{\varphi(n)},\; n = 1, 2, \ldots \right\}
is dense in the positive real numbers. They also proved that the set
\left\{ \frac{\varphi(n)}{n},\; n = 1, 2, \ldots \right\}
is dense in the interval (0,1).
== Totient number ==
A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient.
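Even nontotients can be found by exhaustive search. The bound φ(n) ≥ √(n/2) quoted earlier guarantees that any preimage n of m satisfies n ≤ 2m², so the finite search in this sketch (helper names mine) is complete:

```python
def phi(n: int) -> int:
    """Totient via trial-division factorization."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_nontotient(m: int) -> bool:
    """True if phi(n) = m has no solution. Since phi(n) >= sqrt(n/2),
    any solution n is at most 2*m*m, so this search is exhaustive."""
    return all(phi(n) != m for n in range(1, 2 * m * m + 1))

print([m for m in range(2, 40, 2) if is_nontotient(m)])  # [14, 26, 34, 38]
```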
The number of totient numbers up to a given limit x is
\frac{x}{\log x}\, e^{(C + o(1))(\log\log\log x)^2}
for a constant C = 0.8178146....
If counted according to multiplicity, the number of totient numbers up to a given limit x is
\left| \{ n : \varphi(n) \le x \} \right| = \frac{\zeta(2)\,\zeta(3)}{\zeta(6)} \cdot x + R(x)
where the error term R is of order at most x/(log x)k for any positive k.
It is known that the multiplicity of m exceeds m^δ infinitely often for any δ < 0.55655.
=== Ford's theorem ===
Ford (1999) proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs, does so infinitely often.
However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m.
=== Perfect totient numbers ===
A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number.
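The definition translates directly into a short iterated-totient check; an illustrative sketch (helper names mine):

```python
def phi(n: int) -> int:
    """Totient via trial-division factorization."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_perfect_totient(n: int) -> bool:
    """Sum phi(n), phi(phi(n)), ... down to 1 and compare with n."""
    total, m = 0, n
    while m > 1:
        m = phi(m)
        total += m
    return total == n

print([n for n in range(2, 100) if is_perfect_totient(n)])
```

For example, 9 is a perfect totient number: φ(9) = 6, φ(6) = 2, φ(2) = 1, and 6 + 2 + 1 = 9.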
== Applications ==
=== Cyclotomy ===
In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number the formula for the totient says its totient can be a power of two only if n is a first power and n − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are
2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... (sequence A003401 in the OEIS).
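The constructibility criterion "φ(n) is a power of 2" can be checked mechanically; this sketch (helper names mine) reproduces the start of the sequence above:

```python
def phi(n: int) -> int:
    """Totient via trial-division factorization."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_power_of_two(m: int) -> bool:
    """True for 1, 2, 4, 8, ... (m must be positive)."""
    return m & (m - 1) == 0

constructible = [n for n in range(2, 41) if is_power_of_two(phi(n))]
print(constructible)
```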
=== Prime number theorem for arithmetic progressions ===
=== The RSA cryptosystem ===
Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private.
A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = me (mod n).
It is decrypted by computing t = Sd (mod n). Euler's Theorem can be used to show that if 0 < t < n, then t = m.
The security of an RSA system would be compromised if the number n could be efficiently factored or if φ(n) could be efficiently computed without factoring n.
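The whole setup can be traced with tiny numbers; a toy sketch only (61, 53, e = 17, and m = 65 are illustrative choices, and real RSA uses primes hundreds of digits long):

```python
# Toy RSA key generation with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                 # 3233, released to the public
k = (p - 1) * (q - 1)     # phi(n) = 3120, kept secret
e = 17                    # public encryption exponent, coprime to k
d = pow(e, -1, k)         # private decryption exponent: e*d = 1 (mod k)

m = 65                    # the message, with 0 < m < n
S = pow(m, e, n)          # encrypt: S = m^e (mod n)
t = pow(S, d, n)          # decrypt: t = S^d (mod n)
print(t == m)             # True, by Euler's theorem
```

The modular inverse `pow(e, -1, k)` requires Python 3.8 or later.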
== Unsolved problems ==
=== Lehmer's conjecture ===
If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known.
In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 10^20 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 10^1937042 and ω(n) ≥ 298848.
=== Carmichael's conjecture ===
This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above.
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10.
=== Riemann hypothesis ===
The Riemann hypothesis is true if and only if the inequality
\frac{n}{\varphi(n)} < e^{\gamma} \log\log n + \frac{e^{\gamma}(4 + \gamma - \log 4\pi)}{\sqrt{\log n}}

is true for all n ≥ p_{120569}# where γ is Euler's constant and p_{120569}# is the product of the first 120569 primes.
== See also ==
Carmichael function (λ)
Dedekind psi function (𝜓)
Divisor function (σ)
Duffin–Schaeffer conjecture
Generalizations of Fermat's little theorem
Highly composite number
Multiplicative group of integers modulo n
Ramanujan sum
Totient summatory function (𝛷)
== Notes ==
== References ==
== External links ==
"Totient function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Euler's Phi Function and the Chinese Remainder Theorem — proof that φ(n) is multiplicative Archived 2021-02-28 at the Wayback Machine
Euler's totient function calculator in JavaScript — up to 20 digits
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine
Plytage, Loomis, Polhill, Summing Up The Euler Phi Function
In field theory, a simple extension is a field extension that is generated by the adjunction of a single element, called a primitive element. Simple extensions are well understood and can be completely classified.
The primitive element theorem provides a characterization of the finite simple extensions.
== Definition ==
A field extension L/K is called a simple extension if there exists an element θ in L with
{\displaystyle L=K(\theta ).}
This means that every element of L can be expressed as a rational fraction in θ, with coefficients in K; that is, it is produced from θ and elements of K by the field operations +, −, •, / . Equivalently, L is the smallest field that contains both K and θ.
There are two different kinds of simple extensions (see § Structure of simple extensions below):
The element θ may be transcendental over K, which means that it is not a root of any polynomial with coefficients in K. In this case
{\displaystyle K(\theta )}
is isomorphic to the field of rational functions
{\displaystyle K(X).}
Otherwise, θ is algebraic over K; that is, θ is a root of a polynomial over K. The monic polynomial
{\displaystyle p(X)}
of minimal degree n, with θ as a root, is called the minimal polynomial of θ. Its degree equals the degree of the field extension, that is, the dimension of L viewed as a K-vector space. In this case, every element of
{\displaystyle K(\theta )}
can be uniquely expressed as a polynomial in θ of degree less than n, and
{\displaystyle K(\theta )}
is isomorphic to the quotient ring
{\displaystyle K[X]/(p(X)).}
In both cases, the element θ is called a generating element or primitive element for the extension; one says also L is generated over K by θ.
For example, every finite field is a simple extension of the prime field of the same characteristic. More precisely, if p is a prime number and
{\displaystyle q=p^{n},}
the field
{\displaystyle L=\mathbb {F} _{q}}
of q elements is a simple extension of degree n of
{\displaystyle K=\mathbb {F} _{p}.}
In fact, L is generated as a field by any element θ that is a root of an irreducible polynomial of degree n in
{\displaystyle K[X]}.
However, in the case of finite fields, the term primitive element is usually reserved for a stronger notion, an element γ that generates
{\displaystyle L^{\times }=L-\{0\}}
as a multiplicative group, so that every nonzero element of L is a power of γ, i.e. is produced from γ using only the group operation • . To distinguish these meanings, one uses the term "generator" or field primitive element for the weaker meaning, reserving "primitive element" or group primitive element for the stronger meaning. (See Finite field § Multiplicative structure and Primitive element (finite field)).
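The distinction can be made concrete for a small prime field. The sketch below lists the group primitive elements of F_7, i.e. the generators of the cyclic group (Z/7Z)^× — every nonzero element of F_7 generates the field as a simple extension of F_7 itself, but only some generate the multiplicative group:

```python
# Group primitive elements (generators) of the multiplicative group (Z/pZ)* for p = 7.
p = 7
nonzero = set(range(1, p))

def is_generator(g):
    powers, x = set(), 1
    for _ in range(p - 1):
        x = x * g % p
        powers.add(x)
    return powers == nonzero   # g is primitive iff its powers hit all p − 1 residues

generators = [g for g in range(1, p) if is_generator(g)]
```

For p = 7 the generators are 3 and 5; the elements 1, 2, 4, 6 have smaller multiplicative order.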
== Structure of simple extensions ==
Let L be a simple extension of K generated by θ. For the polynomial ring K[X], one of its main properties is the unique ring homomorphism
{\displaystyle {\begin{aligned}\varphi :K[X]&\rightarrow L\\f(X)&\mapsto f(\theta )\,.\end{aligned}}}
Two cases may occur:
If {\displaystyle \varphi } is injective, it may be extended injectively to the field of fractions K(X) of K[X]. Since L is generated by θ, this implies that {\displaystyle \varphi } is an isomorphism from K(X) onto L. Hence every element of L is equal to an irreducible fraction of polynomials in θ, and two such irreducible fractions are equal if and only if one may pass from one to the other by multiplying the numerator and the denominator by the same nonzero element of K.
If {\displaystyle \varphi } is not injective, let p(X) be a generator of its kernel, which is thus the minimal polynomial of θ. The image of {\displaystyle \varphi } is a subring of L, and thus an integral domain. This implies that p is an irreducible polynomial, and thus that the quotient ring {\displaystyle K[X]/\langle p(X)\rangle } is a field. As L is generated by θ, {\displaystyle \varphi } is surjective and induces an isomorphism from {\displaystyle K[X]/\langle p(X)\rangle } onto L. This implies that every element of L is equal to a unique polynomial in θ of degree lower than {\displaystyle n=\operatorname {deg} p(X)}. That is, we have a K-basis of L given by {\displaystyle 1,\theta ,\theta ^{2},\ldots ,\theta ^{n-1}}.
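The polynomial representation of the algebraic case can be exercised directly. In this illustrative sketch (not library code), elements of Q(√2) ≅ Q[X]/(X² − 2) are stored as coefficient pairs (a, b) meaning a + bθ, and products are reduced with θ² = 2:

```python
# Arithmetic in Q(θ) with minimal polynomial X^2 − 2 (so θ = √2 and n = 2).
from fractions import Fraction as F

def mul(x, y):
    """(a + bθ)(c + dθ) = (ac + 2bd) + (ad + bc)θ, using θ² = 2."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    """1/(a + bθ) = (a − bθ)/(a² − 2b²); the denominator is a nonzero rational."""
    a, b = x
    norm = a * a - 2 * b * b
    return (a / norm, -b / norm)

u = (F(1), F(1))            # the element 1 + √2
sq = mul(u, u)              # (1 + √2)² = 3 + 2√2
assert mul(u, inv(u)) == (F(1), F(0))   # division works, as in any field
```

This exhibits concretely why every element of K(θ) is a polynomial in θ of degree below n: products and inverses reduce back to the basis 1, θ.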
== Examples ==
C / R generated by {\displaystyle \theta =i={\sqrt {-1}}}.
Q({\displaystyle {\sqrt {2}}}) / Q generated by {\displaystyle \theta ={\sqrt {2}}}.
Any number field (i.e., a finite extension of Q) is a simple extension Q(θ) for some θ. For example,
{\displaystyle \mathbf {Q} ({\sqrt {3}},{\sqrt {7}})} is generated by {\displaystyle \theta ={\sqrt {3}}+{\sqrt {7}}}.
F(X) / F, a field of rational functions, is generated by the formal variable X.
== See also ==
Companion matrix for the multiplication map on a simple field extension
== References ==
== Literature ==
Roman, Steven (1995). Field Theory. Graduate Texts in Mathematics. Vol. 158. New York: Springer-Verlag. ISBN 0-387-94408-7. Zbl 0816.12001.
In mathematics, the study of special values of L-functions is a subfield of number theory devoted to generalising formulae such as the Leibniz formula for π, namely
{\displaystyle 1\,-\,{\frac {1}{3}}\,+\,{\frac {1}{5}}\,-\,{\frac {1}{7}}\,+\,{\frac {1}{9}}\,-\,\cdots \;=\;{\frac {\pi }{4}},\!}
by the recognition that the expression on the left-hand side is also {\displaystyle L(1)}
where {\displaystyle L(s)}
is the Dirichlet L-function for the field of Gaussian rational numbers. This formula is a special case of the analytic class number formula, and in those terms reads that the Gaussian field has class number 1. The factor
{\displaystyle {\tfrac {1}{4}}}
on the right hand side of the formula corresponds to the fact that this field contains four roots of unity.
== Conjectures ==
There are two families of conjectures, formulated for general classes of L-functions (the very general setting being for L-functions associated to Chow motives over number fields), the division into two reflecting the questions of:
how to replace {\displaystyle \pi } in the Leibniz formula by some other "transcendental" number (regardless of whether it is currently possible for transcendental number theory to provide a proof of the transcendence); and
how to generalise the rational factor in the formula (class number divided by number of roots of unity) by some algebraic construction of a rational number that will represent the ratio of the L-function value to the "transcendental" factor.
Subsidiary explanations are given for the integer values of {\displaystyle n} for which a formula of this sort involving {\displaystyle L(n)} can be expected to hold.
The conjectures for (a) are called Beilinson's conjectures, for Alexander Beilinson. The idea is to abstract from the regulator of a number field to some "higher regulator" (the Beilinson regulator), a determinant constructed on a real vector space that comes from algebraic K-theory.
The conjectures for (b) are called the Bloch–Kato conjectures for special values (for Spencer Bloch and Kazuya Kato; this circle of ideas is distinct from the Bloch–Kato conjecture of K-theory, extending the Milnor conjecture, a proof of which was announced in 2009). They are also called the Tamagawa number conjecture, a name arising via the Birch–Swinnerton-Dyer conjecture and its formulation as an elliptic curve analogue of the Tamagawa number problem for linear algebraic groups. In a further extension, the equivariant Tamagawa number conjecture (ETNC) has been formulated, to consolidate the connection of these ideas with Iwasawa theory, and its so-called Main Conjecture.
=== Current status ===
All of these conjectures are known to be true only in special cases.
== See also ==
Brumer–Stark conjecture
== Notes ==
== References ==
Kings, Guido (2003), "The Bloch–Kato conjecture on special values of L-functions. A survey of known results", Journal de théorie des nombres de Bordeaux, 15 (1): 179–198, doi:10.5802/jtnb.396, ISSN 1246-7405, MR 2019010
"Beilinson conjectures", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"K-functor in algebraic geometry", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Mathar, Richard J. (2010), "Table of Dirichlet L-Series and Prime Zeta Modulo Functions for small moduli", arXiv:1008.2547 [math.NT]
== External links ==
L-funktionen und die Vermutungen von Deligne und Beilinson (L-functions and the conjectures of Deligne and Beilinson)
The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter ζ (zeta), is a mathematical function of a complex variable defined as
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+\cdots }
for {\displaystyle \operatorname {Re} (s)>1}, and its analytic continuation elsewhere.
The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics.
Leonhard Euler first introduced and studied the function over the reals in the first half of the eighteenth century. Bernhard Riemann's 1859 article "On the Number of Primes Less Than a Given Magnitude" extended the Euler definition to a complex variable, proved its meromorphic continuation and functional equation, and established a relation between its zeros and the distribution of prime numbers. This paper also contained the Riemann hypothesis, a conjecture about the distribution of complex zeros of the Riemann zeta function that many mathematicians consider the most important unsolved problem in pure mathematics.
The values of the Riemann zeta function at even positive integers were computed by Euler. The first of them, ζ(2), provides a solution to the Basel problem. In 1979 Roger Apéry proved the irrationality of ζ(3). The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function, such as Dirichlet series, Dirichlet L-functions and L-functions, are known.
== Definition ==
The Riemann zeta function ζ(s) is a function of a complex variable s = σ + it, where σ and t are real numbers. (The notation s, σ, and t is used traditionally in the study of the zeta function, following Riemann.) When Re(s) = σ > 1, the function can be written as a converging summation or as an integral:
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{\Gamma (s)}}\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}-1}}\,\mathrm {d} x\,,}
where {\displaystyle \Gamma (s)=\int _{0}^{\infty }x^{s-1}\,e^{-x}\,\mathrm {d} x}
is the gamma function. The Riemann zeta function is defined for other complex values via analytic continuation of the function defined for σ > 1.
Leonhard Euler considered the above series in 1740 for positive integer values of s, and later Chebyshev extended the definition to {\displaystyle \operatorname {Re} (s)>1.}
The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for s such that σ > 1 and diverges for all other values of s. Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values s ≠ 1. For s = 1, the series is the harmonic series which diverges to +∞, and
{\displaystyle \lim _{s\to 1}(s-1)\zeta (s)=1.}
Thus the Riemann zeta function is a meromorphic function on the whole complex plane, which is holomorphic everywhere except for a simple pole at s = 1 with residue 1.
== Euler's product formula ==
In 1737, the connection between the zeta function and prime numbers was discovered by Euler, who proved the identity
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}},}
where, by definition, the left hand side is ζ(s) and the infinite product on the right hand side extends over all prime numbers p (such expressions are called Euler products):
{\displaystyle \prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}={\frac {1}{1-2^{-s}}}\cdot {\frac {1}{1-3^{-s}}}\cdot {\frac {1}{1-5^{-s}}}\cdot {\frac {1}{1-7^{-s}}}\cdot {\frac {1}{1-11^{-s}}}\cdots {\frac {1}{1-p^{-s}}}\cdots }
Both sides of the Euler product formula converge for Re(s) > 1. The proof of Euler's identity uses only the formula for the geometric series and the fundamental theorem of arithmetic. Since the harmonic series, obtained when s = 1, diverges, Euler's formula (which becomes ∏p p/(p − 1)) implies that there are infinitely many primes. Since the logarithm of p/(p − 1) is approximately 1/p, the formula can also be used to prove the stronger result that the sum of the reciprocals of the primes is infinite. On the other hand, combining that with the sieve of Eratosthenes shows that the density of the set of primes within the set of positive integers is zero.
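The two sides of the identity can be compared numerically. This sketch truncates the Dirichlet series at 10^5 terms and the Euler product at primes below 1000; both approach ζ(2) = π²/6 (the truncation limits are arbitrary choices):

```python
# Truncated Dirichlet series vs. truncated Euler product for ζ(2) = π²/6.
import math

N = 100_000
series = sum(1.0 / n**2 for n in range(1, N + 1))

limit = 1000                      # sieve of Eratosthenes for primes below 1000
sieve = [True] * limit
sieve[0] = sieve[1] = False
for i in range(2, int(limit**0.5) + 1):
    if sieve[i]:
        for j in range(i * i, limit, i):
            sieve[j] = False
primes = [i for i, flag in enumerate(sieve) if flag]

product = 1.0
for prime in primes:
    product *= 1.0 / (1.0 - prime**-2)   # factor 1/(1 − p^{−s}) with s = 2

exact = math.pi**2 / 6
```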
The Euler product formula can be used to calculate the asymptotic probability that s randomly selected integers are set-wise coprime. Intuitively, the probability that any single number is divisible by a prime (or any integer) p is 1/p. Hence the probability that s numbers are all divisible by this prime is 1/ps, and the probability that at least one of them is not is 1 − 1/ps. Now, for distinct primes, these divisibility events are mutually independent because the candidate divisors are coprime (a number is divisible by coprime divisors n and m if and only if it is divisible by nm, an event which occurs with probability 1/nm). Thus the asymptotic probability that s numbers are coprime is given by a product over all primes,
{\displaystyle \prod _{p{\text{ prime}}}\left(1-{\frac {1}{p^{s}}}\right)=\left(\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}\right)^{-1}={\frac {1}{\zeta (s)}}.}
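For s = 2, the limiting probability 1/ζ(2) = 6/π² ≈ 0.6079 can be checked by exhaustively counting coprime pairs in a box (the cutoff N = 500 below is arbitrary; the count converges as N grows):

```python
# Density of coprime pairs (a, b) with 1 ≤ a, b ≤ N approaches 1/ζ(2) = 6/π².
import math

N = 500
coprime_pairs = sum(1 for a in range(1, N + 1)
                      for b in range(1, N + 1)
                      if math.gcd(a, b) == 1)
density = coprime_pairs / N**2       # empirical probability of coprimality
target = 6 / math.pi**2              # 1/ζ(2)
```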
== Riemann's functional equation ==
This zeta function satisfies the functional equation
{\displaystyle \zeta (s)=2^{s}\pi ^{s-1}\ \sin \left({\frac {\pi s}{2}}\right)\ \Gamma (1-s)\ \zeta (1-s)\ ,}
where Γ(s) is the gamma function. This is an equality of meromorphic functions valid on the whole complex plane. The equation relates values of the Riemann zeta function at the points s and 1 − s, in particular relating even positive integers with odd negative integers. Owing to the zeros of the sine function, the functional equation implies that ζ(s) has a simple zero at each even negative integer s = −2n, known as the trivial zeros of ζ(s). When s is an even positive integer, the product sin( π s / 2 ) Γ(1 − s) on the right is non-zero because Γ(1 − s) has a simple pole, which cancels the simple zero of the sine factor.
The functional equation was established by Riemann in his 1859 paper "On the Number of Primes Less Than a Given Magnitude" and used to construct the analytic continuation in the first place.
== Riemann's Xi function ==
Riemann also found a symmetric version of the functional equation by setting
{\displaystyle \xi (s)={\frac {s(s-1)}{2}}\times \pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}\right)\zeta (s)=(s-1)\pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}+1\right)\zeta (s)\ ,}
which satisfies:
{\displaystyle \xi (s)=\xi (1-s)~.}
Returning to the functional equation's derivation in the previous section, we have
{\displaystyle \xi (s)={\frac {1}{2}}+{\frac {s(s-1)}{2}}\int _{1}^{\infty }\left(x^{-{\frac {s}{2}}-{\frac {1}{2}}}+x^{{\frac {s}{2}}-1}\right)\psi (x)dx}
Using integration by parts,
{\displaystyle \xi (s)={\frac {1}{2}}-\left[\left(sx^{\frac {1-s}{2}}+(1-s)x^{\frac {s}{2}}\right)\psi (x)\right]_{1}^{\infty }+\int _{1}^{\infty }\left(sx^{\frac {1-s}{2}}+(1-s)x^{\frac {s}{2}}\right)\psi '(x)dx}
{\displaystyle \xi (s)={\frac {1}{2}}+\psi (1)+\int _{1}^{\infty }\left(sx^{\frac {1-s}{2}}+(1-s)x^{\frac {s}{2}}\right)\psi '(x)dx}
Using integration by parts again with a factorization of {\displaystyle x^{\frac {3}{2}}},
{\displaystyle \xi (s)={\frac {1}{2}}+\psi (1)-2\left[x^{\frac {3}{2}}\psi '(x)\left(x^{\frac {s-1}{2}}+x^{-{\frac {s}{2}}}\right)\right]_{1}^{\infty }+2\int _{1}^{\infty }\left(x^{\frac {s-1}{2}}+x^{-{\frac {s}{2}}}\right){\frac {d}{dx}}\left[x^{\frac {3}{2}}\psi '(x)\right]dx}
{\displaystyle \xi (s)={\frac {1}{2}}+\psi (1)+4\psi '(1)+2\int _{1}^{\infty }{\frac {d}{dx}}\left[x^{\frac {3}{2}}\psi '(x)\right]\left(x^{\frac {s-1}{2}}+x^{-{\frac {s}{2}}}\right)dx}
As {\displaystyle {\frac {1}{2}}+\psi (1)+4\psi '(1)=0},
{\displaystyle \xi (s)=2\int _{1}^{\infty }{\frac {d}{dx}}\left[x^{\frac {3}{2}}\psi '(x)\right]\left(x^{\frac {s-1}{2}}+x^{-{\frac {s}{2}}}\right)dx}
Remove a factor of {\displaystyle x^{-{\frac {1}{4}}}} to make the exponents in the remainder opposites.
{\displaystyle \xi (s)=2\int _{1}^{\infty }{\frac {d}{dx}}\left[x^{\frac {3}{2}}\psi '(x)\right]x^{-{\frac {1}{4}}}\left(x^{\frac {s-{\frac {1}{2}}}{2}}+x^{\frac {{\frac {1}{2}}-s}{2}}\right)dx}
Using the hyperbolic functions, namely {\displaystyle \cos(x)=\cosh(ix)={\frac {e^{ix}+e^{-ix}}{2}}}, and letting {\displaystyle s={\frac {1}{2}}+it} gives
{\displaystyle \xi (s)=4\int _{1}^{\infty }{\frac {d}{dx}}\left[x^{\frac {3}{2}}\psi '(x)\right]x^{-{\frac {1}{4}}}\cos({\frac {t}{2}}\log x)dx}
and by separating the integral and using the power series for {\displaystyle \cos },
{\displaystyle \xi (s)=\sum _{n=0}^{\infty }a_{2n}t^{2n}}
which led Riemann to his famous hypothesis.
== Zeros, the critical line, and the Riemann hypothesis ==
The functional equation shows that the Riemann zeta function has zeros at −2, −4,.... These are called the trivial zeros. They are trivial in the sense that their existence is relatively easy to prove, for example, from sin πs/2 being 0 in the functional equation. The non-trivial zeros have captured far more attention because their distribution not only is far less understood but, more importantly, their study yields important results concerning prime numbers and related objects in number theory. It is known that any non-trivial zero lies in the open strip
{\displaystyle \{s\in \mathbb {C} :0<\operatorname {Re} (s)<1\}}
, which is called the critical strip. The set
{\displaystyle \{s\in \mathbb {C} :\operatorname {Re} (s)=1/2\}}
is called the critical line. The Riemann hypothesis, considered one of the greatest unsolved problems in mathematics, asserts that all non-trivial zeros are on the critical line. In 1989, Conrey proved that more than 40% of the non-trivial zeros of the Riemann zeta function are on the critical line. This has since been improved to 41.7%.
For the Riemann zeta function on the critical line, see Z-function.
=== Number of zeros in the critical strip ===
Let {\displaystyle N(T)} be the number of zeros of {\displaystyle \zeta (s)} in the critical strip {\displaystyle 0<\operatorname {Re} (s)<1}, whose imaginary parts are in the interval {\displaystyle 0<\operatorname {Im} (s)<T}.
Timothy Trudgian proved that, if {\displaystyle T>e}, then
{\displaystyle \left|N(T)-{\frac {T}{2\pi }}\log {\frac {T}{2\pi e}}\right|\leq 0.112\log T+0.278\log \log T+3.385+{\frac {0.2}{T}}}.
=== The Hardy–Littlewood conjectures ===
In 1914, G. H. Hardy proved that ζ (1/2 + it) has infinitely many real zeros.
Hardy and J. E. Littlewood formulated two conjectures on the density and distance between the zeros of ζ (1/2 + it) on intervals of large positive real numbers. In the following, N(T) is the total number of real zeros and N0(T) the total number of zeros of odd order of the function ζ (1/2 + it) lying in the interval (0, T].
These two conjectures opened up new directions in the investigation of the Riemann zeta function.
=== Zero-free region ===
The location of the Riemann zeta function's zeros is of great importance in number theory. The prime number theorem is equivalent to the fact that there are no zeros of the zeta function on the Re(s) = 1 line. It is also known that zeros do not exist in certain regions slightly to the left of the Re(s) = 1 line, known as zero-free regions. For instance, Korobov and Vinogradov independently showed via Vinogradov's mean-value theorem that for sufficiently large {\displaystyle |t|}, {\displaystyle \zeta (\sigma +it)\neq 0} for {\displaystyle \sigma \geq 1-{\frac {c}{(\log |t|)^{2/3+\varepsilon }}}} for any {\displaystyle \varepsilon >0} and a number {\displaystyle c>0} depending on {\displaystyle \varepsilon }. Asymptotically, this is the largest known zero-free region for the zeta function.
Explicit zero-free regions are also known. Platt and Trudgian verified computationally that {\displaystyle \zeta (\sigma +it)\neq 0} if {\displaystyle \sigma \neq 1/2} and {\displaystyle |t|\leq 3\cdot 10^{12}}
. Mossinghoff, Trudgian and Yang proved that zeta has no zeros in the region
{\displaystyle \sigma \geq 1-{\frac {1}{5.558691\log |t|}}}
for |t| ≥ 2, which is the largest known zero-free region in the critical strip for
{\displaystyle 3\cdot 10^{12}<|t|<e^{64.1}\approx 7\cdot 10^{27}}
(for previous results see).
Yang showed that {\displaystyle \zeta (\sigma +it)\neq 0} if {\displaystyle \sigma \geq 1-{\frac {\log \log |t|}{21.233\log |t|}}} and {\displaystyle |t|\geq 3}, which is the largest known zero-free region for {\displaystyle e^{170.2}<|t|<e^{4.8\cdot 10^{5}}}.
Bellotti proved (building on the work of Ford) the zero-free region {\displaystyle \sigma \geq 1-{\frac {1}{53.989(\log |t|)^{2/3}(\log \log |t|)^{1/3}}}} for {\displaystyle |t|\geq 3}.
This is the largest known zero-free region for fixed {\displaystyle |t|\geq \exp(4.8\cdot 10^{5}).}
Bellotti also showed that for sufficiently large {\displaystyle |t|}, the following better result is known: {\displaystyle \zeta (\sigma +it)\neq 0} for {\displaystyle \sigma \geq 1-{\frac {1}{48.0718(\log |t|)^{2/3}(\log \log |t|)^{1/3}}}.}
The strongest result of this kind one can hope for is the truth of the Riemann hypothesis, which would have many profound consequences in the theory of numbers.
=== Other results ===
It is known that there are infinitely many zeros on the critical line. Littlewood showed that if the sequence (γn) contains the imaginary parts of all zeros in the upper half-plane in ascending order, then
{\displaystyle \lim _{n\rightarrow \infty }\left(\gamma _{n+1}-\gamma _{n}\right)=0.}
The critical line theorem asserts that a positive proportion of the nontrivial zeros lies on the critical line. (The Riemann hypothesis would imply that this proportion is 1.)
In the critical strip, the zero with smallest non-negative imaginary part is 1/2 + 14.13472514...i (OEIS: A058303). The fact that
{\displaystyle \zeta (s)={\overline {\zeta ({\overline {s}})}}}
for all complex s ≠ 1 implies that the zeros of the Riemann zeta function are symmetric about the real axis. Combining this symmetry with the functional equation, furthermore, one sees that the non-trivial zeros are symmetric about the critical line Re(s) = 1/2.
It is also known that no zeros lie on the line with real part 1.
== Specific values ==
For any positive even integer 2n,
{\displaystyle \zeta (2n)={\frac {|{B_{2n}}|(2\pi )^{2n}}{2(2n)!}},}
where B2n is the 2n-th Bernoulli number.
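Euler's closed form can be checked against direct summation. The sketch below uses the first two even Bernoulli numbers, B_2 = 1/6 and B_4 = −1/30, and compares the formula with a truncated series:

```python
# Check ζ(2n) = |B_{2n}| (2π)^{2n} / (2 (2n)!) against a direct partial sum.
import math
from fractions import Fraction

bernoulli = {2: Fraction(1, 6), 4: Fraction(-1, 30)}   # B_2 and B_4

values = {}
for k, B in bernoulli.items():
    closed_form = abs(float(B)) * (2 * math.pi)**k / (2 * math.factorial(k))
    partial_sum = sum(1.0 / n**k for n in range(1, 100_001))
    values[k] = (closed_form, partial_sum)
```

For k = 2 the closed form reduces to π²/6 (the Basel problem) and for k = 4 to π⁴/90.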
For odd positive integers, no such simple expression is known, although these values are thought to be related to the algebraic K-theory of the integers; see Special values of L-functions.
For nonpositive integers, one has
{\displaystyle \zeta (-n)=-{\frac {B_{n+1}}{n+1}}}
for n ≥ 0 (using the convention that B1 = 1/2).
In particular, ζ vanishes at the negative even integers because Bm = 0 for all odd m other than 1. These are the so-called "trivial zeros" of the zeta function.
Via analytic continuation, one can show that
{\displaystyle \zeta (-1)=-{\tfrac {1}{12}}}
This gives a pretext for assigning a finite value to the divergent series 1 + 2 + 3 + 4 + ⋯, which has been used in certain contexts (Ramanujan summation) such as string theory. Analogously, the particular value
{\displaystyle \zeta (0)=-{\tfrac {1}{2}}}
can be viewed as assigning a finite result to the divergent series 1 + 1 + 1 + 1 + ⋯.
The value
{\displaystyle \zeta {\bigl (}{\tfrac {1}{2}}{\bigr )}=-1.46035450880958681288\ldots }
is employed in calculating kinetic boundary layer problems of linear kinetic equations.
Although {\displaystyle \zeta (1)=1+{\tfrac {1}{2}}+{\tfrac {1}{3}}+\cdots }
diverges, its Cauchy principal value
{\displaystyle \lim _{\varepsilon \to 0}{\frac {\zeta (1+\varepsilon )+\zeta (1-\varepsilon )}{2}}}
exists and is equal to the Euler–Mascheroni constant γ = 0.5772....
The demonstration of the particular value
{\displaystyle \zeta (2)=1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}}
is known as the Basel problem. The reciprocal of this sum answers the question: What is the probability that two numbers selected at random are relatively prime?
The value
{\displaystyle \zeta (3)=1+{\frac {1}{2^{3}}}+{\frac {1}{3^{3}}}+\cdots =1.202056903159594285399...}
is Apéry's constant.
Taking the limit {\displaystyle s\rightarrow +\infty } through the real numbers, one obtains {\displaystyle \zeta (+\infty )=1}. But at complex infinity on the Riemann sphere the zeta function has an essential singularity.
== Various properties ==
For sums involving the zeta function at integer and half-integer values, see rational zeta series.
=== Reciprocal ===
The reciprocal of the zeta function may be expressed as a Dirichlet series over the Möbius function μ(n):
{\displaystyle {\frac {1}{\zeta (s)}}=\sum _{n=1}^{\infty }{\frac {\mu (n)}{n^{s}}}}
for every complex number s with real part greater than 1. There are a number of similar relations involving various well-known multiplicative functions; these are given in the article on the Dirichlet series.
The Riemann hypothesis is equivalent to the claim that this expression is valid when the real part of s is greater than 1/2.
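For s = 2 the identity says Σ μ(n)/n² = 1/ζ(2) = 6/π², which a standard sieve for the Möbius function confirms numerically (the cutoff N = 20000 is arbitrary):

```python
# Partial sums of Σ μ(n)/n^s approach 1/ζ(s); here s = 2 and 1/ζ(2) = 6/π².
import math

N = 20_000
mu = [1] * (N + 1)                 # Möbius function computed by sieving
is_prime = [True] * (N + 1)
for i in range(2, N + 1):
    if is_prime[i]:
        for j in range(i, N + 1, i):
            if j > i:
                is_prime[j] = False
            mu[j] *= -1            # one sign flip per distinct prime factor
        for j in range(i * i, N + 1, i * i):
            mu[j] = 0              # a squared prime factor forces μ(n) = 0
partial = sum(mu[n] / n**2 for n in range(1, N + 1))
```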
=== Universality ===
The critical strip of the Riemann zeta function has the remarkable property of universality. This zeta function universality states that there exists some location on the critical strip that approximates any holomorphic function arbitrarily well. Since holomorphic functions are very general, this property is quite remarkable. The first proof of universality was provided by Sergei Mikhailovitch Voronin in 1975. More recent work has included effective versions of Voronin's theorem and extending it to Dirichlet L-functions.
=== Estimates of the maximum of the modulus of the zeta function ===
Let the functions F(T;H) and G(s0;Δ) be defined by the equalities
{\displaystyle F(T;H)=\max _{|t-T|\leq H}\left|\zeta \left({\tfrac {1}{2}}+it\right)\right|,\qquad G(s_{0};\Delta )=\max _{|s-s_{0}|\leq \Delta }|\zeta (s)|.}
Here T is a sufficiently large positive number, 0 < H ≪ log log T, s0 = σ0 + iT, 1/2 ≤ σ0 ≤ 1, 0 < Δ < 1/3. Estimating the values F and G from below shows how large (in modulus) the values ζ(s) can be on short intervals of the critical line or in small neighborhoods of points lying in the critical strip 0 ≤ Re(s) ≤ 1.
The case H ≫ log log T was studied by Kanakanahalli Ramachandra; the case Δ > c, where c is a sufficiently large constant, is trivial.
Anatolii Karatsuba proved, in particular, that if the values H and Δ exceed certain sufficiently small constants, then the estimates
{\displaystyle F(T;H)\geq T^{-c_{1}},\qquad G(s_{0};\Delta )\geq T^{-c_{2}},}
hold, where c1 and c2 are certain absolute constants.
=== The argument of the Riemann zeta function ===
The function
{\displaystyle S(t)={\frac {1}{\pi }}\arg {\zeta \left({\tfrac {1}{2}}+it\right)}}
is called the argument of the Riemann zeta function. Here arg ζ(1/2 + it) is the increment of an arbitrary continuous branch of arg ζ(s) along the broken line joining the points 2, 2 + it and 1/2 + it.
There are some theorems on properties of the function S(t). Among those results are the mean value theorems for S(t) and its first integral
{\displaystyle S_{1}(t)=\int _{0}^{t}S(u)\,\mathrm {d} u}
on intervals of the real line, and also the theorem claiming that every interval (T, T + H] for {\displaystyle H\geq T^{{\frac {27}{82}}+\varepsilon }}
contains at least
{\displaystyle H{\sqrt[{3}]{\ln T}}e^{-c{\sqrt {\ln \ln T}}}}
points where the function S(t) changes sign. Earlier similar results were obtained by Atle Selberg for the case
{\displaystyle H\geq T^{{\frac {1}{2}}+\varepsilon }.}
== Representations ==
=== Dirichlet series ===
An extension of the area of convergence can be obtained by rearranging the original series. The series
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=1}^{\infty }\left({\frac {n}{(n+1)^{s}}}-{\frac {n-s}{n^{s}}}\right)}
converges for Re(s) > 0, while
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=1}^{\infty }{\frac {n(n+1)}{2}}\left({\frac {2n+3+s}{(n+1)^{s+2}}}-{\frac {2n-1-s}{n^{s+2}}}\right)}
converges even for Re(s) > −1. In this way, the area of convergence can be extended to Re(s) > −k for any negative integer −k.
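As a quick numerical check, the first rearranged series can be summed directly. The sketch below (the function name and truncation are my own choices, not from the article) approximates ζ(2) = π²/6, where the ordinary Dirichlet series also converges, and ζ(1/2) ≈ −1.4604, which lies outside the half-plane Re(s) > 1:

```python
import math

def zeta_rearranged(s, terms=200_000):
    """Approximate zeta(s) for real s > 0, s != 1, via the rearranged series
    zeta(s) = 1/(s-1) * sum_{n>=1} ( n/(n+1)^s - (n-s)/n^s ).
    The n-th term is O(n^(-1-s)), so the sum converges for Re(s) > 0."""
    total = math.fsum(n / (n + 1) ** s - (n - s) / n ** s
                      for n in range(1, terms + 1))
    return total / (s - 1)

print(zeta_rearranged(2.0))   # ≈ 1.6449... = pi^2/6
print(zeta_rearranged(0.5))   # ≈ -1.4603... = zeta(1/2)
```

For s = 1/2 the terms decay only like n^(−3/2), so the truncation error at 200,000 terms is still of order 10^(−3); the series is a proof of analytic continuation more than a fast algorithm.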
The recurrence relation is clearly visible from the following expression, valid for Re(s) > −2, which enables further expansion by integration by parts.
{\displaystyle {\begin{aligned}\zeta (s)=&1+{\frac {1}{s-1}}-{\frac {s}{2!}}[\zeta (s+1)-1]\\-&{\frac {s(s+1)}{3!}}[\zeta (s+2)-1]\\&-{\frac {s(s+1)(s+2)}{3!}}\sum _{n=1}^{\infty }\int _{0}^{1}{\frac {t^{3}dt}{(n+t)^{s+3}}}\end{aligned}}}
=== Mellin-type integrals ===
The Mellin transform of a function f(x) is defined as
{\displaystyle \int _{0}^{\infty }f(x)x^{s}\,{\frac {\mathrm {d} x}{x}}}
in the region where the integral is defined. There are various expressions for the zeta function as Mellin transform-like integrals. If the real part of s is greater than one, we have
{\displaystyle \Gamma (s)\zeta (s)=\int _{0}^{\infty }{\frac {x^{s-1}}{e^{x}-1}}\,\mathrm {d} x\quad }
and
{\displaystyle \quad \Gamma (s)\zeta (s)={\frac {1}{2s}}\int _{0}^{\infty }{\frac {x^{s}}{\cosh(x)-1}}\,\mathrm {d} x},
where Γ denotes the gamma function. By modifying the contour, Riemann showed that
{\displaystyle 2\sin(\pi s)\Gamma (s)\zeta (s)=i\oint _{H}{\frac {(-x)^{s-1}}{e^{x}-1}}\,\mathrm {d} x}
for all s (where H denotes the Hankel contour).
We can also find expressions which relate to prime numbers and the prime number theorem. If π(x) is the prime-counting function, then
{\displaystyle \ln \zeta (s)=s\int _{0}^{\infty }{\frac {\pi (x)}{x(x^{s}-1)}}\,\mathrm {d} x,}
for values with Re(s) > 1.
A similar Mellin transform involves the Riemann function J(x), which counts prime powers pn with a weight of 1/n, so that
{\displaystyle J(x)=\sum {\frac {\pi \left(x^{\frac {1}{n}}\right)}{n}}.}
Now
{\displaystyle \ln \zeta (s)=s\int _{0}^{\infty }J(x)x^{-s-1}\,\mathrm {d} x.}
These expressions can be used to prove the prime number theorem by means of the inverse Mellin transform. Riemann's prime-counting function is easier to work with, and π(x) can be recovered from it by Möbius inversion.
=== Theta functions ===
The Riemann zeta function can be given by a Mellin transform
{\displaystyle 2\pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}\right)\zeta (s)=\int _{0}^{\infty }{\bigl (}\theta (it)-1{\bigr )}t^{{\frac {s}{2}}-1}\,\mathrm {d} t,}
in terms of Jacobi's theta function
{\displaystyle \theta (\tau )=\sum _{n=-\infty }^{\infty }e^{\pi in^{2}\tau }.}
However, this integral only converges if the real part of s is greater than 1, but it can be regularized. This gives the following expression for the zeta function, which is well defined for all s except 0 and 1:
{\displaystyle \pi ^{-{\frac {s}{2}}}\Gamma \left({\frac {s}{2}}\right)\zeta (s)={\frac {1}{s-1}}-{\frac {1}{s}}+{\frac {1}{2}}\int _{0}^{1}\left(\theta (it)-t^{-{\frac {1}{2}}}\right)t^{{\frac {s}{2}}-1}\,\mathrm {d} t+{\frac {1}{2}}\int _{1}^{\infty }{\bigl (}\theta (it)-1{\bigr )}t^{{\frac {s}{2}}-1}\,\mathrm {d} t.}
=== Laurent series ===
The Riemann zeta function is meromorphic with a single pole of order one at s = 1. It can therefore be expanded as a Laurent series about s = 1; the series development is then
{\displaystyle \zeta (s)={\frac {1}{s-1}}+\sum _{n=0}^{\infty }{\frac {\gamma _{n}}{n!}}(1-s)^{n}.}
The constants γn here are called the Stieltjes constants and can be defined by the limit
{\displaystyle \gamma _{n}=\lim _{m\rightarrow \infty }{\left(\left(\sum _{k=1}^{m}{\frac {(\ln k)^{n}}{k}}\right)-{\frac {(\ln m)^{n+1}}{n+1}}\right)}.}
The constant term γ0 is the Euler–Mascheroni constant.
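The defining limit converges slowly but is easy to evaluate directly. In this sketch (the helper name and cutoff are my own choices), truncating at m = 200,000 reproduces γ0 = γ ≈ 0.57722 and γ1 ≈ −0.07282 to several digits:

```python
import math

def stieltjes_gamma(n, m=200_000):
    """Approximate the n-th Stieltjes constant gamma_n from its defining
    limit, truncated at m; the error is roughly O(log(m)^n / m)."""
    partial = math.fsum(math.log(k) ** n / k for k in range(1, m + 1))
    return partial - math.log(m) ** (n + 1) / (n + 1)

print(stieltjes_gamma(0))   # ≈ 0.5772156... (Euler-Mascheroni constant)
print(stieltjes_gamma(1))   # ≈ -0.0728158...
```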
=== Integral ===
For all s ∈ ℂ, s ≠ 1, the integral relation (cf. Abel–Plana formula)
{\displaystyle \ \zeta (s)\ =\ {\frac {1}{\ s-1\ }}+{\frac {\ 1\ }{2}}+2\int _{0}^{\infty }{\frac {\sin(\ s\ \arctan t\ )}{\ \left(1+t^{2}\right)^{s/2}\left(e^{2\pi t}-1\right)\ }}\ \operatorname {d} t\ }
holds true, which may be used for a numerical evaluation of the zeta function.
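A minimal sketch of such a numerical evaluation for real s, using the trapezoidal rule (the step size and upper cutoff are ad hoc choices of mine; the integrand tends to s/(2π) as t → 0):

```python
import math

def zeta_abel_plana(s, upper=30.0, steps=40_000):
    """Evaluate zeta(s) for real s != 1 from the Abel-Plana type integral,
    using a simple trapezoidal rule on [0, upper]."""
    def f(t):
        if t == 0.0:
            return s / (2 * math.pi)        # limiting value of the integrand
        return (math.sin(s * math.atan(t))
                / ((1 + t * t) ** (s / 2) * math.expm1(2 * math.pi * t)))
    h = upper / steps
    integral = h * (0.5 * (f(0.0) + f(upper))
                    + sum(f(i * h) for i in range(1, steps)))
    return 1 / (s - 1) + 0.5 + 2 * integral

print(zeta_abel_plana(2.0))    # ≈ pi^2/6
print(zeta_abel_plana(-1.0))   # ≈ -1/12, matching the analytic continuation
```

The factor e^(−2πt) makes the integrand decay extremely fast, so a modest cutoff already gives near machine precision.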
=== Rising factorial ===
Another series development using the rising factorial valid for the entire complex plane is
{\displaystyle \zeta (s)={\frac {s}{s-1}}-\sum _{n=1}^{\infty }{\bigl (}\zeta (s+n)-1{\bigr )}{\frac {s(s+1)\cdots (s+n-1)}{(n+1)!}}.}
This can be used recursively to extend the Dirichlet series definition to all complex numbers.
The Riemann zeta function also appears in a form similar to the Mellin transform in an integral over the Gauss–Kuzmin–Wirsing operator acting on xs − 1; that context gives rise to a series expansion in terms of the falling factorial.
=== Hadamard product ===
On the basis of Weierstrass's factorization theorem, Hadamard gave the infinite product expansion
{\displaystyle \zeta (s)={\frac {e^{\left(\log(2\pi )-1-{\frac {\gamma }{2}}\right)s}}{2(s-1)\Gamma \left(1+{\frac {s}{2}}\right)}}\prod _{\rho }\left(1-{\frac {s}{\rho }}\right)e^{\frac {s}{\rho }},}
where the product is over the non-trivial zeros ρ of ζ and the letter γ again denotes the Euler–Mascheroni constant. A simpler infinite product expansion is
{\displaystyle \zeta (s)=\pi ^{\frac {s}{2}}{\frac {\prod _{\rho }\left(1-{\frac {s}{\rho }}\right)}{2(s-1)\Gamma \left(1+{\frac {s}{2}}\right)}}.}
This form clearly displays the simple pole at s = 1, the trivial zeros at −2, −4, ... due to the gamma function term in the denominator, and the non-trivial zeros at s = ρ. (To ensure convergence in the latter formula, the product should be taken over "matching pairs" of zeros, i.e. the factors for a pair of zeros of the form ρ and 1 − ρ should be combined.)
=== Globally convergent series ===
A globally convergent series for the zeta function, valid for all complex numbers s except s = 1 + 2πin/ln 2 for some integer n, was conjectured by Konrad Knopp in 1926 and proven by Helmut Hasse in 1930 (cf. Euler summation):
{\displaystyle \zeta (s)={\frac {1}{1-2^{1-s}}}\sum _{n=0}^{\infty }{\frac {1}{2^{n+1}}}\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{(k+1)^{s}}}.}
The series appeared in an appendix to Hasse's paper, and was published for the second time by Jonathan Sondow in 1994.
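This series is practical to compute with, since its outer terms decay roughly geometrically. A short sketch (the truncation choice is mine); note that at s = −1 it reproduces the analytically continued value ζ(−1) = −1/12:

```python
import math

def zeta_hasse(s, terms=60):
    """Globally convergent Hasse/Sondow series for zeta(s), valid for
    all s except the zeros of 1 - 2^(1-s) (for real s, only s = 1)."""
    outer = 0.0
    for n in range(terms):
        inner = math.fsum(math.comb(n, k) * (-1) ** k / (k + 1) ** s
                          for k in range(n + 1))
        outer += inner / 2 ** (n + 1)
    return outer / (1 - 2 ** (1 - s))

print(zeta_hasse(2.0))    # ≈ pi^2/6
print(zeta_hasse(-1.0))   # ≈ -1/12
```

For s = −1 the inner alternating sums vanish identically for n ≥ 2, so the series terminates after two terms and gives (−1/3)(1/2 − 1/4) = −1/12 exactly.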
Hasse also proved the globally converging series
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{n=0}^{\infty }{\frac {1}{n+1}}\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{(k+1)^{s-1}}}}
in the same publication. Research by Iaroslav Blagouchine has found that a similar, equivalent series was published by Joseph Ser in 1926.
In 1997 K. Maślanka gave another globally convergent (except s = 1) series for the Riemann zeta function:
{\displaystyle \zeta (s)={\frac {1}{s-1}}\sum _{k=0}^{\infty }{\biggl (}\prod _{i=1}^{k}(i-{\frac {s}{2}}){\biggl )}{\frac {A_{k}}{k!}}={\frac {1}{s-1}}\sum _{k=0}^{\infty }{\biggl (}1-{\frac {s}{2}}{\biggl )}_{k}{\frac {A_{k}}{k!}}}
where the real coefficients Ak are given by:
{\displaystyle A_{k}=\sum _{j=0}^{k}(-1)^{j}{\binom {k}{j}}(2j+1)\zeta (2j+2)=\sum _{j=0}^{k}{\binom {k}{j}}{\frac {B_{2j+2}\pi ^{2j+2}}{\left(2\right)_{j}\left({\frac {1}{2}}\right)_{j}}}}
Here Bn are the Bernoulli numbers and (x)k denotes the Pochhammer symbol.
Note that this representation of the zeta function is essentially an interpolation with nodes at the points s = 2, 4, 6, …, i.e. exactly those where the zeta values are precisely known, as Euler showed. An elegant and very short proof of this representation of the zeta function, based on Carlson's theorem, was presented by Philippe Flajolet in 2006.
The asymptotic behavior of the coefficients Ak is rather curious: for growing k, we observe regular oscillations with a nearly exponentially decreasing amplitude and slowly decreasing frequency (roughly as k^{−2/3}). Using the saddle point method, one can show that
{\displaystyle A_{k}\sim {\frac {4\pi ^{3/2}}{\sqrt {3\kappa }}}\exp {\biggl (}-{\frac {3\kappa }{2}}+{\frac {\pi ^{2}}{4\kappa }}{\biggl )}\cos {\biggl (}{\frac {4\pi }{3}}-{\frac {3{\sqrt {3}}\kappa }{2}}+{\frac {{\sqrt {3}}\pi ^{2}}{4\kappa }}{\biggl )}}
where {\displaystyle \kappa :={\sqrt[{3}]{\pi ^{2}k}}}.
On the basis of this representation, in 2003 Luis Báez-Duarte provided a new criterion for the Riemann hypothesis. Namely, if we define the coefficients ck as
{\displaystyle c_{k}:=\sum _{j=0}^{k}(-1)^{j}{\binom {k}{j}}{\frac {1}{\zeta (2j+2)}}}
then the Riemann hypothesis is equivalent to
{\displaystyle c_{k}={\mathcal {O}}{\biggl (}k^{-3/4+\varepsilon }{\biggl )}\qquad (\forall \varepsilon >0)}
=== Rapidly convergent series ===
Peter Borwein developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations.
=== Series representation at positive integers via the primorial ===
{\displaystyle \zeta (k)={\frac {2^{k}}{2^{k}-1}}+\sum _{r=2}^{\infty }{\frac {(p_{r-1}\#)^{k}}{J_{k}(p_{r}\#)}}\qquad k=2,3,\ldots .}
Here pn# is the primorial sequence and Jk is Jordan's totient function.
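Because (p_{r−1}#)^k / (p_r#)^k = p_r^{−k} and J_k(m) = m^k ∏_{p|m}(1 − p^{−k}), each term of the series collapses to 1/(p_r^k ∏_{p≤p_r}(1 − p^{−k})), which lets an implementation avoid the astronomically large primorials entirely. A sketch (the sieve bound and function names are mine):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta_primorial(k, prime_limit=10_000):
    """zeta(k) for integer k >= 2 via the primorial/Jordan-totient series;
    each term (p_{r-1}#)^k / J_k(p_r#) equals 1/(p_r^k * prod(1 - p^-k))."""
    total = 2 ** k / (2 ** k - 1)
    product = 1 - 2 ** (-k)            # running product over primes <= p_r
    for p in primes_up_to(prime_limit)[1:]:
        product *= 1 - p ** (-k)
        total += 1 / (p ** k * product)
    return total

print(zeta_primorial(2))   # ≈ 1.6449
print(zeta_primorial(3))   # ≈ 1.2021
```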
=== Series representation by the incomplete poly-Bernoulli numbers ===
The function ζ can be represented, for Re(s) > 1, by the infinite series
{\displaystyle \zeta (s)=\sum _{n=0}^{\infty }B_{n,\geq 2}^{(s)}{\frac {(W_{k}(-1))^{n}}{n!}},}
where k ∈ {−1, 0}, Wk is the kth branch of the Lambert W-function, and B(μ)n, ≥2 is an incomplete poly-Bernoulli number.
=== The Mellin transform of the Engel map ===
The function
{\displaystyle g(x)=x\left(1+\left\lfloor x^{-1}\right\rfloor \right)-1}
is iterated to find the coefficients appearing in Engel expansions.
The Mellin transform of the map g(x) is related to the Riemann zeta function by the formula
{\displaystyle {\begin{aligned}\int _{0}^{1}g(x)x^{s-1}\,dx&=\sum _{n=1}^{\infty }\int _{\frac {1}{n+1}}^{\frac {1}{n}}(x(n+1)-1)x^{s-1}\,dx\\[6pt]&=\sum _{n=1}^{\infty }{\frac {n^{-s}(s-1)+(n+1)^{-s-1}(n^{2}+2n+1)+n^{-s-1}s-n^{1-s}}{(s+1)s(n+1)}}\\[6pt]&={\frac {\zeta (s+1)}{s+1}}-{\frac {1}{s(s+1)}}\end{aligned}}}
=== Thue-Morse sequence ===
Certain linear combinations of Dirichlet series whose coefficients are terms of the Thue-Morse sequence give rise to identities involving the Riemann Zeta function. For instance:
{\displaystyle {\begin{aligned}\sum _{n\geq 1}{\frac {5t_{n-1}+3t_{n}}{n^{2}}}&=4\zeta (2)={\frac {2\pi ^{2}}{3}},\\\sum _{n\geq 1}{\frac {9t_{n-1}+7t_{n}}{n^{3}}}&=8\zeta (3),\end{aligned}}}
where tn is the nth term of the Thue-Morse sequence (tn)n≥0. In fact, for all s with real part greater than 1, we have
{\displaystyle (2^{s}+1)\sum _{n\geq 1}{\frac {t_{n-1}}{n^{s}}}+(2^{s}-1)\sum _{n\geq 1}{\frac {t_{n}}{n^{s}}}=2^{s}\zeta (s).}
== Numerical algorithms ==
A classical algorithm, in use prior to about 1930, proceeds by applying the Euler-Maclaurin formula to obtain, for n and m positive integers,
{\displaystyle \zeta (s)=\sum _{j=1}^{n-1}j^{-s}+{\tfrac {1}{2}}n^{-s}+{\frac {n^{1-s}}{s-1}}+\sum _{k=1}^{m}T_{k,n}(s)+E_{m,n}(s)}
where, letting B2k denote the indicated Bernoulli number,
{\displaystyle T_{k,n}(s)={\frac {B_{2k}}{(2k)!}}n^{1-s-2k}\prod _{j=0}^{2k-2}(s+j)}
and the error satisfies
{\displaystyle |E_{m,n}(s)|<\left|{\frac {s+2m+1}{\sigma +2m+1}}T_{m+1,n}(s)\right|,}
with σ = Re(s).
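The classical scheme above can be sketched in a few lines. Here the first six Bernoulli numbers B2, …, B12 are hard-coded, and the defaults n = 10, m = 6 are my own choices, which already give roughly double precision for moderate real s:

```python
import math

B2K = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]   # B_2, B_4, ..., B_12

def zeta_euler_maclaurin(s, n=10, m=6):
    """Classical Euler-Maclaurin evaluation of zeta(s) for real s > 1
    (the same formula also continues zeta into a wider region)."""
    result = (sum(j ** -s for j in range(1, n))
              + 0.5 * n ** -s
              + n ** (1 - s) / (s - 1))
    for k in range(1, m + 1):
        prod = 1.0
        for j in range(2 * k - 1):          # product over j = 0 .. 2k-2
            prod *= s + j
        result += B2K[k - 1] / math.factorial(2 * k) * n ** (1 - s - 2 * k) * prod
    return result

print(zeta_euler_maclaurin(2.0))   # ≈ pi^2/6
print(zeta_euler_maclaurin(3.0))   # ≈ 1.2020569 (Apery's constant)
```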
A modern numerical algorithm is the Odlyzko–Schönhage algorithm.
== Applications ==
The zeta function occurs in applied statistics including Zipf's law, Zipf–Mandelbrot law, and Lotka's law.
Zeta function regularization is used as one possible means of regularization of divergent series and divergent integrals in quantum field theory. In one notable example, the Riemann zeta function shows up explicitly in one method of calculating the Casimir effect. The zeta function is also useful for the analysis of dynamical systems.
=== Musical tuning ===
In the theory of musical tunings, the zeta function can be used to find equal divisions of the octave (EDOs) that closely approximate the intervals of the harmonic series. For increasing values of t ∈ ℝ, the value of
{\displaystyle \left\vert \zeta \left({\frac {1}{2}}+{\frac {2\pi {i}}{\ln {(2)}}}t\right)\right\vert }
peaks near integers that correspond to such EDOs. Examples include popular choices such as 12, 19, and 53.
=== Infinite series ===
The zeta function evaluated at equidistant positive integers appears in infinite series representations of a number of constants.
{\displaystyle \sum _{n=2}^{\infty }{\bigl (}\zeta (n)-1{\bigr )}=1}
In fact the even and odd terms give the two sums
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (2n)-1{\bigr )}={\frac {3}{4}}}
and
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (2n+1)-1{\bigr )}={\frac {1}{4}}}
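These three identities can be checked numerically. The sketch below (the cutoffs are mine) computes ζ(n) − 1 = Σ_{k≥2} k^(−n) by direct summation with a small integral-based tail correction:

```python
import math

def zeta_minus_one(n, kmax=1000):
    """zeta(n) - 1 = sum_{k>=2} k^(-n): truncated sum plus an
    estimate of the remainder from the Euler-Maclaurin formula."""
    partial = math.fsum(k ** -float(n) for k in range(2, kmax + 1))
    tail = float(kmax) ** (1 - n) / (n - 1) - 0.5 * float(kmax) ** -n
    return partial + tail

total = math.fsum(zeta_minus_one(n) for n in range(2, 60))
even = math.fsum(zeta_minus_one(2 * n) for n in range(1, 30))
odd = math.fsum(zeta_minus_one(2 * n + 1) for n in range(1, 30))
print(total, even, odd)   # ≈ 1, 3/4, 1/4
```

Since ζ(n) − 1 decays like 2^(−n), truncating the outer sums near n = 60 leaves a remainder far below double precision.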
Parametrized versions of the above sums are given by
{\displaystyle \sum _{n=1}^{\infty }(\zeta (2n)-1)\,t^{2n}={\frac {t^{2}}{t^{2}-1}}+{\frac {1}{2}}\left(1-\pi t\cot(t\pi )\right)}
and
{\displaystyle \sum _{n=1}^{\infty }(\zeta (2n+1)-1)\,t^{2n}={\frac {t^{2}}{t^{2}-1}}-{\frac {1}{2}}\left(\psi ^{0}(t)+\psi ^{0}(-t)\right)-\gamma }
with |t| < 2, where ψ and γ are the polygamma function and Euler's constant, respectively, as well as
{\displaystyle \sum _{n=1}^{\infty }{\frac {\zeta (2n)-1}{n}}\,t^{2n}=\log \left({\dfrac {1-t^{2}}{\operatorname {sinc} (\pi \,t)}}\right)}
all of which are continuous at t = 1. Other sums include
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}=1-\gamma }
{\displaystyle \sum _{n=1}^{\infty }{\frac {\zeta (2n)-1}{n}}=\ln 2}
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}\left(\left({\tfrac {3}{2}}\right)^{n-1}-1\right)={\frac {1}{3}}\ln \pi }
{\displaystyle \sum _{n=1}^{\infty }{\bigl (}\zeta (4n)-1{\bigr )}={\frac {7}{8}}-{\frac {\pi }{4}}\left({\frac {e^{2\pi }+1}{e^{2\pi }-1}}\right).}
{\displaystyle \sum _{n=2}^{\infty }{\frac {\zeta (n)-1}{n}}\Im {\bigl (}(1+i)^{n}-1-i^{n}{\bigr )}={\frac {\pi }{4}}}
where ℑ denotes the imaginary part of a complex number.
Another interesting series that relates to the natural logarithm of the lemniscate constant is the following
{\displaystyle \sum _{n=2}^{\infty }\left[{\frac {2(-1)^{n}\zeta (n)}{4^{n}n}}-{\frac {(-1)^{n}\zeta (n)}{2^{n}n}}\right]=\ln \left({\frac {\varpi }{2{\sqrt {2}}}}\right)}
There are yet more formulas in the article Harmonic number.
== Generalizations ==
There are a number of related zeta functions that can be considered to be generalizations of the Riemann zeta function. These include the Hurwitz zeta function
{\displaystyle \zeta (s,q)=\sum _{k=0}^{\infty }{\frac {1}{(k+q)^{s}}}}
(the convergent series representation was given by Helmut Hasse in 1930, cf. Hurwitz zeta function), which coincides with the Riemann zeta function when q = 1 (the lower limit of summation in the Hurwitz zeta function is 0, not 1), the Dirichlet L-functions and the Dedekind zeta function. For other related functions see the articles zeta function and L-function.
The polylogarithm is given by
{\displaystyle \operatorname {Li} _{s}(z)=\sum _{k=1}^{\infty }{\frac {z^{k}}{k^{s}}}}
which coincides with the Riemann zeta function when z = 1.
The Clausen function Cls(θ) can be chosen as the real or imaginary part of Lis(eiθ).
The Lerch transcendent is given by
{\displaystyle \Phi (z,s,q)=\sum _{k=0}^{\infty }{\frac {z^{k}}{(k+q)^{s}}}}
which coincides with the Riemann zeta function when z = 1 and q = 1 (the lower limit of summation in the Lerch transcendent is 0, not 1).
The multiple zeta functions are defined by
{\displaystyle \zeta (s_{1},s_{2},\ldots ,s_{n})=\sum _{k_{1}>k_{2}>\cdots >k_{n}>0}{k_{1}}^{-s_{1}}{k_{2}}^{-s_{2}}\cdots {k_{n}}^{-s_{n}}.}
One can analytically continue these functions to the n-dimensional complex space. The special values taken by these functions at positive integer arguments are called multiple zeta values by number theorists and have been connected to many different branches in mathematics and physics.
== See also ==
1 + 2 + 3 + 4 + ···
Arithmetic zeta function
Dirichlet eta function
Generalized Riemann hypothesis
Lehmer pair
Particular values of the Riemann zeta function
Prime zeta function
Renormalization
Riemann–Siegel theta function
ZetaGrid
== References ==
== Sources ==
== External links ==
Media related to Riemann zeta function at Wikimedia Commons
"Zeta-function". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Riemann Zeta Function, in Wolfram Mathworld — an explanation with a more mathematical approach
Tables of selected zeros Archived 17 May 2009 at the Wayback Machine
Prime Numbers Get Hitched A general, non-technical description of the significance of the zeta function in relation to prime numbers.
X-Ray of the Zeta Function Visually oriented investigation of where zeta is real or purely imaginary.
Formulas and identities for the Riemann Zeta function functions.wolfram.com
Riemann Zeta Function and Other Sums of Reciprocal Powers, section 23.2 of Abramowitz and Stegun
Frenkel, Edward. "Million Dollar Math Problem" (video). Brady Haran. Archived from the original on 11 December 2021. Retrieved 11 March 2014.
Mellin transform and the functional equation of the Riemann Zeta function—Computational examples of Mellin transform methods involving the Riemann Zeta Function
Visualizing the Riemann zeta function and analytic continuation a video from 3Blue1Brown
In automata theory, a tree is a particular way of representing a tree structure as sequences of natural numbers. Each node of the tree is a word over the set of natural numbers (ℕ), which makes this definition convenient for use in automata theory.
A tree is a set T ⊆ ℕ* such that if t.c ∈ T, with t ∈ ℕ* and c ∈ ℕ, then t ∈ T and t.c1 ∈ T for all 0 ≤ c1 < c. The elements of T are known as nodes, and the empty word ε is the (single) root of T. For every t ∈ T, the element t.c ∈ T is a successor of t in direction c. The number of successors of t is called its degree or arity, and is represented as d(t). A node is a leaf if it has no successors. If every node of a tree has finitely many successors, then it is called a finitely branching tree, otherwise an infinitely branching tree. A path π is a subset of T such that ε ∈ π and for every t ∈ π, either t is a leaf or there exists a unique c ∈ ℕ such that t.c ∈ π. A path may be a finite or infinite set. If all paths of a tree are finite, then the tree is called finite, otherwise infinite. A tree is called fully infinite if all its paths are infinite. Given an alphabet Σ, a Σ-labeled tree is a pair (T,V), where T is a tree and V: T → Σ maps each node of T to a symbol in Σ. A labeled tree formally defines the commonly used notion of a tree structure. A set of labeled trees is called a tree language.
A tree is called ordered if there is an order among the successors of each of its nodes. The above definition of tree naturally suggests an order among the successors, which can be used to make the tree ranked.
In the case of ranked alphabets, an extra function Ar: Σ → ℕ is defined. This function associates a fixed arity to each symbol of the alphabet. In this case, each t ∈ T has to satisfy Ar(V(t)) = d(t). The trees that satisfy this property are called ranked trees. The trees that do not (necessarily) satisfy that property are called unranked.
For example, the above definition is used in the definition of an infinite tree automaton.
== Example ==
Let T = {0,1}* and Σ = {a,b}. We define a labeling function V as follows: the label of the root node is V(ε) = a and, for every other node t ∈ {0,1}*, the labels of its successor nodes are V(t.0) = a and V(t.1) = b. With this labeling, (T,V) forms a (fully) infinite binary tree.
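The definitions above translate directly into code. In this sketch (the finite truncation depth and the function names are my own), nodes are tuples of naturals, the root is the empty tuple, and is_tree checks the prefix- and sibling-closure condition from the definition:

```python
from itertools import product

def is_tree(T):
    """Check the closure condition: if t.c is in T, then t is in T and
    t.c1 is in T for every 0 <= c1 < c (nodes are tuples of naturals)."""
    for node in T:
        if node:                              # a non-root node t.c
            t, c = node[:-1], node[-1]
            if t not in T:
                return False
            if any(t + (c1,) not in T for c1 in range(c)):
                return False
    return True

# A finite prefix (depth <= 3) of the infinite binary tree {0,1}* from
# the example, with labels V(epsilon) = a, V(t.0) = a, V(t.1) = b.
T = {()} | {w for k in range(1, 4) for w in product((0, 1), repeat=k)}
V = {t: ('a' if not t or t[-1] == 0 else 'b') for t in T}

print(is_tree(T))              # True
print(is_tree({(), (1,)}))     # False: the sibling (0,) is missing
```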
== References ==
Comon, Hubert; Dauchet, Max; Gilleron, Rémi; Jacquemard, Florent; Lugiez, Denis; Löding, Christof; Tison, Sophie; Tommasi, Marc (November 2008). "Preliminaries". Tree Automata Techniques and Applications (PDF). Retrieved 11 February 2014.
In the theory of formal languages, the pumping lemma for regular languages is a lemma that describes an essential property of all regular languages. Informally, it says that all sufficiently long strings in a regular language may be pumped—that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language. The pumping lemma is useful for proving that a specific language is not a regular language, by showing that the language does not have the property.
Specifically, the pumping lemma says that for any regular language L, there exists a constant p such that any string w in L with length at least p can be split into three substrings x, y and z (w = xyz, with y being non-empty), such that the strings xz, xyz, xyyz, xyyyz, ... are also in L. The process of repeating y zero or more times is known as "pumping". Moreover, the pumping lemma guarantees that the length of xy will be at most p, thus giving a "small" substring xy that has the desired property.
Languages with a finite number of strings vacuously satisfy the pumping lemma by taking p equal to the maximum string length in L plus one, so that no string in L has length at least p.
The pumping lemma was first proven by Michael Rabin and Dana Scott in 1959, and rediscovered shortly after by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages.
== Formal statement ==
Let L be a regular language. Then there exists an integer p ≥ 1, depending only on L, such that every string w in L of length at least p (p is called the "pumping length") can be written as w = xyz (i.e., w can be divided into three substrings), satisfying the following conditions:
{\displaystyle |y|\geq 1}
{\displaystyle |xy|\leq p}
{\displaystyle (\forall n\geq 0)(xy^{n}z\in L)}
Here y is the substring that can be pumped (removed or repeated any number of times, with the resulting string always remaining in L). Condition (1) means the loop y to be pumped must have length at least one, that is, it is not the empty string; condition (2) means the loop must occur within the first p characters. |x| must be smaller than p (a consequence of (1) and (2)), but apart from that, there is no restriction on x and z.
In simple words, for any regular language L, any sufficiently long string w in L can be split into 3 parts, i.e. w = xyz, such that all the strings xyⁿz for n ≥ 0 are also in L.
Below is a formal expression of the pumping lemma.
{\displaystyle {\begin{array}{l}\forall L\subseteq \Sigma ^{*},{\mbox{regular}}(L)\implies \\\quad \exists p\geq 1,\forall w\in L,|w|\geq p\implies \\\qquad \exists x,y,z\in \Sigma ^{*},(w=xyz)\land (|y|\geq 1)\land (|xy|\leq p)\land (\forall n\geq 0,xy^{n}z\in L)\end{array}}}
== Use of the lemma to prove non-regularity ==
The pumping lemma is often used to prove that a particular language is non-regular: a proof by contradiction may consist of exhibiting a string (of the required length) in the language that lacks the property outlined in the pumping lemma.
Example: The language L = {aⁿbⁿ : n ≥ 0} over the alphabet Σ = {a,b} can be shown to be non-regular as follows:
Assume that some constant p ≥ 1 exists, as required by the lemma.
Let w in L be given by w = aᵖbᵖ, which is a string of length 2p ≥ p.
By the pumping lemma, there must exist a decomposition w = xyz with |xy| ≤ p and |y| ≥ 1 such that xyⁱz is in L for every i ≥ 0.
Since |xy| ≤ p, the string y consists only of instances of a.
Because |y| ≥ 1, y contains at least one instance of the letter a.
Pumping y to give xy²z yields a word with more instances of the letter a than of the letter b, since some instances of a but none of b were added.
Therefore, xy²z is not in L, which contradicts the pumping lemma.
Therefore, L cannot be regular.
The proof that the language of balanced (i.e., properly nested) parentheses is not regular follows the same idea. Given p, there is a string of balanced parentheses that begins with more than p left parentheses, so that y consists entirely of left parentheses. By repeating y, a string can be produced that does not contain the same number of left and right parentheses, and so it cannot be balanced.
== Proof of the pumping lemma ==
For every regular language there is a finite-state automaton (FSA) that accepts the language. The number of states in such an FSA are counted and that count is used as the pumping length
p
{\displaystyle p}
. For a string of length at least
p
{\displaystyle p}
, let
q
0
{\displaystyle q_{0}}
be the start state and let
q
1
,
.
.
.
,
q
p
{\displaystyle q_{1},...,q_{p}}
be the sequence of the next
p
{\displaystyle p}
states visited as the string is emitted. Because the FSA has only
p
{\displaystyle p}
states, within this sequence of
p
+
1
{\displaystyle p+1}
visited states there must be at least one state that is repeated. Write
q
s
{\displaystyle q_{s}}
for such a state. The transitions that take the machine from the first encounter of state
q
s
{\displaystyle q_{s}}
to the second encounter of state
q
s
{\displaystyle q_{s}}
match some string. This string is called
y
{\displaystyle y}
in the lemma, and since the machine will match a string without the
y
{\displaystyle y}
portion, or with the string
y
{\displaystyle y}
repeated any number of times, the conditions of the lemma are satisfied.
For example, the following image shows an FSA.
The FSA accepts the string: abcd. Since this string has a length at least as large as the number of states, which is four (so the total number of states that the machine passes through to scan abcd would be 5), the pigeonhole principle indicates that there must be at least one repeated state among the start state and the next four visited states. In this example, only
q_1 is a repeated state. Since the substring bc takes the machine through transitions that start at state q_1 and end at state q_1, that portion could be repeated and the FSA would still accept, giving the string abcbcd. Alternatively, the bc portion could be removed and the FSA would still accept, giving the string ad. In terms of the pumping lemma, the string abcd is broken into an x portion a, a y portion bc and a z portion d.
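A hypothetical four-state FSA consistent with this description (the state names and transitions below are reconstructed from the text, since the image itself is not reproduced here) can be simulated directly to exhibit the repeated state and the pumped strings:

```python
# Hypothetical FSA: it accepts "abcd", and the substring "bc"
# loops from state q1 back to q1.
delta = {("q0", "a"): "q1", ("q1", "b"): "q2",
         ("q2", "c"): "q1", ("q1", "d"): "q3"}
accepting = {"q3"}

def accepts(s):
    state = "q0"
    for ch in s:
        if (state, ch) not in delta:
            return False
        state = delta[(state, ch)]
    return state in accepting

def visited_states(s):
    state, seq = "q0", ["q0"]
    for ch in s:
        state = delta[(state, ch)]
        seq.append(state)
    return seq
```

Running `visited_states("abcd")` yields q0, q1, q2, q1, q3, with q1 repeated; removing or repeating the bc loop gives ad and abcbcd, which the machine still accepts.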
As a side remark, the problem of checking whether a given string can be accepted by a given nondeterministic finite automaton without visiting any state repeatedly is NP-hard.
== General version of pumping lemma for regular languages ==
If a language L is regular, then there exists a number p ≥ 1 (the pumping length) such that every string uwv in L with |w| ≥ p can be written in the form uwv = uxyzv with strings x, y and z such that |xy| ≤ p, |y| ≥ 1 and uxy^i zv is in L for every integer i ≥ 0.
From this, the above standard version follows as a special case, with both u and v being the empty string.
Since the general version imposes stricter requirements on the language, it can be used to prove the non-regularity of many more languages.
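The conditions of the general version can be made concrete with a small sketch; the language a*b and the pumping length p = 1 below are illustrative choices for demonstration, not values given by the lemma:

```python
import re

def check_general_decomposition(u, x, y, z, v, p, in_language, imax=3):
    """Check one concrete decomposition against the conditions of the
    general pumping lemma: |xy| <= p, |y| >= 1, and u x y^i z v in L
    for i = 0 .. imax."""
    assert len(x + y) <= p and len(y) >= 1
    return all(in_language(u + x + y * i + z + v) for i in range(imax + 1))

# Illustrative regular language a*b, checked against the decomposition
# uwv = "" + "aaa" + "b" with x = "", y = "a", z = "aa":
in_ab = lambda s: re.fullmatch(r"a*b", s) is not None
```

Here pumping y = "a" any number of times only changes the count of leading a's, so every pumped string stays in a*b.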
== Invalidity of the lemma converse ==
While the pumping lemma states that all regular languages satisfy the conditions described above, the converse of this statement is not true: a language that satisfies these conditions may still be non-regular. In other words, both the original and the general version of the pumping lemma give a necessary but not sufficient condition for a language to be regular.
For example, consider the following language:
L = { uvwxy : u, y ∈ {0,1,2,3}* ; v, w, x ∈ {0,1,2,3} ∧ (v = w ∨ v = x ∨ x = w) } ∪ { w : w ∈ {0,1,2,3}* ∧ precisely 1/7 of the characters in w are 3's }.
In other words, L contains all strings over the alphabet {0,1,2,3} with a substring of length 3 including a duplicate character, as well as all strings over this alphabet where precisely 1/7 of the string's characters are 3's. This language is not regular but can still be "pumped" with p = 5. Suppose some string s has length at least 5. Then, since the alphabet has only four characters, at least two of the first five characters in the string must be duplicates. They are separated by at most three characters.
If the duplicate characters are separated by 0 or 1 characters, pump one of the other two characters in the string, which will not affect the substring containing the duplicates.
If the duplicate characters are separated by 2 or 3 characters, pump 2 of the characters separating them. Pumping either down or up results in the creation of a substring of size 3 that contains 2 duplicate characters.
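The pigeonhole step of this argument can be confirmed exhaustively; the sketch below (illustrative only) checks all 4^5 strings of length 5 over {0,1,2,3} and records how far apart the closest duplicate pair lies:

```python
from itertools import product

def first_duplicate_gap(s):
    """Return the separation (in characters) of the closest duplicate
    pair within the first five characters, or None if there is none."""
    best = None
    for i in range(5):
        for j in range(i + 1, 5):
            if s[i] == s[j]:
                gap = j - i - 1
                best = gap if best is None else min(best, gap)
    return best

# Every length-5 string over a 4-letter alphabet has a duplicate pair,
# separated by 0 to 3 characters.
gaps = {first_duplicate_gap("".join(w)) for w in product("0123", repeat=5)}
```

The result contains no None and only the gaps 0 through 3, matching the two cases of the argument above.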
The second condition of L ensures that L is not regular: consider the string (013)^{3m}(012)^{i}. This string is in L exactly when i = 4m, and thus L is not regular by the Myhill–Nerode theorem.
The Myhill–Nerode theorem provides a test that exactly characterizes regular languages. The typical method for proving that a language is regular is to construct either a finite-state machine or a regular expression for the language.
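The Myhill–Nerode argument can be illustrated with a direct membership test for L, sketched below from the definition above. The prefixes (013)^{3m} are pairwise distinguishable, since the suffix (012)^{4m} completes exactly one of them to a string in L:

```python
from fractions import Fraction

def in_L(s):
    """Membership test for the example language over {0,1,2,3}:
    either some length-3 substring contains a duplicate character,
    or exactly 1/7 of the characters are 3's."""
    dup3 = any(len(set(s[i:i+3])) < 3 for i in range(len(s) - 2))
    seventh = len(s) > 0 and Fraction(s.count("3"), len(s)) == Fraction(1, 7)
    return dup3 or seventh
```

Neither (013)^{3m} nor (012)^{i} (nor their junction) ever contains a length-3 window with a duplicate, so membership hinges entirely on the 1/7 condition, which holds exactly when i = 4m.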
== See also ==
Ogden's lemma
Pumping lemma for context-free languages
Pumping lemma for regular tree languages
== Notes ==
== References ==
Lawson, Mark V. (2004). Finite automata. Chapman and Hall/CRC. ISBN 978-1-58488-255-8. Zbl 1086.68074.
Sipser, Michael (1997). "1.4: Nonregular Languages". Introduction to the Theory of Computation. PWS Publishing. pp. 77–83. ISBN 978-0-534-94728-6. Zbl 1169.68300.
Hopcroft, John E.; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation. Reading, Massachusetts: Addison-Wesley Publishing. ISBN 978-0-201-02988-8. Zbl 0426.68001. (See chapter 3.)
Bakhadyr Khoussainov; Anil Nerode (6 December 2012). Automata Theory and its Applications. Springer Science & Business Media. ISBN 978-1-4612-0171-7.
Transition refers to a computer science paradigm in the context of communication systems which describes the change of communication mechanisms, i.e., functions of a communication system, in particular, service and protocol components. In a transition, communication mechanisms within a system are replaced by functionally comparable mechanisms with the aim to ensure the highest possible quality, e.g., as captured by the quality of service.
Transitions enable communication systems to adapt to changing conditions during runtime. This change in conditions can, for example, be a rapid increase in the load on a certain service that may be caused, e.g., by large gatherings of people with mobile devices. A transition often impacts multiple mechanisms at different communication layers of a layered architecture.
Mechanisms are given as conceptual elements of a networked communication system and are linked to specific functional units, for example, as a service or protocol component. In some cases, a mechanism can also comprise an entire protocol; on the transmission layer, for example, LTE can be regarded as such a mechanism. Following this definition, there exist numerous communication mechanisms that are partly equivalent in their basic functionality, such as Wi-Fi, Bluetooth and Zigbee for local wireless networks and UMTS and LTE for broadband wireless connections. For example, LTE and Wi-Fi have equivalent basic functionality, but they are technologically significantly different in their design and operation.
Mechanisms affected by transitions are often components of a protocol or service. For example, in the case of video streaming, different video data encodings can be used depending on the available data transmission rate. These changes are controlled and implemented by transitions; a research example is a context-aware video adaptation service to support mobile video applications.
By analyzing the current processes in a communication system, it is possible to determine which transitions need to be executed at which communication layer in order to meet the quality requirements. For communication systems to adapt to prevailing conditions, architectural approaches of self-organizing, adaptive systems can be used, such as the MAPE cycle (Monitor-Analyze-Plan-Execute). This central concept of autonomic computing can be used to determine the state of the communication system, to analyze the monitoring data, and to plan and execute the necessary transition(s). A central goal is that users do not consciously perceive a transition while running applications and that the functionality of the used services is perceived as smooth and fluid.
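A minimal sketch of such a MAPE loop, with invented mechanism names and an assumed load threshold (none of these specifics come from the article), might look like this:

```python
# Illustrative MAPE (Monitor-Analyze-Plan-Execute) loop switching between
# two functionally comparable mechanisms when load crosses a threshold.
class TransitionEngine:
    def __init__(self):
        self.active = "Wi-Fi"                      # hypothetical default

    def monitor(self, network):
        return {"load": network["load"]}           # Monitor: collect state

    def analyze(self, data):
        return data["load"] > 0.8                  # Analyze: assumed threshold

    def plan(self):
        # Plan: pick the functionally comparable alternative mechanism.
        return "LTE" if self.active == "Wi-Fi" else "Wi-Fi"

    def execute(self, target):
        self.active = target                       # Execute the transition

    def step(self, network):
        if self.analyze(self.monitor(network)):
            self.execute(self.plan())
        return self.active
```

Under low load the engine keeps its current mechanism; a load spike (e.g. a large gathering of mobile devices) triggers a transition to the alternative.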
== Recent research ==
The study of new and fundamental design methods, models and techniques that enable automated, coordinated and cross-layer transitions between functionally similar mechanisms within a communication system is the main goal of a collaborative research center funded by the German research foundation (DFG). The DFG collaborative research center 1053 MAKI - Multi-mechanism Adaptation for the future Internet - focuses on research questions in the following areas: (i) Fundamental research on transition methods, (ii) Techniques for adapting transition-capable communication systems on the basis of achieved and targeted quality, and (iii) specific and exemplary transitions in communication systems as regarded from different technical perspectives.
A formalization of the concept of transitions that captures the features and relations within a communication system to express and optimize the decision making process that is associated with such a system is given in. The associated building blocks comprise (i) Dynamic Software Product Lines, (ii) Markov Decision Processes and (iii) Utility Design. While Dynamic Software Product Lines provide a method to concisely capture a large configuration space and to specify run time variability of adaptive systems, Markov Decision Processes provide a mathematical tool to define and plan transitions between available communication mechanisms. Finally, utility functions quantify the performance of individual configurations of the transition-based communication system and provide the means to optimize the performance in such a system.
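As an illustration of utility design, the sketch below scores configurations of a transition-capable system and selects the one with the highest utility; the mechanism names, throughput/energy figures, and weights are all invented for the example:

```python
# Hypothetical configuration space with per-mechanism performance figures.
configs = {
    "Wi-Fi":     {"throughput": 50.0, "energy": 1.0},
    "LTE":       {"throughput": 30.0, "energy": 2.0},
    "Bluetooth": {"throughput": 2.0,  "energy": 0.1},
}

def utility(cfg, w_throughput=1.0, w_energy=10.0):
    """Higher throughput raises utility; higher energy draw lowers it."""
    return w_throughput * cfg["throughput"] - w_energy * cfg["energy"]

def best_mechanism(weights=None):
    weights = weights or {}
    return max(configs, key=lambda name: utility(configs[name], **weights))
```

Changing the weights models changing conditions: a battery-constrained device would weight energy heavily and transition to the low-power mechanism.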
Applications of the idea of transitions have found their way to wireless sensor networks and mobile networks, distributed reactive programming, WiFi firmware modification, planning of autonomic computing systems, analysis of CDNs, flexible extensions of the ISO OSI stack, 5G mmWave vehicular communications, the analysis of MapReduce-like parallel systems, scheduling of Multipath TCP, adaptivity for beam training in 802.11ad, operator placement in dynamic user environments, DASH video player analysis, adaptive bitrate streaming and complex event processing on mobile devices.
== References ==
== External links ==
MAKI
Processor design is a subfield of computer science and computer engineering (fabrication) that deals with creating a processor, a key component of computer hardware.
The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, this description is then manufactured employing some of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB).
The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values and to control program flow.
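As an illustration (a toy machine, not any real instruction set), the following sketch executes a list of instructions of exactly these kinds: register arithmetic, a memory store, a comparison test, and a conditional jump for control flow:

```python
# Toy register machine: fetch-decode-execute over a list of instructions.
def run(program, steps=1000):
    regs, mem, pc = {}, {}, 0
    while 0 <= pc < len(program) and steps:
        steps -= 1
        op, *args = program[pc]
        pc += 1
        if op == "li":      regs[args[0]] = args[1]                    # load immediate
        elif op == "add":   regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "store": mem[args[0]] = regs[args[1]]               # write memory
        elif op == "blt":   pc = args[2] if regs[args[0]] < regs[args[1]] else pc
    return regs, mem

# Sum 0..4 into r2, then store the result at memory address 0.
prog = [
    ("li", "r0", 0), ("li", "r1", 5), ("li", "r2", 0), ("li", "r3", 1),
    ("add", "r2", "r2", "r0"),   # r2 += r0
    ("add", "r0", "r0", "r3"),   # r0 += 1
    ("blt", "r0", "r1", 4),      # loop while r0 < 5
    ("store", 0, "r2"),
]
```

The loop formed by the conditional branch is the "control program flow" case; the store is the "change or retrieve values in read/write memory" case.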
Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for semiconductor fabrication.
== Details ==
=== Basics ===
CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines), which are controlled through logic by control units. Memory components, including register files and caches, retain information. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent, and a logic gate cell library is used to implement the logic. Logic gates are the foundation of processor design, as they are used to implement most of the processor's components.
CPUs designed for high-performance markets might require custom (optimized or application specific (see below)) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals whereas CPUs designed for lower performance markets might lessen the implementation burden by acquiring some of these items by purchasing them as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and Programmable logic arrays (common in the 1980s, no longer common).
=== Implementation logic ===
Device types used to implement the logic include:
Individual vacuum tubes, individual transistors and semiconductor diodes, and transistor-transistor logic small-scale integration logic chips – no longer used for CPUs
Programmable array logic and programmable logic devices – no longer used for CPUs
Emitter-coupled logic (ECL) gate arrays – no longer common
CMOS gate arrays – no longer used for CPUs
CMOS mass-produced ICs – the vast majority of CPUs by volume
CMOS ASICs – only for a minority of special applications due to expense
Field-programmable gate arrays (FPGA) – common for soft microprocessors, and more or less required for reconfigurable computing
A CPU design project generally has these major tasks:
Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures
Architectural study and performance modeling in ANSI C/C++ or SystemC
High-level synthesis (HLS) or register transfer level (RTL, e.g. logic) implementation
RTL verification
Circuit design of speed critical components (caches, registers, ALUs)
Logic synthesis or logic-gate-level design
Timing analysis to confirm that all logic and circuits will run at the specified operating frequency
Physical design including floorplanning, place and route of logic gates
Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent
Checks for signal integrity, chip manufacturability
Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost.
As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU.
Key CPU architectural innovations include index register, cache, virtual memory, instruction pipelining, superscalar, CISC, RISC, virtual machine, emulators, microprogram, and stack.
=== Microarchitectural concepts ===
=== Research topics ===
A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing.
=== Performance analysis and benchmarking ===
Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by Standard Performance Evaluation Corporation, and ConsumerMark developed by the Embedded Microprocessor Benchmark Consortium EEMBC.
Some of the commonly used metrics include:
Instructions per second - Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see Megahertz Myth).
FLOPS - The number of floating point operations per second is often important in selecting computers for scientific computations.
Performance per watt - System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
Some system designers building parallel computers pick CPUs based on the speed per dollar.
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has deterministic response. (DSP)
Computer programmers who program directly in assembly language want a CPU to support a full featured instruction set.
Low power - For systems with limited power sources (e.g. solar, batteries, human power).
Small size or low weight - for portable embedded systems, systems for spacecraft.
Environmental impact - Minimizing environmental impact of computers during manufacturing and recycling as well during use. Reducing waste, reducing hazardous materials. (see Green computing).
There may be tradeoffs in optimizing some of these metrics. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.
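These tradeoffs can be made concrete with a toy comparison; all CPU names and figures below are invented for illustration:

```python
# Hypothetical CPUs ranked under the different metrics discussed above.
cpus = {
    "fast":      {"mips": 50000, "watts": 125, "dollars": 400},
    "efficient": {"mips": 12000, "watts": 10,  "dollars": 150},
    "budget":    {"mips": 20000, "watts": 65,  "dollars": 90},
}

def best_by(metric):
    return max(cpus, key=lambda name: metric(cpus[name]))

fastest    = best_by(lambda c: c["mips"])                 # raw instructions/s
per_watt   = best_by(lambda c: c["mips"] / c["watts"])    # performance per watt
per_dollar = best_by(lambda c: c["mips"] / c["dollars"])  # performance per dollar
```

Each metric crowns a different winner, which is exactly the tension the paragraph above describes: optimizing one figure of merit typically sacrifices another.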
== Markets ==
There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets.
=== General-purpose computing ===
As of 2010, in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel IA-32 and the 64-bit version x86-64 architecture dominate the market, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.
Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently has made these CPU designs among the more advanced technically, along with some disadvantages of being relatively costly, and having high power consumption.
==== High-end processor economics ====
In 1984, most high-performance CPUs required four to five years to develop.
=== Scientific computing ===
Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs.
=== Embedded design ===
As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in the volume of many billions of units per year, however, mostly at much lower price points than that of the general purpose processors.
These single-function devices differ from the more familiar general-purpose CPUs in several ways:
Low cost is of high importance.
It is important to maintain a low power dissipation as embedded devices often have a limited battery life and it is often impractical to include cooling fans.
To give lower system cost, peripherals are integrated with the processor on the same silicon chip.
Keeping peripherals on-chip also reduces power consumption as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip.
Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board.
The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller.
For many embedded applications, interrupt latency will be more critical than in some general-purpose processors.
==== Embedded processor economics ====
The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon.
As of 2009, more CPUs are produced using the ARM architecture family instruction sets than any other 32-bit instruction set.
The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time.
The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 human years of work time.
The 8-bit AVR architecture and first AVR microcontroller was conceived and designed by two students at the Norwegian Institute of Technology.
The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people.
==== Research and educational CPU design ====
The 32-bit Berkeley RISC I and RISC II processors were mostly designed by a series of students as part of a four quarter sequence of graduate courses.
This design became the basis of the commercial SPARC processor design.
For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8 bit CPU out of 7400 series integrated circuits.
One team of 4 students designed and built a simple 32 bit CPU during that semester.
Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in a FPGA in a single 15-week semester.
The MultiTitan CPU was designed with 2.5 man years of effort, which was considered "relatively little design effort" at the time.
24 people contributed to the 3.5 year MultiTitan research project, which included designing and building a prototype CPU.
==== Soft microprocessor cores ====
For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.
== See also ==
Amdahl's law
Central processing unit
Comparison of instruction set architectures
Complex instruction set computer
CPU cache
Electronic design automation
Heterogeneous computing
High-level synthesis
History of general-purpose CPUs
Integrated circuit design
Microarchitecture
Microprocessor
Minimal instruction set computer
Moore's law
Reduced instruction set computer
System on a chip
Network on a chip
Process design kit – a set of documents created or accumulated for a semiconductor device production process
Uncore
== References ==
=== General references ===
Hwang, Enoch (2006). Digital Logic and Microprocessor Design with VHDL. Thomson. ISBN 0-534-46593-5.
Processor Design: An Introduction
Static application security testing (SAST) is used to secure software by reviewing the source code of the software to identify sources of vulnerabilities. Although the process of checking programs by reading their code (modernly known as static program analysis) has existed as long as computers have existed, the technique spread to security in the late 1990s, with the first public discussion of SQL injection in 1998, as Web applications integrated new technologies like JavaScript and Flash.
Unlike dynamic application security testing (DAST) tools for black-box testing of application functionality, SAST tools focus on the code content of the application, white-box testing.
A SAST tool scans the source code of applications and its components to identify potential security vulnerabilities in their software and architecture.
Static analysis tools can detect an estimated 50% of existing security vulnerabilities.
In the software development life cycle (SDLC), SAST is performed early in the development process and at code level, and also when all pieces of code and components are put together in a consistent testing environment. SAST is also used for software quality assurance, even if the many resulting false positives impede its adoption by developers.
SAST tools are integrated into the development process to help development teams as they are primarily focusing on developing and delivering software respecting requested specifications.
SAST tools, like other security tools, focus on reducing the risk that applications suffer downtime or that private information stored in applications is compromised.
For the year of 2018, the Privacy Rights Clearinghouse database shows that more than 612 million records have been compromised by hacking.
== Overview ==
Application security tests examine applications around their release: static application security testing (SAST), dynamic application security testing (DAST), and interactive application security testing (IAST), a combination of the two.
Static analysis tools examine the text of a program syntactically. They look for a fixed set of patterns or rules in the source code. Theoretically, they can also examine a compiled form of the software. This technique relies on instrumentation of the code to do the mapping between compiled components and source code components to identify issues.
Static analysis can be done manually as a code review or auditing of the code for different purposes, including security, but it is time-consuming.
The precision of a SAST tool is determined by its scope of analysis and the specific techniques used to identify vulnerabilities. Different levels of analysis include:
function level - sequences of instructions.
file or class-level - an extensible program-code-template for object creation.
application level - a program or group of programs that interact.
The scope of the analysis determines its accuracy and capacity to detect vulnerabilities using contextual information. Unlike DAST, SAST tools give developers real-time feedback and help them fix flaws before they pass the code to the next level.
At a function level, a common technique is the construction of an abstract syntax tree to follow the flow of data within the function.
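As a minimal illustration of function-level analysis (a deliberately tiny rule, not a real SAST engine), Python's standard ast module can walk a syntax tree and flag calls to eval(), a classic injection-prone pattern:

```python
import ast

def find_eval_calls(source):
    """Return the line numbers of direct eval() calls in the source.
    Real SAST tools use far richer rule sets and data-flow tracking."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

snippet = "x = input()\nresult = eval(x)\n"
```

Running `find_eval_calls(snippet)` reports line 2, where user input flows into eval; clean code produces no findings.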
Since late 90s, the need to adapt to business challenges has transformed software development with componentization enforced by processes and organization of development teams.
Following the flow of data between all the components of an application or group of applications allows validation that required calls to dedicated sanitization procedures are made and that proper actions are taken on tainted data in specific pieces of code.
The rise of web applications entailed testing them: the Verizon Data Breach report of 2016 found that 40% of all data breaches exploited web application vulnerabilities.
As well as external security validations, there is a rise in focus on internal threats. The Clearswift Insider Threat Index (CITI) has reported that 92% of their respondents in a 2015 survey said they had experienced IT or security incidents in the previous 12 months and that 74% of these breaches were originated by insiders. Lee Hadlington categorized internal threats in 3 categories: malicious, accidental, and unintentional. Mobile applications' explosive growth implies securing applications earlier in the development process to reduce malicious code development.
== SAST strengths ==
The earlier a vulnerability is fixed in the SDLC, the cheaper it is to fix. Costs to fix in development are 10 times lower than in testing, and 100 times lower than in production.
SAST tools run automatically, either at the code level or application-level and do not require interaction. When integrated into a CI/CD context, SAST tools can be used to automatically stop the integration process if critical vulnerabilities are identified.
Because the tool scans the entire source-code, it can cover 100% of it, while dynamic application security testing covers its execution possibly missing part of the application, or unsecured configuration in configuration files.
SAST tools can offer extended functionalities such as quality and architectural testing. There is a direct correlation between the quality and the security. Bad quality software is also poorly secured software.
== SAST weaknesses ==
Even though developers are positive about the usage of SAST tools, there are different challenges to their adoption. The usability of the output generated by these tools may limit how much developers can make use of them. Research shows that despite the long output generated by these tools, they may lack usability.
With Agile Processes in software development, early integration of SAST generates many bugs, as developers using this framework focus first on features and delivery.
Scanning many lines of code with SAST tools may result in hundreds or thousands of vulnerability warnings for a single application. It can generate many false-positives, increasing investigation time and reducing trust in such tools. This is particularly the case when the context of the vulnerability cannot be caught by the tool.
== See also ==
Security testing
Lint (software)
Dynamic application security testing
Interactive application security testing
Static program analysis
== References ==
In physical security and information security, access control (AC) is the action of deciding whether a subject should be granted or denied access to an object (for example, a place or a resource). The act of accessing may mean consuming, entering, or using. It is often used interchangeably with authorization, although the authorization may be granted well in advance of the access control decision.
Access control on digital platforms is also termed admission control. The protection of external databases is essential to preserve digital security.
Access control is considered to be a significant aspect of privacy that should be further studied. Access control policy (also access policy) is part of an organization’s security policy. In order to verify the access control policy, organizations use an access control model. General security policies require designing or selecting appropriate security controls to satisfy an organization's risk appetite - access policies similarly require the organization to design or select access controls.
Broken access control is often listed as the number one risk in web applications. On the basis of the "principle of least privilege", consumers should only be authorized to access whatever they need to do their jobs, and nothing more.
== Physical security ==
Geographical access control may be enforced by personnel (e.g. border guard, bouncer, ticket checker), or with a device such as a turnstile. There may be fences to avoid circumventing this access control. An alternative of access control in the strict sense (physically controlling access itself) is a system of checking authorized presence, see e.g. Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.
The term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the mantrap. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.
=== Electronic access control ===
Electronic access control (EAC) uses computers to solve the limitations of mechanical locks and keys. It is particularly difficult to guarantee identification (a critical component of authentication) with mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys, allowing for complete authentication, authorization, and accounting. The electronic access control system grants access based on the credential presented. When access is granted, the resource is unlocked for a predetermined time and the transaction is recorded. When access is refused, the resource remains locked and the attempted access is recorded. The system will also monitor the resource and alarm if the resource is forcefully unlocked or held open too long after being unlocked.
When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the resource. The control panel also ignores an opening signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing green LED for an access granted.
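The decision sequence described above can be sketched as a simple lookup. This is a minimal illustration, not real panel firmware; the door names, credential numbers, and data structures are invented:

```python
# Minimal sketch of a control panel's decision logic: compare a presented
# credential number against an access control list, then log the transaction.
access_control_list = {
    "server_room": {1001, 1002},        # credential numbers authorized per door
    "lobby": {1001, 1002, 1003},
}
transaction_log = []

def present_credential(door: str, credential: int) -> bool:
    granted = credential in access_control_list.get(door, set())
    transaction_log.append((door, credential, "granted" if granted else "denied"))
    return granted  # True -> operate the relay and unlock the resource
```

A real panel would additionally suppress the door-forced alarm and re-lock after a predetermined time, as described above.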
The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room, but Bob does not. Alice either gives Bob her credential, or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted; another factor can be a PIN, a second credential, operator intervention, or a biometric input.
There are three types (factors) of authenticating information:
something the user knows, e.g. a password, pass-phrase or PIN
something the user has, such as a smart card or a key fob
something the user is, such as the user's fingerprint, verified by biometric measurement
Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, whereby another person who knows you can provide a human element of authentication in situations where systems have been set up to allow for such scenarios. For example, a user may have their password, but have forgotten their smart card. In such a scenario, if the user is known to designated cohorts, the cohorts may provide their smart card and password, in combination with the extant factor of the user in question, and thus provide two factors for the user with the missing credential, giving three factors overall to allow access.
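A two-factor transaction combining the first two factors can be sketched as follows (the enrollment records and values are hypothetical, for illustration only):

```python
# Sketch of a two-factor check: access requires both a valid credential
# (something the user has) and the matching PIN (something the user knows).
enrolled = {
    "alice": {"credential": 7001, "pin": "4321"},
}

def two_factor_access(user: str, credential: int, pin: str) -> bool:
    record = enrolled.get(user)
    if record is None:
        return False
    # Both factors must match; a borrowed card alone is not sufficient.
    return record["credential"] == credential and record["pin"] == pin
```

Under this scheme, Bob presenting Alice's card without her PIN is denied, addressing the credential-sharing weakness described above.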
=== Credential ===
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something a person knows (such as a number or PIN), something they have (such as an access badge), something they are (such as a biometric feature), something they do (measurable behavioural patterns), or some combination of these items. This is known as multi-factor authentication. The typical credential is an access card or key-fob, and newer software can also turn users' smartphones into access devices.
There are many card technologies including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key-fobs, which are more compact than ID cards, and attach to a key ring. Biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry. The built-in biometric technologies found on newer smartphones can also be used as credentials in conjunction with access software running on mobile devices. In addition to older more traditional card access technologies, newer technologies such as near-field communication (NFC), Bluetooth low energy or Ultra-wideband (UWB) can also communicate user credentials to readers for system or building access.
=== Access control system components ===
Components of an access control system include:
An access control panel (also known as a controller)
An access-controlled entry, such as a door, turnstile, parking gate, elevator, or other physical barrier
A reader installed near the entry. (In cases where the exit is also controlled, a second reader is used on the opposite side of the entry.)
Locking hardware, such as electric door strikes and electromagnetic locks
A magnetic door switch for monitoring door position
Request-to-exit (RTE) devices for allowing egress. When an RTE button is pushed, or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.
=== Access control topology ===
Access control decisions are made by comparing the credentials to an access control list. This look-up can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has observed a steady push of the look-up out from a central host to the edge of the system, or the reader. The predominant topology circa 2009 is hub and spoke with a control panel as the hub, and the readers as the spokes. The look-up and control functions are performed by the control panel. The spokes communicate through a serial connection, usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP enabled, and connect to a host and database using standard networks.
=== Types of readers ===
Access control readers may be classified by the functions they are able to perform:
Basic (non-intelligent) readers: simply read a card number or PIN, and forward it to a control panel. In case of biometric identification, such readers output the ID number of a user. Typically, the Wiegand protocol is used for transmitting data to the control panel, but other options such as RS-232, RS-485 and Clock/Data are not uncommon. This is the most popular type of access control reader. Examples of such readers are RF Tiny by RFLOGICS, ProxPoint by HID, and P300 by Farpointe Data.
Semi-intelligent readers: have all inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends information to the main controller, and waits for its response. If the connection to the main controller is interrupted, such readers stop working, or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of such readers are InfoProx Lite IPL200 by CEM Systems, and AP-510 by Apollo.
Intelligent readers: have all inputs and outputs necessary to control door hardware; they also have memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates, and retrieves events from the readers. Examples of such readers could be InfoProx IPO200 by CEM Systems, and AP-500 by Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers usually do not have traditional control panels, and readers communicate directly to a PC that acts as a host.
Some readers may have additional features such as an LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support.
=== Access control system topologies ===
1. Serial controllers. Controllers are connected to a host PC via a serial RS-485 communication line (or via 20mA current loop in some older systems). External RS-232/485 converters or internal RS-485 cards have to be installed, as standard PCs do not have RS-485 communication ports.
Advantages:
RS-485 standard allows long cable runs, up to 4000 feet (1200 m)
Relatively short response time. The maximum number of devices on an RS-485 line is limited to 32, which means that the host can frequently request status updates from each device, and display events almost in real time.
High reliability and security as the communication line is not shared with any other systems.
Disadvantages:
RS-485 does not allow Star-type wiring unless splitters are used
RS-485 is not well suited for transferring large amounts of data (i.e. configuration and users). The highest possible throughput is 115.2 kbit/sec, but in most systems it is downgraded to 56.2 kbit/sec, or less, to increase reliability.
RS-485 does not allow the host PC to communicate with several controllers connected to the same port simultaneously. Therefore, in large systems, transfers of configuration and user data to controllers may take a very long time, interfering with normal operations.
Controllers cannot initiate communication in case of an alarm. The host PC acts as a master on the RS-485 communication line, and controllers have to wait until they are polled.
Special serial switches are required, in order to build a redundant host PC setup.
Separate RS-485 lines have to be installed, instead of using an already existing network infrastructure.
Cable that meets RS-485 standards is significantly more expensive than regular Category 5 UTP network cable.
Operation of the system is highly dependent on the host PC. In the case that the host PC fails, events from controllers are not retrieved, and functions that require interaction between controllers (i.e. anti-passback) stop working.
2. Serial main and sub-controllers. All door hardware is connected to sub-controllers (a.k.a. door controllers or door interfaces). Sub-controllers usually do not make access decisions, and instead forward all requests to the main controllers. Main controllers usually support from 16 to 32 sub-controllers.
Advantages:
Work load on the host PC is significantly reduced, because it only needs to communicate with a few main controllers.
The overall cost of the system is lower, as sub-controllers are usually simple and inexpensive devices.
All other advantages listed in the first paragraph apply.
Disadvantages:
Operation of the system is highly dependent on main controllers. In case one of the main controllers fails, events from its sub-controllers are not retrieved, and functions that require interaction between sub-controllers (i.e. anti-passback) stop working.
Some models of sub-controllers (usually lower cost) do not have the memory or processing power to make access decisions independently. If the main controller fails, sub-controllers change to degraded mode in which doors are either completely locked or unlocked, and no events are recorded. Such sub-controllers should be avoided, or used only in areas that do not require high security.
Main controllers tend to be expensive, therefore such a topology is not very well suited for systems with multiple remote locations that have only a few doors.
All other RS-485-related disadvantages listed in the first paragraph apply.
3. Serial main controllers & intelligent readers. All door hardware is connected directly to intelligent or semi-intelligent readers. Readers usually do not make access decisions, and forward all requests to the main controller. Only if the connection to the main controller is unavailable will the readers use their internal database to make access decisions and record events. Semi-intelligent readers that have no database and cannot function without the main controller should be used only in areas that do not require high security. Main controllers usually support from 16 to 64 readers. All advantages and disadvantages are the same as the ones listed in the second paragraph.
4. Serial controllers with terminal servers. In spite of the rapid development and increasing use of computer networks, access control manufacturers remained conservative, and did not rush to introduce network-enabled products. When pressed for solutions with network connectivity, many chose the option requiring less effort: the addition of a terminal server, a device that converts serial data for transmission via LAN or WAN.
Advantages:
Allows utilizing the existing network infrastructure for connecting separate segments of the system.
Provides a convenient solution in cases when the installation of an RS-485 line would be difficult or impossible.
Disadvantages:
Increases complexity of the system.
Creates additional work for installers: usually terminal servers have to be configured independently, and not through the interface of the access control software.
The serial communication link between the controller and the terminal server acts as a bottleneck: even though the data between the host PC and the terminal server travels at the 10/100/1000 Mbit/sec network speed, it must slow down to the serial speed of 115.2 kbit/sec or less. There are also additional delays introduced in the process of conversion between serial and network data.
All the RS-485-related advantages and disadvantages also apply.
5. Network-enabled main controllers. The topology is nearly the same as described in the second and third paragraphs. The same advantages and disadvantages apply, but the on-board network interface offers a couple of valuable improvements. Transmission of configuration and user data to the main controllers is faster, and may be done in parallel. This makes the system more responsive, and does not interrupt normal operations. No special hardware is required in order to achieve redundant host PC setup: in the case that the primary host PC fails, the secondary host PC may start polling network controllers. The disadvantages introduced by terminal servers (listed in the fourth paragraph) are also eliminated.
6. IP controllers. Controllers are connected to a host PC via Ethernet LAN or WAN.
Advantages:
An existing network infrastructure is fully utilized, and there is no need to install new communication lines.
There are no limitations regarding the number of controllers (unlike the limit of 32 per line in the case of RS-485).
Special RS-485 installation, termination, grounding and troubleshooting knowledge is not required.
Communication with the controllers may be done at the full network speed, which is important if transferring a lot of data (databases with thousands of users, possibly including biometric records).
In case of an alarm, controllers may initiate connection to the host PC. This ability is important in large systems, because it serves to reduce network traffic caused by unnecessary polling.
Simplifies installation of systems consisting of multiple sites that are separated by large distances. A basic Internet link is sufficient to establish connections to the remote locations.
Wide selection of standard network equipment is available to provide connectivity in various situations (fiber, wireless, VPN, dual path, PoE)
Disadvantages:
The system becomes susceptible to network related problems, such as delays in case of heavy traffic and network equipment failures.
Access controllers and workstations may become accessible to hackers if the network of the organization is not well protected. This threat may be eliminated by physically separating the access control network from the network of the organization. Most IP controllers utilize either the Linux platform or proprietary operating systems, which makes them more difficult to hack. Industry standard data encryption is also used.
Maximum distance from a hub or a switch to the controller (if using a copper cable) is 100 meters (330 ft).
Operation of the system is dependent on the host PC. In case the host PC fails, events from controllers are not retrieved and functions that require interaction between controllers (i.e. anti-passback) stop working. Some controllers, however, have a peer-to-peer communication option in order to reduce dependency on the host PC.
7. IP readers. Readers are connected to a host PC via Ethernet LAN or WAN.
Advantages:
Most IP readers are PoE capable. This feature makes it very easy to provide battery backed power to the entire system, including the locks and various types of detectors (if used).
IP readers eliminate the need for controller enclosures.
There is no wasted capacity when using IP readers (e.g. a 4-door controller would have 25% of unused capacity if it was controlling only 3 doors).
IP reader systems scale easily: there is no need to install new main or sub-controllers.
Failure of one IP reader does not affect any other readers in the system.
Disadvantages:
In order to be used in high-security areas, IP readers require special input/output modules to eliminate the possibility of intrusion by accessing lock and/or exit button wiring. Not all IP reader manufacturers have such modules available.
Being more sophisticated than basic readers, IP readers are also more expensive and sensitive, therefore they should not be installed outdoors in areas with harsh weather conditions, or high probability of vandalism, unless specifically designed for exterior installation. A few manufacturers make such models.
The advantages and disadvantages of IP controllers apply to the IP readers as well.
=== Security risks ===
The most common security risk of intrusion through an access control system is by simply following a legitimate user through a door, and this is referred to as tailgating. Often the legitimate user will hold the door for the intruder. This risk can be minimized through security awareness training of the user population or more active means such as turnstiles. In very high-security applications this risk is minimized by using a sally port, sometimes called a security vestibule or mantrap, where operator intervention is required presumably to assure valid identification.
The second most common risk is from levering a door open. This is relatively difficult on properly secured doors with strikes or high holding force magnetic locks. Fully implemented access control systems include forced door monitoring alarms. These vary in effectiveness, usually failing from high false positive alarms, poor database configuration, or lack of active intrusion monitoring. Most newer access control systems incorporate some type of door prop alarm to inform system administrators of a door left open longer than a specified length of time.
The third most common security risk is natural disasters. In order to mitigate risk from natural disasters, the structure of the building, down to the quality of the network and computer equipment, is vital. From an organizational perspective, the leadership will need to adopt and implement an All Hazards Plan, or Incident Response Plan. The highlights of any incident plan determined by the National Incident Management System must include pre-incident planning, during-incident actions, disaster recovery, and after-action review.
Similar to levering is crashing through cheap partition walls. In shared tenant spaces, the divisional wall is a vulnerability. A vulnerability along the same lines is the breaking of sidelights.
Spoofing locking hardware is fairly simple and more elegant than levering. A strong magnet can operate the solenoid controlling bolts in electric locking hardware. Motor locks, more prevalent in Europe than in the US, are also susceptible to this attack using a doughnut-shaped magnet. It is also possible to manipulate the power to the lock either by removing or adding current, although most access control systems incorporate battery back-up systems and the locks are almost always located on the secure side of the door.
Access cards themselves have proven vulnerable to sophisticated attacks. Enterprising hackers have built portable readers that capture the card number from a user's proximity card. The hacker simply walks by the user, reads the card, and then presents the number to a reader securing the door. This is possible because card numbers are sent in the clear, no encryption being used. To counter this, dual authentication methods, such as a card plus a PIN, should always be used.
Many access control credentials' unique serial numbers are programmed in sequential order during manufacturing. Known as a sequential attack, if an intruder has a credential once used in the system, they can simply increment or decrement the serial number until they find a credential that is currently authorized in the system. Ordering credentials with random unique serial numbers is recommended to counter this threat.
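The weakness of sequential issuance, and the randomization countermeasure, can be sketched as follows (the 32-bit serial space is an assumption for illustration, not a property of any particular card technology):

```python
import secrets

# Sequential issuance: knowing one valid serial lets an attacker enumerate
# its neighbors until another currently authorized credential is found.
def sequential_batch(start, count):
    return [start + i for i in range(count)]

# Randomized issuance: serials are drawn from a large space, so a known
# serial's neighbors reveal nothing about other valid credentials.
def random_batch(count, space=2**32):
    return [secrets.randbelow(space) for _ in range(count)]
```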
Finally, most electric locking hardware still has mechanical keys as a fail-over. Mechanical key locks are vulnerable to bumping.
== Computer security ==
In computer security, general access control includes authentication, authorization, and audit. A more narrow definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric analysis, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems.
In any access-control model, the entities that can perform actions on the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human users can only have an effect on the system via the software entities that they control.
Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the principle of least privilege, and arguably is responsible for the prevalence of malware in such systems (see computer insecurity).
In some models, for example the object-capability model, any software entity can potentially act as both subject and object.
As of 2014, access-control models tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs).
In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of one's house key grants one access to one's house); access is conveyed to another party by transmitting such a capability over a secure channel.
In an ACL-based model, a subject's access to an object depends on whether its identity appears on a list associated with the object (roughly analogous to how a bouncer at a private party would check an ID to see if a name appears on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject).
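The contrast between the two model families can be illustrated with a toy sketch (class names, subject names, and data are invented for illustration): in the ACL case the object holds the list, while in the capability case possession of a reference is itself the right.

```python
# ACL-based model: the object (here, a party door) keeps a guest list,
# and a subject's identity is checked against it.
class AclDoor:
    def __init__(self, guest_list):
        self.guest_list = set(guest_list)

    def enter(self, subject_name):
        return subject_name in self.guest_list   # access via the list

# Capability-based model: access is the possession of an unforgeable
# reference; here the "house key" is simply a reference to an unlock
# closure, and whoever holds the reference can invoke it.
def make_house():
    def house_key():
        return "unlocked"
    return house_key

party_door = AclDoor(["alice"])
house_key = make_house()   # transmitting house_key conveys access
```

In the ACL case access is changed by editing `guest_list`; in the capability case it is changed by handing the `house_key` reference to another party.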
Access control systems provide the essential services of authorization, identification and authentication (I&A), access approval, and accountability where:
authorization specifies what a subject can do
identification and authentication ensure that only legitimate subjects can log on to a system
access approval grants access during operations, by association of users with the resources that they are allowed to access, based on the authorization policy
accountability identifies what a subject (or all subjects associated with a user) did
=== Access control models ===
Access to accounts can be enforced through many types of controls.
Attribute-based Access Control (ABAC): An access control paradigm whereby access rights are granted to users through the use of policies which evaluate attributes (user attributes, resource attributes and environment conditions).
Discretionary Access Control (DAC): In DAC, the data owner determines who can access specific resources. For example, a system administrator may create a hierarchy of files to be accessed based on certain permissions.
Graph-based Access Control (GBAC): Compared to other approaches like RBAC or ABAC, the main difference is that in GBAC access rights are defined using an organizational query language instead of total enumeration.
History-Based Access Control (HBAC): Access is granted or declined based on the real-time evaluation of a history of activities of the inquiring party, e.g. behavior, time between requests, content of requests. For example, access to a certain service or data source can be granted or declined based on personal behavior, e.g. if the request interval exceeds one query per second.
History-of-Presence Based Access Control (HPBAC): Access control to resources is defined in terms of presence policies that need to be satisfied by presence records stored by the requestor. Policies are usually written in terms of frequency, spread and regularity. An example policy would be "The requestor has made k separate visitations, all within the last week, and no two consecutive visitations are apart by more than T hours."
Identity-Based Access Control (IBAC): Using this, network administrators can more effectively manage activity and access based on individual needs.
Lattice-Based Access Control (LBAC): A lattice is used to define the levels of security that an object may have and that a subject may have access to. The subject is only allowed to access an object if the security level of the subject is greater than or equal to that of the object.
Mandatory Access Control (MAC): In MAC, users do not have much freedom to determine who has access to their files. For example, security clearance of users and classification of data (as confidential, secret or top secret) are used as security labels to define the level of trust.
Organization-Based Access Control (OrBAC): The OrBAC model allows the policy designer to define a security policy independently of the implementation.
Relationship-Based Access Control (ReBAC): A subject's permission to access a resource is defined by the presence of relationships between those subjects and resources.
Role-Based Access Control (RBAC): RBAC allows access based on the job title. RBAC largely eliminates discretion when providing access to objects. For example, a human resources specialist should not have permissions to create network accounts; this should be a role reserved for network administrators.
Rule-Based Access Control (RAC): The RAC method, also referred to as Rule-Based Role-Based Access Control (RB-RBAC), is largely context based. An example would be allowing students to use labs only during a certain time of day; it is the combination of students' RBAC-based information system access control with the time-based lab access rules.
Responsibility-Based Access Control: Information is accessed based on the responsibilities assigned to an actor or a business role.
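Two of the models above, role-based and lattice-based, can be illustrated with toy checks (the role names, permissions, and level assignments are invented for illustration):

```python
# Role-Based Access Control: permissions attach to roles (job titles),
# not to individual users.
role_permissions = {
    "network_admin": {"create_account"},
    "hr_specialist": {"view_personnel_file"},
}

def rbac_allowed(role, action):
    return action in role_permissions.get(role, set())

# Lattice-Based Access Control: access is allowed only when the subject's
# security level is greater than or equal to the object's level.
LEVELS = {"confidential": 1, "secret": 2, "top_secret": 3}

def lbac_allowed(subject_level, object_level):
    return LEVELS[subject_level] >= LEVELS[object_level]
```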
== Telecommunications ==
In telecommunications, the term access control is defined in U.S. Federal Standard 1037C with the following meanings:
A service feature or technique used to permit or deny use of the components of a communication system.
A technique used to define or restrict the rights of individuals or application programs to obtain data from, or place data onto, a storage device.
The definition or restriction of the rights of individuals or application programs to obtain data from, or place data into, a storage device.
The process of limiting access to the resources of an AIS (Automated Information System) to authorized users, programs, processes, or other systems.
That function performed by the resource controller that allocates system resources to satisfy user requests.
This definition depends on several other technical terms from Federal Standard 1037C.
=== Attribute accessors ===
Special public member methods, known as accessors (getters) and mutator methods (often called setters), are used to control changes to class variables in order to prevent unauthorized access and data corruption.
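A minimal sketch of the pattern in Python (the class and attribute names are illustrative): the mutator validates each write before it reaches the underlying variable.

```python
# Accessor (getter) and mutator (setter) controlling changes to a class
# variable; the setter rejects invalid writes before they corrupt state.
class Account:
    def __init__(self, balance=0):
        self._balance = balance      # "private" by convention

    @property
    def balance(self):               # accessor
        return self._balance

    @balance.setter
    def balance(self, value):        # mutator with validation
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value
```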
== Public policy ==
In public policy, access control to restrict access to systems ("authorization") or to track or monitor behavior within systems ("accountability") is an implementation feature of using trusted systems for security or social control.
== See also ==
Alarm device, Alarm management, Security alarm
Border barrier, Border control, Border checkpoint, Border outpost
Card reader, Common Access Card, Magnetic stripe card, Proximity card, Smart card, Optical turnstile, Access badge
Castle, Fortification
Computer security, Logical security, .htaccess, Wiegand effect, XACML, Credential
Door security, Lock picking, Lock (security device), Electronic lock, Safe, Safe-cracking, Bank vault
Fingerprint scanner, Photo identification, Biometrics
Key management, Key cards
Lock screen
Permissive action link, Multi-factor authentication, Gold Codes
Physical security information management
Physical Security Professional
Prison, Barbed tape, Mantrap
Security, Security engineering, Security lighting, Security management, Security policy
Security by design
Vehicle access control: barricades, bollards, gates
== References ==
U.S. Federal Standard 1037C
U.S. MIL-188
U.S. National Information Systems Security Glossary
Harris, Shon, All-in-one CISSP Exam Guide, 6th Edition, McGraw Hill Osborne, Emeryville, California, 2012.
Norman, Thomas L., CPP/PSP/CSC, Integrated Security Systems Design, Butterworth-Heinemann, 2007.
NIST.gov – Computer Security Division – Computer Security Resource Center – Attribute Based Access Control (ABAC) – Overview
=== Further reading ===
Ouaddah, Aafaf; Mousannif, Hajar; Elkalam, Anas; Ouahman, Abdellah (15 January 2017). "Access control in the Internet of Things: Big challenges and new opportunities". Computer Networks. 112: 237–262. doi:10.1016/j.comnet.2016.11.007.
== External links ==
eXtensible Access Control Markup Language. An OASIS standard language/model for access control.
Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and countermeasures prioritized. The purpose of threat modeling is to provide defenders with a systematic analysis of what controls or defenses need to be included, given the nature of the system, the probable attacker's profile, the most likely attack vectors, and the assets most desired by an attacker. Threat modeling answers questions like "Where am I most vulnerable to attack?", "What are the most relevant threats?", and "What do I need to do to safeguard against these threats?".
Conceptually, most people incorporate some form of threat modeling in their daily life and don't even realize it. Commuters use threat modeling to consider what might go wrong during the morning journey to work and to take preemptive action to avoid possible accidents. Children engage in threat modeling when determining the best path toward an intended goal while avoiding the playground bully. In a more formal sense, threat modeling has been used to prioritize military defensive preparations since antiquity.
== Evolution of technology-centric threat modeling ==
Shortly after shared computing made its debut in the early 1960s, individuals began seeking ways to exploit security vulnerabilities for personal gain. As a result, engineers and computer scientists soon began developing threat modeling concepts for information technology systems.
Early technology-centered threat modeling methodologies were based on the concept of architectural patterns first presented by Christopher Alexander in 1977. In 1988 Robert Barnard developed and successfully applied the first profile for an IT-system attacker.
In 1994, Edward Amoroso put forth the concept of a "threat tree" in his book, "Fundamentals of Computer Security Technology." The concept of a threat tree was based on decision tree diagrams. Threat trees graphically represent how a potential threat to an IT system can be exploited.
Independently, similar work was conducted by the NSA and DARPA on a structured graphical representation of how specific attacks against IT-systems could be executed. The resulting representation was called "attack trees." In 1998 Bruce Schneier published his analysis of cyber risks utilizing attack trees in his paper entitled "Toward a Secure System Engineering Methodology". The paper proved to be a seminal contribution in the evolution of threat modeling for IT-systems. In Schneier's analysis, the attacker's goal is represented as a "root node," with the potential means of reaching the goal represented as "leaf nodes." Utilizing the attack tree in this way allowed cybersecurity professionals to systematically consider multiple attack vectors against any defined target.
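The root-node/leaf-node structure described above lends itself directly to computation. The following sketch evaluates a toy attack tree in Python; the node names and cost figures are invented for illustration, and real analyses attach richer attributes (cost, skill, detectability) to each leaf:

```python
# A minimal attack-tree sketch. OR nodes take the cheapest child;
# AND nodes require all children, so their costs add up.

def cost(node):
    """Return the minimum attacker cost to achieve a node's goal."""
    if "cost" in node:                      # leaf node: a concrete attack step
        return node["cost"]
    child_costs = [cost(c) for c in node["children"]]
    return min(child_costs) if node["type"] == "OR" else sum(child_costs)

# Root node: the attacker's goal; leaf nodes: potential means of reaching it.
tree = {
    "goal": "read user database", "type": "OR",
    "children": [
        {"goal": "SQL injection", "cost": 10},
        {"goal": "steal admin credentials", "type": "AND",
         "children": [
             {"goal": "phish an admin", "cost": 30},
             {"goal": "bypass MFA", "cost": 50},
         ]},
    ],
}

print(cost(tree))  # cheapest path is SQL injection: 10
```

Walking the tree this way is what lets defenders systematically compare attack vectors against a defined target rather than reasoning about them one at a time.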
In 1999, Microsoft cybersecurity professionals Loren Kohnfelder and Praerit Garg developed STRIDE, a model for considering attacks relevant to the Microsoft Windows development environment. STRIDE is a mnemonic for Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The mnemonic helps security professionals systematically determine how a potential attacker could utilize any of these threat categories.
In 2003, the OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) method, an operations-centric threat modeling methodology, was introduced with a focus on organizational risk management.
In 2004, Frank Swiderski and Window Snyder wrote "Threat Modeling," published by Microsoft Press. In it they developed the concept of using threat models to create secure applications.
In 2014, Ryan Stillions expressed the idea that cyber threats should be expressed with different semantic levels, and proposed the DML (Detection Maturity Level) model. An attack is an instantiation of a threat scenario which is caused by a specific attacker with a specific goal in mind and a strategy for reaching that goal. The goal and strategy represent the highest semantic levels of the DML model. This is followed by the TTP (Tactics, Techniques and Procedures) which represent intermediate semantic levels. The lowest semantic levels of the DML model are the tools used by the attacker, host and observed network artifacts such as packets and payloads, and finally atomic indicators such as IP addresses at the lowest semantic level. Current SIEM (Security Information and Event Management) tools typically only provide indicators at the lowest semantic levels. There is therefore a need to develop SIEM tools that can provide threat indicators at higher semantic levels.
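The ordering of semantic levels in the DML model can be made concrete in a few lines. The labels below paraphrase the description above; they are illustrative, not Stillions' exact level names:

```python
# DML-style semantic levels, from highest semantic value to lowest.
DML_LEVELS = [
    "goal",                 # why the attacker acts
    "strategy",             # how the goal is pursued
    "TTPs",                 # tactics, techniques and procedures
    "tools",                # software the attacker uses
    "artifacts",            # host and network artifacts (packets, payloads)
    "atomic indicators",    # e.g. IP addresses, file hashes
]

def semantic_level(indicator_type):
    """Lower index = higher semantic value for a defender."""
    return DML_LEVELS.index(indicator_type)

# A typical SIEM alert on a bare IP address sits at the lowest level:
print(semantic_level("atomic indicators"))  # 5
```

The gap the text identifies is precisely that most tooling reports only entries near the bottom of this list, while defenders reason most effectively near the top.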
== Threat Modeling Manifesto ==
The Threat Modeling Manifesto is a document published in 2020 by threat modeling authorities in order to clearly state the core values and principles that every threat modeler should know and follow.
In 2024 the same group of authors followed up the Manifesto with a Threat Modeling Capabilities document, which "...provides a catalog of capabilities to help you cultivate value from your Threat Modeling practice".
== Threat modeling frameworks ==
Conceptually, a threat modeling practice flows from a methodology. Numerous threat modeling methodologies are available for implementation. Typically, threat modeling has been implemented using one of five approaches independently: asset-centric, attacker-centric, software-centric, value and stakeholder-centric, and hybrid. Based on the volume of published online content, the methodologies discussed below are the most well known.
=== STRIDE ===
STRIDE was created in 1999 at Microsoft as a mnemonic for developers to find 'threats to our products'. STRIDE can be used as a simple prompt or checklist, or in more structured approaches such as STRIDE per element. STRIDE, Patterns and Practices, and Asset/entry point were among the threat modeling approaches developed and published by Microsoft. References to "the" Microsoft methodology commonly mean STRIDE and data flow diagrams.
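Used as a prompt or checklist, STRIDE amounts to asking each of its six questions of each system element. A minimal sketch (the component names are invented):

```python
# STRIDE as a systematic checklist generator.
STRIDE = {
    "S": "Spoofing identity",
    "T": "Tampering with data",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

def checklist(components):
    """Yield one review question per (component, threat category) pair."""
    for comp in components:
        for threat in STRIDE.values():
            yield f"Could an attacker achieve '{threat}' against the {comp}?"

questions = list(checklist(["login form", "session store"]))
print(len(questions))  # 2 components x 6 categories = 12 questions
```

STRIDE-per-element refines this by asking only the categories relevant to each element type (e.g. data stores are not usually subject to spoofing).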
=== PASTA ===
The Process for Attack Simulation and Threat Analysis (PASTA) is a seven-step, risk-centric methodology for aligning business objectives and technical requirements, taking compliance issues and business analysis into account. The intent of the method is to provide a dynamic threat identification, enumeration, and scoring process. Once the threat model is completed, security subject matter experts develop a detailed analysis of the identified threats. Finally, appropriate security controls can be enumerated. This methodology is intended to provide an attacker-centric view of the application and infrastructure from which defenders can develop an asset-centric mitigation strategy.
=== 'The Hybrid' Threat Modeling Method ===
Researchers created this method to combine the strengths of several existing methodologies, including SQUARE, the Security Cards, and Personae Non Gratae.
== Generally accepted technology threat modeling processes ==
All IT-related threat modeling processes start with creating a visual representation of the application, infrastructure or both being analyzed. The application or infrastructure is decomposed into various elements to aid in the analysis. Once completed, the visual representation is used to identify and enumerate potential threats. Further analysis of the model regarding risks associated with identified threats, prioritization of threats, and enumeration of the appropriate mitigating controls depends on the methodological basis for the threat model process being utilized. Threat modeling approaches can focus on the system in use, attackers, or assets.
=== Visual representations based on data flow diagrams ===
Most threat modeling approaches use data flow diagrams (DFDs). DFDs were developed in the 1970s as a tool for system engineers to communicate, at a high level, how an application causes data to flow, be stored, and be manipulated by the infrastructure on which the application runs. Traditionally, DFDs use only four unique symbols: data flows, data stores, processes, and interactors. In the early 2000s, an additional symbol, the trust boundary, was added to improve the usefulness of DFDs for threat modeling.
Once the application-infrastructure system is decomposed into its five elements, security experts consider each identified threat entry point against all known threat categories. Once the potential threats are identified, mitigating security controls can be enumerated or additional analysis can be performed.
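The decomposition-then-enumeration step above can be sketched as data: represent each data flow, mark which ones cross a trust boundary, and treat those crossings as the entry points to review. Element names below are invented for illustration:

```python
# A DFD reduced to its flows; boundary crossings become threat entry points.
flows = [
    {"from": "browser", "to": "web app",  "crosses_boundary": True},
    {"from": "web app", "to": "database", "crosses_boundary": True},
    {"from": "web app", "to": "cache",    "crosses_boundary": False},
]

def entry_points(flows):
    """Flows that cross a trust boundary are candidate threat entry points."""
    return [f for f in flows if f["crosses_boundary"]]

for f in entry_points(flows):
    print(f"review {f['from']} -> {f['to']} against all known threat categories")
```

In practice each entry point would then be considered against a threat taxonomy such as STRIDE, which is exactly the pairing most DFD-based methodologies prescribe.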
== Threat modeling tools ==
Microsoft's free Threat Modeling Tool (formerly the SDL Threat Modeling Tool) uses the Microsoft threat modeling methodology: it is based on DFDs and identifies threats according to the STRIDE threat classification system. It is mainly intended for general use.
IriusRisk provides both a community and a commercial version of the tool. This tool focuses on creating and maintaining a living threat model throughout the SDLC. It drives the process using fully customizable questionnaires and risk model libraries, and connects to several other different tools (OWASP ZAP, BDD-Security, Threadfix) to enable automation.
securiCAD is a threat modeling and risk management tool from the Scandinavian company foreseeti. It is intended for enterprise cybersecurity management, from CISO to security engineer to technician. securiCAD performs automated attack simulations on current and future IT architectures, identifies and quantifies risks globally, including structural vulnerabilities, and provides decision support based on the results. securiCAD is available in commercial and community editions.
SD Elements by Security Compass is a software security requirements management platform that includes automated threat modeling capabilities. A set of threats is generated by filling out a short questionnaire on the application's technical details and compliance factors. Countermeasures are included in the form of actionable tasks for developers that can be tracked and managed across the SDLC.
OWASP Threat Dragon is a modeling tool used to create threat model diagrams as part of a secure development lifecycle. Threat Dragon follows the values and principles of the threat modeling manifesto. It can be used to record possible threats and decide on their mitigations, as well as giving a visual indication of the threat model components and threat surfaces. Threat Dragon runs either as a web application or as a desktop application. Threat Dragon supports STRIDE / LINDDUN / CIA / DIE / PLOT4ai, provides modeling diagrams and implements a rule engine to auto-generate threats and their mitigations.
OWASP pytm is a Pythonic framework for threat modeling and the first threat-model-as-code tool: the system is first defined in Python using the elements and properties described in the pytm framework. Based on this definition, pytm can generate a data flow diagram (DFD), a sequence diagram and, most importantly, a list of threats to the system.
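The threat-model-as-code idea can be illustrated in miniature. The sketch below is not the pytm API; it is a self-contained toy showing the same pattern of defining the system in code and deriving threats from it with rules (the element names, protocols, and the single rule are all invented):

```python
# Threat-model-as-code in miniature: model as objects, threats from rules.
class Dataflow:
    def __init__(self, source, sink, protocol):
        self.source, self.sink, self.protocol = source, sink, protocol

def find_threats(flows):
    """One toy rule: cleartext HTTP flows risk information disclosure."""
    return [f"Information disclosure on {f.source} -> {f.sink}"
            for f in flows if f.protocol == "HTTP"]

model = [
    Dataflow("user", "web server", "HTTPS"),
    Dataflow("web server", "legacy API", "HTTP"),
]
print(find_threats(model))  # flags only the cleartext legacy flow
```

Because the model lives in source code, it can be versioned, diffed, and re-checked in CI whenever the system definition changes, which is the main appeal of the approach.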
== Further fields of application ==
Threat modeling is being applied not only to IT but also to other areas such as vehicles, building and home automation. In this context, threats to security and privacy, such as information about the inhabitant's movement profiles, working times, and health situation, are modeled alongside physical or network-based attacks. The latter could exploit increasingly available smart-building features, such as sensors (e.g., to spy on the inhabitant) and actuators (e.g., to unlock doors).
== References == | Wikipedia/Threat_model |
Dynamic application security testing (DAST) represents a non-functional testing process to identify security weaknesses and vulnerabilities in an application. This testing process can be carried out either manually or by using automated tools. Manual assessment of an application involves human intervention to identify security flaws that might be missed by an automated tool. Usually business logic errors, race condition checks, and certain zero-day vulnerabilities can only be identified using manual assessments.
On the other hand, a DAST tool is a program which communicates with a web application through the web front-end in order to identify potential security vulnerabilities in the web application and architectural weaknesses. It performs a black-box test. Unlike static application security testing tools, DAST tools do not have access to the source code and therefore detect vulnerabilities by actually performing attacks.
DAST tools allow sophisticated scans, detecting vulnerabilities with minimal user interactions once configured with host name, crawling parameters and authentication credentials. These tools will attempt to detect vulnerabilities in query strings, headers, fragments, verbs (GET/POST/PUT) and DOM injection.
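The core loop of such a scan is injecting payloads into each input and inspecting the response. The sketch below keeps that loop offline by substituting a toy reflecting endpoint for a real HTTP server; the payload, parameter name, and endpoint are all invented for illustration:

```python
# Minimal DAST-style probe: inject a payload, check whether it is
# reflected unescaped in the response.
PAYLOAD = "<script>alert(1)</script>"

def vulnerable_app(query_string):
    # Toy endpoint that reflects the 'q' parameter without escaping.
    params = dict(p.split("=", 1) for p in query_string.split("&"))
    return f"<html>You searched for {params.get('q', '')}</html>"

def probe(app, param):
    """Inject an XSS payload into one parameter and report reflection."""
    response = app(f"{param}={PAYLOAD}")
    return PAYLOAD in response

print(probe(vulnerable_app, "q"))  # True: payload reflected -> likely XSS
```

A real scanner repeats this over every discovered parameter, header, verb, and fragment, with many payload variants per vulnerability class.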
== Overview ==
DAST tools facilitate the automated review of a web application with the express purpose of discovering security vulnerabilities and are required to comply with various regulatory requirements. Web application scanners can look for a wide variety of vulnerabilities, such as input/output validation (e.g. cross-site scripting and SQL injection), specific application problems and server configuration mistakes.
== Commercial and open-source scanners ==
Commercial scanners are a category of web-assessment tools which need to be purchased. Some scanners include some free features but most need to be bought for full access to the tool's power.
Open-source scanners are often free of cost to the user.
=== Strengths ===
These tools can detect vulnerabilities in finalized release-candidate versions prior to shipping. Scanners simulate a malicious user by attacking and probing, identifying results that are not part of the expected result set, allowing for a realistic attack simulation. The big advantage of these types of tools is that they can scan year-round, constantly searching for vulnerabilities. With new vulnerabilities being discovered regularly, this allows companies to find and patch them before they can be exploited.
As a dynamic testing tool, web scanners are not language-dependent. A web application scanner is able to scan engine-driven web applications. Attackers use the same tools, so if the tools can find a vulnerability, so can attackers.
=== Weaknesses ===
While scanning with a DAST tool, data may be overwritten or malicious payloads injected into the subject site. Sites should be scanned in a production-like but non-production environment to ensure accurate results while protecting the data in the production environment.
Because the tool implements a dynamic testing method, it cannot cover 100% of the source code of the application, and thus cannot exercise the application exhaustively. The penetration tester should look at the coverage of the web application or of its attack surface to know whether the tool was configured correctly and was able to understand the web application.
The tool cannot implement all variants of attacks for a given vulnerability. So the tools generally have a predefined list of attacks and do not generate the attack payloads depending on the tested web application. Some tools are also quite limited in their understanding of the behavior of applications with dynamic content such as JavaScript and Flash.
== See also ==
Security testing
Static application security testing
Interactive application security testing
== References ==
== External links ==
Web Application Security Scanner Evaluation Criteria from the Web Application Security Consortium (WASC)
Web Application Scanners, operated by the NIST
Challenges faced by automated web application security assessment from Robert Auger
The WASC security scanner list | Wikipedia/Dynamic_application_security_testing |
A web application firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service. By inspecting HTTP traffic, it can prevent attacks exploiting a web application's known vulnerabilities, such as SQL injection, cross-site scripting (XSS), file inclusion, and improper system configuration. Most of the major financial institutions utilize WAFs to help in the mitigation of web application "zero-day" vulnerabilities, as well as hard-to-patch bugs or weaknesses through custom attack signature strings.
== History ==
Dedicated web application firewalls entered the market in the late 1990s during a time when web server attacks were becoming more prevalent.
Early WAF products, from Kavado and Gilian Technologies, were available in the late 1990s, attempting to address the growing number of attacks on web applications. In 2002, the open-source project ModSecurity was formed in order to make WAF technology more accessible. It finalized a core rule set for protecting web applications, based on the OASIS Web Application Security Technical Committee's (WAS TC) vulnerability work. In 2003, the rules were expanded and standardized through the Open Web Application Security Project's (OWASP) Top 10 List, a ranking of web security vulnerabilities. This list would become the industry standard for web application security compliance.
Since then, the market has continued to grow and evolve, especially focusing on credit card fraud prevention. With the development of the Payment Card Industry Data Security Standard (PCI DSS), a standardization of control over cardholder data, security has become more regulated in this sector. According to CISO Magazine, the WAF market was expected to grow to $5.48 billion by 2022.
== Description ==
A web application firewall is a special type of application firewall that applies specifically to web applications. It is deployed in front of web applications and analyzes bi-directional web-based (HTTP) traffic – detecting and blocking anything malicious. The OWASP provides a broad technical definition for a WAF as “a security solution on the web application level which – from a technical point of view – does not depend on the application itself”. According to the PCI DSS Information Supplement for requirement 6.6, a WAF is defined as “a security policy enforcement point positioned between a web application and the client endpoint. This functionality can be implemented in software or hardware, running in an appliance device, or in a typical server running a common operating system. It may be a stand-alone device or integrated into other network components.” In other words, a WAF can be a virtual or physical appliance that prevents vulnerabilities in web applications from being exploited by outside threats. These vulnerabilities may be because the application itself is a legacy type or was insufficiently coded by design. The WAF addresses these code shortcomings by special configurations of rule-sets, also known as policies.
Previously unknown vulnerabilities can be discovered through penetration testing or via a vulnerability scanner. A web application vulnerability scanner, also known as a web application security scanner, is defined in the SAMATE NIST 500-269 as “an automated program that examines web applications for potential security vulnerabilities. In addition to searching for web application-specific vulnerabilities, the tools also look for software coding errors.” Resolving vulnerabilities is commonly referred to as remediation. Corrections to the code can be made in the application, but typically a more prompt response is necessary. In these situations, the application of a custom policy for a unique web application vulnerability to provide a temporary but immediate fix (known as a virtual patch) may be necessary.
WAFs are not an ultimate security solution, rather they are meant to be used in conjunction with other network perimeter security solutions such as network firewalls and intrusion prevention systems to provide a holistic defense strategy.
WAFs typically follow a positive security model, a negative security model, or a combination of both, as described by the SANS Institute. WAFs use a combination of rule-based logic, parsing, and signatures to detect and prevent attacks such as cross-site scripting and SQL injection. In general, features like browser emulation, obfuscation and virtualization, and IP obfuscation are used to attempt to bypass WAFs. The OWASP produces a list of the top ten web application security flaws. All commercial WAF offerings cover these ten flaws at a minimum. There are non-commercial options as well. As mentioned earlier, the well-known open-source WAF engine called ModSecurity is one of these options. A WAF engine alone is insufficient to provide adequate protection, therefore OWASP along with Trustwave's SpiderLabs help organize and maintain a Core Rule Set via GitHub to use with the ModSecurity WAF engine.
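A negative security model in miniature is a list of attack signatures checked against each request. The three rules below are drastically simplified toys (real rule sets such as the OWASP Core Rule Set contain hundreds of far more careful rules with anomaly scoring):

```python
# Toy negative-security-model WAF: block requests matching known
# attack signatures, pass everything else.
import re

RULES = [
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "SQL injection"),
    (re.compile(r"(?i)<script"),               "cross-site scripting"),
    (re.compile(r"\.\./"),                     "path traversal"),
]

def inspect(request_line):
    """Return the matched attack name, or None if the request is allowed."""
    for pattern, attack in RULES:
        if pattern.search(request_line):
            return attack
    return None

print(inspect("GET /search?q=1 UNION SELECT password FROM users"))  # SQL injection
print(inspect("GET /index.html"))  # None
```

The weakness of this model is also visible here: naive signatures are easy to evade with encoding or obfuscation, which is why production WAFs normalize input and combine signatures with parsing and anomaly detection.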
== Deployment options ==
Although the names for operating mode may differ, WAFs are basically deployed inline in three different ways. According to NSS Labs, deployment options are transparent bridge, transparent reverse proxy, and reverse proxy. "Transparent" refers to the fact that the HTTP traffic is sent straight to the web application, therefore the WAF is transparent between the client and server. This is in contrast to reverse proxy, where the WAF acts as a proxy, and the client’s traffic is sent directly to the WAF. The WAF then separately sends filtered traffic to web applications. This can provide additional benefits such as IP masking but may introduce disadvantages such as performance latencies.
== JA3 fingerprint ==
JA3, developed by Salesforce in 2017, is a technique for generating a unique fingerprint for SSL/TLS traffic based on specific fields in the handshake, such as the version, cipher suites, and extensions used by the client. This fingerprint enables the identification and tracking of clients based on the characteristics of their encrypted traffic. In the context of distributed denial of service (DDoS) protection, JA3 fingerprints are used to detect and differentiate malicious traffic, often associated with attack bots, from legitimate traffic, allowing for more precise filtering of potential threats. In September 2023, AWS WAF announced built-in support for JA3, enabling customers to inspect the JA3 fingerprints of incoming requests. JA3 was deprecated in May 2025 in favor of JA4.
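Mechanically, a JA3 fingerprint is an MD5 hash over a comma-separated string of the handshake fields, with multi-valued fields joined by dashes. The sketch below follows that scheme; the field values are invented, and a real implementation extracts them from the raw ClientHello:

```python
# Sketch of JA3 computation from already-extracted ClientHello fields.
import hashlib

def ja3(version, ciphers, extensions, curves, point_formats):
    fields = [str(version)]
    for values in (ciphers, extensions, curves, point_formats):
        fields.append("-".join(str(v) for v in values))
    ja3_string = ",".join(fields)          # e.g. "771,4865-4866,0-11,29-23,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

fp = ja3(771, [4865, 4866], [0, 11], [29, 23], [0])
print(fp)  # 32-hex-digit fingerprint, usable as a filtering key
```

Because the string depends only on how the client library builds its handshake, two connections from the same tooling tend to produce the same fingerprint even across different IPs, which is what makes it useful for DDoS filtering.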
== See also ==
Application firewall
Payment Card Industry Data Security Standard (PCI DSS)
Web application
Software as a service (SaaS)
Computer security
Network security
Application security
Web application security
== References == | Wikipedia/Web_application_firewall |
A modeling language is any artificial language that can be used to express data, information, knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.
== Overview ==
A modeling language can be graphical or textual.
Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints.
Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions.
An example of a textual modeling language with a corresponding graphical notation is EXPRESS, with EXPRESS-G.
Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
A large number of modeling languages appear in the literature.
== Type of modeling languages ==
=== Graphical types ===
Example of graphical modeling languages in the field of computer science, project management and systems engineering:
Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. They are commonly used to unambiguously represent the hundreds or even thousands of natural-language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.
Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language.
C-K theory consists of a modeling language for design processes.
DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages.
EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language.
Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers.
Flowchart is a schematic representation of an algorithm or a stepwise process.
Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems.
IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies.
Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure.
LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns.
Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages.
Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification.
Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation.
Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems.
SysML is a Domain-Specific Modeling language for systems engineering that is defined as a UML profile (customization).
Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques, and has widespread tool support.
FLINT is a language that allows a high-level description of normative systems.
Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise and application level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more.
Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system.
Architecture Analysis & Design Language (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics.
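The Petri net entry above describes a formalism simple enough to execute in a few lines: places hold tokens, and a transition fires when every one of its input places is marked. The place and transition names below are invented for illustration:

```python
# Minimal Petri net interpreter: a bipartite structure of places and
# transitions, with token-based firing semantics.
marking = {"p1": 1, "p2": 1, "p3": 0}
transitions = {"t1": {"inputs": ["p1", "p2"], "outputs": ["p3"]}}

def enabled(t):
    """A transition is enabled when all its input places hold a token."""
    return all(marking[p] > 0 for p in transitions[t]["inputs"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

fire("t1")
print(marking)  # {'p1': 0, 'p2': 0, 'p3': 1}
```

This executability is precisely what enabled the extensive tool support the entry mentions: model checkers and simulators explore the reachable markings of such nets.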
Examples of graphical modeling languages in other fields of science.
EAST-ADL is a Domain-Specific Modeling language dedicated to automotive system design.
Energy Systems Language (ESL), a language that aims to model ecological energetics & global economics.
IEC 61499 defines Domain-Specific Modeling language dedicated to distribute industrial process measurement and control systems.
=== Textual types ===
Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is not only suitable to express knowledge, requirements and dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, regardless of whether it is stored in central, distributed or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions, which use natural language terms and formalized phrases. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
- the Eiffel tower <is located in> Paris
- Paris <is classified as a> city
whereas information requirements and knowledge can be expressed for example as follows:
- tower <shall be located in a> geographical area
- city <is a kind of> geographical area
Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as ⟨is located in⟩ and ⟨is classified as a⟩) that should be selected from the Gellish English Dictionary-Taxonomy (or of your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and contains definitions of more than 40000 concepts. An information model in Gellish can express facts or make statements, queries and answers.
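Each such expression is essentially a (left hand object, relation type, right hand object) triple, which makes the format easy to process. The sketch below is a toy parser, not a Gellish tool; real Gellish processing additionally validates terms and relation types against the Dictionary-Taxonomy:

```python
# Toy parser turning Gellish-style expressions into triples.
import re

EXPR = re.compile(r"^(.+?)\s*<(.+?)>\s*(.+)$")

def parse(expression):
    left, relation, right = EXPR.match(expression).groups()
    return (left, relation, right)

facts = [parse("the Eiffel tower <is located in> Paris"),
         parse("Paris <is classified as a> city")]
print(facts[0])  # ('the Eiffel tower', 'is located in', 'Paris')
```

Once in triple form, the expressions can be loaded into any database or graph store, which is what makes the claimed integration across central, distributed and federated storage plausible.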
=== More specific types ===
In the field of computer science recently more specific types of modeling languages have emerged.
==== Algebraic ====
Algebraic modeling languages (AMLs) are high-level programming languages for describing and solving high-complexity problems in large-scale mathematical computation (i.e. large-scale optimization problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, supported by language elements such as sets, indices, algebraic expressions, powerful sparse index and data handling, variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it.
==== Behavioral ====
Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as concurrency, nondeterminism, synchronization, and communication. The semantic foundation of behavioral languages is process calculus or process algebra.
==== Discipline-specific ====
A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such a language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, construction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationship between software entities. In addition, discipline-specific modeling language best practices do not preclude practitioners from combining the various notations in a single diagram.
==== Domain-specific ====
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than General-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
==== Framework-specific ====
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
A FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept.
==== Information and knowledge modeling ====
Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning.
==== Object-oriented ====
Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object oriented software design or system design.
Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code.
==== Virtual reality ====
Virtual Reality Modeling Language (VRML), known before 1995 as the Virtual Reality Markup Language, is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind.
==== Others ====
Architecture Description Language
Face Modeling Language
Generative Modelling Language
Java Modeling Language
Promela
Rebeca Modeling Language
Service Modeling Language
Web Services Modeling Language
X3D
== Applications ==
Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify:
system requirements,
structures and
behaviors.
Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled.
The more mature modeling languages are precise, consistent, and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures, and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically. Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation, and code generation from the same representations.
== Quality ==
A review of modelling languages is essential in order to determine which languages are appropriate for different modelling settings. By settings we mean the stakeholders, the domain, and the knowledge connected to them. Assessing language quality is a means of achieving better models.
=== Framework for evaluation ===
Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects the language quality to a framework for general model quality. Five areas are used in this framework to describe language quality and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework.
==== Domain appropriateness ====
The framework refers to the ability to represent the domain as domain appropriateness. The term appropriateness can be a bit vague, but in this particular context it means able to express. Ideally, the language should only be able to express things that are in the domain, yet be powerful enough to express everything that is in the domain. This requirement might seem a bit strict, but the aim is a visually expressed model that includes everything relevant to the domain and excludes everything inappropriate for it. To achieve this, the language has to distinguish clearly which notations and syntaxes are advantageous to present.
==== Participant appropriateness ====
To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain.
==== Modeller appropriateness ====
The previous paragraph stated that the knowledge of the stakeholders should be presented in a good way. In addition, it is imperative that the language be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to deficiencies in the language.
==== Comprehensibility appropriateness ====
Comprehensibility appropriateness makes sure that the social actors understand the model, thanks to a consistent use of the language. To achieve this, the framework includes a set of criteria. Their general import is that the language should be flexible, easy to organize, and easy to distinguish in its different parts, internally as well as from other languages. In addition, the notation should be kept as simple as possible, and each symbol in the language should have a unique representation. This also connects to the structure of the development requirements.
==== Tool appropriateness ====
To ensure that the domain actually modelled is usable for analysis and further processing, the language has to make automated reasoning possible. To achieve this, it has to include formal syntax and semantics. Another advantage of formalization is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the one best fitted for the social actors.
==== Organizational appropriateness ====
The language used is appropriate for the organizational context, e.g. that the language is standardized within the organization, or that it is supported by tools that are chosen as standard in the organization.
== See also ==
== References ==
== Further reading ==
John Krogstie (2003). "Evaluating UML using a generic quality framework". SINTEF Telecom and Informatics and IDI, NTNU, Norway.
Krogstie and Sølvsberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of Computer and Information Sciences.
Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business processing modeling languages using a generic quality framework". Institute of Computer and Information Sciences.
== External links ==
Fundamental Modeling Concepts
Software Modeling Languages Portal
BIP -- Incremental Component-based Construction of Real-time Systems
Gellish Formal English
Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character face. The character can be a human, a humanoid, an animal, a legendary creature or character, etc. Due to its subject and output type, it is also related to many other scientific and artistic fields from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication and advances in computer graphics hardware and software have caused considerable scientific, technological, and artistic interests in computer facial animation.
Although the development of computer graphics methods for facial animation started in the early 1970s, major achievements in this field are more recent and have happened since the late 1980s.
The body of work around computer facial animation can be divided into two main areas: techniques to generate animation data, and methods to apply such data to a character. Techniques such as motion capture and keyframing belong to the first group, while morph targets animation (more commonly known as blendshape animation) and skeletal animation belong to the second. Facial animation has become well-known and popular through animated feature films and computer games but its applications include many more areas such as communication, education, scientific simulation, and agent-based systems (for example online customer service representatives). With the recent advancements in computational power in personal and mobile devices, facial animation has transitioned from appearing in pre-rendered content to being created at runtime.
== History ==
Human facial expression has been the subject of scientific investigation for more than one hundred years. Study of facial movements and expressions started from a biological point of view. After some older investigations, for example by John Bulwer in the late 1640s, Charles Darwin's book The Expression of the Emotions in Men and Animals can be considered a major departure for modern research in behavioural biology.
Computer-based facial expression modelling and animation is not a new endeavour. The earliest work with computer-based facial representation was done in the early 1970s. The first three-dimensional facial animation was created by Parke in 1972. In 1973, Gillenson developed an interactive system to assemble and edit line-drawn facial images. In 1974, Parke developed a parameterized three-dimensional facial model.
One of the most important attempts to describe facial movements was the Facial Action Coding System (FACS). Originally developed by Carl-Herman Hjortsjö in the 1960s and updated by Ekman and Friesen in 1978, FACS defines 46 basic facial Action Units (AUs). A major group of these Action Units represents primitive movements of facial muscles in actions such as raising brows, winking, and talking. Eight AUs are for rigid three-dimensional head movements (i.e. turning and tilting left and right and moving up, down, forward, and backward). FACS has been successfully used for describing the desired movements of synthetic faces and also for tracking facial activities.
The early-1980s saw the development of the first physically based muscle-controlled face model by Platt and the development of techniques for facial caricatures by Brennan. In 1985, the animated short film Tony de Peltrie was a landmark for facial animation. This marked the first time computer facial expression and speech animation were a fundamental part of telling the story.
The late 1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and Hill. The 1990s saw increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component, as illustrated in animated films such as Toy Story (1995), Antz (1998), Shrek, and Monsters, Inc. (both 2001), and computer games such as The Sims. Casper (1995), a milestone in this decade, was the first movie in which a lead actor was produced exclusively using digital facial animation.
The sophistication of the films increased after 2000. In The Matrix Reloaded and The Matrix Revolutions, dense optical flow from several high-definition cameras was used to capture realistic facial movement at every point on the face. The Polar Express used a large Vicon system to capture upward of 150 points. Although these systems are automated, a large amount of manual clean-up effort is still needed to make the data usable. Another milestone in facial animation was reached by The Lord of the Rings, where a character-specific shape base system was developed. Mark Sagar pioneered the use of FACS in entertainment facial animation, and FACS-based systems developed by Sagar were used on Monster House, King Kong, and other films.
== Techniques ==
=== Generating facial animation data ===
The generation of facial animation data can be approached in different ways: 1.) marker-based motion capture on points or marks on the face of a performer, 2.) markerless motion capture techniques using different types of cameras, 3.) audio-driven techniques, and 4.) keyframe animation.
Motion capture uses cameras placed around a subject. The subject is generally fitted either with reflectors (passive motion capture) or sources (active motion capture) that precisely determine the subject's position in space. The data recorded by the cameras is then digitized and converted into a three-dimensional computer model of the subject. Until recently, the size of the detectors/sources used by motion capture systems made the technology inappropriate for facial capture. However, miniaturization and other advancements have made motion capture a viable tool for computer facial animation. Facial motion capture was used extensively in The Polar Express by Imageworks, where hundreds of motion points were captured. This film was very accomplished, and while it attempted to recreate realism, it was criticized for having fallen into the 'uncanny valley', the realm where animation realism is sufficient for human recognition and to convey the emotional message, but where the characters fail to be perceived as realistic. The main difficulties of motion capture are the quality of the data, which may include vibration, as well as the retargeting of the geometry of the points.
Markerless motion capture aims at simplifying the motion capture process by avoiding encumbering the performer with markers. Several techniques came out recently leveraging different sensors, among which standard video cameras, Kinect and depth sensors or other structured-light based devices. Systems based on structured light may achieve real-time performance without the use of any markers using a high speed structured light scanner. The system is based on a robust offline face tracking stage which trains the system with different facial expressions. The matched sequences are used to build a person-specific linear face model that is subsequently used for online face tracking and expression transfer.
Audio-driven techniques are particularly well suited for speech animation. Speech is usually treated differently from the animation of facial expressions because simple keyframe-based approaches to animation typically provide a poor approximation to real speech dynamics. Often visemes are used to represent the key poses in observed speech (i.e. the position of the lips, jaw and tongue when producing a particular phoneme); however, there is a great deal of variation in the realisation of visemes during the production of natural speech. The source of this variation is termed coarticulation, which is the influence of surrounding visemes upon the current viseme (i.e. the effect of context). To account for coarticulation, current systems either explicitly take context into account when blending viseme keyframes or use longer units such as diphone, triphone, syllable, or even word- and sentence-length units. One of the most common approaches to speech animation is the use of dominance functions, introduced by Cohen and Massaro. Each dominance function represents the influence over time that a viseme has on a speech utterance. Typically the influence will be greatest at the center of the viseme and will decay with distance from the viseme center. Dominance functions are blended together to generate a speech trajectory, in much the same way that spline basis functions are blended together to generate a curve. The shape of each dominance function will differ according to both which viseme it represents and what aspect of the face is being controlled (e.g. lip width, jaw rotation, etc.). This approach to computer-generated speech animation can be seen in the Baldi talking head. Other models of speech use basis units which include context (e.g. diphones, triphones, etc.) instead of visemes.
As the basis units already incorporate the variation of each viseme according to context and, to some degree, the dynamics of each viseme, no model of coarticulation is required. Speech is simply generated by selecting appropriate units from a database and blending them together, similar to concatenative techniques in audio speech synthesis. The disadvantage of these models is that a large amount of captured data is required to produce natural results, and while longer units produce more natural results, the size of the database required grows with the average length of each unit. Finally, some models directly generate speech animations from audio. These systems typically use hidden Markov models or neural networks to transform audio parameters into a stream of control parameters for a facial model. The advantages of this method are the handling of voice context, natural rhythm, tempo, emotion, and dynamics without complex approximation algorithms. The training database does not need to be labeled, since no phonemes or visemes are needed; the only required data are the voice and the animation parameters.
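The dominance-function idea described above can be sketched in a few lines of Python. The exponential window, its width, and the viseme targets below are invented toy values, not the parameters of the Cohen–Massaro model; the point is only that overlapping influence curves blend neighboring viseme targets, which is how coarticulation is modeled:

```python
import math

def dominance(t, center, peak=1.0, width=0.06):
    """Dominance-style influence: greatest at the viseme center and
    decaying exponentially with distance in time (toy parameters)."""
    return peak * math.exp(-abs(t - center) / width)

def blend(t, visemes):
    """Weighted average of viseme targets (e.g. lip width) at time t."""
    weights = [dominance(t, c) for c, _target in visemes]
    targets = [target for _c, target in visemes]
    return sum(w * v for w, v in zip(weights, targets)) / sum(weights)

# Two overlapping visemes: their mutual influence yields coarticulation.
visemes = [(0.10, 0.2), (0.20, 0.9)]   # (center time, lip-width target)
midpoint = blend(0.15, visemes)        # lies between the two targets
```

Between the two viseme centers, the trajectory interpolates smoothly rather than snapping from one key pose to the next.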
Keyframe animation is the least automated of the processes for creating animation data, although it delivers the maximum amount of control over the animation. It is often used in combination with other techniques to deliver the final polish to the animation. The keyframe data can consist of scalar values defining the morph target coefficients, or of rotation and translation values of the bones in models with a bone-based rig. Often, to speed up the keyframe animation process, a control rig is used by the animator. The control rig represents a higher level of abstraction that can act on multiple morph target coefficients or bones at the same time. For example, a "smile" control can act simultaneously on the mouth shape curving up and the eyes squinting.
=== Applying facial animation to a character ===
The main techniques used to apply facial animation to a character are: 1.) morph targets animation, 2.) bone driven animation, 3.) texture-based animation (2D or 3D), and 4.) physiological models.
Morph targets (also called "blendshapes") based systems offer fast playback as well as a high degree of fidelity of expressions. The technique involves modeling portions of the face mesh to approximate expressions and visemes and then blending the different sub-meshes, known as morph targets or blendshapes. Perhaps the most accomplished character using this technique was Gollum, from The Lord of the Rings. Drawbacks of this technique are that it involves intensive manual labor and is specific to each character. Recently, new techniques departing from the traditional approach have started to emerge, such as Curve Controlled Modeling, which emphasizes modeling the movement of a 3D object instead of the traditional modeling of the static shape.
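How a blendshape system evaluates a pose can be sketched in Python. The mesh, target names, and offsets below are invented toy data, but the weighted sum of per-target vertex offsets over a neutral mesh is the core of the technique:

```python
# Sketch of morph-target (blendshape) evaluation: final vertex positions
# are the neutral mesh plus a weighted sum of per-target offsets.
neutral = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # tiny 2D "face" mesh

# Each morph target stores offsets from the neutral mesh, not positions.
targets = {
    "smile": [(0.0, 0.1), (0.0, 0.1), (0.0, 0.0)],
    "jaw_open": [(0.0, -0.2), (0.0, -0.2), (0.0, 0.0)],
}

def evaluate(weights):
    """Blend active targets; weights are the keyframed coefficients."""
    out = []
    for i, (x, y) in enumerate(neutral):
        dx = sum(w * targets[name][i][0] for name, w in weights.items())
        dy = sum(w * targets[name][i][1] for name, w in weights.items())
        out.append((x + dx, y + dy))
    return out

pose = evaluate({"smile": 0.5, "jaw_open": 1.0})
```

Because every character needs its own sculpted target meshes, the labor cost noted above scales with the number of characters and expressions.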
Bone-driven animation is very broadly used in games. The bone setup can vary from a few bones to close to a hundred, to allow all subtle facial expressions. The main advantages of bone-driven animation are that the same animation can be used for different characters, as long as the morphology of their faces is similar, and that it does not require loading all the morph target data into memory. Bone-driven animation is widely supported by 3D game engines and can be used for both 2D and 3D animation. For example, it is possible to rig and animate a 2D character using bones in Adobe Flash.
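The bone-driven approach can be sketched as a simple linear blend: each vertex follows the bones that influence it, in proportion to its skinning weights. The 2D toy data and bone names below are invented (a real rig uses full bone transforms, not just translations):

```python
# Sketch of bone-driven (skeletal) animation: vertices move with the
# bones that influence them, scaled by per-vertex skinning weights.
vertices = [(0.0, 0.0), (1.0, 0.0)]
# weights[vertex] maps bone name -> influence on that vertex.
weights = [{"jaw": 1.0}, {"jaw": 0.5, "brow": 0.5}]
bone_offset = {"jaw": (0.0, -0.4), "brow": (0.0, 0.2)}

def skin(vertices, weights, bone_offset):
    """Blend bone offsets per vertex (linear blend skinning, toy form)."""
    out = []
    for (x, y), w in zip(vertices, weights):
        dx = sum(wb * bone_offset[b][0] for b, wb in w.items())
        dy = sum(wb * bone_offset[b][1] for b, wb in w.items())
        out.append((x + dx, y + dy))
    return out

posed = skin(vertices, weights, bone_offset)
```

Only the bone offsets change per frame; the same mesh and weights can be reused across characters with a similar face morphology, which is the reuse advantage described above.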
Texture-based animation uses pixel color to create the animation on the character's face. 2D facial animation is commonly based upon the transformation of images, including both images from still photography and sequences of video. Image morphing is a technique which allows in-between transitional images to be generated between a pair of target still images or between frames from sequences of video. These morphing techniques usually consist of a combination of a geometric deformation technique, which aligns the target images, and a cross-fade, which creates the smooth transition in the image texture. An early example of image morphing can be seen in Michael Jackson's video for "Black or White". In 3D animation, texture-based animation can be achieved by animating the texture itself or the UV mapping. In the latter case, a texture map of all the facial expressions is created, and UV map animation is used to transition from one expression to the next.
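The cross-fade half of image morphing is a straightforward per-pixel linear blend. A minimal Python sketch on invented 2x2 grayscale "images" (the geometric warp that aligns the images beforehand is omitted):

```python
# Sketch of the cross-fade step of image morphing: after the warp has
# aligned two images, each in-between frame is a per-pixel blend.
src = [[0, 0], [0, 0]]          # toy grayscale image A
dst = [[100, 100], [100, 100]]  # toy grayscale image B

def cross_fade(a, b, alpha):
    """alpha = 0 returns a, alpha = 1 returns b."""
    return [[(1 - alpha) * pa + alpha * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

halfway = cross_fade(src, dst, 0.5)   # the midpoint transitional frame
```

Sweeping alpha from 0 to 1 over successive frames produces the smooth in-between images described above.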
Physiological models, such as skeletal muscle systems and physically based head models, form another approach in modeling the head and face. Here, the physical and anatomical characteristics of bones, tissues, and skin are simulated to provide a realistic appearance (e.g. spring-like elasticity). Such methods can be very powerful for creating realism, but the complexity of facial structures makes them computationally expensive and difficult to create. Considering the effectiveness of parameterized models for communicative purposes (as explained in the next section), it may be argued that physically based models are not a very efficient choice in many applications. This does not deny the advantages of physically based models and the fact that they can even be used within the context of parameterized models to provide local details when needed.
== Face animation languages ==
Many face animation languages are used to describe the content of facial animation. They can be input to a compatible "player" software which then creates the requested actions. Face animation languages are closely related to other multimedia presentation languages such as SMIL and VRML. Due to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based. For instance, this is a sample from Virtual Human Markup Language (VHML):
More advanced languages allow decision-making, event handling, and parallel and sequential actions. The Face Modeling Language (FML) is an XML-based language for describing face animation. FML supports MPEG-4 Face Animation Parameters (FAPs), decision-making and dynamic event handling, and typical programming constructs such as loops. It is part of the iFACE system. The following is an example from FML:
== See also ==
Animation
Caricature
Computer animation
Computer graphics
Deepfake
Facial expression
Facial motion capture
Interactive online characters
Morphing
Parametric surface
Texture mapping
== References ==
== Further reading ==
Computer Facial Animation by Frederic I. Parke, Keith Waters 2008 ISBN 1-56881-448-8
Data-driven 3D facial animation by Zhigang Deng, Ulrich Neumann 2007 ISBN 1-84628-906-8
Handbook of Virtual Humans by Nadia Magnenat-Thalmann and Daniel Thalmann, 2004 ISBN 0-470-02316-3
Osipa, Jason (2005). Stop Staring: Facial Modeling and Animation Done Right (2nd ed.). John Wiley & Sons. ISBN 978-0-471-78920-8.
== External links ==
Face/Off: Live Facial Puppetry - Realtime markerless facial animation technology developed at ETH Zurich
The "Artificial Actors" Project - Institute of Animation
iFACE
Animated Baldi
Download of Carl-Herman Hjortsjö, "Man's face and mimic language". Archived 2022-08-06 at the Wayback Machine. (The original Swedish title of the book is "Människans ansikte och mimiska språket"; a more accurate translation would be "Man's face and facial language".)
Generative Modelling Language (GML) in computer graphics and generative computer programming is a very simple programming language for the concise description of complex 3D shapes. It follows the "Generative Modelling" paradigm, where complex datasets are represented by "lists of operations" rather than by lists of objects, which is for instance the case in a relational database.
== Overview ==
Usual 3D file formats describe a virtual world in terms of geometric primitives. These may be cubes and spheres in a CSG tree, NURBS patches, a set of implicit functions, a triangle mesh, or just a cloud of points. The term "generative 3D modelling" describes a different paradigm for describing shape. The main idea is to replace 3D objects by object-generating operations: A shape is described by a sequence of processing steps, rather than the triangles which are the result of applying these operations. Shape design becomes rule design. The approach can be generally applied to any shape representation that provides a basic set of generating functions, called in this context 'elementary shape operators'. Its effectiveness has been demonstrated, e.g., in the field of procedural mesh generation, with Euler operators as complete and closed set of invertible shape generating functions for meshes, operating on the half-edge level.
Generative modelling gains efficiency through the possibility of creating high-level shape operators from low-level shape operators. Any sequence of processing steps can be grouped together to create a new combined operator. It may use elementary operators as well as other combined operators. Concrete values can easily be replaced by parameters, which makes it possible to separate data from operations: The same processing sequence can be applied to different input data sets. The same data can be used to produce different shapes by applying different combined operators from, e.g., a library of domain-dependent modelling operators. This makes it possible to create very complex objects from only a few high-level input parameters, such as for instance a style library.
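The grouping of elementary operators into parameterized high-level operators can be sketched in Python. The operators and parameters below are invented toy examples, not GML operators; the point is that a shape is a reusable sequence of operations rather than a fixed list of geometry:

```python
# Sketch of the generative idea: a shape is a list of operations, and a
# processing sequence can be grouped into a new combined operator.
def translate(dx, dy):
    return lambda pts: [(x + dx, y + dy) for x, y in pts]

def scale(s):
    return lambda pts: [(x * s, y * s) for x, y in pts]

def combine(*ops):
    """Group a processing sequence into a new, reusable operator."""
    def combined(pts):
        for op in ops:
            pts = op(pts)
        return pts
    return combined

# A high-level operator built from elementary ones; the parameter s
# lets the same sequence generate differently sized shapes.
def enlarge_and_center(s):
    return combine(scale(s), translate(-s / 2, -s / 2))

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shape = enlarge_and_center(2.0)(unit_square)
```

Because the parameter is separated from the operation sequence, the same combined operator generates a whole family of shapes, which is the source of the efficiency described above.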
== The Generative Modelling Language ==
The GML is a concrete implementation of the generative approach. It is a stack-based, interpreted programming language, very similar to Adobe's PostScript, but without any of the 2D layout operators. It provides instead a number of operators for creating 3D models (polygons, b-reps, subdivision surfaces). As a "shape programming language," it is a true generalization of "flat" 3D file formats like OBJ, DXF, or VRML that contain just lists of geometric primitives.
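The stack-based execution model GML shares with PostScript can be illustrated with a toy interpreter in Python. The token set here is invented arithmetic for brevity; real GML provides 3D shape operators (polygons, b-reps, subdivision surfaces) in place of these:

```python
# Minimal sketch of a PostScript-like stack interpreter: literals are
# pushed, operators pop their arguments and push their results.
def interpret(tokens):
    stack = []
    ops = {
        "add": lambda s: s.append(s.pop() + s.pop()),
        "mul": lambda s: s.append(s.pop() * s.pop()),
        "dup": lambda s: s.append(s[-1]),
    }
    for tok in tokens:
        if tok in ops:
            ops[tok](stack)   # operator: consume and produce values
        else:
            stack.append(float(tok))   # literal: push onto the stack
    return stack

# Postfix "3 dup mul 4 dup mul add" computes 3*3 + 4*4.
result = interpret("3 dup mul 4 dup mul add".split())
```

In GML the values on the stack are shape data rather than numbers, so a program's final stack state is the generated model.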
Together with its OpenGL-based runtime engine the GML can also be seen as a viewer with an integrated modeller, to overcome the usual separation of 3D modelling from interactive visualization. Both are interwoven instead. GML permits a concise representation of parameterized 3D objects which can be evaluated on-the-fly at runtime, rendered with adaptive level-of-detail, and allows for the interactive manipulation of all parameters.
== GML Example ==
== Applications ==
With procedural models, the model complexity is no longer directly (i.e., linearly) related with the file size. The Procedural Cathedral, a basic model of the Cologne Cathedral, contains 70 tracery windows, and a single window in highest resolution contains about 7 million triangles. These are "unfolded" from only 126 KB of GML code (18 KB zipped).
Gothic architecture is a prime example for the effectiveness of procedural shape design: In the Gothic style, all geometric constructions are exclusively executed using compass and ruler. Variations were obtained by procedurally combining in ever changing ways a set of simple basic parameterized geometric operations. Therefore, it is practically impossible to find two tracery windows in different buildings that follow an identical geometric construction.
The interactive CAVE designer helps to fit a CAVE into a small room. Because of the concrete bars under the ceiling it is difficult to place it using only 2D plans of the room. Degrees of freedom (blue arrows) are the position and orientation of the projection screen cubicle, the opening angle of the projectors, and the position/orientation of the top mirror. The DOFs are mildly restricted to take only valid values. DOFs are kept consistent, i.e., when moving the cubicles, the projector centers move as well (or get reflected at the walls).
Given a set of about 30 CAD models of car wheel rims, the task was to find a common parametrization that is capable of generating each of the individual instances (generative surface reconstruction). As a result, new, similar wheel rims can be synthesized within the design space that is spanned by the given 30 rims, that were manually classified into 3 main categories. A few of the high-level parameters can be directly manipulated using sliders and buttons (arrows and balls).
Generative modelling suggests to differentiate between "structure" and "appearance" (e.g., the style) of 3D models. Surprisingly many objects have the same structure as a chair, i.e., they are "close" to a chair on the structural level. The differentiation then permits (in principle) to apply the appearance of one object in this class to another.
Didactic applet showing the construction of Voronoi diagrams: Is it possible to reconstruct the centers of the Voronoi cells from the region boundaries? The interactive applet conveys a good intuition of the idea behind the formal proof.
== See also ==
Procedural generation
OpenSCAD
== References ==
== Further reading ==
Michael Leyton. A Generative Theory of Shape (available from his homepage)
John Snyder. Generative Modeling for Computer Graphics and CAD: Symbolic Shape Design Using Interval Analysis
== External links ==
Generative-modeling.org GML homepage.
Dissertation of Sven Havemann on UB TU Braunschweig describes why and how GML was created
Caltech pages on GENMOD
The Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering. It supports the full lifecycle (conceptual, utilization, support, and retirement stages), along with the integration of all lifecycle disciplines, including program management, systems and design engineering, verification and validation, and deployment and maintenance, into one framework.
LML was originally designed by the LML Steering Committee. The specification was published on October 17, 2013.
LML is a modeling language, like UML and SysML, that also supports project management uses such as risk analysis and scheduling. LML uses common language to define its modeling elements, such as entity, attribute, schedule, cost, and relationship.
== Overview ==
LML communicates cost, schedule and performance to all stakeholders in the system lifecycle.
LML combines logical constructs with an ontology to capture information. SysML consists mainly of constructs and has a limited ontology, while the DoDAF MetaModel 2.0 (DM2) has only an ontology. LML instead simplifies both the constructs and the ontology to make them more complete, yet easier to use. There are only 12 primary entity classes. Almost all of the classes relate to each other and to themselves using consistent wording, e.g., an Asset performs an Action; an Action is performed by an Asset.
SysML uses object-oriented design, because it was designed to relate systems thinking to software development. No other discipline in the lifecycle uses object-oriented design and analysis extensively. LML captures the entire lifecycle from cradle to grave.
Systems Engineers have identified complexity as a major issue. LML is a new approach to analyzing, planning, specifying, designing, building and maintaining modern systems.
LML focuses on these 6 goals:
1. To be easy to understand
2. To be easy to extend
3. To support both functional and object oriented approaches within the same design
4. To be a language that can be understood by most system stakeholders, not just Systems Engineers
5. To support systems from cradle to grave
6. To support both evolutionary and revolutionary changes to system plans and designs over the lifetime of a system
== History ==
The LML Steering Committee was formed in February 2013 to review a proposed draft ontology and set of diagrams that forms the LML specification. Contributors from many academic and commercial organizations provided direct input into the specification, resulting in its publication in October 2013. Presentations and tutorials were given at the National Defense Industrial Association (NDIA) Systems Engineering Conference (October 2013) and the Systems Engineering in DC (SEDC) in April 2014.
A predecessor to LML was developed by Dr. Steven H. Dam, SPEC Innovations, as part of a methodology called Knowledge-Based Analysis and Design (KBAD). The ontology portion was prototyped in a systems engineering database tool. Ideas on how to better implement it and the development of key LML diagrams (Action and Asset) were part of their Innoslate product development from 2009 to the present.
== Ontology ==
Ontologies provide a set of defined terms and relationships between the terms to capture the information that describes the physical, functional, performance, and programmatic aspects of the system.
Common ways of describing such ontologies are "Entity", "Relationship", and "Attribute" (ERA). ERA is often used to define database schemas. LML extends the ERA schema with "Attributes on Relationships", a feature that can reduce the number of required "Relationships", in the same way that "Attributes" reduce the number of required "Entities" in ERA.
In alignment with the first goal of LML, "Entity", "Relationship", "Attribute", and "Attribute on Relationship" have equivalent English-language elements: noun, verb, adjective, and adverb.
Entity (noun)
An entity is defined as something that is uniquely identifiable and can exist by itself. There are only 12 parent entities in LML: Action, Artifact, Asset, Characteristic, Connection, Cost, Decision, Input/Output, Location, Risk, Statement and Time.
Several child entities have been defined to capture information that stakeholders need. The child entities have the attributes and relationships of the parents plus additional attributes and relationships that make them unique. Child entities include: Conduit (child of Connection), Logical (child of Connection), Measure (child of Characteristic), Orbital (child of Location), Physical (child of Location), Requirement (child of Statement), Resource (child of Asset), and Virtual (child of Location).
Every entity is uniquely identified by a name, a number, a description, or a combination of the three. The name is a word or small collection of words giving an overview of information about the entity.
The number identifies the entity numerically. The description provides more detail about the entity.
Attribute (adjective)
Attributes work in the same way as adjectives. Entities (the nouns) can have name, number, and description attributes. An attribute is an inherent characteristic or quality of an entity. Every attribute has a name that identifies it uniquely within an entity; attribute names are unique within an entity but may be reused in other entities. The name provides an overview of information about the attribute, and the attribute's data type specifies the data associated with it.
Relationship (verb)
A relationship works the same way a verb does, connecting nouns, or in this case entities. Relationships provide a simple way to see how entities connect. For example, when connecting an action to a statement, LML uses "traced from" as the relationship: an Action is traced from a Statement. The inverse relationship of "traced from" is "traced to". Relationships are defined in both directions and have unique names built on the same verb. The standard parent-child relationship is "decomposed by", and its inverse is "decomposes".
Relationship names are unique across the whole schema.
Attributes on Relationships (adverb)
Classic ERA modeling does not include "attributes on relationships", but LML does. In terms of the English language, an "attribute on a relationship" is like an adverb, helping to describe the relationship. Analogous to the way attributes relate to entities, an "attribute on a relationship" has a name that is unique within its relationship but need not be unique across other relationships.
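The four ontology elements can be sketched as plain data structures. The following is a minimal, hypothetical illustration (the entity names, attribute names, and relationship attributes are invented, not taken from the LML specification): an Entity carries attributes (adjectives), and a Relationship connecting two entities carries its own attributes (adverbs).

```python
from dataclasses import dataclass, field

@dataclass
class Entity:                      # noun
    name: str                      # identifies the entity uniquely
    attributes: dict = field(default_factory=dict)  # adjectives

@dataclass
class Relationship:                # verb, defined in both directions
    source: Entity
    verb: str                      # e.g. "performs"
    inverse: str                   # e.g. "performed by"
    target: Entity
    attributes: dict = field(default_factory=dict)  # adverbs

radar = Entity("Radar", {"description": "Surveillance radar"})  # an Asset
scan = Entity("Scan airspace")                                  # an Action

# "Asset performs Action", with an adverb-like attribute on the relationship
performs = Relationship(radar, "performs", "performed by", scan,
                        {"frequency": "continuous"})

print(f"{performs.source.name} {performs.verb} {performs.target.name}")
```

Reading the same relationship through its `inverse` name yields the sentence in the other direction ("Scan airspace performed by Radar"), mirroring how LML defines every relationship pair.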
== List of LML Tools ==
Innoslate is a model-based systems engineering tool that implements LML and enables translation to UML, SysML, DoDAF 2.0, and other languages.
The 3DExperience platform is an enterprise software platform that fully supports LML modeling concepts. Its tool for schema modeling is "Business Modeler", and its basic tool for instance modeling based on that schema is "Matrix Navigator". The software is an evolution of MatrixOne and the Dassault Systèmes V6 platform; CAD, CAM, CAE, PDM, and other PLM tools are provided on top of it.
== See also ==
Formal specification
Functional specification
Process specification
Product design specification
Requirements analysis
Specification (technical standard)
Specification tree
== References == | Wikipedia/Lifecycle_Modeling_Language |
The term process model is used in various contexts. For example, in business process modeling the enterprise process model is often referred to as the business process model.
== Overview ==
Process models are processes of the same nature that are classified together into a model. Thus, a process model is a description of a process at the type level. Since the process model is at the type level, a process is an instantiation of it. The same process model is used repeatedly for the development of many applications and thus has many instantiations. One possible use of a process model is to prescribe how things must/should/could be done, in contrast to the process itself, which is what really happens. A process model is roughly an anticipation of what the process will look like; what the process shall be is determined during actual system development.
The goals of a process model are to be:
Descriptive
Track what actually happens during a process
Take the point of view of an external observer who looks at the way a process has been performed and determines the improvements that must be made to make it perform more effectively or efficiently.
Prescriptive
Define the desired processes and how they should/could/might be performed.
Establish rules, guidelines, and behavior patterns which, if followed, would lead to the desired process performance. They can range from strict enforcement to flexible guidance.
Explanatory
Provide explanations about the rationale of processes.
Explore and evaluate the several possible courses of action based on rational arguments.
Establish an explicit link between processes and the requirements that the model needs to fulfill.
Pre-define points at which data can be extracted for reporting purposes.
== Purpose ==
From a theoretical point of view, the meta-process modeling explains the key concepts needed to describe what happens in the development process, on what, when it happens, and why. From an operational point of view, the meta-process modeling is aimed at providing guidance for method engineers and application developers.
The activity of modeling a business process usually predicates a need to change processes or to identify issues to be corrected. This transformation may or may not require IT involvement, although that is a common driver for the need to model a business process. Change management programmes are typically involved in putting improved business processes into practice. With advances in technology from larger platform vendors, the vision of business process models (BPM) becoming fully executable (and capable of round-trip engineering) is coming closer to reality every day. Supporting technologies include the Unified Modeling Language (UML), model-driven architecture, and service-oriented architecture.
Process modeling addresses the process aspects of an enterprise business architecture, leading to an all-encompassing enterprise architecture. The relationships of business processes in the context of the rest of the enterprise's systems, data, organizational structure, strategies, etc., create greater capabilities for analyzing and planning change. One real-world example is corporate mergers and acquisitions: understanding the processes of both companies in detail allows management to identify redundancies, resulting in a smoother merger.
Process modeling has always been a key aspect of business process reengineering, and continuous improvement approaches seen in Six Sigma.
== Classification of process models ==
=== By coverage ===
There are five types of coverage where the term process model has been defined differently:
Activity-oriented: related set of activities conducted for the specific purpose of product definition; a set of partially ordered steps intended to reach a goal.
Product-oriented: series of activities that cause sensitive product transformations to reach the desired product.
Decision-oriented: set of related decisions conducted for the specific purpose of product definition.
Context-oriented: sequence of contexts causing successive product transformations under the influence of a decision taken in a context.
Strategy-oriented: allow building models representing multi-approach processes and plan different possible ways to elaborate the product based on the notion of intention and strategy.
=== By alignment ===
Processes can be of different kinds. These definitions "correspond to the various ways in which a process can be modelled".
Strategic processes
investigate alternative ways of doing a thing and eventually produce a plan for doing it
are often creative and require human co-operation; thus, alternative generation and selection from an alternative are very critical activities
Tactical processes
help in the achievement of a plan
are more concerned with the tactics to be adopted for actual plan achievement than with the development of a plan of achievement
Implementation processes
are the lowest level processes
are directly concerned with the details of the what and how of plan implementation
=== By granularity ===
Granularity refers to the level of detail of a process model and affects the kind of guidance, explanation and trace that can be provided. Coarse granularity restricts these to a rather limited level of detail whereas fine granularity provides more detailed capability. The nature of granularity needed is dependent on the situation at hand.
Project manager, customer representatives, the general, top-level, or middle management require rather coarse-grained process description as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model where the details of the model can provide them with instructions and important execution dependencies such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).
=== By flexibility ===
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur. Thus, frameworks for adopting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called situational method engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.
Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end is modular method construction. Rigid methods are completely pre-defined and leave little scope for adaptation to the situation at hand; modular methods, on the other hand, can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.
== Quality of methods ==
Because the quality of process models depends on the quality of the modeling techniques used to produce them, both need to be considered. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of the models that result from applying those techniques is not clearly drawn. This section therefore addresses both the quality of process modeling techniques and the quality of process models, to clearly differentiate the two.
Various frameworks have been developed to help understand the quality of process modeling techniques. One example is the Quality-based Modeling Evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures that make an objective assessment of these properties possible.
The framework also has the advantage of providing a uniform and formal description of model elements within one or more model types using a single modeling technique.
In short, it enables assessment of both the product quality and the process quality of modeling techniques with regard to a set of properties defined in advance.
Quality properties that relate to business process modeling techniques include:
Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains.
Arbitrariness: the degree of freedom one has when modeling one and the same domain
Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain.
Comprehensibility: the ease with which the way of working and way of modeling are understood by participants.
Coherence: the degree to which the individual sub models of a way of modeling constitute a whole.
Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling.
Efficiency: the degree to which the modeling process uses resources such as time and people.
Effectiveness: the degree to which the modeling process achieves its goal.
To assess the Q-ME framework, it was used to evaluate the quality of the Dynamic Essential Modeling of Organization (DEMO) business modeling technique.
This evaluation revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating.
There is also a systematic approach to the quality measurement of modeling techniques, known as complexity metrics, suggested by Rossi et al. (1996). Metamodeling techniques are used as a basis for computing these complexity metrics. In comparison with the quality framework proposed by Krogstie, this quality measurement focuses more on the technical level than on the individual model level.
Cardoso, Mendling, Neumann and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research by Mendling et al., who argued that without quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance costs, and perhaps inefficient execution of the process in question.
The quality of modeling technique is important in creating models that are of quality and contribute to the correctness and usefulness of models.
== Quality of models ==
The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints, and so on.
A large amount of research has been done on the quality of models, but less attention has been paid to the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, there are four main guidelines and frameworks in practice: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.
Hommes quotes Wang et al. (1994) in stating that all the main characteristics of model quality can be grouped under two headings: the correctness and the usefulness of a model. Correctness ranges from the model's correspondence to the phenomenon being modeled to its conformance to the syntactic rules of the modeling language, and it is independent of the purpose for which the model is used.
Usefulness, in contrast, means that the model is helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactic, and semantic quality) and external correctness (validity).
A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied.
A broader approach is based on semiotics rather than linguistics, as done by Krogstie in the top-down quality framework known as SEQUAL. It defines several quality aspects based on relationships between a model, knowledge externalization, the domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine various degrees of quality, but it has been used extensively for business process modeling in empirical tests.
According to previous research done by Moody et al. with use of conceptual model quality framework proposed by Lindland et al. (1994) to evaluate quality of process model, three levels of quality were identified:
Syntactic quality: the extent to which the model conforms to the grammar rules of the modeling language being used.
Semantic quality: whether the model accurately represents user requirements.
Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process; that is, the model should enable its interpreters to make use of it to fulfill their needs.
From this research it was noticed that the quality framework was both easy to use and useful for evaluating the quality of process models; however, it had limitations regarding reliability and made it difficult to identify defects. These limitations led to refinement of the framework through subsequent research by Krogstie. The resulting SEQUAL framework (Krogstie et al. 1995, refined further by Krogstie & Jørgensen, 2002) included three more quality aspects:
Physical quality: whether the externalized model is persistent and available for the audience to make sense of it.
Empirical quality: whether the model is modeled according to the established regulations regarding a given language.
Social quality: This regards the agreement between the stakeholders in the modeling domain.
Dimensions of Conceptual Quality framework
Modeling Domain is the set of all statements that are relevant and correct for describing a problem domain, Language Extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model Externalization is the conceptual representation of the problem domain.
It is defined as the set of statements about the problem domain that are actually made. Social Actor Interpretation and Technical Actor Interpretation are the sets of statements that actors both human model users and the tools that interact with the model, respectively 'think' the conceptual representation of the problem domain contains.
Finally, Participant Knowledge is the set of statements that human actors, who are involved in the modeling process, believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with physical and social aspects of the model.
In later work, Krogstie et al. stated that while the extension of the SEQUAL framework fixed some of the limitations of the initial framework, other limitations remain.
In particular, the framework is too static in its view upon semantic quality, mainly considering models, not modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.
Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and affects its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. Krogstie et al. discuss the quality framework in relation to active process models and suggest a revised framework on this basis.
Further work by Krogstie et al. (2006) revised the SEQUAL framework to be more appropriate for active process models by redefining physical quality with a narrower interpretation than previous research.
Another framework in use is the Guidelines of Modeling (GoM), based on general accounting principles. It comprises six principles: correctness, clarity, relevance, comparability, economic efficiency, and systematic design. Clarity deals with the comprehensibility and explicitness (system description) of model systems.
Comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model.
Relevance relates to the model and the situation being presented. Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed cost cuttings and revenue increases.
Since the purpose of organizations is in most cases the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, requires an accepted differentiation between diverse views within modeling.
Correctness, relevance, and economic efficiency are prerequisites for model quality and must be fulfilled, while the remaining guidelines are optional.
The two frameworks, SEQUAL and GoM, have the limitation that they cannot be used by people who are not competent in modeling. They provide major quality metrics but are not easily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models tries to bridge this gap for non-experts in modeling, but it is mostly theoretical, and no empirical tests have been carried out to support their use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models; Cardoso validates the correlation between control-flow complexity and perceived complexity; and Mendling et al. use metrics to predict control-flow errors such as deadlocks in process models.
The results reveal that an increase in size of a model appears to reduce its quality and comprehensibility.
Further work by Mendling et al. investigates the connection between metrics and understanding: while some metrics are confirmed to have an effect, personal factors of the modeler, such as competence, are also revealed to be important for understanding the models.
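To make the count metrics discussed in this section concrete, the following is a hypothetical sketch (the process, its element names, and the dictionary representation are invented for illustration). It represents a process model as a directed graph and computes two simple measures: the number of elements, and a control-flow complexity score for XOR-splits obtained by counting outgoing transitions per split, in the spirit of Cardoso's metric.

```python
# A tiny process model: node -> list of successor nodes.
model = {
    "start": ["check order"],
    "check order": ["xor-split"],
    "xor-split": ["ship", "reject"],   # exclusive choice: 2 outgoing paths
    "ship": ["end"],
    "reject": ["end"],
    "end": [],
}

size = len(model)                      # count metric: number of elements
splits = [n for n, out in model.items() if len(out) > 1]

# Control-flow complexity for XOR-splits: sum of outgoing transitions
# over all splits (assuming every split here is an XOR-split).
cfc_xor = sum(len(model[n]) for n in splits)

print(size, len(splits), cfc_xor)
```

Growing the model (more tasks, more splits) increases both scores, which matches the finding below that larger models tend to be less comprehensible.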
Several empirical surveys carried out still do not give clear guidelines or ways of evaluating the quality of process models but it is necessary to have clear set of guidelines to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners even though it is difficult to provide an exhaustive account of such guidelines from practice.
Most of the guidelines are not easily put to practice but "label activities verb–noun" rule has been suggested by other practitioners before and analyzed empirically.
From the research, the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. The verb-object labeling style was found to result in models that are better understood than alternative labeling styles.
From the earlier research and ways to evaluate process model quality it has been seen that the process model's size, structure, expertise of the modeler and modularity affect its overall comprehensibility.
Based on these findings, a set of seven process modeling guidelines (7PMG) was presented. The guidelines cover the use of the verb-object style, the number of elements in a model, the application of structured modeling, and the decomposition of a process model. They are as follows:
G1 Minimize the number of elements in a model
G2 Minimize the routing paths per element
G3 Use one start and one end event
G4 Model as structured as possible
G5 Avoid OR routing elements
G6 Use verb-object activity labels
G7 Decompose a model with more than 50 elements
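Some of these guidelines lend themselves to mechanical checking. The sketch below is a hypothetical illustration, not part of 7PMG itself: it checks G3 (one start and one end event), G6 (verb-object labels, approximated by a crude two-word pattern), and G7 (element count) on a model represented as a dictionary mapping each element to its successors.

```python
import re

def check_7pmg(model, max_elements=50):
    """Return a list of guideline violations found in the model."""
    findings = []
    if len(model) > max_elements:                      # G1 / G7
        findings.append("G7: more than 50 elements; decompose the model")
    targets = {t for outs in model.values() for t in outs}
    starts = [n for n in model if n not in targets]    # no incoming arcs
    ends = [n for n in model if not model[n]]          # no outgoing arcs
    if len(starts) != 1 or len(ends) != 1:             # G3
        findings.append("G3: use one start and one end event")
    verb_object = re.compile(r"^[a-z]+ [a-z]+", re.IGNORECASE)
    for n in model:                                    # G6 (rough heuristic)
        if not verb_object.match(n):
            findings.append(f"G6: label '{n}' is not verb-object style")
    return findings

model = {"receive order": ["check stock"], "check stock": []}
print(check_7pmg(model))   # -> [] : no findings for this tiny model
```

As the text notes afterwards, such checks address only how the content is organized and labeled, not whether the right content is in the model.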
7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented.
It does suggest ways of organizing different structures of the process model while the content is kept intact but the pragmatic issue of what must be included in the model is still left out.
The second limitation relates to the prioritization of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen, on the one hand, as a need for a wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available for arriving at a prioritized set of guidelines.
== See also ==
Model selection
Process (science)
Process architecture
Process calculus
Process flow diagram
Process ontology
Process Specification Language
== References ==
== External links ==
"Abstraction Levels for Processes Presentation: Process Modeling Principles" (PDF). Archived from the original (PDF) on 2011-07-14. Retrieved 2008-06-12.
American Productivity and Quality Center (APQC), a worldwide organization for process and performance improvement
The Application of Petri Nets to Workflow Management, W.M.P. van der Aalst, 1998. | Wikipedia/Process_Modeling |
A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.
The traditional way to distinguish modeling perspectives is into structural, functional, and behavioral/processual perspectives. This, together with the rule, object, communication, and actor-and-role perspectives, is one way of classifying modeling approaches.
== Types of perspectives ==
=== Structural modeling perspective ===
This approach concentrates on describing the static structure. The main concept in this modeling perspective is the entity, which could be an object, a phenomenon, a concept, a thing, etc.
The data modeling languages have traditionally handled this perspective, examples of such being:
The ER-language (Entity-Relationship)
Generic Semantic Modeling language (GSM)
Other approaches including:
The NIAM language (Binary relationship language)
Conceptual graphs (Sowa)
Looking at the ER-language we have the basic components:
Entities: Distinctively identifiable phenomenon.
Relationships: An association among the entities.
Attributes: Used to give value to a property of an entity/relationship.
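The three ER components can be illustrated as plain data structures. The sketch below is hypothetical (the entity and relationship names are invented for illustration): it declares two entity types with their attributes and one relationship type associating them, and checks that relationships only reference declared entities.

```python
entities = {
    # entity type -> attributes that give values to its properties
    "Student": ["name", "student_id"],
    "Course": ["title", "credits"],
}

relationships = {
    # relationship name -> the entity types it associates
    "enrolls_in": ("Student", "Course"),
}

# A relationship may only associate declared entity types:
for name, ends in relationships.items():
    assert all(e in entities for e in ends), f"{name} references unknown entity"

print(sorted(entities))   # -> ['Course', 'Student']
```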
Looking at the generic semantic modeling language we have the basic components:
Constructed types built by abstraction: Aggregation, generalization, and association.
Attributes.
Primitive types: Data types in GSM are classified into printable and abstract types.
Printable: Used to specify visible values.
Abstract: Representing entities.
=== Functional modeling perspective ===
The functional modeling approach concentrates on describing the dynamic process. The main concept in this modeling perspective is the process, which could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is the data flow diagram.
The perspective uses four symbols to describe a process, these being:
Process: Illustrates transformation from input to output.
Store: Data-collection or some sort of material.
Flow: Movement of data or material in the process.
External Entity: External to the modeled system, but interacts with it.
With these symbols, a process can be represented as a network in which the symbols are connected; such a decomposed process is a data flow diagram (DFD).
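A minimal sketch of such a network, using invented node names purely for illustration, can tag each node with one of the four DFD symbol kinds and list the flows between them:

```python
# Each node is tagged with its DFD symbol kind.
nodes = {
    "Customer": "external entity",  # outside the system, interacts with it
    "Take order": "process",        # transformation from input to output
    "Orders": "store",              # data collection
}

# A flow moves data between nodes: (source, destination, data carried).
flows = [
    ("Customer", "Take order", "order details"),
    ("Take order", "Orders", "order record"),
]

for src, dst, data in flows:
    print(f"{data}: {src} ({nodes[src]}) -> {dst} ({nodes[dst]})")
```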
=== Behavioral perspective ===
Behavioral perspective gives a description of system dynamics. The main concepts in behavioral perspective are states and transitions between states. State transitions are triggered by events. State Transition Diagrams (STD/STM), State charts and Petri-nets are some examples of well-known behaviorally oriented modeling languages. Different types of State Transition Diagrams are used particularly within real-time systems and telecommunications systems.
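The core idea, states with event-triggered transitions between them, can be sketched as a transition table. The example below is a hypothetical telephone-line model (states and event names invented for illustration), not drawn from any particular STD notation:

```python
# (current state, event) -> next state
transitions = {
    ("idle", "call_incoming"): "ringing",
    ("ringing", "answer"): "connected",
    ("ringing", "hang_up"): "idle",
    ("connected", "hang_up"): "idle",
}

def run(start, events):
    """Replay a sequence of events; each event triggers a transition."""
    state = start
    for e in events:
        state = transitions[(state, e)]
    return state

print(run("idle", ["call_incoming", "answer", "hang_up"]))  # -> idle
```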
=== Rule perspective ===
The rule perspective gives a description of goals/means connections. The main concepts in the rule perspective are rule, goal, and constraint. A rule is something that influences the actions of a set of actors. The standard form of a rule is "IF condition THEN action/expression". Rule hierarchies (goal-oriented modeling), Tempora, and expert systems are some examples of rule-oriented modeling.
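The standard "IF condition THEN action" form can be sketched directly as condition/action pairs applied to a small fact base. This is a hypothetical illustration (the stock-level rules and state keys are invented), not how any particular rule engine works:

```python
# Rules in "IF condition THEN action" form over a state dictionary.
rules = [
    (lambda s: s["stock"] == 0, lambda s: s.update(status="reorder")),
    (lambda s: s["stock"] > 100, lambda s: s.update(status="overstocked")),
]

def apply_rules(state):
    for condition, action in rules:
        if condition(state):   # IF condition holds ...
            action(state)      # ... THEN perform the action
    return state

print(apply_rules({"stock": 0})["status"])   # -> reorder
```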
=== Object perspective ===
The object-oriented perspective describes the world as autonomous, communicating objects. An object is an "entity" that has a unique and unchangeable identifier and a local state consisting of a collection of attributes with assignable values. The state can only be manipulated through a set of methods defined on the object, and the value of the state can only be accessed by sending a message to the object to call one of its methods. An event occurs when an operation is triggered by receiving a message, and the trace of events during the existence of the object is called the object's life cycle, or the process of the object. Several objects that share the same definitions of attributes and operations can be part of an object class. The perspective is originally based on the design and programming of object-oriented systems. The Unified Modeling Language (UML) is a well-known language for modeling with an object perspective.
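These properties, a unique identifier, local state, and state accessible only via methods, map directly onto a small class. The example below is a hypothetical sketch (the `Account` class and its methods are invented for illustration):

```python
import itertools

_ids = itertools.count(1)   # source of unique, unchangeable identifiers

class Account:
    def __init__(self, owner):
        self._id = next(_ids)   # unique identifier, never reassigned
        self._owner = owner
        self._balance = 0       # local state: attribute with a value

    def deposit(self, amount):  # method: the only way to change the state
        self._balance += amount

    def balance(self):          # "sending a message" to read the state
        return self._balance

a = Account("Alice")
a.deposit(100)                  # event: operation triggered by a message
print(a.balance())              # -> 100
```

All `Account` instances share the same attribute and method definitions, which is what makes them members of one object class.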
=== Communication perspective ===
This perspective is based on language/action theory from philosophical linguistics. The basic assumption in this perspective is that persons and objects cooperate on a process or action through communication between them.
An illocutionary act involves five elements: speaker, hearer, time, location and circumstances, together with a reason and goal for the communication; the participants in a communication act are oriented towards mutual agreement. In a communication act, the speaker can generally raise three claims: truth (referring to the objective world), justice (referring to the social world of the participants) and sincerity (referring to the subjective world of the speaker).
=== Actor and role perspective ===
Actor and role perspective gives a description of organisational and system structure. An actor can be defined as a phenomenon that influences the history of another actor, whereas a role can be defined as the behaviour that other actors expect of an actor filling the role. Modeling within these perspectives is based both on work with object-oriented programming languages and work with intelligent agents in artificial intelligence. i* is an example of an actor-oriented language.
== See also ==
Domain-Specific Modeling (DSM)
Glossary of Unified Modeling Language terms
General-purpose modeling
Model Driven Engineering (MDE)
Modeling language
Three schema approach for data modeling
View model
== References ==
== Further reading ==
Ingeman Arbnor and Björn Bjerke (1997). Methodology for Creating Business Knowledge. California: Sage Publications. (Third Edition 2009).
A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, particularly software engineering.
== Details ==
An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with.
The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.
Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from "conflicting forces"—such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern.
Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time".
For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both within the context for the pattern "A PLACE TO WAIT".
== Examples ==
Software design pattern, in software design
Architectural pattern, for software architecture
Interaction design pattern, used in interaction design / human–computer interaction
Pedagogical patterns, in teaching
Pattern gardening, in gardening
Business models also have design patterns. See Business model § Examples.
== See also ==
Style guide
Design paradigm
Anti-pattern
Dark pattern
== References ==
== Further reading ==
In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus.
== Essential features ==
While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common:
Representing interactions between independent processes as communication (message-passing), rather than as modification of shared variables.
Describing processes and systems using a small collection of primitives, and operators for combining those primitives.
Defining algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning.
== Mathematics of processes ==
To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow:
parallel composition of processes
specification of which channels to use for sending and receiving data
sequentialization of interactions
hiding of interaction points
recursion or process replication
=== Parallel composition ===
Parallel composition of two processes P and Q, usually written P ∣ Q, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from P to Q (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation.
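The synchronous/asynchronous distinction can be sketched with Python's standard queue module (a rough analogy, not a process calculus): `put` on a `Queue` is an asynchronous send that returns immediately, while following it with `join()` approximates a synchronous sender that waits until the message has been received.

```python
import queue
import threading

results = []
chan = queue.Queue()   # an asynchronous (buffered) channel

def receiver():
    msg = chan.get()   # blocks until a message is available
    results.append(msg)
    chan.task_done()   # acknowledge receipt

t = threading.Thread(target=receiver)
t.start()
chan.put("hello")      # asynchronous send: returns immediately
chan.join()            # a synchronous sender would block here until receipt
t.join()
print(results)         # ['hello']
```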
=== Communication ===
Interaction can be (but is not always) a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v)) and an output operator (e.g. x⟨y⟩), both of which name an interaction point (here x) that is used to synchronise with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In x(v), v plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
=== Sequential composition ===
Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x(v)·P will wait for an input on x. Only when this input has occurred will the process P be activated, with the data received through x substituted for the identifier v.
=== Reduction semantics ===
The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:
x⟨y⟩·P ∣ x(v)·Q ⟶ P ∣ Q[y/v]
The interpretation of this reduction rule is:
The process x⟨y⟩·P sends a message, here y, along the channel x. Dually, the process x(v)·Q receives that message on channel x.
Once the message has been sent, x⟨y⟩·P becomes the process P, while x(v)·Q becomes the process Q[y/v], which is Q with the place-holder v substituted by y, the data received on x.
The class of processes that P is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
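As an illustration, the reduction rule can be executed on a toy term representation. The nested-tuple encoding below is our own deliberately minimal sketch (it ignores variable shadowing and does not reduce under prefixes), not an implementation of any standard calculus.

```python
# Toy reduction step for  x<y>.P | x(v).Q  ->  P | Q[y/v].
# Process encoding (ours, for illustration only):
#   ("send", chan, datum, cont)  output prefix  x<y>.P
#   ("recv", chan, var, cont)    input prefix   x(v).Q
#   ("par", left, right)         parallel composition
#   ("nil",)                     the null process

def substitute(proc, var, datum):
    """Replace the placeholder `var` by `datum` (no shadowing handling)."""
    kind = proc[0]
    if kind == "nil":
        return proc
    if kind == "par":
        return ("par", substitute(proc[1], var, datum),
                       substitute(proc[2], var, datum))
    tag, chan, x, cont = proc
    chan = datum if chan == var else chan
    x = datum if (tag == "send" and x == var) else x
    return (tag, chan, x, substitute(cont, var, datum))

def reduce_step(proc):
    """Perform one communication at the top level of a parallel pair."""
    if proc[0] == "par":
        left, right = proc[1], proc[2]
        for a, b in ((left, right), (right, left)):
            if a[0] == "send" and b[0] == "recv" and a[1] == b[1]:
                return ("par", a[3], substitute(b[3], b[2], a[2]))
    return proc

# x<y>.0 | x(v).v<z>.0  reduces to  0 | y<z>.0
p = ("par", ("send", "x", "y", ("nil",)),
            ("recv", "x", "v", ("send", "v", "z", ("nil",))))
print(reduce_step(p))  # ('par', ('nil',), ('send', 'y', 'z', ('nil',)))
```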
=== Hiding ===
Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x in P can be expressed as (νx)P, while in CSP it might be written as P ∖ {x}.
=== Recursion and replication ===
The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as abbreviating the parallel composition of a countably infinite number of P processes:
!P = P ∣ !P
=== Null process ===
Process calculi generally also include a null process (variously denoted as nil, 0, STOP, δ, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.
== Discrete and continuous process algebra ==
Process algebra has been studied for discrete time and continuous time (real time or dense time).
== History ==
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.
Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term process algebra to describe their work. CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.
== Current research ==
Various process calculi have been studied, and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected, as process calculi are an active field of study. Currently, research on process calculi focuses on the following problems.
Developing new process calculi for better modeling of computational phenomena.
Finding well-behaved subcalculi of a given process calculus. This is valuable because (1) most calculi are fairly wild in the sense that they are rather general and not much can be said about arbitrary processes; and (2) computational applications rarely exhaust the whole of a calculus. Rather they use only processes that are very constrained in form. Constraining the shape of processes is mostly studied by way of type systems.
Logics for processes that allow one to reason about (essentially) arbitrary properties of processes, following the ideas of Hoare logic.
Behavioural theory: what does it mean for two processes to be the same? How can we decide whether two processes are different or not? Can we find representatives for equivalence classes of processes? Generally, processes are considered to be the same if no context, that is other processes running in parallel, can detect a difference. Unfortunately, making this intuition precise is subtle and mostly yields unwieldy characterisations of equality (which in most cases must also be undecidable, as a consequence of the halting problem). Bisimulations are a technical tool that aids reasoning about process equivalences.
Expressivity of calculi. Programming experience shows that certain problems are easier to solve in some languages than in others. This phenomenon calls for a more precise characterisation of the expressivity of calculi modeling computation than that afforded by the Church–Turing thesis. One way of doing this is to consider encodings between two formalisms and see what properties encodings can potentially preserve. The more properties can be preserved, the more expressive the target of the encoding is said to be. For process calculi, the celebrated results are that the synchronous π-calculus is more expressive than its asynchronous variant, has the same expressive power as the higher-order π-calculus, but is less expressive than the ambient calculus.
Using process calculus to model biological systems (stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, Brane calculus). It is thought by some that the compositionality offered by process-theoretic tools can help biologists to organise their knowledge more formally.
== Software implementations ==
The ideas behind process algebra have given rise to several tools including:
CADP
Concurrency Workbench
mCRL2 toolset
== Relationship to other models of concurrency ==
The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star).
The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model (see Actor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.
== See also ==
Communicating sequential processes
ProVerif
Stochastic probe
Tamarin Prover
Temporal Process Language
π-calculus
== References ==
== Further reading ==
Matthew Hennessy: Algebraic Theory of Processes, The MIT Press, ISBN 0-262-08171-7.
C. A. R. Hoare: Communicating Sequential Processes, Prentice Hall, ISBN 0-13-153289-8.
This book has been updated by Jim Davies at the Oxford University Computing Laboratory and the new edition is available for download as a PDF file at the Using CSP website.
Robin Milner: A Calculus of Communicating Systems, Springer Verlag, ISBN 0-387-10235-3.
Robin Milner: Communicating and Mobile Systems: the Pi-Calculus, Springer Verlag, ISBN 0-521-65869-1.
Valk, Rüdiger; Moldt, Daniel; Köhler-Bußmeier, Michael, eds. (2011). "Chapter 5: Prozessalgebra – Parallele und kommunizierende Prozesse" (PDF). Formale Grundlagen der Informatik II: Modellierung und Analyse von Informatiksystemen (in German). Part 2. University of Hamburg. Archived (PDF) from the original on 2019-07-09. Retrieved 2019-07-13.
Fundamental modeling concepts (FMC) provide a framework for describing software-intensive systems. FMC strongly emphasizes communication about such systems, using a semi-formal graphical notation that is easy to understand.
== Introduction ==
FMC distinguishes three perspectives to look at a software system:
Structure of the system
Processes in the system
Value domains of the system
FMC defines a dedicated diagram type for each perspective. FMC diagrams use a simple and lean notation. The purpose of FMC diagrams is to facilitate communication about a software system, not only between technical experts but also between technical experts and business or domain experts. The comprehensibility of FMC diagrams has made them popular among their supporters.
The common approach when working with FMC is to start with a high-level diagram of the compositional structure of a system. This “big picture” diagram serves as a reference in the communication with all involved stakeholders of the project. Later on, the high-level diagram is iteratively refined to model technical details of the system. Complementary diagrams for processes observed in the system or value domains found in the system are introduced as needed.
== Diagram Types ==
FMC uses three diagram types to model different aspects of a system:
Compositional Structure Diagram depicts the static structure of a system. This diagram type is also known as FMC Block Diagram
Dynamic Structure Diagram depicts processes that can be observed in a system. This diagram type is also known as FMC Petri-net
Value Range Structure Diagram depicts structures of values found in the system. This diagram type is also known as FMC E/R Diagram
All FMC diagrams are bipartite graphs. Each bipartite graph consists of two disjoint sets of vertices with the condition that no vertex is connected to another vertex of the same set. In FMC diagrams, members of one set are represented by angular shapes, and members of the other set are represented by curved shapes. Each element in an FMC diagram can be refined by another diagram of the same type, provided that the combined graph is also bipartite. This mechanism allows modeling all relevant layers of abstraction with the same notation.
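The bipartiteness requirement can be checked mechanically: a diagram's graph is bipartite exactly when its vertices admit a two-colouring with no same-colour edge. A sketch, using a made-up adjacency list loosely based on the example diagram discussed below:

```python
from collections import deque

def is_bipartite(adjacency):
    """Two-colour the graph by BFS; succeed iff no edge joins same-set vertices."""
    colour = {}
    for start in adjacency:
        if start in colour:
            continue
        colour[start] = 0
        todo = deque([start])
        while todo:
            node = todo.popleft()
            for neighbour in adjacency[node]:
                if neighbour not in colour:
                    colour[neighbour] = 1 - colour[node]
                    todo.append(neighbour)
                elif colour[neighbour] == colour[node]:
                    return False
    return True

# Agents (angular shapes) connect only to storages/channels (curved shapes),
# so a well-formed FMC diagram graph is bipartite:
diagram = {
    "OrderProcessor": ["Orders", "ProductCatalog"],
    "SupplierManager": ["ProductCatalog"],
    "Orders": ["OrderProcessor"],
    "ProductCatalog": ["OrderProcessor", "SupplierManager"],
}
print(is_bipartite(diagram))  # True
```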
=== Compositional Structure Diagram ===
Compositional structure diagrams depict the static structure of a system, and the relationships between system components. System components can be active or passive. Agents are active system components. They perform activities in the system. Storages and channels are passive components which store or transmit information.
The image to the right is an example of a compositional structure diagram. It contains the agents Order Processor, Supplier Manager, Supplier, Online Shop and an unnamed human agent. Agents are represented by rectangles. The dots and the shadow of the agent Supplier indicate that this agent has multiple instances, i.e. the Supplier Manager communicates with one or many suppliers. The so-called human agent represents a user interacting with the system.
The diagram contains the storages Orders, Purchase Order and Product Catalog. Storages are represented by curved shapes. Agents can read from storages, write to storages or modify the content of storages. The directions of the arrows indicate which operation is performed by an agent. In the diagram, the Supplier Manager can modify the content of the Product Catalog, whereas the Order Processor can only read the content of the Product Catalog.
Agents communicate via channels. The direction of information flow is either indicated by arrows (not shown in the picture), by a request-response-symbol (e.g. between Supplier Manager and Supplier) or omitted (e.g. between Order Processor and Supplier Manager).
=== Dynamic Structure Diagram ===
Dynamic structure diagrams are derived from Petri nets.
"They are used to express system behavior over time, depicting the actions performed by the agents. So they clarify how a system is working and how communication takes place between different agents."
=== Value Range Structure Diagram ===
Value range structure diagrams (also known as FMC Entity Relationship Diagrams) can be compared with the Entity-relationship model.
"[They] are used to depict value range structures or topics as mathematical structures. Value range structures describe observable values at locations within the system whereas topic diagrams allow a much wider usage in order to cover all correlations between interesting points."
== References ==
Knoepfel, Andreas; Bernhard Groene; Peter Tabeling (2005). Fundamental Modeling Concepts - Effective Communication of IT Systems. Wiley. ISBN 0-470-02710-X.
== External links ==
FMC home page
FMC-Stencils for MS-Visio
FMC-Coaching & Training
Analogical models are a method of representing a phenomenon of the world, often called the "target system", by another, more understandable or analysable system. They are also called dynamical analogies.
Two open systems have analog representations (see illustration) if they are black box isomorphic systems.
== Explanation ==
A simple type of analogy is one that is based on shared properties; and analogizing is the process of representing information about a particular subject (the analogue or source system) by another particular subject (the target system), in order "to illustrate some particular aspect (or clarify selected attributes) of the primary domain".
Analogical models, also called "analog" or "analogue" models, seek analogous systems that share properties with the target system as a means of representing the world. It is often practicable to construct source systems that are smaller and/or faster than the target system, so that one can deduce a priori knowledge of target system behaviour. Analog devices are therefore those which may differ in substance or structure but share properties of dynamic behaviour (Truit and Rogers, p. 1-3). Dynamical analogies establish the analogies between electrical, mechanical, acoustical, magnetic and electronic systems (Olson 1958, p. 2).
For example, in analog electronic circuits, one can use voltage to represent an arithmetic quantity; operational amplifiers might then represent the arithmetic operations (addition, subtraction, multiplication, and division). Through the process of calibration these smaller/bigger, slower/faster systems are scaled up or down so that they match the functioning of the target system, and are therefore called analogs of the target system. Once the calibration has taken place, modellers speak of a one-to-one correspondence in behaviour between the primary system and its analog. Thus the behaviour of two systems can be determined by experimenting with one.
== Creating an analogical model ==
Many different instruments and systems can be used to create an analogical model.
"Many important discoveries have been made when scientists commenced their work as if their theoretically postulated models of atoms, viruses, vitamins, hormones, and genes had actual, real world substantial existence. They proceeded as though each imaginary concept actually existed in precisely the form their theoretical speculation outlined; and, discarding any pretence of analogy, they proceeded with the view that the substantial, real world was exactly as they had theoretically described it. ... Consider the analogue model advanced to assist understanding of the behaviour of gases which suggests possible relationships between some theoretical activities of gas particles and some observable activities of billiard-balls. Achinstein (1964, p.332) reminds us that, despite thinking about gases in this useful way, "the physicist obviously supposes that molecules, not billiard balls, comprise gases" — Yeates (2004, pp.71, 73)
A mechanical device can be used to represent mathematical calculations. For instance, the Phillips Hydraulic Computer MONIAC used the flow of water to model economic systems (the target system); electronic circuits can be used to represent both physiological and ecological systems. When a model is run on either an analog or digital computer this is known as the process of simulation.
== Mechanical analogies ==
Any number of systems could be used for mapping electrical phenomena to mechanical phenomena, but two principal systems are commonly used: the impedance analogy and the mobility analogy. The impedance analogy maps force to voltage whereas the mobility analogy maps force to current.
The impedance analogy preserves the analogy between electrical impedance and mechanical impedance but does not preserve the network topology. The mobility analogy preserves the network topology but does not preserve the analogy between impedances. Both preserve the correct energy and power relationships by making power conjugate pairs of variables analogous.
== Hydraulic analogy ==
In a hydraulic analogy, a water integrator might perform the mathematical operation of integration.
== Physiological analogies ==
Francis Crick used the study of the visual system as a proxy for the study of awareness.
== Formal analogies ==
"The same equations have the same solutions." -- Richard Feynman
For example, the inverse-square laws of gravitation and electromagnetism can be described by analogous equations on a geometrical basis, almost without regard to the physical details about masses and charges.
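Feynman's point can be made concrete: a single inverse-square function serves both laws, with only the coupling constant and the "charges" changing interpretation. The constants below are standard SI values, rounded, and the scenarios are illustrative.

```python
def inverse_square(k, q1, q2, r):
    """Shared form of Newton's and Coulomb's laws: F = k * q1 * q2 / r**2."""
    return k * q1 * q2 / r**2

G = 6.674e-11   # gravitational constant, N m^2 kg^-2
k_e = 8.988e9   # Coulomb constant,       N m^2 C^-2

# Same equation, different physical interpretation of the arguments:
f_gravity = inverse_square(G, 5.97e24, 7.35e22, 3.84e8)   # Earth-Moon, masses in kg
f_coulomb = inverse_square(k_e, 1.6e-19, 1.6e-19, 1e-10)  # two protons 1 Å apart
print(f_gravity, f_coulomb)
```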
In population ecology, differential equations arise that are the same as those found in mechanics, albeit with different interpretations.
Recursion requires a similarity within a situation; for example, Archimedes used the myriad to count the number of grains of sand on a beach by using the concept of myriad myriads.
== Dynamical analogies ==
Dynamical analogies establish analogies between systems in different energy domains by means of comparison of the system dynamic equations. There are many ways such analogies can be built, but one of the most useful methods is to form analogies between pairs of power conjugate variables. That is, a pair of variables whose product is power. Doing so preserves the correct energy flow between domains, a useful feature when modelling a system as an integrated whole. Examples of systems that require unified modelling are mechatronics and audio electronics.
The earliest such analogy is due to James Clerk Maxwell who, in 1873, associated mechanical force with electrical voltage. This analogy became so widespread that sources of voltage are still today referred to as electromotive force. The power conjugate of voltage is electric current which, in the Maxwell analogy, maps to mechanical velocity. Electrical impedance is the ratio of voltage and current, so by analogy, mechanical impedance is the ratio of force and velocity. The concept of impedance can be extended to other domains, for instance in acoustics and fluid flow it is the ratio of pressure to rate of flow. In general, impedance is the ratio of an effort variable and the flow variable that results. For this reason, the Maxwell analogy is often referred to as the impedance analogy, although the concept of impedance was not conceived until 1886 by Oliver Heaviside, some time after Maxwell's death.
Specifying power conjugate variables still does not result in a unique analogy, there are multiple ways the conjugates and analogies can be specified. A new analogy was proposed by Floyd A. Firestone in 1933 now known as the mobility analogy. In this analogy electrical impedance is made analogous to mechanical mobility (the inverse of mechanical impedance). Firestone's idea was to make analogous variables that are measured across an element, and make analogous variables that flow through an element. For instance, the across variable voltage is the analogy of velocity, and the through variable current is the analogy of force. Firestone's analogy has the advantage of preserving the topology of element connections when converting between domains. A modified form of the through and across analogy was proposed in 1955 by Horace M. Trent and is the modern understanding of through and across.
where
V is voltage
F is force
T is torque
p is pressure
I is electric current
u is velocity
ω is angular velocity
Q is volumetric flow rate
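Under the impedance analogy the mechanical equation m·dv/dt + R_m·v + x/C_m = F has the same form as the series-RLC equation L·di/dt + R·i + q/C = V, so a single numerical integrator serves both readings. The element values in this sketch are made up for illustration.

```python
def simulate(inertia, resistance, compliance, effort, dt=1e-4, steps=5000):
    """Euler integration of: inertia*d(flow)/dt + resistance*flow
    + displacement/compliance = effort.
    Mechanical reading: m*dv/dt + R_m*v + x/C_m = F.
    Electrical reading: L*di/dt + R*i  + q/C   = V."""
    flow, displacement = 0.0, 0.0
    for _ in range(steps):
        dflow = (effort - resistance * flow
                 - displacement / compliance) / inertia
        flow += dflow * dt
        displacement += flow * dt
    return flow

# Mechanical reading: m = 1 kg, damping 2 N*s/m, compliance 0.5 m/N, F = 1 N.
v = simulate(1.0, 2.0, 0.5, 1.0)
# Electrical reading: L = 1 H, R = 2 ohm, C = 0.5 F, V = 1 V.
i = simulate(1.0, 2.0, 0.5, 1.0)
print(v == i)  # True: identical dynamics in both energy domains
```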
=== Table of equivalents ===
=== Hamiltonian variables ===
The Hamiltonian variables, also called the energy variables, are those variables which when time-differentiated are equal to the power conjugate variables. The Hamiltonian variables are so called because they are the variables which usually appear in Hamiltonian mechanics. The Hamiltonian variables in the electrical domain are charge (q) and flux linkage (λ) because
{\displaystyle {\frac {d\lambda }{dt}}=v}
(Faraday's law of induction), and
{\displaystyle {\frac {dq}{dt}}=i.}
In the translational mechanical domain, the Hamiltonian variables are displacement (x) and momentum (p) because
{\displaystyle {\frac {dp}{dt}}=F}
(Newton's second law of motion), and
{\displaystyle {\frac {dx}{dt}}=u.}
There is a corresponding relationship for other analogies and sets of variables. The integral of a power conjugate variable with respect to a Hamiltonian variable is a measure of energy. For instance,
{\displaystyle \int F\,dx}
and
{\displaystyle \int u\,dp}
are both expressions of energy.
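This can be illustrated numerically: integrating force over displacement for a linear spring recovers the elastic energy ½kx², and integrating velocity over momentum recovers the kinetic energy p²/2m. The spring constant, mass, and integration ranges below are illustrative assumptions; the integrals are approximated by the trapezoidal rule.

```python
# Numerical check that integrating a power conjugate variable over its
# Hamiltonian variable yields an energy (trapezoidal rule).
k = 100.0   # N/m, spring constant (illustrative)
m = 2.0     # kg, mass (illustrative)
N = 1000

# Elastic energy: integral of F dx with F = k*x, x from 0 to 0.2 m
xs = [0.2 * i / N for i in range(N + 1)]
F = [k * x for x in xs]
E_spring = sum((F[i] + F[i + 1]) / 2 * (xs[i + 1] - xs[i]) for i in range(N))

# Kinetic energy: integral of u dp with u = p/m, p from 0 to 1.0 kg·m/s
ps = [1.0 * i / N for i in range(N + 1)]
u = [p / m for p in ps]
E_kin = sum((u[i] + u[i + 1]) / 2 * (ps[i + 1] - ps[i]) for i in range(N))

print(E_spring, 0.5 * k * 0.2**2)   # both ≈ 2.0 J
print(E_kin, 1.0**2 / (2 * m))      # both ≈ 0.25 J
```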
=== Practical uses ===
Maxwell's analogy was initially used merely to help explain electrical phenomena in more familiar mechanical terms. The work of Firestone, Trent and others moved the field well beyond this, looking to represent systems of multiple energy domains as a single system. In particular, designers started converting the mechanical parts of an electromechanical system to the electrical domain so that the whole system could be analyzed as an electrical circuit. Vannevar Bush was a pioneer of this kind of modelling in his development of analogue computers, and a coherent presentation of the method was given in a 1925 paper by Clifford A. Nickle.
From the 1950s onward, manufacturers of mechanical filters, notably Collins Radio, widely used these analogies in order to take the well-developed theory of filter design in electrical engineering and apply it to mechanical systems. The quality of filters required for radio applications could not be achieved with electrical components alone. Much better quality resonators (higher Q factor) could be made with mechanical parts, but there was no equivalent filter theory in mechanical engineering. It was also necessary to have the mechanical parts, the transducers, and the electrical components of the circuit analyzed as a complete system in order to predict the overall response of the filter.
Harry F. Olson helped popularise the use of dynamical analogies in the audio electronics field with his book Dynamical Analogies, first published in 1943.
=== Non-power-conjugate analogies ===
A common analogy of magnetic circuits maps magnetomotive force (mmf) to voltage and magnetic flux (φ) to electric current. However, mmf and φ are not power conjugate variables. The product of these is not in units of power and the ratio, known as magnetic reluctance, does not measure the rate of dissipation of energy so is not a true impedance. Where a compatible analogy is required, mmf can be used as the effort variable and dφ/dt (rate of change of magnetic flux) will then be the flow variable. This is known as the gyrator-capacitor model.
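A short sketch of this conventional magnetic analogy computes flux from mmf and reluctance in the same way current is computed from voltage and resistance, even though, as noted above, reluctance is not a true impedance. The core geometry and winding values are illustrative assumptions.

```python
import math

# Magnetic circuit "Ohm's law": flux = mmf / reluctance.
# Core geometry and winding values are illustrative.
mu0 = 4e-7 * math.pi    # permeability of free space, H/m
mu_r = 2000.0           # relative permeability of the core
area = 1e-4             # core cross-section, m^2
path = 0.1              # mean magnetic path length, m

reluctance = path / (mu0 * mu_r * area)  # ampere-turns per weber
mmf = 200 * 0.5                          # N turns * I amperes = 100 A-turns
flux = mmf / reluctance                  # webers
print(flux)
```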
A widely used analogy in the thermal domain maps temperature difference as the effort variable and thermal power as the flow variable. Again, these are not power conjugate variables, and the ratio, known as thermal resistance, is not really an analogy of either impedance or electrical resistance as far as energy flows are concerned. A compatible analogy could take temperature difference as the effort variable and entropy flow rate as the flow variable.
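The common thermal analogy can be sketched as an Ohm's-law calculation in which series thermal resistances add, just as series electrical resistances do. The resistance and power values below are illustrative assumptions.

```python
# Thermal resistance analogy: temperature difference plays the role of
# voltage and heat flow (thermal power) the role of current, so series
# thermal resistances add. Illustrative values.
R_junction_case = 0.5   # K/W
R_case_ambient = 2.5    # K/W
P = 10.0                # W dissipated

delta_T = P * (R_junction_case + R_case_ambient)
print(delta_T)  # temperature rise above ambient, in kelvin
```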
=== Generalisation ===
Many applications of dynamical models convert all energy domains in the system into an electrical circuit and then proceed to analyse the complete system in the electrical domain. There are, however, more generalised methods of representation. One such representation is through the use of bond graphs, introduced by Henry M. Paynter in 1960. It is usual to use the force-voltage analogy (impedance analogy) with bond graphs, but it is not a requirement to do so. Likewise, Trent used a different representation (linear graphs), and his representation has become associated with the force-current analogy (mobility analogy), but again this is not mandatory.
Some authors discourage the use of domain-specific terminology for the sake of generalisation. For instance, because much of the theory of dynamical analogies arose from electrical theory, the power conjugate variables are sometimes called V-type and I-type according to whether they are analogs of voltage or current respectively in the electrical domain. Likewise, the Hamiltonian variables are sometimes called generalised momentum and generalised displacement according to whether they are analogs of momentum or displacement in the mechanical domain.
== Electronic circuit analogies ==
=== Functional analogs ===
Functional analogs (or functional analogues) are entities (models, representations, etc.) that can replace one another in fulfilling the same function. When the entities in question are formally represented as black boxes, the concept of analog amounts to "same behaviour": they produce the same output sequence when presented with the same input sequence.
=== Hydraulic analogy ===
A fluid or hydraulic analogy of an electric circuit attempts to explain circuitry intuitively in terms of plumbing, where water is analogous to the mobile sea of charge within metals, pressure difference is analogous to voltage, and water's flow rate is analogous to electric current.
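Under laminar-flow assumptions the hydraulic analogy can be made quantitative: the Hagen-Poiseuille law gives a pipe a hydraulic resistance, and flow rate then follows from pressure difference just as current follows from voltage. The pipe dimensions and fluid viscosity below are illustrative assumptions.

```python
import math

# Hydraulic analogy of Ohm's law: pressure difference drives volumetric
# flow through a pipe's hydraulic resistance (Hagen-Poiseuille, laminar).
mu = 1.0e-3      # Pa·s, viscosity of water (approx.)
length = 2.0     # m, pipe length (illustrative)
radius = 5.0e-3  # m, pipe radius (illustrative)

R_hyd = 8 * mu * length / (math.pi * radius**4)  # Pa·s/m^3, "resistance"
delta_p = 1000.0                                 # Pa, like voltage
Q = delta_p / R_hyd                              # m^3/s, like current
print(Q)
```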
=== Analogue computers ===
Before digital computers became widely available with turnaround times fast enough to be practically useful, electronic circuits were used to model and simulate engineering systems such as aeroplanes and nuclear power plants. Electronic instruments called analog computers were used to speed up this kind of modelling. However, analog computers such as the Norden bombsight could also be mechanical, performing their calculations with gears and pulleys.
Examples include Vogel and Ewel, who published 'An Electrical Analog of a Trophic Pyramid' (1972, ch. 11, pp. 105–121); Elmore and Sands (1949), who published circuits devised for research in nuclear physics and the study of fast electrical transients under the Manhattan Project (although, for security reasons, no circuits with application to weapon technology were included); and Howard T. Odum (1994), who published circuits devised to analogically model ecological-economic systems at many scales of the geobiosphere.
== Philosophical conundrum ==
The process of analogical modelling has philosophical difficulties. As noted in the Stanford Encyclopedia of Philosophy, there is the question of how the physical/biological laws of the target system relate to the analogical models created by humans to represent it. We seem to assume that the process of constructing analogical models gives us access to the fundamental laws governing the target system. However, strictly speaking, we only have empirical knowledge of the laws that hold for the analogical system, and if the time constant of the target system is larger than the lifespan of a human being (as in the case of the geobiosphere), it is very difficult for any single human to empirically verify, within their lifetime, that the laws of their model extend to the target system.
== See also ==
== References ==
== Bibliography ==
== Further reading ==
== External links ==
Stanford Encyclopedia of Philosophy entry on Models in Science
Interdisciplinary Electrical Analogies Archived 2010-05-13 at the Wayback Machine