In mathematics, the Wallis product for π, published in 1656 by John Wallis, states that {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)\\&={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots \\\end{aligned}}}
https://en.wikipedia.org/wiki/Wallis_product
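The partial products converge (slowly) to π/2; a minimal sketch in Python, assuming nothing beyond the formula above:

```python
from math import pi

def wallis_partial(n_terms):
    """Partial Wallis product: prod_{n=1}^{N} 4n^2 / (4n^2 - 1)."""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= 4 * n * n / (4 * n * n - 1)
    return prod

# Each factor exceeds 1, so the partial products increase toward pi/2.
print(wallis_partial(10), wallis_partial(100000), pi / 2)
```

Convergence is slow: the error after N factors is on the order of π/(8N), so many terms are needed for even a few digits.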
In mathematics, the Wallman compactification, generally called Wallman–Shanin compactification is a compactification of T1 topological spaces that was constructed by Wallman (1938).
https://en.wikipedia.org/wiki/Wallman_compactification
In mathematics, the Walter theorem, proved by John H. Walter (1967, 1969), describes the finite groups whose Sylow 2-subgroup is abelian. Bender (1970) used Bender's method to give a simpler proof.
https://en.wikipedia.org/wiki/Walter's_theorem
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space M. It is named after Leonid Vaseršteĭn. Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on M, the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781.
https://en.wikipedia.org/wiki/Kantorovich_metric
Because of this analogy, the metric is known in computer science as the earth mover's distance. The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" (Russian: Васерштейн) being of German origin).
https://en.wikipedia.org/wiki/Kantorovich_metric
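For two empirical distributions on the real line with the same number of equally weighted samples, the optimal transport plan pairs sorted samples, so the Wasserstein-1 distance reduces to a mean of sorted differences. A sketch of this special case (the general metric-space problem requires a linear-programming solver):

```python
def wasserstein_1d(xs, ys):
    """W1 ("earth mover's") distance between two equal-size, uniformly
    weighted samples on R: pair the sorted samples and average the moves."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting the whole pile of earth by 3 costs 3 per unit of mass:
print(wasserstein_1d([0.0, 1.0, 2.0], [3.0, 4.0, 5.0]))  # 3.0
```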
In mathematics, the Weber modular functions are a family of three functions f, f1, and f2, studied by Heinrich Martin Weber.
https://en.wikipedia.org/wiki/Weber_modular_function
In mathematics, the Weeks manifold, sometimes called the Fomenko–Matveev–Weeks manifold, is a closed hyperbolic 3-manifold obtained by (5, 2) and (5, 1) Dehn surgeries on the Whitehead link. It has volume approximately equal to 0.942707… (OEIS: A126774) and David Gabai, Robert Meyerhoff, and Peter Milley (2009) showed that it has the smallest volume of any closed orientable hyperbolic 3-manifold. The manifold was independently discovered by Jeffrey Weeks (1985) as well as Sergei V. Matveev and Anatoly T. Fomenko (1988).
https://en.wikipedia.org/wiki/Weeks_manifold
In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815-1897).
https://en.wikipedia.org/wiki/Weierstrass_M-test
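As a numerical illustration (not a proof), take the series Σ sin(nx)/n² with bounds Mₙ = 1/n²: the tail Σ_{n>N} Mₙ bounds the uniform error of the N-th partial sum on any interval.

```python
import math

def partial_sum(x, n):
    """N-th partial sum of sum_{k>=1} sin(k x) / k^2."""
    return sum(math.sin(k * x) / k**2 for k in range(1, n + 1))

N, BIG = 50, 3000
tail_bound = sum(1.0 / k**2 for k in range(N + 1, 10**6))  # sum of M_n, n > N
xs = [i * 0.05 for i in range(-63, 64)]  # a grid covering about [-pi, pi]
max_err = max(abs(partial_sum(x, BIG) - partial_sum(x, N)) for x in xs)
print(max_err, tail_bound)  # the observed uniform error sits below the bound
```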
In mathematics, the Weierstrass elliptic functions are elliptic functions that take a particularly simple form. They are named for Karl Weierstrass. This class of functions is also referred to as ℘-functions, and they are usually denoted by the symbol ℘, a uniquely fancy script p. They play an important role in the theory of elliptic functions. A ℘-function together with its derivative can be used to parameterize elliptic curves, and they generate the field of elliptic functions with respect to a given period lattice.
https://en.wikipedia.org/wiki/Weierstrass_elliptic_function
In mathematics, the Weierstrass function is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is an example of a fractal curve. It is named after its discoverer Karl Weierstrass.
https://en.wikipedia.org/wiki/Weierstrass_function
The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves).
https://en.wikipedia.org/wiki/Weierstrass_function
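A partial sum of Weierstrass's original series W(x) = Σ aⁿ cos(bⁿπx) (with 0 < a < 1, b a positive odd integer, and ab > 1 + 3π/2) is easy to evaluate; the parameters below satisfy his condition and are illustrative choices:

```python
import math

def weierstrass(x, a=0.5, b=13, n_terms=40):
    """Partial sum of W(x) = sum_{n>=0} a^n cos(b^n pi x); with a = 0.5 and
    b = 13 we have ab = 6.5 > 1 + 3*pi/2, Weierstrass's original condition."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))

# The series is dominated geometrically by sum a^n = 1/(1-a) = 2,
# and that bound is attained at x = 0 where every cosine equals 1.
print(weierstrass(0.0))  # ≈ 2.0
```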
In mathematics, the Weierstrass functions are special functions of a complex variable that are auxiliary to the Weierstrass elliptic function. They are named for Karl Weierstrass. The relation between the sigma, zeta, and ℘ functions is analogous to that between the sine, cotangent, and squared cosecant functions: the logarithmic derivative of the sine is the cotangent, whose derivative is the negative of the squared cosecant.
https://en.wikipedia.org/wiki/Weierstrass_sigma_function
In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P. There are also a number of variants of the theorem, that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification.
https://en.wikipedia.org/wiki/Weierstrass_preparation_theorem
In mathematics, the Weierstrass product inequality states that for any real numbers 0 ≤ x1, ..., xn ≤ 1 we have {\displaystyle (1-x_{1})(1-x_{2})\cdots (1-x_{n})\geq 1-S_{n}} and {\displaystyle (1+x_{1})(1+x_{2})\cdots (1+x_{n})\geq 1+S_{n},} where {\displaystyle S_{n}=x_{1}+x_{2}+\cdots +x_{n}.} The inequality is named after the German mathematician Karl Weierstrass.
https://en.wikipedia.org/wiki/Weierstrass_product_inequality
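A quick numerical check of both inequalities on random inputs (a sanity check, not a proof):

```python
import random

def weierstrass_bounds_hold(xs):
    """Check (1-x1)...(1-xn) >= 1 - S_n and (1+x1)...(1+xn) >= 1 + S_n
    for a list xs of numbers in [0, 1]."""
    s = sum(xs)
    lower, upper = 1.0, 1.0
    for x in xs:
        lower *= 1.0 - x
        upper *= 1.0 + x
    return lower >= 1.0 - s and upper >= 1.0 + s

rng = random.Random(0)
trials = [[rng.random() for _ in range(rng.randint(1, 10))] for _ in range(1000)]
print(all(weierstrass_bounds_hold(xs) for xs in trials))  # True
```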
In mathematics, the Weierstrass transform of a function f : R → R, named after Karl Weierstrass, is a "smoothed" version of f(x) obtained by averaging the values of f, weighted with a Gaussian centered at x. Specifically, it is the function F defined by {\displaystyle F(x)={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(y)\;e^{-{\frac {(x-y)^{2}}{4}}}\;dy={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(x-y)\;e^{-{\frac {y^{2}}{4}}}\;dy~,} the convolution of f with the Gaussian function {\displaystyle {\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}~.} The factor 1/√(4π) is chosen so that the Gaussian will have a total integral of 1, with the consequence that constant functions are not changed by the Weierstrass transform.
https://en.wikipedia.org/wiki/Weierstrass_transform
Instead of F(x) one also writes W(x). F(x) need not exist for every real number x; the defining integral may fail to converge. The Weierstrass transform is intimately related to the heat equation (or, equivalently, the diffusion equation with constant diffusion coefficient). If the function f describes the initial temperature at each point of an infinitely long rod that has constant thermal conductivity equal to 1, then the temperature distribution of the rod t = 1 time units later will be given by the function F. By using values of t different from 1, we can define the generalized Weierstrass transform of f. The generalized Weierstrass transform provides a means to approximate a given integrable function f arbitrarily well with analytic functions.
https://en.wikipedia.org/wiki/Weierstrass_transform
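A numerical-quadrature sketch of the transform, checked against two exact values: constants are fixed points, and since the kernel is a Gaussian with mean 0 and variance 2, the transform of y² is x² + 2.

```python
import math

def weierstrass_transform(f, x, half_width=30.0, steps=6000):
    """Convolve f with the kernel e^{-y^2/4}/sqrt(4*pi), midpoint rule on
    [-half_width, half_width] (a sketch; assumes f grows slowly)."""
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps):
        y = -half_width + (i + 0.5) * h
        total += f(x - y) * math.exp(-y * y / 4.0)
    return total * h / math.sqrt(4.0 * math.pi)

# The kernel integrates to 1, so constants are unchanged; W[y^2](x) = x^2 + 2.
print(weierstrass_transform(lambda y: 1.0, 0.3))    # ≈ 1.0
print(weierstrass_transform(lambda y: y * y, 1.0))  # ≈ 3.0
```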
In mathematics, the Weierstrass–Enneper parameterization of minimal surfaces is a classical piece of differential geometry. Alfred Enneper and Karl Weierstrass studied minimal surfaces as far back as 1863. Let f and g be functions on either the entire complex plane or the unit disk, where g is meromorphic and f is analytic, such that wherever g has a pole of order m, f has a zero of order 2m (or equivalently, such that the product fg² is holomorphic), and let c1, c2, c3 be constants. Then the surface with coordinates (x1, x2, x3) is minimal, where the xk are defined using the real part of a complex integral, as follows: {\displaystyle x_{k}(\zeta )=\operatorname {Re} \int _{0}^{\zeta }\varphi _{k}(w)\,dw+c_{k},\qquad \varphi _{1}={\tfrac {1}{2}}f(1-g^{2}),\quad \varphi _{2}={\tfrac {i}{2}}f(1+g^{2}),\quad \varphi _{3}=fg.} The converse is also true: every nonplanar minimal surface defined over a simply connected domain can be given a parametrization of this type. For example, Enneper's surface has f(z) = 1, g(z) = z^m.
https://en.wikipedia.org/wiki/Weierstrass–Enneper_parameterization
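A sketch for the simplest case, Enneper's surface with f = 1 and g = z, using the textbook Weierstrass–Enneper data φ₁ = f(1−g²)/2, φ₂ = if(1+g²)/2, φ₃ = fg (stated here as an assumption): each coordinate is the real part of a complex integral, evaluated numerically and comparable with the closed-form antiderivatives.

```python
def enneper_coords(zeta, steps=2000):
    """x_k = Re ∫_0^zeta phi_k(w) dw for f = 1, g = w, integrated by the
    midpoint rule along the straight segment from 0 to zeta."""
    phis = (lambda w: (1 - w * w) / 2,       # phi_1 = f(1 - g^2)/2
            lambda w: 1j * (1 + w * w) / 2,  # phi_2 = i f(1 + g^2)/2
            lambda w: w)                     # phi_3 = f g
    coords = []
    for phi in phis:
        total = 0j
        for i in range(steps):
            total += phi(zeta * (i + 0.5) / steps) * (zeta / steps)
        coords.append(total.real)
    return coords

z = 0.4 + 0.3j
x1, x2, x3 = enneper_coords(z)
# Closed forms: Re(z/2 - z^3/6), Re(i(z + z^3/3)/2), Re(z^2/2)
print([x1, x2, x3])
```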
In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number τ(G) of a simply connected simple algebraic group defined over a number field is 1. In this case, simply connected means "not having a proper algebraic covering" in the algebraic group theory sense, which is not always the topologists' meaning.
https://en.wikipedia.org/wiki/Weil_conjecture_for_Tamagawa_numbers
In mathematics, the Weil conjectures were highly influential proposals by André Weil (1949). They led to a successful multi-decade program to prove them, in which many leading researchers developed the framework of modern algebraic geometry and number theory. The conjectures concern the generating functions (known as local zeta functions) derived from counting points on algebraic varieties over finite fields. A variety V over a finite field with q elements has a finite number of rational points (with coordinates in the original field), as well as points with coordinates in any finite extension of the original field.
https://en.wikipedia.org/wiki/Weil_conjectures
The generating function has coefficients derived from the numbers Nk of points over the extension field with qk elements. Weil conjectured that such zeta functions for smooth varieties are rational functions, satisfy a certain functional equation, and have their zeros in restricted places. The last two parts were consciously modelled on the Riemann zeta function, a kind of generating function for prime integers, which obeys a functional equation and (conjecturally) has its zeros restricted by the Riemann hypothesis. The rationality was proved by Bernard Dwork (1960), the functional equation by Alexander Grothendieck (1965), and the analogue of the Riemann hypothesis by Pierre Deligne (1974).
https://en.wikipedia.org/wiki/Weil_conjectures
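The simplest worked instance is the projective line over F_q, where N_k = q^k + 1 and the conjectured rationality is visible directly: Z(T) = exp(Σ N_k T^k/k) = 1/((1−T)(1−qT)). A sketch comparing the two as truncated power series in exact arithmetic:

```python
from fractions import Fraction

q, terms = 5, 8
N = [q**k + 1 for k in range(1, terms)]  # points of P^1 over F_{q^k}

# Coefficients of log Z(T) = sum_{k>=1} N_k T^k / k
log_z = [Fraction(0)] + [Fraction(N[k - 1], k) for k in range(1, terms)]

def series_exp(a, n):
    """exp of a power series with a[0] = 0, via k*e_k = sum_j j*a_j*e_{k-j}."""
    e = [Fraction(0)] * n
    e[0] = Fraction(1)
    for k in range(1, n):
        e[k] = sum(j * a[j] * e[k - j] for j in range(1, k + 1)) / k
    return e

Z = series_exp(log_z, terms)
# 1/((1-T)(1-qT)) has T^k coefficient 1 + q + ... + q^k = (q^{k+1}-1)/(q-1)
expected = [Fraction(q**(k + 1) - 1, q - 1) for k in range(terms)]
print(Z == expected)  # True
```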
In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields. A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with qk elements containing that field. The generating function has coefficients derived from the numbers Nk of points over the (essentially unique) field with qk elements.
https://en.wikipedia.org/wiki/Conjecture
Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974).
https://en.wikipedia.org/wiki/Conjecture
In mathematics, the Weil pairing is a pairing (bilinear form, though with multiplicative notation) on the points of order dividing n of an elliptic curve E, taking values in nth roots of unity. More generally there is a similar Weil pairing between points of order n of an abelian variety and its dual. It was introduced by André Weil (1940), who gave an abstract algebraic definition for Jacobians of curves; the corresponding results for elliptic functions were known, and can be expressed simply by use of the Weierstrass sigma function.
https://en.wikipedia.org/wiki/Weil_pairing
In mathematics, the Weil reciprocity law is a result of André Weil holding in the function field K(C) of an algebraic curve C over an algebraically closed field K. Given functions f and g in K(C), i.e. rational functions on C, then f((g)) = g((f)), where the notation has this meaning: (h) is the divisor of the function h, or in other words the formal sum of its zeroes and poles counted with multiplicity; and a function applied to a formal sum means the product (with multiplicities, poles counting as a negative multiplicity) of the values of the function at the points of the divisor. With this definition there must be the side-condition that the divisors of f and g have disjoint support (which can be removed). In the case of the projective line, this can be proved by manipulations with the resultant of polynomials. To remove the condition of disjoint support, for each point P on C a local symbol (f, g)_P is defined, in such a way that the statement given is equivalent to saying that the product over all P of the local symbols is 1.
https://en.wikipedia.org/wiki/Weil_reciprocity
When f and g both take the values 0 or ∞ at P, the definition is essentially in limiting or removable singularity terms, by considering (up to sign) f^a g^b with a and b such that the function has neither a zero nor a pole at P. This is achieved by taking a to be the multiplicity of g at P, and −b the multiplicity of f at P. The definition is then (f, g)_P = (−1)^{ab} f^a g^b. See for example Jean-Pierre Serre, Groupes algébriques et corps de classes, pp. 44–46, for this as a special case of a theory on mapping algebraic curves into commutative groups. There is a generalisation due to Serge Lang to abelian varieties (Lang, Abelian Varieties).
https://en.wikipedia.org/wiki/Weil_reciprocity
In mathematics, the Weil–Petersson metric is a Kähler metric on the Teichmüller space Tg,n of genus g Riemann surfaces with n marked points. It was introduced by André Weil (1958, 1979) using the Petersson inner product on forms on a Riemann surface (introduced by Hans Petersson).
https://en.wikipedia.org/wiki/Weil–Petersson_metric
In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit. By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set.
https://en.wikipedia.org/wiki/Weinstein_conjecture
It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds.
https://en.wikipedia.org/wiki/Weinstein_conjecture
It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form. The conjecture was formulated in 1978 by Alan Weinstein.
https://en.wikipedia.org/wiki/Weinstein_conjecture
In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type.
https://en.wikipedia.org/wiki/Weinstein_conjecture
(Weinstein's original conjecture included the condition that the first de Rham cohomology group of the level set is trivial; this hypothesis turned out to be unnecessary). The Weinstein conjecture was first proved for contact hypersurfaces in R 2 n {\displaystyle \mathbb {R} ^{2n}} in 1986 by Viterbo, then extended to cotangent bundles by Hofer–Viterbo and to wider classes of aspherical manifolds by Floer–Hofer–Viterbo. The presence of holomorphic spheres was used by Hofer–Viterbo.
https://en.wikipedia.org/wiki/Weinstein_conjecture
All these cases dealt with the situation where the contact manifold is a contact submanifold of a symplectic manifold. A new approach without this assumption was discovered in dimension 3 by Hofer and is at the origin of contact homology. The Weinstein conjecture has now been proven for all closed 3-dimensional manifolds by Clifford Taubes. The proof uses a variant of Seiberg–Witten Floer homology and pursues a strategy analogous to Taubes' proof that the Seiberg–Witten and Gromov invariants are equivalent on a symplectic four-manifold. In particular, the proof provides a shortcut to the closely related program of proving the Weinstein conjecture by showing that the embedded contact homology of any contact three-manifold is nontrivial.
https://en.wikipedia.org/wiki/Weinstein_conjecture
In mathematics, the Weinstein–Aronszajn identity states that if A and B are matrices of size m × n and n × m respectively (either or both of which may be infinite) then, provided AB (and hence also BA) is of trace class, {\displaystyle \det(I_{m}+AB)=\det(I_{n}+BA),} where I_k is the k × k identity matrix. It is closely related to the matrix determinant lemma and its generalization. It is the determinant analogue of the Woodbury matrix identity for matrix inverses.
https://en.wikipedia.org/wiki/Sylvester's_determinant_theorem
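A concrete finite-dimensional check with a 2×3 matrix A and a 3×2 matrix B (in finite dimensions the trace-class hypothesis is automatic):

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def plus_identity(M):
    return [[M[i][j] + (1 if i == j else 0) for j in range(len(M))]
            for i in range(len(M))]

A = [[1, 2, 0],
     [0, 1, 3]]   # 2 x 3
B = [[1, 0],
     [2, 1],
     [0, 4]]      # 3 x 2

lhs = det(plus_identity(matmul(A, B)))  # det(I_2 + AB), a 2x2 determinant
rhs = det(plus_identity(matmul(B, A)))  # det(I_3 + BA), a 3x3 determinant
print(lhs, rhs)  # 80 80
```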
In mathematics, the Weyl character formula in representation theory describes the characters of irreducible representations of compact Lie groups in terms of their highest weights. It was proved by Hermann Weyl (1925, 1926a, 1926b). There is a closely related formula for the character of an irreducible representation of a semisimple Lie algebra.
https://en.wikipedia.org/wiki/Weyl_character_formula
In Weyl's approach to the representation theory of connected compact Lie groups, the proof of the character formula is a key step in proving that every dominant integral element actually arises as the highest weight of some irreducible representation. Important consequences of the character formula are the Weyl dimension formula and the Kostant multiplicity formula. By definition, the character χ of a representation π of G is the trace of π(g), as a function of a group element g ∈ G.
https://en.wikipedia.org/wiki/Weyl_character_formula
The irreducible representations in this case are all finite-dimensional (this is part of the Peter–Weyl theorem); so the notion of trace is the usual one from linear algebra. Knowledge of the character χ of π gives a lot of information about π itself. Weyl's formula is a closed formula for the character χ, in terms of other objects constructed from G and its Lie algebra.
https://en.wikipedia.org/wiki/Weyl_character_formula
In mathematics, the Weyl integration formula, introduced by Hermann Weyl, is an integration formula for a compact connected Lie group G in terms of a maximal torus T. Precisely, it says there exists a real-valued continuous function u on T such that for every class function f on G: {\displaystyle \int _{G}f(g)\,dg=\int _{T}f(t)u(t)\,dt.} Moreover, u is explicitly given as {\displaystyle u=|\delta |^{2}/\#W} where {\displaystyle W=N_{G}(T)/T} is the Weyl group determined by T and {\displaystyle \delta (t)=\prod _{\alpha >0}\left(e^{\alpha (t)/2}-e^{-\alpha (t)/2}\right),} the product running over the positive roots of G relative to T. More generally, if f is only a continuous function, then {\displaystyle \int _{G}f(g)\,dg=\int _{T}\left(\int _{G}f(gtg^{-1})\,dg\right)u(t)\,dt.} The formula can be used to derive the Weyl character formula. (The theory of Verma modules, on the other hand, gives a purely algebraic derivation of the Weyl character formula.)
https://en.wikipedia.org/wiki/Weyl_integration_formula
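For G = SU(2) with the diagonal maximal torus, the formula specializes (a standard computation, assumed here) to ∫_G f dg = (2/π)∫₀^π f(θ) sin²θ dθ for class functions, and the irreducible characters χₙ(θ) = sin((n+1)θ)/sin θ come out orthonormal in this measure:

```python
import math

def su2_class_integral(f, steps=20000):
    """Weyl integration formula for class functions on SU(2):
    (2/pi) * ∫_0^pi f(theta) sin^2(theta) dtheta, by the midpoint rule."""
    h = math.pi / steps
    return sum(f((i + 0.5) * h) * math.sin((i + 0.5) * h) ** 2
               for i in range(steps)) * h * 2.0 / math.pi

def character(n):
    """Character of the (n+1)-dimensional irreducible representation."""
    return lambda t: math.sin((n + 1) * t) / math.sin(t)

norm = su2_class_integral(lambda t: character(2)(t) ** 2)
cross = su2_class_integral(lambda t: character(1)(t) * character(3)(t))
print(norm, cross)  # ≈ 1, ≈ 0
```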
In mathematics, the Weyl–von Neumann theorem is a result in operator theory due to Hermann Weyl and John von Neumann. It states that, after the addition of a compact operator (Weyl (1909)) or Hilbert–Schmidt operator (von Neumann (1935)) of arbitrarily small norm, a bounded self-adjoint operator or unitary operator on a Hilbert space is conjugate by a unitary operator to a diagonal operator. The results are subsumed in later generalizations for bounded normal operators due to David Berg (1971, compact perturbation) and Dan-Virgil Voiculescu (1979, Hilbert–Schmidt perturbation).
https://en.wikipedia.org/wiki/Weyl–von_Neumann_theorem
The theorem and its generalizations were one of the starting points of operator K-homology, developed first by Lawrence G. Brown, Ronald Douglas and Peter Fillmore and, in greater generality, by Gennadi Kasparov. In 1958 Kuroda showed that the Weyl–von Neumann theorem is also true if the Hilbert–Schmidt class is replaced by any Schatten class Sp with p ≠ 1. For S1, the trace-class operators, the situation is quite different. The Kato–Rosenblum theorem, proved in 1957 using scattering theory, states that if two bounded self-adjoint operators differ by a trace-class operator, then their absolutely continuous parts are unitarily equivalent. In particular if a self-adjoint operator has absolutely continuous spectrum, no perturbation of it by a trace-class operator can be unitarily equivalent to a diagonal operator.
https://en.wikipedia.org/wiki/Weyl–von_Neumann_theorem
In mathematics, the Whitehead manifold is an open 3-manifold that is contractible, but not homeomorphic to R 3 . {\displaystyle \mathbb {R} ^{3}.} J. H. C. Whitehead (1935) discovered this puzzling object while he was trying to prove the Poincaré conjecture, correcting an error in an earlier paper Whitehead (1934, theorem 3) where he incorrectly claimed that no such manifold exists.
https://en.wikipedia.org/wiki/Whitehead_continuum
A contractible manifold is one that can continuously be shrunk to a point inside the manifold itself. For example, an open ball is a contractible manifold. All manifolds homeomorphic to the ball are contractible, too.
https://en.wikipedia.org/wiki/Whitehead_continuum
One can ask whether all contractible manifolds are homeomorphic to a ball. For dimensions 1 and 2, the answer is classical and it is "yes". In dimension 2, it follows, for example, from the Riemann mapping theorem. Dimension 3 presents the first counterexample: the Whitehead manifold.
https://en.wikipedia.org/wiki/Whitehead_continuum
In mathematics, the Whitehead product is a graded quasi-Lie algebra structure on the homotopy groups of a space. It was defined by J. H. C. Whitehead in (Whitehead 1941). The relevant MSC code is: 55Q15, Whitehead products and generalizations.
https://en.wikipedia.org/wiki/Whitehead_product
In mathematics, the Whitney inequality gives an upper bound for the error of best approximation of a function by polynomials in terms of the moduli of smoothness. It was first proved by Hassler Whitney in 1957, and is an important tool in the field of approximation theory for obtaining upper estimates on the errors of best approximation.
https://en.wikipedia.org/wiki/Whitney_inequality
In mathematics, the Wielandt theorem characterizes the gamma function, defined for all complex numbers z with Re z > 0 by {\displaystyle \Gamma (z)=\int _{0}^{+\infty }t^{z-1}\mathrm {e} ^{-t}\,\mathrm {d} t,} as the only function f defined on the half-plane H := {z ∈ C : Re z > 0} such that: f is holomorphic on H; f(1) = 1; f(z + 1) = z f(z) for all z ∈ H; and f is bounded on the strip {z ∈ C : 1 ≤ Re z ≤ 2}. The theorem is named after the mathematician Helmut Wielandt.
https://en.wikipedia.org/wiki/Wielandt_theorem
In mathematics, the Wiener algebra, named after Norbert Wiener and usually denoted by A(T), is the space of absolutely convergent Fourier series. Here T denotes the circle group.
https://en.wikipedia.org/wiki/Wiener_algebra
In mathematics, the Wiener process is a real-valued continuous-time stochastic process named in honor of American mathematician Norbert Wiener for his investigations on the mathematical properties of the one-dimensional Brownian motion. It is often also called Brownian motion due to its historical connection with the physical process of the same name originally observed by Scottish botanist Robert Brown. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics, quantitative finance, evolutionary biology, and physics. The Wiener process plays an important role in both pure and applied mathematics.
https://en.wikipedia.org/wiki/Wiener_integral
In pure mathematics, the Wiener process gave rise to the study of continuous time martingales. It is a key process in terms of which more complicated stochastic processes can be described. As such, it plays a vital role in stochastic calculus, diffusion processes and even potential theory.
https://en.wikipedia.org/wiki/Wiener_integral
It is the driving process of Schramm–Loewner evolution. In applied mathematics, the Wiener process is used to represent the integral of a white noise Gaussian process, and so is useful as a model of noise in electronics engineering (see Brownian noise), instrument errors in filtering theory and disturbances in control theory. The Wiener process has applications throughout the mathematical sciences.
https://en.wikipedia.org/wiki/Wiener_integral
In physics it is used to study Brownian motion, the diffusion of minute particles suspended in fluid, and other types of diffusion via the Fokker–Planck and Langevin equations. It also forms the basis for the rigorous path integral formulation of quantum mechanics (by the Feynman–Kac formula, a solution to the Schrödinger equation can be represented in terms of the Wiener process) and the study of eternal inflation in physical cosmology. It is also prominent in the mathematical theory of finance, in particular the Black–Scholes option pricing model.
https://en.wikipedia.org/wiki/Wiener_integral
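The defining properties of the Wiener process (W(0) = 0 and independent Gaussian increments with variance equal to the time step) make simulation straightforward; a sketch checking E[W(T)] = 0 and Var[W(T)] = T by Monte Carlo:

```python
import math
import random

def wiener_path(T, steps, rng):
    """Sample path of a Wiener process on [0, T]: cumulative sum of
    independent N(0, dt) increments, starting from W(0) = 0."""
    dt = T / steps
    w, path = 0.0, [0.0]
    for _ in range(steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path

rng = random.Random(42)
ends = [wiener_path(1.0, 200, rng)[-1] for _ in range(2000)]
mean = sum(ends) / len(ends)
var = sum(e * e for e in ends) / len(ends)
print(mean, var)  # ≈ 0 and ≈ 1, up to Monte Carlo error
```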
In mathematics, the Wiener series, or Wiener G-functional expansion, originates from the 1958 book of Norbert Wiener. It is an orthogonal expansion for nonlinear functionals closely related to the Volterra series and having the same relation to it as an orthogonal Hermite polynomial expansion has to a power series. For this reason it is also known as the Wiener–Hermite expansion. The analogue of the coefficients are referred to as Wiener kernels.
https://en.wikipedia.org/wiki/Wiener_series
The terms of the series are orthogonal (uncorrelated) with respect to a statistical input of white noise. This property allows the terms to be identified in applications by the Lee–Schetzen method. The Wiener series is important in nonlinear system identification.
https://en.wikipedia.org/wiki/Wiener_series
In this context, the series approximates the functional relation of the output to the entire history of system input at any time. The Wiener series has been applied mostly to the identification of biological systems, especially in neuroscience. The name Wiener series is almost exclusively used in system theory. In the mathematical literature it occurs as the Itô expansion (1951) which has a different form but is entirely equivalent to it. The Wiener series should not be confused with the Wiener filter, which is another algorithm developed by Norbert Wiener used in signal processing.
https://en.wikipedia.org/wiki/Wiener_series
In mathematics, the Wiener–Wintner theorem, named after Norbert Wiener and Aurel Wintner, is a strengthening of the ergodic theorem, proved by Wiener and Wintner (1941).
https://en.wikipedia.org/wiki/Wiener–Wintner_theorem
In mathematics, the Wirtinger plane sextic curve, studied by Wirtinger, is a degree 6 genus 4 plane curve with double points at the 6 vertices of a complete quadrilateral.
https://en.wikipedia.org/wiki/Wirtinger_sextic
In mathematics, the Witten zeta function is a function associated to a root system that encodes the degrees of the irreducible representations of the corresponding Lie group. These zeta functions were introduced by Don Zagier, who named them after Edward Witten's study of their special values (among other things). Note that in Witten's work the zeta functions do not appear as explicit objects in their own right.
https://en.wikipedia.org/wiki/Witten_zeta_function
In mathematics, the Wright omega function or Wright function, denoted ω, is defined in terms of the Lambert W function as: ω ( z ) = W ⌈ I m ( z ) − π 2 π ⌉ ( e z ) . {\displaystyle \omega (z)=W_{{\big \lceil }{\frac {\mathrm {Im} (z)-\pi }{2\pi }}{\big \rceil }}(e^{z}).}
https://en.wikipedia.org/wiki/Wright_Omega_function
In mathematics, the Wythoff array is an infinite matrix of integers derived from the Fibonacci sequence and named after Dutch mathematician Willem Abraham Wythoff. Every positive integer occurs exactly once in the array, and every integer sequence defined by the Fibonacci recurrence can be derived by shifting a row of the array. The Wythoff array was first defined by Morrison (1980) using Wythoff pairs, the coordinates of winning positions in Wythoff's game. It can also be defined using Fibonacci numbers and Zeckendorf's theorem, or directly from the golden ratio and the recurrence relation defining the Fibonacci numbers.
https://en.wikipedia.org/wiki/Wythoff_array
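As a sketch of the golden-ratio construction mentioned above, the entry in row m and column n can be computed from one standard closed form, W(m, n) = F(n+1)·⌊mφ⌋ + F(n)·(m−1), where F is the Fibonacci sequence with F(1) = F(2) = 1 and φ is the golden ratio. The function name below is illustrative; the closed form is checked against the well-known opening rows of the array.

```python
from math import isqrt

def wythoff(m, n):
    """Entry in row m, column n (both 1-indexed) of the Wythoff array,
    via the closed form W(m, n) = F(n+1)*floor(m*phi) + F(n)*(m-1)."""
    # floor(m*phi) computed exactly in integers:
    # floor((m + sqrt(5*m^2)) / 2) = (m + isqrt(5*m*m)) // 2
    fm = (m + isqrt(5 * m * m)) // 2
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 1):
        a, b = b, a + b
    # now a = F(n), b = F(n+1)
    return b * fm + a * (m - 1)

# Each row satisfies the Fibonacci recurrence; row 1 is the Fibonacci
# sequence itself and row 2 is the Lucas-like sequence 4, 7, 11, 18, ...
rows = [[wythoff(m, n) for n in range(1, 7)] for m in range(1, 5)]
# rows[0] == [1, 2, 3, 5, 8, 13]
```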
In mathematics, the X-ray transform (also called ray transform or John transform) is an integral transform introduced by Fritz John in 1938 that is one of the cornerstones of modern integral geometry. It is very closely related to the Radon transform, and coincides with it in two dimensions. In higher dimensions, the X-ray transform of a function is defined by integrating over lines rather than over hyperplanes as in the Radon transform.
https://en.wikipedia.org/wiki/X-ray_transform
The X-ray transform derives its name from X-ray tomography (used in CT scans) because the X-ray transform of a function ƒ represents the attenuation data of a tomographic scan through an inhomogeneous medium whose density is represented by the function ƒ. Inversion of the X-ray transform is therefore of practical importance because it allows one to reconstruct an unknown density ƒ from its known attenuation data. In detail, if ƒ is a compactly supported continuous function on the Euclidean space Rn, then the X-ray transform of ƒ is the function Xƒ defined on the set of all lines in Rn by X f ( L ) = ∫ L f = ∫ R f ( x 0 + t θ ) d t {\displaystyle Xf(L)=\int _{L}f=\int _{\mathbf {R} }f(x_{0}+t\theta )dt} where x0 is an initial point on the line and θ is a unit vector in Rn giving the direction of the line L. The latter integral is not regarded in the oriented sense: it is the integral with respect to the 1-dimensional Lebesgue measure on the Euclidean line L. The X-ray transform satisfies an ultrahyperbolic wave equation called John's equation. The Gauss hypergeometric function can be written as an X-ray transform (Gelfand, Gindikin & Graev 2003, 2.1.2).
https://en.wikipedia.org/wiki/X-ray_transform
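The line-integral definition above can be illustrated numerically. For the Gaussian f(x) = exp(−|x|²) on R², the transform along a line at distance d from the origin is √π·e^{−d²}, because the Gaussian factors into components along and across the line. The following midpoint-rule sketch (function name and step sizes are illustrative) reproduces that value.

```python
import math

def xray_gaussian(x0, theta, h=0.01, T=10.0):
    """Midpoint-rule approximation of the X-ray transform of
    f(x) = exp(-|x|^2) along the line {x0 + t*theta}, with theta a
    unit vector in R^2. A rough numerical sketch, not a general solver."""
    n = int(round(2 * T / h))
    total = 0.0
    for i in range(n):
        t = -T + (i + 0.5) * h       # midpoint of the i-th subinterval
        px = x0[0] + t * theta[0]
        py = x0[1] + t * theta[1]
        total += math.exp(-(px * px + py * py)) * h
    return total

# A vertical line at distance d from the origin:
d = 0.7
val = xray_gaussian((d, 0.0), (0.0, 1.0))
expected = math.sqrt(math.pi) * math.exp(-d * d)
# val agrees with the exact value to high accuracy
```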
In mathematics, the Yang–Mills–Higgs equations are a set of non-linear partial differential equations for a Yang–Mills field, given by a connection, and a Higgs field, given by a section of a vector bundle (specifically, the adjoint bundle). These equations are D A ∗ F A + [ Φ , ∗ D A Φ ] = 0 , D A ∗ D A Φ = 0 {\displaystyle {\begin{aligned}D_{A}*F_{A}+[\Phi ,*D_{A}\Phi ]&=0,\\D_{A}*D_{A}\Phi &=0\end{aligned}}} with a boundary condition lim | x | → ∞ | Φ | ( x ) = 1 {\displaystyle \lim _{|x|\rightarrow \infty }|\Phi |(x)=1} where A is a connection on a vector bundle, DA is the exterior covariant derivative, FA is the curvature of that connection, Φ is a section of that vector bundle, ∗ is the Hodge star, and [·,·] is the natural, graded bracket. These equations are named after Chen Ning Yang, Robert Mills, and Peter Higgs. They are very closely related to the Ginzburg–Landau equations, when these are expressed in a general geometric setting.
https://en.wikipedia.org/wiki/Yang–Mills–Higgs_equations
M.V. Goganov and L.V. Kapitanskii have shown that the Cauchy problem for hyperbolic Yang–Mills–Higgs equations in Hamiltonian gauge on 4-dimensional Minkowski space has a unique global solution with no restrictions at the spatial infinity. Furthermore, the solution has the finite propagation speed property.
https://en.wikipedia.org/wiki/Yang–Mills–Higgs_equations
In mathematics, the Yoneda lemma is arguably the most important result in category theory. It is an abstract result on functors of the type "morphisms into a fixed object". It is a vast generalisation of Cayley's theorem from group theory (viewing a group as a miniature category with just one object and only isomorphisms). It allows the embedding of any locally small category into a category of functors (contravariant set-valued functors) defined on that category.
https://en.wikipedia.org/wiki/Yoneda_lemma
It also clarifies how the embedded category, of representable functors and their natural transformations, relates to the other objects in the larger functor category. It is an important tool that underlies several modern developments in algebraic geometry and representation theory. It is named after Nobuo Yoneda.
https://en.wikipedia.org/wiki/Yoneda_lemma
In mathematics, the Young–Deruyts development is a method of writing invariants of an action of a group on an n-dimensional vector space V in terms of invariants depending on at most n − 1 vectors (Dieudonné & Carrell 1970, 1971, pp. 36, 39).
https://en.wikipedia.org/wiki/Young–Deruyts_development
In mathematics, the Young–Fibonacci graph and Young–Fibonacci lattice, named after Alfred Young and Leonardo Fibonacci, are two closely related structures involving sequences of the digits 1 and 2. Any digit sequence of this type can be assigned a rank, the sum of its digits: for instance, the rank of 11212 is 1 + 1 + 2 + 1 + 2 = 7. As was already known in ancient India, the number of sequences with a given rank is a Fibonacci number. The Young–Fibonacci lattice is an infinite modular lattice having these digit sequences as its elements, compatible with this rank structure.
https://en.wikipedia.org/wiki/Young–Fibonacci_lattice
The Young–Fibonacci graph is the graph of this lattice, and has a vertex for each digit sequence. As the graph of a modular lattice, it is a modular graph. The Young–Fibonacci graph and the Young–Fibonacci lattice were both initially studied in two papers by Fomin (1988) and Stanley (1988). They are named after the closely related Young's lattice and after the Fibonacci number of their elements at any given rank.
https://en.wikipedia.org/wiki/Young–Fibonacci_lattice
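The rank statistic described above can be checked by brute force: enumerating all sequences over the digits {1, 2} and counting those whose digits sum to a given rank reproduces the Fibonacci numbers. Function names here are illustrative.

```python
from itertools import product

def count_by_rank(rank):
    """Count sequences over the digits {1, 2} whose digit sum equals
    `rank`, by direct enumeration (feasible only for small ranks)."""
    total = 0
    for length in range(rank + 1):
        for seq in product((1, 2), repeat=length):
            if sum(seq) == rank:
                total += 1
    return total

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

counts = [count_by_rank(r) for r in range(1, 9)]
# counts == [1, 2, 3, 5, 8, 13, 21, 34], i.e. count_by_rank(r) == fib(r + 1)
```

The recurrence is visible directly: a sequence of rank r ends in either a 1 (leaving rank r − 1) or a 2 (leaving rank r − 2).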
In mathematics, the Z function is a function used for studying the Riemann zeta function along the critical line where the argument is one-half. It is also called the Riemann–Siegel Z function, the Riemann–Siegel zeta function, the Hardy function, the Hardy Z function and the Hardy zeta function. It can be defined in terms of the Riemann–Siegel theta function and the Riemann zeta function by Z ( t ) = e i θ ( t ) ζ ( 1 2 + i t ) . {\displaystyle Z(t)=e^{i\theta (t)}\zeta \left({\frac {1}{2}}+it\right).}
https://en.wikipedia.org/wiki/Z_function
It follows from the functional equation of the Riemann zeta function that the Z function is real for real values of t. It is an even function, and real analytic for real values. It follows from the fact that the Riemann–Siegel theta function and the Riemann zeta function are both holomorphic in the critical strip, where the imaginary part of t is between −1/2 and 1/2, that the Z function is holomorphic in the critical strip also. Moreover, the real zeros of Z(t) are precisely the zeros of the zeta function along the critical line, and complex zeros in the Z function critical strip correspond to zeros off the critical line of the Riemann zeta function in its critical strip.
https://en.wikipedia.org/wiki/Z_function
In mathematics, the Zahlbericht (number report) was a report on algebraic number theory by Hilbert (1897); an English translation was published in 1998.
https://en.wikipedia.org/wiki/Zahlbericht
In mathematics, the Zak transform (also known as the Gelfand mapping) is a certain operation which takes as input a function of one variable and produces as output a function of two variables. The output function is called the Zak transform of the input function. The transform is defined as an infinite series in which each term is a product of a dilation of a translation by an integer of the function and an exponential function. In applications of Zak transform to signal processing the input function represents a signal and the transform will be a mixed time–frequency representation of the signal.
https://en.wikipedia.org/wiki/Zak_transform
The signal may be real-valued or complex-valued, defined on a continuous set (for example, the real numbers) or a discrete set (for example, the integers or a finite subset of integers). The Zak transform is a generalization of the discrete Fourier transform. The Zak transform was discovered by several people in different fields and was called by different names. It was called the "Gelfand mapping" because Israel Gelfand introduced it in his work on eigenfunction expansions. The transform was rediscovered independently by Joshua Zak in 1967, who called it the "k-q representation". There seems to be a general consensus among experts in the field to call it the Zak transform, since Zak was the first to systematically study it in a more general setting and to recognize its usefulness.
https://en.wikipedia.org/wiki/Zak_transform
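The series definition above can be sketched numerically under one common normalization, Z f(t, w) = Σ_k f(t + k)·e^{2πikw}; conventions differ in sign and scaling, so this is an assumption for illustration only. For a rapidly decaying signal, a truncated sum exhibits the transform's characteristic quasi-periodicity in t and periodicity in w.

```python
import cmath
import math

def zak(f, t, w, K=25):
    """Truncated Zak transform under one common normalization:
    Z f(t, w) = sum over k of f(t + k) * exp(2*pi*i*k*w).
    An illustrative sketch; K bounds the summation range."""
    return sum(f(t + k) * cmath.exp(2j * math.pi * k * w)
               for k in range(-K, K + 1))

def f(x):
    """A rapidly decaying test signal, for which truncation error is tiny."""
    return math.exp(-x * x)

t, w = 0.3, 0.25
z0 = zak(f, t, w)
# quasi-periodicity in t: Z f(t + 1, w) = exp(-2*pi*i*w) * Z f(t, w)
z1 = zak(f, t + 1, w)
# periodicity in w: Z f(t, w + 1) = Z f(t, w)
z2 = zak(f, t, w + 1)
```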
In mathematics, the Zakharov system is a system of non-linear partial differential equations, introduced by Vladimir Zakharov in 1972 to describe the propagation of Langmuir waves in an ionized plasma. The system consists of a complex field u and a real field n satisfying the equations i ∂ t u + ∇ 2 u = u n ◻ n = − ∇ 2 ( | u | 2 ) {\displaystyle {\begin{aligned}i\partial _{t}^{}u+\nabla ^{2}u&=un\\\Box n&=-\nabla ^{2}(|u|_{}^{2})\end{aligned}}} where ◻ {\displaystyle \Box } is the d'Alembert operator.
https://en.wikipedia.org/wiki/Zakharov_system
In mathematics, the Zakharov–Schulman system is a system of nonlinear partial differential equations introduced in Zakharov & Schulman (1980) to describe the interactions of small amplitude, high frequency waves with acoustic waves. The equations are i ∂ t u + L 1 u = ϕ u {\displaystyle i\partial _{t}^{}u+L_{1}u=\phi u} L 2 ϕ = L 3 ( | u | 2 ) {\displaystyle L_{2}\phi =L_{3}(|u|^{2})} where L1, L2, and L3 are constant coefficient differential operators.
https://en.wikipedia.org/wiki/Zakharov–Schulman_system
In mathematics, the Zassenhaus algorithm is a method to calculate a basis for the intersection and sum of two subspaces of a vector space. It is named after Hans Zassenhaus, but no publication of this algorithm by him is known. It is used in computer algebra systems.
https://en.wikipedia.org/wiki/Zassenhaus_algorithm
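The algorithm is a single row reduction of a block matrix: given generating sets for subspaces U and W of F^n, stack the rows (u | u) for each generator u of U and (w | 0) for each generator w of W, and bring the result to row echelon form. The left halves of rows with nonzero left half form a basis of U + W; the right halves of rows with zero left half form a basis of U ∩ W. A sketch over the rationals (function names are illustrative):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals (any echelon form
    suffices for the Zassenhaus algorithm)."""
    rows = [list(map(Fraction, r)) for r in rows]
    ncols = len(rows[0]) if rows else 0
    pivot_row = 0
    for col in range(ncols):
        pr = next((r for r in range(pivot_row, len(rows))
                   if rows[r][col] != 0), None)
        if pr is None:
            continue
        rows[pivot_row], rows[pr] = rows[pr], rows[pivot_row]
        piv = rows[pivot_row][col]
        rows[pivot_row] = [v / piv for v in rows[pivot_row]]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

def zassenhaus(U, W):
    """Bases for U + W and U ∩ W from generating sets of two subspaces
    of F^n, via row reduction of the block matrix [U U; W 0]."""
    n = len(U[0])
    block = [list(u) + list(u) for u in U] + [list(w) + [0] * n for w in W]
    sum_basis, int_basis = [], []
    for row in rref(block):
        left, right = row[:n], row[n:]
        if any(v != 0 for v in left):
            sum_basis.append(left)      # basis vector of U + W
        elif any(v != 0 for v in right):
            int_basis.append(right)     # basis vector of U ∩ W
    return sum_basis, int_basis

# Example: U = span{e1, e2}, W = span{e2, e3} in Q^3
sum_b, int_b = zassenhaus([(1, 0, 0), (0, 1, 0)], [(0, 1, 0), (0, 0, 1)])
# sum_b spans all of Q^3; int_b == [[0, 1, 0]]
```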
In mathematics, the Zeeman conjecture or Zeeman's collapsibility conjecture asks whether, given a finite contractible 2-dimensional CW complex K {\displaystyle K} , the space K × [ 0 , 1 ] {\displaystyle K\times [0,1]} is collapsible. The conjecture, due to Christopher Zeeman, implies the Poincaré conjecture and the Andrews–Curtis conjecture.
https://en.wikipedia.org/wiki/Zeeman_conjecture
In mathematics, the Zernike polynomials are a sequence of polynomials that are orthogonal on the unit disk. Named after optical physicist Frits Zernike, laureate of the 1953 Nobel Prize in Physics and the inventor of phase-contrast microscopy, they play important roles in various optics branches such as beam optics and imaging.
https://en.wikipedia.org/wiki/Zernike_polynomial
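The radial part of the Zernike polynomials has a standard closed-form sum, R_n^m(ρ) = Σ_k (−1)^k (n−k)! / (k! ((n+m)/2 − k)! ((n−m)/2 − k)!) ρ^{n−2k} for n ≥ m ≥ 0 with n − m even, and the radial polynomials of the same azimuthal order m are orthogonal on [0, 1] with weight ρ. A sketch (function name illustrative) with a numerical orthogonality check:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) via the standard closed-form
    sum; requires n >= m >= 0, and vanishes when n - m is odd."""
    if (n - m) % 2:
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        c = (-1) ** k * factorial(n - k) / (
            factorial(k)
            * factorial((n + m) // 2 - k)
            * factorial((n - m) // 2 - k))
        total += c * rho ** (n - 2 * k)
    return total

# e.g. R_2^0(rho) = 2*rho^2 - 1 and R_n^m(1) = 1.
# Orthogonality: integral over [0, 1] of R_2^0 * R_4^0 * rho d rho = 0,
# approximated here by a midpoint rule.
h = 0.0001
dot = sum(zernike_radial(2, 0, (i + 0.5) * h)
          * zernike_radial(4, 0, (i + 0.5) * h)
          * (i + 0.5) * h * h
          for i in range(10000))
# dot ≈ 0
```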
In mathematics, the Zilber–Pink conjecture is a far-reaching generalisation of many famous Diophantine conjectures and statements, such as André–Oort, Manin–Mumford, and Mordell–Lang. For algebraic tori and semiabelian varieties it was proposed by Boris Zilber and independently by Enrico Bombieri, David Masser, and Umberto Zannier in the early 2000s. For semiabelian varieties the conjecture implies the Mordell–Lang and Manin–Mumford conjectures. Richard Pink proposed (again independently) a more general conjecture for Shimura varieties which also implies the André–Oort conjecture.
https://en.wikipedia.org/wiki/Zilber–Pink_conjecture
In the case of algebraic tori, Zilber called it the Conjecture on Intersection with Tori (CIT). The general version is now known as the Zilber–Pink conjecture. It states roughly that atypical or unlikely intersections of an algebraic variety with certain special varieties are accounted for by finitely many special varieties.
https://en.wikipedia.org/wiki/Zilber–Pink_conjecture
In mathematics, the absolute Galois group GK of a field K is the Galois group of Ksep over K, where Ksep is a separable closure of K. Alternatively it is the group of all automorphisms of the algebraic closure of K that fix K. The absolute Galois group is well-defined up to inner automorphism. It is a profinite group. (When K is a perfect field, Ksep is the same as an algebraic closure Kalg of K. This holds e.g. for K of characteristic zero, or K a finite field.)
https://en.wikipedia.org/wiki/Absolute_Galois_group
In mathematics, the absolute value or modulus of a real number x {\displaystyle x} , denoted | x | {\displaystyle |x|} , is the non-negative value of x {\displaystyle x} without regard to its sign. Namely, | x | = x {\displaystyle |x|=x} if x {\displaystyle x} is a positive number, and | x | = − x {\displaystyle |x|=-x} if x {\displaystyle x} is negative (in which case negating x {\displaystyle x} makes − x {\displaystyle -x} positive), and | 0 | = 0 {\displaystyle |0|=0} . For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3.
https://en.wikipedia.org/wiki/Modulus_of_a_complex_number
The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
https://en.wikipedia.org/wiki/Modulus_of_a_complex_number
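The piecewise definition above, and its extension to the complex modulus |a + bi| = √(a² + b²), can be written out directly (function names are illustrative; Python's built-in abs already implements both):

```python
import math

def absolute_value(x):
    """Piecewise definition: |x| = x if x >= 0, and |x| = -x otherwise."""
    return x if x >= 0 else -x

def complex_modulus(z):
    """Modulus of a complex number z = a + bi: sqrt(a^2 + b^2)."""
    return math.hypot(z.real, z.imag)

# absolute_value(3) == 3 and absolute_value(-3) == 3, as in the text;
# complex_modulus(3 + 4j) == 5.0, agreeing with the built-in abs.
```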
In mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method for boundary value problems on partial differential equations, formulated only in terms of linear algebra without reference to domains, subdomains, etc. Many if not all domain decomposition methods can be cast as an abstract additive Schwarz method, which is often the first and most convenient approach to their analysis.
https://en.wikipedia.org/wiki/Abstract_additive_Schwarz_method
In mathematics, the actuarial polynomials a(β)n(x) are polynomials studied by Toscano (1950) given by the generating function ∑ n a n ( β ) ( x ) n ! t n = exp ⁡ ( β t + x ( 1 − e t ) ) {\displaystyle \displaystyle \sum _{n}{\frac {a_{n}^{(\beta )}(x)}{n!}}t^{n}=\exp(\beta t+x(1-e^{t}))} (Roman 1984, 4.3.4), Boas & Buck (1958).
https://en.wikipedia.org/wiki/Actuarial_polynomials
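The generating function above can be expanded directly with exact rational arithmetic to read off the polynomials: writing u(t) = βt + x(1 − e^t) = (β − x)t − x t²/2! − x t³/3! − ⋯ and exponentiating the truncated series gives a_n^{(β)}(x) as n! times the n-th coefficient. In particular a_0 = 1 and a_1 = β − x. The function names and truncation order below are illustrative.

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order (coefficients of t^0 .. t^(N-1))

def mul(p, q):
    """Multiply two truncated power series given as coefficient lists."""
    out = [Fraction(0)] * N
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < N:
                    out[i + j] += a * b
    return out

def series_exp(u):
    """exp(u) for a truncated series u with u[0] == 0, summing u^k / k!."""
    result = [Fraction(0)] * N
    result[0] = Fraction(1)
    power = [Fraction(0)] * N
    power[0] = Fraction(1)
    for k in range(1, N):       # u^k has lowest degree k, so k < N suffices
        power = mul(power, u)
        result = [r + p / factorial(k) for r, p in zip(result, power)]
    return result

def actuarial(beta, x):
    """a_n^{(beta)}(x) for n < N, from exp(beta*t + x*(1 - e^t))."""
    beta, x = Fraction(beta), Fraction(x)
    u = [Fraction(0)] * N
    u[1] = beta - x
    for n in range(2, N):
        u[n] = -x / factorial(n)
    egf = series_exp(u)
    return [egf[n] * factorial(n) for n in range(N)]

a = actuarial(2, 3)
# a[0] == 1, a[1] == beta - x == -1, a[2] == -x + (beta - x)**2 == -2
```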
In mathematics, the additive Schwarz method, named after Hermann Schwarz, solves a boundary value problem for a partial differential equation approximately by splitting it into boundary value problems on smaller domains and adding the results.
https://en.wikipedia.org/wiki/Additive_Schwarz_method
In mathematics, the additive identity of a set that is equipped with the operation of addition is an element which, when added to any element x in the set, yields x. One of the most familiar additive identities is the number 0 from elementary mathematics, but additive identities occur in other mathematical structures where addition is defined, such as in groups and rings.
https://en.wikipedia.org/wiki/Additive_identity
In mathematics, the additive inverse of a number a (sometimes called the opposite of a) is the number that, when added to a, yields zero. The operation taking a number to its additive inverse is known as sign change or negation. For a real number, it reverses its sign: the additive inverse (opposite number) of a positive number is negative, and the additive inverse of a negative number is positive. Zero is the additive inverse of itself.
https://en.wikipedia.org/wiki/Opposite_number
The additive inverse of a is denoted by unary minus: −a (see also § Relation to subtraction below). For example, the additive inverse of 7 is −7, because 7 + (−7) = 0, and the additive inverse of −0.3 is 0.3, because −0.3 + 0.3 = 0. Similarly, the additive inverse of a − b is −(a − b), which can be simplified to b − a. The additive inverse of 2x − 3 is 3 − 2x, because 2x − 3 + 3 − 2x = 0. The additive inverse is defined as the inverse element under the binary operation of addition (see also § Formal definition below), which allows a broad generalization to mathematical objects other than numbers. As for any inverse operation, a double additive inverse has no net effect: −(−x) = x.
https://en.wikipedia.org/wiki/Opposite_number
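The defining property a + (−a) = 0 and the double-negation identity −(−x) = x are easy to demonstrate directly (the function name is illustrative; for numbers the unary minus operator is the additive inverse):

```python
def additive_inverse(a):
    """The additive inverse of a: the unique b with a + b == 0.
    For numbers this is simply -a."""
    return -a

# 7 + additive_inverse(7) == 0, and additive_inverse(-0.3) == 0.3;
# a double additive inverse has no net effect.
```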
In mathematics, the additive polynomials are an important topic in classical algebraic number theory.
https://en.wikipedia.org/wiki/Additive_polynomial
In mathematics, the adele ring of a global field (also adelic ring, ring of adeles or ring of adèles) is a central object of class field theory, a branch of algebraic number theory. It is the restricted product of all the completions of the global field and is an example of a self-dual topological ring. An adele derives from a particular kind of idele. "Idele" derives from the French "idèle" and was coined by the French mathematician Claude Chevalley.
https://en.wikipedia.org/wiki/Valuation_vector
The word stands for 'ideal element' (abbreviated: id.el.). Adele (French: "adèle") stands for 'additive idele' (that is, additive ideal element).
https://en.wikipedia.org/wiki/Valuation_vector
The ring of adeles allows one to describe the Artin reciprocity law, which is a generalisation of quadratic reciprocity, and other reciprocity laws over finite fields. In addition, it is a classical theorem of Weil that G {\displaystyle G} -bundles on an algebraic curve over a finite field can be described in terms of adeles for a reductive group G {\displaystyle G} . Adeles are also connected with the adelic algebraic groups and adelic curves. The study of geometry of numbers over the ring of adeles of a number field is called adelic geometry.
https://en.wikipedia.org/wiki/Valuation_vector
In mathematics, the adjective Noetherian is used to describe objects that satisfy an ascending or descending chain condition on certain kinds of subobjects, meaning that certain ascending or descending sequences of subobjects must have finite length. Noetherian objects are named after Emmy Noether, who was the first to study the ascending and descending chain conditions for rings. Specifically: Noetherian group, a group that satisfies the ascending chain condition on subgroups. Noetherian ring, a ring that satisfies the ascending chain condition on ideals.
https://en.wikipedia.org/wiki/Noetherian
Noetherian module, a module that satisfies the ascending chain condition on submodules. More generally, an object in a category is said to be Noetherian if there is no infinitely increasing filtration of it by subobjects. A category is Noetherian if every object in it is Noetherian.
https://en.wikipedia.org/wiki/Noetherian
Noetherian relation, a binary relation that satisfies the ascending chain condition on its elements. Noetherian topological space, a topological space that satisfies the descending chain condition on closed sets. Noetherian induction, also called well-founded induction, a proof method for binary relations that satisfy the descending chain condition. Noetherian rewriting system, an abstract rewriting system that has no infinite chains. Noetherian scheme, a scheme in algebraic geometry that admits a finite covering by open spectra of Noetherian rings.
https://en.wikipedia.org/wiki/Noetherian
In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which was distinguished from the more difficult quadrivium curriculum.
https://en.wikipedia.org/wiki/Triviality_(mathematics)
The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. Whether a situation under consideration is trivial depends on who considers it: a statement may be obvious to someone with sufficient knowledge or experience of it, while someone encountering it for the first time may find it hard to understand, and hence not trivial at all. There can also be disagreement about how quickly and easily a problem should be recognized for it to be treated as trivial. Triviality is therefore not a universally agreed property in mathematics and logic.
https://en.wikipedia.org/wiki/Triviality_(mathematics)