In mathematics, the Grace–Walsh–Szegő coincidence theorem is a result named after John Hilton Grace, Joseph L. Walsh, and Gábor Szegő.
|
https://en.wikipedia.org/wiki/Grace–Walsh–Szegő_theorem
|
In mathematics, the Graham–Rothschild theorem is a theorem that applies Ramsey theory to combinatorics on words and combinatorial cubes. It is named after Ronald Graham and Bruce Lee Rothschild, who published its proof in 1971. Through the work of Graham, Rothschild, and Klaus Leeb in 1972, it became part of the foundations of structural Ramsey theory. A special case of the Graham–Rothschild theorem motivates the definition of Graham's number, a number that was popularized by Martin Gardner in Scientific American and listed in the Guinness Book of World Records as the largest number ever appearing in a mathematical proof.
|
https://en.wikipedia.org/wiki/Graham–Rothschild_theorem
|
In mathematics, the Grassmannian Gr(k, V) is a space that parameterizes all k-dimensional linear subspaces of the n-dimensional vector space V. For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space of one dimension lower than V. When V is a real or complex vector space, Grassmannians are compact smooth manifolds. In general they have the structure of a smooth algebraic variety, of dimension {\displaystyle k(n-k).}
|
https://en.wikipedia.org/wiki/Grassmannian_variety
|
The earliest work on a non-trivial Grassmannian is due to Julius Plücker, who studied the set of projective lines in projective 3-space, equivalent to Gr(2, R4), and parameterized them by what are now called Plücker coordinates. Hermann Grassmann later introduced the concept in general. Notations for the Grassmannian vary between authors; notations include Grk(V), Gr(k, V), Grk(n), or Gr(k, n) to denote the Grassmannian of k-dimensional subspaces of an n-dimensional vector space V.
|
https://en.wikipedia.org/wiki/Grassmannian_variety
|
In mathematics, the Grauert–Riemenschneider vanishing theorem is an extension of the Kodaira vanishing theorem on the vanishing of higher cohomology groups of coherent sheaves on a compact complex manifold, due to Grauert and Riemenschneider (1970).
|
https://en.wikipedia.org/wiki/Grauert-Riemenschneider_conjecture
|
In mathematics, the Griess algebra is a commutative non-associative algebra on a real vector space of dimension 196884 that has the Monster group M as its automorphism group. It is named after mathematician R. L. Griess, who constructed it in 1980 and subsequently used it in 1982 to construct M. The Monster fixes (vectorwise) a 1-space in this algebra and acts absolutely irreducibly on the 196883-dimensional orthogonal complement of this 1-space. (The Monster preserves the standard inner product on the 196884-space.) Griess's construction was later simplified by Jacques Tits and John H. Conway. The Griess algebra is the same as the degree 2 piece of the monster vertex algebra, and the Griess product is one of the vertex algebra products.
|
https://en.wikipedia.org/wiki/Griess_algebra
|
In mathematics, the Griewank function is often used in testing of optimization algorithms. It is defined as follows: {\displaystyle 1+{\frac {1}{4000}}\sum _{i=1}^{n}x_{i}^{2}-\prod _{i=1}^{n}\cos \left({\frac {x_{i}}{\sqrt {i}}}\right).} The following paragraphs display the special cases of the first-, second- and third-order Griewank function, and their plots.
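The definition translates directly into code; a minimal sketch (the function name and list-of-coordinates convention are illustrative). The global minimum value 0 is attained at the origin:

```python
import math

def griewank(x):
    """Griewank function: 1 + (1/4000) * sum of x_i^2, minus prod of cos(x_i / sqrt(i))."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p
```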
|
https://en.wikipedia.org/wiki/Griewank_function
|
In mathematics, the Gromov boundary of a δ-hyperbolic space (especially a hyperbolic group) is an abstract concept generalizing the boundary sphere of hyperbolic space. Conceptually, the Gromov boundary is the set of all points at infinity. For instance, the Gromov boundary of the real line is two points, corresponding to positive and negative infinity.
|
https://en.wikipedia.org/wiki/Gromov_boundary
|
In mathematics, the Gromov invariant of Clifford Taubes counts embedded (possibly disconnected) pseudoholomorphic curves in a symplectic 4-manifold, where the curves are holomorphic with respect to an auxiliary compatible almost complex structure. (Multiple covers of 2-tori with self-intersection 0 are also counted.) Taubes proved the information contained in this invariant is equivalent to invariants derived from the Seiberg–Witten equations in a series of four long papers.
|
https://en.wikipedia.org/wiki/Taubes's_Gromov_invariant
|
Much of the analytical complexity connected to this invariant comes from properly counting multiply covered pseudoholomorphic curves so that the result is independent of the choice of almost complex structure. The crux is a topologically defined index for pseudoholomorphic curves which controls embeddedness and bounds the Fredholm index. Embedded contact homology is an extension, due to Michael Hutchings, of this work to noncompact four-manifolds of the form {\displaystyle Y\times \mathbb {R} }, where Y is a compact contact 3-manifold.
|
https://en.wikipedia.org/wiki/Taubes's_Gromov_invariant
|
ECH is a symplectic field theory-like invariant; namely, it is the homology of a chain complex generated by certain combinations of Reeb orbits of a contact form on Y, and whose differential counts certain embedded pseudoholomorphic curves and multiply covered pseudoholomorphic cylinders with "ECH index" 1 in {\displaystyle Y\times \mathbb {R} }. The ECH index is a version of Taubes's index for the cylindrical case, and again, the curves are pseudoholomorphic with respect to a suitable almost complex structure. The result is a topological invariant of Y, which Taubes proved is isomorphic to monopole Floer homology, a version of Seiberg–Witten homology for Y.
|
https://en.wikipedia.org/wiki/Taubes's_Gromov_invariant
|
In mathematics, the Gross–Koblitz formula, introduced by Gross and Koblitz (1979), expresses a Gauss sum using a product of values of the p-adic gamma function. It is an analog of the Chowla–Selberg formula for the usual gamma function. It implies the Hasse–Davenport relation and generalizes the Stickelberger theorem. Boyarsky (1980) gave another proof of the Gross–Koblitz formula (Boyarsky being a pseudonym of Bernard Dwork), and Robert (2001) gave an elementary proof.
|
https://en.wikipedia.org/wiki/Gross–Koblitz_formula
|
In mathematics, the Grothendieck existence theorem, introduced by Grothendieck (1961, section 5), gives conditions that enable one to lift infinitesimal deformations of a scheme to a deformation, and to lift schemes over infinitesimal neighborhoods of a subscheme of a scheme S to schemes over S. The theorem can be viewed as an instance of (Grothendieck's) formal GAGA.
|
https://en.wikipedia.org/wiki/Grothendieck_existence_theorem
|
In mathematics, the Grothendieck group, or group of differences, of a commutative monoid M is a certain abelian group. This abelian group is constructed from M in the most universal way, in the sense that any abelian group containing a homomorphic image of M will also contain a homomorphic image of the Grothendieck group of M. The Grothendieck group construction takes its name from a specific case in category theory, introduced by Alexander Grothendieck in his proof of the Grothendieck–Riemann–Roch theorem, which resulted in the development of K-theory. This specific case is the monoid of isomorphism classes of objects of an abelian category, with the direct sum as its operation.
|
https://en.wikipedia.org/wiki/Grothendieck_group
|
In mathematics, the Grothendieck inequality states that there is a universal constant {\displaystyle K_{G}} with the following property. If Mij is an n × n (real or complex) matrix with {\displaystyle {\Big |}\sum _{i,j}M_{ij}s_{i}t_{j}{\Big |}\leq 1} for all (real or complex) numbers si, tj of absolute value at most 1, then {\displaystyle {\Big |}\sum _{i,j}M_{ij}\langle S_{i},T_{j}\rangle {\Big |}\leq K_{G}} for all vectors Si, Tj in the unit ball B(H) of a (real or complex) Hilbert space H, the constant {\displaystyle K_{G}} being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n × n matrices is called a Grothendieck constant and denoted {\displaystyle K_{G}(d)}. In fact, there are two Grothendieck constants, {\displaystyle K_{G}^{\mathbb {R} }(d)} and {\displaystyle K_{G}^{\mathbb {C} }(d)}, depending on whether one works with real or complex numbers, respectively. The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.
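The inequality can be probed numerically in the real case (the matrix and unit vectors below are arbitrary illustrative choices; 1.783 is slightly above Krivine's upper bound for the real Grothendieck constant):

```python
import itertools
import math

# Fixed real test matrix (an arbitrary illustrative choice).
M = [[1.0, -2.0, 0.5],
     [0.0, 1.5, -1.0],
     [2.0, 0.5, 1.0]]
n = 3

# Scalar side: sup of |sum_ij M_ij s_i t_j| over s, t in {-1, +1}^n
# (for a bilinear form, the sup over [-1, 1]^n is attained at sign vectors).
signs = list(itertools.product([-1.0, 1.0], repeat=n))
scalar_sup = max(
    abs(sum(M[i][j] * s[i] * t[j] for i in range(n) for j in range(n)))
    for s in signs for t in signs
)

# Normalize so the scalar form is bounded by 1, as in the statement.
Mn = [[M[i][j] / scalar_sup for j in range(n)] for i in range(n)]

def unit(v):
    r = math.sqrt(sum(x * x for x in v))
    return [x / r for x in v]

# Vector side: arbitrary unit vectors S_i, T_j in R^3.
S = [unit([1, 2, 2]), unit([0, 1, -1]), unit([3, 0, 1])]
T = [unit([1, 0, 0]), unit([1, 1, 1]), unit([-1, 2, 0])]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
vector_val = abs(sum(Mn[i][j] * dot(S[i], T[j]) for i in range(n) for j in range(n)))
```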
|
https://en.wikipedia.org/wiki/Grothendieck_constant
|
In mathematics, the Grothendieck–Katz p-curvature conjecture is a local-global principle for linear ordinary differential equations, related to differential Galois theory and in a loose sense analogous to the result in the Chebotarev density theorem considered as the polynomial case. It is a conjecture of Alexander Grothendieck from the late 1960s, and apparently not published by him in any form. The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraic foliations.
|
https://en.wikipedia.org/wiki/Grothendieck_conjecture
|
In mathematics, the Grothendieck–Teichmüller group GT is a group closely related to (and possibly equal to) the absolute Galois group of the rational numbers. It was introduced by Vladimir Drinfeld (1990) and named after Alexander Grothendieck and Oswald Teichmüller, based on Grothendieck's suggestion in his 1984 essay Esquisse d'un Programme to study the absolute Galois group of the rationals by relating it to its action on the Teichmüller tower of Teichmüller groupoids Tg,n, the fundamental groupoids of moduli stacks of genus g curves with n points removed. There are several minor variations of the group: a discrete version, a pro-l version, a k-pro-unipotent version, and a profinite version; the first three versions were defined by Drinfeld, and the version most often used is the profinite version.
|
https://en.wikipedia.org/wiki/Galois–Teichmüller_theory
|
In mathematics, the Grünwald–Letnikov derivative is a basic extension of the derivative in fractional calculus that allows one to take the derivative a non-integer number of times. It was introduced by Anton Karl Grünwald (1838–1920) from Prague, in 1867, and by Aleksey Vasilievich Letnikov (1837–1888) in Moscow in 1868.
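The construction behind the name, a fractional binomial-difference quotient (recalled here from the standard definition, not stated in this passage), can be sketched numerically; the test value uses the known closed form D^(1/2) x = 2·sqrt(x/π) for f(x) = x:

```python
import math

def gl_half_derivative(f, x, h):
    """Grunwald-Letnikov derivative of order 1/2 with lower terminal 0:
    h^(-1/2) * sum_{k=0}^{x/h} (-1)^k * C(1/2, k) * f(x - k*h)."""
    a = 0.5
    n = int(round(x / h))
    w = 1.0                       # (-1)^0 * C(a, 0)
    total = w * f(x)
    for k in range(1, n + 1):
        w *= 1.0 - (a + 1.0) / k  # recurrence for (-1)^k * C(a, k)
        total += w * f(x - k * h)
    return total / h ** a

# Known closed form for f(x) = x: D^(1/2) x = 2 * sqrt(x / pi).
approx = gl_half_derivative(lambda t: t, 1.0, 1e-3)
```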
|
https://en.wikipedia.org/wiki/Grünwald–Letnikov_derivative
|
In mathematics, the Gudermannian function relates a hyperbolic angle measure {\textstyle \psi } to a circular angle measure {\textstyle \phi } called the gudermannian of {\textstyle \psi } and denoted {\textstyle \operatorname {gd} \psi }. The Gudermannian function reveals a close relationship between the circular functions and hyperbolic functions. It was introduced in the 1760s by Johann Heinrich Lambert, and later named for Christoph Gudermann, who also described the relationship between circular and hyperbolic functions in 1830. The gudermannian is sometimes called the hyperbolic amplitude as a limiting case of the Jacobi elliptic amplitude {\textstyle \operatorname {am} (\psi ,m)} when parameter {\textstyle m=1}.
|
https://en.wikipedia.org/wiki/Gudermannian_function
|
The real Gudermannian function is typically defined for {\textstyle -\infty <\psi <\infty } to be the integral of the hyperbolic secant: {\displaystyle \phi =\operatorname {gd} \psi \equiv \int _{0}^{\psi }\operatorname {sech} t\,\mathrm {d} t=\arctan(\sinh \psi ).}
|
https://en.wikipedia.org/wiki/Gudermannian_function
|
The real inverse Gudermannian function can be defined for {\textstyle -{\tfrac {1}{2}}\pi <\phi <{\tfrac {1}{2}}\pi } as the integral of the secant: {\displaystyle \psi =\operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\sec t\,\mathrm {d} t=\operatorname {arsinh} (\tan \phi ).} The hyperbolic angle measure {\displaystyle \psi =\operatorname {gd} ^{-1}\phi } is called the anti-gudermannian of {\displaystyle \phi } or sometimes the lambertian of {\displaystyle \phi }, denoted {\displaystyle \psi =\operatorname {lam} \phi }.
|
https://en.wikipedia.org/wiki/Gudermannian_function
|
In the context of geodesy and navigation for latitude {\textstyle \phi }, {\displaystyle k\operatorname {gd} ^{-1}\phi } (scaled by arbitrary constant {\textstyle k}) was historically called the meridional part of {\displaystyle \phi } (French: latitude croissante). It is the vertical coordinate of the Mercator projection. The two angle measures {\textstyle \phi } and {\textstyle \psi } are related by a common stereographic projection {\displaystyle s=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi ,} and this identity can serve as an alternative definition for {\textstyle \operatorname {gd} } and {\textstyle \operatorname {gd} ^{-1}} valid throughout the complex plane: {\displaystyle {\begin{aligned}\operatorname {gd} \psi &={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )},\\\operatorname {gd} ^{-1}\phi &={2\operatorname {artanh} }{\bigl (}\tan {\tfrac {1}{2}}\phi \,{\bigr )}.\end{aligned}}}
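The integral definitions and the half-angle identities can be cross-checked numerically with the standard library (a minimal sketch):

```python
import math

def gd(psi):
    """Gudermannian function: gd(psi) = arctan(sinh(psi))."""
    return math.atan(math.sinh(psi))

def gd_inv(phi):
    """Inverse Gudermannian (anti-gudermannian): arsinh(tan(phi))."""
    return math.asinh(math.tan(phi))
```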
|
https://en.wikipedia.org/wiki/Gudermannian_function
|
In mathematics, the H-derivative is a notion of derivative in the study of abstract Wiener spaces and the Malliavin calculus.
|
https://en.wikipedia.org/wiki/H-derivative
|
In mathematics, the HM-GM-AM-QM inequalities, also known as the mean inequality chain, state the relationship between the harmonic mean, geometric mean, arithmetic mean, and quadratic mean (also known as root mean square). Suppose that {\displaystyle x_{1},x_{2},\ldots ,x_{n}} are positive real numbers. Then {\displaystyle 0<{\frac {n}{{\frac {1}{x_{1}}}+{\frac {1}{x_{2}}}+\cdots +{\frac {1}{x_{n}}}}}\leq {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}\leq {\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\leq {\sqrt {\frac {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}{n}}}.} These inequalities often appear in mathematical competitions and have applications in many fields of science.
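The chain can be verified numerically for any sample of positive reals (helper name illustrative); equality throughout holds exactly when all the numbers are equal:

```python
import math

def mean_chain(xs):
    """Return (HM, GM, AM, QM) for a list of positive reals."""
    n = len(xs)
    hm = n / sum(1.0 / x for x in xs)
    gm = math.prod(xs) ** (1.0 / n)
    am = sum(xs) / n
    qm = math.sqrt(sum(x * x for x in xs) / n)
    return hm, gm, am, qm

hm, gm, am, qm = mean_chain([1.0, 2.0, 4.0, 8.0])
```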
|
https://en.wikipedia.org/wiki/HM-GM-AM-QM_inequalities
|
In mathematics, the HNN extension is an important construction of combinatorial group theory. Introduced in a 1949 paper, Embedding Theorems for Groups, by Graham Higman, Bernhard Neumann, and Hanna Neumann, it embeds a given group G into another group G′, in such a way that two given isomorphic subgroups of G are conjugate (through a given isomorphism) in G′.
|
https://en.wikipedia.org/wiki/Britton's_Lemma
|
In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example. The Haar sequence was proposed in 1909 by Alfréd Haar.
|
https://en.wikipedia.org/wiki/Haar_transform
|
Haar used these functions to give an example of an orthonormal system for the space of square-integrable functions on the unit interval. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as Db1.
|
https://en.wikipedia.org/wiki/Haar_transform
|
The Haar wavelet is also the simplest possible wavelet. The technical disadvantage of the Haar wavelet is that it is not continuous, and therefore not differentiable.
|
https://en.wikipedia.org/wiki/Haar_transform
|
This property can, however, be an advantage for the analysis of signals with sudden transitions (discrete signals), such as monitoring of tool failure in machines. The Haar wavelet's mother wavelet function {\displaystyle \psi (t)} can be described as {\displaystyle \psi (t)={\begin{cases}1\quad &0\leq t<{\frac {1}{2}},\\-1&{\frac {1}{2}}\leq t<1,\\0&{\mbox{otherwise.}}\end{cases}}} Its scaling function {\displaystyle \varphi (t)} can be described as {\displaystyle \varphi (t)={\begin{cases}1\quad &0\leq t<1,\\0&{\mbox{otherwise.}}\end{cases}}}
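The two piecewise definitions are straightforward to code (a minimal sketch); note that the mother wavelet has mean zero over [0, 1):

```python
def haar_mother(t):
    """Haar mother wavelet psi(t): 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def haar_scaling(t):
    """Haar scaling function phi(t): 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0 <= t < 1 else 0.0
```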
|
https://en.wikipedia.org/wiki/Haar_transform
|
In mathematics, the Hadamard derivative is a concept of directional derivative for maps between Banach spaces. It is particularly suited for applications in stochastic programming and asymptotic statistics.
|
https://en.wikipedia.org/wiki/Hadamard_derivative
|
In mathematics, the Hadamard product (also known as the element-wise product, entrywise product, or Schur product) is a binary operation that takes in two matrices of the same dimensions and returns a matrix of the multiplied corresponding elements. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French-Jewish mathematician Jacques Hadamard or German-Jewish mathematician Issai Schur. The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.
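As a sketch with plain nested lists (names illustrative), the operation and its commutativity:

```python
def hadamard(A, B):
    """Element-wise (Hadamard) product of two equal-sized matrices as nested lists."""
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
```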
|
https://en.wikipedia.org/wiki/Elementwise_division
|
In mathematics, the Hadwiger–Finsler inequality is a result on the geometry of triangles in the Euclidean plane. It states that if a triangle in the plane has side lengths a, b and c and area T, then {\displaystyle a^{2}+b^{2}+c^{2}\geq (a-b)^{2}+(b-c)^{2}+(c-a)^{2}+4{\sqrt {3}}\,T.}
|
https://en.wikipedia.org/wiki/Hadwiger–Finsler_inequality
|
In mathematics, the Hahn decomposition theorem, named after the Austrian mathematician Hans Hahn, states that for any measurable space {\displaystyle (X,\Sigma )} and any signed measure {\displaystyle \mu } defined on the {\displaystyle \sigma }-algebra {\displaystyle \Sigma }, there exist two {\displaystyle \Sigma }-measurable sets {\displaystyle P} and {\displaystyle N} of {\displaystyle X} such that: {\displaystyle P\cup N=X} and {\displaystyle P\cap N=\varnothing }; for every {\displaystyle E\in \Sigma } with {\displaystyle E\subseteq P} one has {\displaystyle \mu (E)\geq 0}, i.e., {\displaystyle P} is a positive set for {\displaystyle \mu }; and for every {\displaystyle E\in \Sigma } with {\displaystyle E\subseteq N} one has {\displaystyle \mu (E)\leq 0}, i.e., {\displaystyle N} is a negative set for {\displaystyle \mu }. Moreover, this decomposition is essentially unique, meaning that for any other pair {\displaystyle (P',N')} of {\displaystyle \Sigma }-measurable subsets of {\displaystyle X} fulfilling the three conditions above, the symmetric differences {\displaystyle P\triangle P'} and {\displaystyle N\triangle N'} are {\displaystyle \mu }-null sets in the strong sense that every {\displaystyle \Sigma }-measurable subset of them has zero measure. The pair {\displaystyle (P,N)} is then called a Hahn decomposition of the signed measure {\displaystyle \mu }.
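For a signed measure supported on finitely many atoms, a Hahn decomposition can be read off directly from the sign of each mass; a minimal sketch (names and the example measure are illustrative):

```python
def hahn_decomposition(mass):
    """Hahn decomposition of a finitely supported signed measure.

    `mass` maps each point of X to its (possibly negative) mass; returns
    (P, N) with P carrying all nonnegative mass and N all negative mass."""
    P = {x for x, m in mass.items() if m >= 0}
    N = set(mass) - P
    return P, N

mu = {"a": 2.0, "b": -1.5, "c": 0.0, "d": -3.0, "e": 1.0}
P, N = hahn_decomposition(mu)
```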
|
https://en.wikipedia.org/wiki/Jordan_decomposition_theorem
|
In mathematics, the Hahn polynomials are a family of orthogonal polynomials in the Askey scheme of hypergeometric orthogonal polynomials, introduced by Pafnuty Chebyshev in 1875 (Chebyshev 1907) and rediscovered by Wolfgang Hahn (Hahn 1949). The Hahn class is a name for special cases of Hahn polynomials, including Hahn polynomials, Meixner polynomials, Krawtchouk polynomials, and Charlier polynomials. Sometimes the Hahn class is taken to include limiting cases of these polynomials, in which case it also includes the classical orthogonal polynomials. Hahn polynomials are defined in terms of generalized hypergeometric functions by {\displaystyle Q_{n}(x;\alpha ,\beta ,N)={}_{3}F_{2}(-n,-x,n+\alpha +\beta +1;\alpha +1,-N+1;1).}
|
https://en.wikipedia.org/wiki/Hahn_polynomials
|
Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties. If {\displaystyle \alpha =\beta =0}, these polynomials are identical to the discrete Chebyshev polynomials except for a scale factor. Closely related polynomials include the dual Hahn polynomials Rn(x;γ,δ,N), the continuous Hahn polynomials {\displaystyle p_{n}(x;a,b,{\overline {a}},{\overline {b}})}, and the continuous dual Hahn polynomials Sn(x;a,b,c). These polynomials all have q-analogs with an extra parameter q, such as the q-Hahn polynomials Qn(x;α,β, N;q), and so on.
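The terminating 3F2 series above can be evaluated directly; since (-x)_k vanishes for k ≥ 1 at x = 0, one gets Q_n(0; α, β, N) = 1 for every n (a minimal sketch, helper names illustrative):

```python
import math

def hahn_Q(n, x, alpha, beta, N):
    """Evaluate the Hahn polynomial Q_n(x; alpha, beta, N) via its
    terminating 3F2 series (k runs only to n because (-n)_k vanishes beyond)."""
    def poch(a, k):  # Pochhammer symbol (a)_k
        p = 1.0
        for i in range(k):
            p *= a + i
        return p
    return sum(
        poch(-n, k) * poch(-x, k) * poch(n + alpha + beta + 1, k)
        / (poch(alpha + 1, k) * poch(-N + 1, k) * math.factorial(k))
        for k in range(n + 1)
    )
```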
|
https://en.wikipedia.org/wiki/Hahn_polynomials
|
In mathematics, the Hahn–Exton q-Bessel function or the third Jackson q-Bessel function is a q-analog of the Bessel function, and satisfies the Hahn–Exton q-difference equation (Swarttouw (1992)). This function was introduced by Hahn (1953) in a special case and by Exton (1983) in general. The Hahn–Exton q-Bessel function is given by {\displaystyle J_{\nu }^{(3)}(x;q)={\frac {x^{\nu }(q^{\nu +1};q)_{\infty }}{(q;q)_{\infty }}}\sum _{k\geq 0}{\frac {(-1)^{k}q^{k(k+1)/2}x^{2k}}{(q^{\nu +1};q)_{k}(q;q)_{k}}}={\frac {(q^{\nu +1};q)_{\infty }}{(q;q)_{\infty }}}x^{\nu }{}_{1}\phi _{1}(0;q^{\nu +1};q,qx^{2}),} where {\displaystyle \phi } is the basic hypergeometric function.
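The defining series is straightforward to evaluate numerically for |q| < 1 (the truncation depths below are illustrative); at x = 0 the prefactor and the sum both reduce to 1:

```python
def qpoch(a, q, k):
    """Finite q-Pochhammer symbol (a; q)_k."""
    p = 1.0
    for i in range(k):
        p *= 1.0 - a * q ** i
    return p

def qpoch_inf(a, q, terms=200):
    """Truncated infinite q-Pochhammer (a; q)_infinity, valid for |q| < 1."""
    return qpoch(a, q, terms)

def hahn_exton_J(nu, x, q, kmax=60):
    """Hahn-Exton q-Bessel function J_nu^{(3)}(x; q) via its defining series."""
    pref = x ** nu * qpoch_inf(q ** (nu + 1), q) / qpoch_inf(q, q)
    s = sum((-1) ** k * q ** ((k * (k + 1)) // 2) * x ** (2 * k)
            / (qpoch(q ** (nu + 1), q, k) * qpoch(q, q, k))
            for k in range(kmax))
    return pref * s
```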
|
https://en.wikipedia.org/wiki/Hahn–Exton_q-Bessel_function
|
In mathematics, the Hales–Jewett theorem is a fundamental combinatorial result of Ramsey theory named after Alfred W. Hales and Robert I. Jewett, concerning the degree to which high-dimensional objects must necessarily exhibit some combinatorial structure; it is impossible for such objects to be "completely random". An informal geometric statement of the theorem is that for any positive integers n and c there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are colored with c colors, there must be one row, column, or certain diagonal (more details below) of length n all of whose cells are the same color. In other words, the higher-dimensional, multi-player, n-in-a-row generalization of a game of tic-tac-toe cannot end in a draw, no matter how large n is, no matter how many people c are playing, and no matter which player plays each turn, provided only that it is played on a board of sufficiently high dimension H. By a standard strategy stealing argument, one can thus conclude that if two players alternate, then the first player has a winning strategy when H is sufficiently large, though no practical algorithm for obtaining this strategy is known.
|
https://en.wikipedia.org/wiki/Hales–Jewett_theorem
|
In mathematics, the Hall algebra is an associative algebra with a basis corresponding to isomorphism classes of finite abelian p-groups. It was first discussed by Steinitz (1901) but forgotten until it was rediscovered by Philip Hall (1959), both of whom published no more than brief summaries of their work. The Hall polynomials are the structure constants of the Hall algebra. The Hall algebra plays an important role in the theory of Masaki Kashiwara and George Lusztig regarding canonical bases in quantum groups. Ringel (1990) generalized Hall algebras to more general categories, such as the category of representations of a quiver.
|
https://en.wikipedia.org/wiki/Hall_algebra
|
In mathematics, the Hall–Littlewood polynomials are symmetric functions depending on a parameter t and a partition λ. They are Schur functions when t is 0 and monomial symmetric functions when t is 1 and are special cases of Macdonald polynomials. They were first defined indirectly by Philip Hall using the Hall algebra, and later defined directly by Dudley E. Littlewood (1961).
|
https://en.wikipedia.org/wiki/Hall-Littlewood_polynomials
|
In mathematics, the Halpern–Läuchli theorem is a partition result about finite products of infinite trees. Its original purpose was to give a model for set theory in which the Boolean prime ideal theorem is true but the axiom of choice is false. It is often called the Halpern–Läuchli theorem, but the proper attribution for the theorem as it is formulated below is to Halpern–Läuchli–Laver–Pincus or HLLP (named after James D. Halpern, Hans Läuchli, Richard Laver, and David Pincus), following Milliken (1979). Let d, r < ω, and let {\displaystyle \langle T_{i}:i\in d\rangle } be a sequence of finitely splitting trees of height ω. Let {\displaystyle \bigcup _{n\in \omega }\left(\prod _{i<d}T_{i}(n)\right)=C_{1}\cup \cdots \cup C_{r}.}
|
https://en.wikipedia.org/wiki/Halpern–Läuchli_theorem
|
In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (m0, m1, m2, ...), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that {\displaystyle m_{n}=\int _{-\infty }^{\infty }x^{n}\,d\mu (x)?} In other words, an affirmative answer to the problem means that (m0, m1, m2, ...) is the sequence of moments of some positive Borel measure μ. The Stieltjes moment problem, Vorobyev moment problem, and the Hausdorff moment problem are similar but replace the real line by {\displaystyle [0,+\infty )} (Stieltjes and Vorobyev; but Vorobyev formulates the problem in the terms of matrix theory), or a bounded interval (Hausdorff).
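For instance, the standard normal measure answers the problem affirmatively for the moment sequence 1, 0, 1, 0, 3, ...; a rough numerical check (the quadrature scheme is an illustration only):

```python
import math

def gaussian_moment(n, lo=-10.0, hi=10.0, steps=20000):
    """Approximate m_n = integral of x^n d(mu) for mu the standard normal
    measure, via a simple midpoint rule (not production quadrature)."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += x ** n * math.exp(-x * x / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)
```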
|
https://en.wikipedia.org/wiki/Hamburger_moment_problem
|
In mathematics, the Hamiltonian cycle polynomial of an n×n-matrix is a polynomial in its entries, defined as {\displaystyle \operatorname {ham} (A)=\sum _{\sigma \in H_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)},} where {\displaystyle H_{n}} is the set of n-permutations having exactly one cycle. This is an algebraic option useful, in a number of cases, for determining the existence of a Hamiltonian cycle in a directed graph. It is a generalization of the number of Hamiltonian cycles of a digraph as the sum of the products of its Hamiltonian cycles' arc weights (all of which equal unity) for weighted digraphs with arc weights taken from a given commutative ring. In the meantime, for an undirected weighted graph the sum of the products of the edge weights of its Hamiltonian cycles containing any fixed edge (i,j) can be expressed as the product of the weight of (i,j) and the Hamiltonian cycle polynomial of a matrix received from its weighted adjacency matrix via subjecting its rows and columns to any permutation mapping i to 1 and j to 2 and then removing its 1-st row and 2-nd column.
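The definition can be checked by brute force on small matrices (exponential time, illustration only); for the all-ones n×n matrix, ham equals the number of n-cycles, (n-1)!:

```python
import math
from itertools import permutations

def cycle_count(perm):
    """Number of cycles of a permutation given as a tuple of images."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def ham(A):
    """Hamiltonian cycle polynomial: sum, over permutations with exactly
    one cycle, of the product of the matched entries (brute force)."""
    n = len(A)
    return sum(
        math.prod(A[i][s[i]] for i in range(n))
        for s in permutations(range(n))
        if cycle_count(s) == 1
    )
```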
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
Hence, if it is possible in polynomial time to assign weights from a field of characteristic 2 to a digraph's arcs so that its weighted adjacency matrix is unitary and has a non-zero Hamiltonian cycle polynomial, then the digraph is Hamiltonian. Therefore the Hamiltonian cycle problem is computable on such graphs in polynomial time. In characteristic 2, the Hamiltonian cycle polynomial of an n×n-matrix is zero if n > 2k, where k is its rank, or if the matrix is involutory and n > 2.
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
For {\displaystyle k=1} the latter statement can be re-formulated as the {\displaystyle \#_{2}P}-completeness of computing, for a given unitary n×n-matrix {\displaystyle U} over a field of characteristic 2, the n×n-matrix {\displaystyle H(U)} whose i,j-th entry is the Hamiltonian cycle polynomial of a matrix received from {\displaystyle U} via subjecting its rows and columns to any permutation mapping j to 1 and i to 2 and then removing its 1-st row and 2-nd column (i.e. the sum of the products of the arc weights of the corresponding weighted digraph's Hamiltonian paths from vertex i to vertex j) for i ≠ j, and zero for i = j. This matrix satisfies the matrix equation {\displaystyle U(H(U))^{T}=H(U)U^{T}}, while {\displaystyle \operatorname {ham} \left({\begin{matrix}U&Ua\\a^{T}&1\end{matrix}}\right)=(a_{1}^{2}+\cdots +a_{n}^{2})\operatorname {ham} (U),}
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
where {\displaystyle a} is an arbitrary n-vector (which can be interpreted as the polynomial-time computability of the Hamiltonian cycle polynomial of any 1-semi-unitary m×m-matrix {\displaystyle A} such that {\displaystyle AA^{T}=I+bb^{T}}, where {\displaystyle b} is the {\displaystyle m}-th column of {\displaystyle A} with its {\displaystyle m}-th entry replaced by 0 and {\displaystyle I} is the identity m×m-matrix). Moreover, it is worth noting that in characteristic 2 the Hamiltonian cycle polynomial possesses invariant matrix compressions (partly analogous to the Gaussian modification that is invariant for the determinant), taking into account the fact that {\displaystyle \operatorname {ham} (X)=0} for any t×t-matrix {\displaystyle X} having three equal rows or, if {\displaystyle t>2}, a pair of indexes i,j such that its i-th and j-th rows are identical and its i-th and j-th columns are identical too.
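The vanishing fact for three equal rows can be verified exhaustively over GF(2) (brute force, illustration only; the example matrices are arbitrary):

```python
from itertools import permutations

def cycle_count(perm):
    """Number of cycles of a permutation given as a tuple of images."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def ham_mod2(A):
    """Hamiltonian cycle polynomial of a 0/1 matrix, reduced mod 2."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        if cycle_count(s) == 1:
            p = 1
            for i in range(n):
                p *= A[i][s[i]]
            total += p
    return total % 2

# A 5x5 matrix over GF(2) whose first three rows are equal.
row = [1, 0, 1, 1, 0]
X = [row[:], row[:], row[:], [0, 1, 1, 0, 1], [1, 1, 0, 0, 1]]
```

A control case: the adjacency matrix of a directed 3-cycle has exactly one Hamiltonian cycle, so its polynomial is 1 mod 2.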
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
Besides, in characteristic 2, for square matrices X, Y, {\displaystyle \operatorname {ham} \left({\begin{matrix}X&Y\\Y&X\end{matrix}}\right)} is the square of the sum, over all the pairs of non-equal indexes i,j, of the i,j-th entry of Y multiplied by the Hamiltonian cycle polynomial of the matrix received from X+Y via removing its i-th row and j-th column (in the manner described above). Hence, upon putting A = B and U = V in the above equality, we receive another extension of the class of unitary matrices where the Hamiltonian cycle polynomial is computable in polynomial time. Apart from the above-mentioned compression transformations, in characteristic 2 the following relation is also valid for the Hamiltonian cycle polynomials of a matrix and its partial inverse (for {\displaystyle A_{11}} and {\displaystyle A_{22}} being square and {\displaystyle A_{11}} being invertible): {\displaystyle \operatorname {ham} \left({\begin{matrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{matrix}}\right)={\det }^{2}\left(A_{11}\right)\operatorname {ham} \left({\begin{matrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}+A_{21}A_{11}^{-1}A_{12}\end{matrix}}\right),} and, because the Hamiltonian cycle polynomial does not depend on the matrix's diagonal entries, adding an arbitrary diagonal matrix does not change this polynomial either.
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
These two types of transformation don't compress the matrix, but keep its size unchanged. However, in a number of cases their application allows one to reduce the matrix's size by some of the above-mentioned compression operators. Hence there is a variety of matrix compression operators performed in polynomial time and preserving the Hamiltonian cycle polynomial in characteristic 2 whose sequential application, together with the transpose transformation (utilized each time the operators leave the matrix intact), has, for each matrix, a certain limit that can be defined as the compression-closure operator. When applied to classes of matrices, that operator thus maps one class to another. As was proven in Knezevic & Cohen (2017), if the compression-closure of the class of unitary matrices contains a subset where computing this polynomial is {\displaystyle \#_{2}P}-complete, then the Hamiltonian cycle polynomial is computable in polynomial time over any field of this characteristic, which would imply the equality RP = NP.
|
https://en.wikipedia.org/wiki/Hamiltonian_cycle_polynomial
|
In mathematics, the Hankel transform expresses any given function f(r) as the weighted sum of an infinite number of Bessel functions of the first kind Jν(kr). The Bessel functions in the sum are all of the same order ν, but differ in a scaling factor k along the r axis. The necessary coefficient Fν of each Bessel function in the sum, as a function of the scaling factor k constitutes the transformed function.
|
https://en.wikipedia.org/wiki/Fourier–Bessel_transform
|
The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval.
|
https://en.wikipedia.org/wiki/Fourier–Bessel_transform
|
In mathematics, the Haran diamond theorem gives a general sufficient condition for a separable extension of a Hilbertian field to be Hilbertian.
|
https://en.wikipedia.org/wiki/Haran's_diamond_theorem
|
In mathematics, the Hardy–Littlewood maximal operator M is a significant non-linear operator used in real analysis and harmonic analysis.
|
https://en.wikipedia.org/wiki/Hardy–Littlewood_maximal_operator
|
In mathematics, the Hardy–Littlewood zeta-function conjectures, named after Godfrey Harold Hardy and John Edensor Littlewood, are two conjectures concerning the distances between zeros and the density of zeros of the Riemann zeta function.
|
https://en.wikipedia.org/wiki/Hardy–Littlewood_zeta-function_conjectures
|
In mathematics, the Hardy–Ramanujan theorem, proved by Ramanujan and checked by Hardy, states that the normal order of the number ω(n) of distinct prime factors of a number n is log(log(n)). Roughly speaking, this means that most numbers have about this number of distinct prime factors.
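The statement can be checked empirically. A small Python sketch compares the average of ω(n) with log log n; a constant-order gap is expected, since the theorem is about normal order, not the exact mean:

```python
import math

def omega(n):
    """omega(n): number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        count += 1
    return count

N = 20000
avg = sum(omega(n) for n in range(2, N)) / (N - 2)
target = math.log(math.log(N))  # avg stays within an O(1) constant of this
```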
|
https://en.wikipedia.org/wiki/Hardy–Ramanujan_theorem
|
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem.
|
https://en.wikipedia.org/wiki/Hardy–Littlewood_circle_method
|
In mathematics, the Harish-Chandra character, named after Harish-Chandra, of a representation of a semisimple Lie group G on a Hilbert space H is a distribution on the group G that is analogous to the character of a finite-dimensional representation of a compact group.
|
https://en.wikipedia.org/wiki/Distributional_character
|
In mathematics, the Harish-Chandra isomorphism, introduced by Harish-Chandra (1951), is an isomorphism of commutative rings constructed in the theory of Lie algebras. The isomorphism maps the center Z ( U ( g ) ) {\displaystyle {\mathcal {Z}}(U({\mathfrak {g}}))} of the universal enveloping algebra U ( g ) {\displaystyle U({\mathfrak {g}})} of a reductive Lie algebra g {\displaystyle {\mathfrak {g}}} to the elements S ( h ) W {\displaystyle S({\mathfrak {h}})^{W}} of the symmetric algebra S ( h ) {\displaystyle S({\mathfrak {h}})} of a Cartan subalgebra h {\displaystyle {\mathfrak {h}}} that are invariant under the Weyl group W {\displaystyle W} .
|
https://en.wikipedia.org/wiki/Harish-Chandra_isomorphism
|
In mathematics, the Hartley transform (HT) is an integral transform closely related to the Fourier transform (FT), but which transforms real-valued functions to real-valued functions. It was proposed as an alternative to the Fourier transform by Ralph V. L. Hartley in 1942, and is one of many known Fourier-related transforms. Compared to the Fourier transform, the Hartley transform has the advantages of transforming real functions to real functions (as opposed to requiring complex numbers) and of being its own inverse. The discrete version of the transform, the discrete Hartley transform (DHT), was introduced by Ronald N. Bracewell in 1983. The two-dimensional Hartley transform can be computed by an analog optical process similar to an optical Fourier transform (OFT), with the proposed advantage that only its amplitude and sign need to be determined rather than its complex phase. However, optical Hartley transforms do not seem to have seen widespread use.
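A direct O(N²) implementation of the DHT makes the self-inverse property concrete: applying the transform twice returns N times the input. A minimal Python sketch:

```python
import math

def dht(x):
    """Discrete Hartley transform with the cas kernel cas(t) = cos(t) + sin(t)."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                        math.sin(2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
X = dht(x)
x_back = [v / len(x) for v in dht(X)]  # DHT is its own inverse up to the factor 1/N
```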
|
https://en.wikipedia.org/wiki/Cas_(mathematics)
|
In mathematics, the Hartogs–Rosenthal theorem is a classical result in complex analysis on the uniform approximation of continuous functions on compact subsets of the complex plane by rational functions. The theorem was proved in 1931 by the German mathematicians Friedrich Hartogs and Arthur Rosenthal and has been widely applied, particularly in operator theory.
|
https://en.wikipedia.org/wiki/Hartogs–Rosenthal_theorem
|
In mathematics, the Hartree equation, named after Douglas Hartree, is i ∂ t u + ∇ 2 u = V ( u ) u {\displaystyle i\,\partial _{t}u+\nabla ^{2}u=V(u)u} in R d + 1 {\displaystyle \mathbb {R} ^{d+1}} where V ( u ) = ± | x | − n ∗ | u | 2 {\displaystyle V(u)=\pm |x|^{-n}*|u|^{2}} and 0 < n < d {\displaystyle 0<n<d}.
|
https://en.wikipedia.org/wiki/Hartree_equation
|
In mathematics, the Hasse derivative is a generalisation of the derivative which allows the formulation of Taylor's theorem in coordinate rings of algebraic varieties.
|
https://en.wikipedia.org/wiki/Hasse_derivative
|
In mathematics, the Hasse invariant (or Hasse–Witt invariant) of a quadratic form Q over a field K takes values in the Brauer group Br(K). The name "Hasse–Witt" comes from Helmut Hasse and Ernst Witt. The quadratic form Q may be taken as a diagonal form Σ aixi². Its invariant is then defined as the product of the classes in the Brauer group of all the quaternion algebras (ai, aj) for i < j. This is independent of the diagonal form chosen to compute it. It may also be viewed as the second Stiefel–Whitney class of Q.
|
https://en.wikipedia.org/wiki/Hasse_invariant_of_a_quadratic_form
|
In mathematics, the Hasse invariant of an algebra is an invariant attached to a Brauer class of algebras over a field. The concept is named after Helmut Hasse. The invariant plays a role in local class field theory.
|
https://en.wikipedia.org/wiki/Hasse_invariant_of_an_algebra
|
In mathematics, the Hasse–Weil zeta function attached to an algebraic variety V defined over an algebraic number field K is a meromorphic function on the complex plane defined in terms of the number of points on the variety after reducing modulo each prime number p. It is a global L-function defined as an Euler product of local zeta functions. Hasse–Weil L-functions form one of the two major classes of global L-functions, alongside the L-functions associated to automorphic representations. Conjecturally, these two types of global L-functions are actually two descriptions of the same type of global L-function; this would be a vast generalisation of the Taniyama-Weil conjecture, itself an important result in number theory. For an elliptic curve over a number field K, the Hasse–Weil zeta function is conjecturally related to the group of rational points of the elliptic curve over K by the Birch and Swinnerton-Dyer conjecture.
|
https://en.wikipedia.org/wiki/Hasse–Weil_zeta_function
|
In mathematics, the Hasse–Witt matrix H of a non-singular algebraic curve C over a finite field F is the matrix of the Frobenius mapping (p-th power mapping where F has q elements, q a power of the prime number p) with respect to a basis for the differentials of the first kind. It is a g × g matrix where C has genus g. The rank of the Hasse–Witt matrix is the Hasse or Hasse–Witt invariant.
|
https://en.wikipedia.org/wiki/Hasse-Witt_matrix
|
In mathematics, the Hausdorff distance, or Hausdorff metric, also called Pompeiu–Hausdorff distance, measures how far two subsets of a metric space are from each other. It turns the set of non-empty compact subsets of a metric space into a metric space in its own right. It is named after Felix Hausdorff and Dimitrie Pompeiu. Informally, two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set.
|
https://en.wikipedia.org/wiki/Hausdorff_convergence
|
The Hausdorff distance is the longest distance you can be forced to travel by an adversary who chooses a point in one of the two sets, from where you then must travel to the other set. In other words, it is the greatest of all the distances from a point in one set to the closest point in the other set. This distance was first introduced by Hausdorff in his book Grundzüge der Mengenlehre, first published in 1914, although a very close relative appeared in the doctoral thesis of Maurice Fréchet in 1906, in his study of the space of all continuous curves from [ 0 , 1 ] → R 3 {\displaystyle [0,1]\to \mathbb {R} ^{3}}.
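For finite point sets the adversarial description translates directly into a max–min computation. A minimal Python sketch:

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(S, T):
        # the farthest the adversary can force you from T by picking a point of S
        return max(min(dist(s, t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 0), (0, 3)]
d = hausdorff(A, B)  # the point (0, 3) is at distance 3 from everything in A
```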
|
https://en.wikipedia.org/wiki/Hausdorff_convergence
|
In mathematics, the Hausdorff maximal principle is an alternate and earlier formulation of Zorn's lemma proved by Felix Hausdorff in 1914 (Moore 1982:168). It states that in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset. The Hausdorff maximal principle is one of many statements equivalent to the axiom of choice over ZF (Zermelo–Fraenkel set theory without the axiom of choice). The principle is also called the Hausdorff maximality theorem or the Kuratowski lemma (Kelley 1955:33).
|
https://en.wikipedia.org/wiki/Hausdorff_maximal_principle
|
In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m0, m1, m2, ...) be the sequence of moments m n = ∫ 0 1 x n d μ ( x ) {\displaystyle m_{n}=\int _{0}^{1}x^{n}\,d\mu (x)} of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m0 = 1, this is equivalent to the existence of a random variable X supported on [0, 1], such that E[Xn] = mn. The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line [0, ∞), and in the Hamburger moment problem one considers the whole line (−∞, ∞). The Stieltjes moment problems and the Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem) whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem).
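Hausdorff's solvability criterion is complete monotonicity of the sequence: (−1)^k (Δ^k m)_n ≥ 0 for all k and n, where Δ is the forward difference. A finite-prefix check in Python (on a truncated sequence this is only a necessary condition, for illustration):

```python
def completely_monotone_prefix(m, tol=1e-12):
    """Check (-1)^k (Delta^k m)_n >= 0 for all iterated differences available
    from the finite prefix m (necessary for a moment sequence on [0, 1])."""
    seq = list(m)
    while seq:
        if any(v < -tol for v in seq):
            return False
        # one step of -Delta: applying it k times yields (-1)^k Delta^k m
        seq = [a - b for a, b in zip(seq, seq[1:])]
    return True

lebesgue = [1 / (n + 1) for n in range(8)]  # moments of Lebesgue measure on [0, 1]
bad = [1.0, 0.9, 0.5, 0.4]                  # fails at the second differences
```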
|
https://en.wikipedia.org/wiki/Hausdorff_moment_problem
|
In the indeterminate moment problem case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. The set of polynomials may or may not be dense in the associated Hilbert spaces if the moment problem is indeterminate, depending on whether the measure is extremal or not. But in the determinate moment problem case, the set of polynomials is dense in the associated Hilbert space.
|
https://en.wikipedia.org/wiki/Hausdorff_moment_problem
|
In mathematics, the Hawaiian earring H {\displaystyle \mathbb {H} } is the topological space defined by the union of circles in the Euclidean plane R 2 {\displaystyle \mathbb {R} ^{2}} with center ( 1 n , 0 ) {\displaystyle \left({\tfrac {1}{n}},0\right)} and radius 1 n {\displaystyle {\tfrac {1}{n}}} for n = 1 , 2 , 3 , … {\displaystyle n=1,2,3,\ldots } endowed with the subspace topology: H = ⋃ n = 1 ∞ { ( x , y ) ∈ R 2 ∣ ( x − 1 n ) 2 + y 2 = ( 1 n ) 2 } {\displaystyle \mathbb {H} =\bigcup _{n=1}^{\infty }\left\{(x,y)\in \mathbb {R} ^{2}\mid \left(x-{\frac {1}{n}}\right)^{2}+y^{2}=\left({\frac {1}{n}}\right)^{2}\right\}} The space H {\displaystyle \mathbb {H} } is homeomorphic to the one-point compactification of the union of a countable family of disjoint open intervals. The Hawaiian earring is a one-dimensional, compact, locally path-connected metrizable space. Although H {\displaystyle \mathbb {H} } is locally homeomorphic to R {\displaystyle \mathbb {R} } at all non-origin points, H {\displaystyle \mathbb {H} } is not semi-locally simply connected at ( 0 , 0 ) {\displaystyle (0,0)} .
|
https://en.wikipedia.org/wiki/Hawaiian_earring
|
Therefore, H {\displaystyle \mathbb {H} } does not have a simply connected covering space and is usually given as the simplest example of a space with this complication. The Hawaiian earring looks very similar to the wedge sum of countably infinitely many circles; that is, the rose with infinitely many petals, but these two spaces are not homeomorphic. The difference between their topologies is seen in the fact that, in the Hawaiian earring, every open neighborhood of the point of intersection of the circles contains all but finitely many of the circles (an ε-ball around (0, 0) contains every circle whose radius is less than ε/2); in the rose, a neighborhood of the intersection point might not fully contain any of the circles. Additionally, the rose is not compact: the complement of the distinguished point is an infinite union of open intervals; to those add a small open neighborhood of the distinguished point to get an open cover with no finite subcover.
|
https://en.wikipedia.org/wiki/Hawaiian_earring
|
In mathematics, the Haynsworth inertia additivity formula, discovered by Emilie Virginia Haynsworth (1916–1985), concerns the number of positive, negative, and zero eigenvalues of a Hermitian matrix and of block matrices into which it is partitioned. The inertia of a Hermitian matrix H is defined as the ordered triple I n ( H ) = ( π ( H ) , ν ( H ) , δ ( H ) ) {\displaystyle \mathrm {In} (H)=\left(\pi (H),\nu (H),\delta (H)\right)} whose components are respectively the numbers of positive, negative, and zero eigenvalues of H. Haynsworth considered a partitioned Hermitian matrix H = {\displaystyle H={\begin{bmatrix}H_{11}&H_{12}\\H_{12}^{\ast }&H_{22}\end{bmatrix}}} where H11 is nonsingular and H12* is the conjugate transpose of H12. The formula states: I n = I n ( H 11 ) + I n ( H / H 11 ) {\displaystyle \mathrm {In} {\begin{bmatrix}H_{11}&H_{12}\\H_{12}^{\ast }&H_{22}\end{bmatrix}}=\mathrm {In} (H_{11})+\mathrm {In} (H/H_{11})} where H/H11 is the Schur complement of H11 in H: H / H 11 = H 22 − H 12 ∗ H 11 − 1 H 12 . {\displaystyle H/H_{11}=H_{22}-H_{12}^{\ast }H_{11}^{-1}H_{12}.}
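The formula is easy to verify numerically on a real symmetric example; a sketch assuming NumPy is available (eigenvalues via `numpy.linalg.eigvalsh`):

```python
import numpy as np

def inertia(H, tol=1e-9):
    """Ordered triple (pi, nu, delta): positive, negative, zero eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()),
            int((np.abs(w) <= tol).sum()))

H = np.array([[2.0, 1.0, 0.0],
              [1.0, -1.0, 1.0],
              [0.0, 1.0, 3.0]])
H11, H12, H22 = H[:1, :1], H[:1, 1:], H[1:, 1:]
schur = H22 - H12.T @ np.linalg.inv(H11) @ H12  # H / H11 (real case: * is transpose)
lhs = inertia(H)
rhs = tuple(a + b for a, b in zip(inertia(H11), inertia(schur)))
```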
|
https://en.wikipedia.org/wiki/Haynsworth_inertia_additivity_formula
|
In mathematics, the Heawood number of a surface is an upper bound for the number of colors that suffice to color any graph embedded in the surface. In 1890 Heawood proved for all surfaces except the sphere that no more than H ( S ) = ⌊ 7 + 49 − 24 e ( S ) 2 ⌋ = ⌊ 7 + 1 + 48 g ( S ) 2 ⌋ {\displaystyle H(S)=\left\lfloor {\frac {7+{\sqrt {49-24e(S)}}}{2}}\right\rfloor =\left\lfloor {\frac {7+{\sqrt {1+48g(S)}}}{2}}\right\rfloor } colors are needed to color any graph embedded in a surface of Euler characteristic e ( S ) {\displaystyle e(S)} , or genus g ( S ) {\displaystyle g(S)} for an orientable surface. The number H ( S ) {\displaystyle H(S)} became known as Heawood number in 1976.
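The genus form of the bound is a one-liner; a small Python sketch for orientable surfaces:

```python
import math

def heawood(genus):
    """Heawood number H(S) = floor((7 + sqrt(1 + 48g)) / 2) for an orientable
    surface of genus g (the sphere g = 0 was excluded from Heawood's proof)."""
    return int((7 + math.sqrt(1 + 48 * genus)) // 2)

torus = heawood(1)         # 7: the complete graph K7 embeds in the torus
double_torus = heawood(2)  # 8
```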
|
https://en.wikipedia.org/wiki/Heawood_number
|
Franklin proved that the chromatic number of a graph embedded in the Klein bottle can be as large as 6 {\displaystyle 6}, but never exceeds 6 {\displaystyle 6}. Later it was proved in the works of Gerhard Ringel, J. W. T. Youngs, and other contributors that the complete graph with H ( S ) {\displaystyle H(S)} vertices can be embedded in the surface S {\displaystyle S} unless S {\displaystyle S} is the Klein bottle. This established that Heawood's bound could not be improved. For example, the complete graph on 7 {\displaystyle 7} vertices can be embedded in the torus. The case of the sphere is the four-color conjecture, which was settled by Kenneth Appel and Wolfgang Haken in 1976.
|
https://en.wikipedia.org/wiki/Heawood_number
|
In mathematics, the Hecke algebra is the algebra generated by Hecke operators.
|
https://en.wikipedia.org/wiki/Hecke_algebra
|
In mathematics, the Heine–Cantor theorem, named after Eduard Heine and Georg Cantor, states that if f: M → N {\displaystyle f\colon M\to N} is a continuous function between two metric spaces M {\displaystyle M} and N {\displaystyle N} , and M {\displaystyle M} is compact, then f {\displaystyle f} is uniformly continuous. An important special case is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.
|
https://en.wikipedia.org/wiki/Heine-Cantor_theorem
|
In mathematics, the Heine–Stieltjes polynomials or Stieltjes polynomials, introduced by T. J. Stieltjes (1885), are polynomial solutions of a second-order Fuchsian equation, a differential equation all of whose singularities are regular. The Fuchsian equation has the form d 2 S d z 2 + ( ∑ j = 1 N γ j z − a j ) d S d z + V ( z ) ∏ j = 1 N ( z − a j ) S = 0 {\displaystyle {\frac {d^{2}S}{dz^{2}}}+\left(\sum _{j=1}^{N}{\frac {\gamma _{j}}{z-a_{j}}}\right){\frac {dS}{dz}}+{\frac {V(z)}{\prod _{j=1}^{N}(z-a_{j})}}S=0} for some polynomial V(z) of degree at most N − 2, and if this has a polynomial solution S then V is called a Van Vleck polynomial (after Edward Burr Van Vleck) and S is called a Heine–Stieltjes polynomial. Heun polynomials are the special cases of Stieltjes polynomials when the differential equation has four singular points.
|
https://en.wikipedia.org/wiki/Heine–Stieltjes_polynomials
|
In mathematics, the Heinz mean (named after E. Heinz) of two non-negative real numbers A and B, was defined by Bhatia as: H x ( A , B ) = A x B 1 − x + A 1 − x B x 2 , {\displaystyle \operatorname {H} _{x}(A,B)={\frac {A^{x}B^{1-x}+A^{1-x}B^{x}}{2}},} with 0 ≤ x ≤ 1/2. For different values of x, this Heinz mean interpolates between the arithmetic (x = 0) and geometric (x = 1/2) means such that for 0 < x < 1/2: A B = H 1 2 ( A , B ) < H x ( A , B ) < H 0 ( A , B ) = A + B 2 . {\displaystyle {\sqrt {AB}}=\operatorname {H} _{\frac {1}{2}}(A,B)<\operatorname {H} _{x}(A,B)<\operatorname {H} _{0}(A,B)={\frac {A+B}{2}}.} The Heinz means appear naturally when symmetrizing α {\textstyle \alpha } -divergences.It may also be defined in the same way for positive semidefinite matrices, and satisfies a similar interpolation formula.
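For scalars the definition and the interpolation inequality are immediate to check numerically (the matrix version is not covered by this sketch):

```python
def heinz(A, B, x):
    """Heinz mean H_x(A, B) for non-negative scalars, 0 <= x <= 1/2."""
    return (A ** x * B ** (1 - x) + A ** (1 - x) * B ** x) / 2

A, B = 4.0, 9.0
geo = heinz(A, B, 0.5)   # geometric mean sqrt(AB) = 6
ari = heinz(A, B, 0.0)   # arithmetic mean (A + B) / 2 = 6.5
mid = heinz(A, B, 0.25)  # strictly between the two for 0 < x < 1/2
```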
|
https://en.wikipedia.org/wiki/Heinz_mean
|
In mathematics, the Heisenberg group H {\displaystyle H} , named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form ( 1 a c 0 1 b 0 0 1 ) {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\\\end{pmatrix}}} under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group"). The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space.
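Multiplying two such matrices shows the group law and its non-commutativity: the products differ only in the central entry c. A minimal Python sketch over the integers:

```python
def hmat(a, b, c):
    """Heisenberg group element [[1, a, c], [0, 1, b], [0, 0, 1]]."""
    return [[1, a, c], [0, 1, b], [0, 0, 1]]

def mul(X, Y):
    """Plain 3x3 matrix multiplication."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

x, y = hmat(1, 0, 0), hmat(0, 1, 0)
xy, yx = mul(x, y), mul(y, x)  # same a, b entries; central entries c differ
```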
|
https://en.wikipedia.org/wiki/Heisenberg_group
|
In mathematics, the Hellinger integral is an integral introduced by Hellinger (1909) that is a special case of the Kolmogorov integral. It is used to define the Hellinger distance in probability theory.
|
https://en.wikipedia.org/wiki/Hellinger_integral
|
In mathematics, the Helmholtz equation is the eigenvalue problem for the Laplace operator. It corresponds to the linear partial differential equation ∇ 2 f = − k 2 f {\displaystyle \nabla ^{2}f=-k^{2}f} where ∇2 is the Laplace operator, k2 is the eigenvalue, and f is the (eigen)function. When the equation is applied to waves, k is known as the wave number. The Helmholtz equation has a variety of applications in physics, including the wave equation and the diffusion equation, and it has uses in other sciences.
|
https://en.wikipedia.org/wiki/Paraxial_Helmholtz_equation
|
In mathematics, the Henstock–Kurzweil integral or generalized Riemann integral or gauge integral – also known as the (narrow) Denjoy integral, Luzin integral or Perron integral, but not to be confused with the more general wide Denjoy integral – is one of a number of inequivalent definitions of the integral of a function. It is a generalization of the Riemann integral, and in some situations is more general than the Lebesgue integral. In particular, a function is Lebesgue integrable if and only if the function and its absolute value are Henstock–Kurzweil integrable. This integral was first defined by Arnaud Denjoy (1912).
|
https://en.wikipedia.org/wiki/Henstock_integral
|
Denjoy was interested in a definition that would allow one to integrate functions like f ( x ) = 1 x sin ( 1 x 3 ) . {\displaystyle f(x)={\frac {1}{x}}\sin \left({\frac {1}{x^{3}}}\right).} This function has a singularity at 0, and is not Lebesgue integrable.
|
https://en.wikipedia.org/wiki/Henstock_integral
|
However, it seems natural to calculate its integral except over the interval [−ε, δ] and then let ε, δ → 0. Trying to create a general theory, Denjoy used transfinite induction over the possible types of singularities, which made the definition quite complicated. Other definitions were given by Nikolai Luzin (using variations on the notions of absolute continuity), and by Oskar Perron, who was interested in continuous major and minor functions.
|
https://en.wikipedia.org/wiki/Henstock_integral
|
It took a while to understand that the Perron and Denjoy integrals are actually identical. Later, in 1957, the Czech mathematician Jaroslav Kurzweil discovered a new definition of this integral elegantly similar in nature to Riemann's original definition which he named the gauge integral. Ralph Henstock independently introduced a similar integral that extended the theory in 1961, citing his investigations of Ward's extensions to the Perron integral. Due to these two important contributions, it is now commonly known as the Henstock–Kurzweil integral. The simplicity of Kurzweil's definition made some educators advocate that this integral should replace the Riemann integral in introductory calculus courses.
|
https://en.wikipedia.org/wiki/Henstock_integral
|
In mathematics, the Herbrand quotient is a quotient of orders of cohomology groups of a cyclic group. It was invented by Jacques Herbrand. It has an important application in class field theory.
|
https://en.wikipedia.org/wiki/Herbrand_quotient
|
In mathematics, the Herbrand–Ribet theorem is a result on the class group of certain number fields. It is a strengthening of Ernst Kummer's theorem to the effect that the prime p divides the class number of the cyclotomic field of p-th roots of unity if and only if p divides the numerator of the n-th Bernoulli number Bn for some n, 0 < n < p − 1. The Herbrand–Ribet theorem specifies what, in particular, it means when p divides such a Bn.
|
https://en.wikipedia.org/wiki/Herbrand–Ribet_theorem
|
In mathematics, the Herglotz–Zagier function, named after Gustav Herglotz and Don Zagier, is the function F ( x ) = ∑ n = 1 ∞ { Γ ′ ( n x ) Γ ( n x ) − log ( n x ) } 1 n . {\displaystyle F(x)=\sum _{n=1}^{\infty }\left\{{\frac {\Gamma ^{\prime }(nx)}{\Gamma (nx)}}-\log(nx)\right\}{\frac {1}{n}}.} introduced by Zagier (1975) who used it to obtain a Kronecker limit formula for real quadratic fields.
|
https://en.wikipedia.org/wiki/Herglotz–Zagier_function
|
In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be. The constant γn for integers n > 0 is defined as follows. For a lattice L in Euclidean space Rn with unit covolume, i.e. vol(Rn/L) = 1, let λ1(L) denote the least length of a nonzero element of L. Then √γn is the maximum of λ1(L) over all such lattices L. The square root in the definition of the Hermite constant is a matter of historical convention. Alternatively, the Hermite constant γn can be defined as the square of the maximal systole of a flat n-dimensional torus of unit volume.
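In dimension 2 the Hermite constant γ2 = 2/√3 is attained by the hexagonal lattice; a small Python sketch rescales that lattice to unit covolume and finds λ1 by a brute-force search over nearby lattice vectors:

```python
import math

# Hexagonal lattice basis rescaled to unit covolume: the original basis
# (1, 0), (1/2, sqrt(3)/2) has covolume sqrt(3)/2, so scale lengths by s
# with s^2 * sqrt(3)/2 = 1.
s = (2 / math.sqrt(3)) ** 0.5
b1 = (s, 0.0)
b2 = (s / 2, s * math.sqrt(3) / 2)

# lambda_1^2: squared length of a shortest nonzero vector (small search suffices).
lam1_sq = min((i * b1[0] + j * b2[0]) ** 2 + (i * b1[1] + j * b2[1]) ** 2
              for i in range(-3, 4) for j in range(-3, 4) if (i, j) != (0, 0))
gamma2 = 2 / math.sqrt(3)  # the known value of the Hermite constant in dimension 2
```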
|
https://en.wikipedia.org/wiki/Hermite_constant
|
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence. The polynomials arise in: signal processing, as Hermitian wavelets for wavelet transform analysis; probability, such as the Edgeworth series, as well as in connection with Brownian motion; combinatorics, as an example of an Appell sequence, obeying the umbral calculus; numerical analysis, as Gaussian quadrature; physics, where they give rise to the eigenstates of the quantum harmonic oscillator, and where they also occur in some cases of the heat equation (when the term xux {\displaystyle xu_{x}} is present); systems theory, in connection with nonlinear operations on Gaussian noise; random matrix theory, in Gaussian ensembles. Hermite polynomials were defined by Pierre-Simon Laplace in 1810, though in scarcely recognizable form, and studied in detail by Pafnuty Chebyshev in 1859. Chebyshev's work was overlooked, and they were named later after Charles Hermite, who wrote on the polynomials in 1864, describing them as new. They were consequently not new, although Hermite was the first to define the multidimensional polynomials in his later 1865 publications.
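The physicists' Hermite polynomials satisfy the three-term recurrence H_{k+1}(x) = 2x H_k(x) − 2k H_{k−1}(x), which gives a direct evaluation scheme; a minimal Python sketch:

```python
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence
    H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x), with H_0 = 1, H_1 = 2x."""
    h_prev, h_curr = 1, 2 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2 * x * h_curr - 2 * k * h_prev
    return h_curr

# Known closed forms: H_2(x) = 4x^2 - 2 and H_3(x) = 8x^3 - 12x
v2 = hermite(2, 1)  # 4 - 2 = 2
v3 = hermite(3, 2)  # 64 - 24 = 40
```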
|
https://en.wikipedia.org/wiki/Hermite_polynomials
|
In mathematics, the Hermite–Hadamard inequality, named after Charles Hermite and Jacques Hadamard and sometimes also called Hadamard's inequality, states that if a function ƒ: [a, b] → R is convex, then the following chain of inequalities holds: f ( a + b 2 ) ≤ 1 b − a ∫ a b f ( x ) d x ≤ f ( a ) + f ( b ) 2 . {\displaystyle f\left({\frac {a+b}{2}}\right)\leq {\frac {1}{b-a}}\int _{a}^{b}f(x)\,dx\leq {\frac {f(a)+f(b)}{2}}.} The inequality has been generalized to higher dimensions: if Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} is a bounded, convex domain and f: Ω → R {\displaystyle f:\Omega \rightarrow \mathbb {R} } is a positive convex function, then 1 | Ω | ∫ Ω f ( x ) d x ≤ c n | ∂ Ω | ∫ ∂ Ω f ( y ) d σ ( y ) {\displaystyle {\frac {1}{|\Omega |}}\int _{\Omega }f(x)\,dx\leq {\frac {c_{n}}{|\partial \Omega |}}\int _{\partial \Omega }f(y)\,d\sigma (y)} where c n {\displaystyle c_{n}} is a constant depending only on the dimension.
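The one-dimensional chain of inequalities can be checked numerically for a concrete convex function; a Python sketch using the midpoint rule for the integral average of exp on [0, 2]:

```python
import math

# Check f((a+b)/2) <= (1/(b-a)) * integral of f <= (f(a)+f(b))/2
# for the convex function f = exp on [0, 2].
f, a, b, n = math.exp, 0.0, 2.0, 100000
avg = sum(f(a + (i + 0.5) * (b - a) / n) for i in range(n)) / n  # midpoint rule, already divided by (b-a)
left = f((a + b) / 2)        # e, the midpoint value
right = (f(a) + f(b)) / 2    # (1 + e^2) / 2, the endpoint average
```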
|
https://en.wikipedia.org/wiki/Hermite–Hadamard_inequality
|
In mathematics, the Heronian mean H of two non-negative real numbers A and B is given by the formula H = (A + √(AB) + B) / 3. It is named after Hero of Alexandria.
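A quick Python sketch of the formula, also checking that the Heronian mean lies between the geometric and arithmetic means:

```python
def heronian(A, B):
    """Heronian mean (A + sqrt(A*B) + B) / 3 of two non-negative reals."""
    return (A + (A * B) ** 0.5 + B) / 3

h = heronian(4.0, 9.0)    # (4 + 6 + 9) / 3 = 19/3
geo = (4.0 * 9.0) ** 0.5  # geometric mean 6
ari = (4.0 + 9.0) / 2     # arithmetic mean 6.5
```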
|
https://en.wikipedia.org/wiki/Heronian_mean
|
In mathematics, the Herzog–Schönheim conjecture is a combinatorial problem in the area of group theory, posed by Marcel Herzog and Jochanan Schönheim in 1974. Let G {\displaystyle G} be a group, and let A = { a 1 G 1 , … , a k G k } {\displaystyle A=\{a_{1}G_{1},\ \ldots ,\ a_{k}G_{k}\}} be a finite system of left cosets of subgroups G 1 , … , G k {\displaystyle G_{1},\ldots ,G_{k}} of G {\displaystyle G} . Herzog and Schönheim conjectured that if A {\displaystyle A} forms a partition of G {\displaystyle G} with k > 1 {\displaystyle k>1} , then the (finite) indices [ G : G 1 ] , … , [ G : G k ] {\displaystyle [G:G_{1}],\ldots ,[G:G_{k}]} cannot be distinct. In contrast, if repeated indices are allowed, then partitioning a group into cosets is easy: if H {\displaystyle H} is any subgroup of G {\displaystyle G} with index k = [ G : H ] < ∞ {\displaystyle k=[G:H]<\infty } then G {\displaystyle G} can be partitioned into k {\displaystyle k} left cosets of H {\displaystyle H} .
|
https://en.wikipedia.org/wiki/Herzog–Schönheim_conjecture
|
In mathematics, the Hessian group is a finite group of order 216, introduced by Jordan (1877) who named it for Otto Hesse. It may be represented as the group of affine transformations with determinant 1 of the affine plane over the field of 3 elements. It has a normal subgroup that is an elementary abelian group of order 3² = 9, and the quotient by this subgroup is isomorphic to the group SL2(3) of order 24. It also acts on the Hesse pencil of elliptic curves, and forms the automorphism group of the Hesse configuration of the 9 inflection points of these curves and the 12 lines through triples of these points. The triple cover of this group is a complex reflection group, 3[3]3[3]3, of order 648, and the product of this with a group of order 2 is another complex reflection group, 3[3]3[3]2, of order 1296.
|
https://en.wikipedia.org/wiki/Hessian_group
|
In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants".
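For a smooth scalar field the Hessian can be approximated by central finite differences; a Python sketch checked against a polynomial whose Hessian is known exactly:

```python
def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of f: R^n -> R at the point x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def shifted(di, dj):
                p = list(x)
                p[i] += di
                p[j] += dj
                return f(p)
            # mixed second difference; reduces to a step-2h second difference when i == j
            H[i][j] = (shifted(h, h) - shifted(h, -h)
                       - shifted(-h, h) + shifted(-h, -h)) / (4 * h * h)
    return H

# f(x, y) = x^2 y + y^3 has exact Hessian [[2y, 2x], [2x, 6y]]
f = lambda v: v[0] ** 2 * v[1] + v[1] ** 3
H = hessian(f, [1.0, 2.0])  # approximately [[4, 2], [2, 12]]
```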
|
https://en.wikipedia.org/wiki/Hessian_determinant
|
In mathematics, the Higman group, introduced by Graham Higman (1951), was the first example of an infinite finitely presented group with no non-trivial finite quotients. The quotient by the maximal proper normal subgroup is a finitely generated infinite simple group. Higman (1974) later found some finitely presented infinite groups Gn,r that are simple if n is even and have a simple subgroup of index 2 if n is odd, one of which is one of the Thompson groups. Higman's group is generated by 4 elements a, b, c, d with the relations a − 1 b a = b 2 , b − 1 c b = c 2 , c − 1 d c = d 2 , d − 1 a d = a 2 . {\displaystyle a^{-1}ba=b^{2},\quad b^{-1}cb=c^{2},\quad c^{-1}dc=d^{2},\quad d^{-1}ad=a^{2}.}
|
https://en.wikipedia.org/wiki/Higman_group
|
In mathematics, the Hilbert cube, named after David Hilbert, is a topological space that provides an instructive example of some ideas in topology. Furthermore, many interesting topological spaces can be embedded in the Hilbert cube; that is, can be viewed as subspaces of the Hilbert cube (see below).
|
https://en.wikipedia.org/wiki/Hilbert_cube
|
In mathematics, the Hilbert metric, also known as the Hilbert projective metric, is an explicitly defined distance function on a bounded convex subset of the n-dimensional Euclidean space Rn. It was introduced by David Hilbert (1895) as a generalization of Cayley's formula for the distance in the Cayley–Klein model of hyperbolic geometry, where the convex set is the n-dimensional open unit ball. Hilbert's metric has been applied to Perron–Frobenius theory and to constructing Gromov hyperbolic spaces.
|
https://en.wikipedia.org/wiki/Hilbert_metric
|
In mathematics, the Hilbert projection theorem is a famous result of convex analysis that says that for every vector x {\displaystyle x} in a Hilbert space H {\displaystyle H} and every nonempty closed convex C ⊆ H , {\displaystyle C\subseteq H,} there exists a unique vector m ∈ C {\displaystyle m\in C} for which ‖ c − x ‖ {\displaystyle \|c-x\|} is minimized over the vectors c ∈ C {\displaystyle c\in C} ; that is, such that ‖ m − x ‖ ≤ ‖ c − x ‖ {\displaystyle \|m-x\|\leq \|c-x\|} for every c ∈ C . {\displaystyle c\in C.}
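A concrete finite-dimensional instance: the closed unit ball is a closed convex set, and the unique minimizer m is obtained by radial scaling. A minimal Python sketch (R^n standing in for a general Hilbert space):

```python
import math

def project_to_ball(x, radius=1.0):
    """Metric projection onto the closed ball {c : ||c|| <= radius}, a closed
    convex set: points inside are fixed, points outside scale to the boundary."""
    norm = math.sqrt(sum(t * t for t in x))
    if norm <= radius:
        return list(x)
    return [radius * t / norm for t in x]

x = [3.0, 4.0]
m = project_to_ball(x)  # the unique closest point of the unit ball to x
```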
|
https://en.wikipedia.org/wiki/Hilbert_projection_theorem
|
In mathematics, the Hilbert symbol or norm-residue symbol is a function (–, –) from K× × K× to the group of nth roots of unity in a local field K such as the fields of reals or p-adic numbers. It is related to reciprocity laws, and can be defined in terms of the Artin symbol of local class field theory. The Hilbert symbol was introduced by David Hilbert (1897, sections 64, 131, 1998, English translation) in his Zahlbericht, with the slight difference that he defined it for elements of global fields rather than for the larger local fields. The Hilbert symbol has been generalized to higher local fields.
|
https://en.wikipedia.org/wiki/Hilbert's_reciprocity_law
|