In mathematics, Delta-convergence, or Δ-convergence, is a mode of convergence in metric spaces, weaker than the usual metric convergence, and similar to (but distinct from) the weak convergence in Banach spaces. In Hilbert space, Delta-convergence and weak convergence coincide. For a general class of spaces, similarly to weak convergence, every bounded sequence has a Delta-convergent subsequence. Delta convergence was first introduced by Teck-Cheong Lim, and, soon after, under the name of almost convergence, by Tadeusz Kuczumow.
https://en.wikipedia.org/wiki/Delta-convergence
In mathematics, Denisyuk polynomials Den(x) or Mn(x) are generalizations of the Laguerre polynomials introduced by Denisyuk (1954) given by the generating function
https://en.wikipedia.org/wiki/Denisyuk_polynomials
In mathematics, Descartes' rule of signs, first described by René Descartes in his work La Géométrie, is a technique for getting information on the number of positive real roots of a polynomial. It asserts that the number of positive roots is at most the number of sign changes in the sequence of the polynomial's coefficients (omitting the zero coefficients), and that the difference between these two numbers is always even. This implies, in particular, that if the number of sign changes is zero or one, then there are exactly zero or one positive roots, respectively.
https://en.wikipedia.org/wiki/Descartes'_rule_of_signs
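As an illustration (a minimal sketch, not from the article), the sign-change count in the rule can be computed directly from a coefficient list:

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence, zeros omitted."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2) has coefficients 1, 0, -3, 2:
# two sign changes, so at most two positive roots (here exactly two,
# counting the double root x = 1 with multiplicity).
positive_bound = sign_changes([1, 0, -3, 2])
# Applying the rule to p(-x) = -x^3 + 3x + 2 bounds the negative roots:
negative_bound = sign_changes([-1, 0, 3, 2])
```

With one sign change for p(−x), the rule gives exactly one negative root, namely x = −2.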
By a linear fractional transformation of the variable, one may use Descartes' rule of signs to obtain similar information on the number of roots in any interval. This is the basic idea of Budan's theorem and the Budan–Fourier theorem. By repeatedly dividing an interval in two, one eventually obtains a list of disjoint intervals that together contain all the real roots of the polynomial, each containing exactly one of them. Descartes' rule of signs and linear fractional transformations of the variable are, nowadays, the basis of the fastest algorithms for computer computation of real roots of polynomials (see real-root isolation). Descartes himself used the transformation x → −x to apply his rule to obtain information on the number of negative roots.
https://en.wikipedia.org/wiki/Descartes'_rule_of_signs
In mathematics, Dickson's lemma states that every set of n-tuples of natural numbers has finitely many minimal elements. This simple fact from combinatorics is attributed to the American algebraist L. E. Dickson, who used it to prove a result in number theory about perfect numbers. However, the lemma was certainly known earlier, for example to Paul Gordan in his research on invariant theory.
https://en.wikipedia.org/wiki/Dickson's_lemma
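For a finite set of tuples, the minimal elements the lemma speaks of can be computed by brute force (an illustrative sketch; the names are hypothetical):

```python
def minimal_elements(tuples):
    """Elements of `tuples` that are minimal under componentwise <=."""
    def strictly_below(a, b):
        # a <= b in every coordinate, and a differs from b
        return a != b and all(x <= y for x, y in zip(a, b))
    return [t for t in tuples if not any(strictly_below(s, t) for s in tuples)]

pts = [(1, 3), (2, 2), (3, 1), (2, 3), (4, 4)]
mins = minimal_elements(pts)
# (2, 3) is dominated by (1, 3) and (4, 4) by (2, 2); the remaining
# three tuples are pairwise incomparable, so they are the minimal elements.
```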
In mathematics, Dieudonné's theorem, named after Jean Dieudonné, is a theorem on when the Minkowski sum of closed sets is closed.
https://en.wikipedia.org/wiki/Dieudonné's_theorem
In mathematics, Dini's criterion is a condition for the pointwise convergence of Fourier series, introduced by Ulisse Dini (1880).
https://en.wikipedia.org/wiki/Dini_criterion
In mathematics, Diophantine geometry is the study of Diophantine equations by means of powerful methods in algebraic geometry. By the 20th century it became clear for some mathematicians that methods of algebraic geometry are ideal tools to study these equations. Diophantine geometry is part of the broader field of arithmetic geometry. Four theorems in Diophantine geometry which are of fundamental importance include: Mordell–Weil theorem Roth's theorem Siegel's theorem Faltings's theorem
https://en.wikipedia.org/wiki/Diophantine_geometry
In mathematics, Dirichlet integrals play an important role in distribution theory, where they can be interpreted in terms of distributions. One of them is the improper integral of the sinc function over the positive real line: \int _{0}^{\infty }{\frac {\sin x}{x}}\,dx=\int _{0}^{\infty }{\frac {\sin ^{2}x}{x^{2}}}\,dx={\frac {\pi }{2}}.
https://en.wikipedia.org/wiki/Lobachevsky_integral_formula
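The value π/2 can be checked numerically through the absolutely convergent second form (a rough midpoint-rule sketch; the tail beyond the cutoff at 500 contributes less than 1/1000):

```python
import math

# Midpoint rule for sin^2(x)/x^2 over [0, 500]; the integrand extends
# continuously to 1 at x = 0, so no special handling is needed there.
n = 500_000
h = 500.0 / n
total = h * sum((math.sin((i + 0.5) * h) / ((i + 0.5) * h)) ** 2
                for i in range(n))
# total is close to pi/2 = 1.5707963...
```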
In mathematics, Dirichlet's test is a method of testing for the convergence of a series. It is named after its author Peter Gustav Lejeune Dirichlet, and was published posthumously in the Journal de Mathématiques Pures et Appliquées in 1862.
https://en.wikipedia.org/wiki/Dirichlet's_test
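A standard illustration (not from the article): with a_n = 1/n decreasing to 0 and the partial sums of sin n bounded, Dirichlet's test guarantees that ∑ sin(n)/n converges, which a numerical check supports:

```python
import math

def partial_sum(N):
    """Partial sum of sum_{n=1}^{N} sin(n)/n."""
    return sum(math.sin(n) / n for n in range(1, N + 1))

s1, s2 = partial_sum(10_000), partial_sum(20_000)
# Both are close to the known limit (pi - 1)/2 = 1.0707963...
```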
In mathematics, Dirichlet's unit theorem is a basic result in algebraic number theory due to Peter Gustav Lejeune Dirichlet. It determines the rank of the group of units in the ring OK of algebraic integers of a number field K. The regulator is a positive real number that determines how "dense" the units are. The statement is that the group of units is finitely generated and has rank (maximal number of multiplicatively independent elements) equal to r = r1 + r2 − 1, where r1 is the number of real embeddings and r2 the number of conjugate pairs of complex embeddings of K. This characterisation of r1 and r2 is based on the idea that there are as many ways to embed K in the complex number field as the degree n = [K : Q]; these embeddings are either into the real numbers, or pairs of embeddings related by complex conjugation, so that n = r1 + 2r2. Note that if K is Galois over Q then either r1 = 0 or r2 = 0. Other ways of determining r1 and r2 are: use the primitive element theorem to write K = Q(α), and then r1 is the number of conjugates of α that are real, 2r2 the number that are complex; in other words, if f is the minimal polynomial of α over Q, then r1 is the number of real roots and 2r2 is the number of non-real complex roots of f (which come in complex conjugate pairs); or write the tensor product of fields K ⊗Q R as a product of fields, there being r1 copies of R and r2 copies of C. As an example, if K is a quadratic field, the rank is 1 if it is a real quadratic field, and 0 if an imaginary quadratic field.
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
The theory for real quadratic fields is essentially the theory of Pell's equation. The rank is positive for all number fields besides Q {\displaystyle \mathbb {Q} } and imaginary quadratic fields, which have rank 0. The 'size' of the units is measured in general by a determinant called the regulator.
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
In principle a basis for the units can be effectively computed; in practice the calculations are quite involved when n is large. The torsion in the group of units is the set of all roots of unity of K, which form a finite cyclic group. For a number field with at least one real embedding the torsion must therefore be only {1,−1}.
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
There are number fields, for example most imaginary quadratic fields, that have no real embeddings but whose unit group nevertheless has torsion {1, −1}. Totally real fields are special with respect to units. If L/K is a finite extension of number fields with degree greater than 1 and the unit groups for the integers of L and K have the same rank, then K is totally real and L is a totally complex quadratic extension.
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
The converse holds too. (An example is K equal to the rationals and L equal to an imaginary quadratic field; both have unit rank 0.)
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
The theorem applies not only to the maximal order OK but to any order O ⊂ OK. There is a generalisation of the unit theorem by Helmut Hasse (and later Claude Chevalley) to describe the structure of the group of S-units, determining the rank of the unit group in localizations of rings of integers. Also, the Galois module structure of Q ⊕ OK,S ⊗Z Q has been determined.
https://en.wikipedia.org/wiki/Regulator_of_an_algebraic_number_field
In mathematics, Dixon's identity (or Dixon's theorem or Dixon's formula) is any of several different but closely related identities proved by A. C. Dixon, some involving finite sums of products of three binomial coefficients, and some evaluating a hypergeometric sum. These identities famously follow from the MacMahon Master theorem, and can now be routinely proved by computer algorithms (Ekhad 1990).
https://en.wikipedia.org/wiki/Dixon's_identity
In mathematics, Dodgson condensation or method of contractants is a method of computing the determinants of square matrices. It is named for its inventor, Charles Lutwidge Dodgson (better known by his pseudonym, Lewis Carroll). The method, in the case of an n × n matrix, constructs an (n − 1) × (n − 1) matrix, then an (n − 2) × (n − 2) one, and so on, finishing with a 1 × 1 matrix whose single entry is the determinant of the original matrix.
https://en.wikipedia.org/wiki/Dodgson_condensation
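A minimal sketch of the condensation (assuming no zero entries arise in the interior during the process; handling zeros requires row/column swaps that this sketch omits):

```python
def dodgson_det(m):
    """Integer determinant by Dodgson condensation.

    Each stage replaces the matrix by its 2x2 connected minors, divided
    entrywise by the interior of the stage two steps back.
    """
    a = [row[:] for row in m]                 # current condensation stage
    prev = [[1] * len(m) for _ in m]          # stage two steps back (divisors)
    while len(a) > 1:
        k = len(a) - 1
        nxt = [[(a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j])
                // prev[i + 1][j + 1] for j in range(k)] for i in range(k)]
        prev, a = a, nxt
    return a[0][0]
```

For integer matrices every division in the process is exact, so integer division is safe here.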
In mathematics, Dolgachev surfaces are certain simply connected elliptic surfaces, introduced by Igor Dolgachev (1981). They can be used to give examples of an infinite family of homeomorphic simply connected compact 4-manifolds, no two of which are diffeomorphic.
https://en.wikipedia.org/wiki/Dolgachev_surface
In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time. As the name suggests, the result is usually given in the case that the process is a martingale, but the result is also valid for submartingales. The inequality is due to the American mathematician Joseph L. Doob.
https://en.wikipedia.org/wiki/Doob's_martingale_inequality
In mathematics, Drinfeld reciprocity, introduced by Drinfeld (1974), is a correspondence between eigenforms of the moduli space of Drinfeld modules and factors of the corresponding Jacobian variety, such that all twisted L-functions are the same.
https://en.wikipedia.org/wiki/Drinfeld_reciprocity
In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids. A new proof found by Vitali Milman in the 1970s was one of the starting points for the development of asymptotic geometric analysis (also called asymptotic functional analysis or the local theory of Banach spaces).
https://en.wikipedia.org/wiki/Dvoretzky's_theorem
In mathematics, E-functions are a type of power series that satisfy particular arithmetic conditions on the coefficients. They are of interest in transcendental number theory, and are more special than G-functions.
https://en.wikipedia.org/wiki/E-function
In mathematics, E6 is the name of some closely related Lie groups, linear algebraic groups or their Lie algebras e6, all of which have dimension 78; the same notation E6 is used for the corresponding root lattice, which has rank 6. The designation E6 comes from the Cartan–Killing classification of the complex simple Lie algebras (see Élie Cartan § Work). This classifies Lie algebras into four infinite series labeled An, Bn, Cn, Dn, and five exceptional cases labeled E6, E7, E8, F4, and G2. The E6 algebra is thus one of the five exceptional cases.
https://en.wikipedia.org/wiki/E6_(mathematics)
The fundamental group of the complex form, compact real form, or any algebraic version of E6 is the cyclic group Z/3Z, and its outer automorphism group is the cyclic group Z/2Z. Its fundamental representation is 27-dimensional (complex), and a basis is given by the 27 lines on a cubic surface. The dual representation, which is inequivalent, is also 27-dimensional. In particle physics, E6 plays a role in some grand unified theories.
https://en.wikipedia.org/wiki/E6_(mathematics)
In mathematics, E7 is the name of several closely related Lie groups, linear algebraic groups or their Lie algebras e7, all of which have dimension 133; the same notation E7 is used for the corresponding root lattice, which has rank 7. The designation E7 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled An, Bn, Cn, Dn, and five exceptional cases labeled E6, E7, E8, F4, and G2. The E7 algebra is thus one of the five exceptional cases. The fundamental group of the (adjoint) complex form, compact real form, or any algebraic version of E7 is the cyclic group Z/2Z, and its outer automorphism group is the trivial group. The dimension of its fundamental representation is 56.
https://en.wikipedia.org/wiki/E7_(mathematics)
In mathematics, E8 is any of several closely related exceptional simple Lie groups, linear algebraic groups or Lie algebras of dimension 248; the same notation is used for the corresponding root lattice, which has rank 8. The designation E8 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled An, Bn, Cn, Dn, and five exceptional cases labeled G2, F4, E6, E7, and E8. The E8 algebra is the largest and most complicated of these exceptional cases.
https://en.wikipedia.org/wiki/E8_Lie_algebra
In mathematics, Ehrling's lemma, also known as Lions' lemma, is a result concerning Banach spaces. It is often used in functional analysis to demonstrate the equivalence of certain norms on Sobolev spaces. It was named after Gunnar Ehrling.
https://en.wikipedia.org/wiki/Ehrling's_lemma
In mathematics, Eichler cohomology (also called parabolic cohomology or cuspidal cohomology) is a cohomology theory for Fuchsian groups, introduced by Eichler (1957), that is a variation of group cohomology analogous to the image of the cohomology with compact support in the ordinary cohomology group. The Eichler–Shimura isomorphism, introduced by Eichler for complex cohomology and by Shimura (1959) for real cohomology, is an isomorphism between an Eichler cohomology group and a space of cusp forms. There are several variations of the Eichler–Shimura isomorphism, because one can use either real or complex coefficients, and can also use either Eichler cohomology or ordinary group cohomology as in (Gunning 1961). There is also a variation of the Eichler–Shimura isomorphisms using l-adic cohomology instead of real cohomology, which relates the coefficients of cusp forms to eigenvalues of Frobenius acting on these groups. Deligne (1971) used this to reduce the Ramanujan conjecture to the Weil conjectures that he later proved.
https://en.wikipedia.org/wiki/Eichler_cohomology
In mathematics, Eisenstein's criterion gives a sufficient condition for a polynomial with integer coefficients to be irreducible over the rational numbers – that is, for it to not be factorizable into the product of non-constant polynomials with rational coefficients. This criterion is not applicable to all polynomials with integer coefficients that are irreducible over the rational numbers, but it does allow in certain important cases for irreducibility to be proved with very little effort. It may apply either directly or after transformation of the original polynomial. This criterion is named after Gotthold Eisenstein. In the early 20th century, it was also known as the Schönemann–Eisenstein theorem because Theodor Schönemann was the first to publish it.
https://en.wikipedia.org/wiki/Eisenstein_criterion
In mathematics, Eisenstein's theorem, named after the German mathematician Gotthold Eisenstein, applies to the coefficients of any power series which is an algebraic function with rational number coefficients. Through the theorem, it is readily demonstrable, for example, that the exponential function must be a transcendental function.
https://en.wikipedia.org/wiki/Eisenstein's_theorem
In mathematics, Eisenstein–Kronecker numbers are an analogue for imaginary quadratic fields of generalized Bernoulli numbers. They are defined in terms of classical Eisenstein–Kronecker series, which were studied by Kenichi Bannai and Shinichi Kobayashi using the Poincaré bundle. Eisenstein–Kronecker numbers are algebraic and satisfy congruences that can be used in the construction of two-variable p-adic L-functions. They are related to critical L-values of Hecke characters.
https://en.wikipedia.org/wiki/Eisenstein–Kronecker_number
In mathematics, Enriques surfaces are algebraic surfaces such that the irregularity q = 0 and the canonical line bundle K is non-trivial but has trivial square. Enriques surfaces are all projective (and therefore Kähler over the complex numbers) and are elliptic surfaces of genus 0. Over fields of characteristic not 2 they are quotients of K3 surfaces by a group of order 2 acting without fixed points and their theory is similar to that of algebraic K3 surfaces. Enriques surfaces were first studied in detail by Enriques (1896) as an answer to a question discussed by Castelnuovo (1895) about whether a surface with q = pg = 0 is necessarily rational, though some of the Reye congruences introduced earlier by Reye (1882) are also examples of Enriques surfaces.
https://en.wikipedia.org/wiki/Reye_congruence
Enriques surfaces can also be defined over other fields. Over fields of characteristic other than 2, Artin (1960) showed that the theory is similar to that over the complex numbers. Over fields of characteristic 2 the definition is modified, and there are two new families, called singular and supersingular Enriques surfaces, described by Bombieri & Mumford (1976). These two extra families are related to the two non-discrete algebraic group schemes of order 2 in characteristic 2.
https://en.wikipedia.org/wiki/Reye_congruence
In mathematics, Erdős space is a topological space named after Paul Erdős, who described it in 1940. Erdős space is defined as the subspace E ⊂ ℓ² of the Hilbert space of square-summable sequences consisting of the sequences whose elements are all rational numbers. Erdős space is a totally disconnected, one-dimensional topological space. The space E is homeomorphic to E × E in the product topology.
https://en.wikipedia.org/wiki/Erdős_space
If the set of all homeomorphisms of the Euclidean space R^n (for n ≥ 2) that leave invariant the set Q^n of rational vectors is endowed with the compact-open topology, it becomes homeomorphic to Erdős space. Erdős space also surfaces in complex dynamics via iteration of the function f(z) = e^z − 1. Let f^n denote the n-fold composition of f. The set of all points z ∈ C such that Im(f^n(z)) → ∞ is a collection of pairwise disjoint rays (homeomorphic copies of [0, ∞)), each joining an endpoint in C to the point at infinity. The set of finite endpoints is homeomorphic to Erdős space E.
https://en.wikipedia.org/wiki/Erdős_space
In mathematics, Esakia duality is the dual equivalence between the category of Heyting algebras and the category of Esakia spaces. Esakia duality provides an order-topological representation of Heyting algebras via Esakia spaces. Let Esa denote the category of Esakia spaces and Esakia morphisms.
https://en.wikipedia.org/wiki/Esakia_duality
Let H be a Heyting algebra, X denote the set of prime filters of H, and ≤ denote set-theoretic inclusion on the prime filters of H. Also, for each a ∈ H, let φ(a) = {x ∈ X: a ∈ x}, and let τ denote the topology on X generated by {φ(a), X − φ(a): a ∈ H}. Theorem: (X, τ, ≤) is an Esakia space, called the Esakia dual of H. Moreover, φ is a Heyting algebra isomorphism from H onto the Heyting algebra of all clopen up-sets of (X, τ, ≤). Furthermore, each Esakia space is isomorphic in Esa to the Esakia dual of some Heyting algebra. This representation of Heyting algebras by means of Esakia spaces is functorial and yields a dual equivalence between the categories HA of Heyting algebras and Heyting algebra homomorphisms and Esa of Esakia spaces and Esakia morphisms. Theorem: HA is dually equivalent to Esa. The duality can also be expressed in terms of spectral spaces, where it says that the category of Heyting algebras is dually equivalent to the category of Heyting spaces.
https://en.wikipedia.org/wiki/Esakia_duality
In mathematics, Esenin-Volpin's theorem states that the weight of an infinite compact dyadic space is the supremum of the weights of its points. It was introduced by Alexander Esenin-Volpin (1949) and generalized by Efimov (1965) and Turzański (1992).
https://en.wikipedia.org/wiki/Esenin-Volpin's_theorem
In mathematics, Euclid numbers are integers of the form En = pn# + 1, where pn# is the nth primorial, i.e. the product of the first n prime numbers. They are named after the ancient Greek mathematician Euclid, in connection with Euclid's theorem that there are infinitely many prime numbers.
https://en.wikipedia.org/wiki/Euclid_number
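The definition translates directly into code (an illustrative sketch; the function name is hypothetical):

```python
def euclid_numbers(count):
    """First `count` Euclid numbers E_n = p_n# + 1."""
    result, primorial, primes, n = [], 1, [], 2
    while len(result) < count:
        if all(n % p for p in primes):      # trial-division primality test
            primes.append(n)
            primorial *= n                  # running primorial p_n#
            result.append(primorial + 1)
        n += 1
    return result

# The first five Euclid numbers are 3, 7, 31, 211, 2311.
```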
In mathematics, Euclidean relations are a class of binary relations that formalize "Axiom 1" in Euclid's Elements: "Magnitudes which are equal to the same are equal to each other."
https://en.wikipedia.org/wiki/Euclidean_relation
In mathematics, Euler's differential equation is a first-order non-linear ordinary differential equation, named after Leonhard Euler. It is given by: {\frac {dy}{dx}}+{\frac {\sqrt {a_{0}+a_{1}y+a_{2}y^{2}+a_{3}y^{3}+a_{4}y^{4}}}{\sqrt {a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}}}}=0. This is a separable equation and the solution is given by the following integral equation: \int {\frac {dy}{\sqrt {a_{0}+a_{1}y+a_{2}y^{2}+a_{3}y^{3}+a_{4}y^{4}}}}+\int {\frac {dx}{\sqrt {a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4}}}}=c.
https://en.wikipedia.org/wiki/Euler's_differential_equation
In mathematics, Euler's four-square identity says that the product of two numbers, each of which is a sum of four squares, is itself a sum of four squares.
https://en.wikipedia.org/wiki/Four_squares_formula
In mathematics, Euler's identity (also known as Euler's equation) is the equality e^{iπ} + 1 = 0, where e is Euler's number, the base of natural logarithms, i is the imaginary unit, which by definition satisfies i2 = −1, and π is pi, the ratio of the circumference of a circle to its diameter. Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula e^{ix} = cos x + i sin x when evaluated for x = π. Euler's identity is considered to be an exemplar of mathematical beauty as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that π is transcendental, which implies the impossibility of squaring the circle.
https://en.wikipedia.org/wiki/Euler's_identity
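The identity e^{iπ} + 1 = 0 can be checked to floating-point precision:

```python
import cmath

# e^{i*pi} + 1 vanishes up to rounding error (about 1e-16 in magnitude)
value = cmath.exp(1j * cmath.pi) + 1
```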
In mathematics, Euler's idoneal numbers (also called suitable numbers or convenient numbers) are the positive integers D such that any integer expressible in only one way as x2 ± Dy2 (where x2 is relatively prime to Dy2) is a prime power or twice a prime power. In particular, a number that has two distinct representations as a sum of two squares is composite. Every idoneal number generates a set containing infinitely many primes and missing infinitely many other primes.
https://en.wikipedia.org/wiki/Idoneal_number
In mathematics, F4 is the name of a Lie group and also its Lie algebra f4. It is one of the five exceptional simple Lie groups. F4 has rank 4 and dimension 52. The compact form is simply connected and its outer automorphism group is the trivial group.
https://en.wikipedia.org/wiki/F4_(mathematics)
Its fundamental representation is 26-dimensional. The compact real form of F4 is the isometry group of a 16-dimensional Riemannian manifold known as the octonionic projective plane OP2. This can be seen systematically using a construction known as the magic square, due to Hans Freudenthal and Jacques Tits.
https://en.wikipedia.org/wiki/F4_(mathematics)
There are 3 real forms: a compact one, a split one, and a third one. They are the isometry groups of the three real Albert algebras. The F4 Lie algebra may be constructed by adding 16 generators transforming as a spinor to the 36-dimensional Lie algebra so(9), in analogy with the construction of E8. In older books and papers, F4 is sometimes denoted by E4.
https://en.wikipedia.org/wiki/F4_(mathematics)
In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou.
https://en.wikipedia.org/wiki/Classification_of_Fatou_components
In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou. Fatou's lemma can be used to prove the Fatou–Lebesgue theorem and Lebesgue's dominated convergence theorem.
https://en.wikipedia.org/wiki/Fatou's_lemma
In mathematics, Faulhaber's formula, named after the early 17th-century mathematician Johann Faulhaber, expresses the sum of the p-th powers of the first n positive integers as a polynomial in n. In modern notation, Faulhaber's formula is \sum _{k=1}^{n}k^{p}={\frac {1}{p+1}}\sum _{k=0}^{p}{\binom {p+1}{k}}B_{k}n^{p+1-k}. Here, {\binom {p+1}{k}} is the binomial coefficient "p + 1 choose k", and the B_k are the Bernoulli numbers with the convention that B_1 = +1/2.
https://en.wikipedia.org/wiki/Bernoulli's_formula
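A direct implementation with exact rational arithmetic (an illustrative sketch; the Bernoulli numbers are generated by the standard recurrence and then switched to the B_1 = +1/2 convention):

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """Bernoulli numbers B_0..B_m with the convention B_1 = +1/2."""
    b = [Fraction(1)]
    for n in range(1, m + 1):
        # standard recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0
        b.append(-sum(comb(n + 1, k) * b[k] for k in range(n)) / (n + 1))
    if m >= 1:
        b[1] = Fraction(1, 2)   # flip from -1/2 to the +1/2 convention
    return b

def faulhaber(p, n):
    """1^p + 2^p + ... + n^p via Faulhaber's formula."""
    b = bernoulli_plus(p)
    return sum(comb(p + 1, k) * b[k] * n ** (p + 1 - k)
               for k in range(p + 1)) / (p + 1)
```

For example, faulhaber(2, 3) evaluates the polynomial (n³ + 3n²/2 + n/2)/3 at n = 3, recovering 1 + 4 + 9 = 14.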
In mathematics, Favard's theorem, also called the Shohat–Favard theorem, states that a sequence of polynomials satisfying a suitable 3-term recurrence relation is a sequence of orthogonal polynomials. The theorem was introduced in the theory of orthogonal polynomials by Favard (1935) and Shohat (1938), though essentially the same theorem was used by Stieltjes in the theory of continued fractions many years before Favard's paper, and was rediscovered several times by other authors before Favard's work.
https://en.wikipedia.org/wiki/Favard_theorem
In mathematics, Fejér's theorem, named after Hungarian mathematician Lipót Fejér, states the following:
https://en.wikipedia.org/wiki/Fejér's_theorem
In mathematics, Felix Klein's j-invariant or j function, regarded as a function of a complex variable τ, is a modular function of weight zero for SL(2, Z) defined on the upper half-plane of complex numbers. It is the unique such function which is holomorphic away from a simple pole at the cusp such that j(e^{2πi/3}) = 0 and j(i) = 1728 = 12³. Rational functions of j are modular, and in fact give all modular functions. Classically, the j-invariant was studied as a parameterization of elliptic curves over C, but it also has surprising connections to the symmetries of the Monster group (this connection is referred to as monstrous moonshine).
https://en.wikipedia.org/wiki/Klein_J-invariant
In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let f be a proper convex function on Rn and let g be a proper concave function on Rn. Then, if regularity conditions are satisfied, \inf _{x}(f(x)-g(x))=\sup _{p}(g_{*}(p)-f^{*}(p)), where f* is the convex conjugate of f (also referred to as the Fenchel–Legendre transform) and g* is the concave conjugate of g. That is, f^{*}(x^{*}):=\sup \left\{\left\langle x^{*},x\right\rangle -f(x)\mid x\in \mathbb {R} ^{n}\right\} and g_{*}(x^{*}):=\inf \left\{\left\langle x^{*},x\right\rangle -g(x)\mid x\in \mathbb {R} ^{n}\right\}.
https://en.wikipedia.org/wiki/Fenchel_duality
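A numerical sanity check on a toy pair (illustrative, not from the article): f(x) = x² is convex, g(x) = −(x − 1)² is concave, and both sides of the duality equal 1/2. The conjugates are approximated by brute force over a grid:

```python
def f(x): return x * x                    # proper convex
def g(x): return -(x - 1) ** 2            # proper concave

xs = [i / 100 for i in range(-500, 501)]  # grid for the inner optimizations
ps = [i / 10 for i in range(-50, 51)]     # grid for the dual variable

primal = min(f(x) - g(x) for x in xs)

def f_star(p):                            # convex conjugate, grid approximation
    return max(p * x - f(x) for x in xs)

def g_star(p):                            # concave conjugate, grid approximation
    return min(p * x - g(x) for x in xs)

dual = max(g_star(p) - f_star(p) for p in ps)
# primal and dual both come out near 1/2
```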
In mathematics, Fenchel–Nielsen coordinates are coordinates for Teichmüller space introduced by Werner Fenchel and Jakob Nielsen.
https://en.wikipedia.org/wiki/Fenchel–Nielsen_coordinates
In mathematics, Fermat's theorem (also known as the interior extremum theorem) is a method to find local maxima and minima of differentiable functions on open sets by showing that every local extremum of the function is a stationary point (the function's derivative is zero at that point). Fermat's theorem is a theorem in real analysis, named after Pierre de Fermat. By using Fermat's theorem, the potential extrema of a function f, with derivative f′, are found by solving an equation in f′. Fermat's theorem gives only a necessary condition for extreme function values, as some stationary points are inflection points (not a maximum or minimum). The function's second derivative, if it exists, can sometimes be used to determine whether a stationary point is a maximum or minimum.
https://en.wikipedia.org/wiki/Fermat's_theorem_(stationary_points)
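A concrete instance (illustrative): for f(x) = x³ − 3x, solving f′(x) = 3x² − 3 = 0 yields the stationary points x = ±1, which the second derivative then classifies:

```python
def f(x):   return x ** 3 - 3 * x     # the function under study
def df(x):  return 3 * x ** 2 - 3     # derivative; zero at x = -1 and x = 1
def d2f(x): return 6 * x              # second derivative, for classification

stationary = [-1.0, 1.0]              # solutions of f'(x) = 0
# d2f(-1) = -6 < 0: local maximum at x = -1, with f(-1) = 2
# d2f(1)  =  6 > 0: local minimum at x =  1, with f(1)  = -2
```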
In mathematics, Ferrers functions are certain special functions defined in terms of hypergeometric functions. They are named after Norman Macleod Ferrers.
https://en.wikipedia.org/wiki/Ferrers_function
In mathematics, Fischer's inequality gives an upper bound for the determinant of a positive-semidefinite matrix whose entries are complex numbers in terms of the determinants of its principal diagonal blocks. Suppose A, C are respectively p×p, q×q positive-semidefinite complex matrices and B is a p×q complex matrix. Let M:={\begin{pmatrix}A&B\\B^{*}&C\end{pmatrix}} so that M is a (p+q)×(p+q) positive-semidefinite matrix. Then Fischer's inequality states that det(M) ≤ det(A) det(C).
https://en.wikipedia.org/wiki/Fischer's_inequality
If M is positive-definite, equality is achieved in Fischer's inequality if and only if all the entries of B are 0.
https://en.wikipedia.org/wiki/Fischer's_inequality
Inductively one may conclude that a similar inequality holds for a block decomposition of M with multiple principal diagonal blocks. Considering 1×1 blocks, a corollary is Hadamard's inequality. On the other hand, Fischer's inequality can also be proved by using Hadamard's inequality, see the proof of Theorem 7.8.5 in Horn and Johnson's Matrix Analysis.
https://en.wikipedia.org/wiki/Fischer's_inequality
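The inequality can be checked on a concrete positive-semidefinite integer matrix M = RᵀR (an illustrative sketch with exact integer arithmetic; the helper names are hypothetical):

```python
def det(m):
    """Laplace-expansion determinant (fine for small integer matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def gram(r):
    """M = R^T R, positive-semidefinite for any real matrix R."""
    n = len(r[0])
    return [[sum(row[i] * row[j] for row in r) for j in range(n)]
            for i in range(n)]

R = [[1, 2, 0, 1], [0, 1, 3, 1], [2, 0, 1, 1], [1, 1, 1, 2]]
M = gram(R)
A = [row[:2] for row in M[:2]]    # top-left principal 2x2 block
C = [row[2:] for row in M[2:]]    # bottom-right principal 2x2 block
# Fischer: det(M) <= det(A) * det(C); here 196 <= 27 * 41 = 1107
```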
In mathematics, Fisher's equation (named after statistician and biologist Ronald Fisher), also known as the Kolmogorov–Petrovsky–Piskunov equation (named after Andrey Kolmogorov, Ivan Petrovsky, and Nikolai Piskunov), KPP equation or Fisher–KPP equation, is the partial differential equation {\frac {\partial u}{\partial t}}=D{\frac {\partial ^{2}u}{\partial x^{2}}}+ru(1-u). It is a kind of reaction–diffusion system that can be used to model population growth and wave propagation.
https://en.wikipedia.org/wiki/KPP_equation
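A minimal explicit finite-difference sketch (assuming D = r = 1, a step initial condition, and illustrative parameters) shows the characteristic traveling front invading the unstable state u = 0:

```python
dx, dt = 0.5, 0.1                # dt <= dx^2/2 keeps the diffusion step stable
m = 201                          # grid on [0, 100]
u = [1.0 if i * dx < 5 else 0.0 for i in range(m)]
for _ in range(200):             # integrate u_t = u_xx + u(1 - u) up to t = 20
    lap = [0.0] + [(u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
                   for i in range(1, m - 1)] + [0.0]
    u = [u[i] + dt * (lap[i] + u[i] * (1 - u[i])) for i in range(m)]
# By t = 20 the front has moved right at a speed close to 2, so u is near 1
# well behind it (e.g. x = 10) and near 0 well ahead of it (e.g. x = 90).
```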
In mathematics, Floer homology is a tool for studying symplectic geometry and low-dimensional topology. Floer homology is a novel invariant that arises as an infinite-dimensional analogue of finite-dimensional Morse homology. Andreas Floer introduced the first version of Floer homology, now called Lagrangian Floer homology, in his proof of the Arnold conjecture in symplectic geometry.
https://en.wikipedia.org/wiki/Contact_homology
Floer also developed a closely related theory for Lagrangian submanifolds of a symplectic manifold. A third construction, also due to Floer, associates homology groups to closed three-dimensional manifolds using the Yang–Mills functional. These constructions and their descendants play a fundamental role in current investigations into the topology of symplectic and contact manifolds as well as (smooth) three- and four-dimensional manifolds.
https://en.wikipedia.org/wiki/Contact_homology
Floer homology is typically defined by associating to the object of interest an infinite-dimensional manifold and a real valued function on it. In the symplectic version, this is the free loop space of a symplectic manifold with the symplectic action functional. For the (instanton) version for three-manifolds, it is the space of SU(2)-connections on a three-dimensional manifold with the Chern–Simons functional.
https://en.wikipedia.org/wiki/Contact_homology
Loosely speaking, Floer homology is the Morse homology of the function on the infinite-dimensional manifold. A Floer chain complex is formed from the abelian group spanned by the critical points of the function (or possibly certain collections of critical points). The differential of the chain complex is defined by counting the function's gradient flow lines connecting certain pairs of critical points (or collections thereof).
https://en.wikipedia.org/wiki/Contact_homology
Floer homology is the homology of this chain complex. The gradient flow line equation, in a situation where Floer's ideas can be successfully applied, is typically a geometrically meaningful and analytically tractable equation. For symplectic Floer homology, the gradient flow equation for a path in the loop space is (a perturbed version of) the Cauchy–Riemann equation for a map of a cylinder (the total space of the path of loops) to the symplectic manifold of interest; solutions are known as pseudoholomorphic curves. The Gromov compactness theorem is then used to show that the differential is well-defined and squares to zero, so that the Floer homology is defined. For instanton Floer homology, the gradient flow equation is exactly the Yang–Mills equation on the three-manifold crossed with the real line.
https://en.wikipedia.org/wiki/Contact_homology
In mathematics, Fontaine's period rings are a collection of commutative rings first defined by Jean-Marc Fontaine that are used to classify p-adic Galois representations.
https://en.wikipedia.org/wiki/Ring_of_p-adic_periods
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer. The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis.
https://en.wikipedia.org/wiki/Fourier_synthesis
For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
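This analysis/synthesis round trip can be sketched with the discrete Fourier transform; the sample rate and the two partials below are assumptions chosen for illustration:

```python
import numpy as np

# Fourier analysis of a sampled "note", then Fourier synthesis from its
# spectrum. The 8 kHz rate and the 440/880 Hz partials are assumed values.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
note = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(note)                 # analysis: decompose
freqs = np.fft.rfftfreq(len(note), 1 / fs)   # frequency of each bin, in Hz

# The two strongest components recover the partials that were mixed in.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print([float(f) for f in sorted(peaks)])     # [440.0, 880.0]

rebuilt = np.fft.irfft(spectrum, n=len(note))  # synthesis: rebuild
assert np.allclose(rebuilt, note)
```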
https://en.wikipedia.org/wiki/Fourier_synthesis
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed.
https://en.wikipedia.org/wiki/Fourier_synthesis
Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis. To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
https://en.wikipedia.org/wiki/Fourier_synthesis
In mathematics, Fourier–Bessel series is a particular kind of generalized Fourier series (an infinite series expansion on a finite interval) based on Bessel functions. Fourier–Bessel series are used in the solution to partial differential equations, particularly in cylindrical coordinate systems.
https://en.wikipedia.org/wiki/Fourier–Bessel_series
In mathematics, Fredholm operators are certain operators that arise in the Fredholm theory of integral equations. They are named in honour of Erik Ivar Fredholm. By definition, a Fredholm operator is a bounded linear operator T: X → Y between two Banach spaces with finite-dimensional kernel {\displaystyle \ker T} and finite-dimensional (algebraic) cokernel {\displaystyle \mathrm {coker} \,T=Y/\mathrm {ran} \,T}, and with closed range {\displaystyle \mathrm {ran} \,T}. The last condition is actually redundant. The index of a Fredholm operator is the integer {\displaystyle \mathrm {ind} \,T:=\dim \ker T-\mathrm {codim} \,\mathrm {ran} \,T,} or in other words, {\displaystyle \mathrm {ind} \,T:=\dim \ker T-\dim \mathrm {coker} \,T.}
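In finite dimensions every linear map is Fredholm, and rank–nullity shows that the index of any map T: R^n → R^m is n − m, independent of T. A small sketch (the sizes and seed are arbitrary):

```python
import numpy as np

# Finite-dimensional analogue: for T: R^n -> R^m,
# ind T = dim ker T - dim coker T = (n - rank) - (m - rank) = n - m.
rng = np.random.default_rng(1)
n, m = 5, 3
T = rng.standard_normal((m, n))      # a linear map R^n -> R^m

rank = np.linalg.matrix_rank(T)
dim_ker = n - rank                   # nullity of T
dim_coker = m - rank                 # codimension of the range of T
assert dim_ker - dim_coker == n - m  # the index depends only on n and m
```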
https://en.wikipedia.org/wiki/Fredholm_index
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm.
https://en.wikipedia.org/wiki/Fredholm_theory
In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of the Fredholm operator on Banach spaces. The Fredholm alternative is one of the Fredholm theorems.
https://en.wikipedia.org/wiki/Fredholm's_theorem
In mathematics, Fresnel's wave surface, found by Augustin-Jean Fresnel in 1822, is a quartic surface describing the propagation of light in an optically biaxial crystal. Wave surfaces are special cases of tetrahedroids which are in turn special cases of Kummer surfaces. In projective coordinates (w:x:y:z) the wave surface is given by {\displaystyle {\frac {a^{2}x^{2}}{x^{2}+y^{2}+z^{2}-a^{2}w^{2}}}+{\frac {b^{2}y^{2}}{x^{2}+y^{2}+z^{2}-b^{2}w^{2}}}+{\frac {c^{2}z^{2}}{x^{2}+y^{2}+z^{2}-c^{2}w^{2}}}=0}
https://en.wikipedia.org/wiki/Wave_surface
In mathematics, Friedrichs's inequality is a theorem of functional analysis, due to Kurt Friedrichs. It places a bound on the Lp norm of a function using Lp bounds on the weak derivatives of the function and the geometry of the domain, and can be used to show that certain norms on Sobolev spaces are equivalent. Friedrichs's inequality generalizes the Poincaré–Wirtinger inequality, which deals with the case k = 1.
https://en.wikipedia.org/wiki/Friedrichs'_inequality
In mathematics, Frobenius' theorem gives necessary and sufficient conditions for finding a maximal set of independent solutions of an overdetermined system of first-order homogeneous linear partial differential equations. In modern geometric terms, given a family of vector fields, the theorem gives necessary and sufficient integrability conditions for the existence of a foliation by maximal integral manifolds whose tangent bundles are spanned by the given vector fields. The theorem generalizes the existence theorem for ordinary differential equations, which guarantees that a single vector field always gives rise to integral curves; Frobenius gives compatibility conditions under which the integral curves of r vector fields mesh into coordinate grids on r-dimensional integral manifolds.
https://en.wikipedia.org/wiki/Frobenius_integration_theorem
The theorem is foundational in differential topology and calculus on manifolds. Contact geometry studies 1-forms that maximally violate the assumptions of Frobenius' theorem. An example is shown on the right.
https://en.wikipedia.org/wiki/Frobenius_integration_theorem
In mathematics, Frölicher spaces extend the notions of calculus and smooth manifolds. They were introduced in 1982 by the mathematician Alfred Frölicher.
https://en.wikipedia.org/wiki/Frölicher_space
In mathematics, Fubini's theorem on differentiation, named after Guido Fubini, is a result in real analysis concerning the differentiation of series of monotonic functions. It can be proven by using Fatou's lemma and the properties of null sets.
https://en.wikipedia.org/wiki/Fubini's_theorem_on_differentiation
In mathematics, Fuchs' theorem, named after Lazarus Fuchs, states that a second-order differential equation of the form {\displaystyle y''+p(x)y'+q(x)y=g(x)} has a solution expressible by a generalised Frobenius series when {\displaystyle p(x)}, {\displaystyle q(x)} and {\displaystyle g(x)} are analytic at {\displaystyle x=a} or {\displaystyle a} is a regular singular point. That is, any solution to this second-order differential equation can be written as {\displaystyle y=\sum _{n=0}^{\infty }a_{n}(x-a)^{n+s}} for some positive real s, or {\displaystyle y=y_{0}\ln(x-a)+\sum _{n=0}^{\infty }b_{n}(x-a)^{n+r}} for some positive real r, where y0 is a solution of the first kind. Its radius of convergence is at least as large as the minimum of the radii of convergence of {\displaystyle p(x)}, {\displaystyle q(x)} and {\displaystyle g(x)}.
https://en.wikipedia.org/wiki/Fuchs's_theorem
In mathematics, Fuglede's theorem is a result in operator theory, named after Bent Fuglede.
https://en.wikipedia.org/wiki/Fuglede's_theorem
In mathematics, Fujita's conjecture is a problem in the theories of algebraic geometry and complex manifolds, unsolved as of 2017. It is named after Takao Fujita, who formulated it in 1985.
https://en.wikipedia.org/wiki/Fujita_conjecture
In mathematics, functional analysis is concerned with the study of vector spaces and operators acting upon them. It has its historical roots in the study of function spaces, in particular transformations of functions, such as the Fourier transform, as well as in the study of differential and integral equations. In functional analysis, an important class of vector spaces consists of the complete normed vector spaces over the real or complex numbers, which are called Banach spaces.
https://en.wikipedia.org/wiki/Per_Enflo
An important example of a Banach space is a Hilbert space, where the norm arises from an inner product. Hilbert spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, stochastic processes, and time-series analysis. Besides studying spaces of functions, functional analysis also studies the continuous linear operators on spaces of functions.
https://en.wikipedia.org/wiki/Per_Enflo
In mathematics, G2 is the name of three simple Lie groups (a complex form, a compact real form and a split real form), their Lie algebras g 2 , {\displaystyle {\mathfrak {g}}_{2},} as well as some algebraic groups. They are the smallest of the five exceptional simple Lie groups. G2 has rank 2 and dimension 14. It has two fundamental representations, with dimension 7 and 14. The compact form of G2 can be described as the automorphism group of the octonion algebra or, equivalently, as the subgroup of SO(7) that preserves any chosen particular vector in its 8-dimensional real spinor representation (a spin representation).
https://en.wikipedia.org/wiki/G2_(mathematics)
In mathematics, Gabriel's theorem, proved by Pierre Gabriel, classifies the quivers of finite type in terms of Dynkin diagrams.
https://en.wikipedia.org/wiki/Gabriel's_theorem
In mathematics, Galois cohomology is the study of the group cohomology of Galois modules, that is, the application of homological algebra to modules for Galois groups. A Galois group G associated to a field extension L/K acts in a natural way on some abelian groups, for example those constructed directly from L, but also through other Galois representations that may be derived by more abstract means. Galois cohomology accounts for the way in which taking Galois-invariant elements fails to be an exact functor.
https://en.wikipedia.org/wiki/Galois_cohomology
In mathematics, Galois rings are a type of finite commutative rings which generalize both the finite fields and the rings of integers modulo a prime power. A Galois ring is constructed from the ring {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} } similar to how a finite field {\displaystyle \mathbb {F} _{p^{r}}} is constructed from {\displaystyle \mathbb {F} _{p}}. It is a Galois extension of {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} }, when the concept of a Galois extension is generalized beyond the context of fields.
https://en.wikipedia.org/wiki/Galois_ring
Galois rings were studied by Krull (1924), and independently by Janusz (1966) and by Raghavendran (1969), who both introduced the name Galois ring. They are named after Évariste Galois, similar to Galois fields, which is another name for finite fields. Galois rings have found applications in coding theory, where certain codes are best understood as linear codes over {\displaystyle \mathbb {Z} /4\mathbb {Z} } using Galois rings GR(4, r).
https://en.wikipedia.org/wiki/Galois_ring
In mathematics, Galois theory, originally introduced by Évariste Galois, provides a connection between field theory and group theory. This connection, the fundamental theorem of Galois theory, allows reducing certain problems in field theory to group theory, which makes them simpler and easier to understand. Galois introduced the subject for studying roots of polynomials. This allowed him to characterize the polynomial equations that are solvable by radicals in terms of properties of the permutation group of their roots—an equation is solvable by radicals if its roots may be expressed by a formula involving only integers, nth roots, and the four basic arithmetic operations.
https://en.wikipedia.org/wiki/Solvable_by_radicals
This widely generalizes the Abel–Ruffini theorem, which asserts that a general polynomial of degree at least five cannot be solved by radicals. Galois theory has been used to solve classic problems including showing that two problems of antiquity cannot be solved as they were stated (doubling the cube and trisecting the angle), and characterizing the regular polygons that are constructible (this characterization was previously given by Gauss, but all known proofs that this characterization is complete require Galois theory).
https://en.wikipedia.org/wiki/Solvable_by_radicals
Galois' work was published by Joseph Liouville fourteen years after his death. The theory took longer to become popular among mathematicians and to be well understood. Galois theory has been generalized to Galois connections and Grothendieck's Galois theory.
https://en.wikipedia.org/wiki/Solvable_by_radicals
In mathematics, Gaussian brackets are a special notation invented by Carl Friedrich Gauss to represent the convergents of a simple continued fraction in the form of a simple fraction. Gauss used this notation in the context of finding solutions of the indeterminate equations of the form {\displaystyle ax=by\pm 1}. This notation should not be confused with the widely prevalent use of square brackets to denote the greatest integer function: {\displaystyle [x]} denotes the greatest integer less than or equal to {\displaystyle x}. This notation was also invented by Gauss and was used in the third proof of the quadratic reciprocity law. The notation {\displaystyle \lfloor x\rfloor }, denoting the floor function, is now more commonly used to denote the greatest integer less than or equal to {\displaystyle x}.
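The bracket satisfies the recurrence [a1, …, an] = an·[a1, …, a(n−1)] + [a1, …, a(n−2)], with [ ] = 1 and [a1] = a1, which is how it produces continued-fraction convergents; a small illustrative sketch:

```python
from fractions import Fraction

def gauss_bracket(a):
    """Evaluate the Gaussian bracket [a1, ..., an] by its recurrence."""
    prev2, prev1 = 1, a[0]           # [] = 1, [a1] = a1
    for ai in a[1:]:
        prev2, prev1 = prev1, ai * prev1 + prev2
    return prev1

# The convergent of the continued fraction [a0; a1, ..., an] is the
# simple fraction [a0, a1, ..., an] / [a1, ..., an].
a = [1, 2, 2, 2, 2]                  # partial quotients for sqrt(2)
conv = Fraction(gauss_bracket(a), gauss_bracket(a[1:]))
print(conv)                          # 41/29
```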
https://en.wikipedia.org/wiki/Gaussian_brackets
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855).
https://en.wikipedia.org/wiki/Gauss_elimination
To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: swapping two rows, multiplying a row by a nonzero number, and adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form.
https://en.wikipedia.org/wiki/Gauss_elimination
This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where two elementary operations on different rows are done at the first and third steps), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form. {\displaystyle {\begin{bmatrix}1&3&1&9\\1&1&-1&1\\3&11&5&35\end{bmatrix}}\to {\begin{bmatrix}1&3&1&9\\0&-2&-2&-8\\0&2&2&8\end{bmatrix}}\to {\begin{bmatrix}1&3&1&9\\0&-2&-2&-8\\0&0&0&0\end{bmatrix}}\to {\begin{bmatrix}1&0&-2&-3\\0&1&1&4\\0&0&0&0\end{bmatrix}}} Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. In this case, the term Gaussian elimination refers to the process until it has reached its upper triangular, or (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
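The example reduction can be reproduced by a small Gauss–Jordan routine built from the three elementary row operations; this is an illustrative sketch over exact rationals, not a production implementation:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form, over rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivot_row = 0
    for col in range(ncols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, nrows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]       # swap two rows
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]  # scale by a nonzero number
        for r in range(nrows):                          # add multiples of the pivot row
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == nrows:
            break
    return m

A = [[1, 3, 1, 9], [1, 1, -1, 1], [3, 11, 5, 35]]
R = [[int(x) for x in row] for row in rref(A)]
print(R)   # [[1, 0, -2, -3], [0, 1, 1, 4], [0, 0, 0, 0]]
```

The output matches the unique reduced row echelon form shown in the worked sequence above.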
https://en.wikipedia.org/wiki/Gauss_elimination
In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space Rn, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces. Gaussian measures are named after the German mathematician Carl Friedrich Gauss. One reason why Gaussian measures are so ubiquitous in probability theory is the central limit theorem. Loosely speaking, it states that if a random variable X is obtained by summing a large number N of independent random variables of order 1, then X is of order {\displaystyle {\sqrt {N}}} and its law is approximately Gaussian.
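The order-√N heuristic can be checked by simulation; the summand distribution, N, and the trial count below are arbitrary illustrative choices:

```python
import numpy as np

# Simulation of the heuristic: a sum X of N independent order-1 variables
# has size of order sqrt(N) and an approximately Gaussian law.
rng = np.random.default_rng(42)
N, trials = 1000, 10_000
X = rng.uniform(-1.0, 1.0, size=(trials, N)).sum(axis=1)

# Each uniform(-1, 1) summand has variance 1/3, so the standard deviation
# of X should be close to sqrt(N/3), i.e. of order sqrt(N).
ratio = X.std() / np.sqrt(N / 3)
assert abs(ratio - 1.0) < 0.05
```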
https://en.wikipedia.org/wiki/Gaussian_measure