| text | source |
|---|---|
In mathematics, the Dirichlet space on the domain $\Omega \subseteq \mathbb{C}$, written $\mathcal{D}(\Omega)$ (named after Peter Gustav Lejeune Dirichlet), is the reproducing kernel Hilbert space of holomorphic functions, contained within the Hardy space $H^2(\Omega)$, for which the Dirichlet integral, defined by $\mathcal{D}(f) := \frac{1}{\pi}\iint_{\Omega} |f'(z)|^2 \, dA = \frac{1}{4\pi}\iint_{\Omega} |\partial_x f|^2 + |\partial_y f|^2 \, dx \, dy,$ is finite (here $dA$ denotes the area Lebesgue measure on the complex plane $\mathbb{C}$). The latter is the integral occurring in Dirichlet's principle for harmonic functions. The Dirichlet integral defines a seminorm on $\mathcal{D}(\Omega)$.
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
It is not a norm in general, since $\mathcal{D}(f) = 0$ whenever $f$ is a constant function. For $f, g \in \mathcal{D}(\Omega)$, we define $\mathcal{D}(f, g) := \frac{1}{\pi}\iint_{\Omega} f'(z)\overline{g'(z)} \, dA(z).$
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
This is a semi-inner product, and clearly $\mathcal{D}(f, f) = \mathcal{D}(f)$. We may equip $\mathcal{D}(\Omega)$ with an inner product given by $\langle f, g\rangle_{\mathcal{D}(\Omega)} := \langle f, g\rangle_{H^2(\Omega)} + \mathcal{D}(f, g)$ for $f, g \in \mathcal{D}(\Omega)$, where $\langle \cdot, \cdot\rangle_{H^2(\Omega)}$ is the usual inner product on $H^2(\Omega)$.
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
The corresponding norm $\|\cdot\|_{\mathcal{D}(\Omega)}$ is given by $\|f\|_{\mathcal{D}(\Omega)}^2 := \|f\|_{H^2(\Omega)}^2 + \mathcal{D}(f)$ for $f \in \mathcal{D}(\Omega)$. Note that this definition is not unique; another common choice is to take $\|f\|^2 = |f(c)|^2 + \mathcal{D}(f)$ for some fixed $c \in \Omega$.
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
The Dirichlet space is not an algebra, but the space $\mathcal{D}(\Omega) \cap H^\infty(\Omega)$ is a Banach algebra, with respect to the norm $\|f\|_{\mathcal{D}(\Omega)\cap H^\infty(\Omega)} := \|f\|_{H^\infty(\Omega)} + \mathcal{D}(f)^{1/2}$ for $f \in \mathcal{D}(\Omega)\cap H^\infty(\Omega)$. We usually have $\Omega = \mathbb{D}$ (the unit disk of the complex plane $\mathbb{C}$); in that case $\mathcal{D}(\mathbb{D}) := \mathcal{D}$, and if $f(z) = \sum_{n \geq 0} a_n z^n$ for $f \in \mathcal{D}$, then $\mathcal{D}(f) = \sum_{n \geq 1} n |a_n|^2,$ and $\|f\|_{\mathcal{D}}^2 = \sum_{n \geq 0} (n+1) |a_n|^2.$
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
Clearly, $\mathcal{D}$ contains all the polynomials and, more generally, all functions $f$ holomorphic on $\mathbb{D}$ such that $f'$ is bounded on $\mathbb{D}$. The reproducing kernel of $\mathcal{D}$ at $w \in \mathbb{C} \setminus \{0\}$ is given by $k_w(z) = \frac{1}{z\overline{w}} \log\left(\frac{1}{1 - z\overline{w}}\right)$ for $z \in \mathbb{C} \setminus \{0\}$.
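On the unit disk these coefficient formulas are straightforward to evaluate; a minimal numerical sketch (function names ours):

```python
import numpy as np

def dirichlet_seminorm_sq(a):
    """D(f) = sum_{n>=1} n |a_n|^2 for f(z) = sum_{n>=0} a_n z^n on the unit disk."""
    a = np.asarray(a, dtype=complex)
    n = np.arange(len(a))
    return float(np.sum(n * np.abs(a) ** 2))

def dirichlet_norm_sq(a):
    """||f||_D^2 = sum_{n>=0} (n+1) |a_n|^2 = ||f||_{H^2}^2 + D(f)."""
    a = np.asarray(a, dtype=complex)
    n = np.arange(len(a))
    return float(np.sum((n + 1) * np.abs(a) ** 2))

# f(z) = z^3: D(f) = 3 and ||f||_D^2 = 4, consistent with the direct
# computation (1/pi) * integral of |3 z^2|^2 over the disk = 3.
print(dirichlet_seminorm_sq([0, 0, 0, 1]), dirichlet_norm_sq([0, 0, 0, 1]))
```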
|
https://en.wikipedia.org/wiki/Dirichlet_space
|
In mathematics, the Dirichlet–Jordan test gives sufficient conditions for a real-valued, periodic function f to be equal to the sum of its Fourier series at a point of continuity. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). It is one of many conditions for the convergence of Fourier series. The original test was established by Peter Gustav Lejeune Dirichlet in 1829, for piecewise monotone functions. It was extended in the late 19th century by Camille Jordan to functions of bounded variation (any function of bounded variation is the difference of two increasing functions).
|
https://en.wikipedia.org/wiki/Dirichlet_conditions
|
In mathematics, the Dixmier mapping describes the space Prim(U(g)) of primitive ideals of the universal enveloping algebra U(g) of a finite-dimensional solvable Lie algebra g over an algebraically closed field of characteristic 0 in terms of coadjoint orbits. More precisely, it is a homeomorphism from the space of orbits g*/G of the dual g* of g (with the Zariski topology) under the action of the adjoint group G to Prim(U(g)) (with the Jacobson topology). The Dixmier map is closely related to the orbit method, which relates the irreducible representations of a nilpotent Lie group to its coadjoint orbits. Dixmier (1963) introduced the Dixmier map for nilpotent Lie algebras and then in (Dixmier 1966) extended it to solvable ones. Dixmier (1996, chapter 6) describes the Dixmier mapping in detail.
|
https://en.wikipedia.org/wiki/Dixmier_mapping
|
In mathematics, the Dixmier trace, introduced by Jacques Dixmier (1966), is a non-normal trace on a space of linear operators on a Hilbert space larger than the space of trace class operators. Dixmier traces are examples of singular traces. Some applications of Dixmier traces to noncommutative geometry are described in (Connes 1994).
|
https://en.wikipedia.org/wiki/Dixmier_trace
|
In mathematics, the Dixon elliptic functions sm and cm are two elliptic functions (doubly periodic meromorphic functions on the complex plane) that map from each regular hexagon in a hexagonal tiling to the whole complex plane. Because these functions satisfy the identity $\operatorname{cm}^3 z + \operatorname{sm}^3 z = 1$, as real functions they parametrize the cubic Fermat curve $x^3 + y^3 = 1$, just as the trigonometric functions sine and cosine parametrize the unit circle $x^2 + y^2 = 1$. They were named sm and cm by Alfred Dixon in 1890, by analogy to the trigonometric functions sine and cosine and the Jacobi elliptic functions sn and cn; Göran Dillner described them earlier, in 1873.
|
https://en.wikipedia.org/wiki/Dixon_elliptic_functions
|
In mathematics, the Dottie number is a constant that is the unique real root of the equation $\cos x = x$, where the argument of $\cos$ is in radians. The decimal expansion of the Dottie number is $0.739085\ldots$. Since $\cos(x) - x$ is strictly decreasing and changes sign, it crosses zero at exactly one point; this implies that the equation $\cos(x) = x$ has only one real solution. It is the single real-valued fixed point of the cosine function and is a nontrivial example of a universal attracting fixed point.
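Because $|\frac{d}{dx}\cos x| = |\sin x| < 1$ near the root, the fixed-point iteration $x \leftarrow \cos x$ converges to the Dottie number from any real starting point; a minimal sketch:

```python
import math

# Iterate x <- cos x; the error contracts by a factor |sin x| ~ 0.674
# per step near the fixed point, so a few hundred steps is far more than enough.
x = 1.0
for _ in range(200):
    x = math.cos(x)

print(round(x, 6))  # -> 0.739085
assert abs(math.cos(x) - x) < 1e-12
```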
|
https://en.wikipedia.org/wiki/Dottie_number
|
In mathematics, double extension set theory (DEST) is an axiomatic set theory proposed by Andrzej Kisielewicz consisting of two separate membership relations on the universe of sets, denoted here by $\in$ and $\varepsilon$, and a set of axioms relating the two. The intention behind defining the two membership relations is to avoid the usual paradoxes of set theory, without substantially weakening the axiom of unrestricted comprehension. Intuitively, in DEST, comprehension is used to define the elements of a set under one membership relation using formulas that involve only the other membership relation. Let $\phi(x)$ be a first-order formula with free variable $x$ in the language of DEST not involving the membership relation $\varepsilon$.
|
https://en.wikipedia.org/wiki/Double_extension_set_theory
|
Then, the axioms of DEST posit a set $A = \{x \mid \phi(x)\}$ such that $x \mathrel{\varepsilon} A \iff \phi(x)$. For instance, $x \notin x$ is a formula involving only $\in$, and thus DEST posits the Russell set $R = \{x \mid x \notin x\}$, where $x \mathrel{\varepsilon} R \iff x \notin x$. Observe that for $x = R$, we obtain $R \mathrel{\varepsilon} R \iff R \notin R$.
|
https://en.wikipedia.org/wiki/Double_extension_set_theory
|
Since the membership relations are different, we thus avoid Russell's paradox. The focus in DEST is on regular sets, which are sets whose extensions under the two membership relations coincide, i.e., sets $A$ for which it holds that $\forall x.$
|
https://en.wikipedia.org/wiki/Double_extension_set_theory
|
$x \in A \iff x \mathrel{\varepsilon} A$. The preceding discussion suggests that the Russell set $R = \{x \mid x \notin x\}$ cannot be regular, as otherwise it leads to Russell's paradox.
|
https://en.wikipedia.org/wiki/Double_extension_set_theory
|
In mathematics, the Drazin inverse, named after Michael P. Drazin, is a kind of generalized inverse of a matrix. Let A be a square matrix. The index of A is the least nonnegative integer k such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$.
|
https://en.wikipedia.org/wiki/Drazin_inverse
|
The Drazin inverse of A is the unique matrix $A^{\text{D}}$ that satisfies $A^{k+1} A^{\text{D}} = A^k, \quad A^{\text{D}} A A^{\text{D}} = A^{\text{D}}, \quad A A^{\text{D}} = A^{\text{D}} A.$ It is not a generalized inverse in the classical sense, since $A A^{\text{D}} A \neq A$ in general.
|
https://en.wikipedia.org/wiki/Drazin_inverse
|
If A is invertible with inverse $A^{-1}$, then $A^{\text{D}} = A^{-1}$. If A is a block diagonal matrix $A = \begin{bmatrix} B & 0 \\ 0 & N \end{bmatrix}$ where $B$ is invertible with inverse $B^{-1}$ and $N$ is a nilpotent matrix, then $A^{\text{D}} = \begin{bmatrix} B^{-1} & 0 \\ 0 & 0 \end{bmatrix}.$ Drazin inversion is invariant under conjugation: if $A^{\text{D}}$ is the Drazin inverse of $A$, then $P A^{\text{D}} P^{-1}$ is the Drazin inverse of $P A P^{-1}$.
|
https://en.wikipedia.org/wiki/Drazin_inverse
|
The Drazin inverse of a matrix of index 0 or 1 is called the group inverse or {1,2,5}-inverse and denoted $A^{\#}$. The group inverse can be defined, equivalently, by the properties $AA^{\#}A = A$, $A^{\#}AA^{\#} = A^{\#}$, and $AA^{\#} = A^{\#}A$. A projection matrix P, defined as a matrix such that $P^2 = P$, has index 1 (or 0) and has Drazin inverse $P^{\text{D}} = P$. If A is a nilpotent matrix (for example a shift matrix), then $A^{\text{D}} = 0.$
|
https://en.wikipedia.org/wiki/Drazin_inverse
|
The hyper-power sequence is $A_{i+1} := A_i + A_i\left(I - A A_i\right);$ for convergence notice that $A_{i+j} = A_i \sum_{k=0}^{2^j - 1} \left(I - A A_i\right)^k.$ For $A_0 := \alpha A$, or any regular $A_0$ with $A_0 A = A A_0$ chosen such that $\left\|A_0 - A_0 A A_0\right\| < \left\|A_0\right\|$, the sequence tends to its Drazin inverse: $A_i \rightarrow A^{\text{D}}.$
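The index definition and the hyper-power iteration can be sketched numerically (function name ours; for an invertible A the index is 0 and the iteration converges to $A^{-1} = A^{\text{D}}$):

```python
import numpy as np

def matrix_index(A, tol=1e-10):
    """Least k >= 0 with rank(A^(k+1)) == rank(A^k)."""
    k, Ak = 0, np.eye(A.shape[0])
    while np.linalg.matrix_rank(Ak @ A, tol=tol) != np.linalg.matrix_rank(Ak, tol=tol):
        Ak, k = Ak @ A, k + 1
    return k

# A (non-identity) projection has index 1; a 2x2 shift matrix is nilpotent of index 2.
P = np.array([[1.0, 0.0], [0.0, 0.0]])
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert matrix_index(P) == 1 and matrix_index(N) == 2

# Hyper-power iteration A_{i+1} = A_i + A_i (I - A A_i), started at A_0 = alpha*A:
# for this invertible A it converges to A^{-1}, which equals A^D.
A = np.diag([1.0, 2.0])
X = 0.1 * A
for _ in range(60):
    X = X + X @ (np.eye(2) - A @ X)
assert np.allclose(X, np.linalg.inv(A))
```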
|
https://en.wikipedia.org/wiki/Drazin_inverse
|
In mathematics, the Drinfeld upper half plane is a rigid analytic space analogous to the usual upper half plane for function fields, introduced by Drinfeld (1976). It is defined to be P1(C)\P1(F∞), where F is a function field of a curve over a finite field, F∞ its completion at ∞, and C the completion of the algebraic closure of F∞. The analogy with the usual upper half plane arises from the fact that the global function field F is analogous to the rational numbers Q; then F∞ plays the role of the real numbers R, and the algebraic closure of F∞ plays the role of the complex numbers C (which are already complete). Finally, P1(C) is the Riemann sphere, so P1(C)\P1(R) is the upper half plane together with the lower half plane.
|
https://en.wikipedia.org/wiki/Drinfeld_upper_half_plane
|
In mathematics, the Duflo isomorphism is an isomorphism between the center of the universal enveloping algebra of a finite-dimensional Lie algebra and the invariants of its symmetric algebra. It was introduced by Michel Duflo (1977) and later generalized to arbitrary finite-dimensional Lie algebras by Kontsevich. The Poincaré–Birkhoff–Witt theorem gives, for any Lie algebra $\mathfrak{g}$, a vector space isomorphism from the polynomial algebra $S(\mathfrak{g})$ to the universal enveloping algebra $U(\mathfrak{g})$. This map is not an algebra homomorphism.
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
It is equivariant with respect to the natural representation of $\mathfrak{g}$ on these spaces, so it restricts to a vector space isomorphism $F \colon S(\mathfrak{g})^{\mathfrak{g}} \to U(\mathfrak{g})^{\mathfrak{g}}$, where the superscript indicates the subspace annihilated by the action of $\mathfrak{g}$. Both $S(\mathfrak{g})^{\mathfrak{g}}$ and $U(\mathfrak{g})^{\mathfrak{g}}$ are commutative subalgebras (indeed, $U(\mathfrak{g})^{\mathfrak{g}}$ is the center of $U(\mathfrak{g})$), but $F$ is still not an algebra homomorphism. However, Duflo proved that in some cases we can compose $F$ with a map $G \colon S(\mathfrak{g})^{\mathfrak{g}} \to S(\mathfrak{g})^{\mathfrak{g}}$ to get an algebra isomorphism $F \circ G \colon S(\mathfrak{g})^{\mathfrak{g}} \to U(\mathfrak{g})^{\mathfrak{g}}.$
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
Later, using the Kontsevich formality theorem, Kontsevich showed that this works for all finite-dimensional Lie algebras.
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
Following Calaque and Rossi, the map $G$ can be defined as follows. The adjoint action of $\mathfrak{g}$ is the map $\mathfrak{g} \to \mathrm{End}(\mathfrak{g})$ sending $x \in \mathfrak{g}$ to the operation $\mathrm{ad}_x = [x, \cdot\,]$ on $\mathfrak{g}$. We can treat this map as an element of $\mathfrak{g}^{\ast} \otimes \mathrm{End}(\mathfrak{g})$ or, for that matter, as an element of the larger space $S(\mathfrak{g}^{\ast}) \otimes \mathrm{End}(\mathfrak{g})$, since $\mathfrak{g}^{\ast} \subset S(\mathfrak{g}^{\ast})$.
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
Call this element $\mathrm{ad} \in S(\mathfrak{g}^{\ast}) \otimes \mathrm{End}(\mathfrak{g})$. Both $S(\mathfrak{g}^{\ast})$ and $\mathrm{End}(\mathfrak{g})$ are algebras, so their tensor product is as well. Thus, we can take powers of $\mathrm{ad}$, say $\mathrm{ad}^k \in S(\mathfrak{g}^{\ast}) \otimes \mathrm{End}(\mathfrak{g}).$
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
As a result, the algebra $S(\mathfrak{g}^{\ast})$ acts as differential operators on $S(\mathfrak{g})$, and this extends to an action of its completion on $S(\mathfrak{g})$. We can thus define a linear map $G \colon S(\mathfrak{g}) \to S(\mathfrak{g})$ by $G(\psi) = \tilde{J}^{1/2}\psi$, and since the whole construction was invariant, $G$ restricts to the desired linear map $G \colon S(\mathfrak{g})^{\mathfrak{g}} \to S(\mathfrak{g})^{\mathfrak{g}}.$
|
https://en.wikipedia.org/wiki/Duflo_isomorphism
|
In mathematics, the Dushnik–Miller theorem is a result in order theory stating that every infinite linear order has a non-identity order embedding into itself. It is named for Ben Dushnik and E. W. Miller, who published this theorem for countable linear orders in 1940. More strongly, they showed that in the countable case there exists an order embedding into a proper subset of the given order; however, they provided examples showing that this strengthening does not always hold for uncountable orders. In reverse mathematics, the Dushnik–Miller theorem for countable linear orders has the same strength as the arithmetical comprehension axiom (ACA0), one of the "big five" subsystems of second-order arithmetic. This result is closely related to the fact that (as Louise Hay and Joseph Rosenstein proved) there exist computable linear orders with no computable non-identity self-embedding.
|
https://en.wikipedia.org/wiki/Dushnik–Miller_theorem
|
In mathematics, the Dwork unit root zeta function, named after Bernard Dwork, is the L-function attached to the p-adic Galois representation arising from the p-adic étale cohomology of an algebraic variety defined over a global function field of characteristic p. The Dwork conjecture (1973) states that his unit root zeta function is p-adic meromorphic everywhere. This conjecture was proved by Wan (2000).
|
https://en.wikipedia.org/wiki/Dwork_conjecture_on_unit_root_zeta_functions
|
In mathematics, the Dynkin index $I(\lambda)$ of a finite-dimensional highest-weight representation of a compact simple Lie algebra $\mathfrak{g}$ with highest weight $\lambda$ is defined as the constant of proportionality between the trace form of the representation $V_\lambda$ and the trace form of a 'defining representation' $V_0$, which is most often taken to be the fundamental representation if the Lie algebra under consideration is a matrix Lie algebra. The notation $\text{Tr}_V$ denotes the trace form on the representation $\rho \colon \mathfrak{g} \to \text{End}(V)$. By Schur's lemma, since the trace forms are all invariant forms, they are related by constants, so the index is well-defined.
|
https://en.wikipedia.org/wiki/Dynkin_index
|
Since the trace forms are bilinear forms, we can take traces to obtain $I(\lambda) = \frac{\dim V_\lambda}{2 \dim \mathfrak{g}} (\lambda, \lambda + 2\rho),$ where the Weyl vector $\rho = \frac{1}{2}\sum_{\alpha \in \Delta^+} \alpha$ is equal to half of the sum of all the positive roots of $\mathfrak{g}$. The expression $(\lambda, \lambda + 2\rho)$ is the value of the quadratic Casimir in the representation $V_\lambda$. The index $I(\lambda)$ is always a positive integer. In the particular case where $\lambda$ is the highest root, so that $V_\lambda$ is the adjoint representation, the Dynkin index $I(\lambda)$ is equal to the dual Coxeter number.
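For $\mathfrak{sl}_2$ the closed form can be evaluated exactly; the sketch below (function name and normalization conventions ours: $(\alpha,\alpha)=2$, highest weight $\lambda = n\omega$, $\rho = \omega$, $(\omega,\omega)=1/2$) recovers the dual Coxeter number 2 for the adjoint representation:

```python
from fractions import Fraction

def sl2_dynkin_index(n):
    """I(lambda) = dim(V_lambda)/(2 dim g) * (lambda, lambda + 2 rho) for sl2,
    with highest weight lambda = n*omega and the normalization (alpha, alpha) = 2."""
    dim_V, dim_g = n + 1, 3
    casimir = Fraction(n * (n + 2), 2)   # (lambda, lambda + 2 rho) = n^2/2 + n
    return Fraction(dim_V, 2 * dim_g) * casimir

# Adjoint representation (n = 2, the highest root): index = dual Coxeter number of sl2.
print(sl2_dynkin_index(2))  # -> 2
```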
|
https://en.wikipedia.org/wiki/Dynkin_index
|
In mathematics, the Dyson conjecture (Freeman Dyson 1962) is a conjecture about the constant term of certain Laurent polynomials, proved independently in 1962 by Wilson and Gunson. Andrews generalized it to the q-Dyson conjecture, proved by Zeilberger and Bressoud and sometimes called the Zeilberger–Bressoud theorem. Macdonald generalized it further to more general root systems with the Macdonald constant term conjecture, proved by Cherednik.
|
https://en.wikipedia.org/wiki/Dyson_conjecture
|
In mathematics, the E8 lattice is a special lattice in R8. It can be characterized as the unique positive-definite, even, unimodular lattice of rank 8. The name derives from the fact that it is the root lattice of the E8 root system. The norm of the E8 lattice (divided by 2) is a positive definite even unimodular quadratic form in 8 variables, and conversely such a quadratic form can be used to construct a positive-definite, even, unimodular lattice of rank 8. The existence of such a form was first shown by H. J. S. Smith in 1867, and the first explicit construction of this quadratic form was given by Korkin and Zolotarev in 1873. The E8 lattice is also called the Gosset lattice after Thorold Gosset who was one of the first to study the geometry of the lattice itself around 1900.
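The defining properties (rank 8, positive definite, even, unimodular) can be checked directly on a Gram matrix of the root lattice, here the E8 Cartan matrix; a small sketch, with a node labeling of our own choosing:

```python
import numpy as np

# Gram matrix = E8 Cartan matrix: 2 on the diagonal, -1 between adjacent
# nodes of the E8 Dynkin diagram (a chain 0-2-3-4-5-6-7 with node 1 attached to node 3).
edges = [(0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (1, 3)]
G = 2 * np.eye(8)
for i, j in edges:
    G[i, j] = G[j, i] = -1

assert round(np.linalg.det(G)) == 1             # unimodular
assert np.all(np.linalg.eigvalsh(G) > 0)        # positive definite
assert all(G[i, i] % 2 == 0 for i in range(8))  # even diagonal => x^T G x is always even
```

The last check suffices for evenness because $x^{\mathsf{T}} G x = \sum_i G_{ii} x_i^2 + 2\sum_{i<j} G_{ij} x_i x_j$, and the off-diagonal terms contribute in pairs.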
|
https://en.wikipedia.org/wiki/E8_lattice
|
In mathematics, the E8 manifold is the unique compact, simply connected topological 4-manifold with intersection form the E8 lattice.
|
https://en.wikipedia.org/wiki/E8_manifold
|
In mathematics, the EHP spectral sequence is a spectral sequence used for inductively calculating the homotopy groups of spheres localized at some prime p. It is described in more detail in Ravenel (2003, chapter 1.5) and Mahowald (2001). It is related to the EHP long exact sequence of Whitehead (1953); the name "EHP" comes from the fact that George W. Whitehead named 3 of the maps of his sequence "E" (the first letter of the German word "Einhängung" meaning "suspension"), "H" (for Heinz Hopf, as this map is the second Hopf–James invariant), and "P" (related to Whitehead products). For $p = 2$ the spectral sequence uses some exact sequences associated to the fibration (James 1957) $S^n(2) \rightarrow \Omega S^{n+1}(2) \rightarrow \Omega S^{2n+1}(2)$, where $\Omega$ stands for a loop space and the (2) is localization of a topological space at the prime 2. This gives a spectral sequence with $E_1^{k,n}$ term equal to $\pi_{k+n}(S^{2n-1}(2))$ and converging to $\pi_*^S(2)$ (stable homotopy groups of spheres localized at 2).
|
https://en.wikipedia.org/wiki/EHP_spectral_sequence
|
The spectral sequence has the advantage that the input is previously calculated homotopy groups. It was used by Oda (1977) to calculate the first 31 stable homotopy groups of spheres. For arbitrary primes one uses some fibrations found by Toda (1962): $\widehat{S}^{2n}(p) \rightarrow \Omega S^{2n+1}(p) \rightarrow \Omega S^{2pn+1}(p)$ and $S^{2n-1}(p) \rightarrow \Omega \widehat{S}^{2n}(p) \rightarrow \Omega S^{2pn-1}(p)$, where $\widehat{S}^{2n}$ is the $(2np-1)$-skeleton of the loop space $\Omega S^{2n+1}$. (For $p = 2$, the space $\widehat{S}^{2n}$ is the same as $S^{2n}$, so Toda's fibrations at $p = 2$ are the same as the James fibrations.)
|
https://en.wikipedia.org/wiki/EHP_spectral_sequence
|
In mathematics, the ELSV formula, named after its four authors Torsten Ekedahl, Sergei Lando, Michael Shapiro, Alek Vainshtein, is an equality between a Hurwitz number (counting ramified coverings of the sphere) and an integral over the moduli space of stable curves. Several fundamental results in the intersection theory of moduli spaces of curves can be deduced from the ELSV formula, including the Witten conjecture, the Virasoro constraints, and the $\lambda_g$-conjecture. It is generalized by the Gopakumar–Mariño–Vafa formula.
|
https://en.wikipedia.org/wiki/ELSV_formula
|
In mathematics, the Earle–Hamilton fixed point theorem is a result in geometric function theory giving sufficient conditions for a holomorphic mapping of an open domain in a complex Banach space into itself to have a fixed point. The result was proved in 1968 by Clifford Earle and Richard S. Hamilton by showing that, with respect to the Carathéodory metric on the domain, the holomorphic mapping becomes a contraction mapping to which the Banach fixed-point theorem can be applied.
|
https://en.wikipedia.org/wiki/Earle–Hamilton_fixed-point_theorem
|
In mathematics, the Eckmann–Hilton argument (or Eckmann–Hilton principle or Eckmann–Hilton theorem) is an argument about two unital magma structures on a set where one is a homomorphism for the other. Given this, the structures are the same, and the resulting magma is a commutative monoid. This can then be used to prove the commutativity of the higher homotopy groups. The principle is named after Beno Eckmann and Peter Hilton, who used it in a 1962 paper.
|
https://en.wikipedia.org/wiki/Eckmann–Hilton_argument
|
In mathematics, the Edwards curves are a family of elliptic curves studied by Harold Edwards in 2007. The concept of elliptic curves over finite fields is widely used in elliptic curve cryptography. Applications of Edwards curves to cryptography were developed by Daniel J. Bernstein and Tanja Lange, who pointed out several advantages of the Edwards form in comparison to the better-known Weierstrass form.
|
https://en.wikipedia.org/wiki/Edwards_curves
|
In mathematics, the Ehrenpreis conjecture of Leon Ehrenpreis states that for any K greater than 1, any two closed Riemann surfaces of genus at least 2 have finite-degree covers which are K-quasiconformal: that is, the covers are arbitrarily close in the Teichmüller metric. A proof was announced by Jeremy Kahn and Vladimir Markovic in January 2011, using their proof of the Surface subgroup conjecture and a newly developed "good pants homology" theory. In June 2012, Kahn and Markovic were given the Clay Research Awards for their work on these two problems by the Clay Mathematics Institute at a ceremony at Oxford University.
|
https://en.wikipedia.org/wiki/Ehrenpreis_conjecture
|
In mathematics, the Eisenstein ideal is an ideal in the endomorphism ring of the Jacobian variety of a modular curve, consisting roughly of elements of the Hecke algebra of Hecke operators that annihilate the Eisenstein series. It was introduced by Barry Mazur (1977), in studying the rational points of modular curves. An Eisenstein prime is a prime in the support of the Eisenstein ideal (this has nothing to do with primes in the Eisenstein integers).
|
https://en.wikipedia.org/wiki/Eisenstein_ideal
|
In mathematics, the Eisenstein integers (named after Gotthold Eisenstein), occasionally also known as Eulerian integers (after Leonhard Euler), are the complex numbers of the form $z = a + b\omega,$ where a and b are integers and $\omega = \frac{-1 + i\sqrt{3}}{2} = e^{i 2\pi/3}$ is a primitive (hence non-real) cube root of unity. The Eisenstein integers form a triangular lattice in the complex plane, in contrast with the Gaussian integers, which form a square lattice in the complex plane. The Eisenstein integers are a countably infinite set.
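A quick numerical sketch (helper name ours) of the basic identities: ω is a primitive cube root of unity, and the squared modulus of an Eisenstein integer a + bω is the ordinary integer a² − ab + b²:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)     # omega = e^{2 pi i / 3}
assert abs(w ** 3 - 1) < 1e-12       # cube root of unity
assert abs(1 + w + w ** 2) < 1e-12   # minimal polynomial: omega^2 + omega + 1 = 0

def eisenstein_norm(a, b):
    """Squared modulus of a + b*omega; equals a^2 - a*b + b^2, an ordinary integer."""
    z = a + b * w
    return round(abs(z) ** 2)

print(eisenstein_norm(2, 1))  # -> 3, i.e. 2^2 - 2*1 + 1^2
```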
|
https://en.wikipedia.org/wiki/Eisenstein_integer
|
In mathematics, the Ellis–Numakura lemma states that if S is a non-empty semigroup with a topology such that S is compact and the product is semi-continuous, then S has an idempotent element p (that is, an element with pp = p). The lemma is named after Robert Ellis and Katsui Numakura.
|
https://en.wikipedia.org/wiki/Ellis–Numakura_lemma
|
In mathematics, the Enriques–Kodaira classification is a classification of compact complex surfaces into ten classes. For each of these classes, the surfaces in the class can be parametrized by a moduli space. For most of the classes the moduli spaces are well understood, but for the class of surfaces of general type the moduli spaces seem too complicated to describe explicitly, though some components are known. Max Noether began the systematic study of algebraic surfaces, and Guido Castelnuovo proved important parts of the classification.
|
https://en.wikipedia.org/wiki/Enriques–Kodaira_classification
|
Federigo Enriques (1914, 1949) described the classification of complex projective surfaces. Kunihiko Kodaira (1964, 1966, 1968a, 1968b) later extended the classification to include non-algebraic compact surfaces. The analogous classification of surfaces in positive characteristic was begun by David Mumford (1969) and completed by Enrico Bombieri and David Mumford (1976, 1977); it is similar to the characteristic 0 projective case, except that one also gets singular and supersingular Enriques surfaces in characteristic 2, and quasi-hyperelliptic surfaces in characteristics 2 and 3.
|
https://en.wikipedia.org/wiki/Enriques–Kodaira_classification
|
In mathematics, the Erdős–Ko–Rado theorem limits the number of sets in a family of sets for which every two sets have at least one element in common. Paul Erdős, Chao Ko, and Richard Rado proved the theorem in 1938, but did not publish it until 1961. It is part of the field of combinatorics, and one of the central results of extremal set theory. The theorem applies to families of sets that all have the same size, $r$, and are all subsets of some larger set of size $n$.
|
https://en.wikipedia.org/wiki/Erdős–Ko–Rado_theorem
|
One way to construct a family of sets with these parameters, each two sharing an element, is to choose a single element to belong to all the subsets, and then form all of the subsets that contain the chosen element. The Erdős–Ko–Rado theorem states that when $n$ is large enough for the problem to be nontrivial ($n \geq 2r$), this construction produces the largest possible intersecting families.
|
https://en.wikipedia.org/wiki/Erdős–Ko–Rado_theorem
|
When $n = 2r$ there are other equally large families, but for larger values of $n$ only the families constructed in this way can be largest. The Erdős–Ko–Rado theorem can also be described in terms of hypergraphs or independent sets in Kneser graphs. Several analogous theorems apply to other kinds of mathematical objects than sets, including linear subspaces, permutations, and strings. They again describe the largest possible intersecting families as being formed by choosing an element and forming the family of all objects that contain the chosen element.
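For small parameters the theorem can be confirmed by exhaustive search; the sketch below (function name ours; exponential-time, so only tiny n and r) recovers the Erdős–Ko–Rado bound $\binom{n-1}{r-1}$:

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, r):
    """Size of the largest pairwise-intersecting family of r-subsets of
    {0,...,n-1}, found by brute force over all 2^C(n,r) subfamilies."""
    sets = [frozenset(c) for c in combinations(range(n), r)]
    best = 0
    for mask in range(1 << len(sets)):
        fam = [s for i, s in enumerate(sets) if (mask >> i) & 1]
        if all(a & b for a, b in combinations(fam, 2)):
            best = max(best, len(fam))
    return best

# n = 5, r = 2: the star through one element has C(4, 1) = 4 sets, and no
# intersecting family of 2-subsets of a 5-element set can be larger.
assert max_intersecting_family(5, 2) == comb(4, 1)
```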
|
https://en.wikipedia.org/wiki/Erdős–Ko–Rado_theorem
|
In mathematics, the Erdős–Szekeres theorem asserts that, given r, s, any sequence of distinct real numbers with length at least (r − 1)(s − 1) + 1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s. The proof appeared in the same 1935 paper that mentions the Happy Ending problem. It is a finitary result that makes precise one of the corollaries of Ramsey's theorem. While Ramsey's theorem makes it easy to prove that every infinite sequence of distinct real numbers contains a monotonically increasing infinite subsequence or a monotonically decreasing infinite subsequence, the result proved by Paul Erdős and George Szekeres goes further.
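The bound is sharp and easy to check by brute force; in the sketch below (helper name ours, assuming distinct entries) every permutation of length (3 − 1)(3 − 1) + 1 = 5 contains a monotone subsequence of length 3, while a sequence of length 4 need not:

```python
from itertools import permutations

def longest_monotone(seq, increasing=True):
    """Length of the longest strictly monotone subsequence, O(n^2) DP.
    Assumes the entries of seq are distinct."""
    dp = []
    for i, x in enumerate(seq):
        best = 1
        for j in range(i):
            if (seq[j] < x) == increasing:
                best = max(best, dp[j] + 1)
        dp.append(best)
    return max(dp, default=0)

# r = s = 3: every permutation of length 5 has a monotone subsequence of length 3...
assert all(max(longest_monotone(p, True), longest_monotone(p, False)) >= 3
           for p in permutations(range(5)))
# ...but (2, 1, 4, 3), of length (r-1)(s-1) = 4, has none longer than 2.
bad = [2, 1, 4, 3]
assert max(longest_monotone(bad, True), longest_monotone(bad, False)) == 2
```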
|
https://en.wikipedia.org/wiki/Erdős–Szekeres_theorem
|
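The Erdős–Szekeres bound is small enough to verify exhaustively for tiny r and s. The sketch below (my own illustration, using an O(n²) dynamic program for the longest monotone subsequence) checks every permutation of length (r−1)(s−1)+1 = 5 for r = s = 3, and exhibits a length-4 sequence showing the bound is tight.

```python
from itertools import permutations

def longest_monotone(seq, cmp):
    """Length of the longest subsequence ordered by cmp, via O(n^2) DP."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if cmp(seq[j], seq[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

r = s = 3   # any sequence of length (r-1)*(s-1) + 1 = 5 suffices
for perm in permutations(range(5)):
    inc = longest_monotone(perm, lambda a, b: a < b)
    dec = longest_monotone(perm, lambda a, b: a > b)
    assert inc >= r or dec >= s

# length (r-1)*(s-1) = 4 is not enough: (2, 3, 0, 1) avoids both
tight = (2, 3, 0, 1)
assert longest_monotone(tight, lambda a, b: a < b) < r
assert longest_monotone(tight, lambda a, b: a > b) < s
```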
In mathematics, the Erdős–Turán inequality bounds the distance between a probability measure on the circle and the Lebesgue measure, in terms of Fourier coefficients. It was proved by Paul Erdős and Pál Turán in 1948. Let μ be a probability measure on the unit circle R/Z. The Erdős–Turán inequality states that, for any natural number n, sup A | μ ( A ) − m e s A | ≤ C ( 1 n + ∑ k = 1 n | μ ^ ( k ) | k ) , {\displaystyle \sup _{A}\left|\mu (A)-\mathrm {mes} \,A\right|\leq C\left({\frac {1}{n}}+\sum _{k=1}^{n}{\frac {|{\hat {\mu }}(k)|}{k}}\right),} where the supremum is over all arcs A ⊂ R/Z of the unit circle, mes stands for the Lebesgue measure, μ ^ ( k ) = ∫ exp ( 2 π i k θ ) d μ ( θ ) {\displaystyle {\hat {\mu }}(k)=\int \exp(2\pi ik\theta )\,d\mu (\theta )} are the Fourier coefficients of μ, and C > 0 is a numerical constant.
|
https://en.wikipedia.org/wiki/Erdős–Turán_inequality
|
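The Fourier coefficients in the bound are easy to compute for a concrete discrete measure. In the sketch below (my own illustration), μ is the uniform measure on n equally spaced atoms: every coefficient μ̂(k) with 1 ≤ k ≤ n − 1 vanishes (a sum of n-th roots of unity), so the right-hand side of the inequality at that n reduces to C/n, consistent with the true discrepancy of order 1/n for this measure.

```python
import cmath

def fourier_coeff(atoms, k):
    """mu-hat(k) for the uniform probability measure on the given points of R/Z."""
    return sum(cmath.exp(2j * cmath.pi * k * t) for t in atoms) / len(atoms)

n = 8
atoms = [j / n for j in range(n)]
for k in range(1, n):
    assert abs(fourier_coeff(atoms, k)) < 1e-12   # roots of unity sum to 0
assert abs(fourier_coeff(atoms, n) - 1) < 1e-12   # k = n sees every atom in phase
```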
In mathematics, the Erdős–Ulam problem asks whether the plane contains a dense set of points whose Euclidean distances are all rational numbers. It is named after Paul Erdős and Stanislaw Ulam.
|
https://en.wikipedia.org/wiki/Erdős–Ulam_problem
|
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It was published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen. It is named after the University of Erlangen-Nürnberg, where Klein worked.
|
https://en.wikipedia.org/wiki/Erlangen_program
|
By 1872, non-Euclidean geometries had emerged, but without a way to determine their hierarchy and relationships. Klein's method was fundamentally innovative in three ways. Projective geometry was emphasized as the unifying frame for all other geometries considered by him; in particular, Euclidean geometry is more restrictive than affine geometry, which in turn is more restrictive than projective geometry. Klein proposed that group theory, a branch of mathematics that uses algebraic methods to abstract the idea of symmetry, was the most useful way of organizing geometrical knowledge; at the time it had already been introduced into the theory of equations in the form of Galois theory. Klein made much more explicit the idea that each geometrical language had its own, appropriate concepts: thus, for example, projective geometry rightly talked about conic sections, but not about circles or angles, because those notions were not invariant under projective transformations (something familiar in geometrical perspective). The way the multiple languages of geometry then came back together could be explained by the way subgroups of a symmetry group related to each other. Later, Élie Cartan generalized Klein's homogeneous model spaces to Cartan connections on certain principal bundles, which generalized Riemannian geometry.
|
https://en.wikipedia.org/wiki/Erlangen_program
|
In mathematics, the Ernst equation is an integrable non-linear partial differential equation, named after the American physicist Frederick J. Ernst.
|
https://en.wikipedia.org/wiki/Ernst_equation
|
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use.
|
https://en.wikipedia.org/wiki/Algorithm
|
It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero.
|
https://en.wikipedia.org/wiki/Algorithm
|
To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division.For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero). Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
|
https://en.wikipedia.org/wiki/Algorithm
|
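The subtraction-based procedure described in the entries above is straightforward to sketch in Python (a minimal illustration; the function name is my own). For inputs prime to one another it bottoms out at 1, which is the behaviour attributed to Nicomachus' form of the algorithm.

```python
import math

def gcd_by_subtraction(a, b):
    """Euclid's original method: repeatedly subtract the smaller length
    from the larger until the two are equal. For coprime inputs this
    terminates at 1 (Nicomachus' form of the algorithm)."""
    if a <= 0 or b <= 0:
        raise ValueError("lengths must be positive")
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

assert gcd_by_subtraction(252, 105) == 21
assert gcd_by_subtraction(25, 9) == 1   # numbers prime to one another
assert all(gcd_by_subtraction(a, b) == math.gcd(a, b)
           for a in range(1, 30) for b in range(1, 30))
```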
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC).
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is, the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
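The Bézout coefficients mentioned above can be produced by the standard extended Euclidean algorithm; here is a self-contained Python sketch that recovers the 21 = 5 × 105 + (−2) × 252 example from the text.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = x*a + y*b (Bezout's identity),
    maintaining the coefficients alongside the usual remainder sequence."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(252, 105)
assert g == 21
assert x * 252 + y * 105 == 21
assert (x, y) == (-2, 5)   # i.e. 21 = 5*105 + (-2)*252, as in the text
```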
A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
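The remainder-based version and Lamé's digit bound can both be checked directly. The sketch below (function name my own) counts division steps and verifies the 5-digits bound on consecutive Fibonacci numbers, which are the classical worst case for the algorithm.

```python
def gcd_steps(a, b):
    """Remainder-based Euclidean algorithm, counting division steps."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

assert gcd_steps(252, 105) == (21, 3)   # 252 -> 105 -> 42 -> 21 -> 0

# Lamé (1844): never more than 5 * (decimal digits of the smaller input)
# steps; consecutive Fibonacci numbers are the worst case.
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
for big, small in zip(fib[2:], fib[1:]):
    g, steps = gcd_steps(big, small)
    assert g == 1
    assert steps <= 5 * len(str(small))
```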
Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
|
https://en.wikipedia.org/wiki/Euclid's_algorithm
|
In mathematics, the Euclidean distance between two points in Euclidean space is the length of a line segment between the two points. It can be calculated from the Cartesian coordinates of the points using the Pythagorean theorem, and is therefore occasionally called the Pythagorean distance. These names come from the ancient Greek mathematicians Euclid and Pythagoras, although Euclid did not represent distances as numbers, and the connection from the Pythagorean theorem to distance calculation was not made until the 18th century.
|
https://en.wikipedia.org/wiki/Distance_formula
|
The distance between two objects that are not points is usually defined to be the smallest distance among pairs of points from the two objects. Formulas are known for computing distances between different types of objects, such as the distance from a point to a line. In advanced mathematics, the concept of distance has been generalized to abstract metric spaces, and other distances than Euclidean have been studied. In some applications in statistics and optimization, the square of the Euclidean distance is used instead of the distance itself.
|
https://en.wikipedia.org/wiki/Distance_formula
|
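The Pythagorean computation described above is a one-liner in practice; a minimal Python sketch, using a 3-4-5 right triangle:

```python
import math

p, q = (2, -1), (5, 3)
# Pythagorean theorem: d^2 = (dx)^2 + (dy)^2
d = math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
assert d == 5.0                 # legs of length 3 and 4
assert d == math.dist(p, q)     # stdlib equivalent (Python 3.8+)
```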
In mathematics, the Euler function is given by ϕ ( q ) = ∏ k = 1 ∞ ( 1 − q k ) , | q | < 1. {\displaystyle \phi (q)=\prod _{k=1}^{\infty }(1-q^{k}),\quad |q|<1.} Named after Leonhard Euler, it is a model example of a q-series and provides the prototypical example of a relation between combinatorics and complex analysis.
|
https://en.wikipedia.org/wiki/Euler_function
|
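The combinatorics–analysis relation mentioned above is exemplified by Euler's pentagonal number theorem, φ(q) = Σ_{k∈Z} (−1)^k q^{k(3k−1)/2}. The Python sketch below (my own illustration) expands the truncated product directly and compares it with the sparse pentagonal series, coefficient by coefficient.

```python
def euler_product_coeffs(N):
    """Coefficients of prod_{k=1}^{N} (1 - q^k), up to degree N."""
    c = [0] * (N + 1)
    c[0] = 1
    for k in range(1, N + 1):
        for d in range(N, k - 1, -1):   # multiply in place by (1 - q^k)
            c[d] -= c[d - k]
    return c

def pentagonal_coeffs(N):
    """Same coefficients via the pentagonal number theorem:
       phi(q) = sum_{k in Z} (-1)^k q^{k(3k-1)/2}."""
    c = [0] * (N + 1)
    k = 0
    while True:
        placed = False
        for kk in ((k, -k) if k else (0,)):
            e = kk * (3 * kk - 1) // 2
            if e <= N:
                c[e] += (-1) ** (kk % 2)
                placed = True
        if not placed:
            return c
        k += 1

N = 40
assert euler_product_coeffs(N) == pentagonal_coeffs(N)
```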
In mathematics, the Euler numbers are a sequence En of integers (sequence A122045 in the OEIS) defined by the Taylor series expansion 1 cosh t = 2 e t + e − t = ∑ n = 0 ∞ E n n ! ⋅ t n {\displaystyle {\frac {1}{\cosh t}}={\frac {2}{e^{t}+e^{-t}}}=\sum _{n=0}^{\infty }{\frac {E_{n}}{n!}}\cdot t^{n}}, where cosh ( t ) {\displaystyle \cosh(t)} is the hyperbolic cosine function.
|
https://en.wikipedia.org/wiki/Euler_numbers
|
The Euler numbers are related to a special value of the Euler polynomials, namely: E n = 2 n E n ( 1 2 ) . {\displaystyle E_{n}=2^{n}E_{n}({\tfrac {1}{2}}).} The Euler numbers appear in the Taylor series expansions of the secant and hyperbolic secant functions. The latter is the function in the definition. They also occur in combinatorics, specifically when counting the number of alternating permutations of a set with an even number of elements.
|
https://en.wikipedia.org/wiki/Euler_numbers
|
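The defining series yields a simple recurrence: multiplying sech(t) by cosh(t) and equating coefficients gives Σ_{k=0}^{n} C(2n, 2k) E_{2k} = 0 for n ≥ 1, with E_0 = 1 and all odd-index Euler numbers zero. A Python sketch of this computation (my own, derived from that identity rather than quoted from the entry):

```python
from math import comb

def euler_numbers(m):
    """E_0 .. E_m via the identity cosh(t) * sech(t) = 1, which gives
       sum_{k=0}^{n} C(2n, 2k) * E_{2k} = 0 for n >= 1 (odd E's vanish)."""
    E = [0] * (m + 1)
    E[0] = 1
    for n in range(1, m // 2 + 1):
        E[2 * n] = -sum(comb(2 * n, 2 * k) * E[2 * k] for k in range(n))
    return E

E = euler_numbers(10)
assert E == [1, 0, -1, 0, 5, 0, -61, 0, 1385, 0, -50521]
```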
In mathematics, the Euler sequence is a particular exact sequence of sheaves on n-dimensional projective space over a ring. It shows that the sheaf of relative differentials is stably isomorphic to an ( n + 1 ) {\displaystyle (n+1)} -fold sum of the dual of the Serre twisting sheaf. The Euler sequence generalizes to that of a projective bundle as well as a Grassmann bundle (see the latter article for this generalization).
|
https://en.wikipedia.org/wiki/Euler_sequence
|
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.
|
https://en.wikipedia.org/wiki/Euler-Maclaurin_formula
|
The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.
|
https://en.wikipedia.org/wiki/Euler-Maclaurin_formula
|
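The Faulhaber connection mentioned above can be seen in the smallest case. For f(x) = x² on [0, n], the integral, the endpoint correction (f(0)+f(n))/2, and the single Bernoulli term B₂/2!·(f′(n)−f′(0)) already give the sum exactly, since all higher derivatives of f vanish; the result is Faulhaber's n³/3 + n²/2 + n/6. A Python sketch using exact rationals:

```python
from fractions import Fraction as F

def sum_of_squares_em(n):
    """Euler-Maclaurin for f(x) = x^2 on [0, n]: integral + endpoint
    correction + one Bernoulli term, which is already exact here."""
    integral = F(n, 1) ** 3 / 3          # integral of x^2 from 0 to n
    endpoints = F(n * n, 2)              # (f(0) + f(n)) / 2
    B2 = F(1, 6)
    bernoulli = B2 / 2 * (2 * n)         # B2/2! * (f'(n) - f'(0)) = n/6
    return integral + endpoints + bernoulli

for n in range(1, 50):
    assert sum_of_squares_em(n) == sum(k * k for k in range(n + 1))
```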
In mathematics, the Euler–Poisson–Darboux equation is the partial differential equation u x , y + N ( u x + u y ) x + y = 0. {\displaystyle u_{x,y}+{\frac {N(u_{x}+u_{y})}{x+y}}=0.} This equation is named for Siméon Poisson, Leonhard Euler, and Gaston Darboux. It plays an important role in solving the classical wave equation. This equation is related to u r r + m r u r − u t t = 0 , {\displaystyle u_{rr}+{\frac {m}{r}}u_{r}-u_{tt}=0,} by x = r + t {\displaystyle x=r+t} , y = r − t {\displaystyle y=r-t} , where N = m 2 {\displaystyle N={\frac {m}{2}}} and some sources quote this equation when referring to the Euler–Poisson–Darboux equation.
|
https://en.wikipedia.org/wiki/Euler–Poisson–Darboux_equation
|
In mathematics, the Euler–Tricomi equation is a linear partial differential equation useful in the study of transonic flow. It is named after mathematicians Leonhard Euler and Francesco Giacomo Tricomi. u x x + x u y y = 0.
|
https://en.wikipedia.org/wiki/Tricomi_equation
|
{\displaystyle u_{xx}+xu_{yy}=0.\,} It is elliptic in the half plane x > 0, parabolic at x = 0 and hyperbolic in the half plane x < 0. Its characteristics are x d x 2 + d y 2 = 0 , {\displaystyle x\,dx^{2}+dy^{2}=0,\,} which have the integral y ± 2 3 x 3 / 2 = C , {\displaystyle y\pm {\frac {2}{3}}x^{3/2}=C,} where C is a constant of integration. The characteristics thus comprise two families of semicubical parabolas, with cusps on the line x = 0, the curves lying on the right hand side of the y-axis.
|
https://en.wikipedia.org/wiki/Tricomi_equation
|
In mathematics, the Ext functors are the derived functors of the Hom functor. Along with the Tor functor, Ext is one of the core concepts of homological algebra, in which ideas from algebraic topology are used to define invariants of algebraic structures. The cohomology of groups, Lie algebras, and associative algebras can all be defined in terms of Ext. The name comes from the fact that the first Ext group Ext1 classifies extensions of one module by another.
|
https://en.wikipedia.org/wiki/Ext_functor
|
In the special case of abelian groups, Ext was introduced by Reinhold Baer (1934). It was named by Samuel Eilenberg and Saunders MacLane (1942), and applied to topology (the universal coefficient theorem for cohomology). For modules over any ring, Ext was defined by Henri Cartan and Eilenberg in their 1956 book Homological Algebra.
|
https://en.wikipedia.org/wiki/Ext_functor
|
In mathematics, the F. and M. Riesz theorem is a result of the brothers Frigyes Riesz and Marcel Riesz, on analytic measures. It states that for a measure μ on the circle, any part of μ that is not absolutely continuous with respect to the Lebesgue measure dθ can be detected by means of Fourier coefficients. More precisely, it states that if the Fourier–Stieltjes coefficients of μ {\displaystyle \mu } satisfy μ ^ n = ∫ 0 2 π e − i n θ μ ( d θ ) 2 π = 0 , {\displaystyle {\hat {\mu }}_{n}=\int _{0}^{2\pi }{\rm {e}}^{-in\theta }{\frac {\mu (d\theta )}{2\pi }}=0,\ } for all n < 0 {\displaystyle n<0} , then μ is absolutely continuous with respect to dθ. The original statements are rather different (see Zygmund, Trigonometric Series, VII.8).
|
https://en.wikipedia.org/wiki/F._and_M._Riesz_theorem
|
The formulation here is as in Walter Rudin, Real and Complex Analysis, p. 335. The proof given uses the Poisson kernel and the existence of boundary values for the Hardy space H1. Expansions to this theorem were made by James E. Weatherbee in his 1968 dissertation: Some Extensions Of The F. And M. Riesz Theorem On Absolutely Continuous Measures.
|
https://en.wikipedia.org/wiki/F._and_M._Riesz_theorem
|
In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions (or distributions) on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1872–1943).
|
https://en.wikipedia.org/wiki/Fourier–Bros–Iagolnitzer_transform
|
In mathematics, the FEE method, or fast E-function evaluation method, is the method of fast summation of series of a special form. It was constructed in 1990 by Ekaterina Karatsuba and is so-named because it makes fast computations of the Siegel E-functions possible, in particular of e x {\displaystyle e^{x}} . A class of functions, which are "similar to the exponential function," was given the name "E-functions" by Carl Ludwig Siegel. Among these functions are such special functions as the hypergeometric function, cylinder and spherical functions, and so on.
|
https://en.wikipedia.org/wiki/FEE_method
|
Using the FEE, it is possible to prove the following theorem: Theorem: Let y = f ( x ) {\displaystyle y=f(x)} be an elementary transcendental function, that is, the exponential function, or a trigonometric function, or an elementary algebraic function, or their superposition, or their inverse, or a superposition of the inverses. Then s f ( n ) = O ( M ( n ) log 2 n ) . {\displaystyle s_{f}(n)=O(M(n)\log ^{2}n).\,} Here s f ( n ) {\displaystyle s_{f}(n)} is the bit complexity of computing the function f ( x ) {\displaystyle f(x)} with accuracy up to n {\displaystyle n} digits, and M ( n ) {\displaystyle M(n)} is the complexity of multiplication of two n {\displaystyle n} -digit integers.
|
https://en.wikipedia.org/wiki/FEE_method
|
The algorithms based on the method FEE include the algorithms for fast calculation of any elementary transcendental function for any value of the argument, the classical constants e, π , {\displaystyle \pi ,} the Euler constant γ , {\displaystyle \gamma ,} the Catalan and the Apéry constants, such higher transcendental functions as the Euler gamma function and its derivatives, the hypergeometric, spherical, cylinder (including the Bessel) functions and some other functions for algebraic values of the argument and parameters, the Riemann zeta function for integer values of the argument and the Hurwitz zeta function for integer argument and algebraic values of the parameter, and also such special integrals as the integral of probability, the Fresnel integrals, the integral exponential function, the trigonometric integrals, and some other integrals for algebraic values of the argument with the complexity bound which is close to the optimal one, namely s f ( n ) = O ( M ( n ) log 2 n ) . {\displaystyle s_{f}(n)=O(M(n)\log ^{2}n).\,} At present, only the FEE makes it possible to calculate fast the values of the functions from the class of higher transcendental functions, certain special integrals of mathematical physics and such classical constants as Euler's, Catalan's and Apéry's constants. An additional advantage of the method FEE is the possibility of parallelizing the algorithms based on the FEE.
|
https://en.wikipedia.org/wiki/FEE_method
|
In mathematics, the Faber polynomials Pm of a Laurent series f ( z ) = z − 1 + a 0 + a 1 z + ⋯ {\displaystyle \displaystyle f(z)=z^{-1}+a_{0}+a_{1}z+\cdots } are the polynomials such that P m ( f ) − z − m {\displaystyle \displaystyle P_{m}(f)-z^{-m}} vanishes at z=0. They were introduced by Faber (1903, 1919) and studied by Grunsky (1939) and Schur (1945).
|
https://en.wikipedia.org/wiki/Faber_polynomials
|
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of f ^ ( z ) = ∏ m = 1 ∞ ( cos π z 2 m ) m {\displaystyle {\hat {f}}(z)=\prod _{m=1}^{\infty }\left(\cos {\frac {\pi z}{2^{m}}}\right)^{m}} by Børge Jessen and Aurel Wintner (1935). The Fabius function is defined on the unit interval, and is given by the cumulative distribution function of ∑ n = 1 ∞ 2 − n ξ n , {\displaystyle \sum _{n=1}^{\infty }2^{-n}\xi _{n},} where the ξn are independent uniformly distributed random variables on the unit interval. This function satisfies the initial condition f ( 0 ) = 0 {\displaystyle f(0)=0} , the symmetry condition f ( 1 − x ) = 1 − f ( x ) {\displaystyle f(1-x)=1-f(x)} for 0 ≤ x ≤ 1 , {\displaystyle 0\leq x\leq 1,} and the functional differential equation f ′ ( x ) = 2 f ( 2 x ) {\displaystyle f'(x)=2f(2x)} for 0 ≤ x ≤ 1 / 2.
|
https://en.wikipedia.org/wiki/Fabius_function
|
{\displaystyle 0\leq x\leq 1/2.} It follows that f ( x ) {\displaystyle f(x)} is monotone increasing for 0 ≤ x ≤ 1 , {\displaystyle 0\leq x\leq 1,} with f ( 1 / 2 ) = 1 / 2 {\displaystyle f(1/2)=1/2} and f ( 1 ) = 1. {\displaystyle f(1)=1.} There is a unique extension of f to the real numbers that satisfies the same differential equation for all x. This extension can be defined by f (x) = 0 for x ≤ 0, f (x + 1) = 1 − f (x) for 0 ≤ x ≤ 1, and f (x + 2^r) = −f (x) for 0 ≤ x ≤ 2^r with r a positive integer. The sequence of intervals within which this function is positive or negative follows the same pattern as the Thue–Morse sequence.
|
https://en.wikipedia.org/wiki/Fabius_function
|
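The probabilistic definition above suggests a direct Monte Carlo check (my own sketch; the sample size, truncation depth, and tolerances are arbitrary choices). Sampling X = Σ 2⁻ⁿ ξₙ and forming the empirical CDF should reproduce f(1/2) = 1/2 and the symmetry f(1−x) = 1 − f(x), since X is symmetric about 1/2.

```python
import random

def fabius_samples(num, depth=30, seed=1):
    """Monte Carlo samples of X = sum_{n=1}^{depth} 2^-n * xi_n, whose CDF
    on [0, 1] is (up to truncation at depth) the Fabius function."""
    rng = random.Random(seed)
    return [sum(rng.random() / 2 ** n for n in range(1, depth + 1))
            for _ in range(num)]

xs = fabius_samples(100_000)

def empirical_f(x):
    """Empirical CDF of the samples, approximating f(x) on [0, 1]."""
    return sum(v <= x for v in xs) / len(xs)

assert abs(empirical_f(0.5) - 0.5) < 0.01            # f(1/2) = 1/2
for x in (0.1, 0.25, 0.4):                           # f(1-x) = 1 - f(x)
    assert abs(empirical_f(1 - x) - (1 - empirical_f(x))) < 0.01
```

The tolerances are several standard deviations wide for this sample size, so the checks are statistically safe despite the randomness.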
In mathematics, the Fabry gap theorem is a result about the analytic continuation of complex power series whose non-zero terms are of orders that have a certain "gap" between them. Such a power series is "badly behaved" in the sense that it cannot be extended to be an analytic function anywhere on the boundary of its disc of convergence. The theorem may be deduced from the first main theorem of Turán's method.
|
https://en.wikipedia.org/wiki/Fabry_gap_theorem
|
In mathematics, the Farey sequence of order n is the sequence of completely reduced fractions, either between 0 and 1, or without this restriction, which when in lowest terms have denominators less than or equal to n, arranged in order of increasing size. With the restricted definition, each Farey sequence starts with the value 0, denoted by the fraction 0/1, and ends with the value 1, denoted by the fraction 1/1 (although some authors omit these terms). A Farey sequence is sometimes called a Farey series, which is not strictly correct, because the terms are not summed.
|
https://en.wikipedia.org/wiki/Farey_graph
|
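Farey sequences can be generated term by term with the classical next-term recurrence; the Python sketch below (my own) produces F₅ and checks the well-known unimodularity of neighbouring terms.

```python
from fractions import Fraction

def farey(n):
    """Farey sequence of order n via the next-term recurrence:
    given neighbours a/b < c/d, the term after c/d is (k*c - a)/(k*d - b)
    with k = (n + b) // d."""
    a, b, c, d = 0, 1, 1, n
    seq = [Fraction(0, 1)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

F5 = farey(5)
assert [str(f) for f in F5] == ['0', '1/5', '1/4', '1/3', '2/5',
                                '1/2', '3/5', '2/3', '3/4', '4/5', '1']
# neighbours p/q < r/s in a Farey sequence satisfy r*q - p*s = 1
assert all(y.numerator * x.denominator - x.numerator * y.denominator == 1
           for x, y in zip(F5, F5[1:]))
```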
In mathematics, the Farrell–Jones conjecture, named after F. Thomas Farrell and Lowell E. Jones, states that certain assembly maps are isomorphisms. These maps are given as certain homomorphisms. The motivation is the interest in the target of the assembly maps; this may be, for instance, the algebraic K-theory of a group ring K n ( R G ) {\displaystyle K_{n}(RG)} or the L-theory of a group ring L n ( R G ) {\displaystyle L_{n}(RG)} , where G is some group. The sources of the assembly maps are an equivariant homology theory evaluated on the classifying space of G with respect to the family of virtually cyclic subgroups of G. So assuming the Farrell–Jones conjecture is true, it is possible to restrict computations to virtually cyclic subgroups to get information on complicated objects such as K n ( R G ) {\displaystyle K_{n}(RG)} or L n ( R G ) {\displaystyle L_{n}(RG)} . The Baum–Connes conjecture formulates a similar statement for the topological K-theory of reduced group C ∗ {\displaystyle C^{*}} -algebras K n t o p ( C r ∗ ( G ) ) {\displaystyle K_{n}^{top}(C_{r}^{*}(G))} .
|
https://en.wikipedia.org/wiki/Farrell-Jones_conjecture
|
In mathematics, the Farrell–Markushevich theorem, proved independently by O. J. Farrell (1899–1981) and A. I. Markushevich (1908–1979) in 1934, is a result concerning the approximation in mean square of holomorphic functions on a bounded open set in the complex plane by complex polynomials. It states that complex polynomials form a dense subspace of the Bergman space of a domain bounded by a simple closed Jordan curve. The Gram–Schmidt process can be used to construct an orthonormal basis in the Bergman space and hence an explicit form of the Bergman kernel, which in turn yields an explicit Riemann mapping function for the domain.
|
https://en.wikipedia.org/wiki/Farrell–Markushevich_theorem
|
In mathematics, the Fatou conjecture, named after Pierre Fatou, states that a quadratic family of maps from the complex plane to itself is hyperbolic for an open dense set of parameters.
|
https://en.wikipedia.org/wiki/Fatou_conjecture
|
In mathematics, the Fatou–Lebesgue theorem establishes a chain of inequalities relating the integrals (in the sense of Lebesgue) of the limit inferior and the limit superior of a sequence of functions to the limit inferior and the limit superior of integrals of these functions. The theorem is named after Pierre Fatou and Henri Léon Lebesgue. If the sequence of functions converges pointwise, the inequalities turn into equalities and the theorem reduces to Lebesgue's dominated convergence theorem.
|
https://en.wikipedia.org/wiki/Fatou–Lebesgue_theorem
|
In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as K r = 4 π ∑ k = 0 ∞ [ ( − 1 ) k 2 k + 1 ] r + 1 . {\displaystyle K_{r}={\frac {4}{\pi }}\sum \limits _{k=0}^{\infty }\left[{\frac {(-1)^{k}}{2k+1}}\right]^{r+1}.} This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
|
https://en.wikipedia.org/wiki/Favard_constant
|
In mathematics, the Faxén integral (also named Faxén function) is the following integral Fi ( α , β ; x ) = ∫ 0 ∞ exp ( − t + x t α ) t β − 1 d t , ( 0 ≤ Re ( α ) < 1 , Re ( β ) > 0 ) . {\displaystyle \operatorname {Fi} (\alpha ,\beta ;x)=\int _{0}^{\infty }\exp(-t+xt^{\alpha })t^{\beta -1}\mathrm {d} t,\qquad (0\leq \operatorname {Re} (\alpha )<1,\;\operatorname {Re} (\beta )>0).} The integral is named after the Swedish physicist Olov Hilding Faxén, who published it in 1921 in his PhD thesis.
|
https://en.wikipedia.org/wiki/Faxén_integral
|
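The Faxén integral can be evaluated numerically for concrete parameters. The sketch below takes α = 1/2, β = 1; the closed form it checks against is my own derivation (substitute t = u² and complete the square, giving Fi(1/2, 1; x) = 1 + (√π x/2) e^{x²/4} (1 + erf(x/2))), not a formula quoted from the entry, and the quadrature settings are arbitrary choices.

```python
import math

def faxen_numeric(x, upper=40.0, steps=200_000):
    """Trapezoidal estimate of Fi(1/2, 1; x) = integral of
    exp(-t + x*sqrt(t)) over t in [0, inf); the exp(-t) decay
    makes a finite upper limit adequate."""
    h = upper / steps
    f = lambda t: math.exp(-t + x * math.sqrt(t))
    total = 0.5 * (f(0.0) + f(upper))
    total += sum(f(i * h) for i in range(1, steps))
    return total * h

def faxen_closed(x):
    """Closed form for alpha = 1/2, beta = 1 (my derivation, see lead-in)."""
    return 1 + math.sqrt(math.pi) * x / 2 * math.exp(x * x / 4) \
             * (1 + math.erf(x / 2))

assert abs(faxen_numeric(0.0) - 1.0) < 1e-5   # Fi(1/2, 1; 0) = Gamma(1) = 1
assert abs(faxen_numeric(1.0) - faxen_closed(1.0)) < 1e-4
```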
In mathematics, the Federer–Morse theorem, introduced by Federer and Morse (1943), states that if f is a surjective continuous map from a compact metric space X to a compact metric space Y, then there is a Borel subset Z of X such that f restricted to Z is a bijection from Z to Y. Moreover, the inverse of that restriction is a Borel section of f—it is a Borel isomorphism.
|
https://en.wikipedia.org/wiki/Federer–Morse_theorem
|
In mathematics, the Feferman–Schütte ordinal Γ0 is a large countable ordinal. It is the proof-theoretic ordinal of several mathematical theories, such as arithmetical transfinite recursion. It is named after Solomon Feferman and Kurt Schütte, the former of whom suggested the name Γ0. There is no standard notation for ordinals beyond the Feferman–Schütte ordinal. There are several ways of representing the Feferman–Schütte ordinal, some of which use ordinal collapsing functions: ψ ( Ω Ω ) {\displaystyle \psi (\Omega ^{\Omega })} , θ ( Ω ) {\displaystyle \theta (\Omega )} , φ Ω ( 0 ) {\displaystyle \varphi _{\Omega }(0)} , or φ ( 1 , 0 , 0 ) {\displaystyle \varphi (1,0,0)} .
|
https://en.wikipedia.org/wiki/Feferman–Schütte_ordinal
|
In mathematics, the Feit–Thompson conjecture is a conjecture in number theory, suggested by Walter Feit and John G. Thompson (1962). The conjecture states that there are no distinct prime numbers p and q such that p q − 1 p − 1 {\displaystyle {\frac {p^{q}-1}{p-1}}} divides q p − 1 q − 1 {\displaystyle {\frac {q^{p}-1}{q-1}}} . If the conjecture were true, it would greatly simplify the final chapter of the proof (Feit & Thompson 1963) of the Feit–Thompson theorem that every finite group of odd order is solvable. A stronger conjecture that the two numbers are always coprime was disproved by Stephens (1971) with the counterexample p = 17 and q = 3313 with common factor 2pq + 1 = 112643. It is known that the conjecture is true for q = 3 (Le 2012). Informal probability arguments suggest that the "expected" number of counterexamples to the Feit–Thompson conjecture is very close to 0, suggesting that the Feit–Thompson conjecture is likely to be true.
|
https://en.wikipedia.org/wiki/Feit–Thompson_conjecture
|
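Stephens' counterexample to the coprimality conjecture is small enough to verify directly with Python's arbitrary-precision integers (17^3313 has about 4000 digits):

```python
p, q = 17, 3313
A = (p ** q - 1) // (p - 1)   # (p^q - 1)/(p - 1): the divisions are exact
B = (q ** p - 1) // (q - 1)
d = 2 * p * q + 1
assert d == 112643
assert A % d == 0 and B % d == 0   # shared factor: A and B are not coprime
```

This only refutes coprimality; since A here is vastly larger than B, the divisibility forbidden by the Feit–Thompson conjecture itself fails trivially for this pair.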
In mathematics, the Feit–Thompson theorem, or odd order theorem, states that every finite group of odd order is solvable. It was proved by Walter Feit and John Griggs Thompson (1962, 1963).
|
https://en.wikipedia.org/wiki/Odd_order_theorem
|
In mathematics, the Fejér kernel is a summability kernel used to express the effect of Cesàro summation on Fourier series. It is a non-negative kernel, giving rise to an approximate identity. It is named after the Hungarian mathematician Lipót Fejér (1880–1959).
|
https://en.wikipedia.org/wiki/Fejér_kernel
|
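The two defining properties mentioned above — non-negativity and unit mean, the marks of an approximate identity — are easy to confirm numerically. The sketch below (my own, using the standard closed form Fₙ(x) = (1/(n+1)) (sin((n+1)x/2)/sin(x/2))² and its expansion as a weighted trigonometric sum) checks both:

```python
import math

def fejer_closed(n, x):
    """Closed form F_n(x) = (1/(n+1)) * (sin((n+1)x/2) / sin(x/2))^2."""
    if abs(math.sin(x / 2)) < 1e-12:
        return float(n + 1)                 # limit value at x = 0 (mod 2*pi)
    s = math.sin((n + 1) * x / 2) / math.sin(x / 2)
    return s * s / (n + 1)

def fejer_sum(n, x):
    """Same kernel as sum_{|j|<=n} (1 - |j|/(n+1)) e^{ijx}, real by symmetry."""
    return sum((1 - abs(j) / (n + 1)) * math.cos(j * x)
               for j in range(-n, n + 1))

n = 6
pts = [k * math.pi / 100 for k in range(-99, 100)]
assert all(abs(fejer_closed(n, x) - fejer_sum(n, x)) < 1e-9 for x in pts)
assert all(fejer_closed(n, x) >= 0 for x in pts)        # non-negative kernel
# mean value 1 over the circle (midpoint rule, exact for trig polynomials)
h = 2 * math.pi / 2000
mean = sum(fejer_closed(n, -math.pi + (i + 0.5) * h)
           for i in range(2000)) * h / (2 * math.pi)
assert abs(mean - 1) < 1e-9
```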