= Eigenoperator =

In mathematics, an eigenoperator $A$ of a matrix $H$ is a linear operator such that

$[H, A] = \lambda A,$

where the scalar $\lambda$ is the corresponding eigenvalue.
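A minimal numerical illustration (the matrices below are chosen for this sketch, not taken from the text): for a diagonal $H$ and a superdiagonal shift matrix $A$, the commutator $[H, A]$ is a scalar multiple of $A$, so $A$ is an eigenoperator of $H$.

```python
import numpy as np

# H: a diagonal matrix with equally spaced entries; A: a shift matrix
# with ones on the superdiagonal (A[i, i+1] = 1).
H = np.diag([0.0, 1.0, 2.0, 3.0])
A = np.diag(np.ones(3), k=1)

commutator = H @ A - A @ H

# [H, A] = (d_i - d_j) A[i, j] entrywise, and d_i - d_{i+1} = -1 here,
# so [H, A] = -A: A is an eigenoperator with eigenvalue lambda = -1.
assert np.allclose(commutator, -A)
```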
= Eigenplane =

In mathematics, an eigenplane is a two-dimensional invariant subspace in a given vector space. By analogy with the term eigenvector, which denotes a vector that a linear operator maps to a scalar multiple of itself, the term eigenplane describes a two-dimensional plane (a 2-plane) such that the operator maps every vector in the 2-plane to another vector in the same 2-plane. A particular case that has been studied is that in which the linear operator is an isometry $M$ of the hypersphere $S^3$ represented within four-dimensional Euclidean space:

$M\,[\mathbf{s}\ \mathbf{t}] = [\mathbf{s}\ \mathbf{t}]\,\Lambda_\theta,$

where $\mathbf{s}$ and $\mathbf{t}$ are four-dimensional column vectors and $\Lambda_\theta$ is a two-dimensional eigenrotation within the eigenplane. In the usual eigenvector problem there is freedom to multiply an eigenvector by an arbitrary non-zero scalar; here there is freedom to multiply by an arbitrary non-zero rotation. This case is potentially physically interesting if the shape of the universe is a multiply connected 3-manifold, since finding the angles of the eigenrotations of a candidate isometry for topological lensing is a way to falsify such hypotheses.

== See also ==
Bivector
Plane of rotation

== External links ==
possible relevance of eigenplanes in cosmology
GNU GPL software for calculating eigenplanes
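The defining relation $M\,[\mathbf{s}\ \mathbf{t}] = [\mathbf{s}\ \mathbf{t}]\,\Lambda_\theta$ can be illustrated numerically; a sketch with an isometry of $\mathbb{R}^4$ built from two plane rotations (the angles are arbitrary choices for the example):

```python
import numpy as np

theta, phi = 0.7, 1.3  # arbitrary rotation angles for this sketch

def rot2(a):
    """2x2 rotation matrix by angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# M rotates the (e1, e2)-plane by theta and the (e3, e4)-plane by phi;
# it is an isometry of R^4 with two eigenplanes.
M = np.zeros((4, 4))
M[:2, :2] = rot2(theta)
M[2:, 2:] = rot2(phi)

# s and t span the first eigenplane.
s = np.array([1.0, 0.0, 0.0, 0.0])
t = np.array([0.0, 1.0, 0.0, 0.0])
P = np.column_stack([s, t])

# M [s t] = [s t] Lambda_theta: every image stays inside span{s, t},
# and the action within the plane is the eigenrotation by theta.
assert np.allclose(M @ P, P @ rot2(theta))
```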
= Eigenvalue perturbation =

In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system $Ax = \lambda x$ that is perturbed from one with known eigenvectors and eigenvalues $A_0 x_0 = \lambda_0 x_0$. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues $x_{0i}, \lambda_{0i}$, $i = 1, \dots, n$, are to changes in the system. This type of analysis was popularized by Lord Rayleigh in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article focuses on the case of the perturbation of a simple eigenvalue (see multiplicity of eigenvalues).

== Why generalized eigenvalues? ==

In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh–Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The finite element method is a widespread particular case. In classical mechanics, generalized eigenvalues may crop up when we look for vibrations of systems with multiple degrees of freedom close to equilibrium: the kinetic energy provides the mass matrix $M$, and the potential strain energy provides the rigidity matrix $K$.
For further details, see the first section of the article of Weinstein (1941, in French). With both methods, we obtain a system of differential equations (a matrix differential equation)

$M\ddot{x} + B\dot{x} + Kx = 0,$

with the mass matrix $M$, the damping matrix $B$ and the rigidity matrix $K$. If we neglect the damping effect, we set $B = 0$ and look for a solution of the form $x = e^{i\omega t}u$; we obtain that $u$ and $\omega^2$ are solutions of the generalized eigenvalue problem

$-\omega^2 M u + K u = 0.$

== Setting of perturbation for a generalized eigenvalue problem ==

Suppose we have solutions to the generalized eigenvalue problem

$\mathbf{K}_0\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{M}_0\,\mathbf{x}_{0i}, \qquad (0)$

where $\mathbf{K}_0$ and $\mathbf{M}_0$ are matrices. That is, we know the eigenvalues $\lambda_{0i}$ and eigenvectors $\mathbf{x}_{0i}$ for $i = 1, \dots, N$. It is also required that the eigenvalues be distinct. Now suppose we want to change the matrices by a small amount; that is, we want to find the eigenvalues and eigenvectors of

$\mathbf{K}\,\mathbf{x}_i = \lambda_i\,\mathbf{M}\,\mathbf{x}_i, \qquad (1)$

where

$\mathbf{K} = \mathbf{K}_0 + \delta\mathbf{K}, \qquad \mathbf{M} = \mathbf{M}_0 + \delta\mathbf{M},$

with the perturbations $\delta\mathbf{K}$ and $\delta\mathbf{M}$ much smaller than $\mathbf{K}$ and $\mathbf{M}$ respectively.
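The unperturbed problem (0) can be solved numerically; a minimal sketch for a two-degree-of-freedom spring–mass system (the matrix values are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import eigh

# Mass and stiffness matrices of a small spring-mass chain (illustrative values).
M0 = np.diag([2.0, 1.0])
K0 = np.array([[3.0, -1.0],
               [-1.0, 1.0]])

# eigh solves K0 x = lambda M0 x for symmetric K0 and positive definite M0;
# the returned eigenvectors are M0-orthonormal: x_j^T M0 x_i = delta_ij.
lam0, X0 = eigh(K0, M0)

assert np.allclose(X0.T @ M0 @ X0, np.eye(2))
for i in range(2):
    assert np.allclose(K0 @ X0[:, i], lam0[i] * M0 @ X0[:, i])
```

The squared natural frequencies $\omega^2$ of the undamped system are exactly these generalized eigenvalues.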
Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:

$\lambda_i = \lambda_{0i} + \delta\lambda_i, \qquad \mathbf{x}_i = \mathbf{x}_{0i} + \delta\mathbf{x}_i.$

== Steps ==

We assume that the matrices are symmetric and positive definite, and that the eigenvectors have been scaled so that

$\mathbf{x}_{0j}^\top \mathbf{M}_0\,\mathbf{x}_{0i} = \delta_{ij}, \qquad \mathbf{x}_j^\top \mathbf{M}\,\mathbf{x}_i = \delta_{ij}, \qquad (2)$

where $\delta_{ij}$ is the Kronecker delta. Now we want to solve the equation

$\mathbf{K}\,\mathbf{x}_i - \lambda_i\,\mathbf{M}\,\mathbf{x}_i = 0.$

In this article we restrict the study to first order perturbation.

=== First order expansion of the equation ===

Substituting into (1), we get

$(\mathbf{K}_0 + \delta\mathbf{K})(\mathbf{x}_{0i} + \delta\mathbf{x}_i) = (\lambda_{0i} + \delta\lambda_i)(\mathbf{M}_0 + \delta\mathbf{M})(\mathbf{x}_{0i} + \delta\mathbf{x}_i),$

which expands to

$\mathbf{K}_0\mathbf{x}_{0i} + \delta\mathbf{K}\,\mathbf{x}_{0i} + \mathbf{K}_0\,\delta\mathbf{x}_i + \delta\mathbf{K}\,\delta\mathbf{x}_i = \lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i} + \lambda_{0i}\mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i} + \lambda_{0i}\,\delta\mathbf{M}\,\delta\mathbf{x}_i + \delta\lambda_i\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\,\delta\mathbf{x}_i + \delta\lambda_i\,\delta\mathbf{M}\,\delta\mathbf{x}_i.$

Canceling $\mathbf{K}_0\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0\mathbf{x}_{0i}$ from (0) leaves

$\delta\mathbf{K}\,\mathbf{x}_{0i} + \mathbf{K}_0\,\delta\mathbf{x}_i + \delta\mathbf{K}\,\delta\mathbf{x}_i = \lambda_{0i}\mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i} + \lambda_{0i}\,\delta\mathbf{M}\,\delta\mathbf{x}_i + \delta\lambda_i\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\,\delta\mathbf{x}_i + \delta\lambda_i\,\delta\mathbf{M}\,\delta\mathbf{x}_i.$

Removing the higher-order terms, this simplifies to

$\mathbf{K}_0\,\delta\mathbf{x}_i + \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i}. \qquad (3)$

In other words, $\delta\lambda_i$ no longer denotes the exact variation of the eigenvalue but its first order approximation. As the matrix $\mathbf{M}$ is symmetric, the unperturbed eigenvectors are $\mathbf{M}$-orthogonal, and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct

$\delta\mathbf{x}_i = \sum_{j=1}^N \varepsilon_{ij}\,\mathbf{x}_{0j} \qquad (4)$

with $\varepsilon_{ij} = \mathbf{x}_{0j}^\top \mathbf{M}\,\delta\mathbf{x}_i$, where the $\varepsilon_{ij}$ are small constants that are to be determined. In the same way, substituting into (2) and removing higher-order terms, we get

$\delta\mathbf{x}_j^\top \mathbf{M}_0\,\mathbf{x}_{0i} + \mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i + \mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} = 0. \qquad (5)$

The derivation can proceed along two forks.
==== First fork: get first eigenvalue perturbation ====

===== Eigenvalue perturbation =====

We start with (3),

$\mathbf{K}_0\,\delta\mathbf{x}_i + \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i};$

we left-multiply by $\mathbf{x}_{0i}^\top$ and use (2) as well as its first order variation (5); we get

$\mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i,$

or

$\delta\lambda_i = \mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} - \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}.$

We notice that this is the first order perturbation of the generalized Rayleigh quotient with fixed $x_{0i}$:

$R(K, M; x_{0i}) = x_{0i}^\top K x_{0i} / x_{0i}^\top M x_{0i}, \quad \text{with } x_{0i}^\top M x_{0i} = 1.$

Moreover, for $M = I$ the formula $\delta\lambda_i = x_{0i}^\top \delta K\,x_{0i}$ should be compared with the Bauer–Fike theorem, which provides a bound on eigenvalue perturbation.

===== Eigenvector perturbation =====

We left-multiply (3) by $\mathbf{x}_{0j}^\top$ for $j \neq i$ and get

$\mathbf{x}_{0j}^\top \mathbf{K}_0\,\delta\mathbf{x}_i + \mathbf{x}_{0j}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0j}^\top \mathbf{M}_0\,\mathbf{x}_{0i}.$

We use $\mathbf{x}_{0j}^\top \mathbf{K}_0 = \lambda_{0j}\,\mathbf{x}_{0j}^\top \mathbf{M}_0$ and $\mathbf{x}_{0j}^\top \mathbf{M}_0\,\mathbf{x}_{0i} = 0$ for $j \neq i$, obtaining

$\lambda_{0j}\,\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i + \mathbf{x}_{0j}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i + \lambda_{0i}\,\mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i},$

or

$(\lambda_{0j} - \lambda_{0i})\,\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i + \mathbf{x}_{0j}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}.$

As the eigenvalues are assumed to be simple, for $j \neq i$

$\varepsilon_{ij} = \mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i = \frac{-\mathbf{x}_{0j}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}}{\lambda_{0j} - \lambda_{0i}}, \qquad i = 1, \dots, N;\ j = 1, \dots, N;\ j \neq i.$

Moreover (5), the first order variation of (2), yields

$2\varepsilon_{ii} = 2\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\delta\mathbf{x}_i = -\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}.$
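The first order eigenvalue formula $\delta\lambda_i = \mathbf{x}_{0i}^\top\delta\mathbf{K}\,\mathbf{x}_{0i} - \lambda_{0i}\,\mathbf{x}_{0i}^\top\delta\mathbf{M}\,\mathbf{x}_{0i}$ can be checked numerically; a sketch with small random symmetric perturbations (all matrices are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4

def sym(A):
    return (A + A.T) / 2

# Unperturbed symmetric positive definite K0 and M0 (random, illustrative).
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
K0 = A @ A.T + np.eye(n)
M0 = B @ B.T + np.eye(n)

lam0, X0 = eigh(K0, M0)  # eigenvectors come back M0-orthonormal

eps = 1e-6
dK = eps * sym(rng.standard_normal((n, n)))
dM = eps * sym(rng.standard_normal((n, n)))

lam, _ = eigh(K0 + dK, M0 + dM)  # exact eigenvalues of the perturbed problem

for i in range(n):
    x0 = X0[:, i]
    dlam_i = x0 @ dK @ x0 - lam0[i] * (x0 @ dM @ x0)  # first order prediction
    assert abs(lam[i] - (lam0[i] + dlam_i)) < 1e-8    # residual is O(eps^2)
```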
We have obtained all the components of $\delta\mathbf{x}_i$.

==== Second fork: straightforward manipulations ====

Substituting (4) into (3) and rearranging gives

$\mathbf{K}_0 \sum_{j=1}^N \varepsilon_{ij}\,\mathbf{x}_{0j} + \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij}\,\mathbf{x}_{0j} + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i};$

applying $\mathbf{K}_0$ to the sum,

$\sum_{j=1}^N \varepsilon_{ij}\,\mathbf{K}_0\mathbf{x}_{0j} + \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij}\,\mathbf{x}_{0j} + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i};$

and using Eq. (0),

$\sum_{j=1}^N \varepsilon_{ij}\,\lambda_{0j}\,\mathbf{M}_0\mathbf{x}_{0j} + \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\mathbf{M}_0 \sum_{j=1}^N \varepsilon_{ij}\,\mathbf{x}_{0j} + \lambda_{0i}\,\delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{M}_0\mathbf{x}_{0i}.$

Because the eigenvectors are $\mathbf{M}_0$-orthogonal when $\mathbf{M}_0$ is positive definite, we can remove the summations by left-multiplying by $\mathbf{x}_{0i}^\top$:

$\mathbf{x}_{0i}^\top \varepsilon_{ii}\,\lambda_{0i}\,\mathbf{M}_0\mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\varepsilon_{ii}\,\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\mathbf{x}_{0i}.$

By use of equation (0) again,

$\mathbf{x}_{0i}^\top \mathbf{K}_0\,\varepsilon_{ii}\,\mathbf{x}_{0i} + \mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\varepsilon_{ii}\,\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\mathbf{x}_{0i}. \qquad (6)$

The two terms containing $\varepsilon_{ii}$ are equal because left-multiplying (0) by $\mathbf{x}_{0i}^\top$ gives

$\mathbf{x}_{0i}^\top \mathbf{K}_0\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\mathbf{x}_{0i}.$

Canceling those terms in (6) leaves

$\mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} = \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i} + \delta\lambda_i\,\mathbf{x}_{0i}^\top \mathbf{M}_0\,\mathbf{x}_{0i}.$

Rearranging gives

$\delta\lambda_i = \frac{\mathbf{x}_{0i}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}}{\mathbf{x}_{0i}^\top \mathbf{M}_0\,\mathbf{x}_{0i}}.$

But by (2) this denominator is equal to 1; thus

$\delta\lambda_i = \mathbf{x}_{0i}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}.$

Then, as $\lambda_i \neq \lambda_k$ for $i \neq k$ (the eigenvalues are assumed simple), left-multiplying the expanded equation above by $\mathbf{x}_{0k}^\top$ gives

$\varepsilon_{ik} = \frac{\mathbf{x}_{0k}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0k}}, \qquad i \neq k,$

or, renaming the indices,

$\varepsilon_{ij} = \frac{\mathbf{x}_{0j}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0j}}, \qquad i \neq j.$

To find $\varepsilon_{ii}$, use the fact that $\mathbf{x}_i^\top \mathbf{M}\,\mathbf{x}_i = 1$ implies

$\varepsilon_{ii} = -\tfrac{1}{2}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}.$
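The eigenvector coefficients $\varepsilon_{ij}$ can likewise be checked numerically; a sketch for the standard case $\mathbf{M} = I$, $\delta\mathbf{M} = 0$ with an illustrative random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
K0 = A @ A.T + np.eye(n)       # symmetric positive definite, M0 = I

lam0, X0 = np.linalg.eigh(K0)

eps = 1e-6
dK = rng.standard_normal((n, n))
dK = eps * (dK + dK.T)         # small symmetric perturbation

lam, X = np.linalg.eigh(K0 + dK)

i = 1
# First order eigenvector correction: delta x_i = sum_{j != i} eps_ij x_0j,
# with eps_ij = x_0j^T dK x_0i / (lam_0i - lam_0j)  (M0 = I, dM = 0).
dx = sum((X0[:, j] @ dK @ X0[:, i]) / (lam0[i] - lam0[j]) * X0[:, j]
         for j in range(n) if j != i)
predicted = X0[:, i] + dx

exact = X[:, i]
if exact @ predicted < 0:      # eigenvectors are defined up to sign
    exact = -exact
assert np.linalg.norm(exact - predicted) < 1e-7   # agreement to O(eps^2)
```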
== Summary of the first order perturbation result ==

In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct,

$\lambda_i = \lambda_{0i} + \mathbf{x}_{0i}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i},$

$\mathbf{x}_i = \mathbf{x}_{0i}\left(1 - \tfrac{1}{2}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}\right) + \sum_{j=1,\,j\neq i}^N \frac{\mathbf{x}_{0j}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j},$

for infinitesimal $\delta\mathbf{K}$ and $\delta\mathbf{M}$ (the higher order terms in (3) being neglected). So far we have not proved that these higher order terms may be neglected. This point can be established using the implicit function theorem; in the next section we summarize the use of this theorem to obtain a first order expansion.

== Theoretical derivation ==

=== Perturbation of an implicit function ===

In the next paragraph we use the implicit function theorem: for a continuously differentiable function $f : \mathbb{R}^{n+m} \to \mathbb{R}^m$, $f : (x, y) \mapsto f(x, y)$, with an invertible Jacobian matrix $J_{f,y}(x_0, y_0)$ at a point $(x_0, y_0)$ solving $f(x_0, y_0) = 0$, we get solutions of $f(x, y) = 0$ with $x$ close to $x_0$ in the form $y = g(x)$, where $g$ is a continuously differentiable function; moreover, the Jacobian matrix of $g$ is provided by the linear system

$J_{f,y}(x, g(x))\,J_{g,x}(x) + J_{f,x}(x, g(x)) = 0. \qquad (7)$

As soon as the hypotheses of the theorem are satisfied, the Jacobian matrix of $g$ may be computed with a first order expansion of $f(x_0 + \delta x, y_0 + \delta y) = 0$: we get $J_{f,x}(x, g(x))\,\delta x + J_{f,y}(x, g(x))\,\delta y = 0$; as $\delta y = J_{g,x}(x)\,\delta x$, this is equivalent to equation (7).

=== Eigenvalue perturbation: a theoretical basis ===
We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation. We introduce $\tilde{f} : \mathbb{R}^{2n^2} \times \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$, with

$\tilde{f}(K, M, \lambda, x) = \binom{f(K, M, \lambda, x)}{f_{n+1}(M, x)},$

where $f(K, M, \lambda, x) = Kx - \lambda Mx$ and $f_{n+1}(M, x) = x^\top Mx - 1$. In order to use the implicit function theorem, we study the invertibility of the Jacobian $J_{\tilde{f};\lambda,x}(K, M; \lambda_{0i}, x_{0i})$, with

$J_{\tilde{f};\lambda,x}(K, M; \lambda_i, x_i)(\delta\lambda, \delta x) = \binom{-Mx_i}{0}\,\delta\lambda + \binom{K - \lambda M}{2x_i^\top M}\,\delta x.$

Indeed, the solution of

$J_{\tilde{f};\lambda,x}(K, M; \lambda_{0i}, x_{0i})(\delta\lambda_i, \delta x_i) = \binom{y}{y_{n+1}}$

may be derived with computations similar to the derivation of the expansion:

$\delta\lambda_i = -x_{0i}^\top y, \quad \text{and} \quad (\lambda_{0i} - \lambda_{0j})\,x_{0j}^\top M\,\delta x_i = x_{0j}^\top y, \quad j = 1, \dots, n,\ j \neq i,$

i.e. $x_{0j}^\top M\,\delta x_i = x_{0j}^\top y / (\lambda_{0i} - \lambda_{0j})$, and $2\,x_{0i}^\top M\,\delta x_i = y_{n+1}$. When $\lambda_i$ is a simple eigenvalue, the eigenvectors $x_{0j}$, $j = 1, \dots, n$, form an orthonormal basis, so for any right-hand side we have obtained one solution; therefore the Jacobian is invertible. The implicit function theorem then provides a continuously differentiable function $(K, M) \mapsto (\lambda_i(K, M), x_i(K, M))$, hence the expansion with little-o notation:

$\lambda_i = \lambda_{0i} + \delta\lambda_i + o(\|\delta K\| + \|\delta M\|),$

$x_i = x_{0i} + \delta x_i + o(\|\delta K\| + \|\delta M\|),$

with

$\delta\lambda_i = \mathbf{x}_{0i}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} - \lambda_{0i}\,\mathbf{x}_{0i}^\top \delta\mathbf{M}\,\mathbf{x}_{0i},$

$\delta\mathbf{x}_i = \sum_{j \neq i}\left(\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i\right)\mathbf{x}_{0j}, \quad \text{with}$

$\mathbf{x}_{0j}^\top \mathbf{M}_0\,\delta\mathbf{x}_i = \frac{-\mathbf{x}_{0j}^\top \delta\mathbf{K}\,\mathbf{x}_{0i} + \lambda_{0i}\,\mathbf{x}_{0j}^\top \delta\mathbf{M}\,\mathbf{x}_{0i}}{\lambda_{0j} - \lambda_{0i}}, \quad i = 1, \dots, n;\ j = 1, \dots, n;\ j \neq i.$
This is the first order expansion of the perturbed eigenvalues and eigenvectors.

== Results of sensitivity analysis with respect to the entries of the matrices ==

=== The results ===

This means it is possible to efficiently do a sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, so changing $K_{k\ell}$ also changes $K_{\ell k}$; hence the $(2 - \delta_{k\ell})$ term.)

$\frac{\partial \lambda_i}{\partial K_{(k\ell)}} = \frac{\partial}{\partial K_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}_{0i}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}\right) = x_{0i(k)}\,x_{0i(\ell)}\left(2 - \delta_{k\ell}\right),$

$\frac{\partial \lambda_i}{\partial M_{(k\ell)}} = \frac{\partial}{\partial M_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}_{0i}^\top\left(\delta\mathbf{K} - \lambda_{0i}\,\delta\mathbf{M}\right)\mathbf{x}_{0i}\right) = -\lambda_i\,x_{0i(k)}\,x_{0i(\ell)}\left(2 - \delta_{k\ell}\right).$

Similarly,

$\frac{\partial \mathbf{x}_i}{\partial K_{(k\ell)}} = \sum_{j=1,\,j\neq i}^N \frac{x_{0j(k)}\,x_{0i(\ell)}\left(2 - \delta_{k\ell}\right)}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j},$

$\frac{\partial \mathbf{x}_i}{\partial M_{(k\ell)}} = -\mathbf{x}_{0i}\,\frac{x_{0i(k)}\,x_{0i(\ell)}}{2}\left(2 - \delta_{k\ell}\right) - \sum_{j=1,\,j\neq i}^N \frac{\lambda_{0i}\,x_{0j(k)}\,x_{0i(\ell)}}{\lambda_{0i} - \lambda_{0j}}\,\mathbf{x}_{0j}\left(2 - \delta_{k\ell}\right).$
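The entrywise sensitivity formula for $\partial\lambda_i/\partial K_{(k\ell)}$ can be verified against a finite difference; a sketch for the standard case $M = I$ with an illustrative random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
K0 = A @ A.T + np.eye(n)       # symmetric positive definite, M = I

lam0, X0 = np.linalg.eigh(K0)

i, k, l = 0, 0, 1              # sensitivity of lambda_0 to the entry K_{01}
h = 1e-6
dK = np.zeros((n, n))
dK[k, l] = dK[l, k] = h        # symmetric change: K_{kl} and K_{lk} together

lam, _ = np.linalg.eigh(K0 + dK)
numeric = (lam[i] - lam0[i]) / h

x = X0[:, i]
analytic = x[k] * x[l] * (2 - (k == l))   # the (2 - delta_kl) factor above

assert abs(numeric - analytic) < 1e-5
```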
=== Eigenvalue sensitivity, a small example ===

A simple case is

$K = \begin{bmatrix} 2 & b \\ b & 0 \end{bmatrix};$

its eigenvalues and eigenvectors can also be computed with online tools (see the introduction to WIMS in Wikipedia) or with SageMath. The smallest eigenvalue is $\lambda = 1 - \sqrt{b^2 + 1}$, and an explicit computation gives

$\frac{\partial \lambda}{\partial b} = \frac{-b}{\sqrt{b^2 + 1}}.$

Moreover, an associated eigenvector is $\tilde{x}_0 = \left[b,\ -\left(\sqrt{b^2+1}+1\right)\right]^\top$; it is not a unit vector, so $x_{01}\,x_{02} = \tilde{x}_{01}\,\tilde{x}_{02} / \|\tilde{x}_0\|^2$. We get

$\|\tilde{x}_0\|^2 = 2\sqrt{b^2+1}\left(\sqrt{b^2+1}+1\right) \quad \text{and} \quad \tilde{x}_{01}\,\tilde{x}_{02} = -b\left(\sqrt{b^2+1}+1\right);$

hence

$x_{01}\,x_{02} = -\frac{b}{2\sqrt{b^2+1}}.$

For this example we have thus checked that $\frac{\partial \lambda}{\partial b} = 2\,x_{01}\,x_{02}$, i.e. $\delta\lambda = 2\,x_{01}\,x_{02}\,\delta b$.
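The identity $\partial\lambda/\partial b = 2\,x_{01}\,x_{02}$ from this example can be confirmed numerically (a small sketch; the value of $b$ is an arbitrary choice):

```python
import numpy as np

b, h = 0.8, 1e-7

def smallest_eig(b):
    return np.linalg.eigvalsh(np.array([[2.0, b], [b, 0.0]]))[0]

# Analytic derivative of the smallest eigenvalue lambda = 1 - sqrt(b^2 + 1).
analytic = -b / np.sqrt(b**2 + 1)

# Central finite difference of the smallest eigenvalue.
numeric = (smallest_eig(b + h) - smallest_eig(b - h)) / (2 * h)
assert abs(numeric - analytic) < 1e-6

# Same value from the sensitivity formula 2 * x01 * x02 with a unit eigenvector.
w, V = np.linalg.eigh(np.array([[2.0, b], [b, 0.0]]))
x = V[:, 0]                          # unit eigenvector of the smallest eigenvalue
assert abs(2 * x[0] * x[1] - analytic) < 1e-12
```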
== Existence of eigenvectors ==

Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of $N$ linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have $N$ linearly independent eigenvectors, though a sufficient condition is that $\mathbf{K}$ and $\mathbf{M}$ be simultaneously diagonalizable.

== The case of repeated eigenvalues ==

A technical report of Rellich on perturbation of eigenvalue problems provides several examples; the elementary ones are in chapter 2. The report may be downloaded from archive.org. We present an example in which the eigenvectors have a nasty behavior.

=== Example 1 ===

Consider the matrices

$B(\epsilon) = \begin{bmatrix} \cos(2/\epsilon) & \sin(2/\epsilon) \\ \sin(2/\epsilon) & -\cos(2/\epsilon) \end{bmatrix}, \qquad A(\epsilon) = I - e^{-1/\epsilon^2} B(\epsilon); \qquad A(0) = I.$

For $\epsilon \neq 0$, the matrix $A(\epsilon)$ has eigenvectors

$\Phi^1 = [\cos(1/\epsilon),\ \sin(1/\epsilon)]^\top, \qquad \Phi^2 = [\sin(1/\epsilon),\ -\cos(1/\epsilon)]^\top,$

belonging to the eigenvalues $\lambda_1 = 1 - e^{-1/\epsilon^2}$ and $\lambda_2 = 1 + e^{-1/\epsilon^2}$.
Since $\lambda_1 \neq \lambda_2$ for $\epsilon \neq 0$, if $u^j(\epsilon)$, $j = 1, 2$, are any normalized eigenvectors belonging to $\lambda_j(\epsilon)$ respectively, then $u^j(\epsilon) = e^{i\alpha_j(\epsilon)}\,\Phi^j(\epsilon)$, where the $\alpha_j$, $j = 1, 2$, are real for $\epsilon \neq 0$. It is obviously impossible to define $\alpha_1(\epsilon)$, say, in such a way that $u^1(\epsilon)$ tends to a limit as $\epsilon \to 0$, because the absolute value of its first component, $|\cos(1/\epsilon)|$, has no limit as $\epsilon \to 0$. Note that in this example $A_{jk}(\epsilon)$ is not only continuous but also has continuous derivatives of all orders. Rellich draws the following important consequence: "Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator $A(\epsilon)$ does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue."

=== Example 2 ===

This example is less nasty than the previous one. Suppose $K_0$ is the $2 \times 2$ identity matrix; then any vector is an eigenvector, and $u_0 = [1, 1]^\top/\sqrt{2}$ is one possible eigenvector.
But if one makes a small perturbation, such as [ K ] = [ K 0 ] + [ ϵ 0 0 0 ] {\displaystyle [K]=[K_{0}]+{\begin{bmatrix}\epsilon &0\\0&0\end{bmatrix}}} then the eigenvectors are v 1 = [ 1 , 0 ] T {\displaystyle v_{1}=[1,0]^{T}} and v 2 = [ 0 , 1 ] T {\displaystyle v_{2}=[0,1]^{T}} ; they are constant with respect to ϵ {\displaystyle \epsilon } , so that ‖ u 0 − v 1 ‖ {\displaystyle \|u_{0}-v_{1}\|} is constant and does not go to zero. == See also == Perturbation theory (quantum mechanics) Bauer–Fike theorem == References == == Further reading == === Books === Ren-Cang Li (2014). "Matrix Perturbation Theory". In Hogben, Leslie (ed.). Handbook of Linear Algebra (Second ed.). CRC Press. ISBN 978-1466507289. Rellich, F., & Berkowitz, J. (1969). Perturbation Theory of Eigenvalue Problems. CRC Press. Bhatia, R. (1987). Perturbation Bounds for Matrix Eigenvalues. SIAM. === Report === Rellich, Franz (1954). Perturbation Theory of Eigenvalue Problems. New York: Courant Institute of Mathematical Sciences, New York University. === Journal papers === Simon, B. (1982). Large orders and summability of eigenvalue perturbation theory: a mathematical overview. International Journal of Quantum Chemistry, 21(1), 3-25. Crandall, M. G., & Rabinowitz, P. H. (1973). Bifurcation, perturbation of simple eigenvalues, and linearized stability. Archive for Rational Mechanics and Analysis, 52(2), 161-180. Stewart, G. W. (1973). Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Review, 15(4), 727-764. Löwdin, P. O. (1962). Studies in perturbation theory. IV. Solution of eigenvalue problem by projection operator formalism. Journal of Mathematical Physics, 3(5), 969-982.
|
Wikipedia:Eigenvalues and eigenvectors#0
|
In linear algebra, an eigenvector ( EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v {\displaystyle \mathbf {v} } of a linear transformation T {\displaystyle T} is scaled by a constant factor λ {\displaystyle \lambda } when the linear transformation is applied to it: T v = λ v {\displaystyle T\mathbf {v} =\lambda \mathbf {v} } . The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ {\displaystyle \lambda } (possibly negative). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system. 
== Matrices == For an n × n {\displaystyle n{\times }n} matrix A and a nonzero vector v {\displaystyle \mathbf {v} } of length n {\displaystyle n} , if multiplying A by v {\displaystyle \mathbf {v} } (denoted A v {\displaystyle A\mathbf {v} } ) simply scales v {\displaystyle \mathbf {v} } by a factor λ, where λ is a scalar, then v {\displaystyle \mathbf {v} } is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: A v = λ v {\displaystyle A\mathbf {v} =\lambda \mathbf {v} } . Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices. == Overview == Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation T ( v ) = λ v , {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,} referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. 
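As a concrete numerical check of the defining relation A v = λ v, the sketch below uses an arbitrary diagonal example matrix (chosen purely for illustration, not taken from the article):

```python
import numpy as np

# Illustrative matrix: stretches the x-axis by 2 and reverses the y-axis.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

v1 = np.array([1.0, 0.0])   # eigenvector with eigenvalue 2 (stretched)
v2 = np.array([0.0, 1.0])   # eigenvector with eigenvalue -1 (direction reversed)

assert np.allclose(A @ v1, 2.0 * v1)    # A v = λ v with λ = 2
assert np.allclose(A @ v2, -1.0 * v2)   # A v = λ v with λ = -1

# A generic vector is knocked off its original line, so it is not an eigenvector:
u = np.array([1.0, 1.0])
print(A @ u)                # [2, -1] is not a scalar multiple of [1, 1]
```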
The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d d x {\displaystyle {\tfrac {d}{dx}}} , in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as d d x e λ x = λ e λ x . {\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.} Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication A v = λ v , {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} where the eigenvector v is an n by 1 matrix. 
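The shear example above is easy to verify numerically. A minimal sketch, assuming a horizontal shear with an arbitrary shear factor of 0.5 (the factor is my choice; the article does not specify one):

```python
import numpy as np

k = 0.5                       # assumed shear factor, for illustration only
S = np.array([[1.0, k],
              [0.0, 1.0]])    # horizontal shear mapping

# Vectors along the horizontal axis are unchanged: eigenvectors with eigenvalue 1.
h = np.array([1.0, 0.0])
assert np.allclose(S @ h, 1.0 * h)

# A vector with a vertical component is tilted, so it is not an eigenvector.
w = np.array([0.0, 1.0])
assert not np.allclose(S @ w, w)
print(S @ w)                  # tilted to [0.5, 1]
```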
For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. == History == Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation. Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices. 
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later. At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961. == Eigenvalues and eigenvectors of matrices == Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors x = [ 1 − 3 4 ] and y = [ − 20 60 − 80 ] . 
{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.} These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that x = λ y . {\displaystyle \mathbf {x} =\lambda \mathbf {y} .} In this case, λ = − 1 20 {\displaystyle \lambda =-{\frac {1}{20}}} . Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, A v = w , {\displaystyle A\mathbf {v} =\mathbf {w} ,} or [ A 11 A 12 ⋯ A 1 n A 21 A 22 ⋯ A 2 n ⋮ ⋮ ⋱ ⋮ A n 1 A n 2 ⋯ A n n ] [ v 1 v 2 ⋮ v n ] = [ w 1 w 2 ⋮ w n ] {\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}} where, for each row, w i = A i 1 v 1 + A i 2 v 2 + ⋯ + A i n v n = ∑ j = 1 n A i j v j . {\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.} If it occurs that v and w are scalar multiples, that is if A v = λ v , {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} (1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as ( A − λ I ) v = 0 , {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2) where I is the n by n identity matrix and 0 is the zero vector. === Eigenvalues and the characteristic polynomial === Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation det ( A − λ I ) = 0. {\displaystyle \det(A-\lambda I)=0.} (3) Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. 
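The characteristic polynomial can be explored numerically. As a sketch, NumPy's np.poly returns the coefficients of the monic polynomial det(λI − A), which has the same roots as det(A − λI):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(λI − A) = λ² − 4λ + 3, highest degree first.
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -4.0, 3.0])

# The roots of the characteristic polynomial are the eigenvalues of A.
roots = np.roots(coeffs)
assert np.allclose(sorted(roots), [1.0, 3.0])
```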
This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, det ( A − λ I ) = ( λ 1 − λ ) ( λ 2 − λ ) ⋯ ( λ n − λ ) , {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),} (4) where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} Taking the determinant of (A − λI), the characteristic polynomial of A is det ( A − λ I ) = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 . {\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation ( A − λ I ) v = 0 {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} } . In this example, the eigenvectors are any nonzero scalar multiples of v λ = 1 = [ 1 − 1 ] , v λ = 3 = [ 1 1 ] . {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.} If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. 
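The brief 2×2 example above can be reproduced with NumPy's eigensolver; note that np.linalg.eig returns unit-length eigenvectors, which are scalar multiples of the [1, ±1] vectors given in the text:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(sorted(eigenvalues), [1.0, 3.0])

# Each column of `eigenvectors` satisfies (A − λI) v = 0, i.e. A v = λ v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```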
However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. The non-real roots of a polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. === Spectrum of a matrix === The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; or, in an alternative notation, the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix. === Algebraic multiplicity === Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k evenly divides that polynomial. Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, det ( A − λ I ) = ( λ 1 − λ ) μ A ( λ 1 ) ( λ 2 − λ ) μ A ( λ 2 ) ⋯ ( λ d − λ ) μ A ( λ d ) . 
{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as 1 ≤ μ A ( λ i ) ≤ n , μ A = ∑ i = 1 d μ A ( λ i ) = n . {\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue. === Eigenspaces, geometric multiplicity, and the eigenbasis for matrices === Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2), E = { v : ( A − λ I ) v = 0 } . {\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of C n {\displaystyle \mathbb {C} ^{n}} . Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). 
This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} . Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as γ A ( λ ) = n − rank ( A − λ I ) . {\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).} Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. 1 ≤ γ A ( λ ) ≤ μ A ( λ ) ≤ n {\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n} To prove the inequality γ A ( λ ) ≤ μ A ( λ ) {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )} , consider how the definition of geometric multiplicity implies the existence of γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors v 1 , … , v γ A ( λ ) {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}} , such that A v k = λ v k {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}} . 
We can therefore find a (unitary) matrix V whose first γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γ A ( λ ) {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := V T A V {\displaystyle D:=V^{T}AV} , we get a matrix whose top left block is the diagonal matrix λ I γ A ( λ ) {\displaystyle \lambda I_{\gamma _{A}(\lambda )}} . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding − ξ V {\displaystyle -\xi V} on both sides, we get ( A − ξ I ) V = V ( D − ξ I ) {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, A − ξ I {\displaystyle A-\xi I} is similar to D − ξ I {\displaystyle D-\xi I} , and det ( A − ξ I ) = det ( D − ξ I ) {\displaystyle \det(A-\xi I)=\det(D-\xi I)} . But from the definition of D, we know that det ( D − ξ I ) {\displaystyle \det(D-\xi I)} contains a factor ( ξ − λ ) γ A ( λ ) {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}} , which means that the algebraic multiplicity of λ {\displaystyle \lambda } must satisfy μ A ( λ ) ≥ γ A ( λ ) {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )} . Suppose A has d ≤ n {\displaystyle d\leq n} distinct eigenvalues λ 1 , … , λ d {\displaystyle \lambda _{1},\ldots ,\lambda _{d}} , where the geometric multiplicity of λ i {\displaystyle \lambda _{i}} is γ A ( λ i ) {\displaystyle \gamma _{A}(\lambda _{i})} . The total geometric multiplicity of A, γ A = ∑ i = 1 d γ A ( λ i ) , d ≤ γ A ≤ n , {\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}} is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. 
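The inequality γ_A(λ) ≤ μ_A(λ) can be strict. A minimal sketch with a 2×2 Jordan block (a standard defective example, chosen here for illustration): its single eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1.

```python
import numpy as np

# Jordan block: characteristic polynomial (2 − λ)², so μ(2) = 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

# Geometric multiplicity: γ(λ) = n − rank(A − λI).
geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))
assert geometric == 1   # strictly less than the algebraic multiplicity 2

# Consequently A has only one independent eigenvector and is not diagonalizable.
```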
If γ A = n {\displaystyle \gamma _{A}=n} , then: the direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space C n {\displaystyle \mathbb {C} ^{n}} ; a basis of C n {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A, and such a basis is called an eigenbasis; and any vector in C n {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A. === Additional properties === Let A {\displaystyle A} be an arbitrary n × n {\displaystyle n\times n} matrix of complex numbers with eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . Each eigenvalue appears μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: The trace of A {\displaystyle A} , defined as the sum of its diagonal elements, is also the sum of all eigenvalues, tr ( A ) = ∑ i = 1 n a i i = ∑ i = 1 n λ i = λ 1 + λ 2 + ⋯ + λ n . {\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.} The determinant of A {\displaystyle A} is the product of all its eigenvalues, det ( A ) = ∏ i = 1 n λ i = λ 1 λ 2 ⋯ λ n . {\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.} The eigenvalues of the k {\displaystyle k} th power of A {\displaystyle A} ; i.e., the eigenvalues of A k {\displaystyle A^{k}} , for any positive integer k {\displaystyle k} , are λ 1 k , … , λ n k {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}} . The matrix A {\displaystyle A} is invertible if and only if every eigenvalue is nonzero. 
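Several of these properties are straightforward to confirm numerically; a sketch with an arbitrary 2×2 example whose eigenvalues are 2 and 5:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # eigenvalues 2 and 5
eigenvalues = np.linalg.eigvals(A)

# Trace equals the sum, and determinant the product, of the eigenvalues.
assert np.isclose(np.trace(A), eigenvalues.sum())
assert np.isclose(np.linalg.det(A), eigenvalues.prod())

# The eigenvalues of A² are the squares of the eigenvalues of A.
assert np.allclose(sorted(np.linalg.eigvals(A @ A)), sorted(eigenvalues**2))
```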
If A {\displaystyle A} is invertible, then the eigenvalues of A − 1 {\displaystyle A^{-1}} are 1 λ 1 , … , 1 λ n {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides with that of the corresponding eigenvalue of A. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity. If A {\displaystyle A} is equal to its conjugate transpose A ∗ {\displaystyle A^{*}} , or equivalently if A {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix. If A {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively. If A {\displaystyle A} is unitary, every eigenvalue has absolute value | λ i | = 1 {\displaystyle |\lambda _{i}|=1} . If A {\displaystyle A} is an n × n {\displaystyle n\times n} matrix and { λ 1 , … , λ k } {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of matrix I + A {\displaystyle I+A} (where I {\displaystyle I} is the identity matrix) are { λ 1 + 1 , … , λ k + 1 } {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}} . Moreover, if α ∈ C {\displaystyle \alpha \in \mathbb {C} } , the eigenvalues of α I + A {\displaystyle \alpha I+A} are { λ 1 + α , … , λ k + α } {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}} . More generally, for a polynomial P {\displaystyle P} the eigenvalues of matrix P ( A ) {\displaystyle P(A)} are { P ( λ 1 ) , … , P ( λ k ) } {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}} . === Left and right eigenvectors === Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. 
For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n {\displaystyle n\times n} matrix A {\displaystyle A} in the defining equation, equation (1), A v = λ v . {\displaystyle A\mathbf {v} =\lambda \mathbf {v} .} The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A {\displaystyle A} . In this formulation, the defining equation is u A = κ u , {\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} where κ {\displaystyle \kappa } is a scalar and u {\displaystyle u} is a 1 × n {\displaystyle 1\times n} matrix. Any row vector u {\displaystyle u} satisfying this equation is called a left eigenvector of A {\displaystyle A} and κ {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation, A T u T = κ u T . {\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector of A {\displaystyle A} is the same as the transpose of a right eigenvector of A T {\displaystyle A^{\textsf {T}}} , with the same eigenvalue. Furthermore, since the characteristic polynomial of A T {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of A {\displaystyle A} , the left and right eigenvectors of A {\displaystyle A} are associated with the same eigenvalues. === Diagonalization and the eigendecomposition === Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Q = [ v 1 v 2 ⋯ v n ] . 
{\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.} Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, A Q = [ λ 1 v 1 λ 2 v 2 ⋯ λ n v n ] . {\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.} With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then A Q = Q Λ . {\displaystyle AQ=Q\Lambda .} Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, A = Q Λ Q − 1 , {\displaystyle A=Q\Lambda Q^{-1},} or by instead left multiplying both sides by Q−1, Q − 1 A Q = Λ . {\displaystyle Q^{-1}AQ=\Lambda .} A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. 
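The eigendecomposition can be formed exactly as described above: a sketch in which Q is assembled from the eigenvectors returned by np.linalg.eig and Λ from the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are eigenvectors of A
Lambda = np.diag(eigenvalues)

# A Q = Q Λ, hence A = Q Λ Q⁻¹ and Q⁻¹ A Q = Λ.
assert np.allclose(A @ Q, Q @ Lambda)
assert np.allclose(A, Q @ Lambda @ np.linalg.inv(Q))
assert np.allclose(np.linalg.inv(Q) @ A @ Q, Lambda)
```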
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. === Variational characterization === In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H {\displaystyle H} is the maximum value of the quadratic form x T H x / x T x {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} } . A value of x {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector. === Matrix examples === ==== Two-dimensional matrix example ==== Consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A, det ( A − λ I ) = | [ 2 1 1 2 ] − λ [ 1 0 0 1 ] | = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 = ( λ − 3 ) ( λ − 1 ) . {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. 
For λ=1, equation (2) becomes, ( A − I ) v λ = 1 = [ 1 1 1 1 ] [ v 1 v 2 ] = [ 0 0 ] {\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}} 1 v 1 + 1 v 2 = 0 {\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector with v1 = −v2 solves this equation. Therefore, v λ = 1 = [ v 1 − v 1 ] = [ 1 − 1 ] {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For λ=3, equation (2) becomes ( A − 3 I ) v λ = 3 = [ − 1 1 1 − 1 ] [ v 1 v 2 ] = [ 0 0 ] − 1 v 1 + 1 v 2 = 0 ; 1 v 1 − 1 v 2 = 0 {\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector with v1 = v2 solves this equation. Therefore, v λ = 3 = [ v 1 v 1 ] = [ 1 1 ] {\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively. ==== Three-dimensional matrix example ==== Consider the matrix A = [ 2 0 0 0 3 4 0 4 9 ] . {\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = | [ 2 0 0 0 3 4 0 4 9 ] − λ [ 1 0 0 0 1 0 0 0 1 ] | = | 2 − λ 0 0 0 3 − λ 4 0 4 9 − λ | , = ( 2 − λ ) [ ( 3 − λ ) ( 9 − λ ) − 16 ] = − λ 3 + 14 λ 2 − 35 λ + 22. 
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [ 1 0 0 ] T {\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}} , [ 0 − 2 1 ] T {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}} , and [ 0 1 2 ] T {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}} , or any nonzero multiple thereof. ==== Three-dimensional matrix example with complex eigenvalues ==== Consider the cyclic permutation matrix A = [ 0 1 0 0 0 1 1 0 0 ] . {\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are λ 1 = 1 λ 2 = − 1 2 + i 3 2 λ 3 = λ 2 ∗ = − 1 2 − i 3 2 {\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}} where i {\displaystyle i} is an imaginary unit with i 2 = − 1 {\displaystyle i^{2}=-1} . For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, A [ 5 5 5 ] = [ 5 5 5 ] = 1 ⋅ [ 5 5 5 ] . {\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of eigenvalues, λ 2 λ 3 = 1 , λ 2 2 = λ 3 , λ 3 2 = λ 2 .
{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} Then A [ 1 λ 2 λ 3 ] = [ λ 2 λ 3 1 ] = λ 2 ⋅ [ 1 λ 2 λ 3 ] , {\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},} and A [ 1 λ 3 λ 2 ] = [ λ 3 λ 2 1 ] = λ 3 ⋅ [ 1 λ 3 λ 2 ] . {\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors of A are complex and are v λ 2 = [ 1 λ 2 λ 3 ] T {\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and v λ 3 = [ 1 λ 3 λ 2 ] T {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}} with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, v λ 2 = v λ 3 ∗ . {\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} ==== Diagonal matrix example ==== Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix A = [ 1 0 0 0 2 0 0 0 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. 
In the example, the eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 0 0 ] , v λ 2 = [ 0 1 0 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Triangular matrix example ==== A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix, A = [ 1 0 0 1 2 0 2 3 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 − 1 1 2 ] , v λ 2 = [ 0 1 − 3 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Matrix with repeated eigenvalues example ==== As in the previous example, the lower triangular matrix A = [ 2 0 0 0 1 2 0 0 0 1 3 0 0 0 1 3 ] , {\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},} has a characteristic polynomial that is the product of its diagonal elements, det ( A − λ I ) = | 2 − λ 0 0 0 1 2 − λ 0 0 0 1 3 − λ 0 0 0 1 3 − λ | = ( 2 − λ ) 2 ( 3 − λ ) 2 . 
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [ 0 1 − 1 1 ] T {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [ 0 0 0 1 ] T {\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}} . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section. === Eigenvector-eigenvalue identity === For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, | v i , j | 2 = ∏ k ( λ i − λ k ( M j ) ) ∏ k ≠ i ( λ i − λ k ) , {\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},} where M j {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature. 
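The identity can be verified numerically; the sketch below (assuming NumPy, with an arbitrary symmetric example matrix whose eigenvalues are distinct) checks every component of every normalized eigenvector:

```python
import numpy as np

# Arbitrary symmetric example with distinct eigenvalues (not from the text).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
eigvals, eigvecs = np.linalg.eigh(A)   # columns are normalized eigenvectors

for j in range(n):
    # M_j: the original matrix with row j and column j removed.
    M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)
    minor_vals = np.linalg.eigvalsh(M_j)
    for i in range(n):
        numerator = np.prod(eigvals[i] - minor_vals)
        denominator = np.prod([eigvals[i] - eigvals[k] for k in range(n) if k != i])
        assert np.isclose(eigvecs[j, i] ** 2, numerator / denominator)

print("identity verified for every component")
```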
== Eigenvalues and eigenfunctions of differential operators == The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation D f ( t ) = λ f ( t ) {\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. === Derivative operator example === Consider the derivative operator d d t {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation d d t f ( t ) = λ f ( t ) . {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function f ( t ) = f ( 0 ) e λ t , {\displaystyle f(t)=f(0)e^{\lambda t},} is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples. == General definition == The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, T : V → V . {\displaystyle T:V\to V.} We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that T ( v ) = λ v . {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .} This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v.
T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v. === Eigenspaces, geometric multiplicity, and the eigenbasis === Given an eigenvalue λ, consider the set E = { v : T ( v ) = λ v } , {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation, T ( x + y ) = T ( x ) + T ( y ) , T ( α x ) = α T ( x ) , {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then T ( u + v ) = λ ( u + v ) , T ( α v ) = λ ( α v ) . {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. 
Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues. Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. === Spectral theory === If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them. === Associative algebras and representation theory === One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. Hecke eigensheaf is a tensor-multiple of itself and is considered in Langlands correspondence. 
== Dynamic equations == The simplest difference equations have the form x t = a 1 x t − 1 + a 2 x t − 2 + ⋯ + a k x t − k . {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.} The solution of this equation for x in terms of t is found by using its characteristic equation λ k − a 1 λ k − 1 − a 2 λ k − 2 − ⋯ − a k − 1 λ − a k = 0 , {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,} which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations x t − 1 = x t − 1 , … , x t − k + 1 = x t − k + 1 , {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector [ x t ⋯ x t − k + 1 ] {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ 1 , … , λ k , {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation x t = c 1 λ 1 t + ⋯ + c k λ k t . {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.} A similar procedure is used for solving a differential equation of the form d k x d t k + a k − 1 d k − 1 x d t k − 1 + ⋯ + a 1 d x d t + a 0 x = 0. {\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.} == Calculation == The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. === Classical method === The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point. 
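As a small illustration of this classical route, the sketch below (assuming NumPy is available) forms the characteristic polynomial of the three-dimensional example matrix from earlier, finds its roots, and compares them with a direct numerical eigenvalue routine:

```python
import numpy as np

# The three-dimensional example matrix from earlier (eigenvalues 1, 2, 11).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# Coefficients of the (monic) characteristic polynomial, highest degree first:
# lambda^3 - 14 lambda^2 + 35 lambda - 22.
char_poly = np.poly(A)
roots = np.roots(char_poly)

# For a small well-conditioned matrix the two routes agree; for large
# matrices, round-off in the coefficients makes this approach unreliable.
print(np.allclose(np.sort(roots.real), np.sort(np.linalg.eigvals(A))))  # True
```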
==== Eigenvalues ==== The eigenvalues of a matrix A {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n {\displaystyle n\times n} matrix is a sum of n ! {\displaystyle n!} different products. Explicit algebraic formulas for the roots of a polynomial exist only if the degree n {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n {\displaystyle n} is the characteristic polynomial of some companion matrix of order n {\displaystyle n} .) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. ==== Eigenvectors ==== Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. 
For example, once it is known that 6 is an eigenvalue of the matrix A = [ 4 1 6 3 ] {\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equation A v = 6 v {\displaystyle Av=6v} , that is [ 4 1 6 3 ] [ x y ] = 6 ⋅ [ x y ] {\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to two linear equations { 4 x + y = 6 x 6 x + 3 y = 6 y {\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.} that is { − 2 x + y = 0 6 x − 3 y = 0 {\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equation y = 2 x {\displaystyle y=2x} . Therefore, any vector of the form [ a 2 a ] T {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number a {\displaystyle a} , is an eigenvector of A {\displaystyle A} with eigenvalue λ = 6 {\displaystyle \lambda =6} . The matrix A {\displaystyle A} above has another eigenvalue λ = 1 {\displaystyle \lambda =1} . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3 x + y = 0 {\displaystyle 3x+y=0} , that is, any vector of the form [ b − 3 b ] T {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number b {\displaystyle b} . === Simple iterative methods === The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. 
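This simplest iterative algorithm, power iteration, can be sketched as follows (assuming NumPy; the matrix is the 2 × 2 example from the preceding section, whose dominant eigenvalue is 6):

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Pick a random starting vector and repeatedly multiply by A,
    normalizing each time so the entries stay a reasonable size."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    # With v (approximately) an eigenvector, the Rayleigh quotient
    # v*Av / v*v recovers the corresponding eigenvalue; v is unit-norm here.
    return v, v @ A @ v

# The 2 x 2 example matrix from the preceding section.
A = np.array([[4.0, 1.0],
              [6.0, 3.0]])
v, lam = power_iteration(A)
print(round(float(lam), 6))  # 6.0
```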
A variation is to instead multiply the vector by ( A − μ I ) − 1 {\displaystyle (A-\mu I)^{-1}} ; this causes it to converge to an eigenvector of the eigenvalue closest to μ ∈ C {\displaystyle \mu \in \mathbb {C} } . If v {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of A {\displaystyle A} , then the corresponding eigenvalue can be computed as λ = v ∗ A v v ∗ v {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}} where v ∗ {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of v {\displaystyle \mathbf {v} } . === Modern methods === Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities. Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. == Applications == === Geometric transformations === Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. The characteristic equation for a rotation is a quadratic equation with discriminant D = − 4 ( sin θ ) 2 {\displaystyle D=-4(\sin \theta )^{2}} , which is a negative number whenever θ is not an integer multiple of 180°. 
Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ {\displaystyle \cos \theta \pm i\sin \theta } ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues. === Principal component analysis === The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). 
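A minimal PCA sketch along these lines, assuming NumPy and using a synthetic two-dimensional point cloud (the data and mixing matrix are arbitrary, not from any cited study):

```python
import numpy as np

# Synthetic correlated data: isotropic noise pushed through a mixing matrix.
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0],
                                                 [1.0, 0.5]])

cov = np.cov(data, rowvar=False)          # 2x2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric PSD -> orthogonal basis

# Sort the principal components by descending explained variance.
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]            # columns are principal directions
explained_variance = eigvals[order]

print(explained_variance[0] > explained_variance[1])  # True
```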
More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling. === Graphs === In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A {\displaystyle A} , or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or I − D − 1 / 2 A D − 1 / 2 {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where D {\displaystyle D} is a diagonal matrix with D i i {\displaystyle D_{ii}} equal to the degree of vertex v i {\displaystyle v_{i}} , and in D − 1 / 2 {\displaystyle D^{-1/2}} , the i {\displaystyle i} th diagonal entry is 1 / deg ( v i ) {\textstyle 1/{\sqrt {\deg(v_{i})}}} . The k {\displaystyle k} th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k {\displaystyle k} th largest or k {\displaystyle k} th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering. === Markov chains === A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. 
In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. === Vibration analysis === Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by m x ¨ + k x = 0 {\displaystyle m{\ddot {x}}+kx=0} or m x ¨ = − k x {\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expect x {\displaystyle x} to be sinusoidal in time). In n {\displaystyle n} dimensions, m {\displaystyle m} becomes a mass matrix and k {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem k x = ω 2 m x {\displaystyle kx=\omega ^{2}mx} where ω 2 {\displaystyle \omega ^{2}} is the eigenvalue and ω {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k {\displaystyle k} alone. Furthermore, damped vibration, governed by m x ¨ + c x ˙ + k x = 0 {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, ( ω 2 m + ω c + k ) x = 0. {\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allows decoupling of the differential equations so that the system can be represented as linear summation of the eigenvectors. 
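The generalized eigenvalue problem kx = ω²mx can be solved numerically; a minimal sketch assuming NumPy, with an arbitrary two-degree-of-freedom mass/stiffness pair (not from the text):

```python
import numpy as np

# Undamped two-mass system: solve k x = omega^2 m x.
m = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # mass matrix (arbitrary example)
k = np.array([[6.0, -2.0],
              [-2.0, 4.0]])         # stiffness matrix (arbitrary example)

# With m invertible, k x = omega^2 m x  <=>  (m^-1 k) x = omega^2 x.
omega_sq, modes = np.linalg.eig(np.linalg.solve(m, k))
frequencies = np.sqrt(omega_sq.real)   # natural frequencies omega

# Each column of `modes` is the mode shape for the matching frequency.
print(np.all(omega_sq.real > 0))  # True: here all omega^2 are positive
```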
The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems. === Tensor of moment of inertia === In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass. === Stress tensor === In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components. === Schrödinger equation === An example of an eigenvalue equation where the transformation T {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: H ψ E = E ψ E {\displaystyle H\psi _{E}=E\psi _{E}\,} where H {\displaystyle H} , the Hamiltonian, is a second-order differential operator and ψ E {\displaystyle \psi _{E}} , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E {\displaystyle E} , interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ E {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ E {\displaystyle \psi _{E}} and H {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. The bra–ket notation is often used in this context.
A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . In this notation, the Schrödinger equation is: H | Ψ E ⟩ = E | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle } where | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } is an eigenstate of H {\displaystyle H} and E {\displaystyle E} represents the eigenvalue. H {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation H {\displaystyle H} to | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . === Wave transport === Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t {\displaystyle \mathbf {t} } . The eigenvectors of the transmission operator t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ {\displaystyle \tau } , of t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with τ max = 1 {\displaystyle \tau _{\max }=1} and τ min = 0 {\displaystyle \tau _{\min }=0} . 
Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels. === Molecular orbitals === In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations. === Geology and glaciology === In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space.
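For illustration, an orientation tensor can be formed from measured unit orientation vectors as the mean of their outer products; the sketch below uses synthetic directions (not field data), deliberately clustered around the vertical axis, so that a dominant eigenvector emerges:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic clast axis directions, clustered near the z-axis (illustrative
# data, not field measurements).
raw = rng.standard_normal((500, 3)) * np.array([0.3, 0.3, 1.0])
units = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# Orientation tensor: mean of the outer products v v^T. It is a symmetric
# 3x3 matrix, so its eigenvalues are real, and its trace equals 1.
T = units.T @ units / len(units)

evals, evecs = np.linalg.eigh(T)
order = np.argsort(evals)[::-1]           # sort as E1 >= E2 >= E3
E1, E2, E3 = evals[order]
v1 = evecs[:, order[0]]                   # primary fabric orientation
```

Because the tensor is symmetric with unit trace, the three eigenvalues sum to one, which is why their relative sizes characterize the fabric as isotropic, planar, or linear.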
The three eigenvectors are ordered v 1 , v 2 , v 3 {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues E 1 ≥ E 2 ≥ E 3 {\displaystyle E_{1}\geq E_{2}\geq E_{3}} ; v 1 {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, v 2 {\displaystyle \mathbf {v} _{2}} is the secondary and v 3 {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E 1 {\displaystyle E_{1}} , E 2 {\displaystyle E_{2}} , and E 3 {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If E 1 = E 2 = E 3 {\displaystyle E_{1}=E_{2}=E_{3}} , the fabric is said to be isotropic. If E 1 = E 2 > E 3 {\displaystyle E_{1}=E_{2}>E_{3}} , the fabric is said to be planar. If E 1 > E 2 > E 3 {\displaystyle E_{1}>E_{2}>E_{3}} , the fabric is said to be linear. === Basic reproduction number === The basic reproduction number ( R 0 {\displaystyle R_{0}} ) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R 0 {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t G {\displaystyle t_{G}} , from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t G {\displaystyle t_{G}} has passed. The value R 0 {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix. === Eigenfaces === In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. 
The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been conducted on eigen vision systems for determining hand gestures. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. == See also == Antieigenvalue theory Eigenoperator Eigenplane Eigenmoments Eigenvalue algorithm Quantum states Jordan normal form List of numerical-analysis software Nonlinear eigenproblem Normal eigenvalue Quadratic eigenvalue problem Singular value Spectrum of a matrix == Notes == === Citations === == Sources == == Further reading == == External links == What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts" Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu. Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown Matrix Eigenvectors Calculator from Symbolab (Click on the bottom right button of the 2×12 grid to select a matrix size. Select an n × n {\displaystyle n\times n} size (for a square matrix), then fill out the entries numerically and click on the Go button. It can accept complex numbers as well.)
Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors === Theory === Computation of Eigenvalues Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
Wikipedia:Eigenvalues and eigenvectors of the second derivative#0
Explicit formulas for eigenvalues and eigenvectors of the second derivative with different boundary conditions are provided both for the continuous and discrete cases. In the discrete case, the standard central difference approximation of the second derivative is used on a uniform grid. These formulas are used to derive the expressions for eigenfunctions of the Laplacian in the case of separation of variables, as well as to find eigenvalues and eigenvectors of the multidimensional discrete Laplacian on a regular grid, which is presented as a Kronecker sum of one-dimensional discrete Laplacians. == The continuous case == The index j represents the jth eigenvalue or eigenvector and runs from 1 to ∞ {\displaystyle \infty } . Assuming the equation is defined on the domain x ∈ [ 0 , L ] {\displaystyle x\in [0,L]} , the following are the eigenvalues and normalized eigenvectors. The eigenvalues are ordered in descending order. === Pure Dirichlet boundary conditions === λ j = − j 2 π 2 L 2 {\displaystyle \lambda _{j}=-{\frac {j^{2}\pi ^{2}}{L^{2}}}} v j ( x ) = 2 L sin ( j π x L ) {\displaystyle v_{j}(x)={\sqrt {\frac {2}{L}}}\sin \left({\frac {j\pi x}{L}}\right)} === Pure Neumann boundary conditions === λ j = − ( j − 1 ) 2 π 2 L 2 {\displaystyle \lambda _{j}=-{\frac {(j-1)^{2}\pi ^{2}}{L^{2}}}} v j ( x ) = { L − 1 2 j = 1 2 L cos ( ( j − 1 ) π x L ) otherwise {\displaystyle v_{j}(x)=\left\{{\begin{array}{lr}L^{-{\frac {1}{2}}}&j=1\\{\sqrt {\frac {2}{L}}}\cos \left({\frac {(j-1)\pi x}{L}}\right)&{\mbox{otherwise}}\end{array}}\right.} === Periodic boundary conditions === λ j = { − j 2 π 2 L 2 j is even. − ( j − 1 ) 2 π 2 L 2 j is odd. 
{\displaystyle \lambda _{j}=\left\{{\begin{array}{lr}-{\frac {j^{2}\pi ^{2}}{L^{2}}}&{\mbox{j is even.}}\\-{\frac {(j-1)^{2}\pi ^{2}}{L^{2}}}&{\mbox{j is odd.}}\end{array}}\right.} (That is: 0 {\displaystyle 0} is a simple eigenvalue and all further eigenvalues are given by j 2 π 2 L 2 {\displaystyle {\frac {j^{2}\pi ^{2}}{L^{2}}}} , j = 1 , 2 , … {\displaystyle j=1,2,\ldots } , each with multiplicity 2). v j ( x ) = { L − 1 2 if j = 1. 2 L sin ( j π x L ) if j is even. 2 L cos ( ( j − 1 ) π x L ) if j is odd. {\displaystyle v_{j}(x)={\begin{cases}L^{-{\frac {1}{2}}}&{\mbox{if }}j=1.\\{\sqrt {\frac {2}{L}}}\sin \left({\frac {j\pi x}{L}}\right)&{\mbox{ if j is even.}}\\{\sqrt {\frac {2}{L}}}\cos \left({\frac {(j-1)\pi x}{L}}\right)&{\mbox{ if j is odd.}}\end{cases}}} === Mixed Dirichlet-Neumann boundary conditions === λ j = − ( 2 j − 1 ) 2 π 2 4 L 2 {\displaystyle \lambda _{j}=-{\frac {(2j-1)^{2}\pi ^{2}}{4L^{2}}}} v j ( x ) = 2 L sin ( ( 2 j − 1 ) π x 2 L ) {\displaystyle v_{j}(x)={\sqrt {\frac {2}{L}}}\sin \left({\frac {(2j-1)\pi x}{2L}}\right)} === Mixed Neumann-Dirichlet boundary conditions === λ j = − ( 2 j − 1 ) 2 π 2 4 L 2 {\displaystyle \lambda _{j}=-{\frac {(2j-1)^{2}\pi ^{2}}{4L^{2}}}} v j ( x ) = 2 L cos ( ( 2 j − 1 ) π x 2 L ) {\displaystyle v_{j}(x)={\sqrt {\frac {2}{L}}}\cos \left({\frac {(2j-1)\pi x}{2L}}\right)} == The discrete case == Notation: The index j represents the jth eigenvalue or eigenvector. The index i represents the ith component of an eigenvector. Both i and j go from 1 to n, where the matrix is size n x n. Eigenvectors are normalized. The eigenvalues are ordered in descending order. 
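The closed-form discrete eigenvalues listed below can be checked numerically by building the corresponding difference matrices and diagonalizing them. A sketch for the Dirichlet and Neumann cases, with grid spacing h = 1 and a small n chosen for convenience:

```python
import numpy as np

n, h = 12, 1.0
j = np.arange(1, n + 1)

# Dirichlet: central-difference matrix tridiag(1, -2, 1)/h^2 with the
# boundary values v_0 = v_{n+1} = 0 eliminated.
A = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
lam_dirichlet = -(4 / h**2) * np.sin(np.pi * j / (2 * (n + 1))) ** 2

# Neumann: the ghost-point conditions v_0 = v_1 and v_{n+1} = v_n turn the
# two corner entries -2 into -1.
B = A.copy()
B[0, 0] = B[-1, -1] = -1.0 / h**2
lam_neumann = -(4 / h**2) * np.sin(np.pi * (j - 1) / (2 * n)) ** 2

num_dirichlet = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending order
num_neumann = np.sort(np.linalg.eigvalsh(B))[::-1]
```

The numerically computed spectra agree with the closed forms to machine precision, including the zero eigenvalue of the Neumann matrix (the constant vector).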
=== Pure Dirichlet boundary conditions === λ j = − 4 h 2 sin 2 ( π j 2 ( n + 1 ) ) {\displaystyle \lambda _{j}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi j}{2(n+1)}}\right)} v i , j = 2 n + 1 sin ( i j π n + 1 ) {\displaystyle v_{i,j}={\sqrt {\frac {2}{n+1}}}\sin \left({\frac {ij\pi }{n+1}}\right)} === Pure Neumann boundary conditions === λ j = − 4 h 2 sin 2 ( π ( j − 1 ) 2 n ) {\displaystyle \lambda _{j}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (j-1)}{2n}}\right)} v i , j = { n − 1 2 j = 1 2 n cos ( π ( j − 1 ) ( i − 0.5 ) n ) otherwise {\displaystyle v_{i,j}={\begin{cases}n^{-{\frac {1}{2}}}&{\mbox{j = 1}}\\{\sqrt {\frac {2}{n}}}\cos \left({\frac {\pi (j-1)(i-0.5)}{n}}\right)&{\mbox{otherwise}}\end{cases}}} === Periodic boundary conditions === λ j = { − 4 h 2 sin 2 ( π ( j − 1 ) 2 n ) if j is odd. − 4 h 2 sin 2 ( π j 2 n ) if j is even. {\displaystyle \lambda _{j}={\begin{cases}-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (j-1)}{2n}}\right)&{\mbox{ if j is odd.}}\\-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi j}{2n}}\right)&{\mbox{ if j is even.}}\end{cases}}} (Note that eigenvalues are repeated except for 0 and the largest one if n is even.) v i , j = { n − 1 2 if j = 1. n − 1 2 ( − 1 ) i if j = n and n is even. 2 n sin ( π ( i − 0.5 ) j n ) otherwise if j is even. 2 n cos ( π ( i − 0.5 ) ( j − 1 ) n ) otherwise if j is odd. 
{\displaystyle v_{i,j}={\begin{cases}n^{-{\frac {1}{2}}}&{\mbox{if }}j=1.\\n^{-{\frac {1}{2}}}(-1)^{i}&{\mbox{if }}j=n{\mbox{ and n is even.}}\\{\sqrt {\frac {2}{n}}}\sin \left({\frac {\pi (i-0.5)j}{n}}\right)&{\mbox{ otherwise if j is even.}}\\{\sqrt {\frac {2}{n}}}\cos \left({\frac {\pi (i-0.5)(j-1)}{n}}\right)&{\mbox{ otherwise if j is odd.}}\end{cases}}} === Mixed Dirichlet-Neumann boundary conditions === λ j = − 4 h 2 sin 2 ( π ( j − 1 2 ) 2 n + 1 ) {\displaystyle \lambda _{j}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (j-{\frac {1}{2}})}{2n+1}}\right)} v i , j = 2 n + 0.5 sin ( π i ( 2 j − 1 ) 2 n + 1 ) {\displaystyle v_{i,j}={\sqrt {\frac {2}{n+0.5}}}\sin \left({\frac {\pi i(2j-1)}{2n+1}}\right)} === Mixed Neumann-Dirichlet boundary conditions === λ j = − 4 h 2 sin 2 ( π ( j − 1 2 ) 2 n + 1 ) {\displaystyle \lambda _{j}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (j-{\frac {1}{2}})}{2n+1}}\right)} v i , j = 2 n + 0.5 cos ( π ( i − 0.5 ) ( 2 j − 1 ) 2 n + 1 ) {\displaystyle v_{i,j}={\sqrt {\frac {2}{n+0.5}}}\cos \left({\frac {\pi (i-0.5)(2j-1)}{2n+1}}\right)} == Derivation of Eigenvalues and Eigenvectors in the Discrete Case == === Dirichlet case === In the 1D discrete case with Dirichlet boundary conditions, we are solving v k + 1 − 2 v k + v k − 1 h 2 = λ v k , k = 1 , . . . , n , v 0 = v n + 1 = 0. {\displaystyle {\frac {v_{k+1}-2v_{k}+v_{k-1}}{h^{2}}}=\lambda v_{k},\ k=1,...,n,\ v_{0}=v_{n+1}=0.} Rearranging terms, we get v k + 1 = ( 2 + h 2 λ ) v k − v k − 1 . {\displaystyle v_{k+1}=(2+h^{2}\lambda )v_{k}-v_{k-1}.\!} Now let 2 α = ( 2 + h 2 λ ) {\displaystyle 2\alpha =(2+h^{2}\lambda )} . Also, assuming v 1 ≠ 0 {\displaystyle v_{1}\neq 0} , we can scale eigenvectors by any nonzero scalar, so scale v {\displaystyle v} so that v 1 = 1 {\displaystyle v_{1}=1} . Then we find the recurrence v 0 = 0 {\displaystyle v_{0}=0\,\!} v 1 = 1. 
{\displaystyle v_{1}=1.\,\!} v k + 1 = 2 α v k − v k − 1 {\displaystyle v_{k+1}=2\alpha v_{k}-v_{k-1}\,\!} Considering α {\displaystyle \alpha } as an indeterminate, v k + 1 = U k ( α ) {\displaystyle v_{k+1}=U_{k}(\alpha )\,\!} where U k {\displaystyle U_{k}} is the kth Chebyshev polynomial of the 2nd kind. Since v n + 1 = 0 {\displaystyle v_{n+1}=0} , we get that U n ( α ) = 0 {\displaystyle U_{n}(\alpha )=0\,\!} . It is clear that the eigenvalues of our problem will be the zeros of the nth Chebyshev polynomial of the second kind, with the relation 2 α = ( 2 + h 2 λ ) {\displaystyle 2\alpha =(2+h^{2}\lambda )} . These zeros are well known and are: α k = cos ( k π n + 1 ) . {\displaystyle \alpha _{k}=\cos \left({\frac {k\pi }{n+1}}\right).\,\!} Plugging these into the formula for λ {\displaystyle \lambda } , 2 cos ( k π n + 1 ) = h 2 λ k + 2 {\displaystyle 2\cos \left({\frac {k\pi }{n+1}}\right)=h^{2}\lambda _{k}+2\,\!} λ k = − 2 h 2 [ 1 − cos ( k π n + 1 ) ] . {\displaystyle \lambda _{k}=-{\frac {2}{h^{2}}}\left[1-\cos \left({\frac {k\pi }{n+1}}\right)\right].\,\!} And using a trig formula to simplify, we find λ k = − 4 h 2 sin 2 ( k π 2 ( n + 1 ) ) . {\displaystyle \lambda _{k}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {k\pi }{2(n+1)}}\right).\,\!} === Neumann case === In the Neumann case, we are solving v k + 1 − 2 v k + v k − 1 h 2 = λ v k , k = 1 , . . . , n , v 0.5 ′ = v n + 0.5 ′ = 0. {\displaystyle {\frac {v_{k+1}-2v_{k}+v_{k-1}}{h^{2}}}=\lambda v_{k},\ k=1,...,n,\ v'_{0.5}=v'_{n+0.5}=0.\,\!} In the standard discretization, we introduce v 0 {\displaystyle v_{0}\,\!} and v n + 1 {\displaystyle v_{n+1}\,\!} and define v 0.5 ′ := v 1 − v 0 h , v n + 0.5 ′ := v n + 1 − v n h {\displaystyle v'_{0.5}:={\frac {v_{1}-v_{0}}{h}},\ v'_{n+0.5}:={\frac {v_{n+1}-v_{n}}{h}}\,\!} The boundary conditions are then equivalent to v 1 − v 0 = 0 , v n + 1 − v n = 0. 
{\displaystyle v_{1}-v_{0}=0,\ v_{n+1}-v_{n}=0.} If we make a change of variables, w k = v k + 1 − v k , k = 0 , . . . , n {\displaystyle w_{k}=v_{k+1}-v_{k},\ k=0,...,n\,\!} we can derive the following: v k + 1 − 2 v k + v k − 1 h 2 = λ v k v k + 1 − 2 v k + v k − 1 = h 2 λ v k ( v k + 1 − v k ) − ( v k − v k − 1 ) = h 2 λ v k w k − w k − 1 = h 2 λ v k = h 2 λ w k − 1 + h 2 λ v k − 1 = h 2 λ w k − 1 + w k − 1 − w k − 2 w k = ( 2 + h 2 λ ) w k − 1 − w k − 2 w k + 1 = ( 2 + h 2 λ ) w k − w k − 1 = 2 α w k − w k − 1 . {\displaystyle {\begin{alignedat}{2}{\frac {v_{k+1}-2v_{k}+v_{k-1}}{h^{2}}}&=\lambda v_{k}\\v_{k+1}-2v_{k}+v_{k-1}&=h^{2}\lambda v_{k}\\(v_{k+1}-v_{k})-(v_{k}-v_{k-1})&=h^{2}\lambda v_{k}\\w_{k}-w_{k-1}&=h^{2}\lambda v_{k}\\&=h^{2}\lambda w_{k-1}+h^{2}\lambda v_{k-1}\\&=h^{2}\lambda w_{k-1}+w_{k-1}-w_{k-2}\\w_{k}&=(2+h^{2}\lambda )w_{k-1}-w_{k-2}\\w_{k+1}&=(2+h^{2}\lambda )w_{k}-w_{k-1}\\&=2\alpha w_{k}-w_{k-1}.\end{alignedat}}} with w n = w 0 = 0 {\displaystyle w_{n}=w_{0}=0} being the boundary conditions. This is precisely the Dirichlet formula with n − 1 {\displaystyle n-1} interior grid points and grid spacing h {\displaystyle h} . As in the Dirichlet case above, assuming w 1 ≠ 0 {\displaystyle w_{1}\neq 0} , we get λ k = − 4 h 2 sin 2 ( k π 2 n ) , k = 1 , . . . , n − 1. {\displaystyle \lambda _{k}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {k\pi }{2n}}\right),\ k=1,...,n-1.} This gives us only n − 1 {\displaystyle n-1} eigenvalues, while the matrix has n {\displaystyle n} . If we drop the assumption that w 1 ≠ 0 {\displaystyle w_{1}\neq 0} , we find there is also a solution with v k = c o n s t a n t ∀ k = 0 , . . . , n + 1 , {\displaystyle v_{k}=\mathrm {constant} \ \forall \ k=0,...,n+1,} and this corresponds to eigenvalue 0 {\displaystyle 0} . Relabeling the indices in the formula above and combining with the zero eigenvalue, we obtain λ k = − 4 h 2 sin 2 ( ( k − 1 ) π 2 n ) , k = 1 , . . . , n . 
{\displaystyle \lambda _{k}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {(k-1)\pi }{2n}}\right),\ k=1,...,n.} === Dirichlet-Neumann Case === For the Dirichlet-Neumann case, we are solving v k + 1 − 2 v k + v k − 1 h 2 = λ v k , k = 1 , . . . , n , v 0 = v n + 0.5 ′ = 0. {\displaystyle {\frac {v_{k+1}-2v_{k}+v_{k-1}}{h^{2}}}=\lambda v_{k},\ k=1,...,n,\ v_{0}=v'_{n+0.5}=0.} , where v n + 0.5 ′ := v n + 1 − v n h . {\displaystyle v'_{n+0.5}:={\frac {v_{n+1}-v_{n}}{h}}.} We need to introduce auxiliary variables v j + 0.5 , j = 0 , . . . , n . {\displaystyle v_{j+0.5},\ j=0,...,n.} Consider the recurrence v k + 0.5 = 2 β v k − v k − 0.5 , for some β {\displaystyle v_{k+0.5}=2\beta v_{k}-v_{k-0.5},{\text{ for some }}\beta \,\!} . Also, we know v 0 = 0 {\displaystyle v_{0}=0} and assuming v 0.5 ≠ 0 {\displaystyle v_{0.5}\neq 0} , we can scale v 0.5 {\displaystyle v_{0.5}} so that v 0.5 = 1. {\displaystyle v_{0.5}=1.} We can also write v k = 2 β v k − 0.5 − v k − 1 {\displaystyle v_{k}=2\beta v_{k-0.5}-v_{k-1}\,\!} v k + 1 = 2 β v k + 0.5 − v k . {\displaystyle v_{k+1}=2\beta v_{k+0.5}-v_{k}.\,\!} Taking the correct combination of these three equations, we can obtain v k + 1 = ( 4 β 2 − 2 ) v k − v k − 1 . {\displaystyle v_{k+1}=(4\beta ^{2}-2)v_{k}-v_{k-1}.\,\!} And thus our new recurrence will solve our eigenvalue problem when h 2 λ + 2 = ( 4 β 2 − 2 ) . {\displaystyle h^{2}\lambda +2=(4\beta ^{2}-2).\,\!} Solving for λ {\displaystyle \lambda } we get λ = 4 ( β 2 − 1 ) h 2 . {\displaystyle \lambda ={\frac {4(\beta ^{2}-1)}{h^{2}}}.} Our new recurrence gives v n + 1 = U 2 n + 1 ( β ) , v n = U 2 n − 1 ( β ) , {\displaystyle v_{n+1}=U_{2n+1}(\beta ),\ v_{n}=U_{2n-1}(\beta ),\,\!} where U k ( β ) {\displaystyle U_{k}(\beta )} again is the kth Chebyshev polynomial of the 2nd kind. And combining with our Neumann boundary condition, we have U 2 n + 1 ( β ) − U 2 n − 1 ( β ) = 0. 
{\displaystyle U_{2n+1}(\beta )-U_{2n-1}(\beta )=0.\,\!} A well-known formula relates the Chebyshev polynomials of the first kind, T k ( β ) {\displaystyle T_{k}(\beta )} , to those of the second kind by U k ( β ) − U k − 2 ( β ) = T k ( β ) . {\displaystyle U_{k}(\beta )-U_{k-2}(\beta )=T_{k}(\beta ).\,\!} Thus our eigenvalues solve T 2 n + 1 ( β ) = 0 , λ = 4 ( β 2 − 1 ) h 2 . {\displaystyle T_{2n+1}(\beta )=0,\ \lambda ={\frac {4(\beta ^{2}-1)}{h^{2}}}.\,\!} The zeros of this polynomial are also known to be β k = cos ( π ( k − 0.5 ) 2 n + 1 ) , k = 1 , . . . , 2 n + 1 {\displaystyle \beta _{k}=\cos \left({\frac {\pi (k-0.5)}{2n+1}}\right),\ k=1,...,2n+1\,\!} And thus λ k = 4 h 2 [ cos 2 ( π ( k − 0.5 ) 2 n + 1 ) − 1 ] = − 4 h 2 sin 2 ( π ( k − 0.5 ) 2 n + 1 ) . {\displaystyle {\begin{alignedat}{2}\lambda _{k}&={\frac {4}{h^{2}}}\left[\cos ^{2}\left({\frac {\pi (k-0.5)}{2n+1}}\right)-1\right]\\&=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (k-0.5)}{2n+1}}\right).\end{alignedat}}} Note that there are 2n + 1 of these values, but only the first n + 1 are unique. The (n + 1)th value gives us the zero vector as an eigenvector with eigenvalue 0, which is trivial. This can be seen by returning to the original recurrence. So we consider only the first n of these values to be the n eigenvalues of the Dirichlet - Neumann problem. λ k = − 4 h 2 sin 2 ( π ( k − 0.5 ) 2 n + 1 ) , k = 1 , . . . , n . {\displaystyle \lambda _{k}=-{\frac {4}{h^{2}}}\sin ^{2}\left({\frac {\pi (k-0.5)}{2n+1}}\right),\ k=1,...,n.} == References ==
Wikipedia:Eikonal approximation#0
In theoretical physics, the eikonal approximation (Greek εἰκών for likeness, icon or image) is an approximative method useful in wave scattering equations, which occur in optics, seismology, quantum mechanics, quantum electrodynamics, and partial wave expansion. == Informal description == The main advantage that the eikonal approximation offers is that the equations reduce to a differential equation in a single variable. This reduction into a single variable is the result of the straight line approximation or the eikonal approximation, which allows us to choose the straight line as a special direction. == Relation to the WKB approximation == The early steps involved in the eikonal approximation in quantum mechanics are very closely related to the WKB approximation for one-dimensional waves. The WKB method, like the eikonal approximation, reduces the equations into a differential equation in a single variable. But the difficulty with the WKB approximation is that this variable is described by the trajectory of the particle which, in general, is complicated. == Formal description == Making use of WKB approximation we can write the wave function of the scattered system in terms of action S: Ψ = e i S / ℏ {\displaystyle \Psi =e^{iS/{\hbar }}} Inserting the wavefunction Ψ in the Schrödinger equation without the presence of a magnetic field we obtain − ℏ 2 2 m ∇ 2 Ψ = ( E − V ) Ψ {\displaystyle -{\frac {{\hbar }^{2}}{2m}}{\nabla }^{2}\Psi =(E-V)\Psi } − ℏ 2 2 m ∇ 2 e i S / ℏ = ( E − V ) e i S / ℏ {\displaystyle -{\frac {{\hbar }^{2}}{2m}}{\nabla }^{2}{e^{iS/{\hbar }}}=(E-V)e^{iS/{\hbar }}} 1 2 m ( ∇ S ) 2 − i ℏ 2 m ∇ 2 S = E − V {\displaystyle {\frac {1}{2m}}{(\nabla S)}^{2}-{\frac {i\hbar }{2m}}{\nabla }^{2}S=E-V} We write S as a power series in ħ S = S 0 + ℏ i S 1 + . . . 
{\displaystyle S=S_{0}+{\frac {\hbar }{i}}S_{1}+...} For the zeroth order: 1 2 m ( ∇ S 0 ) 2 = E − V {\displaystyle {\frac {1}{2m}}{(\nabla S_{0})}^{2}=E-V} If we consider the one-dimensional case then ∇ 2 → ∂ z 2 {\displaystyle {\nabla }^{2}\rightarrow {\partial _{z}}^{2}} . We obtain a differential equation with the boundary condition: S ( z = z 0 ) ℏ = k z 0 {\displaystyle {\frac {S(z=z_{0})}{\hbar }}=kz_{0}} for V → 0 {\displaystyle V\rightarrow 0} , z → − ∞ {\displaystyle z\rightarrow -\infty } . d d z S 0 ℏ = k 2 − 2 m V / ℏ 2 {\displaystyle {\frac {d}{dz}}{\frac {S_{0}}{\hbar }}={\sqrt {k^{2}-2mV/{\hbar }^{2}}}} S 0 ( z ) ℏ = k z − m ℏ 2 k ∫ − ∞ z V d z ′ {\displaystyle {\frac {S_{0}(z)}{\hbar }}=kz-{\frac {m}{{\hbar }^{2}k}}\int _{-\infty }^{z}{Vdz'}} == See also == Eikonal equation Correspondence principle Principle of least action == References == === Notes === K. V. Shajesh, Eikonal Approximation, Department of Physics and Astronomy, University of Oklahoma. === Further reading === R.R. Dubey (1995). Comparison of exact solution with Eikonal approximation for elastic heavy ion scattering (3rd ed.). NASA. W. Qian; H. Narumi; N. Daigaku. P. Kenkyūjo (1989). Eikonal approximation in partial wave version (3rd ed.). Nagoya. M. Lévy; J. Sucher (1969). "Eikonal Approximation in Quantum Field Theory". Phys. Rev. 186 (5). Maryland, USA: 1656–1670. Bibcode:1969PhRv..186.1656L. doi:10.1103/PhysRev.186.1656. I. T. Todorov (1970). "Quasipotential Equation Corresponding to the Relativistic Eikonal Approximation". Phys. Rev. D. 3 (10). New Jersey, USA: 2351–2356. Bibcode:1971PhRvD...3.2351T. doi:10.1103/PhysRevD.3.2351. Archived from the original on 2013-02-23. D.R. Harrington (1969). "Multiple Scattering, the Glauber Approximation, and the Off-Shell Eikonal Approximation". Phys. Rev. 184 (5). New Jersey, USA: 1745–1749. Bibcode:1969PhRv..184.1745H. doi:10.1103/PhysRev.184.1745.
Wikipedia:Eilenberg–Niven theorem#0
The Eilenberg–Niven theorem is a theorem that generalizes the fundamental theorem of algebra to quaternionic polynomials, that is, polynomials with quaternion coefficients and variables. It is due to Samuel Eilenberg and Ivan M. Niven. == Statement == Let P ( x ) = a 0 x a 1 x ⋯ x a n + φ ( x ) {\displaystyle P(x)=a_{0}xa_{1}x\cdots xa_{n}+\varphi (x)} where x, a0, a1, ... , an are non-zero quaternions and φ(x) is a finite sum of monomials similar to the first term but with degree less than n. Then P(x) = 0 has at least one solution. == Generalizations == If multiple monomials of the highest degree are permitted, the theorem does not hold, and P(x) = x + ixi + 1 = 0 is a counterexample with no solutions. The Eilenberg–Niven theorem can also be generalized to octonions: all octonionic polynomials with a unique monomial of highest degree have at least one solution, independently of the placement of parentheses (the octonions are a non-associative algebra). Unlike in the quaternionic case, however, monic and non-monic octonionic polynomials do not always have the same set of zeros. == References ==
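The counterexample can be checked by direct quaternion arithmetic: writing x = a + bi + cj + dk, a short computation gives ixi = −a − bi + cj + dk, so x + ixi + 1 = 1 + 2cj + 2dk, whose real part is always 1 and hence never zero. A sketch with a hand-rolled Hamilton product:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
one = np.array([1.0, 0.0, 0.0, 0.0])

# Evaluate P(x) = x + i x i + 1 at many random quaternions: the value is
# always 1 + 2c j + 2d k, so its real part is identically 1.
rng = np.random.default_rng(2)
residuals = np.array([x + qmul(qmul(i, x), i) + one
                      for x in rng.standard_normal((1000, 4))])
```

Every evaluated residual has norm at least 1, consistent with P having no zeros.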
Wikipedia:Eilenberg–Watts theorem#0
In mathematics, specifically homological algebra, the Eilenberg–Watts theorem tells when a functor between the categories of modules is given by an application of a tensor product. Precisely, it says that a functor F : M o d R → M o d S {\displaystyle F:\mathbf {Mod} _{R}\to \mathbf {Mod} _{S}} is additive, is right-exact and preserves coproducts if and only if it is of the form F ≃ − ⊗ R F ( R ) {\displaystyle F\simeq -\otimes _{R}F(R)} . For a proof, see The theorems of Eilenberg & Watts (Part 1) == References == Charles E. Watts, Intrinsic characterizations of some additive functors, Proc. Amer. Math. Soc. 11, 1960, 5–8. Samuel Eilenberg, Abstract description of some basic functors, J. Indian Math. Soc. (N.S.) 24, 1960, 231–234 (1961). == Further reading == Eilenberg-Watts theorem in nLab
Wikipedia:Eilon Solan#0
Eilon Solan (Hebrew: אילון סולן; born 1969) is an Israeli mathematician and professor at the School of Mathematical Sciences of Tel Aviv University. His research focuses on game theory, stochastic processes, and measure theory. == Biography == Solan obtained a B.Sc. in mathematics and computer science from the Hebrew University of Jerusalem in 1989, and an M.Sc. in mathematics from Tel Aviv University in 1993. He completed his doctorate at the Hebrew University of Jerusalem in 1998 under the supervision of Abraham Neyman, with a dissertation on stochastic games. == Scientific career == Solan was one of the inventors of CAPTCHA in 1997, along with Eran Reshef and Gili Raanan. Solan has written 12 research papers jointly with his son, Omri Nisan Solan. Some of these were published before Omri finished his undergraduate studies. == References == == External links == Media related to Eilon Solan at Wikimedia Commons Eilon Solan at the Mathematics Genealogy Project
Wikipedia:Eisenstein series#0
Eisenstein series, named after German mathematician Gotthold Eisenstein, are particular modular forms with infinite series expansions that may be written down directly. Originally defined for the modular group, Eisenstein series can be generalized in the theory of automorphic forms. == Eisenstein series for the modular group == Let τ be a complex number with strictly positive imaginary part. Define the holomorphic Eisenstein series G2k(τ) of weight 2k, where k ≥ 2 is an integer, by the following series: G 2 k ( τ ) = ∑ ( m , n ) ∈ Z 2 ∖ { ( 0 , 0 ) } 1 ( m + n τ ) 2 k . {\displaystyle G_{2k}(\tau )=\sum _{(m,n)\in \mathbb {Z} ^{2}\setminus \{(0,0)\}}{\frac {1}{(m+n\tau )^{2k}}}.} This series absolutely converges to a holomorphic function of τ in the upper half-plane and its Fourier expansion given below shows that it extends to a holomorphic function at τ = i∞. It is a remarkable fact that the Eisenstein series is a modular form. Indeed, the key property is its SL(2, Z {\displaystyle \mathbb {Z} } )-covariance. Explicitly if a, b, c, d ∈ Z {\displaystyle \mathbb {Z} } and ad − bc = 1 then G 2 k ( a τ + b c τ + d ) = ( c τ + d ) 2 k G 2 k ( τ ) {\displaystyle G_{2k}\left({\frac {a\tau +b}{c\tau +d}}\right)=(c\tau +d)^{2k}G_{2k}(\tau )} Note that k ≥ 2 is necessary such that the series converges absolutely, whereas k needs to be even otherwise the sum vanishes because the (-m, -n) and (m, n) terms cancel out. For k = 1 the series converges but it is not a modular form. == Relation to modular invariants == The modular invariants g2 and g3 of an elliptic curve are given by the first two Eisenstein series: g 2 = 60 G 4 g 3 = 140 G 6 . {\displaystyle {\begin{aligned}g_{2}&=60G_{4}\\g_{3}&=140G_{6}.\end{aligned}}} The article on modular invariants provides expressions for these two functions in terms of theta functions. == Recurrence relation == Any holomorphic modular form for the modular group can be written as a polynomial in G4 and G6. 
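The defining lattice sum can be spot-checked numerically, both against the SL(2, Z)-covariance property and against the Fourier expansion G4 = 2ζ(4)(1 + 240 Σ σ3(n)qⁿ) given later in this article. A rough sketch for weight 4, with the truncation radius and test points chosen ad hoc:

```python
import numpy as np

def G4_lattice(tau, N=300):
    """Weight-4 Eisenstein series via a truncated lattice sum over (m, n) != (0, 0)."""
    m, n = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    z = m + n * tau
    z[N, N] = 1.0                      # dummy entry replacing the excluded (0, 0)
    terms = 1.0 / z**4
    terms[N, N] = 0.0
    return terms.sum()

def G4_fourier(tau, nmax=50):
    """Fourier expansion 2*zeta(4)*(1 + 240*sum sigma_3(n)*q^n), q = exp(2*pi*i*tau)."""
    def sigma3(n):
        return sum(d**3 for d in range(1, n + 1) if n % d == 0)
    q = np.exp(2j * np.pi * tau)
    zeta4 = np.pi**4 / 90
    return 2 * zeta4 * (1 + 240 * sum(sigma3(n) * q**n for n in range(1, nmax + 1)))
```

The covariance check below uses the matrix (a, b; c, d) = (0, −1; 1, 0), for which the transformation law reads G4(−1/τ) = τ⁴ G4(τ); agreement is only up to the lattice-sum truncation error.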
Specifically, the higher order G2k can be written in terms of G4 and G6 through a recurrence relation. Let dk = (2k + 3)k! G2k + 4, so for example, d0 = 3G4 and d1 = 5G6. Then the dk satisfy the relation ∑ k = 0 n ( n k ) d k d n − k = 2 n + 9 3 n + 6 d n + 2 {\displaystyle \sum _{k=0}^{n}{n \choose k}d_{k}d_{n-k}={\frac {2n+9}{3n+6}}d_{n+2}} for all n ≥ 0. Here, ( n k ) {\displaystyle n \choose k} is the binomial coefficient. The dk occur in the series expansion for the Weierstrass's elliptic functions: ℘ ( z ) = 1 z 2 + z 2 ∑ k = 0 ∞ d k z 2 k k ! = 1 z 2 + ∑ k = 1 ∞ ( 2 k + 1 ) G 2 k + 2 z 2 k . {\displaystyle {\begin{aligned}\wp (z)&={\frac {1}{z^{2}}}+z^{2}\sum _{k=0}^{\infty }{\frac {d_{k}z^{2k}}{k!}}\\&={\frac {1}{z^{2}}}+\sum _{k=1}^{\infty }(2k+1)G_{2k+2}z^{2k}.\end{aligned}}} == Fourier series == Define q = e2πiτ. (Some older books define q to be the nome q = eπiτ, but q = e2πiτ is now standard in number theory.) Then the Fourier series of the Eisenstein series is G 2 k ( τ ) = 2 ζ ( 2 k ) ( 1 + c 2 k ∑ n = 1 ∞ σ 2 k − 1 ( n ) q n ) {\displaystyle G_{2k}(\tau )=2\zeta (2k)\left(1+c_{2k}\sum _{n=1}^{\infty }\sigma _{2k-1}(n)q^{n}\right)} where the coefficients c2k are given by c 2 k = ( 2 π i ) 2 k ( 2 k − 1 ) ! ζ ( 2 k ) = − 4 k B 2 k = 2 ζ ( 1 − 2 k ) . {\displaystyle {\begin{aligned}c_{2k}&={\frac {(2\pi i)^{2k}}{(2k-1)!\zeta (2k)}}\\[4pt]&={\frac {-4k}{B_{2k}}}={\frac {2}{\zeta (1-2k)}}.\end{aligned}}} Here, Bn are the Bernoulli numbers, ζ(z) is Riemann's zeta function and σp(n) is the divisor sum function, the sum of the pth powers of the divisors of n. In particular, one has G 4 ( τ ) = π 4 45 ( 1 + 240 ∑ n = 1 ∞ σ 3 ( n ) q n ) G 6 ( τ ) = 2 π 6 945 ( 1 − 504 ∑ n = 1 ∞ σ 5 ( n ) q n ) . 
{\displaystyle {\begin{aligned}G_{4}(\tau )&={\frac {\pi ^{4}}{45}}\left(1+240\sum _{n=1}^{\infty }\sigma _{3}(n)q^{n}\right)\\[4pt]G_{6}(\tau )&={\frac {2\pi ^{6}}{945}}\left(1-504\sum _{n=1}^{\infty }\sigma _{5}(n)q^{n}\right).\end{aligned}}} The summation over q can be resummed as a Lambert series; that is, one has ∑ n = 1 ∞ q n σ a ( n ) = ∑ n = 1 ∞ n a q n 1 − q n {\displaystyle \sum _{n=1}^{\infty }q^{n}\sigma _{a}(n)=\sum _{n=1}^{\infty }{\frac {n^{a}q^{n}}{1-q^{n}}}} for arbitrary complex |q| < 1 and a. When working with the q-expansion of the Eisenstein series, this alternate notation is frequently introduced: E 2 k ( τ ) = G 2 k ( τ ) 2 ζ ( 2 k ) = 1 + 2 ζ ( 1 − 2 k ) ∑ n = 1 ∞ n 2 k − 1 q n 1 − q n = 1 − 4 k B 2 k ∑ n = 1 ∞ σ 2 k − 1 ( n ) q n = 1 − 4 k B 2 k ∑ d , n ≥ 1 n 2 k − 1 q n d . {\displaystyle {\begin{aligned}E_{2k}(\tau )&={\frac {G_{2k}(\tau )}{2\zeta (2k)}}\\&=1+{\frac {2}{\zeta (1-2k)}}\sum _{n=1}^{\infty }{\frac {n^{2k-1}q^{n}}{1-q^{n}}}\\&=1-{\frac {4k}{B_{2k}}}\sum _{n=1}^{\infty }\sigma _{2k-1}(n)q^{n}\\&=1-{\frac {4k}{B_{2k}}}\sum _{d,n\geq 1}n^{2k-1}q^{nd}.\end{aligned}}} == Identities involving Eisenstein series == === As theta functions === Source: Given q = e2πiτ, let E 4 ( τ ) = 1 + 240 ∑ n = 1 ∞ n 3 q n 1 − q n E 6 ( τ ) = 1 − 504 ∑ n = 1 ∞ n 5 q n 1 − q n E 8 ( τ ) = 1 + 480 ∑ n = 1 ∞ n 7 q n 1 − q n {\displaystyle {\begin{aligned}E_{4}(\tau )&=1+240\sum _{n=1}^{\infty }{\frac {n^{3}q^{n}}{1-q^{n}}}\\E_{6}(\tau )&=1-504\sum _{n=1}^{\infty }{\frac {n^{5}q^{n}}{1-q^{n}}}\\E_{8}(\tau )&=1+480\sum _{n=1}^{\infty }{\frac {n^{7}q^{n}}{1-q^{n}}}\end{aligned}}} and define the Jacobi theta functions which normally uses the nome eπiτ, a = θ 2 ( 0 ; e π i τ ) = ϑ 10 ( 0 ; τ ) b = θ 3 ( 0 ; e π i τ ) = ϑ 00 ( 0 ; τ ) c = θ 4 ( 0 ; e π i τ ) = ϑ 01 ( 0 ; τ ) {\displaystyle {\begin{aligned}a&=\theta _{2}\left(0;e^{\pi i\tau }\right)=\vartheta _{10}(0;\tau )\\b&=\theta _{3}\left(0;e^{\pi i\tau }\right)=\vartheta _{00}(0;\tau )\\c&=\theta 
_{4}\left(0;e^{\pi i\tau }\right)=\vartheta _{01}(0;\tau )\end{aligned}}} where θm and ϑij are alternative notations. Then we have the symmetric relations, E 4 ( τ ) = 1 2 ( a 8 + b 8 + c 8 ) E 6 ( τ ) = 1 2 ( a 8 + b 8 + c 8 ) 3 − 54 ( a b c ) 8 2 E 8 ( τ ) = 1 2 ( a 16 + b 16 + c 16 ) = a 8 b 8 + a 8 c 8 + b 8 c 8 {\displaystyle {\begin{aligned}E_{4}(\tau )&={\tfrac {1}{2}}\left(a^{8}+b^{8}+c^{8}\right)\\[4pt]E_{6}(\tau )&={\tfrac {1}{2}}{\sqrt {\frac {\left(a^{8}+b^{8}+c^{8}\right)^{3}-54(abc)^{8}}{2}}}\\[4pt]E_{8}(\tau )&={\tfrac {1}{2}}\left(a^{16}+b^{16}+c^{16}\right)=a^{8}b^{8}+a^{8}c^{8}+b^{8}c^{8}\end{aligned}}} Basic algebra immediately implies E 4 3 − E 6 2 = 27 4 ( a b c ) 8 {\displaystyle E_{4}^{3}-E_{6}^{2}={\tfrac {27}{4}}(abc)^{8}} an expression related to the modular discriminant, Δ = g 2 3 − 27 g 3 2 = ( 2 π ) 12 ( 1 2 a b c ) 8 {\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}=(2\pi )^{12}\left({\tfrac {1}{2}}abc\right)^{8}} The third symmetric relation, on the other hand, is a consequence of E 4 2 = E 8 and a4 − b4 + c4 = 0. === Products of Eisenstein series === Eisenstein series form the most explicit examples of modular forms for the full modular group SL(2, Z {\displaystyle \mathbb {Z} } ). Since the space of modular forms of weight 2k has dimension 1 for 2k = 4, 6, 8, 10, 14, different products of Eisenstein series having those weights have to be equal up to a scalar multiple. In fact, we obtain the identities: E 4 2 = E 8 , E 4 E 6 = E 10 , E 4 E 10 = E 14 , E 6 E 8 = E 14 . 
{\displaystyle E_{4}^{2}=E_{8},\quad E_{4}E_{6}=E_{10},\quad E_{4}E_{10}=E_{14},\quad E_{6}E_{8}=E_{14}.} Using the q-expansions of the Eisenstein series given above, they may be restated as identities involving the sums of powers of divisors: ( 1 + 240 ∑ n = 1 ∞ σ 3 ( n ) q n ) 2 = 1 + 480 ∑ n = 1 ∞ σ 7 ( n ) q n , {\displaystyle \left(1+240\sum _{n=1}^{\infty }\sigma _{3}(n)q^{n}\right)^{2}=1+480\sum _{n=1}^{\infty }\sigma _{7}(n)q^{n},} hence σ 7 ( n ) = σ 3 ( n ) + 120 ∑ m = 1 n − 1 σ 3 ( m ) σ 3 ( n − m ) , {\displaystyle \sigma _{7}(n)=\sigma _{3}(n)+120\sum _{m=1}^{n-1}\sigma _{3}(m)\sigma _{3}(n-m),} and similarly for the others. The theta function of an eight-dimensional even unimodular lattice Γ is a modular form of weight 4 for the full modular group, which gives the following identities: θ Γ ( τ ) = 1 + ∑ n = 1 ∞ r Γ ( 2 n ) q n = E 4 ( τ ) , r Γ ( n ) = 240 σ 3 ( n ) {\displaystyle \theta _{\Gamma }(\tau )=1+\sum _{n=1}^{\infty }r_{\Gamma }(2n)q^{n}=E_{4}(\tau ),\qquad r_{\Gamma }(n)=240\sigma _{3}(n)} for the number rΓ(n) of vectors of the squared length 2n in the root lattice of type E8. Similar techniques involving holomorphic Eisenstein series twisted by a Dirichlet character produce formulas for the number of representations of a positive integer n as a sum of two, four, or eight squares in terms of the divisors of n. Using the above recurrence relation, all higher E2k can be expressed as polynomials in E4 and E6. 
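The first of these divisor-sum identities is easy to verify computationally. The following Python sketch (illustrative; the helper `sigma` and the range tested are choices made here, not part of the article) checks σ7(n) = σ3(n) + 120 Σ σ3(m)σ3(n − m) by brute force:

```python
# Check the divisor-sum identity obtained from comparing q-expansions of
# E4^2 and E8:
#   sigma_7(n) = sigma_3(n) + 120 * sum_{m=1}^{n-1} sigma_3(m) * sigma_3(n-m)

def sigma(p, n):
    """Sum of the p-th powers of the positive divisors of n."""
    return sum(d ** p for d in range(1, n + 1) if n % d == 0)

for n in range(1, 30):
    lhs = sigma(7, n)
    rhs = sigma(3, n) + 120 * sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert lhs == rhs, n

print("identity verified for n = 1..29")
```

The same pattern verifies the other convolution identities obtained from E4E6 = E10 and its companions.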
For example: E 8 = E 4 2 E 10 = E 4 ⋅ E 6 691 ⋅ E 12 = 441 ⋅ E 4 3 + 250 ⋅ E 6 2 E 14 = E 4 2 ⋅ E 6 3617 ⋅ E 16 = 1617 ⋅ E 4 4 + 2000 ⋅ E 4 ⋅ E 6 2 43867 ⋅ E 18 = 38367 ⋅ E 4 3 ⋅ E 6 + 5500 ⋅ E 6 3 174611 ⋅ E 20 = 53361 ⋅ E 4 5 + 121250 ⋅ E 4 2 ⋅ E 6 2 77683 ⋅ E 22 = 57183 ⋅ E 4 4 ⋅ E 6 + 20500 ⋅ E 4 ⋅ E 6 3 236364091 ⋅ E 24 = 49679091 ⋅ E 4 6 + 176400000 ⋅ E 4 3 ⋅ E 6 2 + 10285000 ⋅ E 6 4 {\displaystyle {\begin{aligned}E_{8}&=E_{4}^{2}\\E_{10}&=E_{4}\cdot E_{6}\\691\cdot E_{12}&=441\cdot E_{4}^{3}+250\cdot E_{6}^{2}\\E_{14}&=E_{4}^{2}\cdot E_{6}\\3617\cdot E_{16}&=1617\cdot E_{4}^{4}+2000\cdot E_{4}\cdot E_{6}^{2}\\43867\cdot E_{18}&=38367\cdot E_{4}^{3}\cdot E_{6}+5500\cdot E_{6}^{3}\\174611\cdot E_{20}&=53361\cdot E_{4}^{5}+121250\cdot E_{4}^{2}\cdot E_{6}^{2}\\77683\cdot E_{22}&=57183\cdot E_{4}^{4}\cdot E_{6}+20500\cdot E_{4}\cdot E_{6}^{3}\\236364091\cdot E_{24}&=49679091\cdot E_{4}^{6}+176400000\cdot E_{4}^{3}\cdot E_{6}^{2}+10285000\cdot E_{6}^{4}\end{aligned}}} Many relationships between products of Eisenstein series can be written in an elegant way using Hankel determinants, e.g. Garvan's identity ( Δ ( 2 π ) 12 ) 2 = − 691 1728 2 ⋅ 250 det | E 4 E 6 E 8 E 6 E 8 E 10 E 8 E 10 E 12 | {\displaystyle \left({\frac {\Delta }{(2\pi )^{12}}}\right)^{2}=-{\frac {691}{1728^{2}\cdot 250}}\det {\begin{vmatrix}E_{4}&E_{6}&E_{8}\\E_{6}&E_{8}&E_{10}\\E_{8}&E_{10}&E_{12}\end{vmatrix}}} where Δ = ( 2 π ) 12 E 4 3 − E 6 2 1728 {\displaystyle \Delta =(2\pi )^{12}{\frac {E_{4}^{3}-E_{6}^{2}}{1728}}} is the modular discriminant. === Ramanujan identities === Srinivasa Ramanujan gave several interesting identities between the first few Eisenstein series involving differentiation. 
Let L ( q ) = 1 − 24 ∑ n = 1 ∞ n q n 1 − q n = E 2 ( τ ) M ( q ) = 1 + 240 ∑ n = 1 ∞ n 3 q n 1 − q n = E 4 ( τ ) N ( q ) = 1 − 504 ∑ n = 1 ∞ n 5 q n 1 − q n = E 6 ( τ ) , {\displaystyle {\begin{aligned}L(q)&=1-24\sum _{n=1}^{\infty }{\frac {nq^{n}}{1-q^{n}}}&&=E_{2}(\tau )\\M(q)&=1+240\sum _{n=1}^{\infty }{\frac {n^{3}q^{n}}{1-q^{n}}}&&=E_{4}(\tau )\\N(q)&=1-504\sum _{n=1}^{\infty }{\frac {n^{5}q^{n}}{1-q^{n}}}&&=E_{6}(\tau ),\end{aligned}}} then q d L d q = L 2 − M 12 q d M d q = L M − N 3 q d N d q = L N − M 2 2 . {\displaystyle {\begin{aligned}q{\frac {dL}{dq}}&={\frac {L^{2}-M}{12}}\\q{\frac {dM}{dq}}&={\frac {LM-N}{3}}\\q{\frac {dN}{dq}}&={\frac {LN-M^{2}}{2}}.\end{aligned}}} These identities, like the identities between the series, yield arithmetical convolution identities involving the sum-of-divisor function. Following Ramanujan, to put these identities in the simplest form it is necessary to extend the domain of σp(n) to include zero, by setting σ p ( 0 ) = 1 2 ζ ( − p ) ⟹ σ ( 0 ) = − 1 24 σ 3 ( 0 ) = 1 240 σ 5 ( 0 ) = − 1 504 . {\displaystyle {\begin{aligned}\sigma _{p}(0)={\tfrac {1}{2}}\zeta (-p)\quad \Longrightarrow \quad \sigma (0)&=-{\tfrac {1}{24}}\\\sigma _{3}(0)&={\tfrac {1}{240}}\\\sigma _{5}(0)&=-{\tfrac {1}{504}}.\end{aligned}}} Then, for example ∑ k = 0 n σ ( k ) σ ( n − k ) = 5 12 σ 3 ( n ) − 1 2 n σ ( n ) . {\displaystyle \sum _{k=0}^{n}\sigma (k)\sigma (n-k)={\tfrac {5}{12}}\sigma _{3}(n)-{\tfrac {1}{2}}n\sigma (n).} Other identities of this type, but not directly related to the preceding relations between L, M and N functions, have been proved by Ramanujan and Giuseppe Melfi, as for example ∑ k = 0 n σ 3 ( k ) σ 3 ( n − k ) = 1 120 σ 7 ( n ) ∑ k = 0 n σ ( 2 k + 1 ) σ 3 ( n − k ) = 1 240 σ 5 ( 2 n + 1 ) ∑ k = 0 n σ ( 3 k + 1 ) σ ( 3 n − 3 k + 1 ) = 1 9 σ 3 ( 3 n + 2 ) . 
{\displaystyle {\begin{aligned}\sum _{k=0}^{n}\sigma _{3}(k)\sigma _{3}(n-k)&={\tfrac {1}{120}}\sigma _{7}(n)\\\sum _{k=0}^{n}\sigma (2k+1)\sigma _{3}(n-k)&={\tfrac {1}{240}}\sigma _{5}(2n+1)\\\sum _{k=0}^{n}\sigma (3k+1)\sigma (3n-3k+1)&={\tfrac {1}{9}}\sigma _{3}(3n+2).\end{aligned}}} == Generalizations == Automorphic forms generalize the idea of modular forms for general Lie groups, and Eisenstein series generalize in a similar fashion. Defining OK to be the ring of integers of a totally real algebraic number field K, one then defines the Hilbert–Blumenthal modular group as PSL(2,OK). One can then associate an Eisenstein series to every cusp of the Hilbert–Blumenthal modular group. == References == == Further reading == Akhiezer, Naum Illyich (1970). Elements of the Theory of Elliptic Functions (in Russian). Moscow. Translated into English as Elements of the Theory of Elliptic Functions. AMS Translations of Mathematical Monographs 79. Providence, RI: American Mathematical Society. 1990. ISBN 0-8218-4532-2. Apostol, Tom M. (1990). Modular Functions and Dirichlet Series in Number Theory (2nd ed.). New York, NY: Springer. ISBN 0-387-97127-0. Chan, Heng Huat; Ong, Yau Lin (1999). "On Eisenstein Series" (PDF). Proc. Amer. Math. Soc. 127 (6): 1735–1744. doi:10.1090/S0002-9939-99-04832-7. Iwaniec, Henryk (2002). Spectral Methods of Automorphic Forms. Graduate Studies in Mathematics 53 (2nd ed.). Providence, RI: American Mathematical Society. ch. 3. ISBN 0-8218-3160-7. Serre, Jean-Pierre (1973). A Course in Arithmetic. Graduate Texts in Mathematics 7 (transl. ed.). New York & Heidelberg: Springer-Verlag. ISBN 9780387900407.
|
Wikipedia:Eitan Tadmor#0
|
Eitan Tadmor (Hebrew: איתן תדמור; born May 4, 1954) is a distinguished university professor at the University of Maryland, College Park. His work has featured contributions to the theory and computation of partial differential equations, with diverse applications to shock waves, kinetic transport, incompressible flows, image processing, and self-organized collective dynamics. == Academic biography == Tadmor completed his mathematical studies (BSc, 1973, MSc, 1975, PhD, 1978) at Tel-Aviv University. In 1980–1982 he was a Bateman Research Instructor at Caltech. He returned to his alma mater, and held professorship positions at Tel-Aviv University during 1983–1998, where he chaired the Department of Applied Mathematics (1991–1993). He moved to UCLA (1995–2002), where he was the founding co-director of the NSF Institute for Pure and Applied Mathematics (IPAM) (1999–2001). In 2002 he joined the University of Maryland, College Park, serving as the founding director of the university's Center for Scientific Computation and Mathematical Modeling (CSCAMM) (2002–2016). He is on the faculty of the Department of Mathematics, the Institute for Physical Sciences and Technology and CSCAMM. In 2012 he was awarded, as PI, the NSF research network "Kinetic Description of Emerging Challenges in Natural Sciences" (KI-Net) (2012–2018). == Research contributions == Tadmor has made contributions to the development of high-resolution methods for nonlinear conservation laws, introducing the classes of central schemes, entropy stable schemes and spectral viscosity methods. He was involved in work on kinetic theories and critical threshold phenomena in nonlinear transport models. He introduced novel ideas of multi-scale hierarchical descriptions of images, and is leading an interdisciplinary program on self-collective dynamics with applications to flocking and opinion dynamics. Tadmor has been an adviser to more than 30 PhD students and postdoctoral fellows. 
== Honors == Tadmor was included in the 2003 ISI list of most-cited researchers in mathematics. He gave an invited lecture at the 2002 International Congress of Mathematicians (ICM) in Beijing, plenary addresses at the international conferences on hyperbolic problems (Zürich 1990 and Beijing 1998) and at the 2008 Foundations of Computational Mathematics meeting in Hong Kong, and the SIAM invited address at the 2014 Joint Mathematics Meetings in Baltimore. In 2012 he was in the inaugural class of Fellows of the American Mathematical Society. In 2015 he was awarded the SIAM-ETH Henrici prize for "original, broad and fundamental contributions to the applied and numerical analysis of nonlinear differential equations and their applications in areas such as fluid dynamics, image processing and social dynamics". He was named a SIAM Fellow in the 2021 class of fellows, "for original, broad, and fundamental contributions to applied and computational mathematics, including conservation laws, kinetics, image processing, and social dynamics". In 2022 he was awarded the Norbert Wiener Prize in Applied Mathematics and delivered the 2022 AMS Josiah Willard Gibbs Lecture. == References == == External links == Tadmor's home page at University of Maryland, College Park Eitan Tadmor at the Mathematics Genealogy Project
|
Wikipedia:El Nombre#0
|
El Nombre is a children's educational programme about an anthropomorphic Mexican gerbil character, originally from a series of educational sketches on Numbertime, the BBC schools programme about mathematics. He was also the only character to appear in all Numbertime episodes. His voice was provided by Steve Steen, while the other characters' voices were provided by Sophie Aldred, Kate Robbins, and (from 1999) former Blue Peter host Janet Ellis. For the ninth (and final) series of Numbertime in 2001, Michael Fenton-Stevens also provided voices of certain other characters in the El Nombre sketches. The character's name means "The Name" in Spanish, not "The Number", which would be "El Número", but El Nombre does mean "The Number" in Catalan. == Setting == El Nombre is set in the fictional town of Santa Flamingo (originally known as Santo Flamingo), home of Little Juan, his Mama, Pedro Gonzales, Juanita Conchita, Maria Consuela Tequila Chiquita, Little Pepita Consuela Tequila Chiquita, Tanto the tarantula, Señor Gelato the ice-cream seller, Leonardo de Sombrero the pizza delivery boy, Señor Calculo the bank manager, Señor Manuel the greengrocer, Miss Constanza Bonanza the school teacher, Señora Fedora the balloon seller and mayor, Señor Loco the steam engine driver, Señor Chipito the carpenter and the local bandit Don Fandango (although it was not actually given a name until the fifth series of Numbertime premiered in January 1998); whenever he was needed, El Nombre swung into action to solve the townspeople's simple mathematical problems, usually talking in rhyme. His character was a parody of the fictional hero Zorro, wearing a similar black cowl mask and huge sombrero, appearing unexpectedly to save the townsfolk from injustice, and generally swinging around on his bullwhip – however, unlike Zorro, he was often quite inept (in fact, on one occasion, Tanto tipped a bucket of water onto him after he made him reenact the Incy Wincy Spider rhyme). 
When El Nombre first appeared on Numbertime in 1993, his purpose was merely to write numbers in the desert sand and demonstrate the correct ways to form them as his four-piece mariachi band played The Mexican Hat Dance (and said "Again!" once he had finished, as it gave them an excuse to play again); this was shot from an angle directly overhead leaving El Nombre almost completely eclipsed by his large sombrero. His appeal was instant and his success prompted rapid development of his role in the series (as from the second series in 1995, he was given two sketches per episode) – and since his basic beginning, El Nombre went on to appear in a total of 79 (89, if counting those from the "revised" version of the first series) sketches on Numbertime before gaining a series of his own, acquiring dramatic storylines and a full cast of characters, while continuing to demonstrate mathematical concepts, albeit in a dramatic and entertaining way. The stories moved away from solving simple mathematical equations to fighting petty crime, unrelated to the number-solving which made his name and for which he was created. As well as being popular with schoolchildren, El Nombre also developed a cult following amongst students and parents, because of the many references to classic spaghetti Westerns; indeed, his popularity grew so much that in March 2004, the BBC released a 3-minute El Nombre theme song as a single. == Characters == El Nombre: The eponymous main character of the Numbertime sketches, and the spin-off series they began, El Nombre started his life as an adaptation of Words and Pictures' Magic Pencil (in the sense of showing the viewer how to write numbers as opposed to letters); after showing Juan how to draw from one to ten, he went on to show him how to identify (and draw) shapes, as well as teach him about instances of space and position, addition and subtraction, time and money in his everyday life. 
Little Juan: A young gerbil whose name was a pun on "little one", Juan started his life being upset about not being able to write numbers and cried until El Nombre arrived to show him how to do it; after learning to draw from one to ten, he could not identify shapes and was despondent about it until El Nombre arrived (first to show him instances of the shapes around the town and second to draw them). He and his friends then got themselves into various dilemmas in their everyday lives, which El Nombre was called on to help them out of. Mama: Little Juan's mother, who usually could not assist Juan with his various mathematical dilemmas until El Nombre had arrived. Pedro Gonzales & Juanita Conchita: Two of Juan's friends, who first appeared in the second series of Numbertime, but were not named until the third; on one occasion in the third series, Pedro professed to be "the greatest goalkeeper in the world" when Juan could not score past him, and on one occasion in the fourth series, Juan accidentally blew up Juanita's balloon ten times causing it to burst. Señor Chipito: The town carpenter who first appeared in the second series of Numbertime as the owner of The Maggot and Cactus Saloon, but was not named and given his present occupation until the sixth; on one occasion in that series, Juan and Pedro had to take a wheel from Señor Gelato's ice-cream tricycle to him for repairs, as it struck a three-legged table that they were already taking to him. Señor Manuel: The town greengrocer, who first appeared in the second series of Numbertime but was not named until the fourth; the store he ran was called "Hurrell's" (which was an inside reference to the BBC's then-current education officer in 1995, Su Hurrell). 
Tanto: Little Juan's pet tarantula spider, who was introduced in the third series of Numbertime; he communicated by mumbling, and on one occasion in the fifth series, Pedro bet Juan he could find a spider who was faster than him (the one he found was mechanical, reflected by the key for winding on its back) and challenged him to a race around the then-newly named town against it, which Tanto won. Maria Consuela Tequila Chiquita: Another of Juan's friends, who was introduced in the third series of Numbertime, and did not appear in as many sketches as Pedro or Juanita; in the seventh series, her younger sister (named Pepita) started at San Flamingo School. Señora Fedora: The town balloon seller, who was introduced at the end of the fourth series of Numbertime, but was later shown to be its mayor as well in the seventh one after she opened its fifteenth annual Egg Festival and chose Mama to make its giant omelette. Miss Constanza Bonanza: The teacher for San Flamingo School, who was introduced in the fifth series of Numbertime (as was the school itself); in the eighth series she got married and Juan was responsible for the school collection with which to buy her a present. Delietta Smith: A television cook who was introduced in the fifth series of Numbertime; known as The Great Delietta and a spoof of Delia Smith, Mama once tried to make her omelette with red and green peppers (but could not, so El Nombre had to help her). Señor Gelato: The town ice-cream seller, who was introduced in the sixth series of Numbertime; on one occasion in that series he swerved on his tricycle to avoid striking Juan and Pedro (who were playing football), and crashed into Señor Manuel's tomato display. Señor Calculo & Don Fandango: The town bank manager and bandit, who were introduced in the sixth series of Numbertime; on one occasion in that series, Don Fandango stole twenty gold coins from the bank (but Tanto bit a hole in his bag, causing them to fall out). 
Pepita Consuela Tequila Chiquita: Maria's younger sister who started San Flamingo School in the seventh series of Numbertime. Leonardo de Sombrero: The town pizza delivery boy, who was introduced in the eighth series of Numbertime; his name is a spoof on that of Leonardo da Vinci, and once, he delivered a pizza to Juan and his friends when they were having a horror movie sleepover. Señor Loco: The town's steam engine driver, who was introduced in the eighth series of Numbertime; his name is a reference to the fact "loco" is short for locomotive, and once, he took Juan's class to the Santo Flamingo National Park to see the Giant Cactus. Señor Singalotti: A famous opera singer (who only appeared in the fourteenth episode of the spin-off series, "Going for a Song"). El Presidente: The president (who visited the town in the twenty-fifth episode of the spin-off series, "A Very Important Visit"). A gerbil named Pablo also appeared in the ninth series of Numbertime after Juan entered a competition on Radio Flamingo to win a holiday to the seaside resort of Costa Fortuna and won; Juan, Mama, Pedro, Juanita and Maria met him when they arrived at the resort's hotel (because he was their guide to it), and he went on to front a ring-toss stall when they visited its fairground the following week. == Episode list == Although none of the El Nombre sketches on Numbertime ever had a specific title, those of the first series were introduced by an announcer as "Episodes 1-10" (and they were slightly lengthened for the "revised" edition of that series, in September 1998; the third line of the opening song and his farewell catchphrase were also changed several times, to reflect the series' focus). All twenty-six episodes of the spin-off El Nombre series (thirteen in 2001 and a further thirteen in 2003), however, were titled – and their names are listed here. 
=== Series 1 (2001) === The first six episodes of the first series were aired on BBC One as double bills in the CBBC strand on Fridays at 3:45 pm, while the next seven were aired individually on Wednesdays in the same timeslot; three episodes were later repeated on BBC Two as part of the CBBC Breakfast Show on 1 June, 19 July and 20 July 2001, but neither they nor the other ten episodes of the series were repeated after that. === Series 2 (2003) === The second series was aired as double bills with in-vision sign language on the CBeebies Channel on Saturdays and Sundays at 3:30 pm; after the last episode aired on 29 November, the first one was immediately repeated again, and the series concluded its second consecutive run in the same timeslot on 4 January 2004. All thirteen episodes were later repeated without signing on BBC Two in the CBeebies strand on Wednesdays from 7 January to 31 March 2004. == DVD release == In October 2005, all twenty-six episodes were released on DVD by Maverick Entertainment; the first ten were previously released on a VHS entitled El Nombre to the Rescue by BBC Worldwide in 2001, which also featured an exclusive short (entitled Learn Your Numbers With Little Juan, and edited together from the El Nombre sketches of the "original" first series of Numbertime). Some of the El Nombre (and cell-animated) sketches of the "revised" first, second and fifth, and fourth series of Numbertime were also released by BBC Active in 2009 on three DVDs entitled Fun with Numbers – which all came with accompanying books featuring the characters, and were subtitled Counting 1 to 10, Shapes and Time (the featured sketches were mostly from the second series), and Adding and Taking Away respectively. 
== Credits == Written by: Christopher Lillicrap Original designs: Ealing Animation Voices by Steven Steen, Kate Robbins, Sophie Aldred and Janet Ellis Models: Fin Leadbitter, Humphrey Leadbitter and Katy Maxwell Props by Graeme Owen, Fin Leadbitter, Sophie Brown and Katy Maxwell Sets by Graeme Owen, Colin Armitage, Sophie Brown and Humphrey Leadbitter Animation by Humphrey Leadbitter, Tim Allen, Chris Mendham and Dan Sharp Editing and Special Effects by David Brylewski Facilities by Oasis Television Theme Tune Composed by Christopher Lillicrap Music and Effects by Steve Marshall Sound by Adrian Sear Executive Producer: Theresa Plummer-Andrews Produced by Jilly Joseph and Richard Randolph Directed by Geoff Walker An Ealing Animation production for BBC Worldwide == References == == External links == El Nombre at IMDb El Nombre at Toonhound.com
|
Wikipedia:Elasticity of a function#0
|
In mathematics, the elasticity or point elasticity of a positive differentiable function f of a positive variable (positive input, positive output) at point a is defined as E f ( a ) = a f ( a ) f ′ ( a ) {\displaystyle Ef(a)={\frac {a}{f(a)}}f'(a)} = lim x → a f ( x ) − f ( a ) x − a a f ( a ) = lim x → a f ( x ) − f ( a ) f ( a ) a x − a = lim x → a f ( x ) f ( a ) − 1 x a − 1 ≈ % Δ f ( a ) % Δ a {\displaystyle =\lim _{x\to a}{\frac {f(x)-f(a)}{x-a}}{\frac {a}{f(a)}}=\lim _{x\to a}{\frac {f(x)-f(a)}{f(a)}}{\frac {a}{x-a}}=\lim _{x\to a}{\frac {{\frac {f(x)}{f(a)}}-1}{{\frac {x}{a}}-1}}\approx {\frac {\%\Delta f(a)}{\%\Delta a}}} or equivalently E f ( x ) = d log f ( x ) d log x . {\displaystyle Ef(x)={\frac {d\log f(x)}{d\log x}}.} It is thus the ratio of the relative (percentage) change in the function's output f ( x ) {\displaystyle f(x)} with respect to the relative change in its input x {\displaystyle x} , for infinitesimal changes from a point ( a , f ( a ) ) {\displaystyle (a,f(a))} . Equivalently, it is the ratio of the infinitesimal change of the logarithm of a function with respect to the infinitesimal change of the logarithm of the argument. Generalizations to multi-input–multi-output cases also exist in the literature. The elasticity of a function is a constant α {\displaystyle \alpha } if and only if the function has the form f ( x ) = C x α {\displaystyle f(x)=Cx^{\alpha }} for a constant C > 0 {\displaystyle C>0} . The elasticity at a point is the limit of the arc elasticity between two points as the separation between those two points approaches zero. The concept of elasticity is widely used in economics and metabolic control analysis (MCA); see elasticity (economics) and elasticity coefficient respectively for details. == Rules == Rules for finding the elasticity of products and quotients are simpler than those for derivatives. Let f, g be differentiable. 
Then E ( f ( x ) ⋅ g ( x ) ) = E f ( x ) + E g ( x ) {\displaystyle E(f(x)\cdot g(x))=Ef(x)+Eg(x)} E f ( x ) g ( x ) = E f ( x ) − E g ( x ) {\displaystyle E{\frac {f(x)}{g(x)}}=Ef(x)-Eg(x)} E ( f ( x ) + g ( x ) ) = f ( x ) ⋅ E ( f ( x ) ) + g ( x ) ⋅ E ( g ( x ) ) f ( x ) + g ( x ) {\displaystyle E(f(x)+g(x))={\frac {f(x)\cdot E(f(x))+g(x)\cdot E(g(x))}{f(x)+g(x)}}} E ( f ( x ) − g ( x ) ) = f ( x ) ⋅ E ( f ( x ) ) − g ( x ) ⋅ E ( g ( x ) ) f ( x ) − g ( x ) {\displaystyle E(f(x)-g(x))={\frac {f(x)\cdot E(f(x))-g(x)\cdot E(g(x))}{f(x)-g(x)}}} The derivative can be expressed in terms of elasticity as D f ( x ) = E f ( x ) ⋅ f ( x ) x {\displaystyle Df(x)={\frac {Ef(x)\cdot f(x)}{x}}} Let a and b be constants. Then E ( a ) = 0 {\displaystyle E(a)=0\ } E ( a ⋅ f ( x ) ) = E f ( x ) {\displaystyle E(a\cdot f(x))=Ef(x)} , E ( b x a ) = a {\displaystyle E(bx^{a})=a\ } . == Estimating point elasticities == In economics, the price elasticity of demand refers to the elasticity of a demand function Q(P), and can be expressed as (dQ/dP)/(Q(P)/P) or the ratio of the value of the marginal function (dQ/dP) to the value of the average function (Q(P)/P). This relationship provides an easy way of determining whether a demand curve is elastic or inelastic at a particular point. First, suppose one follows the usual convention in mathematics of plotting the independent variable (P) horizontally and the dependent variable (Q) vertically. Then the slope of a line tangent to the curve at that point is the value of the marginal function at that point. The slope of a ray drawn from the origin through the point is the value of the average function. If the absolute value of the slope of the tangent is greater than the slope of the ray then the function is elastic at the point; if the slope of the ray is greater than the absolute value of the slope of the tangent then the curve is inelastic at the point. 
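As a numerical illustration of the rules above, the sketch below (not from the article; the functions f and g are arbitrary choices made here) approximates point elasticity by a central difference and checks the product rule E(f·g) = Ef + Eg and the power rule E(bxᵃ) = a:

```python
# Numerical check of the elasticity rules, E f(x) = x * f'(x) / f(x),
# using a central-difference approximation of the derivative.

def elasticity(f, x, h=1e-6):
    """Approximate the point elasticity of f at x by a central difference."""
    df = (f(x + h) - f(x - h)) / (2 * h)
    return x * df / f(x)

f = lambda x: 3 * x**2       # power rule: E(b x^a) = a, so E f = 2 everywhere
g = lambda x: x**0.5 + 1

x = 2.0
# Product rule: E(f*g) = E f + E g
lhs = elasticity(lambda t: f(t) * g(t), x)
rhs = elasticity(f, x) + elasticity(g, x)
print(abs(lhs - rhs) < 1e-6)             # True
print(abs(elasticity(f, x) - 2) < 1e-6)  # True
```

The quotient and constant-multiple rules can be checked the same way.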
If the tangent line is extended to the horizontal axis the problem is simply a matter of comparing angles created by the lines and the horizontal axis. If the marginal angle is greater than the average angle then the function is elastic at the point; if the marginal angle is less than the average angle then the function is inelastic at that point. If, however, one follows the convention adopted by economists and plots the independent variable P on the vertical axis and the dependent variable Q on the horizontal axis, then the opposite rules would apply. The same graphical procedure can also be applied to a supply function or other functions. == Semi-elasticity == A semi-elasticity (or semielasticity) gives the percentage change in f(x) in terms of a change (not percentage-wise) in x. Algebraically, the semi-elasticity S of a function f at point x is S f ( x ) = 1 f ( x ) f ′ ( x ) = d ln f ( x ) d x {\displaystyle Sf(x)={\frac {1}{f(x)}}f'(x)={\frac {d\ln f(x)}{dx}}} The semi-elasticity will be constant for exponential functions of the form, f ( x ) = C α x {\displaystyle f(x)=C\alpha ^{x}} since, ln f = ln C α x = ln C + x ln α ⟹ d ln f d x = ln α . {\displaystyle \ln {f}=\ln {C\alpha ^{x}}=\ln {C}+x\ln {\alpha }\implies {\frac {d\ln {f}}{dx}}=\ln {\alpha }.} An example of semi-elasticity is modified duration in bond trading. The opposite definition is sometimes used in the literature. That is, the term "semi-elasticity" is also sometimes used for the change (not percentage-wise) in f(x) in terms of a percentage change in x which would be d f ( x ) d ln ( x ) = d f ( x ) d x x {\displaystyle {\frac {df(x)}{d\ln(x)}}={\frac {df(x)}{dx}}x} == See also == Arc elasticity Elasticity (economics) Elasticity coefficient (biochemistry) Homogeneous function Logarithmic derivative == References == == Further reading == Nievergelt, Yves (1983). "The Concept of Elasticity in Economics". SIAM Review. 25 (2): 261–265. doi:10.1137/1025049.
|
Wikipedia:Eleanor Mollie Horadam#0
|
Eleanor Mollie Horadam (29 June 1921 – 5 May 2002) was an English-Australian mathematician specialising in the number theory of generalised integers. == Life == Horadam was born in Dewsbury, Yorkshire. She read mathematics at Girton College, Cambridge. Then, while doing wartime service by day for Rolls-Royce, performing stress–strain analysis of jet engines, she took night classes in engineering at the University of London, earning first-class honours there. She moved to Australia by herself in 1949, becoming a lecturer at the University of New England. There, she married mathematician Alwyn Horadam and raised three children, persuading the university to update its maternity policies so that (unusually for the time) she could keep her position as a lecturer. She completed a doctorate and became a senior lecturer in 1965, retired in 1983, and was named a fellow of the university in 1995. Her daughter, Kathy Horadam, also became a mathematician. == Mathematics == Horadam's research concerned generalised integers, formed from a sequence of real numbers greater than one (called generalised prime numbers) as the products of finite multisets of generalised primes. She was also the author of a textbook published by the University of New England, Principles of mathematics for economists (1982). == References == == Further reading == Horadam, Kathy (2002), "Obituary: Eleanor Mollie Horadam (29 June 1921 – 5 May 2002)", The Australian Mathematical Society Gazette, 29 (4): 224–225, MR 1932854
|
Wikipedia:Electronic Journal of Linear Algebra#0
|
The Electronic Journal of Linear Algebra is a peer-reviewed platinum open access scientific journal covering matrix analysis and linear algebra, together with their applications. It is published by the International Linear Algebra Society and its editor-in-chief is Froilán M. Dopico (Universidad Carlos III de Madrid). == Editors-in-chief == The first editors-in-chief were Volker Mehrmann (Technische Universität Berlin; 1996–1999) and Daniel Hershkowitz (Bar-Ilan University; 1996–2010). Other former editors-in-chief are Ludwig Elsner (Bielefeld University; 2010–2011), Bryan Shader (University of Wyoming; 2010–2019), and Michael Tsatsomeros (Washington State University; 2016–2022). The current editor-in-chief is Froilán M. Dopico (Universidad Carlos III de Madrid; since 2019). == Abstracting and indexing == The journal is abstracted and indexed in: Current Contents/Physical, Chemical & Earth Sciences Mathematical Reviews Science Citation Index Expanded Scopus Zentralblatt MATH According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7. == References == == External links == Official website
|
Wikipedia:Elementary Number Theory, Group Theory and Ramanujan Graphs#0
|
Elementary Number Theory, Group Theory and Ramanujan Graphs is a book in mathematics whose goal is to make the construction of Ramanujan graphs accessible to undergraduate-level mathematics students. In order to do so, it covers several other significant topics in graph theory, number theory, and group theory. It was written by Giuliana Davidoff, Peter Sarnak, and Alain Valette, and published in 2003 by the Cambridge University Press, as volume 55 of the London Mathematical Society Student Texts book series. == Background == In graph theory, expander graphs are undirected graphs with high connectivity: every small-enough subset of vertices has many edges connecting it to the remaining parts of the graph. Sparse expander graphs have many important applications in computer science, including the development of error correcting codes, the design of sorting networks, and the derandomization of randomized algorithms. For these applications, the graph must be constructed explicitly, rather than merely having its existence proven. One way to show that a graph is an expander is to study the eigenvalues of its adjacency matrix. For an r {\displaystyle r} -regular graph, these are real numbers in the interval [ − r , r ] {\displaystyle [-r,r]} , and the largest eigenvalue (corresponding to the all-1s eigenvector) is exactly r {\displaystyle r} . The spectral expansion of the graph is measured by the spectral gap, the difference between the largest and second-largest eigenvalues, which controls how quickly a random walk on the graph settles to its stable distribution; by the Alon–Boppana bound, the second-largest eigenvalue of a large r {\displaystyle r} -regular graph cannot be significantly smaller than 2 r − 1 {\displaystyle 2{\sqrt {r-1}}} . The Ramanujan graphs are defined as the graphs that are optimal from the point of view of spectral expansion: they are r {\displaystyle r} -regular graphs whose eigenvalues other than ± r {\displaystyle \pm r} all have absolute value at most 2 r − 1 {\displaystyle 2{\sqrt {r-1}}} .
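The eigenvalue condition can be checked numerically on a small example. The following sketch uses NumPy (an assumption; neither the code nor the choice of the Petersen graph comes from the book) to verify that the Petersen graph, a 3-regular graph on 10 vertices, satisfies the Ramanujan bound:

```python
import numpy as np

# Adjacency matrix of the Petersen graph: an outer 5-cycle, an inner
# pentagram, and five spokes joining them.
n = 10
A = np.zeros((n, n), dtype=int)
edges = (
    [(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
    + [(i, i + 5) for i in range(5)]                # spokes
    + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
)
for u, v in edges:
    A[u, v] = A[v, u] = 1

r = 3
eig = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues, largest first
bound = 2 * (r - 1) ** 0.5                  # 2*sqrt(r-1), about 2.828

print(round(float(eig[0]), 6))  # 3.0: the largest eigenvalue is the degree r
print(round(float(eig[1]), 6))  # 1.0: the second-largest eigenvalue
print(bool(max(abs(eig[1:])) <= bound + 1e-9))  # True: Petersen is Ramanujan
```

The nontrivial eigenvalues of the Petersen graph are 1 and −2, both of absolute value at most 2√2, which is what makes this small graph a convenient sanity check.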
Although Ramanujan graphs with high degree, such as the complete graphs, are easy to construct, expander graphs of low degree are needed for the applications of these graphs. Several constructions of low-degree Ramanujan graphs are now known, the first of which were by Lubotzky, Phillips & Sarnak (1988) and Margulis (1988). Reviewer Jürgen Elstrod writes that "while the description of these graphs is elementary, the proof that they have the desired properties is not". Elementary Number Theory, Group Theory and Ramanujan Graphs aims to make as much of this theory accessible at an elementary level as possible. == Topics == Its authors have divided Elementary Number Theory, Group Theory and Ramanujan Graphs into four chapters. The first of these provides background in graph theory, including material on the girth of graphs (the length of the shortest cycle), on graph coloring, and on the use of the probabilistic method to prove the existence of graphs for which both the girth and the number of colors needed are large. This provides additional motivation for the construction of Ramanujan graphs, as the ones constructed in the book provide explicit examples of the same phenomenon. This chapter also provides the expected material on spectral graph theory, needed for the definition of Ramanujan graphs. Chapter 2, on number theory, includes the sum of two squares theorem characterizing the positive integers that can be represented as sums of two squares of integers (closely connected to the norms of Gaussian integers), Lagrange's four-square theorem according to which all positive integers can be represented as sums of four squares (proved using the norms of Hurwitz quaternions), and quadratic reciprocity. 
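Lagrange's four-square theorem from Chapter 2 is easy to check computationally. Here is a minimal brute-force sketch in Python (illustrative only; the book's proof goes through the norms of Hurwitz quaternions, not through search):

```python
from itertools import product
from math import isqrt

def four_squares(n):
    """Return one representation of n as a^2 + b^2 + c^2 + d^2.

    Plain exhaustive search over 0 <= a, b, c, d <= isqrt(n); by
    Lagrange's four-square theorem a representation always exists.
    """
    m = isqrt(n)
    for a, b, c, d in product(range(m + 1), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)
    return None  # unreachable for n >= 0, by the theorem

print(four_squares(7))  # (1, 1, 1, 2): 1 + 1 + 1 + 4 = 7
print(four_squares(2023))
```

Note that 7 ≡ 7 (mod 8), so it genuinely needs four squares, not three; the search is exponential in spirit and only meant for small inputs.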
Chapter 3 concerns group theory, and in particular the theory of the projective special linear groups P S L ( 2 , F q ) {\displaystyle PSL(2,\mathbb {F} _{q})} and projective linear groups P G L ( 2 , F q ) {\displaystyle PGL(2,\mathbb {F} _{q})} over the finite fields whose order is a prime number q {\displaystyle q} , and the representation theory of finite groups. The final chapter constructs the Ramanujan graph X p , q {\displaystyle X^{p,q}} for two prime numbers p {\displaystyle p} and q {\displaystyle q} as a Cayley graph of the group P S L ( 2 , F q ) {\displaystyle PSL(2,\mathbb {F} _{q})} or P G L ( 2 , F q ) {\displaystyle PGL(2,\mathbb {F} _{q})} (depending on quadratic reciprocity) with generators defined by taking modulo q {\displaystyle q} a set of p + 1 {\displaystyle p+1} quaternions coming from representations of p {\displaystyle p} as a sum of four squares. These graphs are automatically ( p + 1 ) {\displaystyle (p+1)} -regular. The chapter provides formulas for their numbers of vertices, and estimates of their girth. While not fully proving that these graphs are Ramanujan graphs, the chapter proves that they are spectral expanders, and describes how the claim that they are Ramanujan graphs follows from Pierre Deligne's proof of the Ramanujan conjecture (the connection to Ramanujan from which the name of these graphs was derived). == Audience and reception == This book is intended for advanced undergraduates who have already seen some abstract algebra and real analysis. Reviewer Thomas Shemanske suggests using it as the basis of a senior seminar, as a quick path to many important topics and an interesting example of how these seemingly-separate topics join forces in this application. On the other hand, Thomas Pfaff thinks it would be difficult going even for most senior-level undergraduates, but could be a good choice for independent study or an elective graduate course. == References ==
|
Wikipedia:Elementary algebra#0
|
Elementary algebra, also known as high school algebra or college algebra, encompasses the basic concepts of algebra. It is often contrasted with arithmetic: arithmetic deals with specified numbers, whilst algebra introduces variables (quantities without fixed values). This use of variables entails use of algebraic notation and an understanding of the general rules of the operations introduced in arithmetic: addition, subtraction, multiplication, division, etc. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. It is typically taught to secondary school students and at introductory college level in the United States, and builds on their understanding of arithmetic. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Many quantitative relationships in science and mathematics are expressed as algebraic equations. == Algebraic operations == == Algebraic notation == Algebraic notation describes the rules and conventions for writing mathematical expressions, as well as the terminology used for talking about parts of expressions. For example, the expression 3 x 2 − 2 x y + c {\displaystyle 3x^{2}-2xy+c} has the following components: A coefficient is a numerical value, or letter representing a numerical constant, that multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators. Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a , b , c {\displaystyle a,b,c} ) are typically used to represent constants, and those toward the end of the alphabet (e.g. x , y {\displaystyle x,y} and z) are used to represent variables. They are usually printed in italics. 
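The terminology above can be illustrated with a computer algebra system. The short sketch below uses SymPy (an assumed third-party library, not part of the article) to take apart the example expression 3x² − 2xy + c:

```python
import sympy

# The article's example expression 3x^2 - 2xy + c; by the naming
# convention, c plays the role of a constant and x, y of variables.
x, y, c = sympy.symbols('x y c')
expr = 3 * x**2 - 2 * x * y + c

print(expr.as_ordered_terms())  # the three terms of the expression
print(expr.coeff(x**2))         # 3, the coefficient of the x^2 term
print(expr.coeff(x * y))        # -2, the coefficient of the x*y term
```

The omitted multiplication operators in the printed output (`3*x**2` rather than 3x²) show the "alternative notation" discussed next, where only plain characters are available.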
Algebraic operations work in the same way as arithmetic operations, such as addition, subtraction, multiplication, division and exponentiation, and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x 2 {\displaystyle 3\times x^{2}} is written as 3 x 2 {\displaystyle 3x^{2}} , and 2 × x × y {\displaystyle 2\times x\times y} may be written 2 x y {\displaystyle 2xy} . Usually terms with the highest power (exponent), are written on the left, for example, x 2 {\displaystyle x^{2}} is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1 x 2 {\displaystyle 1x^{2}} is written x 2 {\displaystyle x^{2}} ). Likewise when the exponent (power) is one, (e.g. 3 x 1 {\displaystyle 3x^{1}} is written 3 x {\displaystyle 3x} ). When the exponent is zero, the result is always 1 (e.g. x 0 {\displaystyle x^{0}} is always rewritten to 1). However 0 0 {\displaystyle 0^{0}} , being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents. === Alternative notation === Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. As an illustration of this, while exponents are usually formatted using superscripts, e.g., x 2 {\displaystyle x^{2}} , in plain text, and in the TeX mark-up language, the caret symbol ^ represents exponentiation, so x 2 {\displaystyle x^{2}} is written as "x^2". This also applies to some programming languages such as Lua. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x 2 {\displaystyle x^{2}} is written as "x**2". 
Many programming languages and calculators use a single asterisk to represent the multiplication symbol, and it must be explicitly used, for example, 3 x {\displaystyle 3x} is written "3*x". == Concepts == === Variables === Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons. Variables may represent numbers whose values are not yet known. For example, if the temperature of the current day, C, is 20 degrees higher than the temperature of the previous day, P, then the problem can be described algebraically as C = P + 20 {\displaystyle C=P+20} . Variables allow one to describe general problems, without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 {\displaystyle 60\times 5=300} seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m {\displaystyle s=60\times m} , where m is the number of minutes. Variables allow one to describe mathematical relationships between quantities that may vary. For example, the relationship between the circumference, c, and diameter, d, of a circle is described by π = c / d {\displaystyle \pi =c/d} . Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as ( a + b ) = ( b + a ) {\displaystyle (a+b)=(b+a)} . === Simplifying expressions === Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example, Added terms are simplified using coefficients. For example, x + x + x {\displaystyle x+x+x} can be simplified as 3 x {\displaystyle 3x} (where 3 is a numerical coefficient). 
Multiplied terms are simplified using exponents. For example, x × x × x {\displaystyle x\times x\times x} is represented as x 3 {\displaystyle x^{3}} Like terms are added together, for example, 2 x 2 + 3 a b − x 2 + a b {\displaystyle 2x^{2}+3ab-x^{2}+ab} is written as x 2 + 4 a b {\displaystyle x^{2}+4ab} , because the terms containing x 2 {\displaystyle x^{2}} are added together, and the terms containing a b {\displaystyle ab} are added together. Brackets can be "multiplied out", using the distributive property. For example, x ( 2 x + 3 ) {\displaystyle x(2x+3)} can be written as ( x × 2 x ) + ( x × 3 ) {\displaystyle (x\times 2x)+(x\times 3)} which can be written as 2 x 2 + 3 x {\displaystyle 2x^{2}+3x} Expressions can be factored. For example, 6 x 5 + 3 x 2 {\displaystyle 6x^{5}+3x^{2}} , by dividing both terms by the common factor, 3 x 2 {\displaystyle 3x^{2}} can be written as 3 x 2 ( 2 x 3 + 1 ) {\displaystyle 3x^{2}(2x^{3}+1)} === Equations === An equation states that two expressions are equal using the symbol for equality, = (the equals sign). One of the best-known equations describes Pythagoras' theorem relating the length of the sides of a right angle triangle: c 2 = a 2 + b 2 {\displaystyle c^{2}=a^{2}+b^{2}} This equation states that c 2 {\displaystyle c^{2}} , representing the square of the length of the side that is the hypotenuse, the side opposite the right angle, is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b. An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a {\displaystyle a+b=b+a} ); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x 2 − 1 = 8 {\displaystyle x^{2}-1=8} is true only for x = 3 {\displaystyle x=3} and x = − 3 {\displaystyle x=-3} .
The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving. Another type of equation is inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b {\displaystyle a>b} where > {\displaystyle >} represents 'greater than', and a < b {\displaystyle a<b} where < {\displaystyle <} represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped. ==== Properties of equality ==== By definition, equality is an equivalence relation, meaning it is reflexive (i.e. b = b {\displaystyle b=b} ), symmetric (i.e. if a = b {\displaystyle a=b} then b = a {\displaystyle b=a} ), and transitive (i.e. if a = b {\displaystyle a=b} and b = c {\displaystyle b=c} then a = c {\displaystyle a=c} ). It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties: if a = b {\displaystyle a=b} and c = d {\displaystyle c=d} then a + c = b + d {\displaystyle a+c=b+d} and a c = b d {\displaystyle ac=bd} ; if a = b {\displaystyle a=b} then a + c = b + c {\displaystyle a+c=b+c} and a c = b c {\displaystyle ac=bc} ; more generally, for any function f, if a = b {\displaystyle a=b} then f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} . 
==== Properties of inequality ==== The relations less than < {\displaystyle <} and greater than > {\displaystyle >} have the property of transitivity: If a < b {\displaystyle a<b} and b < c {\displaystyle b<c} then a < c {\displaystyle a<c} ; If a < b {\displaystyle a<b} and c < d {\displaystyle c<d} then a + c < b + d {\displaystyle a+c<b+d} ; If a < b {\displaystyle a<b} and c > 0 {\displaystyle c>0} then a c < b c {\displaystyle ac<bc} ; If a < b {\displaystyle a<b} and c < 0 {\displaystyle c<0} then b c < a c {\displaystyle bc<ac} . By reversing the inequation, < {\displaystyle <} and > {\displaystyle >} can be swapped, for example: a < b {\displaystyle a<b} is equivalent to b > a {\displaystyle b>a} === Substitution === Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a*5 makes a new expression 3*5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence, definitions can be made in symbolic terms and interpreted through substitution: if a 2 := a × a {\displaystyle a^{2}:=a\times a} is meant as the definition of a 2 , {\displaystyle a^{2},} as the product of a with itself, substituting 3 for a informs the reader of this statement that 3 2 {\displaystyle 3^{2}} means 3 × 3 = 9. Often it's not known whether the statement is true independently of the values of the terms. And, substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x cannot be 1. If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Consider abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. 
Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0. If the original fact were stated as "ab = 0 implies a = 0 or b = 0", then when saying "consider abc = 0," we would have a conflict of terms when substituting. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, one substitutes a for a and b for bc (and with bc = 0, substituting b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it's clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0." == Solving algebraic equations == The following sections lay out examples of some of the types of algebraic equations that may be encountered. === Linear equations with one variable === Linear equations are so called because, when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider: Problem in words: If you double the age of a child and add 4, the resulting answer is 12. How old is the child? Equivalent equation: 2 x + 4 = 12 {\displaystyle 2x+4=12} where x represents the child's age To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable. This problem and its solution are as follows: In words: the child is 4 years old.
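The inverse-operation technique just described can be sketched in a few lines of Python (purely illustrative; the variable names are not from the article):

```python
# Solving 2x + 4 = 12 by undoing each operation on both sides:
rhs = 12
rhs = rhs - 4  # subtract 4 from both sides: 2x = 8
x = rhs / 2    # divide both sides by 2:     x = 4
print(x)       # 4.0 -> the child is 4 years old

# The same two steps applied to the general form ax + b = c:
def solve_linear(a, b, c):
    """Solve a*x + b = c for x, assuming a != 0."""
    return (c - b) / a

print(solve_linear(2, 4, 12))  # 4.0
```

The helper function is just the closed-form solution x = (c − b)/a obtained by performing the two inverse operations symbolically.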
The general form of a linear equation with one variable can be written as: a x + b = c {\displaystyle ax+b=c} Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = c − b a {\displaystyle x={\frac {c-b}{a}}} === Linear equations with two variables === A linear equation with two variables has many (i.e. an infinite number of) solutions. For example: Problem in words: A father is 22 years older than his son. How old are they? Equivalent equation: y = x + 22 {\displaystyle y=x+22} where y is the father's age, x is the son's age. That cannot be worked out by itself. If the son's age was made known, then there would no longer be two unknowns (variables). The problem then becomes a linear equation with just one variable, that can be solved as described above. Solving a linear equation with two variables (unknowns) requires two related equations. For example, if it was also revealed that: Problem in words In 10 years, the father will be twice as old as his son.
Equivalent equation y + 10 = 2 × ( x + 10 ) y = 2 × ( x + 10 ) − 10 Subtract 10 from both sides y = 2 x + 20 − 10 Multiply out brackets y = 2 x + 10 Simplify {\displaystyle {\begin{aligned}y+10&=2\times (x+10)\\y&=2\times (x+10)-10&&{\text{Subtract 10 from both sides}}\\y&=2x+20-10&&{\text{Multiply out brackets}}\\y&=2x+10&&{\text{Simplify}}\end{aligned}}} Now there are two related linear equations, each with two unknowns, which enables the production of a linear equation with just one variable, by subtracting one from the other (called the elimination method): { y = x + 22 First equation y = 2 x + 10 Second equation {\displaystyle {\begin{cases}y=x+22&{\text{First equation}}\\y=2x+10&{\text{Second equation}}\end{cases}}} Subtract the first equation from ( y − y ) = ( 2 x − x ) + 10 − 22 the second in order to remove y 0 = x − 12 Simplify 12 = x Add 12 to both sides x = 12 Rearrange {\displaystyle {\begin{aligned}&&&{\text{Subtract the first equation from}}\\(y-y)&=(2x-x)+10-22&&{\text{the second in order to remove }}y\\0&=x-12&&{\text{Simplify}}\\12&=x&&{\text{Add 12 to both sides}}\\x&=12&&{\text{Rearrange}}\end{aligned}}} In other words, the son is aged 12, and since the father is 22 years older, he must be 34. In 10 years, the son will be 22, and the father will be twice his age, 44. This problem is illustrated on the associated plot of the equations. For other ways to solve this kind of equation, see below, System of linear equations. === Quadratic equations === A quadratic equation is one which includes a term with an exponent of 2, for example, x 2 {\displaystyle x^{2}} , and no term with higher exponent. The name derives from the Latin quadrus, meaning square. In general, a quadratic equation can be expressed in the form a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} , where a is not zero (if it were zero, then the equation would not be quadratic but linear).
Because of this a quadratic equation must contain the term a x 2 {\displaystyle ax^{2}} , which is known as the quadratic term. Hence a ≠ 0 {\displaystyle a\neq 0} , and so we may divide by a and rearrange the equation into the standard form x 2 + p x + q = 0 {\displaystyle x^{2}+px+q=0} where p = b a {\displaystyle p={\frac {b}{a}}} and q = c a {\displaystyle q={\frac {c}{a}}} . Solving this, by a process known as completing the square, leads to the quadratic formula x = − b ± b 2 − 4 a c 2 a , {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}},} where the symbol "±" indicates that both x = − b + b 2 − 4 a c 2 a and x = − b − b 2 − 4 a c 2 a {\displaystyle x={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\quad {\text{and}}\quad x={\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}} are solutions of the quadratic equation. Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring: x 2 + 3 x − 10 = 0 , {\displaystyle x^{2}+3x-10=0,} which is the same thing as ( x + 5 ) ( x − 2 ) = 0. {\displaystyle (x+5)(x-2)=0.} It follows from the zero-product property that either x = 2 {\displaystyle x=2} or x = − 5 {\displaystyle x=-5} are the solutions, since precisely one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example, x 2 + 1 = 0 {\displaystyle x^{2}+1=0} has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as: ( x + 1 ) 2 = 0. {\displaystyle (x+1)^{2}=0.} For this equation, −1 is a root of multiplicity 2. This means −1 appears twice, since the equation can be rewritten in factored form as [ x − ( − 1 ) ] [ x − ( − 1 ) ] = 0. 
{\displaystyle [x-(-1)][x-(-1)]=0.} ==== Complex numbers ==== All quadratic equations have exactly two solutions in complex numbers (but they may be equal to each other), a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation x 2 + x + 1 = 0 {\displaystyle x^{2}+x+1=0} has solutions x = − 1 + − 3 2 and x = − 1 − − 3 2 . {\displaystyle x={\frac {-1+{\sqrt {-3}}}{2}}\quad \quad {\text{and}}\quad \quad x={\frac {-1-{\sqrt {-3}}}{2}}.} Since − 3 {\displaystyle {\sqrt {-3}}} is not any real number, both of these solutions for x are complex numbers. === Exponential and logarithmic equations === An exponential equation is one which has the form a x = b {\displaystyle a^{x}=b} for a > 0 {\displaystyle a>0} , which has solution x = log a b = ln b ln a {\displaystyle x=\log _{a}b={\frac {\ln b}{\ln a}}} when b > 0 {\displaystyle b>0} . Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if 3 ⋅ 2 x − 1 + 1 = 10 {\displaystyle 3\cdot 2^{x-1}+1=10} then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3 we obtain 2 x − 1 = 3 {\displaystyle 2^{x-1}=3} whence x − 1 = log 2 3 {\displaystyle x-1=\log _{2}3} or x = log 2 3 + 1. {\displaystyle x=\log _{2}3+1.} A logarithmic equation is an equation of the form log a ( x ) = b {\displaystyle \log _{a}(x)=b} for a > 0 {\displaystyle a>0} , which has solution x = a b . {\displaystyle x=a^{b}.} For example, if 4 log 5 ( x − 3 ) − 2 = 6 {\displaystyle 4\log _{5}(x-3)-2=6} then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get log 5 ( x − 3 ) = 2 {\displaystyle \log _{5}(x-3)=2} whence x − 3 = 5 2 = 25 {\displaystyle x-3=5^{2}=25} from which we obtain x = 28.
{\displaystyle x=28.} === Radical equations === A radical equation is one that includes a radical sign, which includes square roots, x , {\displaystyle {\sqrt {x}},} cube roots, x 3 {\displaystyle {\sqrt[{3}]{x}}} , and nth roots, x n {\displaystyle {\sqrt[{n}]{x}}} . Recall that an nth root can be rewritten in exponential format, so that x n {\displaystyle {\sqrt[{n}]{x}}} is equivalent to x 1 n {\displaystyle x^{\frac {1}{n}}} . Combined with regular exponents (powers), then x 3 2 {\displaystyle {\sqrt[{2}]{x^{3}}}} (the square root of x cubed), can be rewritten as x 3 2 {\displaystyle x^{\frac {3}{2}}} . So a common form of a radical equation is x m n = a {\displaystyle {\sqrt[{n}]{x^{m}}}=a} (equivalent to x m n = a {\displaystyle x^{\frac {m}{n}}=a} ) where m and n are integers. It has real solution(s): For example, if: ( x + 5 ) 2 / 3 = 4 {\displaystyle (x+5)^{2/3}=4} then x + 5 = ± ( 4 ) 3 , x + 5 = ± 8 , x = − 5 ± 8 , {\displaystyle {\begin{aligned}x+5&=\pm ({\sqrt {4}})^{3},\\x+5&=\pm 8,\\x&=-5\pm 8,\end{aligned}}} and thus x = 3 or x = − 13 {\displaystyle x=3\quad {\text{or}}\quad x=-13} === System of linear equations === There are different methods to solve a system of linear equations with two variables. ==== Elimination method ==== An example of solving a system of linear equations is by using the elimination method: { 4 x + 2 y = 14 2 x − y = 1. {\displaystyle {\begin{cases}4x+2y&=14\\2x-y&=1.\end{cases}}} Multiplying the terms in the second equation by 2: 4 x + 2 y = 14 {\displaystyle 4x+2y=14} 4 x − 2 y = 2. {\displaystyle 4x-2y=2.} Adding the two equations together to get: 8 x = 16 {\displaystyle 8x=16} which simplifies to x = 2. {\displaystyle x=2.} Since the fact that x = 2 {\displaystyle x=2} is known, it is then possible to deduce that y = 3 {\displaystyle y=3} by either of the original two equations (by using 2 instead of x ) The full solution to this problem is then { x = 2 y = 3. 
{\displaystyle {\begin{cases}x=2\\y=3.\end{cases}}} This is not the only way to solve this specific system; y could have been resolved before x. ==== Substitution method ==== Another way of solving the same system of linear equations is by substitution. { 4 x + 2 y = 14 2 x − y = 1. {\displaystyle {\begin{cases}4x+2y&=14\\2x-y&=1.\end{cases}}} An equivalent for y can be deduced by using one of the two equations. Using the second equation: 2 x − y = 1 {\displaystyle 2x-y=1} Subtracting 2 x {\displaystyle 2x} from each side of the equation: 2 x − 2 x − y = 1 − 2 x − y = 1 − 2 x {\displaystyle {\begin{aligned}2x-2x-y&=1-2x\\-y&=1-2x\end{aligned}}} and multiplying by −1: y = 2 x − 1. {\displaystyle y=2x-1.} Using this y value in the first equation in the original system: 4 x + 2 ( 2 x − 1 ) = 14 4 x + 4 x − 2 = 14 8 x − 2 = 14 {\displaystyle {\begin{aligned}4x+2(2x-1)&=14\\4x+4x-2&=14\\8x-2&=14\end{aligned}}} Adding 2 on each side of the equation: 8 x − 2 + 2 = 14 + 2 8 x = 16 {\displaystyle {\begin{aligned}8x-2+2&=14+2\\8x&=16\end{aligned}}} which simplifies to x = 2 {\displaystyle x=2} Using this value in one of the equations, the same solution as in the previous method is obtained. { x = 2 y = 3. {\displaystyle {\begin{cases}x=2\\y=3.\end{cases}}} This is not the only way to solve this specific system; in this case as well, y could have been solved before x. === Other types of systems of linear equations === ==== Inconsistent systems ==== In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is { x + y = 1 0 x + 0 y = 2 . {\displaystyle {\begin{cases}{\begin{aligned}x+y&=1\\0x+0y&=2\,.\end{aligned}}\end{cases}}} As 0≠2, the second equation in the system has no solution. Therefore, the system has no solution. However, not all inconsistent systems are recognized at first sight. As an example, consider the system { 4 x + 2 y = 12 − 2 x − y = − 4 . 
{\displaystyle {\begin{cases}{\begin{aligned}4x+2y&=12\\-2x-y&=-4\,.\end{aligned}}\end{cases}}} Multiplying both sides of the second equation by 2, and adding it to the first one, results in 0 x + 0 y = 4 , {\displaystyle 0x+0y=4\,,} which clearly has no solution. ==== Undetermined systems ==== There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning, a unique pair of values for x and y). For example: { 4 x + 2 y = 12 − 2 x − y = − 6 {\displaystyle {\begin{cases}{\begin{aligned}4x+2y&=12\\-2x-y&=-6\end{aligned}}\end{cases}}} Isolating y in the second equation: y = − 2 x + 6 {\displaystyle y=-2x+6} And using this value in the first equation in the system: 4 x + 2 ( − 2 x + 6 ) = 12 4 x − 4 x + 12 = 12 12 = 12 {\displaystyle {\begin{aligned}4x+2(-2x+6)=12\\4x-4x+12=12\\12=12\end{aligned}}} The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = − 2 x + 6 {\displaystyle y=-2x+6} . There is an infinite number of solutions for this system. ==== Over- and underdetermined systems ==== Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is { x + 2 y = 10 y − z = 2. {\displaystyle {\begin{cases}{\begin{aligned}x+2y&=10\\y-z&=2.\end{aligned}}\end{cases}}} When trying to solve it, one is led to express some variables as functions of the others, but one cannot list all solutions numerically, because there are infinitely many of them (if there are any). A system with a higher number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.
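The elimination example and the distinction between inconsistent and undetermined systems can be sketched with NumPy (an assumed library; the rank criterion used below is the Rouché–Capelli theorem, which the article does not state explicitly):

```python
import numpy as np

# The system 4x + 2y = 14, 2x - y = 1 from the text, solved directly.
A = np.array([[4.0, 2.0],
              [2.0, -1.0]])
b = np.array([14.0, 1.0])
print(np.round(np.linalg.solve(A, b), 6))  # [2. 3.] -> x = 2, y = 3

# A system has at least one solution iff rank(A) == rank([A | b]).
def is_consistent(A, b):
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A2 = np.array([[4.0, 2.0],
               [-2.0, -1.0]])                     # rank 1: rows are proportional
print(is_consistent(A2, np.array([12.0, -4.0])))  # False: the inconsistent example
print(is_consistent(A2, np.array([12.0, -6.0])))  # True: the undetermined example
```

In the last case the system is consistent but rank(A) = 1 is less than the number of unknowns, which is exactly the undetermined situation with infinitely many solutions.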
== See also == History of algebra Binary operation Gaussian elimination Mathematics education Number line Polynomial Cancelling out Tarski's high school algebra problem == References == Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007, ISBN 978-1-899618-79-8, also online digitized editions 2006, 1822. Charles Smith, A Treatise on Algebra, in Cornell University Library Historical Math Monographs. Redden, John. Elementary Algebra Archived 2016-06-10 at the Wayback Machine. Flat World Knowledge, 2011 == External links == Media related to Elementary algebra at Wikimedia Commons
|
Wikipedia:Elementary function#0
|
In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or x1/n). All elementary functions are continuous on their domains. Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it. == Examples == === Basic examples === Elementary functions of a single variable x include: Constant functions: 2 , π , e , {\displaystyle 2,\ \pi ,\ e,} etc. Rational powers of x: x , x 2 , x ( x 1 2 ) , x 2 3 , {\displaystyle x,\ x^{2},\ {\sqrt {x}}\ (x^{\frac {1}{2}}),\ x^{\frac {2}{3}},} etc. Exponential functions: e x , a x {\displaystyle e^{x},\ a^{x}} Logarithms: log x , log a x {\displaystyle \log x,\ \log _{a}x} Trigonometric functions: sin x , cos x , tan x , {\displaystyle \sin x,\ \cos x,\ \tan x,} etc. Inverse trigonometric functions: arcsin x , arccos x , {\displaystyle \arcsin x,\ \arccos x,} etc. Hyperbolic functions: sinh x , cosh x , {\displaystyle \sinh x,\ \cosh x,} etc. Inverse hyperbolic functions: arsinh x , arcosh x , {\displaystyle \operatorname {arsinh} x,\ \operatorname {arcosh} x,} etc. All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions All functions obtained by root extraction of a polynomial with coefficients in elementary functions All functions obtained by composing a finite number of any of the previously listed functions Certain elementary functions of a single complex variable z, such as z {\displaystyle {\sqrt {z}}} and log z {\displaystyle \log z} , may be multivalued. 
Additionally, certain classes of functions may be obtained from others by using the final two rules. For example, the exponential function e z {\displaystyle e^{z}} composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with i z {\displaystyle iz} instead provides the trigonometric functions. === Composite examples === Examples of elementary functions include: Addition, e.g. (x + 1) Multiplication, e.g. (2x) Polynomial functions e tan x 1 + x 2 sin ( 1 + ( log x ) 2 ) {\displaystyle {\frac {e^{\tan x}}{1+x^{2}}}\sin \left({\sqrt {1+(\log x)^{2}}}\right)} − i log ( x + i 1 − x 2 ) {\displaystyle -i\log \left(x+i{\sqrt {1-x^{2}}}\right)} The last function is equal to arccos x {\displaystyle \arccos x} , the inverse cosine, in the entire complex plane. All monomials, polynomials, rational functions and algebraic functions are elementary. The absolute value function, for real x {\displaystyle x} , is also elementary as it can be expressed as the composition of a power and root of x {\displaystyle x} : | x | = x 2 {\textstyle |x|={\sqrt {x^{2}}}} . === Non-elementary functions === Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function. Some examples of functions that are not elementary: tetration the gamma function non-elementary Liouvillian functions, including the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C). the error function, e r f ( x ) = 2 π ∫ 0 x e − t 2 d t , {\displaystyle \mathrm {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt,} a fact that may not be immediately obvious, but can be proven using the Risch algorithm. other nonelementary integrals, including the Dirichlet integral and elliptic integral.
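The claim that the composite −i log(x + i√(1 − x²)) equals the inverse cosine can be spot-checked numerically. A small sketch using Python's standard cmath module (principal branches throughout; the function name is ours, chosen for this illustration):

```python
import cmath

def arccos_via_log(z):
    """The article's composite example: -i log(z + i sqrt(1 - z^2)),
    built entirely from elementary pieces, with principal branches."""
    return -1j * cmath.log(z + 1j * cmath.sqrt(1 - z * z))
```

For instance, `arccos_via_log(0.0)` gives π/2, agreeing with `cmath.acos(0.0)`, and the agreement persists at other sample points on the real interval (−1, 1).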
== Closure == It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem, see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions. == Differential algebra == The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field building a tower containing elementary functions. A differential field F is a field F0 (rational functions over the rationals Q for example) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear ∂ ( u + v ) = ∂ u + ∂ v {\displaystyle \partial (u+v)=\partial u+\partial v} and satisfies the Leibniz product rule ∂ ( u ⋅ v ) = ∂ u ⋅ v + u ⋅ ∂ v . {\displaystyle \partial (u\cdot v)=\partial u\cdot v+u\cdot \partial v\,.} An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants. 
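The two derivation axioms can be exercised on the simplest base ring, polynomials over the rationals. In this sketch (the dict representation and the helper names are ours, not from the literature), a polynomial is a dict mapping exponents to coefficients, and `d` is formal differentiation:

```python
from collections import defaultdict

def p_add(p, q):
    """Sum of two polynomials, dropping zero coefficients."""
    r = defaultdict(int, p)
    for k, c in q.items():
        r[k] += c
    return {k: c for k, c in r.items() if c}

def p_mul(p, q):
    """Product of two polynomials."""
    r = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            r[a + b] += ca * cb
    return {k: c for k, c in r.items() if c}

def d(p):
    """The derivation: formal differentiation, d(x^k) = k x^(k-1)."""
    return {k - 1: k * c for k, c in p.items() if k}
```

Linearity and the Leibniz rule can then be verified directly, and the constants are exactly the elements h with `d(h) == {}`, matching the condition ∂h = 0.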
A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u is algebraic over F, or is an exponential, that is, ∂u = u ∂a for a ∈ F, or is a logarithm, that is, ∂u = ∂a / a for a ∈ F. (see also Liouville's theorem) == See also == Algebraic function – Mathematical function Closed-form expression – Mathematical formula involving a given set of operations Differential Galois theory – Study of Galois symmetry groups of differential fields Elementary function arithmetic – System of arithmetic in proof theory Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions Tarski's high school algebra problem – Mathematical problem Transcendental function – Analytic function that does not satisfy a polynomial equation Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148. Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193. Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359. Ritt, Joseph (1950). Differential Algebra. AMS. Rosenlicht, Maxwell (1972). "Integration in finite terms". American Mathematical Monthly. 79 (9): 963–972. doi:10.2307/2318066. JSTOR 2318066. == Further reading == Davenport, James H. (2007). "What Might "Understand a Function" Mean?". Towards Mechanized Mathematical Assistants. Lecture Notes in Computer Science. Vol. 4573. pp. 55–65. doi:10.1007/978-3-540-73086-6_5. ISBN 978-3-540-73083-5. S2CID 8049737. 
== External links == Elementary functions at Encyclopaedia of Mathematics Weisstein, Eric W. "Elementary function". MathWorld.
|
Wikipedia:Elementary matrix#0
|
In mathematics, an elementary matrix is a square matrix obtained from the application of a single elementary row operation to the identity matrix. The elementary matrices generate the general linear group GLn(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form. == Elementary row operations == There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations): Row switching A row within the matrix can be switched with another row. R i ↔ R j {\displaystyle R_{i}\leftrightarrow R_{j}} Row multiplication Each element in a row can be multiplied by a non-zero constant. It is also known as scaling a row. k R i → R i , where k ≠ 0 {\displaystyle kR_{i}\rightarrow R_{i},\ {\mbox{where }}k\neq 0} Row addition A row can be replaced by the sum of that row and a multiple of another row. R i + k R j → R i , where i ≠ j {\displaystyle R_{i}+kR_{j}\rightarrow R_{i},{\mbox{where }}i\neq j} If E is an elementary matrix, as described below, to apply the elementary row operation to a matrix A, one multiplies A by the elementary matrix on the left, EA. The elementary matrix for any row operation is obtained by executing the operation on the identity matrix. This fact can be understood as an instance of the Yoneda lemma applied to the category of matrices. === Row-switching transformations === The first type of row operation on a matrix A switches all matrix elements on row i with their counterparts on a different row j. The corresponding elementary matrix is obtained by swapping row i and row j of the identity matrix. 
T i , j = [ 1 ⋱ 0 1 ⋱ 1 0 ⋱ 1 ] {\displaystyle T_{i,j}={\begin{bmatrix}1&&&&&&\\&\ddots &&&&&\\&&0&&1&&\\&&&\ddots &&&\\&&1&&0&&\\&&&&&\ddots &\\&&&&&&1\end{bmatrix}}} So Ti,j A is the matrix produced by exchanging row i and row j of A. Coefficient wise, the matrix Ti,j is defined by : [ T i , j ] k , l = { 0 k ≠ i , k ≠ j , k ≠ l 1 k ≠ i , k ≠ j , k = l 0 k = i , l ≠ j 1 k = i , l = j 0 k = j , l ≠ i 1 k = j , l = i {\displaystyle [T_{i,j}]_{k,l}={\begin{cases}0&k\neq i,k\neq j,k\neq l\\1&k\neq i,k\neq j,k=l\\0&k=i,l\neq j\\1&k=i,l=j\\0&k=j,l\neq i\\1&k=j,l=i\\\end{cases}}} ==== Properties ==== The inverse of this matrix is itself: T i , j − 1 = T i , j . {\displaystyle T_{i,j}^{-1}=T_{i,j}.} Since the determinant of the identity matrix is unity, det ( T i , j ) = − 1. {\displaystyle \det(T_{i,j})=-1.} It follows that for any square matrix A (of the correct size), we have det ( T i , j A ) = − det ( A ) . {\displaystyle \det(T_{i,j}A)=-\det(A).} For theoretical considerations, the row-switching transformation can be obtained from row-addition and row-multiplication transformations introduced below because T i , j = D i ( − 1 ) L i , j ( − 1 ) L j , i ( 1 ) L i , j ( − 1 ) . {\displaystyle T_{i,j}=D_{i}(-1)\,L_{i,j}(-1)\,L_{j,i}(1)\,L_{i,j}(-1).} === Row-multiplying transformations === The next type of row operation on a matrix A multiplies all elements on row i by m where m is a non-zero scalar (usually a real number). The corresponding elementary matrix is a diagonal matrix, with diagonal entries 1 everywhere except in the ith position, where it is m. D i ( m ) = [ 1 ⋱ 1 m 1 ⋱ 1 ] {\displaystyle D_{i}(m)={\begin{bmatrix}1&&&&&&\\&\ddots &&&&&\\&&1&&&&\\&&&m&&&\\&&&&1&&\\&&&&&\ddots &\\&&&&&&1\end{bmatrix}}} So Di(m)A is the matrix produced from A by multiplying row i by m. 
Coefficient wise, the Di(m) matrix is defined by : [ D i ( m ) ] k , l = { 0 k ≠ l 1 k = l , k ≠ i m k = l , k = i {\displaystyle [D_{i}(m)]_{k,l}={\begin{cases}0&k\neq l\\1&k=l,k\neq i\\m&k=l,k=i\end{cases}}} ==== Properties ==== The inverse of this matrix is given by D i ( m ) − 1 = D i ( 1 m ) . {\displaystyle D_{i}(m)^{-1}=D_{i}\left({\tfrac {1}{m}}\right).} The matrix and its inverse are diagonal matrices. det ( D i ( m ) ) = m . {\displaystyle \det(D_{i}(m))=m.} Therefore, for a square matrix A (of the correct size), we have det ( D i ( m ) A ) = m det ( A ) . {\displaystyle \det(D_{i}(m)A)=m\det(A).} === Row-addition transformations === The final type of row operation on a matrix A adds row j multiplied by a scalar m to row i. The corresponding elementary matrix is the identity matrix but with an m in the (i, j) position. L i j ( m ) = [ 1 ⋱ 1 ⋱ m 1 ⋱ 1 ] {\displaystyle L_{ij}(m)={\begin{bmatrix}1&&&&&&\\&\ddots &&&&&\\&&1&&&&\\&&&\ddots &&&\\&&m&&1&&\\&&&&&\ddots &\\&&&&&&1\end{bmatrix}}} So Lij(m)A is the matrix produced from A by adding m times row j to row i. And A Lij(m) is the matrix produced from A by adding m times column i to column j. Coefficient wise, the matrix Li,j(m) is defined by : [ L i , j ( m ) ] k , l = { 0 k ≠ l , k ≠ i , l ≠ j 1 k = l m k = i , l = j {\displaystyle [L_{i,j}(m)]_{k,l}={\begin{cases}0&k\neq l,k\neq i,l\neq j\\1&k=l\\m&k=i,l=j\end{cases}}} ==== Properties ==== These transformations are a kind of shear mapping, also known as transvections. The inverse of this matrix is given by L i j ( m ) − 1 = L i j ( − m ) . {\displaystyle L_{ij}(m)^{-1}=L_{ij}(-m).} The matrix and its inverse are triangular matrices. det ( L i j ( m ) ) = 1. {\displaystyle \det(L_{ij}(m))=1.} Therefore, for a square matrix A (of the correct size) we have det ( L i j ( m ) A ) = det ( A ) . {\displaystyle \det(L_{ij}(m)A)=\det(A).} Row-addition transforms satisfy the Steinberg relations.
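The three families Ti,j, Di(m) and Lij(m) are easy to build and check directly. A minimal pure-Python sketch (helper names are ours; indices are 0-based here, unlike the 1-based rows in the text):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def T(i, j, n):
    """Row-switching matrix: I_n with rows i and j exchanged."""
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def D(i, m, n):
    """Row-multiplying matrix: I_n with entry (i, i) replaced by m."""
    E = identity(n)
    E[i][i] = m
    return E

def L(i, j, m, n):
    """Row-addition matrix: I_n with an extra m in position (i, j)."""
    E = identity(n)
    E[i][j] = m
    return E
```

Left-multiplying A = [[1, 2], [3, 4]] by T(0, 1, 2) swaps its rows, and L(1, 0, -3, 2) adds −3 times row 0 to row 1, exactly the step used in Gaussian elimination; the inverse formulas (Ti,j its own inverse, Lij(m)⁻¹ = Lij(−m)) can be confirmed the same way.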
== See also == Gaussian elimination Linear algebra System of linear equations Matrix (mathematics) LU decomposition Frobenius matrix == References == Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0 Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on 2009-10-31 Perrone, Paolo (2024), Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1 Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall Strang, Gilbert (2016), Introduction to Linear Algebra (5th ed.), Wellesley-Cambridge Press, ISBN 978-09802327-7-6
|
Wikipedia:Elementary symmetric polynomial#0
|
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial P is given by an expression involving only additions and multiplication of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each positive integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables. == Definition == The elementary symmetric polynomials in n variables X1, ..., Xn, written ek(X1, ..., Xn) for k = 1, ..., n, are defined by e 1 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ a ≤ n X a , e 2 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ a < b ≤ n X a X b , e 3 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ a < b < c ≤ n X a X b X c , {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq a\leq n}X_{a},\\e_{2}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq a<b\leq n}X_{a}X_{b},\\e_{3}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq a<b<c\leq n}X_{a}X_{b}X_{c},\\\end{aligned}}} and so forth, ending with e n ( X 1 , X 2 , … , X n ) = X 1 X 2 ⋯ X n . {\displaystyle e_{n}(X_{1},X_{2},\dots ,X_{n})=X_{1}X_{2}\cdots X_{n}.} In general, for k > 0 we define e k ( X 1 , … , X n ) = ∑ 1 ≤ a 1 < a 2 < ⋯ < a k ≤ n X a 1 X a 2 ⋯ X a k , {\displaystyle e_{k}(X_{1},\ldots ,X_{n})=\sum _{1\leq a_{1}<a_{2}<\cdots <a_{k}\leq n}X_{a_{1}}X_{a_{2}}\dotsm X_{a_{k}},} Also, ek(X1, ..., Xn) = 0 if k > n. Sometimes, e0(X1, ..., Xn) = 1 is included among the elementary symmetric polynomials, but excluding it allows generally simpler formulation of results and properties. Thus, for each positive integer k less than or equal to n there exists exactly one elementary symmetric polynomial of degree k in n variables. To form the one that has degree k, we take the sum of all products of k-subsets of the n variables. 
(By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.) Given an integer partition (that is, a finite non-increasing sequence of positive integers) λ = (λ1, ..., λm), one defines the symmetric polynomial eλ(X1, ..., Xn), also called an elementary symmetric polynomial, by e λ ( X 1 , … , X n ) = e λ 1 ( X 1 , … , X n ) ⋅ e λ 2 ( X 1 , … , X n ) ⋯ e λ m ( X 1 , … , X n ) {\displaystyle e_{\lambda }(X_{1},\dots ,X_{n})=e_{\lambda _{1}}(X_{1},\dots ,X_{n})\cdot e_{\lambda _{2}}(X_{1},\dots ,X_{n})\cdots e_{\lambda _{m}}(X_{1},\dots ,X_{n})} . Sometimes the notation σk is used instead of ek. == Recursive definition == The following definition is equivalent to the above and might be useful for computer implementations: e 1 ( X 1 , … , X n ) = ∑ 1 ≤ j ≤ n X j , e k ( X 1 , … , X n ) = ∑ 1 ≤ j ≤ n − k + 1 X j e k − 1 ( X j + 1 , … , X n ) {\displaystyle {\begin{aligned}e_{1}(X_{1},\dots ,X_{n})&=\sum _{1\leq j\leq n}X_{j},\\e_{k}(X_{1},\dots ,X_{n})&=\sum _{1\leq j\leq n-k+1}X_{j}e_{k-1}(X_{j+1},\dots ,X_{n})\\\end{aligned}}} == Examples == The following lists the n elementary symmetric polynomials for the first four positive values of n. For n = 1: e 1 ( X 1 ) = X 1 . {\displaystyle e_{1}(X_{1})=X_{1}.} For n = 2: e 1 ( X 1 , X 2 ) = X 1 + X 2 , e 2 ( X 1 , X 2 ) = X 1 X 2 . {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2})&=X_{1}+X_{2},\\e_{2}(X_{1},X_{2})&=X_{1}X_{2}.\,\\\end{aligned}}} For n = 3: e 1 ( X 1 , X 2 , X 3 ) = X 1 + X 2 + X 3 , e 2 ( X 1 , X 2 , X 3 ) = X 1 X 2 + X 1 X 3 + X 2 X 3 , e 3 ( X 1 , X 2 , X 3 ) = X 1 X 2 X 3 . 
{\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2},X_{3})&=X_{1}+X_{2}+X_{3},\\e_{2}(X_{1},X_{2},X_{3})&=X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3},\\e_{3}(X_{1},X_{2},X_{3})&=X_{1}X_{2}X_{3}.\,\\\end{aligned}}} For n = 4: e 1 ( X 1 , X 2 , X 3 , X 4 ) = X 1 + X 2 + X 3 + X 4 , e 2 ( X 1 , X 2 , X 3 , X 4 ) = X 1 X 2 + X 1 X 3 + X 1 X 4 + X 2 X 3 + X 2 X 4 + X 3 X 4 , e 3 ( X 1 , X 2 , X 3 , X 4 ) = X 1 X 2 X 3 + X 1 X 2 X 4 + X 1 X 3 X 4 + X 2 X 3 X 4 , e 4 ( X 1 , X 2 , X 3 , X 4 ) = X 1 X 2 X 3 X 4 . {\displaystyle {\begin{aligned}e_{1}(X_{1},X_{2},X_{3},X_{4})&=X_{1}+X_{2}+X_{3}+X_{4},\\e_{2}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}+X_{1}X_{3}+X_{1}X_{4}+X_{2}X_{3}+X_{2}X_{4}+X_{3}X_{4},\\e_{3}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}X_{3}+X_{1}X_{2}X_{4}+X_{1}X_{3}X_{4}+X_{2}X_{3}X_{4},\\e_{4}(X_{1},X_{2},X_{3},X_{4})&=X_{1}X_{2}X_{3}X_{4}.\,\\\end{aligned}}} == Properties == The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity ∏ j = 1 n ( λ − X j ) = λ n − e 1 ( X 1 , … , X n ) λ n − 1 + e 2 ( X 1 , … , X n ) λ n − 2 + ⋯ + ( − 1 ) n e n ( X 1 , … , X n ) . {\displaystyle \prod _{j=1}^{n}(\lambda -X_{j})=\lambda ^{n}-e_{1}(X_{1},\ldots ,X_{n})\lambda ^{n-1}+e_{2}(X_{1},\ldots ,X_{n})\lambda ^{n-2}+\cdots +(-1)^{n}e_{n}(X_{1},\ldots ,X_{n}).} That is, when we substitute numerical values for the variables X1, X2, ..., Xn, we obtain the monic univariate polynomial (with variable λ) whose roots are the values substituted for X1, X2, ..., Xn and whose coefficients are – up to their sign – the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas. The characteristic polynomial of a square matrix is an example of application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. 
When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain – up to their sign – the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the elements of the diagonal) is the value of e1, and thus the sum of the eigenvalues. Similarly, the determinant is – up to the sign – the constant term of the characteristic polynomial, i.e. the value of en. Thus the determinant of a square matrix is the product of the eigenvalues. The set of elementary symmetric polynomials in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring Z {\displaystyle \mathbb {Z} } [e1(X1, ..., Xn), ..., en(X1, ..., Xn)]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For another system of symmetric polynomials with the same property see Complete homogeneous symmetric polynomials, and for a system with a similar, but slightly weaker, property see Power sum symmetric polynomial. == Fundamental theorem of symmetric polynomials == For any commutative ring A, denote the ring of symmetric polynomials in the variables X1, ..., Xn with coefficients in A by A[X1, ..., Xn]Sn. This is a polynomial ring in the n elementary symmetric polynomials ek(X1, ..., Xn) for k = 1, ..., n. This means that every symmetric polynomial P(X1, ..., Xn) ∈ A[X1, ..., Xn]Sn has a unique representation P ( X 1 , … , X n ) = Q ( e 1 ( X 1 , … , X n ) , … , e n ( X 1 , … , X n ) ) {\displaystyle P(X_{1},\ldots ,X_{n})=Q{\big (}e_{1}(X_{1},\ldots ,X_{n}),\ldots ,e_{n}(X_{1},\ldots ,X_{n}){\big )}} for some polynomial Q ∈ A[Y1, ..., Yn]. Another way of saying the same thing is that the ring homomorphism that sends Yk to ek(X1, ..., Xn) for k = 1, ..., n defines an isomorphism between A[Y1, ..., Yn] and A[X1, ..., Xn]Sn. 
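Both the defining sum over k-subsets and the recursive definition above are short to implement, and Vieta's formulas can then be checked by expanding the monic polynomial from its roots. A sketch (function names are ours):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """e_k evaluated at the values xs: sum over all k-subsets.
    Returns 0 automatically when k > len(xs)."""
    return sum(prod(c) for c in combinations(xs, k)) if k else 1

def e_rec(k, xs):
    """The article's recursive definition of e_k."""
    if k == 0:
        return 1
    return sum(xs[j] * e_rec(k - 1, xs[j + 1:])
               for j in range(len(xs) - k + 1))

def monic_from_roots(xs):
    """Vieta: coefficients of prod_j (lam - x_j), highest power first;
    the coefficient of lam^(n-k) is (-1)^k e_k(xs)."""
    return [(-1) ** k * e(k, xs) for k in range(len(xs) + 1)]
```

For roots 1, 2, 3 this gives λ³ − 6λ² + 11λ − 6, i.e. coefficients [1, -6, 11, -6], whose entries up to sign are e₁ = 6, e₂ = 11, e₃ = 6; here e₁ is the trace-like sum of the roots and e₃ their product, mirroring the trace and determinant of a matrix with these eigenvalues.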
=== Proof sketch === The theorem may be proved for symmetric homogeneous polynomials by a double induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric). In the case n = 1 the result is trivial because every polynomial in one variable is automatically symmetric. Assume now that the theorem has been proved for all polynomials in m < n variables and all symmetric polynomials in n variables with degree < d. Every homogeneous symmetric polynomial P in A[X1, ..., Xn]Sn can be decomposed as a sum of homogeneous symmetric polynomials P ( X 1 , … , X n ) = P lacunary ( X 1 , … , X n ) + X 1 ⋯ X n ⋅ Q ( X 1 , … , X n ) . {\displaystyle P(X_{1},\ldots ,X_{n})=P_{\text{lacunary}}(X_{1},\ldots ,X_{n})+X_{1}\cdots X_{n}\cdot Q(X_{1},\ldots ,X_{n}).} Here the "lacunary part" Placunary is defined as the sum of all monomials in P which contain only a proper subset of the n variables X1, ..., Xn, i.e., where at least one variable Xj is missing. Because P is symmetric, the lacunary part is determined by its terms containing only the variables X1, ..., Xn − 1, i.e., which do not contain Xn. More precisely: If A and B are two homogeneous symmetric polynomials in X1, ..., Xn having the same degree, and if the coefficient of A before each monomial which contains only the variables X1, ..., Xn − 1 equals the corresponding coefficient of B, then A and B have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables X1, ..., Xn − 1.)
But the terms of P which contain only the variables X1, ..., Xn − 1 are precisely the terms that survive the operation of setting Xn to 0, so their sum equals P(X1, ..., Xn − 1, 0), which is a symmetric polynomial in the variables X1, ..., Xn − 1 that we shall denote by P̃(X1, ..., Xn − 1). By the inductive hypothesis, this polynomial can be written as P ~ ( X 1 , … , X n − 1 ) = Q ~ ( σ 1 , n − 1 , … , σ n − 1 , n − 1 ) {\displaystyle {\tilde {P}}(X_{1},\ldots ,X_{n-1})={\tilde {Q}}(\sigma _{1,n-1},\ldots ,\sigma _{n-1,n-1})} for some Q̃. Here the doubly indexed σj,n − 1 denote the elementary symmetric polynomials in n − 1 variables. Consider now the polynomial R ( X 1 , … , X n ) := Q ~ ( σ 1 , n , … , σ n − 1 , n ) . {\displaystyle R(X_{1},\ldots ,X_{n}):={\tilde {Q}}(\sigma _{1,n},\ldots ,\sigma _{n-1,n}).} Then R(X1, ..., Xn) is a symmetric polynomial in X1, ..., Xn, of the same degree as Placunary, which satisfies R ( X 1 , … , X n − 1 , 0 ) = Q ~ ( σ 1 , n − 1 , … , σ n − 1 , n − 1 ) = P ( X 1 , … , X n − 1 , 0 ) {\displaystyle R(X_{1},\ldots ,X_{n-1},0)={\tilde {Q}}(\sigma _{1,n-1},\ldots ,\sigma _{n-1,n-1})=P(X_{1},\ldots ,X_{n-1},0)} (the first equality holds because setting Xn to 0 in σj,n gives σj,n − 1, for all j < n). In other words, the coefficient of R before each monomial which contains only the variables X1, ..., Xn − 1 equals the corresponding coefficient of P. As we know, this shows that the lacunary part of R coincides with that of the original polynomial P. Therefore the difference P − R has no lacunary part, and is therefore divisible by the product X1···Xn of all variables, which equals the elementary symmetric polynomial σn,n. Then writing P − R = σn,nQ, the quotient Q is a homogeneous symmetric polynomial of degree less than d (in fact degree at most d − n) which by the inductive hypothesis can be expressed as a polynomial in the elementary symmetric functions. 
Combining the representations for P − R and R one finds a polynomial representation for P. The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials e1, ..., en are algebraically independent over the ring A.) The fact that the polynomial representation is unique implies that A[X1, ..., Xn]Sn is isomorphic to A[Y1, ..., Yn]. === Alternative proof === The following proof is also inductive, but does not involve other polynomials than those symmetric in X1, ..., Xn, and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables Xi lexicographically, where the individual variables are ordered X1 > ... > Xn, in other words the dominant term of a polynomial is one with the highest occurring power of X1, and among those the one with the highest power of X2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) as follows by partitions of d. Order the individual elementary symmetric polynomials ei(X1, ..., Xn) in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition λ of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by eλt (X1, ..., Xn) (the t is present only because traditionally this product is associated to the transpose partition of λ). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables Xi. Lemma. The leading term of eλt (X1, ..., Xn) is X λ. 
Proof. The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor ei (X1, ..., Xn) is clearly X1X2···Xi. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, ..., i of the variables, then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X λ. Now one proves by induction on the leading monomial in lexicographic order, that any nonzero homogeneous symmetric polynomial P of degree d can be written as polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X λ with λ a partition of d. Let the coefficient of this term be c, then P − ceλt (X1, ..., Xn) is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back ceλt (X1, ..., Xn) to it, one obtains the sought for polynomial expression for P. The fact that this expression is unique, or equivalently that all the products (monomials) eλt (X1, ..., Xn) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the eλt (X1, ..., Xn) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables Xi) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction. 
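The alternative proof is effectively an algorithm, and it is short enough to run: repeatedly read off the leading monomial X^λ with coefficient c, subtract c·e_{λᵗ}, and record the factors used. A sketch under our own conventions (a polynomial is a dict mapping exponent tuples to coefficients; the input must be symmetric, as the proof requires):

```python
from itertools import combinations
from collections import defaultdict

def e_poly(k, n):
    """e_k in n variables as a dict {exponent tuple: coefficient}."""
    if k == 0:
        return {(0,) * n: 1}
    p = {}
    for sub in combinations(range(n), k):
        exp = [0] * n
        for i in sub:
            exp[i] = 1
        p[tuple(exp)] = 1
    return p

def p_mul(p, q):
    r = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            r[tuple(x + y for x, y in zip(a, b))] += ca * cb
    return {k: c for k, c in r.items() if c}

def conjugate(lam):
    """Transpose of a partition (weakly decreasing tuple)."""
    return tuple(sum(1 for x in lam if x >= j)
                 for j in range(1, (lam[0] + 1) if lam else 1))

def express(P, n):
    """Greedy decomposition from the proof: returns a dict mapping a
    tuple of e-indices (mu_1, mu_2, ...) to the coefficient of the
    product e_{mu_1} e_{mu_2} ... in P.  P must be symmetric."""
    P, out = dict(P), {}
    while P:
        lead = max(P)                 # lexicographic leading monomial
        c = P[lead]
        lam = tuple(sorted((x for x in lead if x), reverse=True))
        mu = conjugate(lam)           # e_mu has leading term X^lam
        term = {(0,) * n: c}
        for i in mu:
            term = p_mul(term, e_poly(i, n))
        for k, v in term.items():     # subtract c * e_{lambda^t}
            P[k] = P.get(k, 0) - v
            if not P[k]:
                del P[k]
        out[mu] = c
    return out
```

For P = X₁² + X₂² in two variables, `express({(2, 0): 1, (0, 2): 1}, 2)` returns `{(1, 1): 1, (2,): -2}`, i.e. the classical identity X₁² + X₂² = e₁² − 2e₂, and the uniqueness argument guarantees this is the only such expression.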
== See also == Symmetric polynomial Complete homogeneous symmetric polynomial Schur polynomial Newton's identities Newton's inequalities Maclaurin's inequality MacMahon Master theorem Symmetric function Representation theory == References == Macdonald, I. G. (1995). Symmetric Functions and Hall Polynomials (2nd ed.). Oxford: Clarendon Press. ISBN 0-19-850450-0. Stanley, Richard P. (1999). Enumerative Combinatorics, Vol. 2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1. == External links == Trifonov, Martin (5 March 2024). Prelude to Galois Theory: Exploring Symmetric Polynomials (Video). YouTube. Retrieved 2024-03-26.
|
Wikipedia:Elena Braverman#0
|
Elena Yanovna Braverman (née Lumelskaya, Russian: Елена Яновна Браверман) is a Russian, Israeli, and Canadian mathematician known for her research in delay differential equations, difference equations, and population dynamics. She is a professor of mathematics and applied mathematics at the University of Calgary, and one of the editors-in-chief of the journal Advances in Difference Equations. == Education and career == Braverman is originally from the Soviet Union, and earned bachelor's and master's degrees at Perm State University in 1981 and 1983 respectively. She defended her Ph.D. at Ural State University in 1990. Her dissertation, Linear impulsive functional differential equations, was supervised by Nikolai V. Azbelev. In 1992, she emigrated to Israel, where she took a postdoctoral research position at the Technion – Israel Institute of Technology. She remained in Israel for most of the following decade, with teaching positions at the Technion and at the ORT Braude College of Engineering. After visiting Yale University in 2001–2002, she moved to her present position at the University of Calgary in 2002. She was tenured there in 2007 and promoted to full professor in 2011. == Book == Braverman is a co-author of the book Nonoscillation Theory of Functional Differential Equations with Applications (with Ravi P. Agarwal, Leonid Berezansky, and Alexander Domoshnitsky, Springer, 2012). == Family == Braverman is the daughter of mathematical statistician Yan Petrovich Lumel'skii. Braverman's mother, Ludmila Mikhailovna Tsirulnikova, was also a university-level physics teacher, whose father was Soviet weapons engineer Mikhail Yuryevich Tsirulnikov. Braverman is the mother of theoretical computer scientist Mark Braverman. == References == == External links == Home page Elena Braverman publications indexed by Google Scholar
|
Wikipedia:Elena Celledoni#0
|
Elena Celledoni (born 1967) is an Italian mathematician who works in Norway as a professor of mathematical sciences at the Norwegian University of Science and Technology (NTNU). Her research involves numerical algorithms for partial differential equations and for Lie group computations, including the study of structure-preserving algorithms. == Education and career == Celledoni earned a master's degree at the University of Trieste in 1993. She completed a Ph.D. at the University of Padua in 1997. Her dissertation, Krylov Subspace Methods For Linear Systems Of ODEs, was jointly supervised by Igor Moret and Alfredo Bellen. Before becoming a faculty member at NTNU in 2004, she was a postdoctoral researcher at the University of Cambridge, at the Mathematical Sciences Research Institute, and at NTNU. == Recognition == Celledoni is a member of the Royal Norwegian Society of Sciences and Letters. == References == == External links == Elena Celledoni publications indexed by Google Scholar
|
Wikipedia:Elena Deza#0
|
Elena Ivanovna Deza (Russian: Елена Ивановна Деза, née Panteleeva; born 23 August 1961) is a French and Russian mathematician known for her books on metric spaces and figurate numbers. == Education and career == Deza was born on 23 August 1961 in Volgograd, and is a French and Russian citizen. She earned a diploma in mathematics in 1983, a candidate's degree (doctorate) in mathematics and physics in 1993, and a docent's certificate in number theory in 1995, all from Moscow State Pedagogical University. From 1983 to 1988, Deza was an assistant professor of mathematics at Moscow State Forest University. In 1988 she moved to Moscow State Pedagogical University; she became a lecturer there in 1993, a reader in 1994, and a full professor in 2006. == Books == As well as many Russian-language books, Deza's books include: Dictionary of Distances (with Michel Deza, Elsevier, 2006) Encyclopedia of Distances (with Michel Deza, Springer, 2009; 4th ed., 2016) Figurate Numbers (with Michel Deza, World Scientific, 2012) Generalizations of Finite Metrics and Cuts (with Michel Deza and Mathieu Dutour Sikirić, World Scientific, 2016) Mersenne Numbers and Fermat Numbers (World Scientific, 2021) == References == == External links == Home page Elena Deza publications indexed by Google Scholar
|
Wikipedia:Elena Freda#0
|
Elena Freda (25 March 1890 – 25 November 1978) was an Italian mathematician and mathematical physicist known for her collaboration with Vito Volterra on mathematical analysis and its applications to electromagnetism and biomathematics. == Life == Freda was born on 25 March 1890. She studied projective geometry with Guido Castelnuovo at the Sapienza University of Rome, graduating in 1912, but then shifted her interests to mathematical physics, working with Orso Mario Corbino and earning a second degree in physics from Sapienza University in 1915. Her earliest documented connection to Vito Volterra is also from 1915, in the form of a letter from Freda to Volterra with the date 23 September 1915, describing her work. Italy entered World War I in 1915, on the side of the Allied Powers. This was something that Volterra had strongly advocated, and he enlisted for the war effort, bringing with him students including Freda to assist him in ballistics calculations. A letter from her to Volterra from 1915 discusses the difficulties of spending days on calculations on "millimetered paper". After the war, Freda earned a habilitation (libera docenza) in physics in 1918, and was appointed as a docent in mathematical physics at Sapienza University in 1919; her habilitation was confirmed in 1929. She taught courses in mathematical physics and rational mechanics at the University of Messina in 1923–1924, but then, with uncertain continued career prospects in Messina, returned to Rome. She taught there for the rest of her career until retiring from teaching in 1959, under the mandatory retirement rules then in place. She died on 25 November 1978 in Rome. 
== Research == Freda's initial publications were in projective geometry, but by 1915 her interests had already begun shifting to mathematical analysis and mathematical physics, with one publication on Euler's homogeneous function theorem and another applying mathematical analysis to study Corbino's experimental work in electromagnetics. She continued publishing works on the analysis of electromagnetics into the 1920s. Her early work in analysis was already inspired by Volterra, who presented one of her results to the Accademia dei Lincei in 1916, and a 1921 paper was coauthored with Volterra. She also collaborated in this period with Nella Mortara, another female Italian physicist. Her work in mathematical biology, again inspired by Volterra and his work in population dynamics, began in 1927 and in 1931 she published a review of Volterra's work in this area. Her work through the 1930s returned to more purely mathematical studies in analysis. This period includes what has been described as her "greatest work", Méthode des caractéristiques pour intégration des équations aux dérivées partielles linéaires hyperboliques, a 1937 publication in French (under the name Hélène Freda) on the solution of second-order hyperbolic partial differential equations, based on a course of study she gave beginning in 1931, with a preface by Volterra. == References == == Further reading == Giannetto, Enrico (2007), "Elena Freda, Vito Volterra and the conception of a hysterical nature", in Babini, Valeria Paola; Simili, Raffaella (eds.), More Than Pupils: Italian Women in Science at the Turn of the 20th Century, L.S. Olschki, p. 107
|
Wikipedia:Elena Mantovan#0
|
Elena Mantovan is a mathematician specializing in arithmetic geometry. Educated in Italy and the US, she works in the US as Taussky-Todd–Lonergan Professor of Mathematics at the California Institute of Technology (Caltech). == Education and career == Mantovan earned a laurea in mathematics at the University of Padua in 1995. She completed her Ph.D. in 2002 at Harvard University. Her dissertation, On Certain Unitary Group Shimura Varieties, was supervised by Richard Taylor. She later published it as part of the monograph Variétés de Shimura, espaces de Rapoport-Zink et correspondances de Langlands locales, co-authored with Laurent Fargues (Astérisque 291, Société mathématique de France, 2004). She was a Miller Research Fellow at the University of California, Berkeley, with Ken Ribet as a mentor, from 2002 until 2005. In 2005, she joined the Caltech faculty. From August 2010 through March 2011, she was a von Neumann Fellow at the Institute for Advanced Study. She was promoted to full professor at Caltech in 2010, and was the executive officer of the mathematics department from 2016 to 2019. == Mentorship == Mantovan is faculty advisor for the Caltech chapter of the Association for Women in Mathematics. She has been cited as a mentor for undergraduate mathematicians including Ila Varma, 2009 honorable mention for the Alice T. Schafer Prize, and Laura Lewis, winner of the National Center for Women and Information Technology (NCWIT) 2021 Collegiate Award. == References == == External links == Home page
|
Wikipedia:Elena Prieto-Rodriguez#0
|
Elena Prieto-Rodriguez is a Spanish and Australian mathematician, computer scientist, and mathematics educator known for her research in parameterized complexity and her work in mathematics education. She is a professor in the School of Education at the University of Newcastle in Australia, and deputy head of the school for teaching and learning. == Early life and education == Prieto's interest in computing was sparked by getting a personal computer as a pre-teen. After earning a bachelor's degree from the Complutense University of Madrid, Prieto worked in El Salvador from 1998 to 1999 in a mathematics education program at the University of El Salvador. After beginning graduate study in computer science in 2000 at the University of Victoria in Canada, she transferred to the University of Newcastle in Australia in 2001 and completed a doctorate in theoretical computer science in 2005. Her dissertation, Systematic Kernelization in FPT Algorithm Design, concerned methods for kernelization in parameterized algorithms, and was jointly supervised by Michael Fellows and Frances A. Rosamond. == Work in education == Prieto became a postdoctoral researcher in the Newcastle Bioinformatics Initiative at Newcastle's School of Engineering and Built Environments, and then moved to the Australian Research Council as a researcher. She became interested in mathematics education out of curiosity about why young people choose careers in science, technology, engineering, and mathematics, and returned to Newcastle as a lecturer in the school of education in 2012. Since 2017, she has led a project at Newcastle named HunterWISE that links mentorship networks of women in STEM careers with schools outreach to bring younger women into STEM; the project won the University of Newcastle's Excellence Award for Equity, Diversity, and Inclusion in 2019. She is also affiliated as an external member of the Mathematics and Science Education Research Group of the University of Tasmania. 
== References == == External links == Elena Prieto-Rodriguez publications indexed by Google Scholar Homepage at University of Newcastle
|
Wikipedia:Elena Yanovskaya#0
|
Elena Yanovskaya (Russian: Еле́на Бори́совна Яно́вская, born 20 May 1938) is a Soviet and Russian mathematician and economist known for her contributions to cooperative game theory. == Biography == Elena Yanovskaya was born in Leningrad on May 20, 1938. She studied at the School of Mathematics and Mechanics of the Leningrad State University, majoring in probability theory and statistics. After graduating in 1959, she started working as a junior researcher at the Leningrad Department of the Steklov Institute of Mathematics, where she worked until 1965. Yanovskaya defended her doctoral thesis (Candidate of Sciences) in 1964. From 1965 to 1975, Yanovskaya worked at the Leningrad branch of the Central Economic Mathematical Institute, where she started as a junior researcher and became the head of the game theory lab. From 1975 to 1990, she worked at the Institute of Socio-Economic Problems of the USSR Academy of Sciences. Yanovskaya defended her postdoctoral thesis (Doctor of Sciences) in 1980. From 1990 to 2015, she worked as the head of the laboratory of Game Theory and Decision Making of the St. Petersburg Economics and Mathematics Institute. Since 2009, she has worked as a professor at the St. Petersburg campus of the Higher School of Economics. == Publications == E. B. Janovskaya, “Minimax Theorems for Games on Unit Square”, Theory Probab. Appl., 9:3 (1964), 500–502 E. B. Yanovskaya, “The solution of the infinite zero-sum two-person games infinite-additive strategies”, Theory Probab. Appl., 15:1 (1970), 153–158 E. B. Yanovskaya, “Infinite antagonistic games”, J. Soviet Math., 2:5 (1974), 520–541 E. B. Yanovskaya, “Axiomatic characterization of maximin and lexicographically maximin solutions of bargaining schemes”, Autom. Remote Control, 46 (1985), 1177–1185 E. B. Yanovskaya, “Group choice rules in problems with comparisons of individual preferences”, Autom. Remote Control, 50:6 (1989), 822–830 Naumova N.I., Yanovskaya E. Nash Social Choice Orderings. 
"Mathematical Social Sciences", 2001, vol.42, N3, 203–231; Yanovskaya E. Proportional values for TU games. International Journal of Mathematics, Game Theory and Algebra. Nova Sci.Publishers, 2006, vol.16, issue 3. Elena Yanovskaya, “One More Uniqueness of the Shapley Value”, Contributions to Game Theory and Management, 1 (2007), 504–523 Elena B. Yanovskaya, “The Nucleolus and the τ-value of Interval Games”, Contributions to Game Theory and Management, 3 (2010), 421–430 Elena B. Yanovskaya, “Consistent Subsolutions of the Least Core”, Contributions to Game Theory and Management, 5 (2012), 321–333 Elena B. Yanovskaya, “The bounded core for games with restricted cooperation”, Autom. Remote Control, 77:9 (2016), 1699–1710 == Awards == Kantorovich Prize (2014) "for the work on the cooperative approach to problems of aggregation and distribution". == References == == External links == Personal page at Higher School of Economics Yanovskaya, Elena Borisovna's profile page at http://www.mathnet.ru Web archive of a personal page at St. Petersburg Economics and Mathematics Institute (in Russian)
|
Wikipedia:Eleonor Harboure#0
|
Eleonor "Pola" Ofelia Harboure de Aguilera (15 June 1948 – 15 January 2022), who published professionally as Eleonor Harboure, was a mathematician from Argentina who was the first woman president of Unión Matemática Argentina (UMA), the Argentinian mathematical professional society. Harboure also served as the UMA's secretary. Harboure earned a PhD from the University of Minnesota in 1978. Her PhD advisor was Nestor Marcelo Riviere and her dissertation work was in functional analysis. She had 8 PhD students of her own. == References ==
|
Wikipedia:Eleonora Catsigeras#0
|
Eleonora Dolores Catsigeras García (born 1956) is a Uruguayan mathematician who specializes in dynamical systems and is a winner of the L'Oréal-UNESCO Award for Women in Science in 2014, with her project Neurodynamics. She pursued her doctorate in sciences at the National Institute of Pure and Applied Mathematics in Rio de Janeiro from 1991, obtaining the degree in 1995. She later began working as a teacher at the Institute of Mathematics and Statistics of the University of the Republic, where she became a professor. == Awards == Special Mention at the Pedeciba-UNDP Caldeyro Barcia Awards (1999) L'Oréal-UNESCO Award for Women in Science (2014) == References == == External links == Doctora en Matemática Eleonora Catsigeras (interview)
|
Wikipedia:Eli Maor#0
|
Eli Maor (Hebrew: אלי מאור; born 4 October 1937) is a mathematician and historian of mathematics, best known for several books about mathematics and its history written for a popular audience. Eli Maor received his PhD at the Technion – Israel Institute of Technology. He taught history of mathematics at Loyola University Chicago. Maor was the editor of the article on trigonometry for the Encyclopædia Britannica. Asteroid 226861 Elimaor, discovered at the Jarnac Observatory in 2004, was named in his honor. The official naming citation was published by the Minor Planet Center on 22 July 2013 (M.P.C. 84383). == Selected works == To Infinity and Beyond: A Cultural History of the Infinite, 1991, Princeton University Press. ISBN 978-0-691-02511-7 e: The Story of a Number, Princeton University Press (Princeton, New Jersey), 1994. ISBN 0-691-05854-7 Venus in Transit, 2000, Princeton University Press. ISBN 0-691-04874-6 Trigonometric Delights, Princeton University Press, 2002 ISBN 0-691-09541-8. Ebook version, in PDF format, full text presented. The Pythagorean Theorem: A 4,000-Year History, 2007, Princeton University Press, ISBN 978-0-691-12526-8 The Facts on File Calculus Handbook (Facts on File, 2003), 2005, Checkmark Books, an encyclopedia of calculus concepts geared for high school and college students Music by the Numbers. Princeton University Press. 2018. ISBN 9780691176901. == References ==
|
Wikipedia:Eli Shamir#0
|
Eliahu (Eli) Shamir (Hebrew: אליהו שמיר) is an Israeli mathematician and computer scientist, the Jean and Helene Alfassa Professor Emeritus of Computer Science at the Hebrew University of Jerusalem. == Biography == Shamir earned his Ph.D. from the Hebrew University in 1963, under the supervision of Shmuel Agmon. After briefly holding faculty positions at the University of California, Berkeley and Northwestern University, he returned to the Hebrew University in 1966 and was promoted to full professor in 1972. == Contributions == Shamir was one of the discoverers of the pumping lemma for context-free languages. He did research in partial differential equations, automata theory, random graphs, computational learning theory, and computational linguistics. He was (with Michael O. Rabin) one of the founders of the computer science program at the Hebrew University. == Awards and honors == He was given his named chair in 1987, and in 2002 a workshop on learning and formal verification was held in his honor at Neve Ilan, Israel. == Selected publications == Bar-Hillel, Y.; Perles, M.; Shamir, E. (1961), "On formal properties of simple phrase structure grammars", Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14 (2): 143–172. Shamir, E.; Spencer, J. (1987), "Sharp concentration of the chromatic number on random graphs Gn,p", Combinatorica, 7 (1): 121–129, doi:10.1007/BF02579208, MR 0905159, S2CID 27769008. Freund, Yoav; Seung, H. Sebastian; Shamir, Eli; Tishby, Naftali (1997), "Selective sampling using the query by committee algorithm", Machine Learning, 28 (2–3): 133–168, doi:10.1023/A:1007330508534. == References == == External links == Eli Shamir at DBLP Bibliography Server
|
Wikipedia:Eli Turkel#0
|
Eli L. Turkel (Hebrew: אלי טורקל; born January 22, 1944) is an Israeli applied mathematician and currently an emeritus professor of applied mathematics at the School of Mathematical Sciences, Tel Aviv University. He is known for his contributions to the numerical analysis of partial differential equations, particularly in the fields of computational fluid dynamics, computational electromagnetics, acoustics, elasticity, and image processing, with applications to First Temple ostraca and, more recently, deep learning for forward and inverse problems in PDEs. == Research == His research interests include algorithms for solving partial differential equations (PDEs), including scattering and inverse scattering, image processing, and crack propagation. His most quoted paper, with Jameson and Schmidt (JST), presents a Runge-Kutta scheme to solve the Euler equations. Other main contributions include fast algorithms for the Navier-Stokes equations based on preconditioning techniques, radiation boundary conditions, and high-order accuracy for wave propagation in general-shaped domains using difference potentials. He has published work on reading ostraca from the First Temple period. His paper "Algorithmic handwriting analysis of Judah’s military correspondence sheds light on the composition of biblical texts", which appeared in PNAS, was quoted by numerous sources, including the front page of the NY Times. Later articles deal with ostraca at both Samaria and Arad. Other research includes high-order compact numerical methods for hyperbolic equations, including the Helmholtz equation, acoustics, and Maxwell's equations, using Cartesian grids with general-shaped boundaries and interfaces. Further work uses deep learning to detect sources and obstacles underwater using the acoustic wave equation and data at a few noisy sensors. Recent applications of deep learning include using large time steps and improving the accuracy of finite differences for high frequencies on coarse grids. 
Other deep learning algorithms include HINTS for iterative methods, VITO for inverse problems, and MATCH for time-dependent PDEs. He has also authored articles in Tradition and the Journal of Contemporary Halacha. Turkel was listed as an ISI highly cited researcher in mathematics. Google Scholar lists over 20,000 citations. == Education == Turkel was born in New York City, United States. He received his B.A. degree from Yeshiva University in 1965, M.S. degree from New York University in 1967, and Ph.D. degree from the Courant Institute at New York University in 1970, all in mathematics. His Ph.D. thesis advisors were J. J. Stoker and Eugene Isaacson. He received rabbinical ordination from Rabbi Joseph B. Soloveitchik. == References == == External links == Eli Turkel at the Mathematics Genealogy Project Home page: http://www.math.tau.ac.il/~turkel/ Index book for the Rav Soloveitchik https://www.otzar.org/wotzar/book.aspx?191322&&lang=eng
|
Wikipedia:Eliane R. Rodrigues#0
|
Eliane Regina Rodrigues is a Brazilian applied mathematician and statistician who works in Mexico as a researcher at the Institute of Mathematics of the National Autonomous University of Mexico (UNAM). Her research involves using stochastic processes including Markov chains and Poisson point processes to model phenomena such as air pollution, noise pollution, the health effects of fat taxes, and the effectiveness of vaccination. == Education == After undergraduate study in mathematics at São Paulo State University, Rodrigues earned a master's degree in probability theory from the University of Brasília, both in Brazil. She completed a PhD in applied probability from Queen Mary and Westfield College (now Queen Mary University of London) in England. == Book == Rodrigues is the coauthor, with Brazilian mathematician Jorge Alberto Achcar, of the book Applications of Discrete-time Markov Chains and Poisson Processes to Air Pollution Modeling and Studies (Springer Briefs in Mathematics, 2013). == Recognition == Rodrigues is a member of the Mexican Academy of Sciences, and an Elected Member of the International Statistical Institute. == References ==
|
Wikipedia:Elimination theory#0
|
In commutative algebra and algebraic geometry, elimination theory is the classical name for algorithmic approaches to eliminating some variables between polynomials of several variables, in order to solve systems of polynomial equations. Classical elimination theory culminated with the work of Francis Macaulay on multivariate resultants, as described in the chapter on Elimination theory in the first editions (1930) of Bartel van der Waerden's Moderne Algebra. After that, elimination theory was ignored by most algebraic geometers for almost thirty years, until the introduction of new methods for solving polynomial equations, such as Gröbner bases, which were needed for computer algebra. == History and connection to modern theories == The field of elimination theory was motivated by the need for methods for solving systems of polynomial equations. One of the first results was Bézout's theorem, which bounds the number of solutions (in the case of two polynomials in two variables, in Bézout's time). Except for Bézout's theorem, the general approach was to eliminate variables in order to reduce the problem to a single equation in one variable. The case of linear equations was completely solved by Gaussian elimination, while the older method of Cramer's rule does not proceed by elimination, and works only when the number of equations equals the number of variables. In the 19th century, this was extended to linear Diophantine equations and abelian groups with Hermite normal form and Smith normal form. Before the 20th century, different types of eliminants were introduced, including resultants, and various kinds of discriminants. In general, these eliminants are also invariant under various changes of variables, and are also fundamental in invariant theory. All these concepts are effective, in the sense that their definitions include a method of computation. 
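As a concrete illustration of the eliminants mentioned above, the resultant of two polynomials eliminates one variable and leaves a single polynomial in the others. The sketch below uses SymPy (the library choice is this illustration's assumption, not part of the article) to eliminate y between the unit circle and a line:

```python
from sympy import symbols, resultant, expand

x, y = symbols("x y")

# Two polynomials: the unit circle and the line y = x.
p = x**2 + y**2 - 1
q = x - y

# The resultant with respect to y eliminates y: the roots of the
# resulting polynomial are the x-coordinates of the common
# solutions of p = 0 and q = 0.
r = expand(resultant(p, q, y))
print(r)  # 2*x**2 - 1
```

Consistent with the definition of an eliminant, the computation reduces the two-variable system to a single equation in one variable, whose roots x = ±1/√2 are exactly the x-coordinates where the line meets the circle.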
Around 1890, David Hilbert introduced non-effective methods, and this was seen as a revolution, which led most algebraic geometers of the first half of the 20th century to try to "eliminate elimination". Nevertheless, Hilbert's Nullstellensatz may be considered to belong to elimination theory, as it asserts that a system of polynomial equations does not have any solution if and only if one may eliminate all unknowns to obtain the constant equation 1 = 0. Elimination theory culminated with the work of Leopold Kronecker, and finally Macaulay, who introduced multivariate resultants and U-resultants, providing complete elimination methods for systems of polynomial equations, which are described in the chapter on Elimination theory in the first editions (1930) of van der Waerden's Moderne Algebra. Later, elimination theory was considered old-fashioned and removed from subsequent editions of Moderne Algebra. It was generally ignored until the introduction of computers, and more specifically of computer algebra, which again made relevant the design of efficient elimination algorithms, rather than merely existence and structural results. The main methods for this renewal of elimination theory are Gröbner bases and cylindrical algebraic decomposition, introduced around 1970. == Connection to logic == There is also a logical facet to elimination theory, as seen in the Boolean satisfiability problem. In the worst case, it is presumably hard to eliminate variables computationally. Quantifier elimination is a term used in mathematical logic to explain that, in some theories, every formula is equivalent to a formula without quantifiers. This is the case for the theory of polynomials over an algebraically closed field, where elimination theory may be viewed as the theory of the methods to make quantifier elimination algorithmically effective. Quantifier elimination over the reals is another example, which is fundamental in computational algebraic geometry. 
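Gröbner bases make elimination algorithmically effective: with a lexicographic monomial order, the basis of an ideal contains generators of its elimination ideals. A minimal sketch, again assuming SymPy as the tool (the same circle-and-line system as before):

```python
from sympy import groebner, symbols

x, y = symbols("x y")

# Ideal generated by the unit circle and the line x = y.
f = x**2 + y**2 - 1
g = x - y

# Under the lexicographic order x > y, the basis elements free of x
# generate the elimination ideal consisting of polynomials in y alone.
G = groebner([f, g], x, y, order="lex")
only_y = [p for p in G.exprs if x not in p.free_symbols]
print(only_y)  # a single polynomial in y, e.g. [2*y**2 - 1]
```

The basis element in y alone plays the same role as the resultant: it describes the projection of the solution set onto the y-axis.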
== See also == Buchberger's algorithm Faugère's F4 and F5 algorithms Resultant Triangular decomposition Main theorem of elimination theory == References == Israel Gelfand, Mikhail Kapranov, Andrey Zelevinsky, Discriminants, resultants, and multidimensional determinants. Mathematics: Theory & Applications. Birkhäuser Boston, Inc., Boston, MA, 1994. x+523 pp. ISBN 0-8176-3660-9 Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 David Cox, John Little, Donal O'Shea, Using Algebraic Geometry. Revised second edition. Graduate Texts in Mathematics, vol. 185. Springer-Verlag, 2005, xii+558 pp., ISBN 978-0-387-20733-9
|
Wikipedia:Elisabeth Hagemann#0
|
Elisabeth Hagemann (born 6 March 1906 in Essen, died 1989) was among the first female German mathematicians to obtain a Doctor of Philosophy degree. == Life == Her parents were Otto Hagemann, a department director at Friedrich Krupp AG, and Else Hagemann, née Clausius. Elisabeth Hagemann received her Abitur from the Victoria school in Essen on 6 March 1926. She enrolled at Munich University in spring 1926, then at Bonn University in spring 1928, where she passed her Staatsexamen in mathematics, physics, and geography on 4 March 1932. She then worked as a school teacher in Bad Godesberg, Bonn, and the Rhine Province. In May 1935, she became a research fellow (Wissenschaftlicher Assistent) of Otto Toeplitz at Bonn University. On 28 April 1937, she obtained her Ph.D. By that time, her main advisor, Otto Toeplitz, had already been dismissed under the Nazi civil service law. == References ==
|
Wikipedia:Elisabeth M. Werner#0
|
Elisabeth M. Werner is a mathematician who works as a professor of mathematics at Case Western Reserve University, as associate director of the Institute for Mathematics and its Applications, and as maître de conférences at the Lille University of Science and Technology. Her research interests include convex geometry, functional analysis, probability theory, and their applications. Werner earned a diploma in mathematics from the University of Tübingen, in Germany, in 1985. She moved to France for her graduate studies, finishing her doctorate in 1989 at Pierre and Marie Curie University, under the supervision of Gilles Godefroy. On completing her doctorate she took a faculty position at Case, and two years later added her affiliation with Lille. At Case, she was promoted to full professor in 2002. In 2012, she became one of the inaugural fellows of the American Mathematical Society. == References == == External links == Home page
|
Wikipedia:Elisha Netanyahu#0
|
Elisha Netanyahu (Hebrew: אֱלִישָׁע נְתַנְיָהוּ; December 21, 1912 – April 3, 1986) was an Israeli mathematician specializing in complex analysis. Over the course of his work at the Technion he was the Dean of the Faculty of Sciences and established the separate Department of Mathematics. He was the brother of historian Benzion Netanyahu and the uncle of current Israeli Prime Minister Benjamin Netanyahu. == Biography == Elisha Netanyahu was born in Warsaw, Poland, to Sarah (Lurie) and the Russian Jewish writer and Zionist activist Nathan Mileikowsky. He was the third of nine children. In 1920 the family made aliyah to the Land of Israel. The family eventually settled in Jerusalem and adopted the Hebrew name Netanyahu. Elisha Netanyahu went to the Reali School in Haifa, from which he graduated in 1930. He later returned to Reali in 1935 to teach mathematics there. He studied at the Hebrew University of Jerusalem, from which he received his BS, MA and PhD (1942). His advisors were Michael Fekete and Binyamin Amirà. After graduation, he joined the British Army as a volunteer, serving in Egypt and then in Italy as an officer in a unit of the Royal Engineering Corps. He specialized in the preparation of maps, which he continued to do during the 1948 Arab–Israeli War. After he was demobilized in 1946, he became a lecturer at the Technion. He rose to professor in 1958, later became head of the Mathematics Section, and then served as Dean of the Faculty of Sciences. His administrative efforts also played an important role towards the establishment of the Ben-Gurion University of the Negev. He had long-term visits at Stanford University (1953–54), NYU (1961), the University of New Mexico (1969), the University of Maryland, College Park (1973), and ETH Zürich (1979). In 1980, Netanyahu retired from the Technion and moved to Jerusalem, where he died of cancer in 1986. 
Throughout his long career, Netanyahu collaborated with Paul Erdős, Charles Loewner and other leading mathematicians, continuing and expanding the analytical traditions at the Technion. === Personal life === He was the brother of Benzion Netanyahu, a professor of history, and the uncle of the Prime Minister of Israel, Benjamin Netanyahu. In 1949 Netanyahu married Shoshana Shenburg, his former student at the Reali, who later became the second female justice at the Israel Supreme Court. They had two children: Nathan (b. 1951), a professor of computer science at Bar-Ilan University, and Dan (b. 1954), an information systems auditor. == Elisha Netanyahu Memorial Lectures == The Elisha Netanyahu Memorial Lecture Series was established by his family: His brother Amos Milo (Mileikowsky), his wife, his children and the Technion to honor the memory in 1987 with the first lecture by Paul Erdős. In other years, the speakers included Lars Ahlfors, Robert Aumann, Lipman Bers, Enrico Bombieri, Charles Fefferman, Samuel Karlin, David Kazhdan, Louis Nirenberg, Terence Tao, Wendelin Werner, and Don Zagier. == References == Anderson, J. M. (1988). "Obituary: Elisha Netanyahu". Bulletin of the London Mathematical Society. 20 (6): 613–618. doi:10.1112/blms/20.6.613. Zalcman, Lawrence (December 1993). "In memoriam Elisha Netanyahu 1912–1986". Journal d'Analyse Mathématique. 60 (1): 1–10. doi:10.1007/BF02786592. Elisha Netanyahu Memorial Lectures
|
Wikipedia:Elisha Scott Loomis#0
|
Elisha Scott Loomis (September 18, 1852 – December 11, 1940) was an American teacher, mathematician, genealogist, writer and engineer. == Ancestry and early life == Elisha Scott Loomis, of English–Scottish and Pennsylvania Dutch ancestry, was born in a log-cabin in Wadsworth, Ohio, which at that time was a village in Medina County. He was the eldest son of Charles W. Loomis, a descendant of the pioneer Joseph Loomis of Windsor, Connecticut. His mother was Sarah Oberholtzer, descendant of pioneer Jacob Oberholtzer, of Montgomery County, Pennsylvania. When Loomis was 12 his father died. By that time he had six younger brothers and a sister, and for seven years from the age of 13 he helped his mother make ends meet by working as a farm labourer during summertime. Four months each winter he attended district schools, working for his board while doing so. During his schooldays he wished to learn algebra, and as his district school teacher knew no algebra, he walked several miles to a neighbouring town where he bought Ray's Elementary Algebra. He proceeded to master the material without any support except encouragement of his mother, who had had too little schooling to learn to write. Loomis proved to be a sufficiently apt scholar to become a teacher himself in 1873. He taught during the summer and managed not only to save enough money to help his mother support her family, but also for himself to attend and assist at Baldwin University at Berea, Ohio, during the winter. His industry and thrift enabled him to buy a home, in Shreve, Ohio, where he established his mother and brothers the fall of 1876. This was partly possible because of his abstemious habits, eschewing both tobacco and strong drink. He joined the Presbyterian church early in his adult life, but later converted to Methodism. After he attained his B.S. degree in mid-1880, Loomis married a teacher from Loudonville, Ohio, Miss Letitia E. Shire. 
== Further education and career == While teaching, Loomis continued his own studies and earned postgraduate degrees. While in Berea, he studied civil engineering and became the village engineer. He attained his B.S. at Baldwin University in 1880 under Professor Aaron Schuyler, and then his A.M. in 1886 and Ph.D. in 1888 from Wooster University, Ohio. In 1900 the Cleveland Law School awarded him the LL.B. degree, and he was admitted to the State Bar in June 1900. From 1880 to 1885 he served as principal, first at the Burbank Academy in Burbank, Ohio, then at the Richfield Township High School, in Summit County, Ohio. In 1885, he accepted the chair of mathematics in Baldwin University, succeeding Professor Aaron Schuyler, where he served for ten years. In 1895, he accepted the post of head of the Mathematics Department at West High School, Cleveland, Ohio, where he taught for 29 years, not retiring until 1923 as required by law for Ohio state teachers. In writing his own obituary he estimated that in his 50 years as a teacher he had ploughed "habit-formation grooves in the plastic brains" of over 4000 boys and girls and young men and women, and said that he prized the title of "Teacher" more than any other honour. == Written works == His written works included his thesis for his Ph.D. degree in metaphysics, "Theism the Result of Completed Investigation", a genealogy, "The Loomis Family in America", and "The Genealogy of Jacob Oberholtzer and His Descendants". He also wrote "The Teaching of Mathematics in High Schools" and "Original Investigation Or How to Attack an Exercise in Geometry". Possibly his best-known work, however, is "The Pythagorean Proposition", in which he collected, classified, and discussed 344 proofs. The book is still a work of reference. In 2021, the book was published in a revised and expanded version in German.
He also prepared in manuscript, ready for publication, books and articles estimated to number over one hundred, but it is not clear how many of them were ever printed. Titles that he mentioned include: "Recollections and Reflections of a Log-Cabin Boy", "This and That, from 50 Years of Experience as a Teacher", a genealogy of his family, a biography of Dr. Aaron Schuyler, and many articles on educational, mathematical and genealogical subjects. He held that true teaching, worth-while education and right living consist in ethical and moral habit formation to control one's social contributions throughout life, and that service should guide one's action rather than profits. At the time of his death he had a son, Elatus G. Loomis of Cleveland, Ohio, a daughter, Mrs. R. L. Lechner of Buenos Aires, and three grandchildren. == References ==
|
Wikipedia:Elizabeth Mansfield (mathematician)#0
|
Elizabeth Louise Mansfield is an Australian mathematician whose research includes the study of moving frames and conservation laws for discretisations of physical systems. She is a Fellow of the Institute of Mathematics and its Applications and served as one of its vice-presidents from January 2015 until December 2018. She was the first female full professor of mathematics at the University of Kent. She was one of the co-editors of the LMS Journal of Computation and Mathematics, a journal published by the London Mathematical Society from 1998 to 2015. She is on the editorial board of the Journal of the Foundations of Computational Mathematics. Mansfield obtained her Ph.D. from the University of Sydney in 1992. Her dissertation, Differential Gröbner Bases, was supervised by Edward Douglas Fackerell. At the University of Kent, she is a professor in the School of Mathematics, Statistics and Actuarial Science. Mansfield is one of the mathematicians for whom the Estevez–Mansfield–Clarkson equation is named. She is the author of a book on the method of moving frames, A Practical Guide to the Invariant Calculus (Cambridge Monographs on Applied and Computational Mathematics 26, Cambridge University Press, 2010). In 2018, she organized the Noether Celebration in London, a conference concerning the works of Emmy Noether, whom Mansfield cites as an inspiration for her own work. == References ==
|
Wikipedia:Elizabeth Williams (educationist)#0
|
Elizabeth Williams (née Larby, formerly Emily May; 29 January 1895 – 29 March 1986) was a British mathematician and educationist. == Life == Williams was born on 29 January 1895 in Pimlico, London. She studied in Chelsea and Forest Gate during her childhood, and at the age of 16 began a degree at Bedford College, University of London. At Bedford, one of her mentors was Alfred North Whitehead. She became a grammar school teacher, but had to stop teaching when she married in 1922. As a result, she founded her own school in North London with her husband, and then in 1930 (with the assistance of Percy Nunn, who had been a former tutor) she took a position in education at King's College London. She became a Commander of the Order of the British Empire in 1958, and was president of the Mathematical Association for 1965–1966. == Works == Oxford Junior Mathematics: Teacher's, Book 5 (1966) == References ==
|
Wikipedia:Elizaveta Litvinova#0
|
Elizaveta Fedorovna Litvinova (1845–1919?) was a Russian mathematician and pedagogue. She is the author of over 70 articles about mathematics education. == Early life and education == Born in 1845 in czarist Russia as Elizaveta Fedorovna Ivashkina, she completed her early education at a women's high school in Saint Petersburg. In 1866 Elizaveta married Viktor Litvinov, who, unlike Vladimir Kovalevsky (Sofia Kovalevskaya's husband), would not allow her to travel to Europe to study at the universities there. Thus, Litvinova started to study with Strannoliubskii, who had also privately tutored Kovalevskaya. In 1872, as soon as her husband died, Litvinova went to Zürich and enrolled at a polytechnic institute. In 1873 the Russian czar decreed that all Russian women studying in Zürich had to return to Russia or face the consequences. Litvinova was one of the few to ignore the decree and she remained to continue her studies, earning her baccalaureate in Zürich in 1876. She completed her doctoral degree in 1878 from the University of Bern, as a student of Ludwig Schläfli, becoming the first woman to earn a doctorate in mathematics in Switzerland. == Career and later life == When Litvinova returned to Russia, she was denied university appointments because she had defied the 1873 recall. She taught at a women's high school and supplemented her meager income by writing biographies of famous figures such as Kovalevskaya and Aristotle. After retiring, Litvinova moved to the countryside in 1917. Although no subsequent records have been found, it is believed that she must have died soon after, possibly in the Russian famine of 1921–1922 or earlier. == References == == External links == "Elizaveta Litvinova", Biographies of Women Mathematicians, Agnes Scott College This article incorporates material from Elizaveta Litvinova on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
Wikipedia:Elja Arjas#0
|
Elja Arjas (born February 9, 1943, in Tampere) is a Finnish mathematician and statistician. He is professor emeritus at the University of Helsinki. == Education and career == Arjas studied mathematics at the University of Helsinki and graduated with a bachelor's degree in philosophy in 1964. He graduated with a licentiate in mathematics and statistics in 1970 and received his doctorate in mathematics in 1972, under the supervision of Olli Lokki and Gustav Elfving. He was a research fellow at the Center for Operations Research and Econometrics at the Université catholique de Louvain until 1973, before moving back to Finland. Arjas was a professor of applied mathematics and statistics at the University of Oulu between 1975 and 1997. Between 1992 and 1997, he worked as an academy professor at the Academy of Finland, and from 1997 to 2009 as a part-time professor of biometrics at the University of Helsinki and as a research professor at the Institute of Health and Welfare. Arjas was a visiting professor at the University of British Columbia between 1978 and 1979, and a visiting professor at the University of Washington and the Fred Hutchinson Cancer Research Center from 1984 to 1985. == Honors and awards == Arjas was elected a fellow of the International Statistical Institute in 1977, a fellow of the Institute of Mathematical Statistics in 1982, and a member of the Finnish Academy of Sciences in 2001. He received an honorary doctorate from the University of Oulu in 2006. == References ==
|
Wikipedia:Ellen Eischen#0
|
Ellen Elizabeth Eischen (born 1979) is an American mathematician specializing in number theory, and especially in the analytic, geometric, and algebraic properties of automorphic forms and L-functions. She is a professor of mathematics at the University of Oregon and a von Neumann Fellow at the Institute for Advanced Study. Beyond mathematics research, Eischen has also popularized mathematical visualization and creativity through an exhibit at the Jordan Schnitzer Museum of Art that became "the museum’s most visited virtual exhibit of all time". == Education and career == Eischen graduated from Princeton University in 2003. She completed a Ph.D. at the University of Michigan in 2009, with the dissertation p {\displaystyle p} -adic Differential Operators on Automorphic Forms and Applications supervised by Christopher Skinner. She became a Ralph Boas Assistant Professor at Northwestern University from 2009 to 2012, and an assistant professor at the University of North Carolina at Chapel Hill from 2012 to 2015, before moving to the University of Oregon in 2015. She was promoted to associate professor in 2017 and full professor in 2023. She is a von Neumann Fellow at the Institute for Advanced Study for 2024–2025. == Recognition == Eischen was named as a 2024 Fellow of the Association for Women in Mathematics, "for her outstanding leadership in support of women in mathematics; for her sustained efforts to create new research opportunities for women at conferences, including at APAW, AWM, WIN, and MSRI/SLMath; and for her innovative approach to creating diverse communities in math with an AWM reading room and math art exhibits". She was elected as a Fellow of the American Mathematical Society, in the 2025 class of fellows. == References == == External links == Home page Creativity Counts: Possibilities Shaped by Constraints of Arithmetic, Jordan Schnitzer Museum of Art, 2021
|
Wikipedia:Ellina Grigorieva#0
|
Ellina Grigorieva is a Russian mathematician and mathematics educator known for her books on mathematical problem solving. She is a professor in the Texas Woman's University Department of Mathematics and Computer Science, and an expert on control theory and its applications to the spread of disease. == Education and career == Grigorieva was born in Moscow, and educated at Moscow State University. == Books == Grigorieva's problem-solving books include: Methods of Solving Number Theory Problems (Birkhäuser, 2018) Methods of Solving Sequence and Series Problems (Birkhäuser, 2016) Methods of Solving Nonstandard Problems (Birkhäuser, 2015) Methods of Solving Complex Geometry Problems (Birkhäuser, 2013) == References ==
|
Wikipedia:Elling Holst#0
|
Elling Bolt Holst (19 July 1849 – 2 September 1915) was a Norwegian mathematician, biographer and children's writer. == Early and personal life == Holst was born in Drammen, Norway. He was a son of bookseller Adolph Theodor Holst and Amalie Fredrikke Bergh. He was a grandson of merchant and politician, member of the Storting, Elling Mathias Holst (1785–1852). Holst enrolled as a student at the University of Christiania (now University of Oslo), his doctoral advisor was Sophus Lie, and he graduated as cand.real. in 1874. He continued his studies in Germany, where Felix Klein was among his teachers. He was appointed teacher at Aars og Voss skole in Christiania (now Oslo). His thesis Et par syntetiske Methoder, især til Brug ved Studiet af metriske Egenskaber was finished in 1882. == Career == Holst lectured in mathematics at the University of Oslo from 1894. Among his other mathematical works are his contribution from 1878, Om Poncelets betydning for geometrien, and several course books. He wrote biographies of several mathematicians, including Cato Maximilian Guldberg, Carl Anton Bjerknes, Sophus Lie and Niels Henrik Abel. Holst is particularly known for his children's books Norsk Billedbog for Børn, three collections from 1888, 1890 and 1903 (with illustrations by Eivind Nielsen). The first of these books has been called Norway's first national picture book (although a picture abc had been published previously, in 1876). Holst started collecting traditional poems for children, several of which were first published in Norwegian writing in these books. These poems, such as "Ride, ride ranke", "Bake kake søte", "Kjerringa med staven", "Hoppe! sa gåsa" and "Du og jeg og vi to", have had a constant popularity over many years, and Norsk Billedbog for Børn has been reissued several times. Also contemporary poetry was included in the books, such as some poems by Henrik Wergeland, "Kom bukken til gutten" by Bjørnstjerne Bjørnson, and "Blaamand" by Aasmund Olavsson Vinje. 
Among his other children's books are Julegodter for Børn from 1892, and A.B.C. for Skole og Hjem from 1893 (together with Anna Rogstad). He published the picture book Fra Sæteren in 1899, with illustrations by Lisbeth Bergh. == Personal life == He was married twice, first to Inger Skavlan (1852–99), and was a brother-in-law of Olaf Skavlan, Sigvald Skavlan, Aage Skavlan and Harald Skavlan. After her death, he married Marie Michelet (1872–1960), sister of Simon Michelet, in 1900. He was decorated Knight, First Class of the Order of St. Olav in 1902. == References ==
|
Wikipedia:Elliptic algebra#0
|
In algebra, an elliptic algebra is a certain regular algebra of Gelfand–Kirillov dimension three (a quantum polynomial ring in three variables) that corresponds to a cubic divisor in the projective space P2. If the cubic divisor happens to be an elliptic curve, then the algebra is called a Sklyanin algebra. The notion is studied in the context of noncommutative projective geometry. == References == Ajitabh, Kaushal (1994), Modules over regular algebras and quantum planes (PDF) (Ph.D. thesis)
|
Wikipedia:Elliptic boundary value problem#0
|
In the study of differential equations, a boundary-value problem is a differential equation subjected to constraints called boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions. Boundary value problems arise in several branches of physics, as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems. The analysis of these problems, in the linear case, involves the eigenfunctions of a differential operator. To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed. Among the earliest boundary value problems to be studied is the Dirichlet problem, of finding the harmonic functions (solutions to Laplace's equation); the solution was given by Dirichlet's principle. == Explanation == Boundary value problems are similar to initial value problems. A boundary value problem has conditions specified at the extremes ("boundaries") of the independent variable in the equation, whereas an initial value problem has all of the conditions specified at the same value of the independent variable (and that value is at the lower boundary of the domain, thus the term "initial" value). A boundary value is a data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
For example, if the independent variable is time over the domain [0,1], a boundary value problem would specify values for y(t) at both t = 0 and t = 1, whereas an initial value problem would specify a value of y(t) and y′(t) at time t = 0. Finding the temperature at all points of an iron bar with one end kept at absolute zero and the other end at the freezing point of water would be a boundary value problem. If the problem is dependent on both space and time, one could specify the value of the problem at a given point for all time or at a given time for all space. Concretely, an example of a boundary value problem (in one spatial dimension) is y″(x) + y(x) = 0, to be solved for the unknown function y(x) with the boundary conditions y(0) = 0, y(π/2) = 2. Without the boundary conditions, the general solution to this equation is y(x) = A sin(x) + B cos(x). From the boundary condition y(0) = 0 one obtains 0 = A·0 + B·1, which implies that B = 0. From the boundary condition y(π/2) = 2 one finds 2 = A·1, and so A = 2. One sees that imposing boundary conditions allowed one to determine a unique solution, which in this case is y(x) = 2 sin(x). == Types of boundary value problems == === Boundary value conditions === A boundary condition which specifies the value of the function itself is a Dirichlet boundary condition, or first-type boundary condition. For example, if one end of an iron rod is held at absolute zero, then the value of the problem would be known at that point in space.
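The worked example above can be cross-checked numerically. The following is a minimal finite-difference sketch (the grid size and the comparison against the exact solution y(x) = 2 sin(x) are illustrative choices, not part of the article):

```python
import numpy as np

# Solve y'' + y = 0 on [0, pi/2] with y(0) = 0, y(pi/2) = 2
# by a second-order central finite-difference scheme.
n = 200                                 # number of interior grid points (arbitrary choice)
x = np.linspace(0.0, np.pi / 2, n + 2)  # grid including both boundary points
h = x[1] - x[0]

# Interior equation at node i: (y[i-1] - 2*y[i] + y[i+1]) / h^2 + y[i] = 0
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = -2.0 / h**2 + 1.0
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = 1.0 / h**2
# The known boundary values move to the right-hand side.
b[0] -= 0.0 / h**2    # y(0) = 0
b[-1] -= 2.0 / h**2   # y(pi/2) = 2

y_interior = np.linalg.solve(A, b)
y = np.concatenate(([0.0], y_interior, [2.0]))

# Compare with the exact solution derived above, y(x) = 2 sin(x).
err = np.max(np.abs(y - 2.0 * np.sin(x)))
print(err)  # O(h^2) discretization error
```

The boundary conditions enter only through the right-hand side, mirroring how they single out A = 2, B = 0 in the analytic derivation.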
A boundary condition which specifies the value of the normal derivative of the function is a Neumann boundary condition, or second-type boundary condition. For example, if there is a heater at one end of an iron rod, then energy would be added at a constant rate but the actual temperature would not be known. If the boundary has the form of a curve or surface that gives a value to the normal derivative and the variable itself then it is a Cauchy boundary condition. ==== Examples ==== Summary of boundary conditions for the unknown function y, constants c_0 and c_1 specified by the boundary conditions, and known scalar functions f and g specified by the boundary conditions. === Differential operators === Aside from the boundary condition, boundary value problems are also classified according to the type of differential operator involved. For an elliptic operator, one discusses elliptic boundary value problems. For a hyperbolic operator, one discusses hyperbolic boundary value problems. These categories are further subdivided into linear and various nonlinear types. == Applications == === Electromagnetic potential === In electrostatics, a common problem is to find a function which describes the electric potential of a given region. If the region does not contain charge, the potential must be a solution to Laplace's equation (a so-called harmonic function). The boundary conditions in this case are the interface conditions for electromagnetic fields. If there is no current density in the region, it is also possible to define a magnetic scalar potential using a similar procedure. == See also == == Notes == == References == A. D. Polyanin and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003. ISBN 1-58488-297-2. A. D.
Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9. == External links == "Boundary value problems in potential theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Boundary value problem, complex-variable methods", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Linear Partial Differential Equations: Exact Solutions and Boundary Value Problems at EqWorld: The World of Mathematical Equations. "Boundary value problem". Scholarpedia.
|
Wikipedia:Elliptic operator#0
|
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions. Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations. == Definitions == Let L be a linear differential operator of order m on a domain Ω in R^n given by Lu = ∑_{|α|≤m} a_α(x) ∂^α u, where α = (α_1, …, α_n) denotes a multi-index, and ∂^α u = ∂_1^{α_1} ⋯ ∂_n^{α_n} u denotes the partial derivative of order α_i in x_i. Then L is called elliptic if for every x in Ω and every non-zero ξ in R^n, ∑_{|α|=m} a_α(x) ξ^α ≠ 0, where ξ^α = ξ_1^{α_1} ⋯ ξ_n^{α_n}. In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order m = 2k: (−1)^k ∑_{|α|=2k} a_α(x) ξ^α > C |ξ|^{2k}, where C is a positive constant.
Note that ellipticity only depends on the highest-order terms. A nonlinear operator L(u) = F(x, u, (∂^α u)_{|α|≤m}) is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to u and its derivatives about any point is an elliptic operator. Example 1: The negative of the Laplacian in R^d, given by −Δu = −∑_{i=1}^d ∂_i² u, is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If ρ is the charge density within some region Ω, the potential Φ must satisfy the equation −ΔΦ = 4πρ. Example 2: Given a matrix-valued function A(x) which is uniformly positive definite for every x, having components a^{ij}, the operator Lu = −∂_i(a^{ij}(x) ∂_j u) + b^j(x) ∂_j u + cu is elliptic. This is the most general form of a second-order divergence-form linear elliptic differential operator. The Laplace operator is obtained by taking A = I. These operators also occur in electrostatics in polarized media. Example 3: For p a non-negative number, the p-Laplacian is a nonlinear elliptic operator defined by L(u) = −∑_{i=1}^d ∂_i(|∇u|^{p−2} ∂_i u). A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by τ_ij = B (∑_{k,l=1}^3 (∂_l u_k)²)^{−1/3} · (1/2)(∂_j u_i + ∂_i u_j) for some constant B.
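For a divergence-form operator as in Example 2, uniform ellipticity amounts to a uniform lower bound on the eigenvalues of the symmetric coefficient matrix A(x) over the domain. A small numerical sketch (the coefficient field coeff below is a hypothetical example chosen for illustration, not taken from the article):

```python
import numpy as np

def coeff(x):
    """A hypothetical 2x2 symmetric coefficient matrix A(x) varying with x."""
    return np.array([[2.0 + np.sin(x[0]) ** 2, 0.5],
                     [0.5, 1.0 + x[1] ** 2]])

def ellipticity_constant(coeff, points):
    # At each sample point the best constant in xi . A(x) xi >= c |xi|^2
    # is the smallest eigenvalue of A(x); take the minimum over the samples.
    return min(np.linalg.eigvalsh(coeff(p)).min() for p in points)

# Sample the unit square on a grid (resolution is an arbitrary choice).
pts = [np.array([x, y]) for x in np.linspace(0, 1, 11)
                        for y in np.linspace(0, 1, 11)]
c = ellipticity_constant(coeff, pts)
print(c)   # positive, so the sampled operator is uniformly elliptic there
```

Sampling only checks ellipticity at finitely many points, of course; for a genuine proof one bounds the eigenvalues of A(x) analytically.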
The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system ∑_{j=1}^3 ∂_j τ_ij + ρ g_i − ∂_i p = Q, where ρ is the ice density, g is the gravitational acceleration vector, p is the pressure and Q is a forcing term. == Elliptic regularity theorems == Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, the Lax–Milgram lemma and the Fredholm alternative, gives sufficient conditions for a weak solution u to exist in the Sobolev space H^k. For example, for a second-order elliptic operator as in Example 2: There is a number γ > 0 such that for each μ > γ and each f ∈ L^2(U), there exists a unique solution u ∈ H^1_0(U) of the boundary value problem Lu + μu = f in U, u = 0 on ∂U; this is based on the Lax–Milgram lemma. Either (a) for any f ∈ L^2(U), the problem Lu = f in U, u = 0 on ∂U (1) has a unique solution, or (b) the problem Lu = 0 in U, u = 0 on ∂U has a solution u ≢ 0; this is based on the theory of compact operators and the Fredholm alternative. This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense.
The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u. For L as in Example 2: Interior regularity: If m is a natural number, a^{ij}, b^j, c ∈ C^{m+1}(U) and f ∈ H^m(U) (2), and u ∈ H^1_0(U) is a weak solution to (1), then for any open set V in U with compact closure, ‖u‖_{H^{m+2}(V)} ≤ C(‖f‖_{H^m(U)} + ‖u‖_{L^2(U)}) (3), where C depends on U, V, L and m; consequently u ∈ H^{m+2}_loc(U). This also holds when m is infinity, by the Sobolev embedding theorem. Boundary regularity: (2), together with the assumption that ∂U is C^{m+2}, implies that (3) still holds after replacing V with U, i.e. u ∈ H^{m+2}(U); this also holds when m is infinity. Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0. As an application, suppose a function f satisfies the Cauchy–Riemann equations. Since the Cauchy–Riemann equations form an elliptic operator, it follows that f is smooth. == Properties == For L as in Example 2 on U, an open domain with C^1 boundary, there is a number γ > 0 such that for each μ > γ, L + μI : H^1_0(U) → H^1_0(U) satisfies the assumptions of the Lax–Milgram lemma.
Invertibility: For each μ > γ, L + μI : L^2(U) → L^2(U) admits a compact inverse. Eigenvalues and eigenvectors: If A is symmetric and b^i and c are zero, then (1) the eigenvalues of L are real, positive, countable and unbounded, and (2) there is an orthonormal basis of L^2(U) composed of eigenvectors of L. (See Spectral theorem.) Generates a semigroup on L^2(U): −L generates a semigroup {S(t); t ≥ 0} of bounded linear operators on L^2(U) such that (d/dt) S(t)u_0 = −L S(t)u_0 and ‖S(t)‖ ≤ e^{γt} in the norm of L^2(U), for every u_0 ∈ L^2(U), by the Hille–Yosida theorem. == General definition == Let D be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol σ_ξ(D) with respect to a one-form ξ. (Basically, what we are doing is replacing the highest-order covariant derivatives ∇ by vector fields ξ.) We say D is weakly elliptic if σ_ξ(D) is a linear isomorphism for every non-zero ξ. We say D is (uniformly) strongly elliptic if for some constant c > 0, ([σ_ξ(D)](v), v) ≥ c‖v‖² for all ‖ξ‖ = 1 and all v. The definition of ellipticity in the previous part of the article is strong ellipticity. Here (·, ·) is an inner product. Notice that the ξ are covector fields or one-forms, but the v are elements of the vector bundle upon which D acts.
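The spectral properties listed above (real, positive, countable, unbounded eigenvalues) can be illustrated on the simplest symmetric case, −d²/dx² on (0, 1) with Dirichlet boundary conditions, via a standard finite-difference discretization (the grid size is an arbitrary choice; this is a sketch, not a method from the article):

```python
import numpy as np

# Discretize -u'' on (0, 1) with u(0) = u(1) = 0 at n interior points.
n = 400
h = 1.0 / (n + 1)
# Symmetric tridiagonal matrix for the second difference, scaled by 1/h^2.
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigs = np.linalg.eigvalsh(L)       # real, since L is symmetric; sorted ascending
assert np.all(eigs > 0)            # positive
assert np.all(np.diff(eigs) > 0)   # a strictly increasing (hence countable) sequence
# The continuum eigenvalues are (k*pi)^2, k = 1, 2, ...; the lowest discrete
# eigenvalues converge to them as h -> 0.
print(eigs[:3], (np.pi * np.arange(1, 4)) ** 2)
```

The discrete spectrum is bounded (a finite matrix has finitely many eigenvalues); unboundedness of the continuum spectrum shows up as the largest discrete eigenvalues growing like 1/h² under refinement.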
The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that D needs to be of even order for strong ellipticity to even be an option: otherwise, just consider plugging in both ξ and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator, can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic. Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, we need strong ellipticity for the maximum principle, and to guarantee that the eigenvalues are discrete, and their only limit point is infinity. == See also == Sobolev space Hypoelliptic operator Elliptic partial differential equation Hyperbolic partial differential equation Parabolic partial differential equation Hopf maximum principle Elliptic complex Ultrahyperbolic wave equation Semi-elliptic operator Weyl's lemma == Notes == == References == Evans, L. C. (2010) [1998], Partial differential equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3, MR 2597943 Review: Rauch, J. (2000). "Partial differential equations, by L. C. Evans" (PDF). Journal of the American Mathematical Society. 37 (3): 363–367. doi:10.1090/s0273-0979-00-00868-5. Gilbarg, D.; Trudinger, N. S. (1983) [1977], Elliptic partial differential equations of second order, Grundlehren der Mathematischen Wissenschaften, vol. 224 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-13025-3, MR 0737190 Shubin, M. A. (2001) [1994], "Elliptic operator", Encyclopedia of Mathematics, EMS Press == External links == Linear Elliptic Equations at EqWorld: The World of Mathematical Equations.
Nonlinear Elliptic Equations at EqWorld: The World of Mathematical Equations.
|
Wikipedia:Elod Macskasy#0
|
Elod Macskasy (Hungarian: Macskásy Előd) (7 April 1919 – 21 January 1990) was a Hungarian-Canadian chess master. == Early life and education == Macskasy was born in Arad, which at the time was part of the Kingdom of Hungary, but was shortly afterwards ceded to Romania by the Treaty of Trianon. He completed his early schooling there, and at age 16 won the city's chess championship. He also competed for Hungary in swimming at the 1936 Berlin Olympics. He studied mathematics in Budapest from 1937 to 1942, at Pázmány Péter University, earning his doctorate. During this time, he competed with some success in team and student chess tournaments. Macskasy scored 1/1 on the first reserve board for Hungary at the 2nd Balkaniad, Sofia 1947; his team won the gold medal. In 1947, he gained the Hungarian National Master title following his performance in the 1947 Hungarian championship. Perhaps his best Hungarian result occurred in 1952, when he won a Master tournament ahead of Árpád Vajda, István Bilek and Károly Honfi. Macskasy co-authored a book on the 1952 Hungarian championship. == Life in Canada == Following the Hungarian Revolution of 1956, he emigrated to Canada, where he secured a position as professor of mathematics at the University of British Columbia in Vancouver. He was a surprise winner of the 1958 Canadian Open Championship, at Winnipeg, ahead of Larry Evans, with 9/10. Macskasy won the British Columbia Championship for five straight years, from 1958 to 1962, and shared this title in 1967. He continued to play often in this event, generally scoring well, into the late 1980s. In 1961, he played an eight-game training match with Abe Yanofsky, Canada's top player, in Vancouver, losing +2 =1 -5; the match helped Yanofsky to prepare for the 1962 Stockholm Interzonal. Macskasy competed several times in the Canadian Chess Championship, generally with good results. At Brockville 1961, he tied for 5-6th, with 6/11. At Winnipeg 1963, he was third, with 10/15. 
At Vancouver 1965, he finished tied 4-5th, with 6½/11. At Toronto 1972, he scored 8½/17 for a tied 12-13th. At Calgary 1975, at age 56, he struggled with 5/15 for a shared 12-13th. In the early 1960s, he had a Canadian Chess Federation rating of 2400, indicating a player of International Master strength; however, he was never awarded the FIDE title. Macskasy represented Canada twice at Chess Olympiads: 1964 at Tel Aviv on board 4: 5/13 (+3 =4 -6); 1968 at Lugano on board 3: 6½/13 (+4 =5 -4). He remained a strong player throughout his life, maintaining a master's rating of over 2200 until his final tournament, the 1989 Paul Keres Memorial in Vancouver. He co-edited the magazine Canadian Chess Chat for many years from the late 1950s. He was a notable chess mentor; in the late 1960s he coached a group of young British Columbia masters that included Robert Zuk, Bruce Harper, Jonathan Berry, Peter Biyiasas, and Duncan Suttles. Macskasy died unexpectedly on 21 January 1990. == References == == External links == Elod Macskasy player profile and games at Chessgames.com Elod Macskasy Chessmetrics player profile Elod Macskasy Canadian chess - Biographies
|
Wikipedia:Ely Merzbach#0
|
Ely Yissachar Merzbach (Hebrew: עלי יששכר מרצבך; born 11 February 1950) is an Israeli mathematician and emeritus professor at Bar-Ilan University's Department of Mathematics and the Gonda Brain Research Center. == Biography == Ely Merzbach was born in 1950 in Paris, where he attended École Yabné. He immigrated to Israel at the age of 17, studying for a year at Yeshivat Be'er Ya'akov and then enlisting in the Nahal Brigade of the Israel Defense Forces. He obtained a B.Sc. in mathematics and statistics and an M.Sc. in mathematics from the Hebrew University of Jerusalem, and completed his doctoral studies in 1979 at Ben-Gurion University of the Negev. == Academic career == After working as a postdoctoral fellow at Paris 6 and the École Polytechnique, Merzbach joined the faculty at Bar-Ilan University in 1980, becoming full professor in 1993. His research focuses on point process theory, measure theory, stochastic geometry, and their applications. He served as head of Bar-Ilan's Department of Mathematics and Computer Science from 1991 and as academic head of Ariel University from 1996 to 1997, and was elected dean of the Faculty of Exact Sciences at Bar-Ilan in 1997. == References == == External links == Ely Merzbach at the Mathematics Genealogy Project
|
Wikipedia:Elza Furtado Gomide#0
|
Elza Furtado Gomide (August 20, 1925 – October 26, 2013) was a Brazilian mathematician and the first woman to receive a doctorate in mathematics from the University of São Paulo, in 1950, and the second in Brazil. Gomide was involved in the creation of the Society of Mathematics of São Paulo and was elected head of the department of mathematics of the University of São Paulo in 1968. == See also == Marília Chaves Peixoto, another Brazilian mathematician who earned her doctorate in Brazil in 1948. == References ==
|
Wikipedia:Eléna Wexler-Kreindler#0
|
Eléna Wexler-Kreindler (15 October 1931 – August 1992) was a Romanian mathematician. She spent most of her professional career in France, where she specialized in modern algebra and studied Ore extensions, the theory of filtrations of rings, and algebraic microlocalisation. == Career == Kreindler was born on 15 October 1931 in Brăila, Romania, into a Jewish family. In 1951 she obtained a fellowship and spent the next four years in the USSR studying mathematics at the Ural State University in Sverdlovsk (now Yekaterinburg). In 1955, she completed a master's thesis on "Multiplicative Lattices with Additive Basis" under the supervision of Petr Grigor'evich Kontorovich, before returning to Bucharest to join the faculty of Mathematics at the Polytechnic Institute of Bucharest. Alongside her duties as assistant professor, she continued her research in functional analysis under the guidance of Grigore Moisil, and earned a Ph.D. in mathematics with a thesis on the "Theory of Pseudolinear Operators". In 1969 she was promoted to associate professor. Kreindler married fellow mathematician Dinu Wexler and changed her name to Eléna Wexler-Kreindler. She left Romania with her husband in 1972 to settle in France. She had to restart her professional career in Paris, first as an untenured and later a tenured associate professor at the Pierre and Marie Curie University. She was eventually promoted to associate professor in 1989. Her work in France was devoted to problems in modern algebra, such as Ore extensions, the theory of filtrations of rings, and algebraic microlocalisation. With Marie José Bertin she published a collection of solved problems in algebra and a companion to Marie-Paule Malliavin's book "Algèbre commutative: applications en géométrie et théorie des nombres". == References ==
|
Wikipedia:Emanuel Derman#0
|
Emanuel Derman (born 1945) is a South African-born academic, businessman and writer. He is best known as a quantitative analyst, and author of the book My Life as a Quant: Reflections on Physics and Finance. He is a co-author of the Black–Derman–Toy model, one of the first interest-rate models, and of the Derman–Kani local volatility (implied tree) model, a model consistent with the volatility smile. Derman, who first came to the U.S. at age 21, in 1966, is currently a professor at Columbia University and Director of its program in financial engineering. Until recently he was also the Head of Risk and a partner at KKR Prisma Capital Partners, a fund of funds. His book My Life as a Quant: Reflections on Physics and Finance, published by Wiley in September 2004, was one of Business Week's top ten books of the year for 2004. In 2011, he published Models.Behaving.Badly, a book contrasting financial models with the theories of hard science, and also containing some autobiographical material. == Biography == Born to a South African Jewish family, Derman obtained a B.Sc. (Hons) at the University of Cape Town, and received a Ph.D. in theoretical physics from Columbia in 1973, where he wrote a thesis that proposed a test for a weak neutral current in electron-hadron scattering. This experiment was carried out at SLAC in 1978 by a team led by Charles Prescott and Richard Taylor, and confirmed the Weinberg–Salam model. Between 1973 and 1980 he did research in theoretical particle physics at the University of Pennsylvania, the University of Oxford, Rockefeller University and the University of Colorado at Boulder. From 1980 to 1985 he worked at AT&T Bell Laboratories, where he developed computer languages for business modeling applications. In 1985 Derman joined Goldman Sachs' fixed income division where he was one of the co-developers of the Black–Derman–Toy interest-rate model. He left Goldman Sachs at the end of 1988 to take a position at Salomon Brothers Inc. 
as head of Adjustable Rate Mortgage Research in the Bond Portfolio Analysis group. Rehired by Goldman Sachs, from 1990 to 2000 he led the Quantitative Strategies group in the Equities division, which pioneered the study of local volatility models and the volatility smile. He was appointed a managing director of Goldman Sachs in 1997. In 2000, he became head of the firm’s Quantitative Risk Strategies group. He retired from Goldman Sachs in 2002 and took a position at Columbia University and Prisma Capital Partners (acquired by KKR). Derman was named the IAFE/SunGard Financial Engineer of the Year 2000, and was elected to the Risk Hall of Fame in 2002. He is the author of numerous articles on quantitative finance on the topics of volatility and the nature of financial modeling. Since 1995, Derman has written many articles pointing out the essential difference between models in physics and models in finance. Good models in physics aim to predict the future accurately from the present, or to predict new previously unobserved phenomena; models in finance are used mostly to estimate the values of illiquid securities from liquid ones. Models in physics deal with objective variables; models in finance deal with subjective ones. "In physics there may one day be a Theory of Everything; in finance and the social sciences, you’re lucky if there is a usable theory of anything." Derman together with Paul Wilmott wrote the Financial Modelers' Manifesto, a set of principles for doing responsible financial modeling. From February 2011 to July 2012, Derman wrote a financial blog for Reuters. Beginning in September 2012, for one year, Derman wrote a regular column for the Frankfurter Allgemeine Zeitung. == Models.Behaving.Badly == In 2011, Derman published a new book titled Models.Behaving.Badly: Why Confusing Illusion With Reality Can Lead to Disaster, on Wall Street and in Life. 
In that work he decries the breakdown of capitalism as a model during the bailouts characterizing the 2008 financial crisis and calls for a return to principles, to the notion that if you want to take a chance on the upside, you have also taken a chance on the downside. More generally, he analyzes three ways of understanding the behavior of the world: models, theory and intuition. Models, he argues, are merely metaphors that compare something you would like to understand with something you already do. Models provide relative knowledge. Theories, in contrast, are attempts to understand the world in absolute terms; while models stand on someone else's legs, theories, like Newton's or Maxwell's or Spinoza's, stand on their own. Intuition, the deepest kind of knowledge, comes only occasionally, after long and hard work, and is a merging of the understander with the understood. His book elaborates on these ideas with examples from the theories of physics and philosophy, and the models of finance. == The Volatility Smile == In 2016, Derman and Michael Miller published The Volatility Smile, a textbook about the principles of financial modeling, option valuation, and the variety of models that can account for the volatility smile. == Brief Hours and Weeks: My Life as a Capetonian == In 2025 Derman published Brief Hours and Weeks, a memoir about his youth in the 1940s, 50s, and 60s in a Polish-Jewish off-the-boat immigrant community in Cape Town, South Africa. J. M. Coetzee wrote about it: "Brief Hours and Weeks awakes many memories of Cape Town, the city of Emanuel Derman's youth and mine, as it was half a century ago. The chapter on the lonely Mrs Gold is a triumph." 
- J M Coetzee, Nobel Laureate == See also == All models are wrong Financial engineering Mathematical finance Mathiness == References == == External links == Emanuel Derman: Writings on Quantitative Finance – Personal website His profile at Department of Industrial Engineering and Operations Research, Columbia University Derman's Blog (earlier Blogs on wilmott.com) Emanuel Derman at the Mathematics Genealogy Project Roberts, Russ (12 March 2012). "Derman on Theories, Models, and Science". EconTalk. Library of Economics and Liberty. The Volatility Smile Emanuel Derman's research papers at the Social Science Research Network
|
Wikipedia:Emanuel Lasker#0
|
Emanuel Lasker (German pronunciation: [eˈmaːnuɛl ˈlaskɐ] ; December 24, 1868 – January 11, 1941) was a German chess player, mathematician, and philosopher. He was the second World Chess Champion, holding the title for 27 years, from 1894 to 1921, the longest reign of any officially recognised World Chess Champion; he won six World Chess Championship matches. In his prime, Lasker was one of the most dominant champions. His contemporaries used to say that Lasker used a "psychological" approach to the game, and even that he sometimes deliberately played inferior moves to confuse opponents. Recent analysis, however, indicates that he was ahead of his time and used a more flexible approach than his contemporaries, which mystified many of them. Lasker knew contemporary analyses of openings well but disagreed with many of them. He published chess magazines and five chess books, but later players and commentators found it difficult to draw lessons from his methods. Lasker made contributions to the development of other games. He was a first-class contract bridge player and wrote about bridge, Go, and his own invention, Lasca. His books about games presented a problem that is still considered notable in the mathematical analysis of card games. Lasker was a research mathematician who was known for his contributions to commutative algebra, which included proving the primary decomposition of the ideals of polynomial rings. His philosophical works and a drama that he co-wrote, however, received little attention. == Life and career == === Early years 1868–1894 === Emanuel Lasker was born on December 24, 1868, at Berlinchen in Neumark (now Barlinek in Poland), the son of a Jewish cantor. At the age of eleven he was sent to study mathematics in Berlin, where he lived with his brother Berthold, eight years his senior, who taught him how to play chess. Berthold was among the world's top ten players in the early 1890s. 
To supplement their income, Emanuel Lasker played chess and card games for small stakes, especially at the Café Kaiserhof. Lasker won the Café Kaiserhof's annual Winter tournament 1888/89 and the Hauptturnier A ("second division" tournament) at the sixth DSB Congress (German Chess Federation's congress), held in Breslau. Winning the Hauptturnier earned Lasker the title of "master". The candidates were divided into two groups of ten. The top four in each group competed in a final. Lasker won his section, with 2½ points more than his nearest rival. However, scores were reset to 0 for the final. With two rounds to go, Lasker trailed the leader, Viennese amateur von Feierfeil, by 1½ points. Lasker won both of his final games, while von Feierfeil lost in the penultimate round (being mated in 121 moves after the position was reconstructed incorrectly following an adjournment) and drew in the last round. The two players were now tied. Lasker won a playoff and garnered the master title. This enabled him to play in master-level tournaments and thus launched his chess career. Lasker finished second in an international tournament at Amsterdam, ahead of Mason and Gunsberg. In spring 1892, he won two tournaments in London, the second and stronger of these without losing a game. At New York City in 1893, he won all thirteen games, one of the few times in chess history that a player has achieved a perfect score in a significant tournament. His record in matches was equally impressive: At Berlin in 1890 he drew a short playoff match against his brother Berthold and won all his other matches from 1889 to 1893, mostly against top-class opponents: Curt von Bardeleben (1889), Jacques Mieses (1889), Henry Edward Bird (1890), Berthold Englisch (1890), Joseph Henry Blackburne (1892), Jackson Showalter (1892–93) and Celso Golmayo Zúpide (1893). In 1892 Lasker founded the first of his chess magazines, The London Chess Fortnightly, which was published from August 15, 1892, to July 30, 1893. 
In the second quarter of 1893, there was a gap of ten weeks between issues, allegedly because of problems with the printer. Shortly after its last issue, Lasker traveled to the US, where he spent the next two years. Lasker challenged Siegbert Tarrasch, who had won three consecutive strong international tournaments (Breslau 1889, Manchester 1890, and Dresden 1892), to a match. Tarrasch haughtily declined, stating that Lasker should first prove his mettle by attempting to win one or two major international events. === Chess competition 1894–1918 === ==== Matches against Steinitz ==== Rebuffed by Tarrasch, Lasker challenged the reigning World Champion, Wilhelm Steinitz, to a match for the title. Initially Lasker wanted to play for US$5,000 a side, and a match was agreed to at stakes of $3,000 a side, but Steinitz agreed to a series of reductions when Lasker found it difficult to raise the money. The final figure was $2,000, which was less than for some of Steinitz's earlier matches (the final combined stake of $4,000 would be equivalent to $150,000 in 2024). The match was played in 1894 at venues in New York, Philadelphia, and Montreal. Steinitz had previously declared he would win without doubt, so it came as a shock when Lasker won the first game. Steinitz won the second game and maintained the balance through the sixth. However, Lasker won all the games from the seventh to the eleventh, and Steinitz asked for a week's rest. When the match resumed, Steinitz looked in better shape and won the 13th and 14th games. Lasker struck back in the 15th and 16th, and Steinitz did not compensate for his losses in the middle of the match. Hence Lasker won convincingly with ten wins, five losses, and four draws. On May 26, Lasker thus became the second formally recognized World Chess Champion and confirmed his title by beating Steinitz even more convincingly in their rematch in 1896–97 (ten wins, two losses, and five draws). 
==== Tournament successes ==== Influential players and journalists belittled the 1894 match both before and after it took place. Lasker's difficulty in getting backing may have been caused by hostile pre-match comments from Gunsberg and Leopold Hoffer, who had long been a bitter enemy of Steinitz. One of the complaints was that Lasker had never played the other two members of the top four, Siegbert Tarrasch and Mikhail Chigorin – although Tarrasch had rejected a challenge from Lasker in 1892, publicly telling him to go and win an international tournament first. After the match some commentators, notably Tarrasch, said Lasker had won mainly because Steinitz was old (58 in 1894). Emanuel Lasker answered these criticisms by creating an even more impressive playing record. He came third at Hastings 1895 (where he may have been suffering from the after-effects of typhoid fever), behind Pillsbury and Chigorin but ahead of Tarrasch and Steinitz, and then won first prizes at very strong tournaments in St Petersburg 1895–96 (an elite, 4-player tournament, ahead of Steinitz, Pillsbury and Chigorin), Nuremberg (1896), London (1899) and Paris (1900); tied for second at Cambridge Springs 1904, and tied for first at the Chigorin Memorial in St Petersburg 1909. Later, at St Petersburg (1914), he overcame a 1½-point deficit to finish ahead of the rising stars, Capablanca and Alexander Alekhine, who later became the next two World Champions. For decades chess writers have reported that Tsar Nicholas II of Russia conferred the title of Grandmaster of Chess upon each of the five finalists at St Petersburg 1914 (Lasker, Capablanca, Alekhine, Tarrasch and Marshall), but chess historian Edward Winter has questioned this, stating that the earliest known sources supporting this story were published in 1940 and 1942. 
==== Matches against Marshall and Tarrasch ==== Lasker's match record was as impressive between his 1896–97 rematch with Steinitz and 1914: he won all but one of his normal matches, and three of those were convincing defenses of his title. In 1906 Lasker and Géza Maróczy agreed to terms for a World Championship, but the arrangements could not be finalised, and the match never took place. Lasker's first world championship match since 1897 was against Frank Marshall in the World Chess Championship 1907. Despite his aggressive style, Marshall could not win a single game, losing eight and drawing seven (final score: 11½–3½). Lasker then played Tarrasch in the World Chess Championship 1908, first at Düsseldorf then at Munich. Tarrasch firmly believed the game of chess was governed by a precise set of principles. For him the strength of a chess move was in its logic, not in its efficiency. Because of his stubborn principles he considered Lasker as a coffeehouse player who won his games only thanks to dubious tricks, while Lasker mocked the arrogance of Tarrasch who, in his opinion, shone more in salons than at the chessboard. At the opening ceremony, Tarrasch refused to talk to Lasker, only saying: "Mr. Lasker, I have only three words to say to you: check and mate!" Lasker gave a brilliant answer on the chessboard, winning four of the first five games, and playing a type of chess Tarrasch could not understand. For example, in the second game after 19 moves arose a situation (diagram) in which Lasker was a pawn down, with a bad bishop and doubled pawns. At this point it appeared Tarrasch was winning, but 20 moves later he was forced to resign. Lasker eventually won by 10½–5½ (eight wins, five draws, and three losses). Tarrasch claimed the wet weather was the cause of his defeat. ==== Matches against Janowski ==== In 1909 Lasker drew a short match (two wins, two losses) against Dawid Janowski, an all-out attacking Polish expatriate. 
Several months later they played a longer match in Paris, and chess historians still debate whether this was for the World Chess Championship. Understanding Janowski's style, Lasker chose to defend solidly so that Janowski unleashed his attacks too soon and left himself vulnerable. Lasker easily won the match 8–2 (seven wins, two draws, one loss). This victory was convincing for everyone but Janowski, who asked for a revenge match. Lasker accepted and they played a World Chess Championship match in Berlin in November–December 1910. Lasker crushed his opponent, winning 9½–1½ (eight wins, three draws, no losses). Janowski did not understand Lasker's moves, and after his first three losses he declared to Edward Lasker, "Your homonym plays so stupidly that I cannot even look at the chessboard when he thinks. I am afraid I will not do anything good in this match." ==== Match against Schlechter ==== Between his two matches against Janowski, Lasker arranged another World Chess Championship in January–February 1910 against Carl Schlechter. Schlechter was a modest gentleman, who was generally unlikely to win the major chess tournaments by his peaceful inclination, his lack of aggressiveness and his willingness to accept most draw offers from his opponents (about 80% of his games finished by a draw). At the beginning, Lasker tried to attack but Schlechter had no difficulty defending, so that the first four games finished in draws. In the fifth game Lasker had a big advantage, but committed a blunder that cost him the game. Hence at the middle of the match Schlechter was one point ahead. The next four games were drawn, despite fierce play from both players. In the sixth Schlechter managed to draw a game being a pawn down. In the seventh Lasker nearly lost because of a beautiful exchange sacrifice from Schlechter. In the ninth only a blunder from Lasker allowed Schlechter to draw a lost ending. The score before the last game was thus 5–4 for Schlechter. 
In the tenth game Schlechter tried to win tactically and took a big advantage, but he missed a clear win at the 35th move, continued to take increasing risks and finished by losing. Hence the match was a draw and Lasker remained World Champion. It has been speculated that Schlechter played unusually risky chess in the tenth game because the terms of the match required him to win by a margin of two games. But according to Isaak and Vladimir Linder, this was unlikely. The match was originally to be a 30-game affair and Schlechter would have to win by two games. But they note that according to the Austrian chess historian Michael Ehn, Lasker agreed to forgo the plus two provision in view of the match being subsequently reduced to only 10 games. For proof Ehn quoted Schlechter's comment printed in Allgemeine Sportzeitung (ASZ) of December 9, 1909 "There will be ten games in all. The winner on points will receive the title of world champion. If the points are equal, the decision will be made by the arbiter." ==== Abandoned challenges ==== In 1911 Lasker received a challenge for a world title match against the rising star José Raúl Capablanca. Lasker was unwilling to play the traditional "first to win ten games" type of match in the semi-tropical conditions of Havana, especially as drawn games were becoming more frequent and the match might last for over six months. 
He therefore made a counter-proposal: if neither player had a lead of at least two games by the end of the match, it should be considered a draw; the match should be limited to the best of thirty games, counting draws; except that if either player won six games and led by at least two games before thirty games were completed, he should be declared the winner; the champion should decide the venue and stakes, and should have the exclusive right to publish the games; the challenger should deposit a forfeit of US$2,000 (equivalent to over $250,000 in 2020 values); the time limit should be twelve moves per hour; play should be limited to two sessions of 2½ hours each per day, five days a week. Capablanca objected to the time limit, the short playing times, the thirty-game limit, and especially the requirement that he must win by two games to claim the title, which he regarded as unfair. Lasker took offence at the terms in which Capablanca criticized the two-game lead condition and broke off negotiations, and until 1914 Lasker and Capablanca were not on speaking terms. However, at the 1914 St. Petersburg tournament, Capablanca proposed a set of rules for the conduct of World Championship matches, which were accepted by all the leading players, including Lasker. Late in 1912 Lasker entered into negotiations for a world title match with Akiba Rubinstein, whose tournament record for the previous few years had been on a par with Lasker's and a little ahead of Capablanca's. The two players agreed to play a match if Rubinstein could raise the funds, but Rubinstein had few rich friends to back him and the match was never played. This situation demonstrated some of the flaws inherent in the championship system then being used. The start of World War I in summer 1914 put an end to hopes that Lasker would play either Rubinstein or Capablanca for the World Championship in the near future. Throughout World War I (1914–1918) Lasker played in only two serious chess events. 
He convincingly won (5½−½) a non-title match against Tarrasch in 1916. In September–October 1918, shortly before the armistice, he won a quadrangular (four-player) tournament, half a point ahead of Rubinstein. === Academic activities 1894–1918 === Despite his superb playing results, chess was not Lasker's only interest. His parents recognized his intellectual talents, especially for mathematics, and sent the adolescent Emanuel to study in Berlin (where he found he also had a talent for chess). Lasker gained his Abitur (high school graduation certificate) at Landsberg an der Warthe, now a Polish town named Gorzów Wielkopolski but then part of Prussia. He then studied mathematics and philosophy at the universities in Berlin, Göttingen (where David Hilbert was one of his doctoral advisors) and Heidelberg. In 1895 he published two mathematical articles in Nature. On the advice of David Hilbert he registered for doctoral studies at Erlangen during 1900–1902. In 1901 he presented his doctoral thesis Über Reihen auf der Convergenzgrenze ("On Series at Convergence Boundaries") at Erlangen and in the same year it was published by the Royal Society. He was awarded a doctorate in mathematics in 1902. His most significant mathematical article, published in 1905, proved a theorem on primary decomposition; Emmy Noether later developed a more general form of it, which is now regarded as of fundamental importance to modern algebra and algebraic geometry. Lasker held short-term positions as a mathematics lecturer at Tulane University in New Orleans (1893) and Victoria University in Manchester (1901; Victoria University was one of the "parents" of the current University of Manchester). However, he was unable to secure a longer-term position, and pursued his scholarly interests independently. In 1906 Lasker published a booklet titled Kampf (Struggle), in which he attempted to create a general theory of all competitive activities, including chess, business and war. 
He produced two other books which are generally categorized as philosophy, Das Begreifen der Welt (Comprehending the World; 1913) and Die Philosophie des Unvollendbar (sic; The Philosophy of the Unattainable; 1918). === Other activities 1894–1918 === In 1896–97 Lasker published his book Common Sense in Chess, based on lectures he had given in London in 1895. In 1903, Lasker played in Ostend against Mikhail Chigorin, a six-game match that was sponsored by the wealthy lawyer and industrialist Isaac Rice in order to test the Rice Gambit. Lasker narrowly lost the match. Three years later Lasker became secretary of the Rice Gambit Association, founded by Rice in order to promote the Rice Gambit, and in 1907 Lasker quoted with approval Rice's views on the convergence of chess and military strategy. In November 1904, Lasker founded Lasker's Chess Magazine, which ran until 1909. Beginning in 1910, he wrote a weekly chess column for the New York Evening Post, for which he was Chess Editor. Emanuel Lasker became interested in the strategy game Go after being introduced to it by his namesake Edward Lasker, probably in 1907 or 1908 (Edward Lasker wrote a successful book Go and Go-Moku in 1934). He and Edward played Go together while Edward was helping him prepare for his 1908 match with Tarrasch. He kept his interest in Go for the rest of his life, becoming one of the strongest players in Germany and Europe and contributing occasionally to the magazine Deutsche Go-Zeitung. It is alleged that he once said "Had I discovered Go sooner, I would probably have never become world chess champion". At the age of 42, in July 1911, Lasker married Martha Cohn (née Bamberger), a rich widow who was a year older than Lasker and already a grandmother. They lived in Berlin. Martha Cohn wrote popular stories under the pseudonym "L. Marco". During World War I, Lasker invested all of his savings in German war bonds, which lost nearly their entire value with the wartime and post-war inflation. 
During the war, he wrote a pamphlet which claimed that civilization would be in danger if Germany lost the war. === Match against Capablanca === In January 1920 Lasker and José Raúl Capablanca signed an agreement to play a World Championship match in 1921, noting that Capablanca was not free to play in 1920. Because of the delay, Lasker insisted on a final clause that allowed him to play anyone else for the championship in 1920, that nullified the contract with Capablanca if Lasker lost a title match in 1920, and that stipulated that if Lasker resigned the title Capablanca should become World Champion. Lasker's pre-World War I agreement to play Akiba Rubinstein for the title had included a similar clause: if he resigned the title, it would become Rubinstein's. A report in the American Chess Bulletin (July–August 1920 issue) said that Lasker had resigned the world title in favor of Capablanca because the conditions of the match were unpopular in the chess world. The American Chess Bulletin speculated that the conditions were not sufficiently unpopular to warrant resignation of the title, and that Lasker's real concern was that there was not enough financial backing to justify his devoting nine months to the match. When Lasker resigned the title in favor of Capablanca he was unaware that enthusiasts in Havana had just raised $20,000 to fund the match provided it was played there. When Capablanca learned of Lasker's resignation he went to the Netherlands, where Lasker was living at the time, to inform him that Havana would finance the match. In August 1920 Lasker agreed to play in Havana, but insisted that he was the challenger as Capablanca was now the champion. Capablanca signed an agreement that accepted this point, and soon afterwards published a letter confirming this. Lasker also stated that, if he beat Capablanca, he would resign the title so that younger masters could compete for it. The match was played in March–April 1921.
After four draws, the fifth game saw Lasker blunder with Black in an equal ending. Capablanca's solid style allowed him to easily draw the next four games, without taking any risks. In the tenth game, Lasker as White played a position with an Isolated Queen Pawn but failed to create the necessary activity and Capablanca reached a superior ending, which he duly won. The eleventh and fourteenth games were also won by Capablanca, and Lasker resigned the match. Reuben Fine and Harry Golombek attributed this to Lasker's being in mysteriously poor form. On the other hand, Vladimir Kramnik thought that Lasker played quite well and the match was an "even and fascinating fight" until Lasker blundered in the last game, and explained that Capablanca was 20 years younger, a slightly stronger player, and had more recent competitive practice. === European life and travels === Lasker was in his early 50s when he lost the world championship to Capablanca, and he retired from serious match play afterwards; his only other match was a short exhibition against Frank James Marshall in 1940, which was never completed due to Lasker's illness and subsequent death a few months after it started.: 311 After winning the Moravská Ostrava 1923 chess tournament (without a single loss) and the New York 1924 chess tournament (1½ points ahead of Capablanca) and finishing second at Moscow in 1925 (1½ points behind Efim Bogoljubow, ½ point ahead of Capablanca), he effectively retired from serious chess. During the Moscow 1925 chess tournament, Lasker received a telegram informing him that the drama written by himself and his brother Berthold, Vom Menschen die Geschichte ("History of Mankind"), had been accepted for performance at the Lessing theatre in Berlin. Lasker was so distracted by this news that he lost badly to Carlos Torre the same day. The play, however, was not a success. In 1926, Lasker wrote Lehrbuch des Schachspiels, which he re-wrote in English in 1927 as Lasker's Manual of Chess. 
He also wrote books on other games of mental skill: Encyclopedia of Games (1929) and Das verständige Kartenspiel (means "Sensible Card Play"; 1929; English translation in the same year), both of which posed a problem in the mathematical analysis of card games; Brettspiele der Völker ("Board Games of the Nations"; 1931), which includes 30 pages about Go and a section about a game he had invented in 1911, Lasca. In 1930, Lasker was a special correspondent for Dutch and German newspapers reporting on the Culbertson-Buller bridge match during which he became a registered teacher of the Culbertson system. He became an expert bridge player, representing Germany at international events in the early 1930s, and wrote Das Bridgespiel ("The Game of Bridge") in 1931. In October 1928 Emanuel Lasker's brother Berthold died. In spring 1933 Adolf Hitler started a campaign of discrimination and intimidation against Jews, depriving them of their property and citizenship. Lasker and his wife Martha, who were both Jewish, were forced to leave Germany in the same year. After a short stay in England, in 1935 they were invited to live in the USSR by Nikolai Krylenko, the Commissar of Justice who had been responsible for show trials and, in his other capacity as Sports Minister, was an enthusiastic supporter of chess. In the USSR, Lasker renounced his German citizenship and received Soviet citizenship. He took permanent residence in Moscow, and was given a post at Moscow's Institute for Mathematics and a post of trainer of the USSR national team. Lasker returned to competitive chess to make some money, finishing fifth in Zürich 1934 and third in Moscow 1935 (undefeated, ½ point behind Mikhail Botvinnik and Salo Flohr; ahead of Capablanca, Rudolf Spielmann and several Soviet masters), sixth in Moscow 1936 and equal seventh in Nottingham 1936. His performance in Moscow 1935 at age 66 was hailed as "a biological miracle". 
=== Settling in the United States === In August 1937, Martha and Emanuel Lasker decided to leave the Soviet Union, and they moved, via the Netherlands, to the United States (first Chicago, next New York) in October 1937. They were visiting Martha's daughter, but they may also have been motivated by political upheaval in the Soviet Union. In the United States Lasker tried to support himself by giving chess and bridge lectures and exhibitions, as he was now too old for serious competition. In 1940 he published his last book, The Community of the Future, in which he proposed solutions for serious political problems, including anti-Semitism and unemployment. == Assessment == === Playing strength and style === Lasker was considered to have a "psychological" method of play in which he considered the subjective qualities of his opponent, in addition to the objective requirements of his position on the board. Richard Réti published a lengthy analysis of Lasker's play in which he concluded that Lasker deliberately played inferior moves that he knew would make his opponent uncomfortable. W. H. K. Pollock commented, "It is no easy matter to reply correctly to Lasker's bad moves." Lasker himself denied the claim that he deliberately played bad moves, and most modern writers agree. According to Grandmaster Andrew Soltis and International Master John L. Watson, the features that made his play mysterious to contemporaries now appear regularly in modern play: sacrifices to gain positional advantage; playing the "practical" move rather than trying to find the best move; counterattacking and complicating the game before a disadvantage became serious. Former World Champion Vladimir Kramnik said, "He realized that different types of advantage could be interchangeable: tactical edge could be converted into strategic advantage and vice versa", which mystified contemporaries who were just becoming used to the theories of Steinitz as codified by Siegbert Tarrasch. 
Max Euwe opined that the real reason behind Lasker's success was his "exceptional defensive technique" and that "almost all there is to say about defensive chess can be demonstrated by examples from the games of Steinitz and Lasker", the former exemplifying passive defence and the latter an active defence. The famous win against José Raúl Capablanca at St. Petersburg in 1914, which Lasker needed in order to retain any chance of catching up with Capablanca, is sometimes offered as evidence of his "psychological" approach. Reuben Fine describes Lasker's choice of opening, the Exchange Variation of the Ruy Lopez, as "innocuous but psychologically potent". Luděk Pachman writes that Lasker's choice presented his opponent with a dilemma: with only a ½ point lead, Capablanca would have wanted to play safe; but the Exchange Variation's pawn structure gives White an endgame advantage, and Black must use his bishop pair aggressively in the middlegame to nullify this. However, an analysis of Lasker's use of this variation throughout his career concludes that he had excellent results with it as White against top-class opponents, and sometimes used it in "must-win" situations. In Kramnik's opinion, Lasker's play in this game demonstrated deep positional understanding, rather than psychology. Fine reckoned Lasker paid little attention to the openings, while Capablanca thought that Lasker knew the openings very well but disagreed with a lot of contemporary opening analysis. In fact before the 1894 world title match, Lasker studied the openings thoroughly, especially Steinitz's favorite lines. He played primarily e4 openings, particularly the Ruy Lopez. He opened with 1.d4 relatively rarely, although his d4 games had a higher winning percentage than his e4 ones. With the Black pieces, he mainly answered 1.e4 with the French Defense and 1.d4 with the Queen's Gambit. Lasker also used the Sicilian Defense fairly often. 
In Capablanca's opinion, no player surpassed Lasker in the ability to assess a position quickly and accurately, in terms of who had the better prospects of winning and what strategy each side should adopt. Capablanca also wrote that Lasker was so adaptable that he played in no definite style, and that he was both a tenacious defender and a very efficient finisher of his own attacks. Lasker followed Steinitz's principles, and both demonstrated a completely different chess paradigm than the "romantic" mentality before them. Thanks to Steinitz and Lasker, positional players gradually became common (Tarrasch, Schlechter, and Rubinstein stand out.) But, while Steinitz created a new school of chess thought, Lasker's talents were far harder for the masses to grasp; hence there was no Lasker school. In addition to his enormous chess skill, Lasker was said to have an excellent competitive temperament: his rival Siegbert Tarrasch once said, "Lasker occasionally loses a game, but he never loses his head." Lasker enjoyed the need to adapt to varying styles and to the shifting fortunes of tournaments. Although very strong in matches, he was even stronger in tournaments. For over 20 years, he always finished ahead of the younger Capablanca: at St. Petersburg 1914, New York 1924, Moscow 1925, and Moscow 1935. Only in 1936 (15 years after their match), when Lasker was 67, did Capablanca finish ahead of him. In 1964, Chessworld magazine published an article in which future World Champion Bobby Fischer listed the ten greatest players in history. Fischer did not include Lasker in the list, deriding him as a "coffee-house player [who] knew nothing about openings and didn't understand positional chess". In a poll of the world's leading players taken some time after Fischer's list appeared, Tal, Korchnoi, and Robert Byrne all said that Lasker was the greatest player ever. Both Pal Benko and Byrne stated that Fischer later reconsidered and said that Lasker was a great player. 
Statistical ranking systems place Lasker high among the greatest players of all time. The book Warriors of the Mind places him sixth, behind Garry Kasparov, Anatoly Karpov, Fischer, Mikhail Botvinnik and Capablanca. In his 1978 book The Rating of Chessplayers, Past and Present, Arpad Elo gave retrospective ratings to players based on their performance over the best five-year span of their career. He concluded that Lasker was the joint second strongest player of those surveyed (tied with Botvinnik and behind Capablanca). The most up-to-date system, Chessmetrics, is rather sensitive to the length of the periods being compared, and ranks Lasker between fifth and second strongest of all time for peak periods ranging in length from one to twenty years. Its author, the statistician Jeff Sonas, concluded that only Kasparov and Karpov surpassed Lasker's long-term dominance of the game. By Chessmetrics' reckoning, Lasker was the number 1 player in 292 different months—a total of over 24 years. His first No. 1 rank was in June 1890, and his last in December 1926—a span of 36½ years. Chessmetrics also considers him the strongest 67-year-old in history: in December 1935, at age 67 years and 0 months, his rating was 2691 (number 7 in the world), well above second-place Viktor Korchnoi's rating at that age (2660, number 39 in the world, in March 1998). === Influence on chess === Lasker founded no school of players who played in a similar style. Max Euwe, World Champion 1935–1937 and a prolific writer of chess manuals, who had a lifetime 0–3 score against Lasker, said, "It is not possible to learn much from him. One can only stand and wonder." However, Lasker's pragmatic, combative approach had a great influence on Soviet players like Mikhail Tal and Viktor Korchnoi. 
There are several "Lasker Variations" in the chess openings, including Lasker's Defense to the Queen's Gambit, Lasker's Defense to the Evans Gambit (which effectively ended the use of this gambit in tournament play until a revival in the 1990s), and the Lasker Variation in the McCutcheon Variation of the French Defense. Lasker was shocked by the poverty in which Wilhelm Steinitz died and did not intend to die in similar circumstances. He became notorious for demanding high fees for playing matches and tournaments, and he argued that players should own the copyright in their games rather than let publishers get all the profits. These demands initially angered editors and other players, but helped to pave the way for the rise of full-time chess professionals who earn most of their living from playing, writing and teaching. Copyright in chess games had been contentious at least as far back as the mid-1840s, and Steinitz and Lasker vigorously asserted that players should own the copyright and wrote copyright clauses into their match contracts. However, Lasker's demands that challengers should raise large purses prevented or delayed some eagerly awaited World Championship matches—for example Frank James Marshall challenged him in 1904 to a match for the World Championship but could not raise the stakes demanded by Lasker until 1907. This problem continued throughout the reign of his successor, Capablanca. Some of the controversial conditions that Lasker insisted on for championship matches led Capablanca to attempt twice (1914 and 1922) to publish rules for such matches, to which other top players readily agreed. === Work in other fields === In his 1905 article on commutative algebra, Lasker introduced the theory of primary decomposition of ideals, which has influence in the theory of Noetherian rings. Rings having the primary decomposition property are called "Laskerian rings" in his honor. 
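A standard textbook illustration of the primary decomposition Lasker introduced (this example is not from his 1905 paper): in the polynomial ring over the rationals,

```latex
% In \mathbb{Q}[x,y] the ideal (x^2, xy) is not primary, but it decomposes as
(x^2,\ xy) \;=\; (x)\ \cap\ (x^2,\ y),
% where (x) is prime and (x^2, y) is primary with radical (x, y).
```

Lasker's theorem guarantees such a finite decomposition for every ideal of a polynomial ring; Noether's generalization extends it to all rings satisfying the ascending chain condition.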
His attempt to create a general theory of all competitive activities was followed by von Neumann's more systematic work on game theory, and his later writings posed significant problems in the mathematical analysis of card games. According to R. J. Nowakowski, he came close to a complete theory of impartial games. However, his dramatic and philosophical works have never been highly regarded. == Death == Lasker died of a kidney infection in New York on January 11, 1941, at the age of 72, as a charity patient at the Mount Sinai Hospital. His funeral service was held at the Riverside Memorial Chapel, and he was buried at historic Beth Olam Cemetery, Queens, New York. == Personal life, family and friends == His wife Martha and his sister, Mrs. Lotta Hirschberg, survived him. Poet Else Lasker-Schüler was his sister-in-law. Edward Lasker, born in Kempen (Kępno), Greater Poland (then Prussia), the German-American chess master, engineer, and author, claimed that he was distantly related to Emanuel Lasker. They both played in the great New York 1924 chess tournament. Lasker was a good friend of Albert Einstein, who wrote the introduction to the posthumous biography, Emanuel Lasker: The Life of a Chess Master, by Jacques Hannak (1952). In the preface Einstein expressed satisfaction at having met Lasker: Emanuel Lasker was undoubtedly one of the most interesting people I came to know in my later years. We must be thankful to those who have penned the story of his life for this and succeeding generations. For there are few men who have had a warm interest in all the great human problems and at the same time kept their personality so uniquely independent. == Publications == === Chess === The London Chess Fortnightly, 1892–93 Lasker, Emanuel (1965) [1896]. Common Sense in Chess. Dover.
["The following is an abstract of Twelve Lectures given before an audience of London chess players in the spring of 1895" - author's preface] Lasker's How to Play Chess: An Elementary Text Book for Beginners, Which Teaches Chess By a New, Easy and Comprehensive Method, 1900 Lasker's Chess Magazine, OCLC 5002324, 1904–1907. Lasker, Emanuel, ed. (1910). The International Chess Congress, St. Petersburg, 1909. Press of Emanuel Lasker. Lasker's Manual of Chess, 1925, is as famous in chess circles for its philosophical tone as for its content. Lehrbuch des Schachspiels, 1926 – English version Lasker's Manual of Chess published in 1927. Lasker, Emanuel (1988) [1934]. Lasker's Chess Primer. === Other games === Encyclopedia of Games Vol. I, Card Strategy, New York 1929, urn:nbn:de:hbz:5:1-331264. Das verständige Kartenspiel (Sensible Card Play), Berlin 1929, urn:nbn:de:hbz:5:1-331248 – not only a translation of Encyclopedia of Games. Brettspiele der Völker (Board Games of the Nations), Berlin 1931, urn:nbn:at:at-ubms:3-1736 – includes sections about Go and Lasca. Das Bridgespiel ("The Game of Bridge"), 1931. === Mathematics === Lasker, Emanuel (August 1895). "Metrical Relations of Plane Spaces of n Manifoldness". Nature. 52 (1345): 340–343. Bibcode:1895Natur..52R.340L. doi:10.1038/052340d0. S2CID 4017358. Lasker, Emanuel (October 1895). "About a certain Class of Curved Lines in Space of n Manifoldness". Nature. 52 (1355): 596. Bibcode:1895Natur..52..596L. doi:10.1038/052596a0. S2CID 4016031. Lasker, Emanuel (1901). "Über Reihen auf der Convergenzgrenze ( "On Series at Convergence Boundaries" )". Philosophical Transactions of the Royal Society A. 196 (274–286): 431–477. Bibcode:1901RSPTA.196..431L. doi:10.1098/rsta.1901.0009. – Lasker's PhD thesis. Lasker, E. (1905). "Zur Theorie der Moduln und Ideale". Math. Ann. 60 (1): 20–116. doi:10.1007/BF01447495. S2CID 120367750. === Philosophy === Kampf (Struggle), 1906. Das Begreifen der Welt (Comprehending the World), 1913. 
Die Philosophie des Unvollendbar (sic; The Philosophy of the Unattainable), 1918. Vom Menschen die Geschichte ("History of Mankind"), 1925 – a play, co-written with his brother Berthold. The Community of the Future, 1940. == In popular culture == In Michael Chabon's alternate history mystery novel, The Yiddish Policemen's Union, the murdered man, Mendel Shpilman (born during the 1960s), being a chess enthusiast, uses the name "Emanuel Lasker" as an alias. The reference is clearly understood by the protagonist, Detective Meyer Landsman, because he has also studied chess. == Tournament results == The following table gives Lasker's placings and scores in tournaments. The first "Score" column gives the number of points on the total possible. In the second "Score" column, "+" indicates the number of won games, "−" the number of losses, and "=" the number of draws. == Match results == Here are Lasker's results in matches. The first "Score" column gives the number of points on the total possible. In the second "Score" column, "+" indicates the number of won games, "−" the number of losses, and "=" the number of draws. == Notable games == Lasker vs. Johann Hermann Bauer, Amsterdam 1889. Although this was not the earliest-known game with a successful two-bishops sacrifice, this combination is now known as a "Lasker–Bauer combination" or "Lasker sacrifice". Harry Nelson Pillsbury vs. Lasker, St Petersburg 1895. A brilliant sacrifice on the 17th move leads to a victorious attack. Wilhelm Steinitz vs. Lasker, London 1899. The old champion and the new one really go for it. Frank James Marshall vs. Lasker, World Championship Match 1907, game 1. Lasker's attack is insufficient for a quick win, so he trades it in for an endgame in which he quickly ties Marshall in knots. Lasker vs. Carl Schlechter, match 1910, game 10. Not a great game, but the one that saved Lasker from losing his world title in 1910. Lasker vs. Jose Raul Capablanca, St Petersburg 1914. 
Lasker, who needed a win here, surprisingly used a quiet opening, allowing Capablanca to simplify the game early. There has been much debate about whether Lasker's approach represented subtle psychology or deep positional understanding. Max Euwe vs. Lasker, Zurich 1934. 66-year-old Lasker beats a future World Champion, sacrificing his queen to turn defence into attack. == See also == List of Jewish chess players == References == == Further reading == Chernev, Irving (1995). Twelve Great Chess Players and Their Best Games. New York: Dover. pp. 143–162. ISBN 0-486-28674-6. Hannak, J. (1991) [1952]. Emanuel Lasker: The Life of a Chess Master. New York: Dover. ISBN 0-486-26706-7. Kasparov, Garry (2003). My Great Predecessors, part I. Everyman Chess. ISBN 1-85744-330-6. Soltis, Andrew (2005). Why Lasker Matters. Batsford. ISBN 0-7134-8983-9. Whyld, Ken (1998). The Collected Games of Emanuel Lasker. The Chess Player. Winter, Edward, ed. (1981). World Chess Champions. Oxford: Pergamon Press. ISBN 0-08-024094-1. Forster, Richard; Hansen, Stefan; Negele, Michael (2009). Emanuel Lasker: Denker, Weltenburger, Schachweltmeister. Exzelsior Verlag. ISBN 978-3935800051. Forster, Richard; Negele, Michael; Tischbierek, Raj (2018). Emanuel Lasker Volume 1: Struggle and Victories: World Chess Champion for 27 Years. Exzelsior Verlag. ISBN 978-3935800099. Forster, Richard; Negele, Michael; Tischbierek, Raj (2020). Emanuel Lasker Volume 2: Choices and Chances: Chess and other Games of the Mind. Exzelsior Verlag. ISBN 978-3935800105. == External links == Emanuel Lasker Society Emanuel Lasker player profile and games at Chessgames.com O'Connor, John J.; Robertson, Edmund F., "Emanuel Lasker", MacTutor History of Mathematics Archive, University of St Andrews Emanuel Lasker at the Mathematics Genealogy Project "About Lasca – a little-known abstract game". Human–Computer Interface Research. Archived from the original on May 9, 2008. Hans Kmoch. "Grandmasters I have known" (PDF).
ChessCafe.com. Tryfon Gavriel; Janet Edwardson. "Biography of Emanuel Lasker". Barnet chess club. Archived from the original on May 30, 2013. Retrieved October 15, 2010. "Lasker's Chess Magazine, January 1905 edition, excerpts". 100bestwebsites.org. Jacobs, Joseph; Porter, A. (1901–1906). "Lasker, Emanuel". In Singer, Isidore (ed.). Jewish Encyclopedia. Vol. 7. pp. 622–3. Retrieved November 21, 2008. Works by or about Emanuel Lasker at the Internet Archive Obituary of Emanuel Lasker, The Times, 1941 Articles about Emanuel Lasker by Edward Winter
Wikipedia:Emanuel Lodewijk Elte#0
Emanuel Lodewijk Elte (16 March 1881 in Amsterdam – 9 April 1943 in Sobibór) was a Dutch mathematician. He is noted for discovering and classifying semiregular polytopes in dimensions four and higher. Elte's father Hartog Elte was headmaster of a school in Amsterdam. Emanuel Elte married Rebecca Stork in 1912 in Amsterdam, when he was a teacher at a high school in that city. By 1943 the family lived in Haarlem. When a German officer was shot in that town on January 30 of that year, a hundred inhabitants of Haarlem, including Elte and his family, were transported in reprisal to Camp Vught. As Jews, he and his wife were then deported to Sobibór, where they were murdered; his two children were murdered at Auschwitz. == Elte's semiregular polytopes of the first kind == His work rediscovered the finite semiregular polytopes of Thorold Gosset and went further, allowing not only regular facets but, recursively, one or two semiregular ones as well. These were enumerated in his 1912 book, The Semiregular Polytopes of the Hyperspaces. He called them semiregular polytopes of the first kind, limiting his search to one or two types of regular or semiregular k-faces. These polytopes and more were rediscovered again by Coxeter, and renamed as part of a larger class of uniform polytopes. In the process he discovered all the main representatives of the exceptional En family of polytopes, save only 1₄₂, which did not satisfy his definition of semiregularity. (*) Added in this table as a sequence Elte recognized but did not enumerate explicitly Regular dimensional families: Sn = n-simplex: S3, S4, S5, S6, S7, S8, ... Mn = n-cube = measure polytope: M3, M4, M5, M6, M7, M8, ... HMn = n-demicube = half-measure polytope: HM3, HM4, HM5, HM6, HM7, HM8, ... Crn = n-orthoplex = cross polytope: Cr3, Cr4, Cr5, Cr6, Cr7, Cr8, ...
Semiregular polytopes of the first kind: Vn = semiregular polytope with n vertices Polygons: Pn = regular n-gon Polyhedra: Regular: T, C, O, I, D Truncated: tT, tC, tO, tI, tD Quasiregular (rectified): CO, ID Cantellated: RCO, RID Truncated quasiregular (omnitruncated): tCO, tID Prismatic: Pn, APn 4-polytopes: Cn = regular 4-polytope with n cells: C5, C8, C16, C24, C120, C600 Rectified: tC5, tC8, tC16, tC24, tC120, tC600 == See also == Gosset–Elte figures == Notes ==
Wikipedia:Embedding#0
In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup. When some object X {\displaystyle X} is said to be embedded in another object Y {\displaystyle Y} , the embedding is given by some injective and structure-preserving map f : X → Y {\displaystyle f:X\rightarrow Y} . The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which X {\displaystyle X} and Y {\displaystyle Y} are instances. In the terminology of category theory, a structure-preserving map is called a morphism. The fact that a map f : X → Y {\displaystyle f:X\rightarrow Y} is an embedding is often indicated by the use of a "hooked arrow" (U+21AA ↪ RIGHTWARDS ARROW WITH HOOK); thus: f : X ↪ Y . {\displaystyle f:X\hookrightarrow Y.} (On the other hand, this notation is sometimes reserved for inclusion maps.) Given X {\displaystyle X} and Y {\displaystyle Y} , several different embeddings of X {\displaystyle X} in Y {\displaystyle Y} may be possible. In many cases of interest there is a standard (or "canonical") embedding, like those of the natural numbers in the integers, the integers in the rational numbers, the rational numbers in the real numbers, and the real numbers in the complex numbers. In such cases it is common to identify the domain X {\displaystyle X} with its image f ( X ) {\displaystyle f(X)} contained in Y {\displaystyle Y} , so that X ⊆ Y {\displaystyle X\subseteq Y} . == Topology and geometry == === General topology === In general topology, an embedding is a homeomorphism onto its image. 
More explicitly, an injective continuous map f : X → Y {\displaystyle f:X\to Y} between topological spaces X {\displaystyle X} and Y {\displaystyle Y} is a topological embedding if f {\displaystyle f} yields a homeomorphism between X {\displaystyle X} and f ( X ) {\displaystyle f(X)} (where f ( X ) {\displaystyle f(X)} carries the subspace topology inherited from Y {\displaystyle Y} ). Intuitively then, the embedding f : X → Y {\displaystyle f:X\to Y} lets us treat X {\displaystyle X} as a subspace of Y {\displaystyle Y} . Every embedding is injective and continuous. Every map that is injective, continuous and either open or closed is an embedding; however there are also embeddings that are neither open nor closed. The latter happens if the image f ( X ) {\displaystyle f(X)} is neither an open set nor a closed set in Y {\displaystyle Y} . For a given space Y {\displaystyle Y} , the existence of an embedding X → Y {\displaystyle X\to Y} is a topological invariant of X {\displaystyle X} . This allows two spaces to be distinguished if one is able to be embedded in a space while the other is not. ==== Related definitions ==== If the domain of a function f : X → Y {\displaystyle f:X\to Y} is a topological space then the function is said to be locally injective at a point if there exists some neighborhood U {\displaystyle U} of this point such that the restriction f | U : U → Y {\displaystyle f{\big \vert }_{U}:U\to Y} is injective. It is called locally injective if it is locally injective around every point of its domain. Similarly, a local (topological, resp. smooth) embedding is a function for which every point in its domain has some neighborhood to which its restriction is a (topological, resp. smooth) embedding. Every injective function is locally injective but not conversely. Local diffeomorphisms, local homeomorphisms, and smooth immersions are all locally injective functions that are not necessarily injective. 
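The local/global distinction can be made concrete with a minimal numerical sketch (a standard example, not from the article): the double wrap of an interval around the unit circle is locally injective everywhere but not injective.

```python
import numpy as np

# g(t) = (cos 2t, sin 2t) wraps the interval (0, 2*pi) twice around the
# unit circle: it is locally injective (injective on any interval of
# length < pi) but not globally injective, since g(t) = g(t + pi).
def g(t):
    return np.array([np.cos(2.0 * t), np.sin(2.0 * t)])

t0 = 0.7
assert np.allclose(g(t0), g(t0 + np.pi))   # global injectivity fails

# local injectivity near t0: all images on a short interval are distinct
ts = np.linspace(t0 - 0.5, t0 + 0.5, 200)
images = {tuple(np.round(g(t), 12)) for t in ts}
assert len(images) == len(ts)
```

Restricting g to any neighbourhood of length less than π yields an injective map, which is exactly the local-injectivity condition defined above.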
The inverse function theorem gives a sufficient condition for a continuously differentiable function to be (among other things) locally injective. Every fiber of a locally injective function f : X → Y {\displaystyle f:X\to Y} is necessarily a discrete subspace of its domain X . {\displaystyle X.} === Differential topology === In differential topology: Let M {\displaystyle M} and N {\displaystyle N} be smooth manifolds and f : M → N {\displaystyle f:M\to N} be a smooth map. Then f {\displaystyle f} is called an immersion if its derivative is everywhere injective. An embedding, or a smooth embedding, is defined to be an immersion that is an embedding in the topological sense mentioned above (i.e. homeomorphism onto its image). In other words, the domain of an embedding is diffeomorphic to its image, and in particular the image of an embedding must be a submanifold. An immersion is precisely a local embedding, i.e. for any point x ∈ M {\displaystyle x\in M} there is a neighborhood x ∈ U ⊂ M {\displaystyle x\in U\subset M} such that f : U → N {\displaystyle f:U\to N} is an embedding. When the domain manifold is compact, the notion of a smooth embedding is equivalent to that of an injective immersion. An important case is N = R n {\displaystyle N=\mathbb {R} ^{n}} . The interest here is in how large n {\displaystyle n} must be for an embedding, in terms of the dimension m {\displaystyle m} of M {\displaystyle M} . The Whitney embedding theorem states that n = 2 m {\displaystyle n=2m} is enough, and is the best possible linear bound. For example, the real projective space R P m {\displaystyle \mathbb {R} \mathrm {P} ^{m}} of dimension m {\displaystyle m} , where m {\displaystyle m} is a power of two, requires n = 2 m {\displaystyle n=2m} for an embedding. 
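The immersion/embedding distinction defined above can be checked numerically for the figure-eight curve, the standard example of an immersion of the circle that fails to be an embedding (an illustrative sketch, not from the article):

```python
import numpy as np

# f(t) = (sin 2t, sin t), with t taken modulo 2*pi, immerses the circle in
# the plane: its derivative f'(t) = (2 cos 2t, cos t) never vanishes.
t = np.linspace(0.0, 2.0 * np.pi, 10001)
speed = np.hypot(2.0 * np.cos(2.0 * t), np.cos(t))
assert speed.min() > 0.5           # |f'| is bounded away from zero

# ...yet f is not injective: f(0) = f(pi) = (0, 0) is a self-intersection,
# so this immersion is not an embedding.
f = lambda s: np.array([np.sin(2.0 * s), np.sin(s)])
assert np.allclose(f(0.0), f(np.pi))
```

The nonvanishing derivative makes f a local embedding at every point, but the self-intersection prevents it from being a homeomorphism onto its image.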
However, this does not apply to immersions; for instance, R P 2 {\displaystyle \mathbb {R} \mathrm {P} ^{2}} can be immersed in R 3 {\displaystyle \mathbb {R} ^{3}} as is explicitly shown by Boy's surface—which has self-intersections. The Roman surface fails to be an immersion as it contains cross-caps. An embedding is proper if it behaves well with respect to boundaries: one requires the map f : X → Y {\displaystyle f:X\rightarrow Y} to be such that f ( ∂ X ) = f ( X ) ∩ ∂ Y {\displaystyle f(\partial X)=f(X)\cap \partial Y} , and f ( X ) {\displaystyle f(X)} is transverse to ∂ Y {\displaystyle \partial Y} in any point of f ( ∂ X ) {\displaystyle f(\partial X)} . The first condition is equivalent to having f ( ∂ X ) ⊆ ∂ Y {\displaystyle f(\partial X)\subseteq \partial Y} and f ( X ∖ ∂ X ) ⊆ Y ∖ ∂ Y {\displaystyle f(X\setminus \partial X)\subseteq Y\setminus \partial Y} . The second condition, roughly speaking, says that f ( X ) {\displaystyle f(X)} is not tangent to the boundary of Y {\displaystyle Y} . === Riemannian and pseudo-Riemannian geometry === In Riemannian geometry and pseudo-Riemannian geometry: Let ( M , g ) {\displaystyle (M,g)} and ( N , h ) {\displaystyle (N,h)} be Riemannian manifolds or more generally pseudo-Riemannian manifolds. An isometric embedding is a smooth embedding f : M → N {\displaystyle f:M\rightarrow N} that preserves the (pseudo-)metric in the sense that g {\displaystyle g} is equal to the pullback of h {\displaystyle h} by f {\displaystyle f} , i.e. g = f ∗ h {\displaystyle g=f^{*}h} . Explicitly, for any two tangent vectors v , w ∈ T x ( M ) {\displaystyle v,w\in T_{x}(M)} we have g ( v , w ) = h ( d f ( v ) , d f ( w ) ) . {\displaystyle g(v,w)=h(df(v),df(w)).} Analogously, isometric immersion is an immersion between (pseudo)-Riemannian manifolds that preserves the (pseudo)-Riemannian metrics. 
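The pullback condition g = f*h can be verified directly for the textbook cylinder immersion f(u, v) = (cos u, sin u, v) of the flat plane into Euclidean 3-space (an illustrative sketch, not from the article): the pullback of the Euclidean metric along f is JᵀJ, where J is the Jacobian of f, and it equals the flat metric.

```python
import numpy as np

def jacobian(u, v):
    # Jacobian of f(u, v) = (cos u, sin u, v); columns are df/du and df/dv
    return np.array([[-np.sin(u), 0.0],
                     [ np.cos(u), 0.0],
                     [       0.0, 1.0]])

# Pullback of the Euclidean metric h = I_3 along f is J^T J; the immersion
# is isometric because this equals the flat metric I_2 at every point.
for u, v in [(0.0, 0.0), (1.0, 2.0), (-2.5, 0.7)]:
    J = jacobian(u, v)
    assert np.allclose(J.T @ J, np.eye(2))
```

This is why rolling a flat sheet into a cylinder preserves all lengths of curves drawn on it.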
Equivalently, in Riemannian geometry, an isometric embedding (immersion) is a smooth embedding (immersion) that preserves length of curves (cf. Nash embedding theorem). == Algebra == In general, for an algebraic category C {\displaystyle C} , an embedding between two C {\displaystyle C} -algebraic structures X {\displaystyle X} and Y {\displaystyle Y} is a C {\displaystyle C} -morphism e : X → Y {\displaystyle e:X\rightarrow Y} that is injective. === Field theory === In field theory, an embedding of a field E {\displaystyle E} in a field F {\displaystyle F} is a ring homomorphism σ : E → F {\displaystyle \sigma :E\rightarrow F} . The kernel of σ {\displaystyle \sigma } is an ideal of E {\displaystyle E} , which cannot be the whole field E {\displaystyle E} , because the condition σ ( 1 ) = 1 {\displaystyle \sigma (1)=1} ensures that 1 {\displaystyle 1} is not in the kernel. Furthermore, any field has as ideals only the zero ideal and the whole field itself (because if there is any non-zero field element in an ideal, it is invertible, showing the ideal is the whole field). Therefore, the kernel is 0 {\displaystyle 0} , so any embedding of fields is a monomorphism. Hence, E {\displaystyle E} is isomorphic to the subfield σ ( E ) {\displaystyle \sigma (E)} of F {\displaystyle F} . This justifies the name embedding for an arbitrary homomorphism of fields.
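The zero-kernel argument can be checked on the smallest nontrivial case. The sketch below is an illustration of ours, not from the article: GF(4) is hand-coded as pairs (a, b) over GF(2), representing a + bx with x² = x + 1, and the inclusion σ(a) = (a, 0) is verified to be a ring homomorphism with σ(1) ≠ 0, hence with zero kernel and therefore injective.

```python
from itertools import product

# GF(2) = {0, 1} with xor as addition and & as multiplication.
GF2 = (0, 1)

def add4(p, q):
    # Componentwise addition in GF(4).
    return (p[0] ^ q[0], p[1] ^ q[1])

def mul4(p, q):
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, with x^2 = x + 1,
    # so the result is (ac + bd) + (ad + bc + bd) x.
    a, b = p
    c, d = q
    bd = b & d
    return ((a & c) ^ bd, (a & d) ^ (b & c) ^ bd)

def sigma(a):
    # Candidate field embedding GF(2) -> GF(4).
    return (a, 0)

# sigma is a ring homomorphism with sigma(1) = (1, 0) != (0, 0) ...
for a, b in product(GF2, repeat=2):
    assert sigma(a ^ b) == add4(sigma(a), sigma(b))
    assert sigma(a & b) == mul4(sigma(a), sigma(b))

# ... hence its kernel is the zero ideal, and sigma is injective,
# exactly as the general argument for field embeddings predicts.
kernel = [a for a in GF2 if sigma(a) == (0, 0)]
assert kernel == [0]
```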
=== Universal algebra and model theory === If σ {\displaystyle \sigma } is a signature and A , B {\displaystyle A,B} are σ {\displaystyle \sigma } -structures (also called σ {\displaystyle \sigma } -algebras in universal algebra or models in model theory), then a map h : A → B {\displaystyle h:A\to B} is a σ {\displaystyle \sigma } -embedding exactly if all of the following hold: h {\displaystyle h} is injective, for every n {\displaystyle n} -ary function symbol f ∈ σ {\displaystyle f\in \sigma } and ( a 1 , … , a n ) ∈ A n , {\displaystyle (a_{1},\ldots ,a_{n})\in A^{n},} we have h ( f A ( a 1 , … , a n ) ) = f B ( h ( a 1 ) , … , h ( a n ) ) {\displaystyle h(f^{A}(a_{1},\ldots ,a_{n}))=f^{B}(h(a_{1}),\ldots ,h(a_{n}))} , for every n {\displaystyle n} -ary relation symbol R ∈ σ {\displaystyle R\in \sigma } and ( a 1 , … , a n ) ∈ A n , {\displaystyle (a_{1},\ldots ,a_{n})\in A^{n},} we have A ⊨ R ( a 1 , … , a n ) {\displaystyle A\models R(a_{1},\ldots ,a_{n})} iff B ⊨ R ( h ( a 1 ) , … , h ( a n ) ) . {\displaystyle B\models R(h(a_{1}),\ldots ,h(a_{n})).} Here A ⊨ R ( a 1 , … , a n ) {\displaystyle A\models R(a_{1},\ldots ,a_{n})} is a model theoretical notation equivalent to ( a 1 , … , a n ) ∈ R A {\displaystyle (a_{1},\ldots ,a_{n})\in R^{A}} . In model theory there is also a stronger notion of elementary embedding. == Order theory and domain theory == In order theory, an embedding of partially ordered sets is a function F {\displaystyle F} between partially ordered sets X {\displaystyle X} and Y {\displaystyle Y} such that ∀ x 1 , x 2 ∈ X : x 1 ≤ x 2 ⟺ F ( x 1 ) ≤ F ( x 2 ) . {\displaystyle \forall x_{1},x_{2}\in X:x_{1}\leq x_{2}\iff F(x_{1})\leq F(x_{2}).} Injectivity of F {\displaystyle F} follows quickly from this definition. In domain theory, an additional requirement is that ∀ y ∈ Y : { x ∣ F ( x ) ≤ y } {\displaystyle \forall y\in Y:\{x\mid F(x)\leq y\}} is directed.
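The order-embedding condition can be verified exhaustively on a small poset. The example below is an illustrative choice of ours, not from the article: the divisors of 6 ordered by divisibility embed into the subsets of {2, 3} ordered by inclusion via F(n) = set of prime divisors of n, and injectivity falls out of the iff condition just as the text states.

```python
from itertools import product

# Domain poset: divisors of 6 under divisibility.
X = [1, 2, 3, 6]

# Candidate embedding into (subsets of {2, 3}, inclusion):
# F(n) = set of prime divisors of n.
F = {1: frozenset(), 2: frozenset({2}),
     3: frozenset({3}), 6: frozenset({2, 3})}

def leq_X(a, b):
    # a <= b in X  iff  a divides b.
    return b % a == 0

# Check the defining condition: x1 <= x2  iff  F(x1) <= F(x2).
for x1, x2 in product(X, repeat=2):
    assert leq_X(x1, x2) == (F[x1] <= F[x2])

# Injectivity follows: F(x1) == F(x2) gives inclusions both ways,
# hence x1 <= x2 and x2 <= x1 in X, hence x1 == x2.
assert len(set(F.values())) == len(X)
```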
== Metric spaces == A mapping ϕ : X → Y {\displaystyle \phi :X\to Y} of metric spaces is called an embedding (with distortion C > 0 {\displaystyle C>0} ) if L d X ( x , y ) ≤ d Y ( ϕ ( x ) , ϕ ( y ) ) ≤ C L d X ( x , y ) {\displaystyle Ld_{X}(x,y)\leq d_{Y}(\phi (x),\phi (y))\leq CLd_{X}(x,y)} for every x , y ∈ X {\displaystyle x,y\in X} and some constant L > 0 {\displaystyle L>0} . === Normed spaces === An important special case is that of normed spaces; in this case it is natural to consider linear embeddings. One of the basic questions that can be asked about a finite-dimensional normed space ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\cdot \|)} is, what is the maximal dimension k {\displaystyle k} such that the Hilbert space ℓ 2 k {\displaystyle \ell _{2}^{k}} can be linearly embedded into X {\displaystyle X} with constant distortion? The answer is given by Dvoretzky's theorem. == Category theory == In category theory, there is no satisfactory and generally accepted definition of embeddings that is applicable in all categories. One would expect that all isomorphisms and all compositions of embeddings are embeddings, and that all embeddings are monomorphisms. Other typical requirements are: any extremal monomorphism is an embedding and embeddings are stable under pullbacks. Ideally the class of all embedded subobjects of a given object, up to isomorphism, should also be small, and thus an ordered set. In this case, the category is said to be well powered with respect to the class of embeddings. This allows defining new local structures in the category (such as a closure operator). 
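Returning to the metric-space definition earlier in this section, the distortion of a concrete embedding can be computed directly. The sketch below uses spaces and a map of our own choosing, not taken from the article: the shortest-path metric on the 4-cycle is mapped onto the corners of the unit square, and the distortion C is the ratio of the largest to the smallest expansion factor.

```python
import math
from itertools import combinations

# Map the vertices 0..3 of the 4-cycle onto the corners of the unit square.
corners = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}

def d_cycle(i, j):
    # Shortest-path (graph) metric on the 4-cycle C4.
    k = abs(i - j) % 4
    return min(k, 4 - k)

def d_plane(p, q):
    # Euclidean metric on the plane.
    return math.dist(p, q)

# Expansion factor d_Y(phi(x), phi(y)) / d_X(x, y) for each pair; the
# definition L d_X <= d_Y <= C L d_X holds with L = min factor and
# C = (max factor) / (min factor).
ratios = [d_plane(corners[i], corners[j]) / d_cycle(i, j)
          for i, j in combinations(range(4), 2)]
L = min(ratios)
C = max(ratios) / L  # works out to sqrt(2): opposite corners land too close
```

Adjacent vertices (cycle distance 1) map to points at distance 1, while opposite vertices (cycle distance 2) map to points only sqrt(2) apart, which is what forces the distortion sqrt(2).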
In a concrete category, an embedding is a morphism f : A → B {\displaystyle f:A\rightarrow B} that is an injective function from the underlying set of A {\displaystyle A} to the underlying set of B {\displaystyle B} and is also an initial morphism in the following sense: If g {\displaystyle g} is a function from the underlying set of an object C {\displaystyle C} to the underlying set of A {\displaystyle A} , and if its composition with f {\displaystyle f} is a morphism f g : C → B {\displaystyle fg:C\rightarrow B} , then g {\displaystyle g} itself is a morphism. A factorization system for a category also gives rise to a notion of embedding. If ( E , M ) {\displaystyle (E,M)} is a factorization system, then the morphisms in M {\displaystyle M} may be regarded as the embeddings, especially when the category is well powered with respect to M {\displaystyle M} . Concrete theories often have a factorization system in which M {\displaystyle M} consists of the embeddings in the previous sense. This is the case of the majority of the examples given in this article. As usual in category theory, there is a dual concept, known as quotient. All the preceding properties can be dualized. An embedding can also refer to an embedding functor. == See also == Embedding (machine learning) Ambient space Closed immersion Cover Dimensionality reduction Flat (geometry) Immersion Johnson–Lindenstrauss lemma Submanifold Subspace Universal space == Notes == == References == == External links == Adámek, Jiří; Horst Herrlich; George Strecker (2006). Abstract and Concrete Categories (The Joy of Cats). Embedding of manifolds on the Manifold Atlas
Wikipedia:Emil Artin#0
Emil Artin (German: [ˈaʁtiːn]; March 3, 1898 – December 20, 1962) was an Austrian mathematician of Armenian descent. Artin was one of the leading mathematicians of the twentieth century. He is best known for his work on algebraic number theory, contributing largely to class field theory and a new construction of L-functions. He also contributed to the pure theories of rings, groups and fields. Along with Emmy Noether, he is considered the founder of modern abstract algebra. == Early life and education == === Parents === Emil Artin was born in Vienna to parents Emma Maria, née Laura (stage name Clarus), a soubrette on the operetta stages of Austria and Germany, and Emil Hadochadus Maria Artin, Austrian-born of mixed Austrian and Armenian descent. His Armenian last name was Artinian, which was shortened to Artin. Several documents, including Emil's birth certificate, list the father's occupation as "opera singer," though others list it as "art dealer." It seems at least plausible that he and Emma had met as colleagues in the theater. They were married in St. Stephen's Parish on July 24, 1895. === Early education === Artin entered school in September 1904, presumably in Vienna. By then, his father was already suffering symptoms of advanced syphilis, among them increasing mental instability, and was eventually institutionalized at the recently established (and imperially sponsored) insane asylum at Mauer Öhling, 125 kilometers west of Vienna. It is notable that neither wife nor child contracted this highly infectious disease. Artin's father died there July 20, 1906. Young Artin was eight. On July 15, 1907, Artin's mother married Rudolf Hübner, a prosperous manufacturing entrepreneur from Reichenberg, Bohemia (now Liberec in the Czech Republic). Documentary evidence suggests that Emma had already been a resident in Reichenberg the previous year, and in deference to her new husband, she had abandoned her vocal career.
Hübner deemed a life in the theater unseemly—unfit for the wife of a man of his position. In September 1907, Artin entered the Volksschule in Horní Stropnice. For that year, he lived away from home, boarding on a local farm. The following year, he returned to the home of his mother and stepfather, and entered the Realschule in Reichenberg, where he pursued his secondary education until June 1916. In Reichenberg, Artin formed a lifelong friendship with a young neighbor, Arthur Baer, who became an astronomer, teaching for many years at University of Cambridge. Astronomy was an interest the two boys shared already at this time. They each had telescopes. They also rigged a telegraph between their houses, over which once Baer excitedly reported to his friend an astronomical discovery he thought he had made—perhaps a supernova, he thought—and told Artin where in the sky to look. Artin tapped back the terse reply "A-N-D-R-O-M-E-D-A N-E-B-E-L." (Andromeda nebula) Artin's academic performance in the first years at the Realschule was spotty. Up to the end of the 1911–1912 school year, for instance, his grade in mathematics was merely "genügend," (satisfactory). Of his mathematical inclinations at this early period he later wrote, "Meine eigene Vorliebe zur Mathematik zeigte sich erst im sechzehnten Lebensjahr, während vorher von irgendeiner Anlage dazu überhaupt nicht die Rede sein konnte." ("My own predilection for mathematics manifested itself only in my sixteenth year; before that, one could certainly not speak of any particular aptitude for it.") His grade in French for 1912 was actually "nicht genügend" (unsatisfactory). He did rather better work in physics and chemistry. But from 1910 to 1912, his grade for "Comportment" was "nicht genügend." Artin spent the school year 1912–1913 away from home, in France, a period he spoke of later as one of the happiest of his life. 
He lived that year with the family of Edmond Fritz, in the vicinity of Paris, and attended a school there. When he returned from France to Reichenberg, his academic work markedly improved, and he began consistently receiving grades of "gut" or "sehr gut" (good or very good) in virtually all subjects—including French and "Comportment." By the time he completed studies at the Realschule in June 1916, he was awarded the Reifezeugnis (diploma—not to be confused with the Abitur) that affirmed him "reif mit Auszeichnung" (qualified with distinction) for graduation to a technical university. === University education === Now that it was time to move on to university studies, Artin was no doubt content to leave Reichenberg, for relations with his stepfather were clouded. According to him, Hübner reproached him "day and night" with being a financial burden, and even when Artin became a university lecturer and then a professor, Hübner deprecated his academic career as self-indulgent and belittled its paltry emolument. In October 1916, Artin matriculated at the University of Vienna, having focused by now on mathematics. He studied there with Philipp Furtwängler, and also took courses in astrophysics and Latin. Studies at Vienna were interrupted when Artin was drafted in June 1918 into the Austrian army (his Army photo ID is dated July 1, 1918). Assigned to the K.u. K. 44th Infantry Regiment, he was stationed northwest of Venice at Primolano, on the Italian front in the foothills of the Dolomites. To his great relief, Artin managed to avoid combat by volunteering for service as a translator—his ignorance of Italian notwithstanding. He did know French, of course, and some Latin, was generally a quick study, and was motivated by a highly rational fear in a theater of that war that had often proven a meat-grinder. 
In his scramble to learn at least some Italian, Artin had recourse to an encyclopedia, which he once consulted for help in dealing with the cockroaches that infested the Austrian barracks. At some length, the article described a variety of technical methods, concluding finally with—Artin laughingly recalled in later years—"la caccia diretta" ("the direct hunt"). Indeed, "la caccia diretta" was the straightforward method he and his fellow infantrymen adopted. Artin survived both war and vermin on the Italian front, and returned late in 1918 to the University of Vienna, where he remained through Easter of the following year. By June 1919, he had moved to Leipzig and matriculated at the university there as a "Class 2 Auditor" ("Hörer zweiter Ordnung"). Late the same year, Artin undertook the formality of standing for a qualifying examination by an academic board of the Oberrealschule in Leipzig, which he passed with the grade of "gut" (good), receiving for the second time the Reifezeugnis (diploma attesting the equivalence of satisfactory completion of 6 years at a Realschule). How this Leipzig Reifezeugnis differed technically from the one he had been granted at Reichenberg is unclear from the document, but it apparently qualified him for regular matriculation as a student at the university, which normally required the Abitur. From 1919 to June 1921, Artin pursued mostly mathematical studies at Leipzig. His principal teacher and dissertation advisor was Gustav Herglotz. Additionally, Artin took courses in chemistry and various fields of physics, including mechanics, atomic theory, quantum theory, Maxwellian theory, radioactivity, and astrophysics. 
In June 1921 he was awarded the Doctor of Philosophy degree, based on his "excellent" dissertation, "Quadratische Körper im Gebiete der höheren Kongruenzen" ("Quadratic Fields in the domain of higher congruences"), and the oral examination which—his diploma affirms—he had passed three days earlier "with extraordinary success." In the fall of 1921, Artin moved to the University of Göttingen, considered the "Mecca" of mathematics at the time, where he pursued one year of post-doctoral studies in mathematics and mathematical physics with Richard Courant and David Hilbert. While at Göttingen, he worked closely with Emmy Noether and Helmut Hasse. Aside from consistently good school grades in singing, the first documentary evidence of Artin's deep and lifelong engagement with music comes from the year in Göttingen, where he was regularly invited to join in the chamber music sessions hosted by Richard Courant. He played all the keyboard instruments, and was an especially accomplished flautist, although it is not known exactly by what instruction he had achieved proficiency on these instruments. He became especially devoted to the music of Johann Sebastian Bach. == Career == === Professorship at Hamburg === Courant arranged for Artin to receive a stipend for the summer of 1922 in Göttingen, which occasioned his declining a position offered him at the University of Kiel. The following October, however, he accepted an equivalent position at Hamburg, where in 1923, he completed the Habilitation thesis (required of aspirants to a professorship in Germany), and on July 24 advanced to the rank of Privatdozent. On April 1, 1925, Artin was promoted to Associate Professor (außerordentlicher Professor). In this year also, Artin applied for and was granted German citizenship. He was promoted to full Professor (ordentlicher Professor) on October 15, 1926. 
Early in the summer of 1925, Artin attended the Congress of the Wandervogel youth movement at Wilhelmshausen near Kassel with the intention of gathering a congenial group to undertake a trek through Iceland later that summer. Iceland (before the transforming presence of American and British forces stationed there during World War II) was still a primitive country in 1925, with a thinly scattered population and little transportation infrastructure. Artin succeeded in finding six young men to join him in this adventure. In the second half of August 1925, the group set out by steamer from Hamburg, first to Norway, where they boarded a second steamer that took them to Iceland, stopping at several of the small east fjord ports before arriving at their destination, Húsavík in the north of the island. Here the Wandervogel group disembarked, their initial goal, trekking down the Laxá River to Lake Mývatn. They made a circuit of the large, irregular lake, staying in farm houses, barns, and occasionally a tent as they went. When they slept in barns, it was often on piles of wet straw or hay. On those lucky occasions when they slept in beds, it could be nearly as damp on account of the rain trickling through the sod roofs. The tent leaked as well. Artin kept a meticulous journal of this trip, making daily entries in a neat, minuscule hand. He and several of the young men had brought cameras, so that the trek is documented also by nearly 200 photographs. Artin's journal attests to his overarching interest in the geology of this mid-Atlantic island, situated over the boundary of two tectonic plates whose shifting relation makes it geologically hyperactive. In keeping with the Wandervogel ethos, Artin and his companions carried music with them wherever they visited. The young men had packed guitars and violins, and Artin played the harmoniums common in the isolated farmsteads where they found lodging. 
The group regularly entertained their Icelandic hosts, not in full exchange for board and lodging, to be sure, but for goodwill certainly, and sometimes for a little extra on their plates, or a modestly discounted tariff. From Lake Mývatn, Artin and his companions headed west towards Akureyri, passing the large waterfall Goðafoss on the way. From Akureyri, they trekked west down the Öxnadalur (Ox Valley) intending to rent pack horses and cross the high and barren interior by foot to Reykjavík. By the time they reached the lower end of Skagafjörður, however, they were persuaded by a local farmer from whom they had hoped to rent the horses that a cross-country trek was by then impracticable; with the approach of winter, highland routes were already snow-bound and impassable. Instead of turning south, then, they turned north to Siglufjörður, where they boarded another steamer that took them around the western peninsula and down the coast to Reykjavík. From Reykjavík, they returned via Norway to Hamburg. By Artin's calculation the distance they had covered on foot through Iceland totaled 450 kilometers. Early in 1926, the University of Münster offered Artin a professorial position; however, Hamburg matched the offer financially, and (as noted above) promoted him to full professor, making him (along with his young colleague Helmut Hasse) one of the two youngest professors of mathematics in Germany. It was in this period that he acquired his lifelong nickname, "Ma," short for mathematics, which he came to prefer to his given name, and which virtually everyone who knew him well used. Although the nickname might seem to imply a narrow intellectual focus, quite the reverse was true of Artin. Even his teaching at the University of Hamburg went beyond the strict boundaries of mathematics to include mechanics and relativity theory. 
He kept up on a serious level with advances in astronomy, chemistry and biology (he owned and used a fine microscope), and the circle of his friends in Hamburg attests to the catholicity of his interests. It included the painter Heinrich Stegemann, and the author and organ-builder Hans Henny Jahnn. Stegemann was a particularly close friend, and made portraits of Artin, his wife Natascha, and their two Hamburg-born children. Music continued to play a central role in his life; he acquired a Neupert double manual harpsichord, and a clavichord made by the Hamburg builder Walther Ebeloe, as well as a silver flute made in Hamburg by G. Urban. Chamber music gatherings became a regular event at the Artin apartment as they had been at the Courants in Göttingen. On August 15, 1929, Artin married Natalia Naumovna Jasny (Natascha), a young Russian émigré who had been a student in several of his classes. One of their shared interests was photography, and when Artin bought a Leica for their joint use (a Leica A, the first commercial model of this legendary camera), Natascha began chronicling the life of the family, as well as the city of Hamburg. For the next decade, she made a series of artful and expressive portraits of Artin that remain by far the best images of him taken at any age. Artin, in turn, took many fine and evocative portraits of Natascha. Lacking access to a professional darkroom, their films and prints had to be developed in a makeshift darkroom set up each time (and then dismantled again) in the small bathroom of whatever apartment they were occupying. The makeshift darkroom notwithstanding, the high artistic level of the resulting photographic prints is attested to by the exhibit of Natascha's photographs mounted in 2001 by the Museum für Kunst und Gewerbe Hamburg, and its accompanying catalogue, "Hamburg—Wie Ich Es Sah." 
In 1930, Artin was offered a professorship at ETH (Eidgenössische Technische Hochschule) in Zürich, to replace Hermann Weyl, who had moved to Göttingen. He chose to remain at Hamburg, however. Two years later, in 1932, for contributions leading to the advancement of mathematics, Artin was honored—jointly with Emmy Noether—with the Ackermann–Teubner Memorial Award, which carried a grant of 500 marks. === Nazi period === In January 1933, Natascha gave birth to their first child, Karin. A year and a half later, in the summer of 1934, son Michael was born. The political climate at Hamburg was not so poisonous as that at Göttingen, where by 1935 the mathematics department had been purged of Jewish and dissident professors. Still, Artin's situation became increasingly precarious, not only because Natascha was of Jewish descent, but also because Artin made no secret of his distaste for the Hitler regime (he evidently signed the 1933 Vow of allegiance of the Professors of the German Universities and High-Schools to Adolf Hitler and the National Socialistic State, though he said his name had been added without his knowledge). At one point, Wilhelm Blaschke, by then a Nazi Party member, but nonetheless solicitous of the Artins’ well-being, warned Artin discreetly to close his classroom door so his frankly anti-Nazi comments could not be heard by passersby in the hallway. Natascha recalled going down to the newsstand on the corner one day and being warned in hushed tones by the man from whom she and Artin bought their paper that a man had daily been watching their apartment from across the street. Once tipped off, she and Artin became very aware of the watcher (Natascha liked to refer to him as their "spy"), and even rather enjoyed the idea of his being forced to follow them on the long walks they loved taking in the afternoons to a café far out in the countryside. 
Toying with their watcher on a fine autumn afternoon was one thing, but the atmosphere was in fact growing inexorably serious. Natascha's Jewish father and her sister, seeing the handwriting on the wall, had already left for the U.S. in the summer of 1933. Of Jewish descent, Natascha's status was, if not ultimately quite hopeless, certainly not good. Hasse, like Blaschke a nationalistic supporter of the regime, had applied for Party membership, but was nonetheless no anti-Semite. Besides he was a long-time friend and colleague of Artin's. He suggested that the two Artin children—in Nazi terminology, "Mischlinge zweiten Grades"—might, if a few strategic strings could be pulled, be officially "aryanized." Hasse offered to exert his influence with the Ministry of Education (Kultur- und Schulbehörde, Hochschulwesen), and Artin—not daring to leave any stone unturned, especially with respect to the safety of his children—went along with this effort. He asked his father-in-law, by then resident in Washington D.C., to draft and have notarized an affidavit attesting to the Christian lineage of his late wife, Natascha's mother. Artin submitted this affidavit to the Ministry of Education, but to no avail. By this time, to be precise, on July 15, 1937, because of Natascha's status as "Mischling ersten Grades," Artin had lost his post at the university—technically, compelled into early retirement—on the grounds of paragraph 6 of the Act to Restore the Professional Civil Service (Gesetz zur Wiederherstellung des Berufsbeamtentums) of April 7, 1933. Ironically, he had applied only some months earlier, on February 8, 1937, for a leave of absence from the university in order to accept a position offered him at Stanford. On March 15, 1937, the response had come back denying his application for leave on the grounds that his services to the university were indispensable ("Da die Tätigkeit des Professors Dr. Artin an der Universität Hamburg nicht entbehrt werden kann. . ."). 
By July, when he was summarily "retired," ("in Ruhestand versetzt") the position at Stanford University had been filled. However, through the efforts of Richard Courant (by then at New York University), and Solomon Lefschetz at Princeton University, a position was found for him at the University of Notre Dame in South Bend, Indiana. === Emigration to the U.S. === The family must have worked feverishly to prepare for emigration to the United States, for this entailed among other things packing their entire household for shipment. Since German law forbade emigrants taking more than a token sum of money out of the country, the Artins sank all the funds at their disposal into shipping their entire household, from beds, tables, chairs and double-manual harpsichord down to the last kitchen knife, cucumber slicer, and potato masher to their new home. This is why each of their residences in the United States bore such a striking resemblance to the rooms photographed so beautifully by Natascha in their Hamburg apartment. On the morning they were to board the Hamburg-Amerika line ship in Bremerhaven, October 21, 1937, daughter Karin woke with a high temperature. Terrified that should this opportunity be missed, the window of escape from Nazi Germany might close forever, Artin and Natascha chose to risk somehow getting Karin past emigration and customs officials without their noticing her condition. They managed to conceal Karin's feverish state, and without incident boarded the ship. When they landed a week later at Hoboken, New Jersey, Richard Courant and Natascha's father, the Russian agronomist Naum Jasny (then working for the U.S. Department of Agriculture) were on the dock to welcome the family to the United States. === Bloomington years === It was early November 1937 by the time they arrived in South Bend, where Artin joined the faculty at Notre Dame, and taught for the rest of that academic year. 
He was offered a permanent position the following year 170 miles to the south at Indiana University, in Bloomington. Shortly after the family resettled there, a second son, Thomas, was born on November 12, 1938. After moving to Bloomington, Artin quickly acquired a piano, and soon after that a Hammond Organ, a recently invented electronic instrument that simulated the sound of a pipe organ. He wanted this instrument in order primarily to play the works of J. S. Bach, and because the pedal set that came with the production model had a range of only two octaves (not quite wide enough for all the Bach pieces), he set about extending its range. Music was a constant presence in the Artin household. Karin played the cello, and then the piano as well, and Michael played the violin. As in Hamburg, the Artin living room was regularly the venue for amateur chamber music performances. The circle of the Artins’ University friends reflected Artin's wide cultural and intellectual interests. Notable among them were Alfred Kinsey and his wife of the Psychology Department, as well as prominent members of the Fine Arts, Art History, Anthropology, German Literature, and Music Departments. For several summer semesters, Artin accepted teaching positions at other universities, viz., Stanford in 1939 and 1940, The University of Michigan at Ann Arbor in 1941 and 1951, and The University of Colorado, in Boulder, in 1953. On each of these occasions, the family accompanied him. Artin insisted that only German be spoken in the house. Even Tom, born in the U.S., spoke German as his first language, acquiring English only from his siblings and his playmates in the neighborhood; for the first four or five years of his life, he spoke English with a pronounced German accent. Consistent with his program of maintaining the family's German cultural heritage, Artin gave high priority to regularly reading German literature aloud to the children. 
The text was frequently from Goethe's autobiographical Dichtung und Wahrheit, or his poems, "Erlkönig," for instance. Occasionally, he would read from an English text. Favorites were Mark Twain's Tom Sawyer, Charles Dickens’s A Christmas Carol, and Oscar Wilde’s "The Canterville Ghost". For the Artin children, these readings replaced radio entertainment, which was strictly banned from the house. There was a radio, but (with the notable exception of Sunday morning broadcasts by E. Power Biggs from the organ at the Busch-Reisinger Museum in Cambridge, to which Artin and Natascha listened still lounging in bed) it was switched on only to hear news of the war. Similarly, the Artin household would never in years to come harbor a television set. Once the war had ended, the radio was retired to the rear of a dark closet. As German citizens, Artin and Natascha were technically classified as enemy aliens for the duration of the war. On April 12, 1945, with the end of the war in Europe only weeks away, they applied for naturalization as American citizens. American citizenship was granted them on February 7, 1946. On the orders of a Hamburg doctor whom he had consulted about a chronic cough, Artin had given up smoking years before. He had vowed not to smoke so long as Adolf Hitler remained in power. On May 8, 1945, at the news of Germany's surrender and the fall of the Third Reich, Natascha made the mistake of reminding him of this vow, and in lieu of a champagne toast, he indulged in what was intended to be the smoking of a single, celebratory cigarette. Unfortunately, the single cigarette led to a second, and another after that. Artin returned to heavy smoking for the rest of his life. === Princeton years === If Göttingen had been the "Mecca" of mathematics in the 1920s and early 1930s, Princeton, following the decimation of German mathematics under the Nazis, had become the center of the mathematical world in the 1940s. 
In April 1946, Artin was appointed Professor at Princeton, at a yearly salary of $8,000. The family moved there in the fall of 1946. Notable among his graduate students at Princeton are Serge Lang, John Tate, Harold N. Shapiro, and O. Timothy O'Meara. Emil chose also to teach the honors section of Freshman calculus each year. He was renowned for the elegance of his teaching. Frei and Roquette write that Artin's "main medium of communication was teaching and conversation: in groups, seminars and in smaller circles. We have many statements of people near to him describing his unpretentious way of communicating with everybody, demanding quick grasp of the essentials but never tired of explaining the necessary. He was open to all kinds of suggestions, and distributed joyfully what he knew. He liked to teach, also to young students, and his excellent lectures, always well prepared but without written notes, were hailed for their clarity and beauty." Whenever he was asked whether mathematics was a science, Artin would reply unhesitatingly, "No. An art." His explanation was that: "[Mathematicians] all believe that mathematics is an art. The author of a book, the lecturer in a classroom tries to convey the structural beauty of mathematics to his readers, to his listeners. In this attempt, he must always fail. Mathematics is logical to be sure, each conclusion is drawn from previously derived statements. Yet the whole of it, the real piece of art, is not linear; worse than that, its perception should be instantaneous. We have all experienced on some rare occasion the feeling of elation in realizing that we have enabled our listeners to see at a glance the whole architecture and all its ramifications." During the Princeton years, Artin built a 6-inch (15 cm) reflecting telescope to plans he found in the magazine Sky and Telescope, which he subscribed to. 
He spent weeks in the basement attempting to grind the mirror to specifications, without success, and his continued failure to get it right led to increasing frustration. Then, in California to give a talk, he made a side trip to the Mt. Wilson Observatory, where he discussed his project with the astronomers. Whether it was their technical advice, or Natascha's intuitive suggestion that it might be too cold in the basement, and that he should try the procedure upstairs in the warmth of his study (which he did), he completed the grinding of the mirror in a matter of days. With this telescope, he surveyed the night skies over Princeton. In September 1955, Artin accepted an invitation to visit Japan. From his letters, it is clear he was treated like royalty by the Japanese mathematical community, and was charmed by the country. He was interested in learning about the diverse threads of Buddhism, and visiting its holy sites. In a letter home he describes his visit to the temples at Nara. "Then we were driven to a place nearby, Horiuji where a very beautiful Buddhist temple is. We were received by the abbot, and a priest translated into English. We obtained the first sensible explanation about modern Buddhism. The difficulty of obtaining such an explanation is enormous. To begin with most Japanese do not know and do not understand our questions. All this is made more complicated by the fact that there are numerous sects and each one has another theory. Since you get your information only piece wise, you cannot put it together. This results in an absurd picture. I am talking of the present day, not of its original form." His letter goes on to outline at length the general eschatological framework of Buddhist belief. Then he adds, "By the way, a problem given by the Zens for meditation is the following: If you clap your hands, does the sound come from the left hand or from the right?" 
== Return to Hamburg and personal life == The following year, Artin took a leave of absence to return to Germany for the first time since emigration, nearly twenty years earlier. He spent the fall semester at Göttingen, and the next at Hamburg. For the Christmas holidays, he travelled to his birthplace, Vienna, to visit his mother, Vienna being a city he had not seen in decades. In a letter home he described the experience of his return in a single, oddly laconic sentence: "It is kind of amusing to walk through Vienna again." In 1957, an honorary doctorate was conferred on Artin by the University of Freiburg. That fall, he returned to Princeton for what would be his final academic year at that institution. He was elected a Fellow of the American Academy of Arts and Sciences in 1957. Artin's marriage to Natascha had by this time seriously frayed. Though nominally still husband and wife, resident in the same house, they were for all intents and purposes living separate lives. Artin was offered a professorship at Hamburg, and at the conclusion of Princeton's spring semester, 1958, he moved permanently to Germany. His decision to leave Princeton University and the United States was complicated, based on multiple factors, prominent among them Princeton's (then operative) mandatory retirement age of 65. Artin had no wish to retire from teaching and direct involvement with students. Hamburg's offer was open-ended. Artin and Natascha were divorced in 1959. In Hamburg, Artin had taken an apartment, but soon gave it over to his mother whom he had brought from Vienna to live near him in Hamburg. He in turn moved into the apartment of the mathematician Hel Braun in the same neighborhood; though they never married, their relationship was equivalent to marriage. On January 4, 1961, he was granted German citizenship. In June 1962, on the occasion of the 300th anniversary of the death of Blaise Pascal, the University of Clermont-Ferrand conferred an honorary doctorate on him. 
On December 20 of the same year, Artin died at home in Hamburg, aged 64, of a heart attack. The University of Hamburg honored his memory on April 26, 2005, by naming one of its newly renovated lecture halls The Emil Artin Lecture Hall. == Influence and work == Artin was one of the leading algebraists of the century, with an influence larger than might be guessed from the one volume of his Collected Papers edited by Serge Lang and John Tate. He worked in algebraic number theory, contributing largely to class field theory and a new construction of L-functions. He also contributed to the pure theories of rings, groups and fields. The influential treatment of abstract algebra by van der Waerden is said to derive in part from Artin's ideas, as well as those of Emmy Noether. Artin solved Hilbert's seventeenth problem in 1927. He also developed the theory of braids as a branch of algebraic topology. In 1955 Artin was teaching foundations of geometry at New York University. He used his notes to publish Geometric Algebra in 1957, where he extended the material to include symplectic geometry. Artin was also an important expositor of Galois theory, and of the group cohomology approach to class field theory (with John Tate), to mention two theories where his formulations became standard. == Conjectures == He left two conjectures, both known as Artin's conjecture. The first concerns Artin L-functions for a linear representation of a Galois group; and the second the frequency with which a given integer a is a primitive root modulo primes p, when a is fixed and p varies. These are unproven; in 1967, Hooley published a conditional proof for the second conjecture, assuming certain cases of the generalized Riemann hypothesis. == Supervision of research == Artin advised over thirty doctoral students, including Bernard Dwork, Serge Lang, K. G. Ramanathan, John Tate, Harold N. Shapiro, Hans Zassenhaus and Max Zorn. 
A more complete list of his students can be found at the Mathematics Genealogy Project website (see "External links," below). == Family == In 1932 he married Natascha Jasny, born in Russia to mixed parentage (her mother was Christian, her father, Jewish). Artin was not himself Jewish, but, on account of his wife's racial status in Nazi Germany, was dismissed from his university position in 1937. They had three children, one of whom is Michael Artin, an American algebraic geometer and professor emeritus at the Massachusetts Institute of Technology. His daughter, Karin Artin, was the first wife of John Tate. == Selected bibliography == Artin, Emil (1964) [1931], The gamma function., Athena Series: Selected Topics in Mathematics, New York-Toronto-London: Holt, Rinehart and Winston, MR 0165148 Reprinted in (Artin 2007) Artin, Emil (1947), "Theory of braids", Ann. of Math., 2, 48 (1): 101–126, doi:10.2307/1969218, ISSN 0003-486X, JSTOR 1969218, MR 0019087 Artin, Emil (1998) [1944], Galois Theory, Dover Publications, Inc., ISBN 0-486-62342-4 Reprinted in (Artin 2007) Artin, Emil; Nesbitt, Cecil J.; Thrall, Robert M. (1944), Rings with Minimum Condition, University of Michigan Publications in Mathematics, vol. 1, Ann Arbor, Mich.: University of Michigan Press, MR 0010543 Artin, Emil (1955), Elements of algebraic geometry, Courant Institute of Mathematical Sciences, New York University Artin, Emil (1958), A Freshman Honors Course in Calculus and Analytic Geometry, University of Buffalo, ISBN 0-923891-52-8 {{citation}}: ISBN / Date incompatibility (help) Artin, Emil (1959), Theory of algebraic numbers, Göttingen: Mathematisches Institut, MR 0132037 Reprinted in (Artin 2007) Artin, Emil (1988) [1957], Geometric Algebra, Wiley Classics Library, New York: John Wiley & Sons Inc., pp. x+214, doi:10.1002/9781118164518, ISBN 0-471-60839-4, MR 1009557 Artin, Emil (1982) [1965], Lang, Serge; Tate, John T. 
(eds.), Collected papers, New York-Berlin: Springer-Verlag, ISBN 0-387-90686-X, MR 0671416 Artin, Emil (2006) [1967], Algebraic numbers and algebraic functions., Providence, RI: AMS Chelsea Publishing, doi:10.1090/chel/358, ISBN 0-8218-4075-4, MR 2218376 Artin, Emil. (1898–1962) Beiträge zu Leben, Werk und Persönlichkeit, eds., Karin Reich and Alexander Kreuzer (Dr. Erwin Rauner Verlag, Augsburg, 2007). Artin, Emil; Tate, John (2009) [1967], Class field theory, AMS Chelsea Publishing, Providence, RI, pp. viii+194, ISBN 978-0-8218-4426-7, MR 2467155 Artin, Emil (2007), Rosen, Michael (ed.), Exposition by Emil Artin: a selection., History of Mathematics, vol. 30, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4172-3, MR 2288274 Reprints Artin's books on the gamma function, Galois theory, the theory of algebraic numbers, and several of his papers. == See also == List of things named after Emil Artin List of second-generation Mathematicians == References == == Further reading == Schoeneberg, Bruno (1970). "Artin, Emil". Dictionary of Scientific Biography. Vol. 1. New York: Charles Scribner's Sons. pp. 306–308. ISBN 0-684-10114-9. Zassenhaus, Hans (Jan 1964). "Emil Artin, His Life and His Work". Notre Dame Journal of Formal Logic. 5 (1): 1–9. doi:10.1305/ndjfl/1093957731. == External links == O'Connor, John J.; Robertson, Edmund F., "Emil Artin", MacTutor History of Mathematics Archive, University of St Andrews Emil Artin at the Mathematics Genealogy Project "Fine Hall in its golden age: Remembrances of Princeton in the early fifties", by Gian-Carlo Rota. Contains a section on Artin at Princeton. Author profile in the database zbMATH
|
Wikipedia:Emil J. Straube#0
|
Emil Josef Straube is a Swiss and American mathematician. == Education and career == He received his diploma in mathematics from ETH Zurich in 1977 and his doctorate in mathematics there in 1983. For the academic year 1983–1984 Straube was a visiting research scholar at the University of North Carolina at Chapel Hill. He was a visiting assistant professor from 1984 to 1986 at Indiana University Bloomington and from 1986 to 1987 at the University of Pittsburgh. Since 1996 he has been a full professor at Texas A&M University, where he was an assistant professor from 1987 to 1991 and an associate professor from 1991 to 1996; since 2011 he has been the head of the mathematics department there. He has held visiting research positions in Switzerland, Germany, the US, and Austria. In 1995 he was a co-winner, with Harold P. Boas, of the Stefan Bergman Prize of the American Mathematical Society. In 2006 Straube was an invited speaker at the International Congress of Mathematicians in Madrid. In 2012 he was elected a fellow of the American Mathematical Society. == Selected publications == === Articles === Straube, Emil J. (1984). "Harmonic and analytic functions admitting a distribution boundary value". Annali della Scuola Normale Superiore di Pisa-Classe di Scienze. 11 (4): 559–591. with H. P. Boas: Boas, Harold P.; Straube, Emil J. (1988). "Integral inequalities of Hardy and Poincaré type". Proceedings of the American Mathematical Society. 103 (1): 172–176. doi:10.1090/S0002-9939-1988-0938664-0. with H. P. Boas: "Sobolev estimates for the ∂̄-Neumann operator on domains in C^n admitting a defining function that is plurisubharmonic on the boundary". Mathematische Zeitschrift. 206 (1): 81–88. doi:10.1007/BF02571327. S2CID 123468230. with H. P. Boas: Boas, Harold P.; Straube, Emil J. (1991). "Sobolev estimates for the complex Green operator on a class of weakly pseudoconvex boundaries". 
Communications in Partial Differential Equations. 16 (10): 1573–1582. doi:10.1080/03605309108820813. "Good Stein neighborhood bases and regularity of the ∂̄-Neumann problem". Illinois Journal of Mathematics. 45 (3): 865–871. 2001. doi:10.1215/ijm/1258138156. with Siqi Fu: Fu, Siqi; Straube, Emil J. (2002). "Semi-classical analysis of Schrödinger operators and compactness in the ∂̄-Neumann problem". Journal of Mathematical Analysis and Applications. 271 (1): 267–282. arXiv:math/0201149. doi:10.1016/S0022-247X(02)00086-0. with Marcel K. Sucheston: "Levi foliations in pseudoconvex boundaries and vector fields that commute approximately with ∂̄". Trans. Amer. Math. Soc. 355: 143–154. 2003. doi:10.1090/S0002-9947-02-03133-1. "A sufficient condition for global regularity of the ∂̄-Neumann operator". Advances in Mathematics. 217 (3): 1072–1095. 2008. doi:10.1016/j.aim.2007.08.003. === Books === Lectures on the L²-Sobolev theory of the ∂̄-Neumann problem. Lectures in Mathematics and Physics, volume 7. European Mathematical Society. 2010. ISBN 9783037190760. == References ==
|
Wikipedia:Emil Spjøtvoll#0
|
Emil Oskar Spjøtvoll (21 July 1940 – 4 March 2002) was a Norwegian mathematician and statistician. == Early life == Spjøtvoll was born in Hemne Municipality. He finished his secondary education in 1959 at Trondheim Cathedral School, took the cand.mag. degree at the University of Oslo in 1962 and then the cand.real. degree in 1964. Spjøtvoll lectured at the university from 1965 to 1968, and in 1968 he took the dr.philos. degree with the thesis A Mixed Model in the Analysis of Variance. Optimal Properties. == Career == He was a guest scholar at the University of California, Berkeley and the University of Wisconsin, Madison from 1968 to 1970, then docent at the University of Oslo. From 1973 to 1983 he was a professor at the Norwegian College of Agriculture. Spjøtvoll had guest scholarships in Paris (1980–1981) and Zürich (1982–1983). In 1983 Spjøtvoll was hired at the Norwegian Institute of Technology. He was prorector from 1990 to 1993 and rector from 1993 to 1995. Spjøtvoll was then the first rector at the Norwegian University of Science and Technology, a post he held from 1996 until 31 December 2001. He also worked for Statistics Norway and SINTEF. Spjøtvoll was originally an opponent of the merger which led to the Norwegian University of Science and Technology, but then turned around and became a defender of the new university. Spjøtvoll was a fellow of the Norwegian Academy of Technological Sciences from 1984, the Royal Norwegian Society of Sciences and Letters from 1985, and the Norwegian Academy of Science and Letters from 1987. In 1999 he was decorated as a Commander of the Order of St. Olav. == Personal life == He was married twice. He died from cancer in March 2002. == References ==
|
Wikipedia:Emilio Baiada#0
|
Emilio Baiada (January 12, 1914 in Tunis – May 14, 1984 in Modena) (also known as Emilio Bajada) was an Italian mathematician. == Education and career == He studied at the Scuola Normale Superiore in Pisa, where he graduated with highest honors in June 1937 under Leonida Tonelli, with whom he worked as an assistant from 1938 to 1941, when he left for the war. In 1945 he began to teach analysis, theory of functions, calculus and rational mechanics at the Scuola Normale. In 1948 he obtained a degree in Analysis; his Ph.D. thesis was written under the direction of Tonelli and Marston Morse. In 1949 he went on leave from the University of Pisa and moved first to the University of Cincinnati, where he worked with scientists like Otto Szász and Charles Napoleon Moore, and then to Princeton University, where he worked with Morse. In 1952 he obtained the chair of analysis of the University of Palermo, where he taught until 1961 before transferring to the University of Modena, where he re-launched the Institute of Mathematics and developed its Library and Mathematical Seminar. == Work == === Institutional work === Baiada was one of the leading forces behind the reprise of mathematical studies in Modena in the postwar period. === Research activity === He published more than 60 papers on differential equations, Fourier series and the series expansion of orthonormal functions, topology of varieties, real analysis, calculus of variations and the theory of functions. === Teaching activity === Vinti (2007) gives a complete list of Emilio Baiada's doctoral students. == Honors == Baiada won the Michel prize for the best thesis in Pisa, and the 1940 Merlani prize of the Accademia delle Scienze dell'Istituto di Bologna for "contributions on subjects of calculus of variations". During his stay in Palermo, from 1952 to 1961, he was elected corresponding member of the Accademia Nazionale di Scienze, Lettere e Arti di Palermo. 
In 1967 he was elected corresponding member of the Accademia di Scienze, Lettere e Arti di Modena. On June 9, 1976, he was awarded the Golden medal "Benemeriti della Scuola, della Cultura, dell'Arte" by the President of the Italian Republic. == Selected publications == Baiada, Emilio (1939), "Osservazioni sulla misurabilita secondo Caratheodory." [Observations on measurability according to Caratheodory], Annali della Scuola Normale Superiore, Serie II (in Italian), 8 (1): 69–74, JFM 65.0199.02, MR 1556817, Zbl 0020.10803. Baiada, Emilio (1951), "L'area delle superficie armoniche quale funzione delle rappresentazioni del contorno" [The area of harmonic surfaces as a function of their contour representations] (PDF), Rivista di Matematica della Università di Parma, (1) (in Italian), 2: 315–330, MR 0047125, Zbl 0044.28102. Morse, Marston; Baiada, Emilio (1953), "Homotopy and homology related to the Schoenflies problem", Annals of Mathematics, 2, 58: 142–165, doi:10.2307/1969825, MR 0056922, Zbl 0052.19902. == Notes == == References ==
|
Wikipedia:Emily E. Witt#0
|
Emily Elspeth Witt is an American mathematician, an associate professor and Keeler Intra-University Professor of mathematics at the University of Kansas. Her research involves commutative algebra, representation theory, and singularity theory. == Education and career == Witt is a 2005 graduate of the University of Chicago, where she majored in mathematics with a specialization in computer science. She completed her Ph.D. in 2011 at the University of Michigan. Her dissertation, Local cohomology and group actions, was supervised by Melvin Hochster. Witt worked as a Dunham Jackson Assistant Professor at the University of Minnesota from 2011 to 2014, and as a research assistant professor at the University of Utah from 2014 to 2015. In 2015, she obtained a tenure-track assistant professorship at the University of Kansas. She was promoted to associate professor in 2020, and named Keeler Intra-University Professor for 2021–2022. == Recognition == Witt is the 2022–2023 winner of the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics. == References == == External links == Home page
|
Wikipedia:Emma Castelnuovo#0
|
Emma Castelnuovo (12 December 1913 – 13 April 2014) was an Italian mathematician and teacher of Jewish descent. In 2013, the year of her 100th birthday, the International Commission on Mathematical Instruction created an award named after Castelnuovo to recognize outstanding contributions to mathematics education. == Education and career == Emma Castelnuovo was born in Rome on 12 December 1913, the fifth child of Elbina and Guido Castelnuovo; her father and her mother's brother Federigo Enriques were both professors of mathematics. Castelnuovo graduated from the University of Rome in 1936 with a thesis on algebraic geometry. After this she worked as a librarian at the same university. She won a permanent position there in 1938, but later that year, Italy passed new laws preventing Jews from holding state positions, preventing her from taking it. From 1939 to 1943 she taught in Hebrew schools attended by Jewish students who had been kicked out of public schools as a result of the same racial legislation that denied Emma an official university position. In 1943 the German occupation of Italy forced the entire Castelnuovo family to go underground, seeking shelter under a false name. They sought refuge with friends and then at hospitals, religious institutions, and small pensions, and continued all the time to travel frequently for their own safety. With the end of World War II in 1945, she became a secondary school teacher and stayed there until her retirement in 1979. Even as she worked as a teacher, she conducted continuous research on the subject of mathematics education and published many papers. In 1948 she published the first of many editions of the book Intuitive geometry, which demonstrated her personal approach to teaching mathematics. Her teaching texts have since been translated into several other languages and used in schools, especially in Spanish-speaking countries. 
In 1978 and 1980 she was sent by UNESCO to Niger to teach in a class that corresponded to the Italian eighth grade. She served as president of the "International Commission for the Improvement of Mathematics Teaching." She died in Rome on 13 April 2014 and was buried in the Verano cemetery with her father and mother. == Selected works == Castelnuovo authored dozens of publications. Intuitive geometry for lower secondary schools, Rome, Carabba, 1949; Florence, The New Italy, 1952; 1959. The numbers. Practical arithmetic, Florence, La Nuova Italia, 1962. Didactics of Mathematics Florence, La Nuova Italia, 1963. Documents of a mathematical exhibition. "From children to men", Turin, Boringhieri, 1972. Mathematics in reality, with Mario Barra, Turin, Boringhieri, 1976. Mathematics, Florence, New Italy, 1979. Pots, shadows, ants. Traveling with mathematics, Scandicci, La Nuova Italia, 1993. ISBN 88-221-1165-6 The math workshop. Reasoning with materials. The lessons of the greatest Italian researcher in mathematics education, Molfetta, La Meridiana, 2008. ISBN 978-88-6153-046-1 == References == == External sources == P. Odifreddi, The teacher who made people love geometry, in La Repubblica, 15 April 2014, p. 51
|
Wikipedia:Emma McCoy#0
|
Emma Joan McCoy is the Vice President and Pro-Vice Chancellor for Education and a Professor of Statistics at the London School of Economics and Political Science. She has acted as a mathematics subject expert for discussions on reform of the National Curriculum, and has been a member of the Royal Statistical Society council, and of the Royal Society Advisory Committee on Mathematics Education. == Education == McCoy completed a PhD at Imperial College London in 1995 and a Master of Science degree in Computational Statistics in 1991 at the University of Bath. McCoy's PhD focused on the analysis and synthesis of long-memory processes. In particular, she investigated the use of the discrete wavelet transform and multitaper spectral estimation. She completed her thesis, Some New Statistical Approaches to the Analysis of Long Memory Processes, under the supervision of Andrew Walden. == Research and career == McCoy is interested in time series analysis and causal inference, with a particular focus on transport. Prior to joining LSE in October 2022, she was the Vice-Provost (Education and Student Experience) at Imperial College London, where she was appointed Professor of Statistics in 2014. McCoy previously taught several undergraduate courses at Imperial, as well as being an advisor for the EPSRC funded Mathematics of Planet Earth doctoral training centre. She has given several public talks related to her research, and real world applications, like Inference Challenges in Transportation. In 2006 she delivered the London Mathematical Society popular lecture, From Magic Squares to Sudoku. She has been involved with the Royal Institution mathematics masterclasses since they started being held at Imperial College London. She is concerned about the future of mathematics education in the UK, and is a member of the Royal Society Advisory Committee of Mathematics Education. 
McCoy established a joint Mathematics with Education BSc at Imperial College, which was delivered jointly by Imperial College London and Canterbury Christ Church University. McCoy is a Fellow of the Institute of Mathematics and its Applications and the Royal Statistical Society. She has also been a member of the Royal Statistical Society's Council and the Academic Affairs Advisory group. In 2017 she was appointed Vice-Dean for Education for the Faculty of Natural Sciences at Imperial College London. McCoy was appointed as Pro-Director (Education) at LSE in 2022. McCoy was the first female professor of maths at Imperial College London. She was the mathematical advisor to the maths and computing section of the Suffrage Science scheme, which celebrates women in science for their scientific achievement and for their ability to inspire others. Suffrage Science was established in 2011 by the MRC Clinical Sciences Centre. In 2017 she received an award from the London Institute of Medical Sciences for establishing a Maths and Computing Group. == References ==
|
Wikipedia:Emmanuel Carvallo#0
|
Emmanuel Carvallo (1856–1945) was a French mathematician born in Narbonne. He is notable for showing in 1897 that bicycles could be self-stable, for opposing wave models of X-rays in 1900, and for claiming in 1912 that Einstein's Theory of Relativity had been proven false. == References ==
|
Wikipedia:Emmanuel David Tannenbaum#0
|
Emmanuel David Tannenbaum (June 28, 1978 – May 28, 2012) was an Israeli/American biophysicist and applied mathematician. He worked as a professor and researcher in the department of chemistry at the Ben-Gurion University of the Negev and the department of biology at the Georgia Institute of Technology, specializing in the fields of mathematical biology, systems biology, and quantum physics. Tannenbaum's initial work was in quantum chemistry as part of his Harvard University doctoral thesis where he developed a novel partial differential equation approach to the EBK quantization of nearly separable Hamiltonians in the quasi-integrable regime. Emmanuel Tannenbaum subsequently devoted his research to studying various problems in evolutionary dynamics using quasispecies models. His seminal work centered on the key question of the evolutionary advantages of sexual reproduction. Tannenbaum demonstrated a strong selective advantage for sexual reproduction with fewer and much less restrictive assumptions than previously considered. Closely related to this line of reasoning, was the original work by Tannenbaum and James Sherley on the immortal strand hypothesis. Tannenbaum also proposed a pioneering theory of why higher organisms need sleep. Towards the end of his life, he proposed a new approach to anti-stealth technology based on the theory of Bose–Einstein condensate. Emmanuel Tannenbaum received a number of honors, including the Robert Karplus Prize in Chemical Physics from Harvard University, the prestigious Alon Fellowship from the Israel Academy of Sciences and Humanities, and a National Institutes of Health research fellowship. Dr. Tannenbaum is the son of mathematician Allen Tannenbaum and chemist Rina Tannenbaum. His sister, Sarah Tannenbaum-Dvir, is an oncologist/hematologist. == References == == External links == Link to Emmanuel Tannenbaum's Homepage
|
Wikipedia:Emmanuel Ullmo#0
|
Emmanuel Ullmo (born 25 June 1965) is a French mathematician, specialised in arithmetic geometry. Since 2013 he has served as director of the Institut des Hautes Études scientifiques. == Education == Ullmo wrote his thesis under Lucien Szpiro at the University of Paris-Sud in 1993. == Career == Ullmo was appointed a professor at University of Paris-Sud in 2001. He also held temporary positions at IMPA for 18 months, then two years at Princeton University, and six months at Tsinghua University. In 2013, following the retirement of Jean-Pierre Bourguignon, he became the 5th director of the IHÉS. He was an editor of the journal Inventiones mathematicae between 2007 and 2014. == Research == The Bogomolov conjecture was proved by Ullmo and Shou-Wu Zhang using Arakelov theory in 1998. == Awards and honors == He was an invited speaker at the International Congress of Mathematicians at Beijing in 2002. Between 2003 and 2008 he was a junior fellow at the Institut de France. He received the Élie Cartan Prize of the French Academy of Sciences in 2006 for his work on the proof of the Bogomolov conjecture with Shou-Wu Zhang. In 2022, he became a chevalier (knight) of the Legion of Honour, an award that was bestowed upon him in September 2023 by Sylvie Retailleau. == References ==
|
Wikipedia:Emmy Noether bibliography#0
|
Emmy Noether was a German mathematician. This article lists the publications upon which her reputation is built (in part). == First epoch (1908–1919) == == Second epoch (1920–1926) == In the second epoch, Noether turned her attention to the theory of rings. Of her paper Moduln in nichtkommutativen Bereichen, insbesondere aus Differential- und Differenzenausdrücken, Hermann Weyl states, "It is here for the first time that the Emmy Noether appears whom we all know, and who changed the face of algebra by her work." Publications of this epoch include:
… Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 101.
28 (1926). Ableitung der Elementarteilertheorie aus der Gruppentheorie. Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 104.
29 (1925). Gruppencharaktere und Idealtheorie. Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 144. Group representations, modules and ideals; first of four papers showing the close connection between these three subjects. See also publications #32, #33, and #35.
30 (1926). Der Endlichkeitssatz der Invarianten endlicher linearer Gruppen der Charakteristik p. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Math.-phys. Klasse, 1926, 28–35. By applying ascending and descending chain conditions to finite extensions of a ring, Noether shows that the algebraic invariants of a finite group are finitely generated even in positive characteristic.
31 (1926). Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern. Mathematische Annalen, 96, 26–61. Ideals. Seminal paper in which Noether determined the minimal set of conditions required for a primary ideal to be representable as a power of prime ideals, as Richard Dedekind had done for algebraic numbers. Three conditions were required: an ascending chain condition, a dimension condition, and the condition that the ring be integrally closed. 
== Third epoch (1927–1935) == In the third epoch, Emmy Noether focused on non-commutative algebras, and unified much earlier work on the representation theory of groups. == References == == Bibliography == Brewer JW, Smith MK, eds. (1981). Emmy Noether: A Tribute to Her Life and Work. New York: Marcel Dekker. ISBN 0-8247-1550-0. Dick A (1970). Emmy Noether 1882–1935 (Beiheft Nr. 13 zur Zeitschrift Elemente der Mathematik ed.). Basel: Birkhäuser Verlag. pp. 40–42. Kimberling, Clark (1981), "Emmy Noether and Her Influence", in James W. Brewer; Martha K. Smith (eds.), Emmy Noether: A Tribute to Her Life and Work, New York: Marcel Dekker, Inc., pp. 3–61, ISBN 0-8247-1550-0. Noether, Emmy (1983), Jacobson, Nathan (ed.), Gesammelte Abhandlungen (Collected Works), Berlin, New York: Springer-Verlag, pp. 773–775, ISBN 978-3-540-11504-5, MR 0703862 == External links == List of Emmy Noether's publications by Dr. Cordula Tollmien List of Emmy Noether's publications in the eulogy by Bartel Leendert van der Waerden Partial listing of important works at the Contributions of 20th century Women to Physics at UCLA MacTutor biography of Emmy Noether
|
Wikipedia:En-ring#0
|
In mathematics, an E_n-algebra in a symmetric monoidal infinity category C consists of the following data: an object A(U) for any open subset U of R^n homeomorphic to an n-disk, and a multiplication map μ : A(U_1) ⊗ ⋯ ⊗ A(U_m) → A(V) for any disjoint open disks U_1, …, U_m contained in some open disk V, subject to the requirements that the multiplication maps are compatible with composition, and that μ is an equivalence if m = 1. An equivalent definition is that A is an algebra in C over the little n-disks operad. == Examples == An E_n-algebra in vector spaces over a field is a unital associative algebra if n = 1, and a unital commutative associative algebra if n ≥ 2. An E_n-algebra in categories is a monoidal category if n = 1, a braided monoidal category if n = 2, and a symmetric monoidal category if n ≥ 3. If Λ is a commutative ring, then X ↦ C_*(Ω^n X; Λ) defines an E_n-algebra in the infinity category of chain complexes of Λ-modules. == See also == Categorical ring Highly structured ring spectrum == References == http://www.math.harvard.edu/~lurie/282ynotes/LectureXXII-En.pdf http://www.math.harvard.edu/~lurie/282ynotes/LectureXXIII-Koszul.pdf == External links == "En-algebra", ncatlab.org
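The "compatibility with composition" requirement in the definition can be spelled out as an operadic composition law. The following is a sketch under assumed notation (the labeling μ^V_{U_1,…,U_m} for the multiplication landing in A(V) is not used in the article itself): multiplying in two stages through intermediate disks must agree with multiplying all at once.

```latex
% Sketch of "compatibility with composition" (operadic associativity).
% Assumed notation:
%   \mu^{V}_{U_1,\dots,U_m} \colon A(U_1)\otimes\cdots\otimes A(U_m)\to A(V).
% For disjoint open disks W_{j,1},\dots,W_{j,k_j}\subset U_j and disjoint
% open disks U_1,\dots,U_m\subset V, one requires:
\mu^{V}_{W_{1,1},\dots,W_{m,k_m}}
  \;=\;
\mu^{V}_{U_1,\dots,U_m}\circ
\left(
  \mu^{U_1}_{W_{1,1},\dots,W_{1,k_1}}
  \otimes\cdots\otimes
  \mu^{U_m}_{W_{m,1},\dots,W_{m,k_m}}
\right)
```

Taking m = 1, where μ is required to be an equivalence, this recovers the usual structure of an algebra over the little n-disks operad.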
|
Wikipedia:Encyclopedia of the Brethren of Purity#0
|
The Encyclopedia of the Brethren of Purity (Arabic: رسائل إخوان الصفا, Rasā'il Ikhwān al-ṣafā'), also variously known as the Epistles of the Brethren of Sincerity, Epistles of the Brethren of Purity and Epistles of the Brethren of Purity and Loyal Friends, is an Islamic encyclopedia in 52 treatises (rasā'il) written by the mysterious Brethren of Purity of Basra, Iraq, sometime in the second half of the 10th century CE (or possibly later, in the 11th century). It had a great influence on later intellectual leading lights of the Muslim world, such as Ibn Arabi, and was transmitted as far abroad within the Muslim world as al-Andalus. The identity and period of the authors of the Encyclopedia have not been conclusively established, though the work has been mostly linked with Isma'ilism. Idris Imad al-Din, a prominent 15th-century Isma'ili missionary in Yemen, credited the authorship of the encyclopedia to Muhammad al-Taqi, the 9th Isma'ili Imam, who lived in occultation in the era of the Abbasid Caliphate at the beginning of the Islamic Golden Age. Some suggest that besides Isma'ilism, the work also contains elements of Sufism, Mu'tazilism, Nusayrism and others. Some scholars present the work as Sunni Sufi. The subject of the work is vast and ranges from mathematics, music, astronomy, and natural sciences, to ethics, politics, religion, and magic—all compiled for one basic purpose: that learning is training for the soul and a preparation for its eventual life once freed from the body. Turn from the sleep of negligence and the slumber of ignorance, for the world is a house of delusion and tribulations. – Encyclopedia of the Brethren of Sincerity == Authorship == Authorship of the Encyclopedia is usually ascribed to the mysterious "Brethren of Purity", a group of unknown scholars placed in Basra, Iraq, sometime around the 10th century CE.
While it is generally accepted that it was the group who authored at least the 52 rasa'il, the authorship of the "Summary" (al-Risalat al-Jami'a) is uncertain; it has been ascribed to the later Majriti, but this has been disproved by Yves Marquet (see the Risalat al-Jami'a section). The style of the text is plain, but there are numerous ambiguities due to language and vocabulary, often of Persian origin. Some philosophers and historians, such as Tawhidi, Ibn al-Qifti and Shahrazuri, disclosed the names of those allegedly involved in the development of the work: Abu Sulayma Bisti, Muqaddasi, 'Ali ibn Harun, Zanjani, Muhammad ibn Ahmad Narhruji, 'Awfi. All these people are, according to Henry Corbin, Ismailis. Other scholars, such as Susanne Diwald and Abdul Latif Tibawi, have asserted a Sunni-Sufi nature of the work. Further perplexities abound; the use of pronouns for the authorial "sender" of the rasa'il is not consistent, with the writer occasionally slipping from third person to first person (for example, in Epistle 44, "The Doctrine of the Sincere Brethren"). This has led some to suggest that the rasa'il were not in fact written co-operatively by a group, or consolidated from notes of lectures and discussions, but were actually the work of a single person. Of course, if one accepts the longer time spans proposed for the composition of the Encyclopedia, or the simpler possibility that each risala was written by a separate person, sole authorship would be impossible.
Scholars have seen Ismaili and Sufi influences in the religious content, and Mu'tazilite acceptance of reasoning in the work. Others, however, hold the Brethren to be "free-thinkers" who transcended sectarian divisions and were not bound by the doctrines of any specific creed. Their unabashed eclecticism is fairly unusual in this period of Arabic thought, characterised by fierce theological disputes; they refused to condemn rival schools of thought or religions, instead insisting that they be examined fairly and open-mindedly for what truth they may contain: ...to shun no science, scorn no book, nor cling fanatically to any single creed. For [their] own creed encompasses all the others and comprehends all the sciences generally. This creed is the consideration of all existing things, both sensible and intelligible, from beginning to end, whether hidden or overt, manifest or obscure . . . in so far as they all derive from a single principle, a single cause, a single world, and a single Soul." - (from the Ikhwan al-Safa, or Encyclopedia of the Brethren of Purity; Rasa'il IV, p. 52) In total, they cover most of the areas an educated person was expected to understand in that era. The epistles (or "rasa'il") generally increase in abstractness, finally dealing with the Brethren's somewhat pantheistic philosophy, in which each soul is an emanation, a fragment of a universal soul with which it will reunite at death; in turn, the universal soul will reunite with Allah on Doomsday. The epistles are intended to transmit right knowledge, leading to harmony with the universe and happiness.
The division into four sections is no accident; the number four held great importance in Neoplatonic numerology, being the first square number and an even number. Reputedly, Pythagoras held that a man's life was divided into four sections, much like a year was divided into four seasons. The Brethren divided mathematics itself into four sections: arithmetic was Pythagoras and Nicomachus' domain; Ptolemy ruled over astronomy with his Almagest; geometry was associated with Euclid, naturally; and the fourth and last division was that of music. The fours did not cease there: the Brethren observed that four was crucial to a decimal system, as 1 + 2 + 3 + 4 = 10 {\displaystyle 1+2+3+4=10} ; numbers themselves were broken down into four orders of magnitude: the ones, tens, hundreds, and thousands; there were four winds from the four directions (north, south, east, west); medicine concerned itself with the four humours, and natural philosophers with the four elements of Empedocles. Another possibility, suggested by Netton, is that the veneration for four stems instead from the Brethren's great interest in the Corpus Hermeticum of Hermes Trismegistus (identified with the god Hermes, to whom the number four was sacred); that hermetic tradition's magical lore was the main subject of the 51st risalah. Netton mentions that there are suggestions that the 52nd risalah (on talismans and magic) is a later addition to the Encyclopedia, because of intertextual evidence: a number of the rasa'il claim that the total of rasa'il is 51. However, the 52nd risalah itself claims to be number 51 in one area, and number 52 in another, leading to the possibility that the Brethren's attraction to the number 51 (or 17 times 3; there were 17 rasa'il on natural sciences) is responsible for the confusion. Seyyed Hossein Nasr suggests that the origin of the preference for 17 stemmed from the alchemist Jābir ibn Hayyān's numerological symbolism.
==== Risalat al-Jami'a ==== Besides the fifty-odd epistles, there exists what claims to be an overarching summary of the work, which is not counted in the 52, called "The Summary" (al-Risalat al-Jami'a), which exists in two versions. It has been claimed to have been the work of Majriti (d. circa 1008), although Netton states Majriti could not have composed it, and that Yves Marquet concludes from a philological analysis of the vocabulary and style in his La Philosophie des Ihwan al-Safa (1975) that it had to have been composed at the same time as the main corpus. === Style === Like conventional Arabic Islamic works, the Epistles have no lack of time-worn honorifics and quotations from the Qur'an, but the Encyclopedia is also famous for some of the didactic fables it sprinkled throughout the text; a particular one, the "Island of Animals" or the "Debate of Animals" (embedded within the 22nd rasa'il, titled "On How The Animals and their Kinds are Formed"), is one of the most popular animal fables in Islam. The fable concerns how 70 men, nearly shipwrecked, discover an island ruled by animals and begin to settle on it. They oppressed and killed the animals, who, unused to such harsh treatment, complained to the King (or Shah) of Djinns. The King arranged a series of debates between the humans and various representatives of the animals, such as the nightingale, the bee, and the jackal. The animals nearly defeat the humans, but an Arabian ends the series by pointing out that there was one way in which humans were superior to animals and so worthy of making animals their servants: they were the only ones Allah had offered the chance of eternal life to. The King was convinced by this argument, and granted his judgement to them, but strongly cautioned them that the same Qur'an that supported them also promised them hellfire should they mistreat their animals.
== Philosophy == More metaphysical were the four ranks (or "spiritual principles"), which apparently were an elaboration of Plotinus' triad of Thought, Soul, and the One, known to the Brethren through The Theology of Aristotle (a version of Plotinus' Enneads in Arabic, modified with changes and paraphrases, and attributed to Aristotle); first, the Creator (al-Bārī) emanated down to Universal Intellect (al-'Aql al-Kullī), then to Universal Soul (al-Nafs), and through Prime Matter (al-Hayūlā al-Ūlā), which emanated still further down through (and creating) the mundane hierarchy. The mundane hierarchy consisted of Nature (al-Tabī'a), the Absolute Body (al-Jism al-Mutlaq), the Sphere (al-Falak), the Four Elements (al-Arkān), and the Beings of this world (al-Muwalladāt) in their three varieties of animals, minerals, and vegetables, for a total hierarchy of nine members. Furthermore, each member increased in subdivisions proportional to how far down in the hierarchy it was, for instance, Sphere, being number seven has the seven planets as its members. The Absolute Body is also a form in Prime Matter as we explained in the Chapter on Matter. Prime Matter is a spiritual form which emanated from the Universal Soul. The Universal Soul also is a spiritual form which emanated from the Universal Intellect which is the first thing the Creator Created." Not all Pythagorean doctrines were followed, however. The Brethren argued strenuously against transmigration of the soul. 
Since they refused to accept transmigration, then the Platonic idea that all learning is "remembrance" and that man can never attain to complete knowledge whilst shackled in his body must be false; the Brethren's stance was rather that a person could potentially learn everything worth knowing and avoid the snares and delusion of this sinful world, eventually attaining to Paradise, Allah, and salvation, but unless they studied wise men and wise books - like their encyclopedia, whose sole purpose was to entice men to learn its knowledge and possibly be saved - that possibility would never become an actuality. As Netton writes, "The magpie eclecticism with which they surveyed and utilized elements from the philosophies of Pythagoras, Plato, Aristotle and Plotinus, and religions such as Nestorian Christianity, Judaism and Hinduism, was not an early attempt at ecumenism or interfaith dialogue. Their accumulation of knowledge was ordered towards the sublime goal of salvation. To use their own image, they perceived their Brotherhood, to which they invited others, as a "Ship of Salvation" that would float free from the sea of matter; the Ikhwan, with their doctrines of mutual cooperation, asceticism, and righteous living, would reach the gates of Paradise in its care." Another area in which the Brethren differed was in their conceptions of nature, in which they rejected the emanation of Forms that characterized Platonic philosophy for a quasi-Aristotelian system of substances: Know, O brother, that the scholars have said that all things are of two types, substances and accidents, and that all substances are of one kind and self-existent, while accidents are of nine kinds, present in the substances, and they are attributes of them. But the Creator may not be described as either accident or substance, for He is their Creator and efficient cause. 
The first thing which the Creator produced and called into existence is a simple, spiritual, extremely perfect and excellent substance in which the form of all things is contained. This substance is called the Intellect. From this substance proceeds a second one which in hierarchy is below the first and is called the Universal Soul (al-nafs al-kullīyah). From the Universal Soul proceeds another substance which is below the Soul and which is called Original Matter. The latter is transformed into the Absolute Body, that is, into Secondary Matter which has length, width and depth." The 14th edition (EB-2:187a; 14th Ed., 1930) of the Encyclopædia Britannica described the mingling of Neoplatonism and Aristotelianism this way: "The materials of the work come chiefly from Aristotle, but they are conceived of in a Platonizing spirit, which places as the bond of all things a universal soul of the world with its partial or fragmentary souls." === Evolution === The text in the "Encyclopedia of the Brethren of Purity" describes biological diversity in a manner similar to the modern-day theory of evolution. The contexts of such passages are interpreted differently by scholars. The view of the Brethren as proponents of a pre-Darwinian theory of evolution has also been criticized by some scholars. In this document some modern-day scholars note that “chain of being described by the Ikhwan possess a temporal aspect which has led certain scholars to view that the authors of the Rasai’l believed in the modern theory of evolution”. According to the Rasa’il: “But individuals are in perpetual flow; they are neither definite nor preserved. The reason for the conservation of forms, genus and species in matter is fixity of their celestial cause because their efficient cause is the Universal Soul of the spheres instead of the change and continuous flux of individuals which is due to the variability of their cause”.
This statement supports the concept that species and individuals are not static, and that when they change it is due to a new purpose being given. There are similarities between the Ikhwan doctrine and the theory of evolution. Both hold that “the time of existence of terrestrial plants precedes that of animals, minerals precede plants, and organism adapt to their environment”, but the Ikhwan assert that everything exists for a purpose. Muhammad Hamidullah describes the ideas on evolution found in the Encyclopedia of the Brethren of Purity (The Epistles of Ikhwan al-Safa) as follows: "[These books] state that God first created matter and invested it with energy for development. Matter, therefore, adopted the form of vapour which assumed the shape of water in due time. The next stage of development was mineral life. Different kinds of stones developed in course of time. Their highest form being mirjan (coral). It is a stone which has in it branches like those of a tree. After mineral life evolves vegetation. The evolution of vegetation culminates with a tree which bears the qualities of an animal. This is the date-palm. It has male and female genders. It does not wither if all its branches are chopped but it dies when the head is cut off. The date-palm is therefore considered the highest among the trees and resembles the lowest among animals. Then is born the lowest of animals. It evolves into an ape. This is not the statement of Darwin. This is what Ibn Maskawayh states and this is precisely what is written in the Epistles of Ikhwan al-Safa. The Muslim thinkers state that ape then evolved into a lower kind of a barbarian man. He then became a superior human being. Man becomes a saint, a prophet. He evolves into a higher stage and becomes an angel. The one higher to angels is indeed none but God. Everything begins from Him and everything returns to Him."
English translations of the Encyclopedia of the Brethren of Purity were available from 1812, hence this work may have had an influence on Charles Darwin and his inception of Darwinism. However, Hamidullah's theory that Darwin was inspired by the Epistles of the Ikhwan al-Safa seems unlikely, as Charles Darwin came from an evolutionist family: his well-known physician grandfather, Erasmus Darwin, author of the evolutionary poem The Origin of Society, was one of the leading Enlightenment evolutionists. == Literature == The 48th epistle of the Encyclopedia of the Brethren of Purity features a fictional Arabic narrative. It is an anecdote of a "prince who strays from his palace during his wedding feast and, drunk, spends the night in a cemetery, confusing a corpse with his bride. The story is used as a gnostic parable of the soul's pre-existence and return from its terrestrial sojourn". == Editions and translations == Complete editions of the encyclopedia have been printed at least three times: Kitāb Ikhwān al-Ṣafā' (edited by Wilayat Husayn, Bombay 1888) Rasā'il Ikhwān al-Ṣafā' (edited by Khayr al-din al-Zarkali with introductions by Tāha Ḥusayn and Aḥmad Zakī Pasha, in 4 volumes, Cairo 1928) Rasā'il Ikhwān al-Ṣafā' (4 volumes, Beirut: Dār Ṣādir 1957) The Encyclopedia has been widely translated, appearing not merely in its original Arabic, but in German, English, Persian, Turkish, and Hindustani. Although portions of the Encyclopedia were translated into English as early as 1812, with the Rev. T. Thomason's prose English introduction to Shaikh Ahmad b. Muhammed Shurwan's Arabic edition of the "Debate of Animals" published in Calcutta, a complete translation of the Encyclopedia into English does not exist as of 2006, although Friedrich Dieterici (Professor of Arabic in Berlin) translated the first 40 of the epistles into German; presumably, the remainder have since been translated.
The "Island of Animals" has been translated several times, to differing degrees of completeness; the fifth risalah, on music, has been translated into English, as have the 43rd through the 47th epistles. As of 2021, the first complete Arabic critical edition and annotated English translation of the Rasa’il Ikhwan al-Safa’, with English commentaries, is being published by Oxford University Press in association with London's Institute of Ismaili Studies. The series' General Editor is Nader El-Bizri. This series began in 2008 with an introductory volume of studies edited by Nader El-Bizri, and continued with the publication of: Epistle 22: The Case of the Animals versus Man Before the King of the Jinn (eds. trans. L. Goodman & R. McGregor) Epistle 5: On Music (ed. trans. O. Wright, 2010) Epistles 10–15: On Logic (ed. trans. C. Baffioni, 2010) Epistle 52a: On Magic, Part I (eds. trans. G. de Callatay & B. Halflants, 2011) Epistles 1–2: Arithmetic and Geometry (ed. trans. N. El-Bizri, 2012) Epistles 15–21: Natural Sciences (ed. trans. C. Baffioni, 2013) Epistle 4: Geography (ed. trans. I. Sanchez and J. Montgomery, 2014) Epistle 3: On "Astronomia" (ed. trans. J. F. Ragep and T. Mimura, 2015) Epistles 32–36: Sciences of the Soul and Intellect, Part I (ed. trans. I. Poonawala, G. de Callatay, P. Walker, D. Simonowitz, 2015) Epistles 39–41: Sciences of the Soul and Intellect, Part III (2017) Epistles 43–45: On Companionship and Belief (2017) Epistles 6–8: On Composition and the Arts (Nader El-Bizri, 2018) Epistle 48: The Call to God (Abbas Hamdani and Abdallah Soufan, 2019) Epistles 49–51: On God and the World (Wilferd Madelung, 2019) Epistles 29–31: On Life, Death, and Languages (Eric Ormsby, 2021) Both the editors' approach to the project and the quality of its English translations have been criticized.
== See also == Magic squares Socrates == Notes == == References == Lane-Poole, Stanley (1966) [1883], Studies in a Mosque (1st ed.), Beirut: Khayat Book & Publishing Company S.A.L.; based on Dieterici's outline and translations. Nasr, Seyyed Hossein (1964), An Introduction to Islamic Cosmological Doctrines: Conceptions of nature and methods used for its study by the Ihwan Al-Safa, Al-Biruni, and Ibn Sina, Boston, Massachusetts: Belknap Press of Harvard University Press, LCCN 64-13430 Van Reijn, Eric (1995), The Epistles of the Sincere Brethren: an annotated translation of Epistles 43-47, vol. 1 (1st ed.), Minerva Press, ISBN 1-85863-418-0; a partial translation Netton, Ian Richard (1991), Muslim Neoplatonists: An Introduction to the Thought of the Brethren of Purity, vol. 1 (1st ed.), Edinburgh, England: Edinburgh University Press, ISBN 0-7486-0251-8 Ivanov, Vladimir Alekseevich (1946), The Alleged Founder of Ismailism, The Ismaili Society series; no. 1; Variation: Ismaili Society, Bombay; Ismaili Society series; no. 1., Bombay, Pub. for the Ismaili Society by Thacker, p. 197, LCCN 48-3517, OCLC: 385503 Ikhwan as-Safa and their Rasa'il: A Critical Review of a Century and a Half of Research, by A. L. Tibawi, published in volume 2 of The Islamic Quarterly in 1955 Rasa'il Ikhwan al-Safa', vol. 4, Beirut: Dar Sadir Johnson-Davies, Denys (1994), The Island of Animals / Khemir, Sabiha; (Illustrator - Ill.), Austin: University of Texas Press, p. 76, ISBN 0-292-74035-2 "Notices of some copies of the Arabic work entitled "Rasàyil Ikhwàm al-cafâ"", written by Aloys Sprenger, originally published by the Journal of the Asiatic Society of Bengal (in Calcutta) in 1848 [2] "Abū Ḥayyan Al-Tawḥīdī and The Brethren of Purity", Abbas Hamdani. International Journal of Middle East Studies, 9 (1978), 345-353 == Further reading == (in French) La philosophie des Ihwan al-Safa' ("The philosophy of the Brethren of Purity"), Yves Marquet, 1975.
Published in Algiers by the Société Nationale d'Édition et de Diffusion Epistles of the Brethren of Purity. The Ikhwan al-Safa' and their Rasa'il, ed. Nader El-Bizri (Oxford: Oxford University Press, 2008). == External links == Article at Encyclopædia Britannica "Ikhwan al-Safa'". Internet Encyclopedia of Philosophy. Ikhwān al-Safā’ - (general encyclopedia-style article) The Rasail Ikhwan as-Safa "Ikhwan al-Safa by Omar A. Farrukh" from A History of Muslim Philosophy [3] Review of Yves Marquet's La philosophie des Ihwan al-Safa': de Dieu a l'homme by F. W. Zimmermann "The Classification of the Sciences according to the Rasa'il Ikhwan al-Safa'" by Godefroid de Callataÿ Archived 2013-05-12 at the Wayback Machine The Institute of Ismaili Studies article on the Brethren, by Nader El-Bizri Archived 2014-05-29 at the Wayback Machine The Institute of Ismaili Studies gallery of images of manuscripts of the Rasa’il of the Ikhwan al-Safa’ Archived 2014-10-15 at the Wayback Machine "Beastly Colloquies: Of Plagiarism and Pluralism in Two Medieval Disputations Between Animals and Men" -(by Lourdes María Alvarez; a discussion of the animal fables and later imitators; PDF file) "Pages of Medieval Mideastern History" - (by Eloise Hart; covers various small scholarly groups influential in the Arabic world) "Ikhwanus Safa: A Rational and Liberal Approach to Islam" - (by Asghar Ali Engineer) "Mark Swaney on the History of Magic Squares" -(includes a discussion of magic squares and the Encyclopedia)
|
Wikipedia:Endre Pap#0
|
Endre Pap is a mathematician in Serbia. He is a former rector and a professor emeritus of the Singidunum University in Belgrade. Pap was born 26 February 1947 in Mali Iđoš in Vojvodina, Yugoslavia. He received his B.Sc. in 1970, M.Sc. in 1973, and Ph.D. in 1975, and has been a full professor at the Faculty of Sciences of the University of Novi Sad since 1986. He was director of the Institute of Mathematics in 1979–1980. He was a president of the Academy of Sciences and Arts of Vojvodina (VANU). He is now a corresponding member of the European Academy of Sciences (EAS). He has been an external member of the Public Body of the Hungarian Academy of Sciences since 2000. He has been an honorary professor at Budapest Tech University since 2005, and is a professor at the Obuda University in Budapest. In 2003 he received the October Prize of the city of Novi Sad for his scientific work. He has been a member of the Accreditation Commission for Higher Education of Serbia since 2006, the president of its Council for Natural Sciences, and a member of the Senate of the University of Novi Sad since 2007. He has been a member of the National Council for Higher Education since 2015. He was a long-time professor and rector at the Singidunum University, Belgrade, where he is now a professor emeritus. He has taught courses in partial differential equations, real analysis, complex analysis, decision theory, fuzzy systems, optimization methods, ordinary differential equations, and measure theory. He was in 1986 and 1988 a visiting researcher at ETH in Zurich, Switzerland; in 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2001, 2002, 2004 at the University Johannes Kepler in Linz, Austria, where he was a visiting professor in 1997, 2003, 2006 (giving Ph.D. courses); in 1994 at the University in Potenza; in 1992, 1994, 1996, 2001 (giving Ph.D. courses) at the University Federico II in Naples, Italy; Universite Paul Sabatier, Toulouse, France, in 1999; University "La Sapienza", Rome, Italy, in 1999, 2003; Sorbonne in Paris, 2008.
== Research == His mathematical interests are in measure theory (non-additive measures), aggregation operators, decision making, fuzzy systems, functional analysis, theory of generalized functions, and partial differential equations. He is the author of more than 420 scientific papers, 7 monographs, and 15 textbooks. He has more than 16,500 citations (h-index 49). He was the editor of proceedings of many international conferences, and he was the main organizer of the traditional international conferences on computational intelligence SISY. He is a collaborator for the Encyclopaedia of Mathematics, Kluwer Academic Publishers, Dordrecht (Springer). He has supervised 10 M.A. and 9 Ph.D. theses and has given 40 invited lectures and organized several scientific seminars. He was the head of Applied Analysis and chairman of the Nonadditive Set Functions Group. He is an editor of the journals Fuzzy Sets and Systems (Elsevier) and Soft Computing (Springer), and a member of the editorial boards of the journals Tatra Mountains Mathematical Publications, Acta Polytechnica Hungarica, "Panoeconomicus", Archive of Oncology, and YUJOR, a reviewer for Zentralblatt für Mathematik and Math. Reviews, a referee for 30 international and 4 Serbian journals, and a member of OMG, AMS, EUSFLAT. == References == == External links == (in English) Endre Pap (in Serbian) Endre Pap, page 16
|
Wikipedia:Endre Szemerédi#0
|
Endre Szemerédi (Hungarian: [ˈɛndrɛ ˈsɛmɛreːdi]; born August 21, 1940) is a Hungarian-American mathematician and computer scientist, working in the field of combinatorics and theoretical computer science. He has been the State of New Jersey Professor of computer science at Rutgers University since 1986. He also holds a professor emeritus status at the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences. Szemerédi has won prizes in mathematics and science, including the Abel Prize in 2012. He has made a number of discoveries in combinatorics and computer science, including Szemerédi's theorem, the Szemerédi regularity lemma, the Erdős–Szemerédi theorem, the Hajnal–Szemerédi theorem and the Szemerédi–Trotter theorem. == Early life == Szemerédi was born in Budapest. Since his parents wished him to become a doctor, Szemerédi enrolled at a college of medicine, but he dropped out after six months (in an interview he explained it: "I was not sure I could do work bearing such responsibility."). He studied at the Faculty of Sciences of the Eötvös Loránd University in Budapest and received his PhD from Moscow State University. His adviser was Israel Gelfand. This stemmed from a misspelling, as Szemerédi originally wanted to study with Alexander Gelfond. == Academic career == Szemerédi has been the State of New Jersey Professor of computer science at Rutgers University since 1986. He has held visiting positions at Stanford University (1974), McGill University (1980), the University of South Carolina (1981–1983) and the University of Chicago (1985–1986). == Work == Endre Szemerédi has published over 200 scientific articles in the fields of discrete mathematics, theoretical computer science, arithmetic combinatorics and discrete geometry. He is best known for his proof from 1975 of an old conjecture of Paul Erdős and Pál Turán: if a sequence of natural numbers has positive upper density then it contains arbitrarily long arithmetic progressions. 
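The Erdős–Turán statement just quoted can be explored by brute force on finite sets. The sketch below is my own illustration (the function name is hypothetical) and has nothing to do with Szemerédi's actual proof, which is famously intricate.

```python
# Illustration only: Szemerédi's result says a set of naturals with
# positive upper density contains arbitrarily long arithmetic
# progressions.  For a finite set we can simply search for the longest
# progression directly.

def longest_ap(s):
    """Length of the longest arithmetic progression contained in s."""
    s = set(s)
    if not s:
        return 0
    best = 1
    elems = sorted(s)
    for a in elems:               # first term
        for b in elems:           # second term fixes the common difference
            d = b - a
            if d <= 0:
                continue
            length, x = 1, a
            while x + d in s:
                x += d
                length += 1
            best = max(best, length)
    return best

# The odd numbers below 20 form a 10-term progression with gap 2:
print(longest_ap(range(1, 20, 2)))  # → 10
```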
This is now known as Szemerédi's theorem. One of the lemmas introduced in his proof is now known as the Szemerédi regularity lemma, which has become an important lemma in combinatorics, being used for instance in property testing for graphs and in the theory of graph limits. He is also known for the Szemerédi–Trotter theorem in incidence geometry and the Hajnal–Szemerédi theorem and Ruzsa–Szemerédi problem in graph theory. Miklós Ajtai and Szemerédi proved the corners theorem, an important step toward higher-dimensional generalizations of the Szemerédi theorem. With Ajtai and János Komlós he proved the ct²/log t upper bound for the Ramsey number R(3,t), and constructed a sorting network of optimal depth. With Ajtai, Václav Chvátal, and Monroe M. Newborn, Szemerédi proved the famous crossing number inequality, that a graph with n vertices and m edges, where m > 4n, has at least m³/64n² crossings. With Paul Erdős, he proved the Erdős–Szemerédi theorem on the number of sums and products in a finite set. With Wolfgang Paul, Nick Pippenger, and William Trotter, he established a separation between nondeterministic linear time and deterministic linear time, in the spirit of the infamous P versus NP problem. == Awards and honors == Szemerédi has won numerous awards and honors for his contribution to mathematics and computer science. A few of them are listed here: Honorary John von Neumann Professor (2021) Grünwald Prize (1967) Grünwald Prize (1968) Rényi Prize (1973) George Pólya Prize for Achievement in Applied Combinatorics (SIAM) (1975) Prize of the Hungarian Academy of Sciences (1979) State of New Jersey Professorship (1986) The Leroy P.
Steele Prize for Seminal Contribution to Research (AMS) (2008) The Rolf Schock Prize in Mathematics for deep and pioneering work from 1975 on arithmetic progressions in subsets of the integers (2008) The Széchenyi Prize of the Hungarian Republic for his many fundamental contributions to mathematics and computer science (2012) The Abel Prize for his fundamental contributions to discrete mathematics and theoretical computer science (2012) Hungarian Order of Saint Stephen (2020) Szemerédi is a corresponding member (1982) and member (1987) of the Hungarian Academy of Sciences and a member (2010) of the United States National Academy of Sciences. He was elected to the Academia Europaea in 2022. He is also a member of the Institute for Advanced Study in Princeton, New Jersey and a permanent research fellow at the Alfréd Rényi Institute of Mathematics in Budapest. He was the Fairchild Distinguished Scholar at the California Institute of Technology in 1987–88. He is an honorary doctor of Charles University in Prague. He was the lecturer in the Forty-Seventh Annual DeLong Lecture Series at the University of Colorado. He is also a recipient of the Aisenstadt Chair at CRM, University of Montreal. In 2008 he was the Eisenbud Professor at the Mathematical Sciences Research Institute in Berkeley, California. In 2012, Szemerédi was awarded the Abel Prize "for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory". The Abel Prize citation also credited Szemerédi with bringing combinatorics to the centre-stage of mathematics and noted his place in the tradition of Hungarian mathematicians such as George Pólya, who emphasized a problem-solving approach to mathematics.
Szemerédi reacted to the announcement by saying that "It is not my own personal achievement, but recognition for this field of mathematics and Hungarian mathematicians," that gave him the most pleasure. == Conferences == On August 2–7, 2010, the Alfréd Rényi Institute of Mathematics and the János Bolyai Mathematical Society organized a conference in honor of the 70th birthday of Endre Szemerédi. Prior to the conference a volume of the Bolyai Society Mathematical Studies Series, An Irregular Mind, a collection of papers edited by Imre Bárány and József Solymosi, was published to celebrate Szemerédi's achievements on the occasion of his 70th birthday. Another conference devoted to celebrating Szemerédi's work is the Third Abel Conference: A Mathematical Celebration of Endre Szemerédi. == Personal life == Szemerédi is married to Anna Kepes; they have five children, Andrea, Anita, Peter, Kati, and Zsuzsi. == References == == External links == Personal Homepage at the Alfréd Rényi Institute of Mathematics 6,000,000 and Abel Prize – Numberphile Interview by Gabor Stockert (translated from the Hungarian into English by Zsuzsanna Dancso)
|
Wikipedia:Endre Süli#0
|
Endre Süli (also, Endre Suli or Endre Šili) is a mathematician. He is Professor of Numerical Analysis in the Mathematical Institute, University of Oxford, Fellow and Tutor in Mathematics at Worcester College, Oxford and Adjunct Fellow of Linacre College, Oxford. He was educated at the University of Belgrade and, as a British Council Visiting Student, at the University of Reading and St Catherine's College, Oxford. His research is concerned with the mathematical analysis of numerical algorithms for nonlinear partial differential equations. == Biography == Süli is a Foreign Member of the Serbian Academy of Sciences and Arts (2009), Fellow of the European Academy of Sciences (FEurASc, 2010), Fellow of the Society for Industrial and Applied Mathematics (FSIAM, 2016), a Member of the Academia Europaea (MAE, 2020), and a Fellow of the Royal Society (FRS, 2021). He was an invited speaker at the International Congress of Mathematicians in Madrid in 2006 and was Chair of the Society for the Foundations of Computational Mathematics (2002–2005). Other honours include: Fellow of the Institute of Mathematics and its Applications (FIMA, 2007), Charlemagne Distinguished Lecture (2011), IMA Service Award (2011), Professor Hospitus Universitatis Carolinae Pragensis, Charles University in Prague (2012–), Distinguished Visiting Chair Professor Shanghai Jiao Tong University (2013), President, SIAM United Kingdom and Republic of Ireland Section (2013–2015), London Mathematical Society/New Zealand Mathematical Society Forder Lectureship (2015), Aziz Lecture (2015), BIMOS Distinguished Lecture (2016), John von Neumann Lecture (2016), Sibe Mardešić Lecture (2018), London Mathematical Society Naylor Prize and Lectureship (2021). Since 2005 Süli has been co-Editor-in-Chief of the IMA Journal of Numerical Analysis published by Oxford University Press. 
He is a member of the Scientific Advisory Board of the Berlin Mathematics Research Center MATH+ and the Board of the Doctoral School for Mathematical and Physical Sciences for Advanced Materials and Technologies of the Scuola Superiore Meridionale at the University of Naples, and was a member of the Scientific Steering Committee of the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge (2011–2014), the Scientific Advisory Board of the Berlin Mathematical School (2016–2018), the Scientific Council of Société de Mathématiques Appliquées et Industrielles (SMAI) (2014–2020), the Scientific Committee of the Mathematisches Forschungsinstitut Oberwolfach (Mathematical Research Institute of Oberwolfach) (2013–2021), and the Scientific Advisory Board of the Archimedes Center for Modeling, Analysis and Computation at the University of Crete (2010–2014). Between 2014 and 2022 he served as Delegate for Mathematics to the Board of Delegates of Oxford University Press. He grew up in Subotica and is a recipient of the Pro Urbe Prize of the City of Subotica (2021). He is the father of Sterija Award-winning Serbian playwright and dramatist Fedor Süli (also, Fedor Šili) and social scientist Timea Süli. == Notes == == External links == Endre Süli's official home page at the University of Oxford Endre Süli at the Mathematics Genealogy Project
|
Wikipedia:Ene-Margit Tiit#0
|
Tiit is predominantly an Estonian masculine given name and occurs, to a lesser extent, as a surname. Given name Tiit Arge (born 1963), politician Tiit Helimets (born 1977), ballet dancer Tiit Haagma (1954–2021), ice yacht sailor and musician (Ruja) Tiit Härm (born 1946), ballet dancer, ballet master and choreographer Tiit Helmja (born 1945), rower Tiit Hennoste (born 1953), linguist Tiit Käbin (1937–2011), jurist and politician Tiit Kala (born 1954), politician Tiit Kaljundi (1946–2008), architect Tiit Kändler (born 1948), humorist and science journalist Tiit Kuningas (born 1949), sports journalist Tiit Kuusik (1911–1990), opera singer Tiit Kuusmik (born 1950), politician Tiit Lääne (born 1958), sportsman, sports journalist and politician Tiit Land (born 1964), biochemist Tiit Lilleorg (1941–2021), actor Tiit Made (born 1940), economist, journalist, publicist and politician Tiit Madisson (1950–2021), dissident, writer and politician Tiit Niilo (born 1962), politician Tiit Nuudi (born 1949), tennis player and politician Tiit Pääsuke (born 1941), painter Tiit Rosenberg (born 1946), historian Tiit Salumäe (born 1951), Lutheran prelate Tiit Sinissaar (born 1947), politician Tiit Sokk (born 1964), basketball player Tiit Sukk (born 1974), actor, television presenter, and director Tiit Tamm (born 1952), ski jumper and coach Tiit Tammsaar (born 1951), politician Tiit Terik (born 1979), politician Tiit Tikerpe (born 1965), sprint canoer and Olympic competitor Tiit Toomsalu (born 1949), politician Tiit Trummal (born 1954), architect Tiit Vähi (born 1947), politician and former Prime Minister of Estonia Tiit-Rein Viitso (1938–2022), linguist Surname Ene-Margit Tiit (born 1934), Estonian mathematician and statistician Valdur Tiit (1931-2019), Estonian physicist == References ==
|
Wikipedia:Engel expansion#0
|
The Engel expansion of a positive real number x is the unique non-decreasing sequence of positive integers ( a 1 , a 2 , a 3 , … ) {\displaystyle (a_{1},a_{2},a_{3},\dots )} such that x = 1 a 1 + 1 a 1 a 2 + 1 a 1 a 2 a 3 + ⋯ = 1 a 1 ( 1 + 1 a 2 ( 1 + 1 a 3 ( 1 + ⋯ ) ) ) {\displaystyle x={\frac {1}{a_{1}}}+{\frac {1}{a_{1}a_{2}}}+{\frac {1}{a_{1}a_{2}a_{3}}}+\cdots ={\frac {1}{a_{1}}}\!\left(1+{\frac {1}{a_{2}}}\!\left(1+{\frac {1}{a_{3}}}\left(1+\cdots \right)\right)\right)} For instance, Euler's number e has the Engel expansion 1, 1, 2, 3, 4, 5, 6, 7, 8, ... corresponding to the infinite series e = 1 1 + 1 1 + 1 1 ⋅ 2 + 1 1 ⋅ 2 ⋅ 3 + 1 1 ⋅ 2 ⋅ 3 ⋅ 4 + ⋯ {\displaystyle e={\frac {1}{1}}+{\frac {1}{1}}+{\frac {1}{1\cdot 2}}+{\frac {1}{1\cdot 2\cdot 3}}+{\frac {1}{1\cdot 2\cdot 3\cdot 4}}+\cdots } Rational numbers have a finite Engel expansion, while irrational numbers have an infinite Engel expansion. If x is rational, its Engel expansion provides a representation of x as an Egyptian fraction. Engel expansions are named after Friedrich Engel, who studied them in 1913. An expansion analogous to an Engel expansion, in which alternating terms are negative, is called a Pierce expansion. == Engel expansions, continued fractions, and Fibonacci == Kraaikamp & Wu (2004) observe that an Engel expansion can also be written as an ascending variant of a continued fraction: x = 1 + 1 + 1 + ⋯ a 3 a 2 a 1 . {\displaystyle x={\cfrac {1+{\cfrac {1+{\cfrac {1+\cdots }{a_{3}}}}{a_{2}}}}{a_{1}}}.} They claim that ascending continued fractions such as this have been studied as early as Fibonacci's Liber Abaci (1202). This claim appears to refer to Fibonacci's compound fraction notation in which a sequence of numerators and denominators sharing the same fraction bar represents an ascending continued fraction: a b c d e f g h = d + c + b + a e f g h . 
{\displaystyle {\frac {a\ b\ c\ d}{e\ f\ g\ h}}={\dfrac {d+{\cfrac {c+{\cfrac {b+{\cfrac {a}{e}}}{f}}}{g}}}{h}}.} If such a notation has all numerators 0 or 1, as occurs in several instances in Liber Abaci, the result is an Engel expansion. However, Engel expansion as a general technique does not seem to be described by Fibonacci. == Algorithm for computing Engel expansions == To find the Engel expansion of x, let u 1 = x , {\displaystyle u_{1}=x,} a k = ⌈ 1 u k ⌉ , {\displaystyle a_{k}=\left\lceil {\frac {1}{u_{k}}}\right\rceil \!,} and u k + 1 = u k a k − 1 {\displaystyle u_{k+1}=u_{k}a_{k}-1} where ⌈ r ⌉ {\displaystyle \left\lceil r\right\rceil } is the ceiling function (the smallest integer not less than r). If u i = 0 {\displaystyle u_{i}=0} for any i, halt the algorithm. == Iterated functions for computing Engel expansions == Another equivalent method is to consider the map g ( x ) = x ( 1 + ⌊ x − 1 ⌋ ) − 1 {\displaystyle g(x)=x\!\left(1+\left\lfloor x^{-1}\right\rfloor \right)-1} and set u k = 1 + ⌊ 1 g ( k − 1 ) ( x ) ⌋ {\displaystyle u_{k}=1+\left\lfloor {\frac {1}{g^{(k-1)}(x)}}\right\rfloor } where g ( k ) ( x ) = g ( g ( k − 1 ) ( x ) ) {\displaystyle g^{(k)}(x)=g(g^{(k-1)}(x))} and g ( 0 ) ( x ) = x .
{\displaystyle g^{(0)}(x)=x.} Yet another equivalent method, called the modified Engel expansion, is calculated by h ( x ) = ⌊ 1 x ⌋ g ( x ) = ⌊ 1 x ⌋ ( x ⌊ 1 x ⌋ + x − 1 ) {\displaystyle h(x)=\left\lfloor {\frac {1}{x}}\right\rfloor g(x)=\left\lfloor {\frac {1}{x}}\right\rfloor \!\left(x\left\lfloor {\frac {1}{x}}\right\rfloor +x-1\right)} and u k = { 1 + ⌊ 1 x ⌋ k = 1 ⌊ 1 h ( k − 2 ) ( x ) ⌋ ( 1 + ⌊ 1 h ( k − 1 ) ( x ) ⌋ ) k ≥ 2 {\displaystyle u_{k}={\begin{cases}1+\left\lfloor {\frac {1}{x}}\right\rfloor &k=1\\\left\lfloor {\frac {1}{h^{(k-2)}(x)}}\right\rfloor \!\left(1+\left\lfloor {\frac {1}{h^{(k-1)}(x)}}\right\rfloor \right)&k\geq 2\end{cases}}} === The transfer operator of the Engel map === The Frobenius–Perron transfer operator of the Engel map g ( x ) {\displaystyle g(x)} acts on functions f ( x ) {\displaystyle f(x)} with [ L g f ] ( x ) = ∑ y : g ( y ) = x f ( y ) | d d z g ( z ) | z = y = ∑ n = 1 ∞ f ( x + 1 n + 1 ) n + 1 {\displaystyle [{\mathcal {L}}_{g}f](x)=\sum _{y:g(y)=x}{\frac {f(y)}{\left|{\frac {d}{dz}}g(z)\right|_{z=y}}}=\sum _{n=1}^{\infty }{\frac {f\left({\frac {x+1}{n+1}}\right)}{n+1}}} since d d x [ x ( n + 1 ) − 1 ] = n + 1 {\displaystyle {\frac {d}{dx}}[x(n+1)-1]=n+1} and the inverse of the n-th component is x + 1 n + 1 {\displaystyle {\frac {x+1}{n+1}}} which is found by solving x ( n + 1 ) − 1 = y {\displaystyle x(n+1)-1=y} for x {\displaystyle x} . == Relation to the Riemann ζ function == The Mellin transform of the map g ( x ) {\displaystyle g(x)} is related to the Riemann zeta function by the formula ∫ 0 1 g ( x ) x s − 1 d x = ∑ n = 1 ∞ ∫ 1 n + 1 1 n ( x ( n + 1 ) − 1 ) x s − 1 d x = ∑ n = 1 ∞ n − s ( s − 1 ) + ( n + 1 ) − s − 1 ( n 2 + 2 n + 1 ) + n − s − 1 s − n 1 − s ( s + 1 ) s ( n + 1 ) = ζ ( s + 1 ) s + 1 − 1 s ( s + 1 ) .
{\displaystyle {\begin{aligned}\int _{0}^{1}g(x)x^{s-1}\,dx&=\sum _{n=1}^{\infty }\int _{\frac {1}{n+1}}^{\frac {1}{n}}(x(n+1)-1)x^{s-1}\,dx\\[5pt]&=\sum _{n=1}^{\infty }{\frac {n^{-s}(s-1)+(n+1)^{-s-1}(n^{2}+2n+1)+n^{-s-1}s-n^{1-s}}{(s+1)s(n+1)}}\\[5pt]&={\frac {\zeta (s+1)}{s+1}}-{\frac {1}{s(s+1)}}\end{aligned}}.} == Example == To find the Engel expansion of 1.175, we perform the following steps. u 1 = 1.175 , a 1 = ⌈ 1 1.175 ⌉ = 1 ; {\displaystyle u_{1}=1.175,a_{1}=\left\lceil {\frac {1}{1.175}}\right\rceil =1;} u 2 = u 1 a 1 − 1 = 1.175 ⋅ 1 − 1 = 0.175 , a 2 = ⌈ 1 0.175 ⌉ = 6 {\displaystyle u_{2}=u_{1}a_{1}-1=1.175\cdot 1-1=0.175,a_{2}=\left\lceil {\frac {1}{0.175}}\right\rceil =6} u 3 = u 2 a 2 − 1 = 0.175 ⋅ 6 − 1 = 0.05 , a 3 = ⌈ 1 0.05 ⌉ = 20 {\displaystyle u_{3}=u_{2}a_{2}-1=0.175\cdot 6-1=0.05,a_{3}=\left\lceil {\frac {1}{0.05}}\right\rceil =20} u 4 = u 3 a 3 − 1 = 0.05 ⋅ 20 − 1 = 0 {\displaystyle u_{4}=u_{3}a_{3}-1=0.05\cdot 20-1=0} The series ends here. Thus, 1.175 = 1 1 + 1 1 ⋅ 6 + 1 1 ⋅ 6 ⋅ 20 {\displaystyle 1.175={\frac {1}{1}}+{\frac {1}{1\cdot 6}}+{\frac {1}{1\cdot 6\cdot 20}}} and the Engel expansion of 1.175 is (1, 6, 20). == Engel expansions of rational numbers == Every positive rational number has a unique finite Engel expansion. In the algorithm for Engel expansion, if ui is a rational number x/y, then ui +1 = (−y mod x)/y. Therefore, at each step, the numerator in the remaining fraction ui decreases and the process of constructing the Engel expansion must terminate in a finite number of steps. Every rational number also has a unique infinite Engel expansion: using the identity 1 n = ∑ r = 1 ∞ 1 ( n + 1 ) r . {\displaystyle {\frac {1}{n}}=\sum _{r=1}^{\infty }{\frac {1}{(n+1)^{r}}}.} the final digit n in a finite Engel expansion can be replaced by an infinite sequence of (n + 1)s without changing its value. For example, 1.175 = ( 1 , 6 , 20 ) = ( 1 , 6 , 21 , 21 , 21 , … ) . 
{\displaystyle 1.175=(1,6,21,21,21,\dots ).} This is analogous to the fact that any rational number with a finite decimal representation also has an infinite decimal representation (see 0.999...). An infinite Engel expansion in which all terms are equal is a geometric series. Erdős, Rényi, and Szüsz asked for nontrivial bounds on the length of the finite Engel expansion of a rational number x/y; this question was answered by Erdős and Shallit, who proved that the number of terms in the expansion is O(y^(1/3 + ε)) for any ε > 0. == The Engel expansion for arithmetic progressions == Consider this sum: ∑ k = 1 ∞ 1 ∏ i = 0 k − 1 ( α + i β ) = 1 α + 1 α ( α + β ) + 1 α ( α + β ) ( α + 2 β ) + ⋯ , {\displaystyle \sum _{k=1}^{\infty }{\frac {1}{\prod _{i=0}^{k-1}(\alpha +i\beta )}}={\frac {1}{\alpha }}+{\frac {1}{\alpha (\alpha +\beta )}}+{\frac {1}{\alpha (\alpha +\beta )(\alpha +2\beta )}}+\cdots ,} where α , β ∈ N {\displaystyle \alpha ,\beta \in \mathbb {N} } and 0 < α ≤ β {\displaystyle 0<\alpha \leq \beta } . Thus, in general ( 1 β ) 1 − α β e 1 β γ ( α β , 1 β ) = { α , α ( α + β ) , α ( α + β ) ( α + 2 β ) , … } {\displaystyle \left({\frac {1}{\beta }}\right)^{1-{\frac {\alpha }{\beta }}}e^{\frac {1}{\beta }}\gamma \left({\frac {\alpha }{\beta }},{\frac {1}{\beta }}\right)=\{{\alpha },\alpha (\alpha +\beta ),\alpha (\alpha +\beta )(\alpha +2\beta ),\dots \}\;} , where γ {\displaystyle \gamma } represents the lower incomplete gamma function. Specifically, if α = β {\displaystyle \alpha =\beta } , e 1 / β − 1 = { 1 β , 2 β , 3 β , 4 β , 5 β , 6 β , … } {\displaystyle e^{1/\beta }-1=\{1\beta ,2\beta ,3\beta ,4\beta ,5\beta ,6\beta ,\dots \}\;} . == Engel expansion for powers of q == The Gauss identity of the q-analog can be written as: ∏ n = 1 ∞ 1 − 1 q 2 n 1 − 1 q 2 n − 1 = ∑ n = 0 ∞ 1 q n ( n + 1 ) 2 , q ∈ N .
{\displaystyle \prod _{n=1}^{\infty }{\frac {1-{\frac {1}{q^{2n}}}}{1-{\frac {1}{q^{2n-1}}}}}=\sum _{n=0}^{\infty }{\frac {1}{q^{\frac {n(n+1)}{2}}}},\quad q\in \mathbb {N} .} Using this identity, we can express the Engel expansion for powers of q {\displaystyle q} as follows: ∏ n = 1 ∞ ( 1 − 1 q n ) ( − 1 ) n = ∑ n = 0 ∞ 1 ∏ i = 1 n q i . {\displaystyle \prod _{n=1}^{\infty }\left(1-{\frac {1}{q^{n}}}\right)^{(-1)^{n}}=\sum _{n=0}^{\infty }{\frac {1}{\prod _{i=1}^{n}q^{i}}}.} Furthermore, this expression can be written in closed form as: q 1 / 8 ϑ 2 ( 1 q ) 2 = { 1 , q , q 3 , q 6 , q 10 , … } {\displaystyle {\frac {q^{1/8}\vartheta _{2}\left({\frac {1}{\sqrt {q}}}\right)}{2}}=\{1,q,q^{3},q^{6},q^{10},\ldots \}} where ϑ 2 {\displaystyle \vartheta _{2}} is the second Theta function. == Engel expansions for some well-known constants == π {\displaystyle \pi } = (1, 1, 1, 8, 8, 17, 19, 300, 1991, 2492, ...) (sequence A006784 in the OEIS) 2 {\displaystyle {\sqrt {2}}} = (1, 3, 5, 5, 16, 18, 78, 102, 120, 144, ...) (sequence A028254 in the OEIS) e {\displaystyle e} = (1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...) (sequence A028310 in the OEIS) More Engel expansions for constants can be found here. == Growth rate of the expansion terms == The coefficients ai of the Engel expansion typically exhibit exponential growth; more precisely, for almost all numbers in the interval (0,1], the limit lim n → ∞ a n 1 / n {\displaystyle \lim _{n\to \infty }a_{n}^{1/n}} exists and is equal to e. However, the subset of the interval for which this is not the case is still large enough that its Hausdorff dimension is one. The same typical growth rate applies to the terms in expansion generated by the greedy algorithm for Egyptian fractions. However, the set of real numbers in the interval (0,1] whose Engel expansions coincide with their greedy expansions has measure zero, and Hausdorff dimension 1/2. 
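The ceiling-function recurrence of the algorithm section above (u₁ = x, a_k = ⌈1/u_k⌉, u_{k+1} = u_k·a_k − 1) can be sketched in a few lines of Python. Exact rational arithmetic makes the termination test u_k = 0 reliable; the function name and the term cap are illustrative, not from the article:

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x, max_terms=25):
    """Engel expansion via u1 = x, a_k = ceil(1/u_k), u_{k+1} = u_k*a_k - 1.

    With a Fraction input the loop halts exactly when u_k = 0, so a
    rational x yields its full finite expansion; max_terms merely caps
    the output for inputs whose expansion would be infinite.
    """
    u = Fraction(x)
    terms = []
    while u != 0 and len(terms) < max_terms:
        a = ceil(1 / u)        # math.ceil is exact on Fraction values
        terms.append(a)
        u = u * a - 1          # remaining value for the next step
    return terms

# The worked example from the article: 1.175 has expansion (1, 6, 20).
print(engel_expansion(Fraction('1.175')))   # [1, 6, 20]
```

Reversing the process gives a convenient round-trip check: summing the reciprocals of the running products a₁, a₁a₂, a₁a₂a₃, ... of the returned terms reconstructs x exactly.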
== See also == Euler's continued fraction formula == Notes == == References == == External links == Weisstein, Eric W. "Engel Expansion". MathWorld–A Wolfram Web Resource.
|
Wikipedia:Engel subalgebra#0
|
In mathematics, an Engel subalgebra of a Lie algebra with respect to some element x is the subalgebra of elements annihilated by some power of ad x. Engel subalgebras are named after Friedrich Engel. For finite-dimensional Lie algebras over infinite fields the minimal Engel subalgebras are the Cartan subalgebras. == See also == Engel's theorem == References == Winter, David J. (1972), Abstract Lie algebras, The M.I.T. Press, Cambridge, Mass.-London, ISBN 978-0-486-46282-0, MR 0332905
|
Wikipedia:Enn Tõugu#0
|
Enn Tõugu (20 May 1935, Tallinn – 30 March 2020) was an Estonian computer scientist and mathematician. He dealt with system programming, declarative languages and topics related to artificial intelligence. In the 1960s, he focused on the design and construction of the original STEM minicomputer. He was a candidate in the 1996 Estonian presidential election. == Awards == 1987 State Prize of the USSR 1995 Medal of the Estonian Academy of Sciences 2001 Order of the White Star, III class == References ==
|
Wikipedia:Enok Palm#0
|
Enok Johannes Palm (5 December 1924 – 31 August 2012) was a Norwegian mathematician. He was born in Kristiansand. He took the cand.real. degree in 1950 and the dr.philos. degree at the University of Oslo in 1954. He was a professor in mechanics at the Norwegian Institute of Technology from 1960 to 1963 and professor of applied mathematics at the University of Oslo from 1963 to 1994. He was a fellow of the Norwegian Academy of Science and Letters from 1959 and the Royal Norwegian Society of Sciences and Letters from 1961. He was decorated as a Knight, First Class of the Order of St. Olav in 1993. == References ==
|
Wikipedia:Enrique Planchart#0
|
Enrique Aurelio Planchart Rotundo (3 April 1937 – 27 July 2021) was a Venezuelan mathematician and academic. He was rector of Simón Bolívar University in Caracas from 2009 until his death in 2021. == Career == Planchart graduated as a Bachelor of Science from the Central University of Venezuela and obtained his Doctorate in Mathematics from the University of California, Berkeley, where he was also a visiting professor in its Department of Mathematics between 1986 and 1987. From 1973 he was part of the Department of Pure and Applied Mathematics of the Simón Bolívar University. While at Simón Bolívar University, between 1989 and 1999 he directed the National Center for the Improvement of Science Education, and from 1999 he directed the Equal Opportunities Program (PIO). In 1989 he was awarded the National Council for Scientific and Technological Research Award. Throughout his scientific career, Planchart published nine books and nine journal articles and gave thirty lectures. == References ==
|
Wikipedia:Enrique Pujals#0
|
Enrique Ramiro Pujals is an Argentine-Brazilian mathematician known for his contributions to the understanding of dynamical systems. Since fall of 2018, he has been a professor at the Graduate Center at the City University of New York. == Education == After earning an undergraduate degree in mathematics at the University of Buenos Aires in 1992, he became a Ph.D. student at the Instituto Nacional de Matemática Pura e Aplicada, where he was a student of Jacob Palis, completing his Ph.D. in 1996. He was a Guggenheim Fellow in 2000. Before moving to CUNY in 2018, he had been a faculty member at IMPA since 2003. == Awards == He was an invited speaker at the International Congress of Mathematicians in Beijing in 2002. He won the ICTP Ramanujan Prize (2008), the UMALCA Prize in Mathematics (2004), and the TWAS Prize in Mathematics (2009), is a member of the Brazilian Academy of Sciences, and received the Brazilian National Order of Scientific Merit in 2013. == Selected publications == S. Crovisier, E.R. Pujals, C. Tresser, Mildly dissipative diffeomorphisms of the disk with zero entropy, Acta Mathematica, Volume 232 (2024) Number 2, 221–323. S. Crovisier, E.R. Pujals, Essential hyperbolicity and homoclinic bifurcations: a dichotomy phenomenon/mechanism for diffeomorphisms, Inventiones Mathematicae, (2015) Volume 201, Issue 2, 385–517. Pujals, E. R.; Sambarino, M. "On the dynamics of dominated splitting", Annals of Mathematics, Princeton, (169) (2009), 675–740. Morales, C.; Pacifico, M.J.; Pujals, E. R. Robust transitive singular sets for 3-flows are partially hyperbolic attractors or repellers, Annals of Mathematics, Princeton, 160, no 2, (2004), 375–43. Bonatti, C.; Diaz, L.; Pujals, E. R. "A C1-generic dichotomy for diffeomorphisms: Weak forms of hyperbolicity or infinitely many sinks or sources". Annals of Mathematics, Princeton, v. 158, pp. 355–418, 2003. Pujals, E. R.; Sambarino, M. "Homoclinic tangencies and hyperbolicity for surface diffeomorphisms". Annals of Mathematics, Princeton, v.
151, n. 3, pp. 961–1023, 2000. L. Diaz, E.R. Pujals, R. Ures, Partial hyperbolicity and robust transitivity, Acta Mathematica 183, no. 1 (1999), 1–43. == References ==
|
Wikipedia:Enrique Zuazua#0
|
Enrique Zuazua (Iriondo, second family name) is the Head of the Chair for Dynamics, Control, Machine Learning and Numerics - FAU DCN-AvH (Alexander von Humboldt Professorship) at the University of Erlangen–Nuremberg (FAU). He is also Distinguished Research Professor and the Director of the Chair of Computational Mathematics of DeustoTech Research Center of the University of Deusto in Bilbao, Basque Country, Spain and Professor of Applied Mathematics at Universidad Autónoma de Madrid (UAM). == Biography == Born in Eibar (Gipuzkoa-Basque Country-Spain) in 1961, after finishing his primary education at Ikastola (Basque School) and secondary education at the Eibar La Salle School, he took his baccalaureate at this town's Universidad Laboral de Eibar. In 1984 he graduated in mathematics from the Universidad del País Vasco-Euskal Herriko Unibertsitatea (UPV-EHU) before obtaining his PhD from this university in 1987, receiving the Faculty Award for Outstanding Achievements for both. In 1988 he obtained a second PhD at the Laboratoire Jacques-Louis Lions of the Université Pierre et Marie Curie, funded by a doctoral fellowship of the Basque Government and a Research Grant from the Jacques Louis Lions Chair at the Collège de France. == University career == During the 1987–1988 academic year, he was associate professor at UPV-EHU, before becoming an associate professor in Mathematical Analysis at the Universidad Autónoma de Madrid. In 1990, he won a Professorship in Applied Mathematics at the Universidad Complutense de Madrid where he was Head of the Applied Mathematics Section at the Faculty of Chemistry and of the Applied Mathematics Department. In 2001 he obtained an Excellence Professorship in Applied Mathematics at Universidad Autónoma de Madrid.
From 2008 to 2012 he was the Founding Scientific Director of the BCAM - Basque Center for Applied Mathematics, in Bilbao, Basque Country, Spain, created by the Basque Government with the aim of promoting research into the most computational, applied and multi-disciplinary aspects of Mathematics, where he led the team on "Partial Differential Equations, Numerics and Control" until September 2015 as a Distinguished Ikerbasque Professor of the Basque Foundation for Science Ikerbasque. He has also acted as a member of the advisory board for the launching of the Institute for Mathematical Sciences (ICMAT), a consortium comprising the Consejo Superior de Investigaciones Científicas (Higher Council for Scientific Research, CSIC) and three Madrid Universities: the Universidad Autónoma de Madrid (UAM), Universidad Carlos III de Madrid (UC3M) and the Universidad Complutense de Madrid (UCM). He has held Visiting Professorships at various overseas institutions, including the Courant Institute, the University of Minnesota and Rice University in the US, the Federal University of Rio de Janeiro, the Isaac Newton Institute, Cambridge, and at various French Universities including the Université Pierre et Marie Curie, Université de Paris-Sud, Versailles Saint-Quentin-en-Yvelines University, Université d'Orléans, Université de Toulouse, Université de Nice and the École Polytechnique (Paris Polytechnic School), where he held the post of Associate Professor for four academic years. == Research == His domains of expertise in Applied Mathematics include Partial Differential Equations, Control Theory and Numerical Analysis.
These subjects interrelate, with the final aim being to model, analyse, computationally simulate, and ultimately contribute to the control and design of the most diverse natural phenomena in all fields of R+D+i. Twenty-four PhD students have completed their degrees under his supervision, and they now occupy positions in centres throughout the world: Brazil, Chile, China, Mexico, Romania, Spain, etc. He has undertaken intensive international work, having led co-operation programmes with various Latin American countries, as well as with Portugal, the Maghreb, China and Iran, amongst others. He has been a guest speaker at many international conferences worldwide, with highlights including the Second European Congress of Mathematics, Budapest, 1996, ECCOMAS 2004 in Jyväskylä (Finland), the SMAI2005 Congress in Evian (France), FoCM2005 in Santander, ENUMATH2005 in Santiago de Compostela, the International Congress of Mathematicians (ICM) Madrid, 2006 and the EQUADIFF 2007 in Vienna (Austria), the von Mises Lecture of Humboldt University of Berlin, in 2008, the "Giornata INdAM", Turin, 2009, "SIMAI2010", Cagliari in 2010, the "Aachen Conference on Computational Engineering Science", Aachen, 2011 and the "PASI-CIPPDE-2012" Conference in Santiago de Chile in 2012. He has also given numerous monographic courses in research at various Centres, both in Spain and overseas. He has sat on and is still an active member of the Scientific Committee of various international events, including the International Programming Committee of ICM2006. He has developed a number of cooperative projects with industries such as AIRBUS-Spain and Arteche Group. He has also been the Principal Investigator for National Plan projects and since 1990 the co-ordinator for European and NATO project networks, and was the first research coordinator of the i-Math CONSOLIDER Mathematics Project (2007–2011) and the Madrid project SIMUMAT (2006–2009).
He is Editor in Chief, in collaboration with Xu Zhang (Sichuan University, Chengdu, China), of the journal Mathematical Control and Related Fields, and a member of the editorial board of other journals including "ESAIM: COCV", "Journal de Mathématiques pures et appliquées", "Mathematical Models and Methods in Applied Sciences", "Numerische Mathematik", "Systems and Control Letters", "Journal of Differential Equations", "Asymptotic Analysis" and "Journal of Optimization Theory and Applications". He also belongs to the editorial board of the series "Mathématiques et Applications" of SMAI - Springer and "Modeling, Simulation and Applications" of Springer, coordinated by Alfio Quarteroni, and coordinates the series BCAM SpringerBriefs. He has managed the Mathematics Programme (2001–2004) of the Spanish National Research Plan and directed and participated in various international panels of the French CNRS, ANR, IUF, AERES and INRIA and the German DFG, amongst other agencies. He was the chair of the European Research Council's "Advanced Grants" panel in Mathematics, and a member of the Scientific Council of CIMPA (Centre International de Mathématiques Pures et Appliquées) in Nice. He is a member of the Scientific Committee of various institutes such as the CUMP in Porto, Portugal, CERFACS in Toulouse, the Pedro Pascual de Benasque Science Center, and the UNESCO "Mathematics and Development" Chair co-ordinated by M. Jaoua. == Dissemination activities == He has also written several popular science works, for which he has twice been awarded the Sociedad Española de Matemática Aplicada (SEMA, Spanish Society of Applied Mathematics) Prize for Popularising Mathematics. His articles in this field have been published in various magazines, including ARBOR, CIC-Network, SIGMA, DIVULGAMAT, La Gaceta de la Real Sociedad Matemática Española (RSME, the Royal Spanish Mathematical Society) and Transatlántica de Educación.
He also founded the blog "Matemáticas y sus Fronteras" (Mathematics and its boundaries). He has been an active member of RSME and SEMA, and contributes to the editorial board of the SEMA Journal. In the period 2009–2015, he ran, in cooperation with the journalist Xabier Lapitz, a radio programme at Onda Vasca on various topics related to Mathematics, Higher Education, and Research. He is also the author of the columns "Matemanías" and "cons-CIENCIA" at the Basque daily newspaper Deia and the weekly Zazpika. He was a collaborator on Basque public radio, on the broadcasts "Faktoria" in "Euskadi Irratia" (in Basque) and "Boulevard Magazine" in Radio Euskadi, and within the TV shows "Azpimarra" and "Ahoz Aho" of the Basque Public TV EITB. == Prizes and awards == His work has had a significant impact. He was recognised as a "highly cited researcher" by the ISI Institute (Thomson) in 2004. He received the 2006 Euskadi Prize for Science and Technology and has been nominated a numerary member of Jakiunde, the Basque Academy of Sciences, Arts and Humanities. In 2007, he was awarded the Julio Rey Pastor National Prize for "Mathematics, Information and Communications Technology," the highest national award in these disciplines. He received his award from His Majesty the King on 15 January 2008, together with the other 2007 prize-winners (Ignacio Cirac Saturain, Carlos Duarte Quesada, Luis A. Oro Giral, and Daniel Ramón Vidal). His paper "On the optimality of the observability inequalities for parabolic and hyperbolic systems with potentials", in collaboration with Thomas Duyckaerts and Xu Zhang, published in Ann. Inst. H. Poincaré Anal. Non Linéaire 25 (2008), no. 1, pp. 1–41, received the 2008 award for the best paper in that journal.
In 2013, he received the "Research in Paris" award of the Paris City Hall, an Excellence Chair of the CIMI - Centre International de Mathématiques et Informatique de Toulouse, and a Humboldt Research Award to develop research activities within the research group of Professor Günter Leugering at Friedrich-Alexander University, Erlangen-Nürnberg. In 2014 he received a Doctor Honoris Causa degree from the Université de Lorraine in France and in 2015 he became a member of Academia Europaea and the First Ambassador of Friedrich-Alexander University, Erlangen-Nürnberg. In 2019, he received an Alexander von Humboldt Professorship for a chair in "Applied Analysis" at the Friedrich-Alexander University, Erlangen-Nürnberg. In 2022, he received the SIAM W. T. and Idalia Reid Prize for fundamental theoretical and computational contributions to the Control, Numerics, and Analysis of nonlinear PDEs and multi-physical systems with impactful scientific and industrial applications. == References == == External links == Enrique Zuazua at the Mathematics Genealogy Project Enrique Zuazua's website at the Chair of Computational Mathematics of the University of Deusto. Enrique Zuazua's "enzuazua" dissemination website Enrique Zuazua's website at Universidad Autónoma de Madrid (UAM). Enrique Zuazua's website at Jakiunde, the Basque Academy of Sciences. E. Zuazua's website at Academia Europaea. 2019 Alexander von Humboldt Professorship Award Website of DeustoTech Website of the Departamento de Matemáticas de la Universidad Autónoma de Madrid.
|
Wikipedia:Enriqueta González Baz#0
|
Enriqueta González Baz y de la Vega (September 22, 1915 – December 22, 2002) was a Mexican mathematician, a co-founder of the Mexican Mathematical Society, and the first woman to earn a degree in mathematics at the National Autonomous University of Mexico, in 1944. == Early life == Enriqueta González Baz was born in Mexico City on Calle de Correo Mayor on September 22, 1915. She attended Escuela número 8 for women, where she studied to become a teacher. After completing secondary school, her father, Roberto González Baz, sent her to a two-year program at the Escuela Doméstica for domestic studies; he believed that, above all, his daughters should learn domestic skills: how to cook, tend the house, and so on. At this school, one of her teachers, Elena Picazo de Murray, recognized González Baz's talent for study and urged her to pursue higher education. == Education == After finishing domestic school, González Baz enrolled in night classes at the former San Ildefonso College while studying at the Escuela Nacional de Maestros in Mexico City, where she earned a teaching credential to become a school teacher. She then enrolled in the National Preparatory School, where she studied physical sciences and mathematics. After graduating from high school, she enrolled at the Faculty of Sciences of the National Autonomous University of Mexico. She was part of one of the first cohorts of students majoring in mathematics, which also included Manuela Garín. In 1944, she became the first woman at the National Autonomous University of Mexico, and in Mexico, to earn a degree in mathematics. She wrote a thesis on Special Functions (Bessel, Gamma, and Legendre) and completed postgraduate studies at Bryn Mawr College near Philadelphia, Pennsylvania. == Career and contributions == During this time, the Ministry of Public Education in Mexico did not distinguish between the title of a mathematician and that of a mathematics teacher, so González Baz became a high school math teacher. 
She taught mathematics at the National Preparatory School and various other secondary schools. She also taught mathematics at the Faculty of Sciences and was a researcher at the Institute of Physics at the National Autonomous University of Mexico. Among her mathematical works, she translated Solomon Lefschetz's 1930 textbook Topology. == Legacy == González Baz was one of the five founding women of the Mexican Mathematical Society. She is regarded as a distinguished student and professor of mathematics. González Baz died on December 22, 2002, "leaving an open door for the next generations of women attracted to the study of mathematics." == References ==
|
Wikipedia:Entanglement-assisted stabilizer formalism#0
|
In the theory of quantum communication, the entanglement-assisted stabilizer formalism is a method for protecting quantum information with the help of entanglement shared between a sender and receiver before they transmit quantum data over a quantum communication channel. It extends the standard stabilizer formalism by including shared entanglement (Brun et al. 2006). The advantage of entanglement-assisted stabilizer codes is that the sender can exploit the error-correcting properties of an arbitrary set of Pauli operators. The sender's Pauli operators do not necessarily have to form an Abelian subgroup of the Pauli group Π n {\displaystyle \Pi ^{n}} over n {\displaystyle n} qubits. The sender can make clever use of her shared ebits so that the global stabilizer is Abelian and thus forms a valid quantum error-correcting code. == Definition == We review the construction of an entanglement-assisted code (Brun et al. 2006). Suppose that there is a nonabelian subgroup S ⊂ Π n {\displaystyle {\mathcal {S}}\subset \Pi ^{n}} of size n − k = 2 c + s {\displaystyle n-k=2c+s} . Application of the fundamental theorem of symplectic geometry (Lemma 1 in the first external reference) states that there exists a minimal set of independent generators { Z ¯ 1 , … , Z ¯ s + c , X ¯ s + 1 , … , X ¯ s + c } {\displaystyle \left\{{\bar {Z}}_{1},\ldots ,{\bar {Z}}_{s+c},{\bar {X}}_{s+1},\ldots ,{\bar {X}}_{s+c}\right\}} for S {\displaystyle {\mathcal {S}}} with the following commutation relations: [ Z ¯ i , Z ¯ j ] = 0 ∀ i , j , {\displaystyle \left[{\bar {Z}}_{i},{\bar {Z}}_{j}\right]=0\ \ \ \ \ \forall i,j,} [ X ¯ i , X ¯ j ] = 0 ∀ i , j , {\displaystyle \left[{\bar {X}}_{i},{\bar {X}}_{j}\right]=0\ \ \ \ \ \forall i,j,} [ X ¯ i , Z ¯ j ] = 0 ∀ i ≠ j , {\displaystyle \left[{\bar {X}}_{i},{\bar {Z}}_{j}\right]=0\ \ \ \ \ \forall i\neq j,} { X ¯ i , Z ¯ i } = 0 ∀ i . 
{\displaystyle \left\{{\bar {X}}_{i},{\bar {Z}}_{i}\right\}=0\ \ \ \ \ \forall i.} The decomposition of S {\displaystyle {\mathcal {S}}} into the above minimal generating set determines that the code requires s {\displaystyle s} ancilla qubits and c {\displaystyle c} ebits. The code requires an ebit for every anticommuting pair in the minimal generating set. The simple reason for this requirement is that an ebit is a simultaneous + 1 {\displaystyle +1} -eigenstate of the Pauli operators { X X , Z Z } {\displaystyle \left\{XX,ZZ\right\}} . The second qubit in the ebit transforms the anticommuting pair { X , Z } {\displaystyle \left\{X,Z\right\}} into a commuting pair { X X , Z Z } {\displaystyle \left\{XX,ZZ\right\}} . The above decomposition also minimizes the number of ebits required for the code---it is an optimal decomposition. We can partition the nonabelian group S {\displaystyle {\mathcal {S}}} into two subgroups: the isotropic subgroup S I {\displaystyle {\mathcal {S}}_{I}} and the entanglement subgroup S E {\displaystyle {\mathcal {S}}_{E}} . The isotropic subgroup S I {\displaystyle {\mathcal {S}}_{I}} is a commuting subgroup of S {\displaystyle {\mathcal {S}}} and thus corresponds to ancilla qubits: S I = { Z ¯ 1 , … , Z ¯ s } {\displaystyle {\mathcal {S}}_{I}=\left\{{\bar {Z}}_{1},\ldots ,{\bar {Z}}_{s}\right\}} . The elements of the entanglement subgroup S E {\displaystyle {\mathcal {S}}_{E}} come in anticommuting pairs and thus correspond to ebits: S E = { Z ¯ s + 1 , … , Z ¯ s + c , X ¯ s + 1 , … , X ¯ s + c } {\displaystyle {\mathcal {S}}_{E}=\left\{{\bar {Z}}_{s+1},\ldots ,{\bar {Z}}_{s+c},{\bar {X}}_{s+1},\ldots ,{\bar {X}}_{s+c}\right\}} . == Entanglement-assisted stabilizer code error correction conditions == The two subgroups S I {\displaystyle {\mathcal {S}}_{I}} and S E {\displaystyle {\mathcal {S}}_{E}} play a role in the error-correcting conditions for the entanglement-assisted stabilizer formalism. 
An entanglement-assisted code corrects errors in a set E ⊂ Π n {\displaystyle {\mathcal {E}}\subset \Pi ^{n}} if for all E 1 , E 2 ∈ E {\displaystyle E_{1},E_{2}\in {\mathcal {E}}} , E 1 † E 2 ∈ S I ∪ ( Π n − Z ( ⟨ S I , S E ⟩ ) ) . {\displaystyle E_{1}^{\dagger }E_{2}\in {\mathcal {S}}_{I}\cup \left(\Pi ^{n}-{\mathcal {Z}}\left(\left\langle {\mathcal {S}}_{I},{\mathcal {S}}_{E}\right\rangle \right)\right).} == Operation == The operation of an entanglement-assisted code is as follows. The sender performs an encoding unitary on her unprotected qubits, ancilla qubits, and her half of the ebits. The unencoded state is a simultaneous +1-eigenstate of the following Pauli operators: { Z 1 , … , Z s , Z s + 1 | Z 1 , … , Z s + c | Z c , X s + 1 | X 1 , … , X s + c | X c } . {\displaystyle \left\{Z_{1},\ldots ,Z_{s},Z_{s+1}|Z_{1},\ldots ,Z_{s+c}|Z_{c},X_{s+1}|X_{1},\ldots ,X_{s+c}|X_{c}\right\}.} The Pauli operators to the right of the vertical bars indicate the receiver's half of the shared ebits. The encoding unitary transforms the unencoded Pauli operators to the following encoded Pauli operators: { Z ¯ 1 , … , Z ¯ s , Z ¯ s + 1 | Z 1 , … , Z ¯ s + c | Z c , X ¯ s + 1 | X 1 , … , X ¯ s + c | X c } . {\displaystyle \left\{{\bar {Z}}_{1},\ldots ,{\bar {Z}}_{s},{\bar {Z}}_{s+1}|Z_{1},\ldots ,{\bar {Z}}_{s+c}|Z_{c},{\bar {X}}_{s+1}|X_{1},\ldots ,{\bar {X}}_{s+c}|X_{c}\right\}.} The sender transmits all of her qubits over the noisy quantum channel. The receiver then possesses the transmitted qubits and his half of the ebits. He measures the above encoded operators to diagnose the error. The last step is to correct the error. == Rate of an entanglement-assisted code == We can interpret the rate of an entanglement-assisted code in three different ways (Wilde and Brun 2007b). Suppose that an entanglement-assisted quantum code encodes k {\displaystyle k} information qubits into n {\displaystyle n} physical qubits with the help of c {\displaystyle c} ebits. 
The entanglement-assisted rate assumes that entanglement shared between sender and receiver is free. Bennett et al. make this assumption when deriving the entanglement assisted capacity of a quantum channel for sending quantum information. The entanglement-assisted rate is k / n {\displaystyle k/n} for a code with the above parameters. The trade-off rate assumes that entanglement is not free and a rate pair determines performance. The first number in the pair is the number of noiseless qubits generated per channel use, and the second number in the pair is the number of ebits consumed per channel use. The rate pair is ( k / n , c / n ) {\displaystyle \left(k/n,c/n\right)} for a code with the above parameters. Quantum information theorists have computed asymptotic trade-off curves that bound the rate region in which achievable rate pairs lie. The construction for an entanglement-assisted quantum block code minimizes the number c {\displaystyle c} of ebits given a fixed number k {\displaystyle k} and n {\displaystyle n} of respective information qubits and physical qubits. The catalytic rate assumes that bits of entanglement are built up at the expense of transmitted qubits. A noiseless quantum channel or the encoded use of noisy quantum channel are two different ways to build up entanglement between a sender and receiver. The catalytic rate of an [ n , k ; c ] {\displaystyle \left[n,k;c\right]} code is ( k − c ) / n {\displaystyle \left(k-c\right)/n} . Which interpretation is most reasonable depends on the context in which we use the code. In any case, the parameters n {\displaystyle n} , k {\displaystyle k} , and c {\displaystyle c} ultimately govern performance, regardless of which definition of the rate we use to interpret that performance. == Example of an entanglement-assisted code == We present an example of an entanglement-assisted code that corrects an arbitrary single-qubit error (Brun et al. 2006). 
Suppose the sender wants to use the quantum error-correcting properties of the following nonabelian subgroup of Π 4 {\displaystyle \Pi ^{4}} : Z X Z I Z Z I Z X Y X I X X I X {\displaystyle {\begin{array}{cccc}Z&X&Z&I\\Z&Z&I&Z\\X&Y&X&I\\X&X&I&X\end{array}}} The first two generators anticommute. We obtain a modified third generator by multiplying the third generator by the second. We then multiply the last generator by the first, second, and modified third generators. The error-correcting properties of the generators are invariant under these operations. The modified generators are as follows: g 1 = Z X Z I g 2 = Z Z I Z g 3 = Y X X Z g 4 = Z Y Y X {\displaystyle {\begin{array}{cccccc}g_{1}&=&Z&X&Z&I\\g_{2}&=&Z&Z&I&Z\\g_{3}&=&Y&X&X&Z\\g_{4}&=&Z&Y&Y&X\end{array}}} The above set of generators have the commutation relations given by the fundamental theorem of symplectic geometry: { g 1 , g 2 } = [ g 1 , g 3 ] = [ g 1 , g 4 ] = [ g 2 , g 3 ] = [ g 2 , g 4 ] = [ g 3 , g 4 ] = 0. {\displaystyle \left\{g_{1},g_{2}\right\}=\left[g_{1},g_{3}\right]=\left[g_{1},g_{4}\right]=\left[g_{2},g_{3}\right]=\left[g_{2},g_{4}\right]=\left[g_{3},g_{4}\right]=0.} The above set of generators is unitarily equivalent to the following canonical generators: X I I I Z I I I I Z I I I I Z I {\displaystyle {\begin{array}{cccc}X&I&I&I\\Z&I&I&I\\I&Z&I&I\\I&I&Z&I\end{array}}} We can add one ebit to resolve the anticommutativity of the first two generators and obtain the canonical stabilizer: X Z I I | X I I I Z I I I I Z I I I I Z I {\displaystyle {\begin{array}{c}X\\Z\\I\\I\end{array}}\left\vert {\begin{array}{cccc}X&I&I&I\\Z&I&I&I\\I&Z&I&I\\I&I&Z&I\end{array}}\right.} The receiver Bob possesses the qubit on the left and the sender Alice possesses the four qubits on the right. The following state is an eigenstate of the above stabilizer | Φ + ⟩ B A | 00 ⟩ A | ψ ⟩ A . 
{\displaystyle \left\vert \Phi ^{+}\right\rangle ^{BA}\left\vert 00\right\rangle ^{A}\left\vert \psi \right\rangle ^{A}.} where | ψ ⟩ A {\displaystyle \left\vert \psi \right\rangle ^{A}} is a qubit that the sender wants to encode. The encoding unitary then rotates the canonical stabilizer to the following set of globally commuting generators: X Z I I | Z X Z I Z Z I Z Y X X Z Z Y Y X {\displaystyle {\begin{array}{c}X\\Z\\I\\I\end{array}}\left\vert {\begin{array}{cccc}Z&X&Z&I\\Z&Z&I&Z\\Y&X&X&Z\\Z&Y&Y&X\end{array}}\right.} The receiver measures the above generators upon receipt of all qubits to detect and correct errors. == Encoding algorithm == We continue with the previous example. We detail an algorithm for determining an encoding circuit and the optimal number of ebits for the entanglement-assisted code---this algorithm first appeared in the appendix of (Wilde and Brun 2007a) and later in the appendix of (Shaw et al. 2008). The operators in the above example have the following representation as a binary matrix (See the stabilizer code article): H = [ 1 0 1 0 1 1 0 1 0 1 0 0 0 0 0 0 | 0 1 0 0 0 0 0 0 1 1 1 0 1 1 0 1 ] . {\displaystyle H=\left[\left.{\begin{array}{cccc}1&0&1&0\\1&1&0&1\\0&1&0&0\\0&0&0&0\end{array}}\right\vert {\begin{array}{cccc}0&1&0&0\\0&0&0&0\\1&1&1&0\\1&1&0&1\end{array}}\right].} Call the matrix to the left of the vertical bar the " Z {\displaystyle Z} matrix" and the matrix to the right of the vertical bar the " X {\displaystyle X} matrix." The algorithm consists of row and column operations on the above matrix. Row operations do not affect the error-correcting properties of the code but are crucial for arriving at the optimal decomposition from the fundamental theorem of symplectic geometry. The operations available for manipulating columns of the above matrix are Clifford operations. Clifford operations preserve the Pauli group Π n {\displaystyle \Pi ^{n}} under conjugation. 
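The binary-matrix bookkeeping used in this algorithm is easy to check mechanically. The following sketch (our own helper names, not code from Brun et al.) encodes the four original generators ZXZI, ZZIZ, XYXI, XXIX as binary (z|x) rows, confirms via the symplectic product that the first two generators anticommute, and replays the first three Clifford steps of the worked example (the qubit swap, the Hadamards on qubits two and three, and the CNOTs from qubit one to qubits two and three), reproducing the intermediate matrix given in the text.

```python
# Pauli -> (z, x) binary convention: I=(0,0), X=(0,1), Z=(1,0), Y=(1,1).

def pauli_rows(strings):
    # Build the Z and X matrices, one row per generator.
    table = {"I": (0, 0), "X": (0, 1), "Z": (1, 0), "Y": (1, 1)}
    Z = [[table[p][0] for p in s] for s in strings]
    X = [[table[p][1] for p in s] for s in strings]
    return Z, X

def symplectic_product(Z, X, i, j):
    # Rows i and j anticommute as Pauli operators iff this product is 1.
    n = len(Z[0])
    return sum(Z[i][k] * X[j][k] + X[i][k] * Z[j][k] for k in range(n)) % 2

def swap(Z, X, i, j):
    # Qubit swap: exchange columns i and j in both matrices.
    for M in (Z, X):
        for row in M:
            row[i], row[j] = row[j], row[i]

def hadamard(Z, X, i):
    # Hadamard on qubit i: swap column i between the Z and X matrices.
    for rz, rx in zip(Z, X):
        rz[i], rx[i] = rx[i], rz[i]

def cnot(Z, X, i, j):
    # CNOT from qubit i to qubit j: column i -> j in X, column j -> i in Z.
    for rz, rx in zip(Z, X):
        rx[j] ^= rx[i]
        rz[i] ^= rz[j]

# Original generators ZXZI, ZZIZ, XYXI, XXIX give the matrix H above.
Z, X = pauli_rows(["ZXZI", "ZZIZ", "XYXI", "XXIX"])
assert Z == [[1,0,1,0],[1,1,0,1],[0,1,0,0],[0,0,0,0]]
assert X == [[0,1,0,0],[0,0,0,0],[1,1,1,0],[1,1,0,1]]
# The first two generators anticommute (this is what costs one ebit).
assert symplectic_product(Z, X, 0, 1) == 1

# First three steps of the worked algorithm: swap qubits 1 and 2,
# Hadamard on qubits 2 and 3, CNOTs from qubit 1 to qubits 2 and 3.
swap(Z, X, 0, 1)
hadamard(Z, X, 1); hadamard(Z, X, 2)
cnot(Z, X, 0, 1); cnot(Z, X, 0, 2)

# Matches the intermediate matrix given in the text.
assert Z == [[0,0,0,0],[1,0,0,1],[1,1,1,0],[1,1,0,0]]
assert X == [[1,0,0,0],[0,1,0,0],[1,1,1,0],[1,1,1,1]]
```

Since all of these gates are Clifford operations, the symplectic products between rows are preserved at every step, which the same `symplectic_product` helper can be used to verify.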
The CNOT gate, the Hadamard gate, and the Phase gate generate the Clifford group. A CNOT gate from qubit i {\displaystyle i} to qubit j {\displaystyle j} adds column i {\displaystyle i} to column j {\displaystyle j} in the X {\displaystyle X} matrix and adds column j {\displaystyle j} to column i {\displaystyle i} in the Z {\displaystyle Z} matrix. A Hadamard gate on qubit i {\displaystyle i} swaps column i {\displaystyle i} in the Z {\displaystyle Z} matrix with column i {\displaystyle i} in the X {\displaystyle X} matrix and vice versa. A phase gate on qubit i {\displaystyle i} adds column i {\displaystyle i} in the X {\displaystyle X} matrix to column i {\displaystyle i} in the Z {\displaystyle Z} matrix. Three CNOT gates implement a qubit swap operation. The effect of a swap on qubits i {\displaystyle i} and j {\displaystyle j} is to swap columns i {\displaystyle i} and j {\displaystyle j} in both the X {\displaystyle X} and Z {\displaystyle Z} matrix. The algorithm begins by computing the symplectic product between the first row and all other rows. We emphasize that the symplectic product here is the standard symplectic product. Leave the matrix as it is if the first row is not symplectically orthogonal to the second row or if the first row is symplectically orthogonal to all other rows. Otherwise, swap the second row with the first available row that is not symplectically orthogonal to the first row. In our example, the first row is not symplectically orthogonal to the second so we leave all rows as they are. Arrange the first row so that the top left entry in the X {\displaystyle X} matrix is one. A CNOT, swap, Hadamard, or combinations of these operations can achieve this result. We can have this result in our example by swapping qubits one and two. The matrix becomes [ 0 1 1 0 1 1 0 1 1 0 0 0 0 0 0 0 | 1 0 0 0 0 0 0 0 1 1 1 0 1 1 0 1 ] . 
{\displaystyle \left[\left.{\begin{array}{cccc}0&1&1&0\\1&1&0&1\\1&0&0&0\\0&0&0&0\end{array}}\right\vert {\begin{array}{cccc}1&0&0&0\\0&0&0&0\\1&1&1&0\\1&1&0&1\end{array}}\right].} Perform CNOTs to clear the entries in the X {\displaystyle X} matrix in the top row to the right of the leftmost entry. These entries are already zero in this example so we need not do anything. Proceed to clear the entries in the first row of the Z {\displaystyle Z} matrix. Perform a phase gate to clear the leftmost entry in the first row of the Z {\displaystyle Z} matrix if it is equal to one. It is equal to zero in this case so we need not do anything. We then use Hadamards and CNOTs to clear the other entries in the first row of the Z {\displaystyle Z} matrix. We perform the above operations for our example. Perform a Hadamard on qubits two and three. The matrix becomes [ 0 0 0 0 1 0 0 1 1 1 1 0 0 1 0 0 | 1 1 1 0 0 1 0 0 1 0 0 0 1 0 0 1 ] . {\displaystyle \left[\left.{\begin{array}{cccc}0&0&0&0\\1&0&0&1\\1&1&1&0\\0&1&0&0\end{array}}\right\vert {\begin{array}{cccc}1&1&1&0\\0&1&0&0\\1&0&0&0\\1&0&0&1\end{array}}\right].} Perform a CNOT from qubit one to qubit two and from qubit one to qubit three. The matrix becomes [ 0 0 0 0 1 0 0 1 1 1 1 0 1 1 0 0 | 1 0 0 0 0 1 0 0 1 1 1 0 1 1 1 1 ] . {\displaystyle \left[\left.{\begin{array}{cccc}0&0&0&0\\1&0&0&1\\1&1&1&0\\1&1&0&0\end{array}}\right\vert {\begin{array}{cccc}1&0&0&0\\0&1&0&0\\1&1&1&0\\1&1&1&1\end{array}}\right].} The first row is complete. We now proceed to clear the entries in the second row. Perform a Hadamard on qubits one and four. The matrix becomes [ 1 0 0 0 0 0 0 0 1 1 1 0 1 1 0 1 | 0 0 0 0 1 1 0 1 1 1 1 0 1 1 1 0 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\1&1&1&0\\1&1&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&1&0&1\\1&1&1&0\\1&1&1&0\end{array}}\right].} Perform a CNOT from qubit one to qubit two and from qubit one to qubit four. 
The matrix becomes [ 1 0 0 0 0 0 0 0 0 1 1 0 1 1 0 1 | 0 0 0 0 1 0 0 0 1 0 1 1 1 0 1 1 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&1&1&0\\1&1&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\1&0&1&1\\1&0&1&1\end{array}}\right].} The first two rows are now complete. They need one ebit to compensate for their anticommutativity or their nonorthogonality with respect to the symplectic product. Now we perform a "Gram-Schmidt orthogonalization" with respect to the symplectic product. Add row one to any other row that has one as the leftmost entry in its Z {\displaystyle Z} matrix. Add row two to any other row that has one as the leftmost entry in its X {\displaystyle X} matrix. For our example, we add row one to row four and we add row two to rows three and four. The matrix becomes [ 1 0 0 0 0 0 0 0 0 1 1 0 0 1 0 1 | 0 0 0 0 1 0 0 0 0 0 1 1 0 0 1 1 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&1&1&0\\0&1&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&0&1&1\\0&0&1&1\end{array}}\right].} The first two rows are now symplectically orthogonal to all other rows per the fundamental theorem of symplectic geometry. We proceed with the same algorithm on the next two rows. The next two rows are symplectically orthogonal to each other so we can deal with them individually. Perform a Hadamard on qubit two. The matrix becomes [ 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 | 0 0 0 0 1 0 0 0 0 1 1 1 0 1 1 1 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&1&1&1\\0&1&1&1\end{array}}\right].} Perform a CNOT from qubit two to qubit three and from qubit two to qubit four. The matrix becomes [ 1 0 0 0 0 0 0 0 0 1 1 0 0 1 0 1 | 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 ] . 
{\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&1&1&0\\0&1&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&1&0&0\end{array}}\right].} Perform a phase gate on qubit two: [ 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 | 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&1&0&0\end{array}}\right].} Perform a Hadamard on qubit three followed by a CNOT from qubit two to qubit three: [ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 | 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 0 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&1&1&0\end{array}}\right].} Add row three to row four and perform a Hadamard on qubit two: [ 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 | 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&1&0&0\\0&0&0&1\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&0&0&0\\0&0&1&0\end{array}}\right].} Perform a Hadamard on qubit four followed by a CNOT from qubit three to qubit four. End by performing a Hadamard on qubit three: [ 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 | 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 ] . {\displaystyle \left[\left.{\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&1&0&0\\0&0&1&0\end{array}}\right\vert {\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{array}}\right].} The above matrix now corresponds to the canonical Pauli operators. Adding one half of an ebit to the receiver's side gives the canonical stabilizer whose simultaneous +1-eigenstate is the above state. The above operations in reverse order take the canonical stabilizer to the encoded stabilizer. == References == Brun, T.; Devetak, I.; Hsieh, M.-H. (2006-10-20). "Correcting Quantum Errors with Entanglement". Science. 314 (5798). 
American Association for the Advancement of Science (AAAS): 436–439. arXiv:quant-ph/0610092. Bibcode:2006Sci...314..436B. doi:10.1126/science.1131563. ISSN 0036-8075. PMID 17008489. S2CID 18106089. Min-Hsiu Hsieh. Entanglement-assisted Coding Theory. Ph.D. Dissertation, University of Southern California, August 2008. Available at https://arxiv.org/abs/0807.2080 Mark M. Wilde. Quantum Coding with Entanglement. Ph.D. Dissertation, University of Southern California, August 2008. Available at https://arxiv.org/abs/0806.4214 Hsieh, Min-Hsiu; Devetak, Igor; Brun, Todd (2007-12-19). "General entanglement-assisted quantum error-correcting codes". Physical Review A. 76 (6): 062313. arXiv:0708.2142. Bibcode:2007PhRvA..76f2313H. doi:10.1103/physreva.76.062313. ISSN 1050-2947. S2CID 119155178. Kremsky, Isaac; Hsieh, Min-Hsiu; Brun, Todd A. (2008-07-21). "Classical enhancement of quantum-error-correcting codes". Physical Review A. 78 (1): 012341. arXiv:0802.2414. Bibcode:2008PhRvA..78a2341K. doi:10.1103/physreva.78.012341. ISSN 1050-2947. S2CID 119252610. Wilde, Mark M.; Brun, Todd A. (2008-06-19). "Optimal entanglement formulas for entanglement-assisted quantum coding". Physical Review A. 77 (6): 064302. arXiv:0804.1404. Bibcode:2008PhRvA..77f4302W. doi:10.1103/physreva.77.064302. ISSN 1050-2947. S2CID 118411793. Wilde, Mark M.; Krovi, Hari; Brun, Todd A. (2010). "Convolutional entanglement distillation". 2010 IEEE International Symposium on Information Theory. IEEE. pp. 2657–2661. arXiv:0708.3699. doi:10.1109/isit.2010.5513666. ISBN 978-1-4244-7892-7. Wilde, Mark M.; Brun, Todd A. (2010-04-30). "Entanglement-assisted quantum convolutional coding". Physical Review A. 81 (4): 042333. arXiv:0712.2223. Bibcode:2010PhRvA..81d2333W. doi:10.1103/physreva.81.042333. ISSN 1050-2947. S2CID 8410654. Wilde, Mark M.; Brun, Todd A. (2010-06-08). "Quantum convolutional coding with shared entanglement: general structure". Quantum Information Processing. 9 (5). 
Springer Science and Business Media LLC: 509–540. arXiv:0807.3803. doi:10.1007/s11128-010-0179-9. ISSN 1570-0755. S2CID 18185704. Shaw, Bilal; Wilde, Mark M.; Oreshkov, Ognyan; Kremsky, Isaac; Lidar, Daniel A. (2008-07-18). "Encoding one logical qubit into six physical qubits". Physical Review A. 78 (1): 012337. arXiv:0803.1495. Bibcode:2008PhRvA..78a2337S. doi:10.1103/physreva.78.012337. ISSN 1050-2947. S2CID 40040752.
|
Wikipedia:Enumerator polynomial#0
|
In coding theory, the weight enumerator polynomial of a binary linear code specifies the number of words of each possible Hamming weight. Let C ⊂ F 2 n {\displaystyle C\subset \mathbb {F} _{2}^{n}} be a binary linear code of length n {\displaystyle n} . The weight distribution is the sequence of numbers A t = # { c ∈ C ∣ w ( c ) = t } {\displaystyle A_{t}=\#\{c\in C\mid w(c)=t\}} giving the number of codewords c in C having weight t as t ranges from 0 to n. The weight enumerator is the bivariate polynomial W ( C ; x , y ) = ∑ w = 0 n A w x w y n − w . {\displaystyle W(C;x,y)=\sum _{w=0}^{n}A_{w}x^{w}y^{n-w}.} == Basic properties == W ( C ; 0 , 1 ) = A 0 = 1 {\displaystyle W(C;0,1)=A_{0}=1} W ( C ; 1 , 1 ) = ∑ w = 0 n A w = | C | {\displaystyle W(C;1,1)=\sum _{w=0}^{n}A_{w}=|C|} W ( C ; 1 , 0 ) = A n = 1 if ( 1 , … , 1 ) ∈ C and 0 otherwise {\displaystyle W(C;1,0)=A_{n}=1{\mbox{ if }}(1,\ldots ,1)\in C\ {\mbox{ and }}0{\mbox{ otherwise}}} W ( C ; 1 , − 1 ) = ∑ w = 0 n A w ( − 1 ) n − w = A n + ( − 1 ) 1 A n − 1 + … + ( − 1 ) n − 1 A 1 + ( − 1 ) n A 0 {\displaystyle W(C;1,-1)=\sum _{w=0}^{n}A_{w}(-1)^{n-w}=A_{n}+(-1)^{1}A_{n-1}+\ldots +(-1)^{n-1}A_{1}+(-1)^{n}A_{0}} == MacWilliams identity == Denote the dual code of C ⊂ F 2 n {\displaystyle C\subset \mathbb {F} _{2}^{n}} by C ⊥ = { x ∈ F 2 n ∣ ⟨ x , c ⟩ = 0 ∀ c ∈ C } {\displaystyle C^{\perp }=\{x\in \mathbb {F} _{2}^{n}\,\mid \,\langle x,c\rangle =0{\mbox{ }}\forall c\in C\}} (where ⟨ , ⟩ {\displaystyle \langle \ ,\ \rangle } denotes the vector dot product, taken over F 2 {\displaystyle \mathbb {F} _{2}} ). The MacWilliams identity states that W ( C ⊥ ; x , y ) = 1 ∣ C ∣ W ( C ; y − x , y + x ) . {\displaystyle W(C^{\perp };x,y)={\frac {1}{\mid C\mid }}W(C;y-x,y+x).} The identity is named after Jessie MacWilliams. 
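As a concrete illustration (our own small script, not taken from the references below), the following computes the weight distribution of the [3,1] binary repetition code, checks the basic properties above, and verifies the MacWilliams identity numerically against the dual code, which here is the single-parity-check code:

```python
from itertools import product

n = 3
C = [(0, 0, 0), (1, 1, 1)]  # the [3,1] binary repetition code

def dual(C, n):
    # Brute-force dual: all length-n words orthogonal (mod 2) to every codeword.
    return [v for v in product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)]

def weight_distribution(C, n):
    # A_t = number of codewords of Hamming weight t, for t = 0, ..., n.
    A = [0] * (n + 1)
    for c in C:
        A[sum(c)] += 1
    return A

def W(C, n, x, y):
    # Weight enumerator W(C; x, y) evaluated at the point (x, y).
    return sum(x ** sum(c) * y ** (n - sum(c)) for c in C)

A = weight_distribution(C, n)
assert A == [1, 0, 0, 1]        # W(C; x, y) = y^3 + x^3
assert W(C, n, 0, 1) == 1       # A_0 = 1
assert W(C, n, 1, 1) == len(C)  # total number of codewords

Cd = dual(C, n)
assert weight_distribution(Cd, n) == [1, 0, 3, 0]  # parity-check code: y^3 + 3x^2y

# MacWilliams identity, checked at a few sample points (x, y):
for x, y in [(1, 2), (3, 5), (2, 7)]:
    assert W(Cd, n, x, y) == W(C, n, y - x, y + x) // len(C)
```

Checking the identity at sample points rather than symbolically is enough here, since two polynomials of bounded degree that agree on sufficiently many points are equal.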
== Distance enumerator == The distance distribution or inner distribution of a code C of size M and length n is the sequence of numbers A i = 1 M # { ( c 1 , c 2 ) ∈ C × C ∣ d ( c 1 , c 2 ) = i } {\displaystyle A_{i}={\frac {1}{M}}\#\left\lbrace (c_{1},c_{2})\in C\times C\mid d(c_{1},c_{2})=i\right\rbrace } where i ranges from 0 to n. The distance enumerator polynomial is A ( C ; x , y ) = ∑ i = 0 n A i x i y n − i {\displaystyle A(C;x,y)=\sum _{i=0}^{n}A_{i}x^{i}y^{n-i}} and when C is linear this is equal to the weight enumerator. The outer distribution of C is the 2n-by-(n+1) matrix B with rows indexed by elements of GF(2)n and columns indexed by the integers 0, ..., n, and entries B x , i = # { c ∈ C ∣ d ( c , x ) = i } . {\displaystyle B_{x,i}=\#\left\lbrace c\in C\mid d(c,x)=i\right\rbrace .} The sum of the rows of B is M times the inner distribution vector (A0,...,An). A code C is regular if the rows of B corresponding to the codewords of C are all equal. == References == Hill, Raymond (1986). A first course in coding theory. Oxford Applied Mathematics and Computing Science Series. Oxford University Press. pp. 165–173. ISBN 0-19-853803-0. Pless, Vera (1982). Introduction to the theory of error-correcting codes. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons. pp. 103–119. ISBN 0-471-08684-3. J.H. van Lint (1992). Introduction to Coding Theory. GTM. Vol. 86 (2nd ed.). Springer-Verlag. ISBN 3-540-54894-7. Chapters 3.5 and 4.3.
|
Wikipedia:Envelope theorem#0
|
In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models. The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions { f ( x , ⋅ ) } x ∈ X {\displaystyle \left\{f\left(x,\cdot \right)\right\}_{x\in X}} that are optimized. == Statement == Let f ( x , α ) {\displaystyle f(x,\alpha )} and g j ( x , α ) , j = 1 , 2 , … , m {\displaystyle g_{j}(x,\alpha ),j=1,2,\ldots ,m} be real-valued continuously differentiable functions on R n + l {\displaystyle \mathbb {R} ^{n+l}} , where x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} are choice variables and α ∈ R l {\displaystyle \alpha \in \mathbb {R} ^{l}} are parameters, and consider the problem of choosing x {\displaystyle x} , for a given α {\displaystyle \alpha } , so as to: max x f ( x , α ) {\displaystyle \max _{x}f(x,\alpha )} subject to g j ( x , α ) ≥ 0 , j = 1 , 2 , … , m {\displaystyle g_{j}(x,\alpha )\geq 0,j=1,2,\ldots ,m} and x ≥ 0 {\displaystyle x\geq 0} . The Lagrangian expression of this problem is given by L ( x , λ , α ) = f ( x , α ) + λ ⋅ g ( x , α ) {\displaystyle {\mathcal {L}}(x,\lambda ,\alpha )=f(x,\alpha )+\lambda \cdot g(x,\alpha )} where λ ∈ R m {\displaystyle \lambda \in \mathbb {R} ^{m}} are the Lagrange multipliers. 
Now let x ∗ ( α ) {\displaystyle x^{\ast }(\alpha )} and λ ∗ ( α ) {\displaystyle \lambda ^{\ast }(\alpha )} together be the solution that maximizes the objective function f subject to the constraints (and hence are saddle points of the Lagrangian), L ∗ ( α ) ≡ f ( x ∗ ( α ) , α ) + λ ∗ ( α ) ⋅ g ( x ∗ ( α ) , α ) , {\displaystyle {\mathcal {L}}^{\ast }(\alpha )\equiv f(x^{\ast }(\alpha ),\alpha )+\lambda ^{\ast }(\alpha )\cdot g(x^{\ast }(\alpha ),\alpha ),} and define the value function V ( α ) ≡ f ( x ∗ ( α ) , α ) . {\displaystyle V(\alpha )\equiv f(x^{\ast }(\alpha ),\alpha ).} Then we have the following theorem. Theorem: Assume that V {\displaystyle V} and L {\displaystyle {\mathcal {L}}} are continuously differentiable. Then ∂ V ( α ) ∂ α k = ∂ L ∗ ( α ) ∂ α k = ∂ L ( x ∗ ( α ) , λ ∗ ( α ) , α ) ∂ α k , k = 1 , 2 , … , l {\displaystyle {\frac {\partial V(\alpha )}{\partial \alpha _{k}}}={\frac {\partial {\mathcal {L}}^{\ast }(\alpha )}{\partial \alpha _{k}}}={\frac {\partial {\mathcal {L}}(x^{\ast }(\alpha ),\lambda ^{\ast }(\alpha ),\alpha )}{\partial \alpha _{k}}},k=1,2,\ldots ,l} where ∂ L / ∂ α k = ∂ f / ∂ α k + λ ⋅ ∂ g / ∂ α k {\displaystyle \partial {\mathcal {L}}/\partial \alpha _{k}=\partial f/\partial \alpha _{k}+\lambda \cdot \partial g/\partial \alpha _{k}} . == For arbitrary choice sets == Let X {\displaystyle X} denote the choice set and let the relevant parameter be t ∈ [ 0 , 1 ] {\displaystyle t\in \lbrack 0,1]} . 
Letting f : X × [ 0 , 1 ] → R {\displaystyle f:X\times \lbrack 0,1]\rightarrow R} denote the parameterized objective function, the value function V {\displaystyle V} and the optimal choice correspondence (set-valued function) X ∗ {\displaystyle X^{\ast }} are given by: V ( t ) = sup x ∈ X f ( x , t ) , ( 1 ) {\displaystyle V(t)=\sup _{x\in X}f(x,t),\qquad (1)} X ∗ ( t ) = { x ∈ X : f ( x , t ) = V ( t ) } . ( 2 ) {\displaystyle X^{\ast }(t)=\{x\in X:f(x,t)=V(t)\}.\qquad (2)} "Envelope theorems" describe sufficient conditions for the value function V {\displaystyle V} to be differentiable in the parameter t {\displaystyle t} and describe its derivative as V ′ ( t ) = f t ( x , t ) for any x ∈ X ∗ ( t ) , ( 3 ) {\displaystyle V^{\prime }\left(t\right)=f_{t}\left(x,t\right){\text{ for any }}x\in X^{\ast }\left(t\right),\qquad (3)} where f t {\displaystyle f_{t}} denotes the partial derivative of f {\displaystyle f} with respect to t {\displaystyle t} . Namely, the derivative of the value function with respect to the parameter equals the partial derivative of the objective function with respect to t {\displaystyle t} holding the maximizer fixed at its optimal level. Traditional envelope theorem derivations use the first-order condition for (1), which requires that the choice set X {\displaystyle X} be convex with appropriate topological structure, and that the objective function f {\displaystyle f} be differentiable in the variable x {\displaystyle x} . (The argument is that changes in the maximizer have only a "second-order effect" at the optimum and so can be ignored.) However, in many applications such as the analysis of incentive constraints in contract theory and game theory, nonconvex production problems, and "monotone" or "robust" comparative statics, the choice sets and objective functions generally lack the topological and convexity properties required by the traditional envelope theorems. Paul Milgrom and Ilya Segal (2002) observe that the traditional envelope formula holds for optimization problems with arbitrary choice sets at any differentiability point of the value function, provided that the objective function is differentiable in the parameter: Theorem 1: Let t ∈ ( 0 , 1 ) {\displaystyle t\in \left(0,1\right)} and x ∈ X ∗ ( t ) {\displaystyle x\in X^{\ast }\left(t\right)} . 
If both V ′ ( t ) {\displaystyle V^{\prime }\left(t\right)} and f t ( x , t ) {\displaystyle f_{t}\left(x,t\right)} exist, the envelope formula (3) holds. Proof: Equation (1) implies that for x ∈ X ∗ ( t ) {\displaystyle x\in X^{\ast }\left(t\right)} , max s ∈ [ 0 , 1 ] [ f ( x , s ) − V ( s ) ] = f ( x , t ) − V ( t ) = 0. {\displaystyle \max _{s\in \left[0,1\right]}\left[f\left(x,s\right)-V\left(s\right)\right]=f\left(x,t\right)-V\left(t\right)=0.} Under the assumptions, the objective function of the displayed maximization problem is differentiable at s = t {\displaystyle s=t} , and the first-order condition for this maximization is exactly equation (3). Q.E.D. While differentiability of the value function in general requires strong assumptions, in many applications weaker conditions such as absolute continuity, differentiability almost everywhere, or left- and right-differentiability, suffice. In particular, Milgrom and Segal's (2002) Theorem 2 offers a sufficient condition for V {\displaystyle V} to be absolutely continuous, which means that it is differentiable almost everywhere and can be represented as an integral of its derivative: Theorem 2: Suppose that f ( x , ⋅ ) {\displaystyle f(x,\cdot )} is absolutely continuous for all x ∈ X {\displaystyle x\in X} . Suppose also that there exists an integrable function b : [ 0 , 1 ] {\displaystyle b:[0,1]} → {\displaystyle \rightarrow } R + {\displaystyle \mathbb {R} _{+}} such that | f t ( x , t ) | ≤ b ( t ) {\displaystyle |f_{t}(x,t)|\leq b(t)} for all x ∈ X {\displaystyle x\in X} and almost all t ∈ [ 0 , 1 ] {\displaystyle t\in \lbrack 0,1]} . Then V {\displaystyle V} is absolutely continuous. Suppose, in addition, that f ( x , ⋅ ) {\displaystyle f(x,\cdot )} is differentiable for all x ∈ X {\displaystyle x\in X} , and that X ∗ ( t ) ≠ ∅ {\displaystyle X^{\ast }(t)\neq \varnothing } almost everywhere on [ 0 , 1 ] {\displaystyle [0,1]} . 
Then for any selection x ∗ ( t ) ∈ X ∗ ( t ) {\displaystyle x^{\ast }(t)\in X^{\ast }(t)} , V ( t ) = V ( 0 ) + ∫ 0 t f t ( x ∗ ( s ) , s ) d s . {\displaystyle V(t)=V(0)+\int _{0}^{t}f_{t}(x^{\ast }(s),s)ds.} (4) Proof: Using (1), observe that for any t ′ , t ′ ′ ∈ [ 0 , 1 ] {\displaystyle t^{\prime },t^{\prime \prime }\in \lbrack 0,1]} with t ′ < t ′ ′ {\displaystyle t^{\prime }<t^{\prime \prime }} , | V ( t ′ ′ ) − V ( t ′ ) | ≤ sup x ∈ X | f ( x , t ′ ′ ) − f ( x , t ′ ) | = sup x ∈ X | ∫ t ′ t ′ ′ f t ( x , t ) d t | ≤ ∫ t ′ t ′ ′ sup x ∈ X | f t ( x , t ) | d t ≤ ∫ t ′ t ′ ′ b ( t ) d t . {\displaystyle |V(t^{\prime \prime })-V(t^{\prime })|\leq \sup _{x\in X}|f(x,t^{\prime \prime })-f(x,t^{\prime })|=\sup _{x\in X}\left\vert \int _{t^{\prime }}^{t^{\prime \prime }}f_{t}(x,t)dt\right\vert \leq \int _{t^{\prime }}^{t^{\prime \prime }}\sup _{x\in X}|f_{t}(x,t)|dt\leq \int _{t^{\prime }}^{t^{\prime \prime }}b(t)dt.} This implies that V {\displaystyle V} is absolutely continuous. Therefore, V {\displaystyle V} is differentiable almost everywhere, and using (3) yields (4). Q.E.D. This result dispels the common misconception that nice behavior of the value function requires correspondingly nice behavior of the maximizer. Theorem 2 ensures the absolute continuity of the value function even though the maximizer may be discontinuous. In a similar vein, Milgrom and Segal's (2002) Theorem 3 implies that the value function must be differentiable at t = t 0 {\displaystyle t=t_{0}} and hence satisfy the envelope formula (3) when the family { f ( x , ⋅ ) } x ∈ X {\displaystyle \left\{f\left(x,\cdot \right)\right\}_{x\in X}} is equi-differentiable at t 0 ∈ ( 0 , 1 ) {\displaystyle t_{0}\in \left(0,1\right)} and f t ( X ∗ ( t ) , t 0 ) {\displaystyle f_{t}\left(X^{\ast }\left(t\right),t_{0}\right)} is single-valued and continuous at t = t 0 {\displaystyle t=t_{0}} , even if the maximizer is not differentiable at t 0 {\displaystyle t_{0}} (e.g., if X {\displaystyle X} is described by a set of inequality constraints and the set of binding constraints changes at t 0 {\displaystyle t_{0}} ). 
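Theorems 1 and 2 can be checked numerically even on a deliberately nonconvex choice set, where the classical first-order-condition argument is unavailable. The sketch below uses an invented two-point choice set and objective (all values are for illustration only) and assumes NumPy is available:

```python
import numpy as np

def trapezoid(y, x):    # plain trapezoidal rule
    y, x = np.asarray(y), np.asarray(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2))

# A nonconvex (two-point) choice set with a made-up objective.
X = [0.2, 0.8]
f = lambda x, t: t * x - x**2                       # f_t(x, t) = x

V = lambda t: max(f(x, t) for x in X)               # value function
x_star = lambda t: max(X, key=lambda x: f(x, t))    # a maximizer selection

# Theorem 1: V'(t) = f_t(x*(t), t) at a differentiability point.
t, h = 0.5, 1e-6
dV = (V(t + h) - V(t - h)) / (2 * h)
print(dV, x_star(t))                                # both ≈ 0.2

# Theorem 2 (integral condition): V(T) - V(0) = ∫_0^T x*(s) ds,
# even though the maximizer jumps from 0.2 to 0.8 at s = 1.
T = 1.5
s = np.linspace(0.0, T, 300001)
gain = V(T) - V(0)
integ = trapezoid([x_star(si) for si in s], s)
print(gain, integ)                                  # both ≈ 0.6
```

The value function is differentiable everywhere except at the switch point, and the integral representation holds across the switch, illustrating that a discontinuous maximizer does not spoil the absolute continuity of V.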
== Applications == === Applications to producer theory === Theorem 1 implies Hotelling's lemma at any differentiability point of the profit function, and Theorem 2 implies the producer surplus formula. Formally, let π ( p ) {\displaystyle \pi \left(p\right)} denote the indirect profit function of a price-taking firm with production set X ⊆ R L {\displaystyle X\subseteq \mathbb {R} ^{L}} facing prices p ∈ R L {\displaystyle p\in \mathbb {R} ^{L}} , and let x ∗ ( p ) {\displaystyle x^{\ast }\left(p\right)} denote the firm's supply function, i.e., π ( p ) = max x ∈ X p ⋅ x = p ⋅ x ∗ ( p ) . {\displaystyle \pi (p)=\max _{x\in X}p\cdot x=p\cdot x^{\ast }\left(p\right){\text{.}}} Let t = p i {\displaystyle t=p_{i}} (the price of good i {\displaystyle i} ) and fix the other goods' prices at p − i ∈ R L − 1 {\displaystyle p_{-i}\in \mathbb {R} ^{L-1}} . Applying Theorem 1 to f ( x , t ) = t x i + p − i ⋅ x − i {\displaystyle f(x,t)=tx_{i}+p_{-i}\cdot x_{-i}} yields ∂ π ( p ) ∂ p i = x i ∗ ( p ) {\displaystyle {\frac {\partial \pi (p)}{\partial p_{i}}}=x_{i}^{\ast }(p)} (the firm's optimal supply of good i {\displaystyle i} ). Applying Theorem 2 (whose assumptions are verified when p i {\displaystyle p_{i}} is restricted to a bounded interval) yields π ( t , p − i ) − π ( 0 , p − i ) = ∫ 0 p i x i ∗ ( s , p − i ) d s , {\displaystyle \pi (t,p_{-i})-\pi (0,p_{-i})=\int _{0}^{p_{i}}x_{i}^{\ast }(s,p_{-i})ds,} i.e. the producer surplus π ( t , p − i ) − π ( 0 , p − i ) {\displaystyle \pi (t,p_{-i})-\pi (0,p_{-i})} can be obtained by integrating under the firm's supply curve for good i {\displaystyle i} . === Applications to mechanism design and auction theory === Consider an agent whose utility function f ( x , t ) {\displaystyle f(x,t)} over outcomes x ∈ X ¯ {\displaystyle x\in {\bar {X}}} depends on his type t ∈ [ 0 , 1 ] {\displaystyle t\in \lbrack 0,1]} . 
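Returning briefly to the producer-theory formulas above, both Hotelling's lemma and the producer surplus integral can be verified on a small finite (hence nonconvex) production set. The netput vectors below are invented for illustration, and NumPy is assumed:

```python
import numpy as np

def trapezoid(y, x):    # plain trapezoidal rule
    y, x = np.asarray(y), np.asarray(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2))

# Hypothetical finite production set of netput vectors in R^2
# (positive entries are outputs, negative entries are inputs).
X = [np.array(v) for v in [(0.0, 0.0), (1.0, -0.5), (1.5, -1.2)]]

profit = lambda p: max(p @ x for x in X)          # π(p) = max_x p·x
supply = lambda p: max(X, key=lambda x: p @ x)    # x*(p)

p, h = np.array([1.0, 1.0]), 1e-6
for i in range(2):      # Hotelling's lemma: ∂π/∂p_i = x_i*(p)
    e = np.zeros(2); e[i] = h
    print((profit(p + e) - profit(p - e)) / (2 * h), supply(p)[i])

# Producer surplus: π(1, p2) - π(0, p2) = ∫_0^1 x1*(s, p2) ds at p2 = 1.
s = np.linspace(0.0, 1.0, 50001)
surplus = trapezoid([supply(np.array([si, 1.0]))[0] for si in s], s)
print(profit(p) - profit(np.array([0.0, 1.0])), surplus)   # both ≈ 0.5
```

Here the supply of good 1 is a step function of its own price, and the surplus integral still recovers the profit difference exactly as Theorem 2 predicts.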
Let X ⊆ X ¯ {\displaystyle X\subseteq {\bar {X}}} represent the "menu" of possible outcomes the agent could obtain in the mechanism by sending different messages. The agent's equilibrium utility V ( t ) {\displaystyle V(t)} in the mechanism is then given by (1), and the set X ∗ ( t ) {\displaystyle X^{\ast }(t)} of the mechanism's equilibrium outcomes is given by (2). Any selection x ∗ ( t ) ∈ X ∗ ( t ) {\displaystyle x^{\ast }(t)\in X^{\ast }(t)} is a choice rule implemented by the mechanism. Suppose that the agent's utility function f ( x , t ) {\displaystyle f(x,t)} is differentiable and absolutely continuous in t {\displaystyle t} for all x ∈ X ¯ {\displaystyle x\in {\bar {X}}} , and that sup x ∈ X ¯ | f t ( x , t ) | {\displaystyle \sup _{x\in {\bar {X}}}|f_{t}(x,t)|} is integrable on [ 0 , 1 ] {\displaystyle [0,1]} . Then Theorem 2 implies that the agent's equilibrium utility V {\displaystyle V} in any mechanism implementing a given choice rule x ∗ {\displaystyle x^{\ast }} must satisfy the integral condition (4). The integral condition (4) is a key step in the analysis of mechanism design problems with continuous type spaces. In particular, in Myerson's (1981) analysis of single-item auctions, the outcome from the viewpoint of one bidder can be described as x = ( y , z ) {\displaystyle x=\left(y,z\right)} , where y {\displaystyle y} is the bidder's probability of receiving the object and z {\displaystyle z} is his expected payment, and the bidder's expected utility takes the form f ( ( y , z ) , t ) = t y − z {\displaystyle f\left(\left(y,z\right),t\right)=ty-z} . In this case, letting t _ {\displaystyle {\underline {t}}} denote the bidder's lowest possible type, the integral condition (4) for the bidder's equilibrium expected utility V {\displaystyle V} takes the form V ( t ) − V ( t _ ) = ∫ 0 t y ∗ ( s ) d s . 
{\displaystyle V(t)-V({\underline {t}})=\int _{0}^{t}y^{\ast }(s)ds.} (This equation can be interpreted as the producer surplus formula for the firm whose production technology for converting numeraire z {\displaystyle z} into probability y {\displaystyle y} of winning the object is defined by the auction and which resells the object at a fixed price t {\displaystyle t} ). This condition in turn yields Myerson's (1981) celebrated revenue equivalence theorem: the expected revenue generated in an auction in which bidders have independent private values is fully determined by the bidders' probabilities y ∗ ( t ) {\displaystyle y^{\ast }\left(t\right)} of getting the object for all types t {\displaystyle t} as well as by the expected payoffs V ( t _ ) {\displaystyle V({\underline {t}})} of the bidders' lowest types. Finally, this condition is a key step in Myerson's (1981) analysis of optimal auctions. For other applications of the envelope theorem to mechanism design see Mirrlees (1971), Holmstrom (1979), Laffont and Maskin (1980), Riley and Samuelson (1981), Fudenberg and Tirole (1991), and Williams (1999). While these authors derived and exploited the envelope theorem by restricting attention to (piecewise) continuously differentiable choice rules or even narrower classes, it may sometimes be optimal to implement a choice rule that is not piecewise continuously differentiable. (One example is the class of trading problems with linear utility described in chapter 6.5 of Myerson (1991).) 
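The integral condition can be checked numerically in a standard example not derived above: a second-price auction with independent uniform values, where truthful bidding is an equilibrium and a type-t bidder wins with probability y*(t) = t^(n-1). The sketch assumes NumPy:

```python
import numpy as np

def trapezoid(y, x):   # plain trapezoidal rule
    y, x = np.asarray(y), np.asarray(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2))

# Second-price auction, n bidders, i.i.d. uniform[0,1] values.
# The lowest type earns nothing, so V(t_) = V(0) = 0.
n = 3
t = 0.7
s = np.linspace(0.0, t, 200001)

# Direct computation of V(t): expected gain (t - s) over the highest
# rival value s, whose density is (n-1)*s**(n-2), on the winning region s <= t.
V_t = trapezoid((t - s) * (n - 1) * s**(n - 2), s)

# Integral condition: V(t) - V(0) = ∫_0^t y*(s) ds with y*(s) = s**(n-1).
rhs = trapezoid(s**(n - 1), s)
print(V_t, rhs)        # both ≈ t**n / n ≈ 0.11433
```

Both sides equal t^n/n, the equilibrium information rent of type t, regardless of how the payment rule is specified, which is the content of revenue equivalence.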
Note that the integral condition (4) still holds in this setting and implies such important results as Holmstrom's lemma (Holmstrom, 1979), Myerson's lemma (Myerson, 1981), the revenue equivalence theorem (for auctions), the Green–Laffont–Holmstrom theorem (Green and Laffont, 1979; Holmstrom, 1979), the Myerson–Satterthwaite inefficiency theorem (Myerson and Satterthwaite, 1983), the Jehiel–Moldovanu impossibility theorems (Jehiel and Moldovanu, 2001), the McAfee–McMillan weak-cartels theorem (McAfee and McMillan, 1992), and Weber's martingale theorem (Weber, 1983). The details of these applications are provided in Chapter 3 of Milgrom (2004), who offers an elegant and unifying framework in auction and mechanism design analysis mainly based on the envelope theorem and other familiar techniques and concepts in demand theory. === Applications to multidimensional parameter spaces === For a multidimensional parameter space T ⊆ R K {\displaystyle T\subseteq \mathbb {R} ^{K}} , Theorem 1 can be applied to partial and directional derivatives of the value function. If both the objective function f {\displaystyle f} and the value function V {\displaystyle V} are (totally) differentiable in t {\displaystyle t} , Theorem 1 implies the envelope formula for their gradients: ∇ V ( t ) = ∇ t f ( x , t ) {\displaystyle \nabla V\left(t\right)=\nabla _{t}f\left(x,t\right)} for each x ∈ X ∗ ( t ) {\displaystyle x\in X^{\ast }\left(t\right)} . While total differentiability of the value function may not be easy to ensure, Theorem 2 can still be applied along any smooth path connecting two parameter values t 0 {\displaystyle t_{0}} and t {\displaystyle t} . Namely, suppose that functions f ( x , ⋅ ) {\displaystyle f(x,\cdot )} are differentiable for all x ∈ X {\displaystyle x\in X} with | ∇ t f ( x , t ) | ≤ B {\displaystyle |\nabla _{t}f(x,t)|\leq B} for all x ∈ X , {\displaystyle x\in X,} t ∈ T {\displaystyle t\in T} . 
A smooth path from t 0 {\displaystyle t_{0}} to t {\displaystyle t} is described by a differentiable mapping γ : [ 0 , 1 ] → T {\displaystyle \gamma :\left[0,1\right]\rightarrow T} with a bounded derivative, such that γ ( 0 ) = t 0 {\displaystyle \gamma \left(0\right)=t_{0}} and γ ( 1 ) = t {\displaystyle \gamma \left(1\right)=t} . Theorem 2 implies that for any such smooth path, the change of the value function can be expressed as the path integral of the partial gradient ∇ t f ( x ∗ ( t ) , t ) {\displaystyle \nabla _{t}f(x^{\ast }(t),t)} of the objective function along the path: V ( t ) − V ( t 0 ) = ∫ γ ∇ t f ( x ∗ ( s ) , s ) ⋅ d s . {\displaystyle V(t)-V(t_{0})=\int _{\gamma }\nabla _{t}f(x^{\ast }(s),s)\cdot ds.} In particular, for t = t 0 {\displaystyle t=t_{0}} , this establishes that cyclic path integrals along any smooth path γ {\displaystyle \gamma } must be zero: ∫ ∇ t f ( x ∗ ( s ) , s ) ⋅ d s = 0. {\displaystyle \int \nabla _{t}f(x^{\ast }(s),s)\cdot ds=0.} This "integrability condition" plays an important role in mechanism design with multidimensional types, constraining what kind of choice rules x ∗ {\displaystyle x^{\ast }} can be sustained by mechanism-induced menus X ⊆ X ¯ {\displaystyle X\subseteq {\bar {X}}} . In application to producer theory, with x ∈ X ⊆ R L {\displaystyle x\in X\subseteq \mathbb {R} ^{L}} being the firm's production vector and t ∈ R L {\displaystyle t\in \mathbb {R} ^{L}} being the price vector, f ( x , t ) = t ⋅ x {\displaystyle f\left(x,t\right)=t\cdot x} , and the integrability condition says that any rationalizable supply function x ∗ {\displaystyle x^{\ast }} must satisfy ∫ x ∗ ( s ) ⋅ d s = 0. 
{\displaystyle \int x^{\ast }(s)\cdot ds=0.} When x ∗ {\displaystyle x^{\ast }} is continuously differentiable, this integrability condition is equivalent to the symmetry of the substitution matrix ( ∂ x i ∗ ( t ) / ∂ t j ) i , j = 1 L {\displaystyle \left(\partial x_{i}^{\ast }\left(t\right)/\partial t_{j}\right)_{i,j=1}^{L}} . (In consumer theory, the same argument applied to the expenditure minimization problem yields symmetry of the Slutsky matrix.) === Applications to parameterized constraints === Suppose now that the feasible set X ( t ) {\displaystyle X\left(t\right)} depends on the parameter, i.e., V ( t ) = sup x ∈ X ( t ) f ( x , t ) {\displaystyle V(t)=\sup _{x\in X\left(t\right)}f(x,t)} X ∗ ( t ) = { x ∈ X ( t ) : f ( x , t ) = V ( t ) } , {\displaystyle X^{\ast }(t)=\{x\in X\left(t\right):f(x,t)=V(t)\}{\text{, }}} where X ( t ) = { x ∈ X : g ( x , t ) ≥ 0 } {\displaystyle X\left(t\right)=\left\{x\in X:g\left(x,t\right)\geq 0\right\}} for some g : X × [ 0 , 1 ] → R K . {\displaystyle g:X\times \left[0,1\right]\rightarrow \mathbb {R} ^{K}.} Suppose that X {\displaystyle X} is a convex set, f {\displaystyle f} and g {\displaystyle g} are concave in x {\displaystyle x} , and there exists x ^ ∈ X {\displaystyle {\hat {x}}\in X} such that g ( x ^ , t ) > 0 {\displaystyle g\left({\hat {x}},t\right)>0} for all t ∈ [ 0 , 1 ] {\displaystyle t\in \left[0,1\right]} . Under these assumptions, it is well known that the above constrained optimization program can be represented as a saddle-point problem for the Lagrangian L ( x , λ , t ) = f ( x , t ) + λ ⋅ g ( x , t ) {\displaystyle L\left(x,\lambda ,t\right)=f(x,t)+\lambda \cdot g\left(x,t\right)} , where λ ∈ R + K {\displaystyle \lambda \in \mathbb {R} _{+}^{K}} is the vector of Lagrange multipliers chosen by the adversary to minimize the Lagrangian. 
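For this Lagrangian setup, the role of the multiplier as a "shadow price" can be previewed with a one-constraint example. The objective and parameter values below are invented; the sketch maximizes f(x) = 2√x subject to g(x, t) = t − x ≥ 0 (where f is concave, the constraint is linear, and Slater's condition holds) and checks that V′(t) equals the multiplier:

```python
import numpy as np

# Maximize f(x) = 2*sqrt(x) subject to g(x, t) = t - x >= 0, t in (0, 1].
f = lambda x: 2 * np.sqrt(x)

x_star = lambda t: t                    # the constraint binds at the optimum
lam_star = lambda t: 1 / np.sqrt(t)     # stationarity: f'(x*) - λ = 0
V = lambda t: f(x_star(t))              # value function V(t) = 2*sqrt(t)

t, h = 0.25, 1e-6
dV = (V(t + h) - V(t - h)) / (2 * h)
print(dV, lam_star(t))                  # both ≈ 2.0: the multiplier is V'(t)
```

Relaxing the constraint by dt raises the attainable value by λ*(t) dt, which is exactly the envelope formula applied to the Lagrangian.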
This allows the application of Milgrom and Segal's (2002, Theorem 4) envelope theorem for saddle-point problems, under the additional assumptions that X {\displaystyle X} is a compact set in a normed linear space, f {\displaystyle f} and g {\displaystyle g} are continuous in x {\displaystyle x} , and f t {\displaystyle f_{t}} and g t {\displaystyle g_{t}} are continuous in ( x , t ) {\displaystyle \left(x,t\right)} . In particular, letting ( x ∗ ( t ) , λ ∗ ( t ) ) {\displaystyle \left(x^{\ast }(t),\lambda ^{\ast }\left(t\right)\right)} denote the Lagrangian's saddle point for parameter value t {\displaystyle t} , the theorem implies that V {\displaystyle V} is absolutely continuous and satisfies V ( t ) = V ( 0 ) + ∫ 0 t L t ( x ∗ ( s ) , λ ∗ ( s ) , s ) d s . {\displaystyle V(t)=V(0)+\int _{0}^{t}L_{t}(x^{\ast }(s),\lambda ^{\ast }\left(s\right),s)ds.} For the special case in which f ( x , t ) {\displaystyle f\left(x,t\right)} is independent of t {\displaystyle t} , K = 1 {\displaystyle K=1} , and g ( x , t ) = h ( x ) + t {\displaystyle g\left(x,t\right)=h\left(x\right)+t} , the formula implies that V ′ ( t ) = L t ( x ∗ ( t ) , λ ∗ ( t ) , t ) = λ ∗ ( t ) {\displaystyle V^{\prime }(t)=L_{t}(x^{\ast }(t),\lambda ^{\ast }\left(t\right),t)=\lambda ^{\ast }\left(t\right)} for a.e. t {\displaystyle t} . That is, the Lagrange multiplier λ ∗ ( t ) {\displaystyle \lambda ^{\ast }\left(t\right)} on the constraint is its "shadow price" in the optimization program. === Other applications === Milgrom and Segal (2002) demonstrate that the generalized version of the envelope theorems can also be applied to convex programming, continuous optimization problems, saddle-point problems, and optimal stopping problems. == See also == == References ==
Wikipedia:Enzo Tonti#0
Enzo Tonti (30 October 1935 – 10 June 2021) was an Italian physicist and mathematician, known for his contributions to engineering and mathematical physics. == Life == Enzo Tonti was born in Milan. He attended a fine arts high school. He graduated in Mathematics and Physics at the University of Milan in 1961. He began work there in 1962 as a research assistant in the field of mathematical physics. In 1976, he accepted a professorship at the Engineering Faculty of the University of Trieste. After retirement, he was named professor emeritus. He married in 1962 and had three children (two daughters and one son). == Selected publications == === Journal articles === Tonti E. A direct discrete formulation of field laws: the cell method. CMES – Computer Modeling in Engineering and Sciences. 2001;2(2):237–258. Tonti E. Finite formulation of the electromagnetic field. Progress in Electromagnetics Research. 2001;32:1–44. Tonti E. Finite formulation of the electromagnetic field. IEEE Transactions on Magnetics. 2002;38(2):333–336. Tonti E. The reason for analogies between physical theories. Applied Mathematical Modelling. 1976;1(1):37–50. Tonti E. Variational formulation for every nonlinear problem. International Journal of Engineering Science. 1984;22(11–12):1343–1371. === Books === Tonti, Enzo. The Mathematical Structure of Classical and Relativistic Physics: A General Classification Diagram. Springer. 2013. ISBN 9781461474227 == References == == External links == Enzo Tonti publications indexed by Google Scholar
Wikipedia:Epigraph (mathematics)#0
In mathematics, the epigraph or supergraph of a function f : X → [ − ∞ , ∞ ] {\displaystyle f:X\to [-\infty ,\infty ]} valued in the extended real numbers [ − ∞ , ∞ ] = R ∪ { ± ∞ } {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}} is the set epi f = { ( x , r ) ∈ X × R : r ≥ f ( x ) } {\displaystyle \operatorname {epi} f=\{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\}} consisting of all points in the Cartesian product X × R {\displaystyle X\times \mathbb {R} } lying on or above the function's graph. Similarly, the strict epigraph epi S f {\displaystyle \operatorname {epi} _{S}f} is the set of points in X × R {\displaystyle X\times \mathbb {R} } lying strictly above its graph. Importantly, unlike the graph of f , {\displaystyle f,} the epigraph always consists entirely of points in X × R {\displaystyle X\times \mathbb {R} } (this is true of the graph only when f {\displaystyle f} is real-valued). If the function takes ± ∞ {\displaystyle \pm \infty } as a value then graph f {\displaystyle \operatorname {graph} f} will not be a subset of its epigraph epi f . {\displaystyle \operatorname {epi} f.} For example, if f ( x 0 ) = ∞ {\displaystyle f\left(x_{0}\right)=\infty } then the point ( x 0 , f ( x 0 ) ) = ( x 0 , ∞ ) {\displaystyle \left(x_{0},f\left(x_{0}\right)\right)=\left(x_{0},\infty \right)} will belong to graph f {\displaystyle \operatorname {graph} f} but not to epi f . {\displaystyle \operatorname {epi} f.} These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa. The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions. 
Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} instead of continuous functions valued in a vector space (such as R {\displaystyle \mathbb {R} } or R 2 {\displaystyle \mathbb {R} ^{2}} ). This is because in general, for such functions, geometric intuition is more readily obtained from a function's epigraph than from its graph. Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples. == Definition == The definition of the epigraph was inspired by that of the graph of a function, where the graph of f : X → Y {\displaystyle f:X\to Y} is defined to be the set graph f := { ( x , y ) ∈ X × Y : y = f ( x ) } . {\displaystyle \operatorname {graph} f:=\{(x,y)\in X\times Y~:~y=f(x)\}.} The epigraph or supergraph of a function f : X → [ − ∞ , ∞ ] {\displaystyle f:X\to [-\infty ,\infty ]} valued in the extended real numbers [ − ∞ , ∞ ] = R ∪ { ± ∞ } {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}} is the set epi f = { ( x , r ) ∈ X × R : r ≥ f ( x ) } = [ f − 1 ( − ∞ ) × R ] ∪ ⋃ x ∈ f − 1 ( R ) ( { x } × [ f ( x ) , ∞ ) ) {\displaystyle {\begin{alignedat}{4}\operatorname {epi} f&=\{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\}\\&=\left[f^{-1}(-\infty )\times \mathbb {R} \right]\cup \bigcup _{x\in f^{-1}(\mathbb {R} )}(\{x\}\times [f(x),\infty ))\end{alignedat}}} where all sets being unioned in the last line are pairwise disjoint. 
In the union over x ∈ f − 1 ( R ) {\displaystyle x\in f^{-1}(\mathbb {R} )} that appears above on the right hand side of the last line, the set { x } × [ f ( x ) , ∞ ) {\displaystyle \{x\}\times [f(x),\infty )} may be interpreted as being a "vertical ray" consisting of ( x , f ( x ) ) {\displaystyle (x,f(x))} and all points in X × R {\displaystyle X\times \mathbb {R} } "directly above" it. Similarly, the set of points on or below the graph of a function is its hypograph. The strict epigraph is the epigraph with the graph removed: epi S f = { ( x , r ) ∈ X × R : r > f ( x ) } = epi f ∖ graph f = ⋃ x ∈ X ( { x } × ( f ( x ) , ∞ ) ) {\displaystyle {\begin{alignedat}{4}\operatorname {epi} _{S}f&=\{(x,r)\in X\times \mathbb {R} ~:~r>f(x)\}\\&=\operatorname {epi} f\setminus \operatorname {graph} f\\&=\bigcup _{x\in X}\left(\{x\}\times (f(x),\infty )\right)\end{alignedat}}} where all sets being unioned in the last line are pairwise disjoint, and some may be empty. == Relationships with other sets == Despite the fact that f {\displaystyle f} might take one (or both) of ± ∞ {\displaystyle \pm \infty } as a value (in which case its graph would not be a subset of X × R {\displaystyle X\times \mathbb {R} } ), the epigraph of f {\displaystyle f} is nevertheless defined to be a subset of X × R {\displaystyle X\times \mathbb {R} } rather than of X × [ − ∞ , ∞ ] . {\displaystyle X\times [-\infty ,\infty ].} This is intentional because when X {\displaystyle X} is a vector space then so is X × R {\displaystyle X\times \mathbb {R} } but X × [ − ∞ , ∞ ] {\displaystyle X\times [-\infty ,\infty ]} is never a vector space (since the extended real number line [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} is not a vector space). This deficiency in X × [ − ∞ , ∞ ] {\displaystyle X\times [-\infty ,\infty ]} remains even if instead of being a vector space, X {\displaystyle X} is merely a non-empty subset of some vector space. 
The epigraph being a subset of a vector space allows for tools related to real analysis and functional analysis (and other fields) to be more readily applied. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be any linear space or even an arbitrary set instead of R n {\displaystyle \mathbb {R} ^{n}} . The strict epigraph epi S f {\displaystyle \operatorname {epi} _{S}f} and the graph graph f {\displaystyle \operatorname {graph} f} are always disjoint. The epigraph of a function f : X → [ − ∞ , ∞ ] {\displaystyle f:X\to [-\infty ,\infty ]} is related to its graph and strict epigraph by epi f ⊆ epi S f ∪ graph f {\displaystyle \,\operatorname {epi} f\,\subseteq \,\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f} where set equality holds if and only if f {\displaystyle f} is real-valued. However, epi f = [ epi S f ∪ graph f ] ∩ [ X × R ] {\displaystyle \operatorname {epi} f=\left[\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f\right]\,\cap \,[X\times \mathbb {R} ]} always holds. == Reconstructing functions from epigraphs == The epigraph is empty if and only if the function is identically equal to infinity. Just as any function can be reconstructed from its graph, so too can any extended real-valued function f {\displaystyle f} on X {\displaystyle X} be reconstructed from its epigraph E := epi f {\displaystyle E:=\operatorname {epi} f} (even when f {\displaystyle f} takes on ± ∞ {\displaystyle \pm \infty } as a value). 
Given x ∈ X , {\displaystyle x\in X,} the value f ( x ) {\displaystyle f(x)} can be reconstructed from the intersection E ∩ ( { x } × R ) {\displaystyle E\cap (\{x\}\times \mathbb {R} )} of E {\displaystyle E} with the "vertical line" { x } × R {\displaystyle \{x\}\times \mathbb {R} } passing through x {\displaystyle x} as follows: case 1: E ∩ ( { x } × R ) = ∅ {\displaystyle E\cap (\{x\}\times \mathbb {R} )=\varnothing } if and only if f ( x ) = ∞ , {\displaystyle f(x)=\infty ,} case 2: E ∩ ( { x } × R ) = { x } × R {\displaystyle E\cap (\{x\}\times \mathbb {R} )=\{x\}\times \mathbb {R} } if and only if f ( x ) = − ∞ , {\displaystyle f(x)=-\infty ,} case 3: otherwise, E ∩ ( { x } × R ) {\displaystyle E\cap (\{x\}\times \mathbb {R} )} is necessarily of the form { x } × [ f ( x ) , ∞ ) , {\displaystyle \{x\}\times [f(x),\infty ),} from which the value of f ( x ) {\displaystyle f(x)} can be obtained by taking the infimum of the interval. The above observations can be combined to give a single formula for f ( x ) {\displaystyle f(x)} in terms of E := epi f . {\displaystyle E:=\operatorname {epi} f.} Specifically, for any x ∈ X , {\displaystyle x\in X,} f ( x ) = inf { r ∈ R : ( x , r ) ∈ E } {\displaystyle f(x)=\inf _{}\{r\in \mathbb {R} ~:~(x,r)\in E\}} where by definition, inf ∅ := ∞ . {\displaystyle \inf _{}\varnothing :=\infty .} This same formula can also be used to reconstruct f {\displaystyle f} from its strict epigraph E := epi S f . {\displaystyle E:=\operatorname {epi} _{S}f.} == Relationships between properties of functions and their epigraphs == A function is convex if and only if its epigraph is a convex set. The epigraph of a real affine function g : R n → R {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} } is a halfspace in R n + 1 . {\displaystyle \mathbb {R} ^{n+1}.} A function is lower semicontinuous if and only if its epigraph is closed. 
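The reconstruction formula lends itself to a direct check. The sketch below samples the epigraph of a made-up extended-real-valued function on a finite grid (a stand-in for the set E = epi f) and recovers function values via the infimum formula, using only the Python standard library:

```python
import math

# A made-up function taking the value +∞ on x < 0, so its graph is not
# contained in X × R but its epigraph still determines it.
def f(x):
    if x < 0:
        return math.inf        # no points of epi f lie above x < 0
    return x * x

# Sample the epigraph on a grid of (x, r) pairs.
xs = [i / 10 for i in range(-20, 21)]
rs = [j / 100 for j in range(-500, 501)]
E = {(x, r) for x in xs for r in rs if r >= f(x)}

def f_from_epigraph(x):
    slab = [r for (x_, r) in E if x_ == x]       # E ∩ ({x} × R)
    return min(slab) if slab else math.inf       # inf ∅ := ∞

print(f_from_epigraph(1.0), f(1.0))   # 1.0 1.0
print(f_from_epigraph(-1.0))          # inf  (empty vertical slab)
```

Case 1 of the reconstruction (empty intersection, so f(x) = ∞) and case 3 (a vertical ray whose infimum is f(x)) are both visible here; case 2 would require f to take the value −∞.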
== See also == Effective domain Hypograph (mathematics) – Region underneath a graph Proper convex function == Citations == == References == Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544. Rockafellar, Ralph Tyrell (1996), Convex Analysis, Princeton University Press, Princeton, NJ. ISBN 0-691-01586-4.
Wikipedia:Equal incircles theorem#0
In geometry, the equal incircles theorem derives from a Japanese Sangaku, and pertains to the following construction: a series of rays are drawn from a given point to a given line such that the inscribed circles of the triangles formed by adjacent rays and the base line are equal. In the illustration the equal blue circles define the spacing between the rays, as described. The theorem states that the incircles of the triangles formed (starting from any given ray) by every other ray, every third ray, etc. and the base line are also equal. The case of every other ray is illustrated above by the green circles, which are all equal. From the fact that the theorem does not depend on the angle of the initial ray, it can be seen that the theorem properly belongs to analysis, rather than geometry, and must relate to a continuous scaling function which defines the spacing of the rays. In fact, this function is the hyperbolic sine. The theorem is a direct corollary of the following lemma: Suppose that the nth ray makes an angle γ n {\displaystyle \gamma _{n}} with the normal to the baseline. If γ n {\displaystyle \gamma _{n}} is parameterized according to the equation, tan γ n = sinh θ n {\displaystyle \tan \gamma _{n}=\sinh \theta _{n}} , then values of θ n = a + n b {\displaystyle \theta _{n}=a+nb} , where a {\displaystyle a} and b {\displaystyle b} are real constants, define a sequence of rays that satisfy the condition of equal incircles, and furthermore any sequence of rays satisfying the condition can be produced by suitable choice of the constants a {\displaystyle a} and b {\displaystyle b} . == Proof of the lemma == In the diagram, lines PS and PT are adjacent rays making angles γ n {\displaystyle \gamma _{n}} and γ n + 1 {\displaystyle \gamma _{n+1}} with line PR, which is perpendicular to the baseline, RST. 
Line QXOY is parallel to the baseline and passes through O, the center of the incircle of △ {\displaystyle \triangle } PST, which is tangent to the rays at W and Z. Also, line PQ has length h − r {\displaystyle h-r} , and line QR has length r {\displaystyle r} , the radius of the incircle. Then △ {\displaystyle \triangle } OWX is similar to △ {\displaystyle \triangle } PQX and △ {\displaystyle \triangle } OZY is similar to △ {\displaystyle \triangle } PQY, and from XY = XO + OY we get ( h − r ) ( tan γ n + 1 − tan γ n ) = r ( sec γ n + sec γ n + 1 ) . {\displaystyle (h-r)(\tan \gamma _{n+1}-\tan \gamma _{n})=r(\sec \gamma _{n}+\sec \gamma _{n+1}).} This relation on a set of angles, { γ m } {\displaystyle \{\gamma _{m}\}} , expresses the condition of equal incircles. To prove the lemma, we set tan γ n = sinh ( a + n b ) {\displaystyle \tan \gamma _{n}=\sinh(a+nb)} , which gives sec γ n = cosh ( a + n b ) {\displaystyle \sec \gamma _{n}=\cosh(a+nb)} . Using a + ( n + 1 ) b = ( a + n b ) + b {\displaystyle a+(n+1)b=(a+nb)+b} , we apply the addition rules for sinh {\displaystyle \sinh } and cosh {\displaystyle \cosh } , and verify that the equal incircles relation is satisfied by setting r h − r = tanh b 2 . {\displaystyle {\frac {r}{h-r}}=\tanh {\frac {b}{2}}.} This gives an expression for the parameter b {\displaystyle b} in terms of the geometric measures, h {\displaystyle h} and r {\displaystyle r} . With this definition of b {\displaystyle b} we then obtain an expression for the radii, r N {\displaystyle r_{N}} , of the incircles formed by taking every Nth ray as the sides of the triangles r N h − r N = tanh N b 2 . {\displaystyle {\frac {r_{N}}{h-r_{N}}}=\tanh {\frac {Nb}{2}}.} == See also == Hyperbolic function Japanese theorem for cyclic polygons Japanese theorem for cyclic quadrilaterals Tangent lines to circles == References == Equal Incircles Theorem at cut-the-knot J. Tabov. A note on the five-circle theorem. Mathematics Magazine 63 (1989), 2, 92–94.
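Both the lemma and the theorem can be verified numerically: parameterize the rays by tan γ_n = sinh(a + nb), place the apex at height h above the baseline, and compare the incircle radii of the resulting triangles. The parameter values in this sketch are arbitrary, and only the Python standard library is used:

```python
import math

def incircle_radius(A, B, C):
    # r = area / semiperimeter for triangle ABC
    a = math.dist(B, C); b = math.dist(A, C); c = math.dist(A, B)
    area = abs((B[0] - A[0]) * (C[1] - A[1])
               - (C[0] - A[0]) * (B[1] - A[1])) / 2
    return area / ((a + b + c) / 2)

h, a0, b0 = 1.0, -0.3, 0.4           # apex height and lemma parameters a, b
P = (0.0, h)                          # apex; the baseline is the x-axis
foot = lambda n: (h * math.sinh(a0 + n * b0), 0.0)   # tan γ_n = sinh θ_n

for N in (1, 2, 3):                   # adjacent rays, every other ray, every third
    radii = [incircle_radius(P, foot(n), foot(n + N)) for n in range(5)]
    T = math.tanh(N * b0 / 2)
    print(N, radii[0], h * T / (1 + T))   # radii constant in n for each N
```

For each N the five radii agree, and they match r_N = h·tanh(Nb/2)/(1 + tanh(Nb/2)), which is the relation r_N/(h − r_N) = tanh(Nb/2) from the proof solved for r_N.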
Wikipedia:Equating coefficients#0
In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form. == Example in real fractions == Suppose we want to apply partial fraction decomposition to the expression: 1 x ( x − 1 ) ( x − 2 ) , {\displaystyle {\frac {1}{x(x-1)(x-2)}},\,} that is, we want to bring it into the form: A x + B x − 1 + C x − 2 , {\displaystyle {\frac {A}{x}}+{\frac {B}{x-1}}+{\frac {C}{x-2}},\,} in which the unknown parameters are A, B and C. Multiplying these formulas by x(x − 1)(x − 2) turns both into polynomials, which we equate: A ( x − 1 ) ( x − 2 ) + B x ( x − 2 ) + C x ( x − 1 ) = 1 , {\displaystyle A(x-1)(x-2)+Bx(x-2)+Cx(x-1)=1,\,} or, after expansion and collecting terms with equal powers of x: ( A + B + C ) x 2 − ( 3 A + 2 B + C ) x + 2 A = 1. {\displaystyle (A+B+C)x^{2}-(3A+2B+C)x+2A=1.\,} At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial 0x2 + 0x + 1, having zero coefficients for the positive powers of x. Equating the corresponding coefficients now results in this system of linear equations: A + B + C = 0 , {\displaystyle A+B+C=0,\,} 3 A + 2 B + C = 0 , {\displaystyle 3A+2B+C=0,\,} 2 A = 1. {\displaystyle 2A=1.\,} Solving it results in: A = 1 2 , B = − 1 , C = 1 2 . 
{\displaystyle A={\frac {1}{2}},\,B=-1,\,C={\frac {1}{2}}.\,} == Example in nested radicals == A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to de-nest the nested radical a + b c {\displaystyle {\sqrt {a+b{\sqrt {c}}\ }}} . To obtain an equivalent expression not involving a square root of an expression that itself involves a square root, we can postulate the existence of rational parameters d, e such that a + b c = d + e . {\displaystyle {\sqrt {a+b{\sqrt {c}}\ }}={\sqrt {d}}+{\sqrt {e}}.} Squaring both sides of this equation yields: a + b c = d + e + 2 d e . {\displaystyle a+b{\sqrt {c}}=d+e+2{\sqrt {de}}.} To find d and e, we equate the terms not involving square roots, so a = d + e , {\displaystyle a=d+e,} and equate the parts involving radicals, so b c = 2 d e {\displaystyle b{\sqrt {c}}=2{\sqrt {de}}} which when squared implies b 2 c = 4 d e . {\displaystyle b^{2}c=4de.} This gives us two equations, one quadratic and one linear, in the desired parameters d and e, and these can be solved to obtain e = a + a 2 − b 2 c 2 , {\displaystyle e={\frac {a+{\sqrt {a^{2}-b^{2}c}}}{2}},} d = a − a 2 − b 2 c 2 , {\displaystyle d={\frac {a-{\sqrt {a^{2}-b^{2}c}}}{2}},} which is a valid solution pair if and only if a 2 − b 2 c {\displaystyle {\sqrt {a^{2}-b^{2}c}}} is a rational number. == Example of testing for linear dependence of equations == Consider this overdetermined system of equations (with 3 equations in just 2 unknowns): x − 2 y + 1 = 0 , {\displaystyle x-2y+1=0,} 3 x + 5 y − 8 = 0 , {\displaystyle 3x+5y-8=0,} 4 x + 3 y − 7 = 0. {\displaystyle 4x+3y-7=0.} To test whether the third equation is linearly dependent on the first two, postulate two parameters a and b such that a times the first equation plus b times the second equation equals the third equation.
Since this always holds for the right sides, all of which are 0, we merely need to require it to hold for the left sides as well: a ( x − 2 y + 1 ) + b ( 3 x + 5 y − 8 ) = 4 x + 3 y − 7. {\displaystyle a(x-2y+1)+b(3x+5y-8)=4x+3y-7.} Equating the coefficients of x on both sides, equating the coefficients of y on both sides, and equating the constants on both sides gives the following system in the desired parameters a, b: a + 3 b = 4 , {\displaystyle a+3b=4,} − 2 a + 5 b = 3 , {\displaystyle -2a+5b=3,} a − 8 b = − 7. {\displaystyle a-8b=-7.} Solving it gives: a = 1 , b = 1 {\displaystyle a=1,\ b=1} The unique pair of values a, b satisfying the first two equations is (a, b) = (1, 1); since these values also satisfy the third equation, there do in fact exist a, b such that a times the original first equation plus b times the original second equation equals the original third equation; we conclude that the third equation is linearly dependent on the first two. Note that if the constant term in the original third equation had been anything other than –7, the values (a, b) = (1, 1) that satisfied the first two equations in the parameters would not have satisfied the third one (a – 8b = constant), so there would exist no a, b satisfying all three equations in the parameters, and therefore the third original equation would be independent of the first two. == Example in complex numbers == The method of equating coefficients is often used when dealing with complex numbers. For example, to divide the complex number a+bi by the complex number c+di, we postulate that the ratio equals the complex number e+fi, and we wish to find the values of the parameters e and f for which this is true. We write a + b i c + d i = e + f i , {\displaystyle {\frac {a+bi}{c+di}}=e+fi,} and multiply both sides by the denominator to obtain ( c e − f d ) + ( e d + c f ) i = a + b i . 
{\displaystyle (ce-fd)+(ed+cf)i=a+bi.} Equating real terms gives c e − f d = a , {\displaystyle ce-fd=a,} and equating coefficients of the imaginary unit i gives e d + c f = b . {\displaystyle ed+cf=b.} These are two equations in the unknown parameters e and f, and they can be solved to obtain the desired coefficients of the quotient: e = a c + b d c 2 + d 2 and f = b c − a d c 2 + d 2 . {\displaystyle e={\frac {ac+bd}{c^{2}+d^{2}}}\quad \quad {\text{and}}\quad \quad f={\frac {bc-ad}{c^{2}+d^{2}}}.} == References == Tanton, James (2005). Encyclopedia of Mathematics. Facts on File. p. 162. ISBN 0-8160-5124-0.
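Each of the coefficient-matching examples above reduces to a small linear system, so they can all be checked numerically. The following sketch uses NumPy (the variable names are illustrative, not taken from the article):

```python
import numpy as np

# Partial fractions: equating coefficients of
#   (A + B + C)x^2 - (3A + 2B + C)x + 2A  =  0x^2 + 0x + 1
# yields a 3x3 linear system in A, B, C.
M = np.array([[1.0, 1.0, 1.0],   # x^2 terms:  A +  B + C = 0
              [3.0, 2.0, 1.0],   # x terms:   3A + 2B + C = 0
              [2.0, 0.0, 0.0]])  # constants: 2A          = 1
A, B, C = np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))
assert np.allclose([A, B, C], [0.5, -1.0, 0.5])

# Sanity check: the decomposition reproduces 1/(x(x-1)(x-2)) at a test point.
x = 5.0
assert np.isclose(A / x + B / (x - 1) + C / (x - 2),
                  1.0 / (x * (x - 1) * (x - 2)))

# Linear dependence: find a, b with a*(eq1) + b*(eq2) = eq3, coefficient-wise.
a, b = np.linalg.solve(np.array([[1.0, 3.0],     # x coefficients
                                 [-2.0, 5.0]]),  # y coefficients
                       np.array([4.0, 3.0]))
assert np.allclose([a, b], [1.0, 1.0])
assert np.isclose(a * 1 + b * (-8), -7)  # the constant terms match too

# Complex division: e = (ac + bd)/(c^2 + d^2),  f = (bc - ad)/(c^2 + d^2).
ca, cb, cc, cd = 3.0, 4.0, 1.0, 2.0  # (3 + 4i) / (1 + 2i)
e = (ca * cc + cb * cd) / (cc**2 + cd**2)
f = (cb * cc - ca * cd) / (cc**2 + cd**2)
assert abs(complex(ca, cb) / complex(cc, cd) - complex(e, f)) < 1e-12
```

The final assertion confirms the quotient formula against Python's built-in complex arithmetic: (3 + 4i)/(1 + 2i) = 2.2 − 0.4i.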
|
Wikipedia:Education#0
|
Education is the transmission of knowledge and skills and the development of character traits. Formal education occurs within a structured institutional framework, such as public schools, following a curriculum. Non-formal education also follows a structured approach but occurs outside the formal schooling system, while informal education involves unstructured learning through daily experiences. Formal and non-formal education are categorized into levels, including early childhood education, primary education, secondary education, and tertiary education. Other classifications focus on teaching methods, such as teacher-centered and student-centered education, and on subjects, such as science education, language education, and physical education. Additionally, the term "education" can denote the mental states and qualities of educated individuals and the academic field studying educational phenomena. The precise definition of education is disputed, and there are disagreements about the aims of education and the extent to which education differs from indoctrination by fostering critical thinking. These disagreements impact how to identify, measure, and enhance various forms of education. Essentially, education socializes children into society by instilling cultural values and norms, equipping them with the skills necessary to become productive members of society. In doing so, it stimulates economic growth and raises awareness of local and global problems. Organized institutions play a significant role in education. For instance, governments establish education policies to determine the timing of school classes, the curriculum, and attendance requirements. International organizations, such as UNESCO, have been influential in promoting primary education for all children. Many factors influence the success of education. Psychological factors include motivation, intelligence, and personality. 
Social factors, such as socioeconomic status, ethnicity, and gender, are often associated with discrimination. Other factors encompass access to educational technology, teacher quality, and parental involvement. The primary academic field examining education is known as education studies. It delves into the nature of education, its objectives, impacts, and methods for enhancement. Education studies encompasses various subfields, including philosophy, psychology, sociology, and economics of education. Additionally, it explores topics such as comparative education, pedagogy, and the history of education. In prehistory, education primarily occurred informally through oral communication and imitation. With the emergence of ancient civilizations, the invention of writing led to an expansion of knowledge, prompting a transition from informal to formal education. Initially, formal education was largely accessible to elites and religious groups. The advent of the printing press in the 15th century facilitated widespread access to books, thus increasing general literacy. In the 18th and 19th centuries, public education gained significance, paving the way for the global movement to provide primary education to all, free of charge, and compulsory up to a certain age. Presently, over 90% of primary-school-age children worldwide attend primary school. == Definitions == The term "education" originates from the Latin words educare, meaning "to bring up," and educere, meaning "to bring forth." The definition of education has been explored by theorists from various fields. Many agree that education is a purposeful activity aimed at achieving goals like the transmission of knowledge, skills, and character traits. However, extensive debate surrounds its precise nature beyond these general features. One approach views education as a process occurring during events such as schooling, teaching, and learning. 
Another perspective perceives education not as a process but as the mental states and dispositions of educated individuals resulting from this process. Furthermore, the term may also refer to the academic field that studies the methods, processes, and social institutions involved in teaching and learning. Having a clear understanding of the term is crucial when attempting to identify educational phenomena, measure educational success, and improve educational practices. Some theorists provide precise definitions by identifying specific features exclusive to all forms of education. Education theorist R. S. Peters, for instance, outlines three essential features of education, including imparting knowledge and understanding to the student, ensuring the process is beneficial, and conducting it in a morally appropriate manner. While such precise definitions often characterize the most typical forms of education effectively, they face criticism because less common types of education may occasionally fall outside their parameters. Dealing with counterexamples not covered by precise definitions can be challenging, which is why some theorists prefer offering less exact definitions based on family resemblance instead. This approach suggests that all forms of education are similar to each other but need not share a set of essential features common to all. Some education theorists, such as Keira Sewell and Stephen Newman, argue that the term "education" is context-dependent. Evaluative or thick conceptions of education assert that it is inherent in the nature of education to lead to some form of improvement. They contrast with thin conceptions, which offer a value-neutral explanation. Some theorists provide a descriptive conception of education by observing how the term is commonly used in ordinary language. Prescriptive conceptions, on the other hand, define what constitutes good education or how education should be practiced. 
Many thick and prescriptive conceptions view education as an endeavor that strives to achieve specific objectives, which may encompass acquiring knowledge, learning to think rationally, and cultivating character traits such as kindness and honesty. Various scholars emphasize the importance of critical thinking in distinguishing education from indoctrination. They argue that indoctrination focuses solely on instilling beliefs in students, regardless of their rationality; whereas education also encourages the rational ability to critically examine and question those beliefs. However, it is not universally accepted that these two phenomena can be clearly distinguished, as some forms of indoctrination may be necessary in the early stages of education when the child's mind is not yet fully developed. This is particularly relevant in cases where young children must learn certain things without comprehending the underlying reasons, such as specific safety rules and hygiene practices. Education can be characterized from both the teacher's and the student's perspectives. Teacher-centered definitions emphasize the perspective and role of the teacher in transmitting knowledge and skills in a morally appropriate manner. On the other hand, student-centered definitions analyze education based on the student's involvement in the learning process, suggesting that this process transforms and enriches their subsequent experiences. It is also possible to consider definitions that incorporate both perspectives. In this approach, education is seen as a process of shared experience, involving the discovery of a common world and the collaborative solving of problems. == Types == There are several classifications of education. One classification depends on the institutional framework, distinguishing between formal, non-formal, and informal education. Another classification involves different levels of education based on factors such as the student's age and the complexity of the content. 
Further categories focus on the topic, teaching method, medium used, and funding. === Formal, non-formal, and informal === The most common division is between formal, non-formal, and informal education. Formal education occurs within a structured institutional framework, typically with a chronological and hierarchical order. The modern schooling system organizes classes based on the student's age and progress, ranging from primary school to university. Formal education is usually overseen and regulated by the government and often mandated up to a certain age. Non-formal and informal education occur outside the formal schooling system, with non-formal education serving as a middle ground. Like formal education, non-formal education is organized, systematic, and pursued with a clear purpose, as seen in activities such as tutoring, fitness classes, and participation in the scouting movement. Informal education, on the other hand, occurs in an unsystematic manner through daily experiences and exposure to the environment. Unlike formal and non-formal education, there is typically no designated authority figure responsible for teaching. Informal education unfolds in various settings and situations throughout one's life, often spontaneously, such as children learning their first language from their parents or individuals mastering cooking skills by preparing a dish together. Some theorists differentiate between the three types based on the learning environment: formal education occurs within schools, non-formal education takes place in settings not regularly frequented, such as museums, and informal education unfolds in the context of everyday routines. Additionally, there are disparities in the source of motivation. Formal education tends to be propelled by extrinsic motivation, driven by external rewards. Conversely, in non-formal and informal education, intrinsic motivation, stemming from the enjoyment of the learning process, typically prevails. 
While the differentiation among the three types is generally clear, certain forms of education may not neatly fit into a single category. In primitive cultures, education predominantly occurred informally, with little distinction between educational activities and other daily endeavors. Instead, the entire environment served as a classroom, and adults commonly assumed the role of educators. However, informal education often proves insufficient for imparting large quantities of knowledge. To address this limitation, formal educational settings and trained instructors are typically necessary. This necessity contributed to the increasing significance of formal education throughout history. Over time, formal education led to a shift towards more abstract learning experiences and topics, distancing itself from daily life. There was a greater emphasis on understanding general principles and concepts rather than simply observing and imitating specific behaviors. === Levels === Types of education are often categorized into different levels or stages. One influential framework is the International Standard Classification of Education, maintained by the United Nations Educational, Scientific and Cultural Organization (UNESCO). This classification encompasses both formal and non-formal education and distinguishes levels based on factors such as the student's age, the duration of learning, and the complexity of the content covered. Additional criteria include entry requirements, teacher qualifications, and the intended outcome of successful completion. The levels are grouped into early childhood education (level 0), primary education (level 1), secondary education (levels 2–3), post-secondary non-tertiary education (level 4), and tertiary education (levels 5–8). Early childhood education, also referred to as preschool education or nursery education, encompasses the period from birth until the commencement of primary school. 
It is designed to facilitate holistic child development, addressing physical, mental, and social aspects. Early childhood education is pivotal in fostering socialization and personality development, while also imparting fundamental skills in communication, learning, and problem-solving. Its overarching goal is to prepare children for the transition to primary education. While preschool education is typically optional, in certain countries such as Brazil, it is mandatory starting from the age of four. Primary (or elementary) education usually begins between the ages of five and seven and spans four to seven years. It has no additional entry requirements and aims to impart fundamental skills in reading, writing, and mathematics. Additionally, it provides essential knowledge in subjects such as history, geography, the sciences, music, and art. Another objective is to facilitate personal development. Presently, primary education is compulsory in nearly all nations, with over 90% of primary-school-age children worldwide attending such schools. Secondary education succeeds primary education and typically spans the ages of 12 to 18 years. It is normally divided into lower secondary education (such as middle school or junior high school) and upper secondary education (like high school, senior high school, or college, depending on the country). Lower secondary education usually requires the completion of primary school as its entry prerequisite. It aims to expand and deepen learning outcomes, with a greater focus on subject-specific curricula, and teachers often specialize in one or a few specific subjects. One of its goals is to acquaint students with fundamental theoretical concepts across various subjects, laying a strong foundation for lifelong learning. In certain instances, it may also incorporate rudimentary forms of vocational training. Lower secondary education is compulsory in numerous countries across Central and East Asia, Europe, and the Americas. 
In some nations, it represents the final phase of compulsory education. However, mandatory lower secondary education is less common in Arab states, sub-Saharan Africa, and South and West Asia. Upper secondary education typically commences around the age of 15, aiming to equip students with the necessary skills and knowledge for employment or tertiary education. Completion of lower secondary education is normally a prerequisite. The curriculum encompasses a broader range of subjects, often affording students the opportunity to select from various options. Attainment of a formal qualification, such as a high school diploma, is frequently linked to successful completion of upper secondary education. Education beyond the secondary level may fall under the category of post-secondary non-tertiary education, which is akin to secondary education in complexity but places greater emphasis on vocational training to ready students for the workforce. In some countries, tertiary education is synonymous with higher education, while in others, tertiary education encompasses a broader spectrum. Tertiary education builds upon the foundation laid in secondary education but delves deeper into specific fields or subjects. Its culmination results in an academic degree. Tertiary education comprises four levels: short-cycle tertiary, bachelor's, master's, and doctoral education. These levels often form a hierarchical structure, with the attainment of earlier levels serving as a prerequisite for higher ones. Short-cycle tertiary education concentrates on practical aspects, providing advanced vocational and professional training tailored to specialized professions. Bachelor's level education, also known as undergraduate education, is typically longer than short-cycle tertiary education. It is commonly offered by universities and culminates in an intermediary academic credential known as a bachelor's degree. 
Master's level education is more specialized than undergraduate education and often involves independent research, normally in the form of a master's thesis. Doctoral level education leads to an advanced research qualification, usually a doctor's degree, such as a Doctor of Philosophy (PhD). It usually involves the submission of a substantial academic work, such as a dissertation. More advanced levels include post-doctoral studies and habilitation. Successful completion of formal education typically leads to certification, a prerequisite for advancing to higher levels of education and entering certain professions. Undetected cheating during exams, such as utilizing a cheat sheet, poses a threat to this system by potentially certifying unqualified students. In most countries, primary and secondary education is provided free of charge. However, there are significant global disparities in the cost of tertiary education. Some countries, such as Sweden, Finland, Poland, and Mexico, offer tertiary education for free or at a low cost. Conversely, in nations like the United States and Singapore, tertiary education often comes with high tuition fees, leading students to rely on substantial loans to finance their studies. High education costs can pose a significant barrier for students in developing countries, as their families may struggle to cover school fees, purchase uniforms, and buy textbooks. === Others === The academic literature explores various types of education, including traditional and alternative approaches. Traditional education encompasses long-standing and conventional schooling methods, characterized by teacher-centered instruction within a structured school environment. Regulations govern various aspects, such as the curriculum and class schedules. Alternative education serves as an umbrella term for schooling methods that diverge from the conventional traditional approach. 
These variances might encompass differences in the learning environment, curriculum content, or the dynamics of the teacher-student relationship. Characteristics of alternative schooling include voluntary enrollment, relatively modest class and school sizes, and customized instruction, fostering a more inclusive and emotionally supportive environment. This category encompasses various forms, such as charter schools and specialized programs catering to challenging or exceptionally talented students, alongside homeschooling and unschooling. Alternative education incorporates diverse educational philosophies, including Montessori schools, Waldorf education, Round Square schools, Escuela Nueva schools, free schools, and democratic schools. Alternative education encompasses indigenous education, which emphasizes the preservation and transmission of knowledge and skills rooted in indigenous heritage. This approach often employs traditional methods such as oral narration and storytelling. Other forms of alternative schooling include gurukul schools in India, madrasa schools in the Middle East, and yeshivas in Jewish tradition. Some distinctions revolve around the recipients of education. Categories based on the age of the learner are childhood education, adolescent education, adult education, and elderly education. Categories based on the biological sex of students include single-sex education and mixed-sex education. Special education is tailored to meet the unique needs of students with disabilities, addressing various impairments on intellectual, social, communicative, and physical levels. Its goal is to overcome the challenges posed by these impairments, providing affected students with access to an appropriate educational structure. In the broadest sense, special education also encompasses education for intellectually gifted children, who require adjusted curricula to reach their fullest potential. 
Classifications based on the teaching method include teacher-centered education, where the teacher plays a central role in imparting information to students, and student-centered education, where students take on a more active and responsible role in shaping classroom activities. In conscious education, learning and teaching occur with a clear purpose in mind. Unconscious education unfolds spontaneously without conscious planning or guidance. This may occur, in part, through the influence of teachers' and adults' personalities, which can indirectly impact the development of students' personalities. Evidence-based education employs scientific studies to determine the most effective educational methods. Its aim is to optimize the effectiveness of educational practices and policies by ensuring they are grounded in the best available empirical evidence. This encompasses evidence-based teaching, evidence-based learning, and school effectiveness research. Autodidacticism, or self-education, occurs independently of teachers and institutions. Primarily observed in adult education, it offers the freedom to choose what and when to study, making it a potentially more fulfilling learning experience. However, the lack of structure and guidance may lead to aimless learning, while the absence of external feedback could result in autodidacts developing misconceptions and inaccurately assessing their learning progress. Autodidacticism is closely associated with lifelong education, which entails continuous learning throughout one's life. Categories of education based on the subject encompass science education, language education, art education, religious education, physical education, and sex education. Special mediums such as radio or websites are utilized in distance education, including e-learning (use of computers), m-learning (use of mobile devices), and online education. 
Often, these take the form of open education, wherein courses and materials are accessible with minimal barriers, contrasting with traditional classroom or onsite education. However, not all forms of online education are open; for instance, some universities offer full online degree programs that are not part of open education initiatives. State education, also known as public education, is funded and controlled by the government and available to the general public. It typically does not require tuition fees and is therefore a form of free education. In contrast, private education is funded and managed by private institutions. Private schools often have a more selective admission process and offer paid education by charging tuition fees. A more detailed classification focuses on the social institutions responsible for education, such as family, school, civil society, state, and church. Compulsory education refers to education that individuals are legally mandated to receive, primarily affecting children who must attend school up to a certain age. This stands in contrast to voluntary education, which individuals pursue based on personal choice rather than legal obligation. == Role in society == Education serves various roles in society, spanning social, economic, and personal domains. Socially, education establishes and maintains a stable society by imparting fundamental skills necessary for interacting with the environment and fulfilling individual needs and aspirations. In contemporary society, these skills encompass speaking, reading, writing, arithmetic, and proficiency in information and communications technology. Additionally, education facilitates socialization by instilling awareness of dominant social and cultural norms, shaping appropriate behavior across diverse contexts. It fosters social cohesion, stability, and peace, fostering productive engagement in daily activities. 
While socialization occurs throughout life, early childhood education holds particular significance. Moreover, education plays a pivotal role in democracies by enhancing civic participation through voting and organizing, while also promoting equal opportunities for all. On an economic level, individuals become productive members of society through education, acquiring the technical and analytical skills necessary for their professions, as well as for producing goods and providing services to others. In early societies, there was minimal specialization, with children typically learning a broad range of skills essential for community functioning. However, modern societies are increasingly complex, with many professions requiring specialized training alongside general education. Consequently, only a relatively small number of individuals master certain professions. Additionally, skills and tendencies acquired for societal functioning may sometimes conflict, with their value dependent on context. For instance, fostering curiosity and questioning established teachings promotes critical thinking and innovation, while at times, obedience to authority is necessary to maintain social stability. By facilitating individuals' integration into society, education fosters economic growth and diminishes poverty. It enables workers to enhance their skills, thereby improving the quality of goods and services produced, which ultimately fosters prosperity and enhances competitiveness. Public education is widely regarded as a long-term investment that benefits society as a whole, with primary education showing particularly high rates of return. Additionally, besides bolstering economic prosperity, education contributes to technological and scientific advancements, reduces unemployment, and promotes social equity. 
Moreover, increased education is associated with lower birth rates, partly due to heightened awareness of family planning, expanded opportunities for women, and delayed marriage. Education plays a pivotal role in equipping a country to adapt to changes and effectively confront new challenges. It raises awareness and contributes to addressing contemporary global issues, including climate change, sustainability, and the widening disparities between the rich and the poor. By instilling in students an understanding of how their lives and actions impact others, education can inspire individuals to strive towards realizing a more sustainable and equitable world. Thus, education not only serves to maintain societal norms but also acts as a catalyst for social development. This extends to evolving economic circumstances, where technological advancements, notably increased automation, impose new demands on the workforce that education can help meet. As circumstances evolve, skills and knowledge taught may become outdated, necessitating curriculum adjustments to include subjects like digital literacy, and promote proficiency in handling new technologies. Moreover, education can embrace innovative forms such as massive open online courses to prepare individuals for emerging challenges and opportunities. On a more individual level, education fosters personal development, encompassing learning new skills, honing talents, nurturing creativity, enhancing self-knowledge, and refining problem-solving and decision-making abilities. Moreover, education contributes positively to health and well-being. Educated individuals are often better informed about health issues and adjust their behavior accordingly, benefit from stronger social support networks and coping strategies, and enjoy higher incomes, granting them access to superior healthcare services. 
The social significance of education is underscored by the annual International Day of Education on January 24, established by the United Nations, which designated 1970 as the International Education Year. == Role of institutions == Organized institutions play a pivotal role in multiple facets of education. Entities such as schools, universities, teacher training institutions, and ministries of education comprise the education sector. They interact not only with one another but also with various stakeholders, including parents, local communities, religious groups, non-governmental organizations, healthcare professionals, law enforcement agencies, media platforms, and political leaders. Numerous individuals are directly engaged in the education sector, such as students, teachers, school principals, as well as school nurses and curriculum developers. Various aspects of formal education are regulated by the policies of governmental institutions. These policies determine at what age children need to attend school and at what times classes are held, as well as issues pertaining to the school environment, such as infrastructure. Regulations also cover the exact qualifications and requirements that teachers need to fulfill. An important aspect of education policy concerns the curriculum used for teaching at schools, colleges, and universities. A curriculum is a plan of instruction or a program of learning that guides students to achieve their educational goals. The topics are usually selected based on their importance and depend on the type of school. The goals of public school curricula are usually to offer a comprehensive and well-rounded education, while vocational training focuses more on specific practical skills within a field. The curricula also cover various aspects besides the topic to be discussed, such as the teaching method, the objectives to be reached, and the standards for assessing progress. 
By determining the curricula, governmental institutions have a strong impact on what knowledge and skills are transmitted to the students. Examples of governmental institutions include the Ministry of Education in India, the Department of Basic Education in South Africa, and the Secretariat of Public Education in Mexico. International organizations also play a pivotal role in education. For example, UNESCO is an intergovernmental organization that promotes education through various means. One of its activities is advocating for education policies, such as the treaty Convention on the Rights of the Child, which declares education as a fundamental human right for all children and young people. The Education for All initiative aimed to provide basic education to all children, adolescents, and adults by 2015, later succeeded by the Sustainable Development Goals initiative, particularly goal 4. Related policies include the Convention against Discrimination in Education and the Futures of Education initiative. Some influential organizations are non-governmental rather than intergovernmental. For instance, the International Association of Universities promotes collaboration and knowledge exchange among colleges and universities worldwide, while the International Baccalaureate offers international diploma programs. Institutions like the Erasmus Programme facilitate student exchanges between countries, while initiatives such as the Fulbright Program provide similar services for teachers. == Factors of educational success == Educational success, also referred to as student and academic achievement, pertains to the extent to which educational objectives are met, such as the acquisition of knowledge and skills by students. 
For practical purposes, it is often primarily measured in terms of official exam scores, but numerous additional indicators exist, including attendance rates, graduation rates, dropout rates, student attitudes, and post-school indicators such as later income and incarceration rates. Several factors influence educational achievement, such as psychological factors related to the individual student, and sociological factors associated with the student's social environment. Additional factors encompass access to educational technology, teacher quality, and parental involvement. Many of these factors overlap and mutually influence each other. === Psychological === On a psychological level, relevant factors include motivation, intelligence, and personality. Motivation is the internal force propelling people to engage in learning. Motivated students are more likely to interact with the content to be learned by participating in classroom activities like discussions, resulting in a deeper understanding of the subject. Motivation can also help students overcome difficulties and setbacks. An important distinction lies between intrinsic and extrinsic motivation. Intrinsically motivated students are driven by an interest in the subject and the learning experience itself. Extrinsically motivated students seek external rewards such as good grades and recognition from peers. Intrinsic motivation tends to be more beneficial, leading to increased creativity, engagement, and long-term commitment. Educational psychologists aim to discover methods to increase motivation, such as encouraging healthy competition among students while maintaining a balance of positive and negative feedback through praise and constructive criticism. Intelligence significantly influences individuals' responses to education. It is a cognitive trait associated with the capacity to learn from experience, comprehend, and apply knowledge and skills to solve problems. 
Individuals with higher scores in intelligence metrics typically perform better academically and pursue higher levels of education. Intelligence is often closely associated with the concept of IQ, a standardized numerical measure assessing intelligence based on mathematical-logical and verbal abilities. However, it has been argued that intelligence encompasses various types beyond IQ. Psychologist Howard Gardner posited distinct forms of intelligence in domains such as mathematics, logic, spatial cognition, language, and music. Additional types of intelligence influence interpersonal and intrapersonal interactions. These intelligences are largely autonomous, meaning that an individual may excel in one type while performing less well in another. According to proponents of learning style theory, the preferred method of acquiring knowledge and skills is another factor. They hold that students with an auditory learning style find it easy to comprehend spoken lectures and discussions, whereas visual learners benefit from information presented visually, such as in diagrams and videos. To facilitate efficient learning, it may be advantageous to incorporate a wide variety of learning modalities. Learning style theory has been criticized on the grounds that empirical evidence of benefits to students is ambiguous and that teachers cannot reliably assess students' learning styles. The learner's personality may also influence educational achievement. For instance, characteristics such as conscientiousness and openness to experience, identified in the Big Five personality traits, are associated with academic success. Other mental factors include self-efficacy, self-esteem, and metacognitive abilities. === Sociological === Sociological factors center not on the psychological attributes of learners but on their environment and societal position. 
These factors encompass socioeconomic status, ethnicity, cultural background, and gender, drawing significant interest from researchers due to their association with inequality and discrimination. Consequently, they play a pivotal role in policy-making endeavors aimed at mitigating their impact. Socioeconomic status is influenced by factors beyond just income, including financial security, social status, social class, and various attributes related to quality of life. Low socioeconomic status impacts educational success in several ways. It correlates with slower cognitive development in language and memory, as well as higher dropout rates. Families with limited financial means may struggle to meet their children's basic nutritional needs, hindering their development. Additionally, they may lack resources to invest in educational materials such as stimulating toys, books, and computers. Financial constraints may also prevent attendance at prestigious schools, leading to enrollment in institutions located in economically disadvantaged areas. Such schools often face challenges such as teacher shortages and inadequate educational materials and facilities like libraries, resulting in lower teaching standards. Moreover, parents may be unable to afford private lessons for children falling behind academically. In some cases, students from economically disadvantaged backgrounds are compelled to drop out of school to contribute to family income. Limited access to information about higher education and challenges in securing and repaying student loans further exacerbate the situation. Low socioeconomic status is also associated with poorer physical and mental health, contributing to a cycle of social inequality that persists across generations. Ethnic background correlates with cultural distinctions and language barriers, which can pose challenges for students in adapting to the school environment and comprehending classes. 
Moreover, explicit and implicit biases and discrimination against ethnic minorities further compound these difficulties. Such biases can impact students' self-esteem, motivation, and access to educational opportunities. For instance, teachers may harbor stereotypical perceptions, albeit not overtly racist, leading to differential grading of comparable performances based on a child's ethnicity. Historically, gender has played a pivotal role in education as societal norms dictated distinct roles for men and women. Education traditionally favored men, who were tasked with providing for the family, while women were expected to manage households and care for children, often limiting their access to education. Although these disparities have improved in many modern societies, gender differences persist in education. This includes biases and stereotypes related to gender roles in various academic domains, notably in fields such as science, technology, engineering, and mathematics (STEM), which are often portrayed as male-dominated. Such perceptions can deter female students from pursuing these subjects. In various instances, discrimination based on gender and social factors occurs openly as part of official educational policies, such as the severe restrictions imposed on female education by the Taliban in Afghanistan, and the school segregation of migrants and locals in urban China under the hukou system. One facet of several social factors is characterized by the expectations linked to stereotypes. These expectations operate externally, influenced by how others respond to individuals belonging to specific groups, and internally, shaped by how individuals internalize and conform to them. In this regard, these expectations can manifest as self-fulfilling prophecies by affecting the educational outcomes they predict. Such outcomes may be influenced by both positive and negative stereotypes. 
=== Technology and others === Technology plays a crucial role in educational success. While educational technology is often linked with modern digital devices such as computers, its scope extends far beyond that. It encompasses a diverse array of resources and tools for learning, including traditional aids like books and worksheets, in addition to digital devices. Educational technology can enhance learning in various ways. In the form of media, it often serves as the primary source of information in the classroom, allowing teachers to allocate their time and energy to other tasks such as lesson planning, student guidance, and performance assessment. By presenting information using graphics, audio, and video instead of mere text, educational technology can also enhance comprehension. Interactive elements, such as educational games, further engage learners in the learning process. Moreover, technology facilitates the accessibility of educational materials to a wide audience, particularly through online resources, while also promoting collaboration among students and communication with teachers. The integration of artificial intelligence in education holds promise for providing new learning experiences to students and supporting teachers in their work. However, it also introduces new risks related to data privacy, misinformation, and manipulation. Various organizations advocate for student access to educational technologies, including initiatives such as One Laptop per Child, the African Library Project, and Pratham. School infrastructure also plays a crucial role in educational success. It encompasses physical aspects such as the school's location, size, and available facilities and equipment. A healthy and safe environment, well-maintained classrooms, appropriate classroom furniture, as well as access to a library and a canteen, all contribute to fostering educational success. 
Additionally, the quality of teachers significantly impacts student achievement. Skilled teachers possess the ability to motivate and inspire students, and tailor instructions to individual abilities and needs. Their skills depend on their own education, training, and teaching experience. A meta-analysis by Engin Karadağ et al. concludes that, compared to other influences, factors related to the school and the teacher have the greatest impact on educational success. Parent involvement also enhances achievement and can increase children's motivation and commitment when they know their parents are invested in their educational endeavors. This often results in heightened self-esteem, improved attendance rates, and more positive behavior at school. Parent involvement covers communication with teachers and other school staff to raise awareness of current issues and explore potential resolutions. Other relevant factors, occasionally addressed in academic literature, encompass historical, political, demographic, religious, and legal aspects. == Education studies == The primary field exploring education is known as education studies, also termed education sciences. It seeks to understand how knowledge is transmitted and acquired by examining various methods and forms of education. This discipline delves into the goals, impacts, and significance of education, along with the cultural, societal, governmental, and historical contexts that influence it. Education theorists draw insights from various disciplines, including philosophy, psychology, sociology, economics, history, politics, and international relations. Consequently, some argue that education studies lacks the clear methodological and subject delineations found in disciplines like physics or history. Education studies focuses on academic analysis and critical reflection and differs in this respect from teacher training programs, which show participants how to become effective teachers. 
Furthermore, it encompasses not only formal education but also explores all forms and facets of educational processes. Various research methods are utilized to investigate educational phenomena, broadly categorized into quantitative, qualitative, and mixed-methods approaches. Quantitative research mirrors the methodologies of the natural sciences, employing precise numerical measurements to collect data from numerous observations and utilizing statistical tools for analysis. Its goal is to attain an objective and impartial understanding. Conversely, qualitative research typically involves a smaller sample size and seeks to gain a nuanced insight into subjective and personal factors, such as individuals' experiences within the educational process. Mixed-methods research aims to integrate data gathered from both approaches to achieve a balanced and comprehensive understanding. Data collection methods vary and may include direct observation, test scores, interviews, and questionnaires. Research projects may investigate fundamental factors influencing all forms of education or focus on specific applications, seek solutions to particular problems, or evaluate the effectiveness of educational initiatives and policies. === Subfields === Education studies encompasses various subfields such as pedagogy, educational research, comparative education, and the philosophy, psychology, sociology, economics, and history of education. The philosophy of education is the branch of applied philosophy that examines many of the fundamental assumptions underlying the theory and practice of education. It explores education both as a process and a discipline while seeking to provide precise definitions of its nature and distinctions from other phenomena. Additionally, it delves into the purpose of education, its various types, and the conceptualization of teachers, students, and their relationship. 
Furthermore, it encompasses educational ethics, which examines the moral implications of education, such as the ethical principles guiding it and how teachers should apply them to specific situations. The philosophy of education boasts a long history and was a subject of discourse in ancient Greek philosophy. The term "pedagogy" is sometimes used interchangeably with education studies, but in a more specific sense, it refers to the subfield focused on teaching methods. It investigates how educational objectives, such as knowledge transmission or the development of skills and character traits, can be achieved. Pedagogy is concerned with the methods and techniques employed in teaching within conventional educational settings. While some definitions confine it to this context, in a broader sense, it encompasses all forms of education, including teaching methods beyond traditional school environments. In this broader context, it explores how teachers can facilitate learning experiences for students to enhance their understanding of the subject matter and how learning itself occurs. The psychology of education delves into the mental processes underlying learning, focusing on how individuals acquire new knowledge and skills and experience personal development. It investigates the various factors influencing educational outcomes, how these factors vary among individuals, and the extent to which nature or nurture contribute to these outcomes. Key psychological theories shaping education encompass behaviorism, cognitivism, and constructivism. Related disciplines include educational neuroscience and the neurology of education, which explore the neuropsychological processes and changes associated with learning. The field of sociology of education delves into how education shapes socialization, examining how social factors and ideologies influence access to education and individual success within it. 
It explores the impact of education on different societal groups and its role in shaping personal identity. Specifically, the sociology of education focuses on understanding the root causes of inequalities, offering insights relevant to education policy aimed at identifying and addressing factors contributing to inequality. Two prominent perspectives within this field are consensus theory and conflict theory. Consensus theorists posit that education benefits society by preparing individuals for their societal roles, while conflict theorists view education as a tool employed by the ruling class to perpetuate inequalities. The field of economics of education investigates the production, distribution, and consumption of education. It seeks to optimize resource allocation to enhance education, such as assessing the impact of increased teacher salaries on teacher quality. Additionally, it explores the effects of smaller class sizes and investments in new educational technologies. By providing insights into resource allocation, the economics of education aids policymakers in making decisions that maximize societal benefits. Furthermore, it examines the long-term economic implications of education, including its role in fostering a highly skilled workforce and enhancing national competitiveness. A related area of interest involves analyzing the economic advantages and disadvantages of different educational systems. Comparative education is the discipline that examines and contrasts education systems. Comparisons can occur from a general perspective or focus on specific factors like social, political, or economic aspects. Often applied to different countries, comparative education assesses the similarities and differences of their educational institutions and practices, evaluating the consequences of distinct approaches. It can be used to glean insights from other countries on effective education policies and how one's own system may be improved. 
This practice, known as policy borrowing, presents challenges as policy success can hinge on the social and cultural context of students and teachers. A related and contentious topic concerns whether the educational systems of developed countries are superior and should be exported to less developed ones. Other key topics include the internationalization of education and the role of education in transitioning from authoritarian regimes to democracies. The history of education delves into the evolution of educational practices, systems, and institutions. It explores various key processes, their potential causes and effects, and their interrelations. === Aims and ideologies === A central topic in education studies revolves around how people should be educated and what goals should guide this process. Various aims have been proposed, including the acquisition of knowledge and skills, personal development, and the cultivation of character traits. Commonly suggested attributes encompass qualities like curiosity, creativity, rationality, and critical thinking, along with tendencies to think, feel, and act morally. Scholars diverge on whether to prioritize liberal values such as freedom, autonomy, and open-mindedness, or qualities like obedience to authority, ideological purity, piety, and religious faith. Some education theorists concentrate on a single overarching purpose of education, viewing more specific aims as means to this end. At a personal level, this purpose is often equated with assisting the student in leading a good life. Societally, education aims to cultivate individuals into productive members of society. There is debate regarding whether the primary aim of education is to benefit the educated individual or society as a whole. Educational ideologies encompass systems of fundamental philosophical assumptions and principles utilized to interpret, understand, and assess existing educational practices and policies. 
They address various aspects beyond the aims of education, including the subjects taught, the structure of learning activities, the role of teachers, methods for assessing educational progress, and the design of institutional frameworks and policies. These ideologies are diverse and often interrelated. Teacher-centered ideologies prioritize the role of teachers in imparting knowledge to students, while student-centered ideologies afford students a more active role in the learning process. Process-based ideologies focus on the methods of teaching and learning, contrasting with product-based ideologies, which consider education in terms of the desired outcomes. Conservative ideologies uphold traditional practices, whereas progressive ideologies advocate for innovation and creativity. Additional categories are humanism, romanticism, essentialism, encyclopaedism, pragmatism, as well as authoritarian and democratic ideologies. === Learning theories === Learning theories attempt to elucidate the mechanisms underlying learning. Influential theories include behaviorism, cognitivism, and constructivism. Behaviorism posits that learning entails a modification in behavior in response to environmental stimuli. This occurs through the presentation of a stimulus, the association of this stimulus with the desired response, and the reinforcement of this stimulus-response connection. Cognitivism views learning as a transformation in cognitive structures and emphasizes the mental processes involved in encoding, retrieving, and processing information. Constructivism asserts that learning is grounded in the individual's personal experiences and places greater emphasis on social interactions and their interpretation by the learner. These theories carry significant implications for instructional practices. 
For instance, behaviorists often emphasize repetitive drills, cognitivists may advocate for mnemonic techniques, and constructivists typically employ collaborative learning strategies. Various theories suggest that learning is more effective when it is based on personal experience. Additionally, aiming for a deeper understanding by connecting new information to pre-existing knowledge is considered more beneficial than simply memorizing a list of unrelated facts. An influential developmental theory of learning was proposed by psychologist Jean Piaget, who outlined four stages of learning through which children progress on their way to adulthood: the sensorimotor, pre-operational, concrete operational, and formal operational stages. These stages correspond to different levels of abstraction, with early stages focusing more on simple sensory and motor activities, while later stages involve more complex internal representations and information processing, such as logical reasoning. === Teaching methods === The teaching method pertains to how the content is delivered by the teacher, such as whether group work is employed rather than focusing on individual learning. There is a wide array of teaching methods available, and the most effective one in a given scenario depends on factors like the subject matter and the learner's age and level of competence. This is reflected in modern school systems, which organize students into different classes based on age, competence, specialization, and native language to ensure an effective learning process. Different subjects often employ distinct approaches; for example, language education frequently emphasizes verbal learning, while mathematical education focuses on abstract and symbolic thinking alongside deductive reasoning. One crucial aspect of teaching methodologies is ensuring that learners remain motivated, either through intrinsic factors like interest and curiosity or through external rewards. 
The teaching method also includes the utilization of instructional media, such as books, worksheets, and audio-visual recordings, as well as implementing some form of test or evaluation to gauge learning progress. Educational assessment is the process of documenting the student's knowledge and skills, which can happen formally or informally and may take place before, during, or after the learning activity. Another significant pedagogical element in many modern educational approaches is that each lesson is part of a broader educational framework governed by a syllabus, which often spans several months or years. According to Herbartianism, teaching is broken down into phases. The initial phase involves preparing the student's mind for new information. Subsequently, new ideas are introduced to the learner and then linked to concepts already familiar to them. In later phases, understanding transitions to a more general level beyond specific instances, and the ideas are then applied in practical contexts. == History == The history of education delves into the processes, methods, and institutions entwined with teaching and learning, aiming to elucidate their interplay and influence on educational practices over time. === Prehistory === Education during prehistory primarily facilitated enculturation, emphasizing practical knowledge and skills essential for daily life, such as food production, clothing, shelter, and safety. Formal schools and specialized instructors were absent, with adults in the community assuming teaching roles, and learning transpiring informally through daily activities, including observation and imitation of elders. In oral societies, storytelling served as a pivotal means of transmitting cultural and religious beliefs across generations. 
With the advent of agriculture during the Neolithic Revolution around 9000 BCE, a gradual educational shift toward specialization ensued, driven by the formation of larger communities and the demand for increasingly intricate artisanal and technical skills. === Ancient era === Commencing in the 4th millennium BCE and spanning subsequent eras, a pivotal transformation in educational methodologies unfolded with the advent of writing in regions such as Mesopotamia, ancient Egypt, the Indus Valley, and ancient China. This breakthrough profoundly influenced the trajectory of education. Writing facilitated the storage, preservation, and dissemination of information, ushering in subsequent advancements such as the creation of educational aids like textbooks and the establishment of institutions such as schools. Another significant aspect of ancient education was the establishment of formal education. This became necessary as civilizations evolved and the volume of knowledge expanded, surpassing what informal education could effectively transmit across generations. Teachers assumed specialized roles to impart knowledge, leading to a more abstract educational approach less tied to daily life. Formal education remained relatively rare in ancient societies, primarily accessible to the intellectual elite. It covered fields like reading and writing, record keeping, leadership, civic and political life, religion, and technical skills associated with specific professions. Formal education introduced a new teaching paradigm that emphasized discipline and drills over the informal methods prevalent earlier. Two notable achievements of ancient education include the founding of Plato's Academy in Ancient Greece, often regarded as the earliest institution of higher learning, and the establishment of the Great Library of Alexandria in Ancient Egypt, renowned as one of the ancient world's premier libraries. 
=== Medieval era === Many facets of education during the medieval period were profoundly influenced by religious traditions. In Europe, the Catholic Church wielded considerable authority over formal education. In the Arab world, the rapid spread of Islam led to various educational advancements during the Islamic Golden Age, integrating classical and religious knowledge and establishing madrasa schools. In Jewish communities, yeshivas emerged as institutions dedicated to the study of religious texts and Jewish law. In China, an expansive state educational and examination system, shaped by Confucian teachings, was instituted. As new complex societies emerged in regions like Africa, the Americas, Northern Europe, and Japan, some adopted existing educational practices, while others developed new traditions. Additionally, this era witnessed the establishment of various institutes of higher education and research. Prominent among these were the University of Bologna (the world's oldest university in continuous operation), the University of Paris, and Oxford University in Europe. Other influential centers included the Al-Qarawiyyin University in Morocco, Al-Azhar University in Egypt, and the House of Wisdom in Iraq. Another significant development was the formation of guilds, associations of skilled craftsmen and merchants who regulated their trades and provided vocational education. Prospective members underwent various stages of training on their journey to mastery. === Modern era === Starting in the early modern period, education in Europe during the Renaissance slowly began to shift from a religious approach towards one that was more secular. This development was tied to an increased appreciation of the importance of education and a broadened range of topics, including a revived interest in ancient literary texts and educational programs. 
The turn toward secularization was accelerated during the Age of Enlightenment starting in the 17th century, which emphasized the role of reason and the empirical sciences. European colonization affected education in the Americas through Christian missionary initiatives. In China, the state educational system was further expanded and focused more on the teachings of neo-Confucianism. In the Islamic world, the reach of formal education increased and remained under the influence of religion. A key development in the early modern period was the invention and popularization of the printing press in the middle of the 15th century, which had a profound impact on general education. It significantly reduced the cost of producing books, which had previously been copied by hand, and thereby augmented the dissemination of written documents, including new forms like newspapers and pamphlets. The increased availability of written media had a major influence on the general literacy of the population. These changes paved the way for the advancement of public education during the 18th and 19th centuries. This era witnessed the establishment of publicly funded schools with the goal of providing education for all, in contrast to previous periods when formal education was primarily delivered by private schools, religious institutions, and individual tutors. An exception to this trend was the Aztec civilization, where formal education was compulsory for youth across social classes as early as the 14th century. Closely related changes made education compulsory and free of charge for all children up to a certain age. === Contemporary era === The promotion of public education and universal access to education gained momentum in the 20th and 21st centuries, endorsed by intergovernmental organizations such as the UN. 
Key initiatives included the Universal Declaration of Human Rights, the Convention on the Rights of the Child, the Education for All initiative, the Millennium Development Goals, and the Sustainable Development Goals. These endeavors led to a consistent increase in all forms of education, particularly impacting primary education. In 1970, 28% of all primary-school-age children worldwide were not enrolled in school; by 2015, this figure had decreased to 9%. The establishment of public education was accompanied by the introduction of standardized curricula for public schools as well as standardized tests to assess the progress of students. Contemporary examples are the Test of English as a Foreign Language, which is a globally used test to assess language proficiency in non-native English speakers, and the Programme for International Student Assessment, which evaluates education systems across the world based on the performance of 15-year-old students in reading, mathematics, and science. Similar shifts impacted teachers, with the establishment of institutions and norms to regulate and oversee teacher training, including certification mandates for teaching in public schools. Emerging educational technologies have significantly influenced modern education. The widespread availability of computers and the internet has notably expanded access to educational resources and facilitated new forms of learning, such as online education. This became particularly pertinent during the COVID-19 pandemic when schools worldwide closed for prolonged periods, prompting many to adopt remote learning methods through video conferencing or pre-recorded video lessons to sustain instruction. Additionally, contemporary education is impacted by the increasing globalization and internationalization of educational practices. == See also == == References == === Notes === === Citations === === Sources === == External links == Education – OECD Education – UNESCO Education – World Bank
|
Wikipedia:Equiareal map#0
|
In differential geometry, an equiareal map, sometimes called an authalic map, is a smooth map from one surface to another that preserves the areas of figures. == Properties == If M and N are two Riemannian (or pseudo-Riemannian) surfaces, then an equiareal map f from M to N can be characterized by any of the following equivalent conditions: The surface area of f(U) is equal to the area of U for every open set U on M. The pullback of the area element μN on N is equal to μM, the area element on M. At each point p of M, and tangent vectors v and w to M at p, | d f p ( v ) ∧ d f p ( w ) | = | v ∧ w | {\displaystyle {\bigl |}df_{p}(v)\wedge df_{p}(w){\bigr |}=|v\wedge w|\,} where ∧ {\textstyle \wedge } denotes the Euclidean wedge product of vectors and df denotes the pushforward along f. == Example == An example of an equiareal map, due to Archimedes of Syracuse, is the projection from the unit sphere x2 + y2 + z2 = 1 to the unit cylinder x2 + y2 = 1 outward from their common axis. An explicit formula is f ( x , y , z ) = ( x x 2 + y 2 , y x 2 + y 2 , z ) {\displaystyle f(x,y,z)=\left({\frac {x}{\sqrt {x^{2}+y^{2}}}},{\frac {y}{\sqrt {x^{2}+y^{2}}}},z\right)} for (x, y, z) a point on the unit sphere. == Linear transformations == Every Euclidean isometry of the Euclidean plane is equiareal, but the converse is not true. In fact, shear mapping and squeeze mapping are counterexamples to the converse. Shear mapping takes a rectangle to a parallelogram of the same area. Written in matrix form, a shear mapping along the x-axis is ( 1 v 0 1 ) ( x y ) = ( x + v y y ) . {\displaystyle {\begin{pmatrix}1&v\\0&1\end{pmatrix}}\,{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}x+vy\\y\end{pmatrix}}.} Squeeze mapping lengthens and contracts the sides of a rectangle in a reciprocal manner so that the area is preserved. Written in matrix form, with λ > 1 the squeeze reads ( λ 0 0 1 / λ ) ( x y ) = ( λ x y / λ . 
) {\displaystyle {\begin{pmatrix}\lambda &0\\0&1/\lambda \end{pmatrix}}\,{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}\lambda x\\y/\lambda .\end{pmatrix}}} A linear transformation ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} multiplies areas by the absolute value of its determinant |ad – bc|. Gaussian elimination shows that every equiareal linear transformation (rotations included) can be obtained by composing at most two shears along the axes, a squeeze and (if the determinant is negative), a reflection. == In map projections == In the context of geographic maps, a map projection is called equal-area, equivalent, authalic, equiareal, or area-preserving, if areas are preserved up to a constant factor; embedding the target map, usually considered a subset of R2, in the obvious way in R3, the requirement above then is weakened to: | d f p ( v ) × d f p ( w ) | = κ | v × w | {\displaystyle |df_{p}(v)\times df_{p}(w)|=\kappa |v\times w|} for some κ > 0 not depending on v {\displaystyle v} and w {\displaystyle w} . For examples of such projections, see equal-area map projection. == See also == Jacobian matrix and determinant == References == Pressley, Andrew (2001), Elementary differential geometry, Springer Undergraduate Mathematics Series, London: Springer-Verlag, ISBN 978-1-85233-152-8, MR 1800436
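The determinant criterion above lends itself to a direct numerical check. The following is an illustrative sketch (an editorial addition, not drawn from the article's sources) confirming in Python that the shear and squeeze families have |det| = 1, while a generic matrix scales areas by |ad − bc|:

```python
# Illustrative sketch: a 2x2 linear map scales areas by |det| = |a*d - b*c|,
# so the shear and squeeze families below are equiareal (|det| = 1).

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def shear(v):
    """Shear along the x-axis: (x, y) -> (x + v*y, y)."""
    return ((1.0, v), (0.0, 1.0))

def squeeze(lam):
    """Squeeze mapping: (x, y) -> (lam*x, y/lam), for lam > 0."""
    return ((lam, 0.0), (0.0, 1.0 / lam))

# Shears and squeezes preserve area for every parameter value.
for v in (-2.0, 0.5, 3.0):
    assert abs(det2(shear(v))) == 1.0
for lam in (0.5, 2.0, 7.0):
    assert abs(abs(det2(squeeze(lam))) - 1.0) < 1e-12

# A generic matrix multiplies areas by |det|: here |2*4 - 1*3| = 5.
assert abs(det2(((2.0, 1.0), (3.0, 4.0)))) == 5.0
```

Since determinants multiply under composition, any composite of shears, squeezes, rotations and reflections again has |det| = 1, matching the decomposition result stated above.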
|
Wikipedia:Equicontinuity#0
|
In mathematical analysis, a family of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood, in a precise sense described herein. In particular, the concept applies to countable families, and thus sequences of functions. Equicontinuity appears in the formulation of Ascoli's theorem, which states that a subset of C(X), the space of continuous functions on a compact Hausdorff space X, is compact if and only if it is closed, pointwise bounded and equicontinuous. As a corollary, a sequence in C(X) is uniformly convergent if and only if it is equicontinuous and converges pointwise to a function (not necessarily continuous a priori). In particular, the limit of an equicontinuous pointwise convergent sequence of continuous functions fn on either a metric space or a locally compact space is continuous. If, in addition, the fn are holomorphic, then the limit is also holomorphic. The uniform boundedness principle states that a pointwise bounded family of continuous linear operators between Banach spaces is equicontinuous. == Equicontinuity between metric spaces == Let X and Y be two metric spaces, and F a family of functions from X to Y. We shall denote by d the respective metrics of these spaces. The family F is equicontinuous at a point x0 ∈ X if for every ε > 0, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all ƒ ∈ F and all x such that d(x0, x) < δ. The family is pointwise equicontinuous if it is equicontinuous at each point of X. The family F is uniformly equicontinuous if for every ε > 0, there exists a δ > 0 such that d(ƒ(x1), ƒ(x2)) < ε for all ƒ ∈ F and all x1, x2 ∈ X such that d(x1, x2) < δ. For comparison, the statement 'all functions ƒ in F are continuous' means that for every ε > 0, every ƒ ∈ F, and every x0 ∈ X, there exists a δ > 0 such that d(ƒ(x0), ƒ(x)) < ε for all x ∈ X such that d(x0, x) < δ. For continuity, δ may depend on ε, ƒ, and x0. For uniform continuity, δ may depend on ε and ƒ.
For pointwise equicontinuity, δ may depend on ε and x0. For uniform equicontinuity, δ may depend only on ε. More generally, when X is a topological space, a set F of functions from X to Y is said to be equicontinuous at x if for every ε > 0, x has a neighborhood Ux such that d Y ( f ( y ) , f ( x ) ) < ϵ {\displaystyle d_{Y}(f(y),f(x))<\epsilon } for all y ∈ Ux and ƒ ∈ F. This definition usually appears in the context of topological vector spaces. When X is compact, a set is uniformly equicontinuous if and only if it is equicontinuous at every point, for essentially the same reason that uniform continuity and continuity coincide on compact spaces. Used on its own, the term "equicontinuity" may refer to either the pointwise or uniform notion, depending on the context. On a compact space, these notions coincide. Some basic properties follow immediately from the definition. Every finite set of continuous functions is equicontinuous. The closure of an equicontinuous set is again equicontinuous. Every member of a uniformly equicontinuous set of functions is uniformly continuous, and every finite set of uniformly continuous functions is uniformly equicontinuous. === Examples === A set of functions with a common Lipschitz constant is (uniformly) equicontinuous. In particular, this is the case if the set consists of functions with derivatives bounded by the same constant. The uniform boundedness principle gives a sufficient condition for a set of continuous linear operators to be equicontinuous. A family of iterates of an analytic function is equicontinuous on the Fatou set. === Counterexamples === The sequence of functions fn(x) = arctan(nx) is not equicontinuous because the definition is violated at x0 = 0. == Equicontinuity of maps valued in topological groups == Suppose that T is a topological space and Y is an additive topological group (i.e. a group endowed with a topology making its operations continuous).
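The Lipschitz example and the arctan counterexample above can be checked numerically. The following Python snippet is an illustrative sketch (an editorial addition; a finite sample of parameters n stands in for the whole family):

```python
import math

# Illustrative sketch: {f_n(x) = arctan(n*x)} is not equicontinuous at
# x0 = 0, because for any delta > 0 some member of the family already
# varies by nearly pi/2 on (-delta, delta).

def worst_variation(delta):
    """max over sampled n of |f_n(delta) - f_n(0)|, with f_n(x) = atan(n*x)."""
    return max(abs(math.atan(n * delta)) for n in (1, 10, 100, 10**6, 10**9))

eps = 0.1
for delta in (1e-1, 1e-3, 1e-5):
    # Shrinking delta never forces the variation below eps.
    assert worst_variation(delta) > eps

# Contrast: for a family with a common Lipschitz constant L, the single
# choice delta = eps / L works for every member at once (uniform
# equicontinuity), since |f(x1) - f(x2)| <= L * |x1 - x2|.
L = 2.0
delta = eps / L
assert L * delta <= eps
```

The point of the sketch is that for the arctan family no single δ serves all n at once, whereas a common Lipschitz constant yields one δ for the entire family.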
Topological vector spaces are prominent examples of topological groups and every topological group has an associated canonical uniformity. Definition: A family H of maps from T into Y is said to be equicontinuous at t ∈ T if for every neighborhood V of 0 in Y, there exists some neighborhood U of t in T such that h(U) ⊆ h(t) + V for every h ∈ H. We say that H is equicontinuous if it is equicontinuous at every point of T. Note that if H is equicontinuous at a point then every map in H is continuous at the point. Clearly, every finite set of continuous maps from T into Y is equicontinuous. == Equicontinuous linear maps == Because every topological vector space (TVS) is a topological group, the definition of an equicontinuous family of maps given for topological groups transfers to TVSs without change. === Characterization of equicontinuous linear maps === A family H {\displaystyle H} of maps of the form X → Y {\displaystyle X\to Y} between two topological vector spaces is said to be equicontinuous at a point x ∈ X {\displaystyle x\in X} if for every neighborhood V {\displaystyle V} of the origin in Y {\displaystyle Y} there exists some neighborhood U {\displaystyle U} of the origin in X {\displaystyle X} such that h ( x + U ) ⊆ h ( x ) + V {\displaystyle h(x+U)\subseteq h(x)+V} for all h ∈ H . {\displaystyle h\in H.} If H {\displaystyle H} is a family of maps and U {\displaystyle U} is a set then let H ( U ) := ⋃ h ∈ H h ( U ) . {\displaystyle H(U):=\bigcup _{h\in H}h(U).} With notation, if U {\displaystyle U} and V {\displaystyle V} are sets then h ( U ) ⊆ V {\displaystyle h(U)\subseteq V} for all h ∈ H {\displaystyle h\in H} if and only if H ( U ) ⊆ V . {\displaystyle H(U)\subseteq V.} Let X {\displaystyle X} and Y {\displaystyle Y} be topological vector spaces (TVSs) and H {\displaystyle H} be a family of linear operators from X {\displaystyle X} into Y . 
{\displaystyle Y.} Then the following are equivalent: H {\displaystyle H} is equicontinuous; H {\displaystyle H} is equicontinuous at every point of X . {\displaystyle X.} H {\displaystyle H} is equicontinuous at some point of X . {\displaystyle X.} H {\displaystyle H} is equicontinuous at the origin. that is, for every neighborhood V {\displaystyle V} of the origin in Y , {\displaystyle Y,} there exists a neighborhood U {\displaystyle U} of the origin in X {\displaystyle X} such that H ( U ) ⊆ V {\displaystyle H(U)\subseteq V} (or equivalently, h ( U ) ⊆ V {\displaystyle h(U)\subseteq V} for every h ∈ H {\displaystyle h\in H} ). for every neighborhood V {\displaystyle V} of the origin in Y , {\displaystyle Y,} ⋂ h ∈ H h − 1 ( V ) {\displaystyle \bigcap _{h\in H}h^{-1}(V)} is a neighborhood of the origin in X . {\displaystyle X.} the closure of H {\displaystyle H} in L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} is equicontinuous. L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} denotes L ( X ; Y ) {\displaystyle L(X;Y)} endowed with the topology of point-wise convergence. the balanced hull of H {\displaystyle H} is equicontinuous. while if Y {\displaystyle Y} is locally convex then this list may be extended to include: the convex hull of H {\displaystyle H} is equicontinuous. the convex balanced hull of H {\displaystyle H} is equicontinuous. while if X {\displaystyle X} and Y {\displaystyle Y} are locally convex then this list may be extended to include: for every continuous seminorm q {\displaystyle q} on Y , {\displaystyle Y,} there exists a continuous seminorm p {\displaystyle p} on X {\displaystyle X} such that q ∘ h ≤ p {\displaystyle q\circ h\leq p} for all h ∈ H . {\displaystyle h\in H.} Here, q ∘ h ≤ p {\displaystyle q\circ h\leq p} means that q ( h ( x ) ) ≤ p ( x ) {\displaystyle q(h(x))\leq p(x)} for all x ∈ X . 
{\displaystyle x\in X.} while if X {\displaystyle X} is barreled and Y {\displaystyle Y} is locally convex then this list may be extended to include: H {\displaystyle H} is bounded in L σ ( X ; Y ) {\displaystyle L_{\sigma }(X;Y)} ; H {\displaystyle H} is bounded in L b ( X ; Y ) . {\displaystyle L_{b}(X;Y).} L b ( X ; Y ) {\displaystyle L_{b}(X;Y)} denotes L ( X ; Y ) {\displaystyle L(X;Y)} endowed with the topology of bounded convergence (that is, uniform convergence on bounded subsets of X . {\displaystyle X.} while if X {\displaystyle X} and Y {\displaystyle Y} are Banach spaces then this list may be extended to include: sup { ‖ T ‖ : T ∈ H } < ∞ {\displaystyle \sup\{\|T\|:T\in H\}<\infty } (that is, H {\displaystyle H} is uniformly bounded in the operator norm). ==== Characterization of equicontinuous linear functionals ==== Let X {\displaystyle X} be a topological vector space (TVS) over the field F {\displaystyle \mathbb {F} } with continuous dual space X ′ . {\displaystyle X^{\prime }.} A family H {\displaystyle H} of linear functionals on X {\displaystyle X} is said to be equicontinuous at a point x ∈ X {\displaystyle x\in X} if for every neighborhood V {\displaystyle V} of the origin in F {\displaystyle \mathbb {F} } there exists some neighborhood U {\displaystyle U} of the origin in X {\displaystyle X} such that h ( x + U ) ⊆ h ( x ) + V {\displaystyle h(x+U)\subseteq h(x)+V} for all h ∈ H . {\displaystyle h\in H.} For any subset H ⊆ X ′ , {\displaystyle H\subseteq X^{\prime },} the following are equivalent: H {\displaystyle H} is equicontinuous. H {\displaystyle H} is equicontinuous at the origin. H {\displaystyle H} is equicontinuous at some point of X . {\displaystyle X.} H {\displaystyle H} is contained in the polar of some neighborhood of the origin in X {\displaystyle X} the (pre)polar of H {\displaystyle H} is a neighborhood of the origin in X . 
{\displaystyle X.} the weak* closure of H {\displaystyle H} in X ′ {\displaystyle X^{\prime }} is equicontinuous. the balanced hull of H {\displaystyle H} is equicontinuous. the convex hull of H {\displaystyle H} is equicontinuous. the convex balanced hull of H {\displaystyle H} is equicontinuous. while if X {\displaystyle X} is normed then this list may be extended to include: H {\displaystyle H} is a strongly bounded subset of X ′ . {\displaystyle X^{\prime }.} while if X {\displaystyle X} is a barreled space then this list may be extended to include: H {\displaystyle H} is relatively compact in the weak* topology on X ′ . {\displaystyle X^{\prime }.} H {\displaystyle H} is weak* bounded (that is, H {\displaystyle H} is σ ( X ′ , X ) − {\displaystyle \sigma \left(X^{\prime },X\right)-} bounded in X ′ {\displaystyle X^{\prime }} ). H {\displaystyle H} is bounded in the topology of bounded convergence (that is, H {\displaystyle H} is b ( X ′ , X ) − {\displaystyle b\left(X^{\prime },X\right)-} bounded in X ′ {\displaystyle X^{\prime }} ). === Properties of equicontinuous linear maps === The uniform boundedness principle (also known as the Banach–Steinhaus theorem) states that a set H {\displaystyle H} of linear maps between Banach spaces is equicontinuous if it is pointwise bounded; that is, sup h ∈ H ‖ h ( x ) ‖ < ∞ {\displaystyle \sup _{h\in H}\|h(x)\|<\infty } for each x ∈ X . {\displaystyle x\in X.} The result can be generalized to a case when Y {\displaystyle Y} is locally convex and X {\displaystyle X} is a barreled space. ==== Properties of equicontinuous linear functionals ==== Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X ′ {\displaystyle X^{\prime }} is weak-* compact; thus that every equicontinuous subset is weak-* relatively compact. 
If X {\displaystyle X} is any locally convex TVS, then the family of all barrels in X {\displaystyle X} and the family of all subsets of X ′ {\displaystyle X^{\prime }} that are convex, balanced, closed, and bounded in X σ ′ , {\displaystyle X_{\sigma }^{\prime },} correspond to each other by polarity (with respect to ⟨ X , X # ⟩ {\displaystyle \left\langle X,X^{\#}\right\rangle } ). It follows that a locally convex TVS X {\displaystyle X} is barreled if and only if every bounded subset of X σ ′ {\displaystyle X_{\sigma }^{\prime }} is equicontinuous. == Equicontinuity and uniform convergence == Let X be a compact Hausdorff space, and equip C(X) with the uniform norm, thus making C(X) a Banach space, hence a metric space. Then Arzelà–Ascoli theorem states that a subset of C(X) is compact if and only if it is closed, uniformly bounded and equicontinuous. This is analogous to the Heine–Borel theorem, which states that subsets of Rn are compact if and only if they are closed and bounded. As a corollary, every uniformly bounded equicontinuous sequence in C(X) contains a subsequence that converges uniformly to a continuous function on X. In view of Arzelà–Ascoli theorem, a sequence in C(X) converges uniformly if and only if it is equicontinuous and converges pointwise. The hypothesis of the statement can be weakened a bit: a sequence in C(X) converges uniformly if it is equicontinuous and converges pointwise on a dense subset to some function on X (not assumed continuous). This weaker version is typically used to prove Arzelà–Ascoli theorem for separable compact spaces. Another consequence is that the limit of an equicontinuous pointwise convergent sequence of continuous functions on a metric space, or on a locally compact space, is continuous. (See below for an example.) In the above, the hypothesis of compactness of X cannot be relaxed. 
To see that, consider a compactly supported continuous function g on R with g(0) = 1, and consider the equicontinuous sequence of functions {ƒn} on R defined by ƒn(x) = g(x − n). Then, ƒn converges pointwise to 0 but does not converge uniformly to 0. This criterion for uniform convergence is often useful in real and complex analysis. Suppose we are given a sequence of continuous functions that converges pointwise on some open subset G of Rn. As noted above, it actually converges uniformly on a compact subset of G if it is equicontinuous on the compact set. In practice, showing the equicontinuity is often not so difficult. For example, if the sequence consists of differentiable functions or functions with some regularity (e.g., the functions are solutions of a differential equation), then the mean value theorem or some other kinds of estimates can be used to show the sequence is equicontinuous. It then follows that the limit of the sequence is continuous on every compact subset of G; thus, continuous on G. A similar argument can be made when the functions are holomorphic. One can use, for instance, Cauchy's estimate to show the equicontinuity (on a compact subset) and conclude that the limit is holomorphic. Note that the equicontinuity is essential here. For example, ƒn(x) = arctan(nx) converges to a multiple of the discontinuous sign function. == Generalizations == === Equicontinuity in topological spaces === The most general scenario in which equicontinuity can be defined is for topological spaces, whereas uniform equicontinuity requires the filter of neighbourhoods of one point to be somehow comparable with the filter of neighbourhoods of another point. The latter is most generally done via a uniform structure, giving a uniform space.
Appropriate definitions in these cases are as follows: A set A of functions continuous between two topological spaces X and Y is topologically equicontinuous at the points x ∈ X and y ∈ Y if for any open set O about y, there are neighborhoods U of x and V of y such that for every f ∈ A, if the intersection of f[U] and V is nonempty, f[U] ⊆ O. Then A is said to be topologically equicontinuous at x ∈ X if it is topologically equicontinuous at x and y for each y ∈ Y. Finally, A is equicontinuous if it is equicontinuous at x for all points x ∈ X. A set A of continuous functions between two uniform spaces X and Y is uniformly equicontinuous if for every element W of the uniformity on Y, the set { (u, v) ∈ X × X : (f(u), f(v)) ∈ W for all f ∈ A } is a member of the uniformity on X. Introduction to uniform spaces: we now briefly describe the basic idea underlying uniformities. The uniformity 𝒱 is a non-empty collection of subsets of Y × Y where, among many other properties, every V ∈ 𝒱 contains the diagonal of Y (i.e. {(y, y) ∈ Y}). Every element of 𝒱 is called an entourage. Uniformities generalize the idea (taken from metric spaces) of points that are "r-close" (for r > 0), meaning that their distance is < r. To clarify this, suppose that (Y, d) is a metric space (so the diagonal of Y is the set {(y, z) ∈ Y × Y : d(y, z) = 0}). For any r > 0, let Ur = {(y, z) ∈ Y × Y : d(y, z) < r} denote the set of all pairs of points that are r-close. Note that if we were to "forget" that d existed then, for any r > 0, we would still be able to determine whether or not two points of Y are r-close by using only the sets Ur. In this way, the sets Ur encapsulate all the information necessary to define things such as uniform continuity and uniform convergence without needing any metric. Axiomatizing the most basic properties of these sets leads to the definition of a uniformity. Indeed, the sets Ur generate the uniformity that is canonically associated with the metric space (Y, d).
The benefit of this generalization is that we may now extend some important definitions that make sense for metric spaces (e.g. completeness) to a broader category of topological spaces, in particular to topological groups and topological vector spaces. A weaker concept is that of even continuity. A set A of continuous functions between two topological spaces X and Y is said to be evenly continuous at x ∈ X and y ∈ Y if given any open set O containing y, there are neighborhoods U of x and V of y such that f[U] ⊆ O whenever f(x) ∈ V. It is evenly continuous at x if it is evenly continuous at x and y for every y ∈ Y, and evenly continuous if it is evenly continuous at x for every x ∈ X. === Stochastic equicontinuity === Stochastic equicontinuity is a version of equicontinuity used in the context of sequences of functions of random variables, and their convergence. == See also == Absolute continuity – Form of continuity for functions Classification of discontinuities – Mathematical analysis of discontinuous points Coarse function Continuous function – Mathematical function with no sudden changes Continuous function (set theory) – Sequence of ordinals such that the values assumed at limit stages are the limits (limit suprema and limit infima) of all values at previous stages Continuous stochastic process – Stochastic process that is a continuous function of time or index parameter Dini continuity Direction-preserving function – An analogue of a continuous function in discrete spaces.
Microcontinuity – Mathematical term Normal function – Function of ordinals in mathematics Piecewise – Function defined by multiple sub-functions Symmetrically continuous function Uniform continuity – Uniform restraint of the change in functions == Notes == == References == "Equicontinuity", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Reed, Michael; Simon, Barry (1980), Functional Analysis (revised and enlarged ed.), Boston, MA: Academic Press, ISBN 978-0-12-585050-6. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Rudin, Walter (1987), Real and Complex Analysis (3rd ed.), New York: McGraw-Hill. Schaefer, Helmut H. (1966), Topological vector spaces, New York: The Macmillan Company Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
|
Wikipedia:Equioscillation theorem#0
|
In mathematics, the equioscillation theorem concerns the approximation of continuous functions using polynomials when the merit function is the maximum difference (uniform norm). Its discovery is attributed to Chebyshev. == Statement == Let f {\displaystyle f} be a continuous function from [ a , b ] {\displaystyle [a,b]} to R {\displaystyle \mathbb {R} } . Among all the polynomials of degree ≤ n {\displaystyle \leq n} , the polynomial g {\displaystyle g} minimizes the uniform norm of the difference ‖ f − g ‖ ∞ {\displaystyle \|f-g\|_{\infty }} if and only if there are n + 2 {\displaystyle n+2} points a ≤ x 0 < x 1 < ⋯ < x n + 1 ≤ b {\displaystyle a\leq x_{0}<x_{1}<\cdots <x_{n+1}\leq b} such that f ( x i ) − g ( x i ) = σ ( − 1 ) i ‖ f − g ‖ ∞ {\displaystyle f(x_{i})-g(x_{i})=\sigma (-1)^{i}\|f-g\|_{\infty }} where σ {\displaystyle \sigma } is either -1 or +1. == Variants == The equioscillation theorem is also valid when polynomials are replaced by rational functions: among all rational functions whose numerator has degree ≤ n {\displaystyle \leq n} and denominator has degree ≤ m {\displaystyle \leq m} , the rational function g = p / q {\displaystyle g=p/q} , with p {\displaystyle p} and q {\displaystyle q} being relatively prime polynomials of degree n − ν {\displaystyle n-\nu } and m − μ {\displaystyle m-\mu } , minimizes the uniform norm of the difference ‖ f − g ‖ ∞ {\displaystyle \|f-g\|_{\infty }} if and only if there are m + n + 2 − min { μ , ν } {\displaystyle m+n+2-\min\{\mu ,\nu \}} points a ≤ x 0 < x 1 < ⋯ < x m + n + 1 − min { μ , ν } ≤ b {\displaystyle a\leq x_{0}<x_{1}<\cdots <x_{m+n+1-\min\{\mu ,\nu \}}\leq b} such that f ( x i ) − g ( x i ) = σ ( − 1 ) i ‖ f − g ‖ ∞ {\displaystyle f(x_{i})-g(x_{i})=\sigma (-1)^{i}\|f-g\|_{\infty }} where σ {\displaystyle \sigma } is either -1 or +1. == Algorithms == Several minimax approximation algorithms are available, the most common being the Remez algorithm. 
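As a minimal illustration of the theorem (an editorial sketch, not drawn from the references), take n = 0: the best constant approximation to a continuous f on [a, b] is c = (max f + min f)/2, and the error f − c attains +E and −E at n + 2 = 2 points, where E = (max f − min f)/2. A numerical check in Python for f = exp on [0, 1]:

```python
import math

# Illustrative sketch of equioscillation for degree n = 0: the best
# constant approximant to f on [a, b] is the midpoint of its range,
# and the error equioscillates at n + 2 = 2 points.

def best_constant(f, a, b, samples=10001):
    """Return (c, E): best constant c and its uniform error E, by sampling."""
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    hi, lo = max(ys), min(ys)
    return (hi + lo) / 2.0, (hi - lo) / 2.0

c, E = best_constant(math.exp, 0.0, 1.0)
# On [0, 1], exp is increasing: min at 0 (value 1), max at 1 (value e).
assert abs(c - (math.e + 1.0) / 2.0) < 1e-9
assert abs(E - (math.e - 1.0) / 2.0) < 1e-9
# Equioscillation: the error is -E at x = 0 and +E at x = 1.
assert abs((math.exp(0.0) - c) + E) < 1e-9
assert abs((math.exp(1.0) - c) - E) < 1e-9
```

Any other constant c′ incurs a larger error at one of the two alternation points, which is the degree-0 case of the characterization above; the Remez algorithm iterates this alternation idea for higher degrees.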
== References == == External links == Notes on how to prove Chebyshev’s equioscillation theorem at the Wayback Machine (archived July 2, 2011) The Chebyshev Equioscillation Theorem by Robert Mayans The de la Vallée-Poussin alternation theorem at the Encyclopedia of Mathematics Approximation theory by Remco Bloemen
|
Wikipedia:Equivalence class#0
|
In mathematics, when the elements of some set S {\displaystyle S} have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set S {\displaystyle S} into equivalence classes. These equivalence classes are constructed so that elements a {\displaystyle a} and b {\displaystyle b} belong to the same equivalence class if, and only if, they are equivalent. Formally, given a set S {\displaystyle S} and an equivalence relation ∼ {\displaystyle \sim } on S , {\displaystyle S,} the equivalence class of an element a {\displaystyle a} in S {\displaystyle S} is denoted [ a ] {\displaystyle [a]} or, equivalently, [ a ] ∼ {\displaystyle [a]_{\sim }} to emphasize its equivalence relation ∼ {\displaystyle \sim } , and is defined as the set of all elements in S {\displaystyle S} with which a {\displaystyle a} is ∼ {\displaystyle \sim } -related. The definition of equivalence relations implies that the equivalence classes form a partition of S , {\displaystyle S,} meaning, that every element of the set belongs to exactly one equivalence class. The set of the equivalence classes is sometimes called the quotient set or the quotient space of S {\displaystyle S} by ∼ , {\displaystyle \sim ,} and is denoted by S / ∼ . {\displaystyle S/{\sim }.} When the set S {\displaystyle S} has some structure (such as a group operation or a topology) and the equivalence relation ∼ , {\displaystyle \sim ,} is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories. 
== Definition and notation == An equivalence relation on a set X {\displaystyle X} is a binary relation ∼ {\displaystyle \sim } on X {\displaystyle X} satisfying the three properties: a ∼ a {\displaystyle a\sim a} for all a ∈ X {\displaystyle a\in X} (reflexivity), a ∼ b {\displaystyle a\sim b} implies b ∼ a {\displaystyle b\sim a} for all a , b ∈ X {\displaystyle a,b\in X} (symmetry), if a ∼ b {\displaystyle a\sim b} and b ∼ c {\displaystyle b\sim c} then a ∼ c {\displaystyle a\sim c} for all a , b , c ∈ X {\displaystyle a,b,c\in X} (transitivity). The equivalence class of an element a {\displaystyle a} is defined as [ a ] = { x ∈ X : a ∼ x } . {\displaystyle [a]=\{x\in X:a\sim x\}.} The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets. The set of all equivalence classes in X {\displaystyle X} with respect to an equivalence relation R {\displaystyle R} is denoted as X / R , {\displaystyle X/R,} and is called X {\displaystyle X} modulo R {\displaystyle R} (or the quotient set of X {\displaystyle X} by R {\displaystyle R} ). The surjective map x ↦ [ x ] {\displaystyle x\mapsto [x]} from X {\displaystyle X} onto X / R , {\displaystyle X/R,} which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection. Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from X / R {\displaystyle X/R} to X. Since its composition with the canonical surjection is the identity of X / R , {\displaystyle X/R,} such an injection is called a section, when using the terminology of category theory. 
Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says congruent—if m divides a − b ; {\displaystyle a-b;} this is denoted a ≡ b ( mod m ) . {\textstyle a\equiv b{\pmod {m}}.} Each class contains a unique non-negative integer smaller than m , {\displaystyle m,} and these integers are the canonical representatives. The use of representatives for representing classes makes it possible to avoid considering classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m , {\displaystyle a{\bmod {m}},} and produces the remainder of the Euclidean division of a by m. == Properties == For a set X {\displaystyle X} with an equivalence relation ~, every element x {\displaystyle x} of X {\displaystyle X} is a member of the equivalence class [ x ] {\displaystyle [x]} by reflexivity ( a ∼ a {\displaystyle a\sim a} for all a ∈ X {\displaystyle a\in X} ). Every two equivalence classes [ x ] {\displaystyle [x]} and [ y ] {\displaystyle [y]} are either equal if x ∼ y {\displaystyle x\sim y} , or disjoint otherwise. (The proof is shown below.) Therefore, the set of all equivalence classes of X {\displaystyle X} forms a partition of X {\displaystyle X} : every element x {\displaystyle x} of X {\displaystyle X} belongs to one and only one equivalence class [ x ] {\displaystyle [x]} , which may also be the equivalence class of other elements of X {\displaystyle X} . (I.e., all elements in X {\displaystyle X} are grouped into non-empty sets that are here the equivalence classes of X {\displaystyle X} .)
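The partition property and the canonical representatives described above can be checked concretely for congruence modulo m. The following Python snippet is an illustrative sketch (an editorial addition; a finite window of integers stands in for all of Z):

```python
# Illustrative sketch: congruence modulo m partitions the integers into
# exactly m classes, and "a mod m" picks the canonical representative.

m = 5
LO, HI = -20, 20  # finite window of Z used for the check

def cls(a):
    """The equivalence class of a, intersected with the window."""
    return frozenset(x for x in range(LO, HI) if (x - a) % m == 0)

classes = {cls(a) for a in range(LO, HI)}

# Exactly m classes, pairwise equal or disjoint, covering the window.
assert len(classes) == m
assert all(c1 == c2 or not (c1 & c2) for c1 in classes for c2 in classes)
assert set().union(*classes) == set(range(LO, HI))

# Canonical representatives: each class meets [0, m) in a single integer,
# namely a % m for any a in the class.
assert {a % m for a in range(LO, HI)} == set(range(m))
assert all(len(c & set(range(m))) == 1 for c in classes)
```

The "equal or disjoint" assertion is exactly the dichotomy proved in the next paragraph, specialized to this relation.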
Conversely, for a set X {\displaystyle X} , every partition comes from an equivalence relation in this way, and different relations give different partitions. Thus x ∼ y {\displaystyle x\sim y} if and only if x {\displaystyle x} and y {\displaystyle y} belong to the same set of the partition. It follows from the properties in the previous section that if ∼ {\displaystyle \,\sim \,} is an equivalence relation on a set X , {\displaystyle X,} and x {\displaystyle x} and y {\displaystyle y} are two elements of X , {\displaystyle X,} the following statements are equivalent: x ∼ y {\displaystyle x\sim y} [ x ] = [ y ] {\displaystyle [x]=[y]} [ x ] ∩ [ y ] ≠ ∅ . {\displaystyle [x]\cap [y]\neq \emptyset .} === Proof === Proof of " x ∼ y {\displaystyle x\sim y} if and only if [ x ] = [ y ] {\displaystyle [x]=[y]} ". Proof of "If x ∼ y {\displaystyle x\sim y} then [ x ] = [ y ] {\displaystyle [x]=[y]} ". For c ∈ [ x ] {\displaystyle c\in [x]} , x ∼ c {\displaystyle x\sim c} . By symmetry y ∼ x {\displaystyle y\sim x} from x ∼ y {\displaystyle x\sim y} , and by transitivity y ∼ c {\displaystyle y\sim c} , that is, c ∈ [ y ] {\displaystyle c\in [y]} . Thus, [ x ] ⊆ [ y ] {\displaystyle [x]\subseteq [y]} . For c ′ ∈ [ y ] {\displaystyle c'\in [y]} , y ∼ c ′ {\displaystyle y\sim c'} . By transitivity x ∼ c ′ {\displaystyle x\sim c'} , that is, c ′ ∈ [ x ] {\displaystyle c'\in [x]} . Thus, [ y ] ⊆ [ x ] {\displaystyle [y]\subseteq [x]} . Thus [ x ] = [ y ] {\displaystyle [x]=[y]} . Proof of "If [ x ] = [ y ] {\displaystyle [x]=[y]} then x ∼ y {\displaystyle x\sim y} ". For c ∈ [ x ] {\displaystyle c\in [x]} , x ∼ c {\displaystyle x\sim c} , and y ∼ c {\displaystyle y\sim c} by [ x ] = [ y ] {\displaystyle [x]=[y]} . By symmetry and transitivity, x ∼ y {\displaystyle x\sim y} . Proof of "If [ x ] ∩ [ y ] ≠ ∅ {\displaystyle [x]\cap [y]\neq \emptyset } then [ x ] = [ y ] {\displaystyle [x]=[y]} ".
If [ x ] ∩ [ y ] ≠ ∅ {\displaystyle [x]\cap [y]\neq \emptyset } , then there is c {\displaystyle c} such that x ∼ c {\displaystyle x\sim c} and y ∼ c {\displaystyle y\sim c} . By symmetry and transitivity x ∼ y {\displaystyle x\sim y} , and by the above theorem, [ x ] = [ y ] {\displaystyle [x]=[y]} . == Examples == Let X {\displaystyle X} be the set of all rectangles in a plane, and ∼ {\displaystyle \,\sim \,} the equivalence relation "has the same area as", then for each positive real number A , {\displaystyle A,} there will be an equivalence class of all the rectangles that have area A . {\displaystyle A.} Consider the modulo 2 equivalence relation on the set of integers, Z , {\displaystyle \mathbb {Z} ,} such that x ∼ y {\displaystyle x\sim y} if and only if their difference x − y {\displaystyle x-y} is an even number. This relation gives rise to exactly two equivalence classes: one class consists of all even numbers, and the other class consists of all odd numbers. Using square brackets around one member of the class to denote an equivalence class under this relation, [ 7 ] , [ 9 ] , {\displaystyle [7],[9],} and [ 1 ] {\displaystyle [1]} all represent the same element of Z / ∼ . {\displaystyle \mathbb {Z} /{\sim }.} Let X {\displaystyle X} be the set of ordered pairs of integers ( a , b ) {\displaystyle (a,b)} with non-zero b , {\displaystyle b,} and define an equivalence relation ∼ {\displaystyle \,\sim \,} on X {\displaystyle X} such that ( a , b ) ∼ ( c , d ) {\displaystyle (a,b)\sim (c,d)} if and only if a d = b c , {\displaystyle ad=bc,} then the equivalence class of the pair ( a , b ) {\displaystyle (a,b)} can be identified with the rational number a / b , {\displaystyle a/b,} and this equivalence relation and its equivalence classes can be used to give a formal definition of the set of rational numbers. The same construction can be generalized to the field of fractions of any integral domain. 
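The pair construction of the rational numbers above can be made concrete in a few lines (illustrative code; the helper names are ours). Reducing to lowest terms with a positive second entry picks a canonical representative of each class, which is exactly the normalization performed by Python's `fractions.Fraction`:

```python
from math import gcd

def related(p, q):
    """(a, b) ~ (c, d) if and only if a*d == b*c (second entries non-zero)."""
    (a, b), (c, d) = p, q
    return a * d == b * c

def canonical(p):
    """Lowest-terms representative of the class of p, with positive denominator."""
    a, b = p
    g = gcd(a, b) * (-1 if b < 0 else 1)
    return (a // g, b // g)

assert related((1, 2), (3, 6)) and related((2, -4), (-1, 2))
assert canonical((3, 6)) == canonical((-2, -4)) == (1, 2)
```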
If X {\displaystyle X} consists of all the lines in, say, the Euclidean plane, and L ∼ M {\displaystyle L\sim M} means that L {\displaystyle L} and M {\displaystyle M} are parallel lines, then the set of lines that are parallel to each other form an equivalence class, as long as a line is considered parallel to itself. In this situation, each equivalence class determines a point at infinity. == Graphical representation == An undirected graph may be associated to any symmetric relation on a set X , {\displaystyle X,} where the vertices are the elements of X , {\displaystyle X,} and two vertices s {\displaystyle s} and t {\displaystyle t} are joined if and only if s ∼ t . {\displaystyle s\sim t.} Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques. == Invariants == If ∼ {\displaystyle \,\sim \,} is an equivalence relation on X , {\displaystyle X,} and P ( x ) {\displaystyle P(x)} is a property of elements of X {\displaystyle X} such that whenever x ∼ y , {\displaystyle x\sim y,} P ( x ) {\displaystyle P(x)} is true if P ( y ) {\displaystyle P(y)} is true, then the property P {\displaystyle P} is said to be an invariant of ∼ , {\displaystyle \,\sim \,,} or well-defined under the relation ∼ . {\displaystyle \,\sim .} A frequent particular case occurs when f {\displaystyle f} is a function from X {\displaystyle X} to another set Y {\displaystyle Y} ; if f ( x 1 ) = f ( x 2 ) {\displaystyle f\left(x_{1}\right)=f\left(x_{2}\right)} whenever x 1 ∼ x 2 , {\displaystyle x_{1}\sim x_{2},} then f {\displaystyle f} is said to be class invariant under ∼ , {\displaystyle \,\sim \,,} or simply invariant under ∼ . {\displaystyle \,\sim .} This occurs, for example, in the character theory of finite groups. 
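Whether a given function is class invariant can be checked mechanically on finite examples (an illustrative sketch; the names are ours):

```python
def is_invariant(f, elements, related):
    """True if f is class invariant: equivalent elements get equal values."""
    return all(f(a) == f(b)
               for a in elements for b in elements if related(a, b))

mod5 = lambda a, b: (a - b) % 5 == 0

assert is_invariant(lambda x: x % 5, range(40), mod5)          # invariant
assert is_invariant(lambda x: (x * x) % 5, range(40), mod5)    # so is x^2 mod 5
assert not is_invariant(lambda x: x // 5, range(40), mod5)     # not invariant
```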
Some authors use "compatible with ∼ {\displaystyle \,\sim \,} " or just "respects ∼ {\displaystyle \,\sim \,} " instead of "invariant under ∼ {\displaystyle \,\sim \,} ". Any function f : X → Y {\displaystyle f:X\to Y} is class invariant under ∼ , {\displaystyle \,\sim \,,} according to which x 1 ∼ x 2 {\displaystyle x_{1}\sim x_{2}} if and only if f ( x 1 ) = f ( x 2 ) . {\displaystyle f\left(x_{1}\right)=f\left(x_{2}\right).} The equivalence class of x {\displaystyle x} is the set of all elements in X {\displaystyle X} which get mapped to f ( x ) , {\displaystyle f(x),} that is, the class [ x ] {\displaystyle [x]} is the inverse image of f ( x ) . {\displaystyle f(x).} This equivalence relation is known as the kernel of f . {\displaystyle f.} More generally, a function may map equivalent arguments (under an equivalence relation ∼ X {\displaystyle \sim _{X}} on X {\displaystyle X} ) to equivalent values (under an equivalence relation ∼ Y {\displaystyle \sim _{Y}} on Y {\displaystyle Y} ). Such a function is a morphism of sets equipped with an equivalence relation. == Quotient space in topology == In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes. In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action. 
The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation. A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously. Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set X , {\displaystyle X,} either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on X , {\displaystyle X,} or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above. 
== See also == Equivalence partitioning, a method for devising test sets in software testing based on dividing the possible program inputs into equivalence classes according to the behavior of the program on those inputs Homogeneous space, the quotient space of Lie groups Partial equivalence relation – Mathematical concept for comparing objects Quotient by an equivalence relation – Generalization of equivalence classes to scheme theory Setoid – Mathematical construction of a set with an equivalence relation Transversal (combinatorics) – Set that intersects every one of a family of sets == Notes == == References == Avelsgaard, Carol (1989), Foundations for Advanced Mathematics, Scott Foresman, ISBN 0-673-38152-8 Devlin, Keith (2004), Sets, Functions, and Logic: An Introduction to Abstract Mathematics (3rd ed.), Chapman & Hall/ CRC Press, ISBN 978-1-58488-449-1 Maddox, Randall B. (2002), Mathematical Thinking and Writing, Harcourt/ Academic Press, ISBN 0-12-464976-9 Stein, Elias M.; Shakarchi, Rami (2012). Functional Analysis: Introduction to Further Topics in Analysis. Princeton University Press. doi:10.1515/9781400840557. ISBN 978-1-4008-4055-7. Wolf, Robert S. (1998), Proof, Logic and Conjecture: A Mathematician's Toolbox, Freeman, ISBN 978-0-7167-3050-7 == Further reading == Sundstrom (2003), Mathematical Reasoning: Writing and Proof, Prentice-Hall Smith; Eggen; St.Andre (2006), A Transition to Advanced Mathematics (6th ed.), Thomson (Brooks/Cole) Schumacher, Carol (1996), Chapter Zero: Fundamental Notions of Abstract Mathematics, Addison-Wesley, ISBN 0-201-82653-4 O'Leary (2003), The Structure of Proof: With Logic and Set Theory, Prentice-Hall Lay (2001), Analysis with an introduction to proof, Prentice Hall Morash, Ronald P. 
(1987), Bridge to Abstract Mathematics, Random House, ISBN 0-394-35429-X Gilbert; Vanstone (2005), An Introduction to Mathematical Thinking, Pearson Prentice-Hall Fletcher; Patty, Foundations of Higher Mathematics, PWS-Kent Iglewicz; Stoyle, An Introduction to Mathematical Reasoning, MacMillan D'Angelo; West (2000), Mathematical Thinking: Problem Solving and Proofs, Prentice Hall Cupillari, The Nuts and Bolts of Proofs, Wadsworth Bond, Introduction to Abstract Mathematics, Brooks/Cole Barnier; Feldman (2000), Introduction to Advanced Mathematics, Prentice Hall Ash, A Primer of Abstract Mathematics, MAA == External links == Media related to Equivalence classes at Wikimedia Commons
|
Wikipedia:Erasmus Fröhlich#0
|
Erasmus Fröhlich (2 October 1700 – 7 July 1758) was an Austrian Jesuit mathematics teacher and numismatist. He also took an interest in history and astronomy. As a teacher at the Theresianum, he influenced a number of studies in the region in history, mathematics, and astronomy. He also served as the librarian at the Theresianum. == Life and work == Fröhlich was born in Graz and joined the Jesuit order at sixteen. After studies in Graz and Vienna he taught mathematics at Klagenfurt and Vienna. His mathematics students included Karl Scherffer. He also began to collect coins and study them, an interest inspired by Father Christian Edschlager, who had worked in Turkey and Greece, and by Father Karl Granelli. He wrote on coinage between 1733 and 1737 in Quatuor tentamina in re nummaria vetere and related works. In 1744 he wrote on coinage from the time of Seleucus I Nicator in Annales compendiarii regum et rerum Syriae (1744). This work included some historic and theological views which were disputed by the Leipzig scholar Gottlieb Wernsdorf. His work attracted the attention of Empress Maria Theresia, who appointed him teacher of history, archaeology and Greek at the Theresianum. His students included Count Coronini, who wrote on the history of Gorizia and Istria, and Georg Pray, who wrote on Hungary. He collaborated with Maximilianus Hell (1720–1792) on astronomy and with Louis Bertrand Castel on optics. Other correspondents included Joseph Khell, who succeeded him at the Theresianum, and the astronomer Christian Rieger (1714–1780). == References == == External links == Notitia. Elementaris. Nvmismatvm. Antiqvorvm. Illorvm. Qvae. Vrbium. Liberarvm. Regvm. Et. Principvm. Ac. Personarvm. Illvstrivm. Appellantvr. (c. 1756)
|
Wikipedia:Erdal Arıkan#0
|
Erdal Arıkan (born 1958) is a Turkish professor in the Electrical and Electronics Engineering Department at Bilkent University, Ankara, Turkey. He is known for his invention of polar codes, which is a key component of 5G technologies. == Early life and education == The son of a doctor and a homemaker, Erdal Arikan was born in 1958 and grew up in Turkey. He attended the Middle East Technical University to study electrical engineering, but transferred to the California Institute of Technology in the middle of his freshman year as a result of political violence in Turkey. He started graduate studies at the Massachusetts Institute of Technology in 1981, where he was advised by Robert G. Gallager, and obtained his PhD in 1986. == Career == === Academic background === Arıkan briefly served as a tenure-track assistant professor at the University of Illinois at Urbana-Champaign. He joined Bilkent University as a faculty member in 1987. Arıkan developed polar codes, a system of coding that provides a mathematical basis for the solution of Shannon's channel capacity problem. He presented a three-session lecture on polar codes at the Simons Institute's Information Theory Boot Camp at the University of California, Berkeley. The lecture is also featured on the Simons Institute webpage, which includes the slides used by Arıkan in his presentation. Arıkan is an IEEE Fellow (Class of 2012), and was an IEEE Distinguished Lecturer for 2014–2015. === Awards === Arıkan received the IEEE Information Theory Society Paper Award in 2010, and the Sedat Simavi Science Award for the construction of new channel coding schemes. Arıkan became the recipient of the Kadir Has Achievement Award in 2011 for the same accomplishment. He was named an IEEE Fellow in 2012. Arıkan received the IEEE Richard W. Hamming Award in 2018 "for contributions to information and communications theory, especially the discovery of polar codes and polarization techniques." 
The same year, it was announced that he would be honored with the 2019 Claude E. Shannon Award. The Chinese telecommunications company Huawei recognized his "outstanding contribution to the development of communications technology in 2018." === Students === The list of Ph.D. dissertations completed under the supervision of Erdal Arıkan: Akar, Nail – Performance analysis of an asynchronous transfer mode multiplexer with Markov modulated inputs (1993) Abdelati, Mohamed – A framework for handling connectionless services in ATM networks (1997) Tan, A Serdar – Error resilient stereoscopic video streaming using model-based fountain codes (2009) Önay, Saygun – Polar codes for distributed source coding (2014) Dizdar, Onur – High throughput decoding methods and architectures for polar codes with high energy-efficiency and low latency (2017-11) Moradi, Mohsen – Performance and computational analysis of polarization-adjusted convolutional (PAC) codes (2022-06) Hokmabadi, Amir Mozammel – Hardware implementation of Fano Decoder for polarization-adjusted convolutional (PAC) codes (2022-06) == References == === Sources === Gibson, Jerry D., ed. (2017). Mobile Communications Handbook (3 ed.). CRC Press. ISBN 978-1-4398-1724-7.
|
Wikipedia:Erdős–Graham problem#0
|
In combinatorial number theory, the Erdős–Graham problem is the problem of proving that, if the set { 2 , 3 , 4 , … } {\displaystyle \{2,3,4,\dots \}} of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of unity. That is, for every r > 0 {\displaystyle r>0} , and every r {\displaystyle r} -coloring of the integers greater than one, there is a finite monochromatic subset S {\displaystyle S} of these integers such that ∑ n ∈ S 1 n = 1. {\displaystyle \sum _{n\in S}{\frac {1}{n}}=1.} In more detail, Paul Erdős and Ronald Graham conjectured that, for sufficiently large r {\displaystyle r} , the largest member of S {\displaystyle S} could be bounded by b r {\displaystyle b^{r}} for some constant b {\displaystyle b} independent of r {\displaystyle r} . It was known that, for this to be true, b {\displaystyle b} must be at least e {\displaystyle e} , the base of the natural logarithm. Ernie Croot proved the conjecture as part of his Ph.D. thesis, and later (while a post-doctoral researcher at UC Berkeley) published the proof in the Annals of Mathematics. The value Croot gives for b {\displaystyle b} is very large: it is at most e 167000 {\displaystyle e^{167000}} . Croot's result follows as a corollary of a more general theorem stating the existence of Egyptian fraction representations of unity for sets C {\displaystyle C} of smooth numbers in intervals of the form [ X , X 1 + δ ] {\displaystyle [X,X^{1+\delta }]} , where C {\displaystyle C} contains sufficiently many numbers so that the sum of their reciprocals is at least six. The Erdős–Graham conjecture follows from this result by showing that one can find an interval of this form in which the sum of the reciprocals of all smooth numbers is at least 6 r {\displaystyle 6r} ; therefore, if the integers are r {\displaystyle r} -colored there must be a monochromatic subset C {\displaystyle C} satisfying the conditions of Croot's theorem. 
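The conclusion of the theorem, a finite set of integers greater than one whose reciprocals sum to exactly 1, is easy to verify for particular sets with exact rational arithmetic (an illustrative sketch):

```python
from fractions import Fraction

def reciprocal_sum(S):
    """Exact sum of 1/n over the finite set S."""
    return sum(Fraction(1, n) for n in S)

# two Egyptian-fraction representations of unity
assert reciprocal_sum({2, 4, 6, 12}) == 1
assert reciprocal_sum({2, 3, 10, 15}) == 1
# a set whose reciprocals fall short of 1
assert reciprocal_sum({3, 5, 7}) < 1
```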
A stronger form of the result, that any set of integers with positive upper density includes the denominators of an Egyptian fraction representation of one, was announced in 2021 by Thomas Bloom, a postdoctoral researcher at the University of Oxford. == See also == Conjectures by Erdős == References == == External links == Ernie Croot's Webpage
|
Wikipedia:Erdős–Straus conjecture#0
|
The Erdős–Straus conjecture is an unproven statement in number theory. The conjecture is that, for every integer n {\displaystyle n} that is greater than or equal to 2, there exist positive integers x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} for which 4 n = 1 x + 1 y + 1 z . {\displaystyle {\frac {4}{n}}={\frac {1}{x}}+{\frac {1}{y}}+{\frac {1}{z}}.} In other words, the number 4 / n {\displaystyle 4/n} can be written as a sum of three positive unit fractions. The conjecture is named after Paul Erdős and Ernst G. Straus, who formulated it in 1948, but it is connected to much more ancient mathematics; sums of unit fractions, like the one in this problem, are known as Egyptian fractions, because of their use in ancient Egyptian mathematics. The Erdős–Straus conjecture is one of many conjectures by Erdős, and one of many unsolved problems in mathematics concerning Diophantine equations. Although a solution is not known for all values of n, infinitely many values in certain infinite arithmetic progressions have simple formulas for their solution, and skipping these known values can speed up searches for counterexamples. Additionally, these searches need only consider values of n {\displaystyle n} that are prime numbers, because any composite counterexample would have a smaller counterexample among its prime factors. Computer searches have verified the truth of the conjecture up to n ≤ 10 17 {\displaystyle n\leq 10^{17}} . If the conjecture is reframed to allow negative unit fractions, then it is known to be true. Generalizations of the conjecture to fractions with numerator 5 or larger have also been studied. == Background and history == When a rational number is expanded into a sum of unit fractions, the expansion is called an Egyptian fraction. 
This way of writing fractions dates to the mathematics of ancient Egypt, in which fractions were written this way instead of in the more modern vulgar fraction form a b {\displaystyle {\tfrac {a}{b}}} with a numerator a {\displaystyle a} and denominator b {\displaystyle b} . The Egyptians produced tables of Egyptian fractions for unit fractions multiplied by two, the numbers that in modern notation would be written 2 n {\displaystyle {\tfrac {2}{n}}} , such as the Rhind Mathematical Papyrus table; in these tables, most of these expansions use either two or three terms. These tables were needed, because the obvious expansion 2 n = 1 n + 1 n {\displaystyle {\tfrac {2}{n}}={\tfrac {1}{n}}+{\tfrac {1}{n}}} was not allowed: the Egyptians required all of the fractions in an Egyptian fraction to be different from each other. This same requirement, that all fractions be different, is sometimes imposed in the Erdős–Straus conjecture, but it makes no significant difference to the problem, because for n > 2 {\displaystyle n>2} any solution to 4 n = 1 x + 1 y + 1 z {\displaystyle {\tfrac {4}{n}}={\tfrac {1}{x}}+{\tfrac {1}{y}}+{\tfrac {1}{z}}} where the unit fractions are not distinct can be converted into a solution where they are all distinct; see below. Although the Egyptians did not always find expansions using as few terms as possible, later mathematicians have been interested in the question of how few terms are needed. Every fraction a b {\displaystyle {\tfrac {a}{b}}} has an expansion of at most a {\displaystyle a} terms, so in particular 2 n {\displaystyle {\tfrac {2}{n}}} needs at most two terms, 3 n {\displaystyle {\tfrac {3}{n}}} needs at most three terms, and 4 n {\displaystyle {\tfrac {4}{n}}} needs at most four terms. For 2 n {\displaystyle {\tfrac {2}{n}}} , two terms are always needed, and for 3 n {\displaystyle {\tfrac {3}{n}}} , three terms are sometimes needed, so for both of these numerators, the maximum number of terms that might be needed is known. 
However, for 4 n {\displaystyle {\tfrac {4}{n}}} , it is unknown whether four terms are sometimes needed, or whether it is possible to express all fractions of the form 4 n {\displaystyle {\tfrac {4}{n}}} using only three unit fractions; this is the Erdős–Straus conjecture. Thus, the conjecture covers the first unknown case of a more general question, the problem of finding for all a {\displaystyle a} the maximum number of terms needed in expansions for fractions a b {\displaystyle {\tfrac {a}{b}}} . One way to find short (but not always shortest) expansions uses the greedy algorithm for Egyptian fractions, first described in 1202 by Fibonacci in his book Liber Abaci. This method chooses one unit fraction at a time, at each step choosing the largest possible unit fraction that would not cause the expanded sum to exceed the target number. After each step, the numerator of the fraction that still remains to be expanded decreases, so the total number of steps can never exceed the starting numerator, but sometimes it is smaller. For example, when it is applied to 3 n {\displaystyle {\tfrac {3}{n}}} , the greedy algorithm will use two terms whenever n {\displaystyle n} is 2 modulo 3, but there exists a two-term expansion whenever n {\displaystyle n} has a factor that is 2 modulo 3, a weaker condition. For numbers of the form 4 n {\displaystyle {\tfrac {4}{n}}} , the greedy algorithm may produce a four-term expansion when n {\displaystyle n} is 1 modulo 4, and produces an expansion with fewer terms otherwise. Thus, another way of rephrasing the Erdős–Straus conjecture asks whether there exists another method for producing Egyptian fractions, using a smaller maximum number of terms for the numbers 4 n {\displaystyle {\tfrac {4}{n}}} . The Erdős–Straus conjecture was formulated in 1948 by Paul Erdős and Ernst G. Straus, and published by Erdős (1950). 
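The greedy method just described takes only a few lines with exact rational arithmetic (an illustrative sketch; the function name is ours):

```python
from fractions import Fraction

def greedy_egyptian(frac):
    """Fibonacci's greedy method: repeatedly subtract the largest unit
    fraction not exceeding the remainder. The remainder's numerator
    strictly decreases at each step, so the loop terminates."""
    denominators = []
    while frac > 0:
        x = -(-frac.denominator // frac.numerator)   # ceiling division
        denominators.append(x)
        frac -= Fraction(1, x)
    return denominators
```

For example, `greedy_egyptian(Fraction(4, 5))` returns `[2, 4, 20]`, a three-term expansion, while `greedy_egyptian(Fraction(4, 17))` uses four terms.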
Richard Obláth also published an early work on the conjecture, a paper written in 1948 and published in 1950, in which he extended earlier calculations of Straus and Harold N. Shapiro in order to verify the conjecture for all n ≤ 10 5 {\displaystyle n\leq 10^{5}} . == Formulation == The conjecture states that, for every integer n ≥ 2 {\displaystyle n\geq 2} , there exist positive integers x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} such that 4 n = 1 x + 1 y + 1 z . {\displaystyle {\frac {4}{n}}={\frac {1}{x}}+{\frac {1}{y}}+{\frac {1}{z}}.} For instance, for n = 5 {\displaystyle n=5} , there are two solutions: 4 5 = 1 2 + 1 4 + 1 20 = 1 2 + 1 5 + 1 10 . {\displaystyle {\frac {4}{5}}={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{20}}={\frac {1}{2}}+{\frac {1}{5}}+{\frac {1}{10}}.} Multiplying both sides of the equation 4 n = 1 x + 1 y + 1 z {\displaystyle {\tfrac {4}{n}}={\tfrac {1}{x}}+{\tfrac {1}{y}}+{\tfrac {1}{z}}} by n x y z {\displaystyle nxyz} leads to an equivalent polynomial form 4 x y z = n ( x y + x z + y z ) {\displaystyle 4xyz=n(xy+xz+yz)} for the problem. === Distinct unit fractions === Some researchers additionally require that the integers x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} be distinct from each other, as the Egyptians would have, while others allow them to be equal. For n ≥ 3 {\displaystyle n\geq 3} , it does not matter whether they are required to be distinct: if there exists a solution with any three integers, then there exists a solution with distinct integers. 
This is because two identical unit fractions can be replaced through one of the following two expansions: 1 2 r + 1 2 r ⇒ 1 r + 1 + 1 r ( r + 1 ) 1 2 r + 1 + 1 2 r + 1 ⇒ 1 r + 1 + 1 ( r + 1 ) ( 2 r + 1 ) {\displaystyle {\begin{aligned}{\frac {1}{2r}}+{\frac {1}{2r}}&\Rightarrow {\frac {1}{r+1}}+{\frac {1}{r(r+1)}}\\{\frac {1}{2r+1}}+{\frac {1}{2r+1}}&\Rightarrow {\frac {1}{r+1}}+{\frac {1}{(r+1)(2r+1)}}\\\end{aligned}}} (according to whether the repeated fraction has an even or odd denominator) and this replacement can be repeated until no duplicate fractions remain. For n = 2 {\displaystyle n=2} , however, the only solutions are permutations of 4 2 = 1 2 + 1 2 + 1 1 {\displaystyle {\tfrac {4}{2}}={\tfrac {1}{2}}+{\tfrac {1}{2}}+{\tfrac {1}{1}}} . === Negative-number solutions === The Erdős–Straus conjecture requires that all three of x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} be positive. This requirement is essential to the difficulty of the problem. Even with this restriction in place, the conjecture is difficult only for odd values of n {\displaystyle n} , and if negative values were allowed then the problem could be solved for every odd n {\displaystyle n} by the following formula: 4 n = 1 ( n − 1 ) / 2 + 1 ( n + 1 ) / 2 − 1 n ( n − 1 ) ( n + 1 ) / 4 . {\displaystyle {\frac {4}{n}}={\frac {1}{(n-1)/2}}+{\frac {1}{(n+1)/2}}-{\frac {1}{n(n-1)(n+1)/4}}.} == Computational results == If the conjecture is false, it could be proven false simply by finding a number 4 n {\displaystyle {\tfrac {4}{n}}} that has no three-term representation. In order to check this, various authors have performed brute-force searches for counterexamples to the conjecture. Searches of this type have confirmed that the conjecture is true for all n {\displaystyle n} up to 10 17 {\displaystyle 10^{17}} . In such searches, it is only necessary to look for expansions for numbers 4 n {\displaystyle {\tfrac {4}{n}}} where n {\displaystyle n} is a prime number. 
This is because, whenever 4 n {\displaystyle {\tfrac {4}{n}}} has a three-term expansion, so does 4 m n {\displaystyle {\tfrac {4}{mn}}} for all positive integers m {\displaystyle m} . To find a solution for 4 m n {\displaystyle {\tfrac {4}{mn}}} , just divide all of the unit fractions in the solution for 4 n {\displaystyle {\tfrac {4}{n}}} by m {\displaystyle m} : 4 n = 1 x + 1 y + 1 z ⇒ 4 m n = 1 m x + 1 m y + 1 m z . {\displaystyle {\frac {4}{n}}={\frac {1}{x}}+{\frac {1}{y}}+{\frac {1}{z}}\ \Rightarrow \ {\frac {4}{mn}}={\frac {1}{mx}}+{\frac {1}{my}}+{\frac {1}{mz}}.} If 4 n {\displaystyle {\tfrac {4}{n}}} were a counterexample to the conjecture, for a composite number n {\displaystyle n} , every prime factor p {\displaystyle p} of n {\displaystyle n} would also provide a counterexample 4 p {\displaystyle {\tfrac {4}{p}}} that would have been found earlier by the brute-force search. Therefore, checking the existence of a solution for composite numbers is redundant, and can be skipped by the search. Additionally, the known modular identities for the conjecture (see below) can speed these searches by skipping over other values known to have a solution. For instance, the greedy algorithm finds an expansion with three or fewer terms for every number 4 n {\displaystyle {\tfrac {4}{n}}} where n {\displaystyle n} is not 1 modulo 4, so the searches only need to test values that are 1 modulo 4. One way to make progress on this problem is to collect more modular identities, allowing computer searches to reach higher limits with fewer tests. The number of distinct solutions to the 4 n {\displaystyle {\tfrac {4}{n}}} problem, as a function of n {\displaystyle n} , has also been found by computer searches for small n {\displaystyle n} and appears to grow somewhat irregularly with n {\displaystyle n} . 
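Searches and solution counts like these can be reproduced, for small n, by an exhaustive search with exact rational arithmetic; the bounds on x and y below follow from each being the largest fraction still to be chosen. This is an illustrative sketch, far slower than the optimized searches described above, with a helper demonstrating the scaling step that reduces the search to prime n:

```python
from fractions import Fraction

def erdos_straus_solutions(n):
    """All triples x <= y <= z of positive integers with 1/x + 1/y + 1/z = 4/n."""
    target = Fraction(4, n)
    solutions = []
    # 1/x is the largest of the three fractions: target/3 <= 1/x < target
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        r1 = target - Fraction(1, x)
        # 1/y is the larger remaining fraction: r1/2 <= 1/y < r1
        y_lo = max(x, r1.denominator // r1.numerator + 1)
        y_hi = 2 * r1.denominator // r1.numerator
        for y in range(y_lo, y_hi + 1):
            r2 = r1 - Fraction(1, y)           # must be a unit fraction 1/z
            if r2.numerator == 1 and r2.denominator >= y:
                solutions.append((x, y, r2.denominator))
    return solutions

def scale(solution, m):
    """Turn a solution for 4/n into one for 4/(m*n) by multiplying denominators."""
    return tuple(m * d for d in solution)

assert erdos_straus_solutions(5) == [(2, 4, 20), (2, 5, 10)]
assert sum(Fraction(1, d) for d in scale((2, 4, 20), 3)) == Fraction(4, 15)
```

For n = 5 the search recovers exactly the two solutions given in the Formulation section.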
Starting with n = 3 {\displaystyle n=3} , the numbers of distinct solutions with distinct denominators form an irregular sequence. Even for larger n {\displaystyle n} there can sometimes be relatively few solutions; for instance there are only seven distinct solutions for n = 73 {\displaystyle n=73} . == Theoretical results == In the form 4 x y z = n ( x y + x z + y z ) {\displaystyle 4xyz=n(xy+xz+yz)} , a polynomial equation with integer variables, the Erdős–Straus conjecture is an example of a Diophantine equation. The Hasse principle for Diophantine equations suggests that these equations should be studied using modular arithmetic. If a polynomial equation has a solution in the integers, then taking this solution modulo q {\displaystyle q} , for any integer q {\displaystyle q} , provides a solution in modulo- q {\displaystyle q} arithmetic. In the other direction, if an equation has a solution modulo q {\displaystyle q} for every prime power q {\displaystyle q} , then in some cases it is possible to piece together these modular solutions, using methods related to the Chinese remainder theorem, to get a solution in the integers. The power of the Hasse principle to solve some problems is limited by the Manin obstruction, but for the Erdős–Straus conjecture this obstruction does not exist. On the face of it this principle makes little sense for the Erdős–Straus conjecture. For every n {\displaystyle n} , the equation 4 x y z = n ( x y + x z + y z ) {\displaystyle 4xyz=n(xy+xz+yz)} is easily solvable modulo any prime, or prime power, but there appears to be no way to piece those solutions together to get a positive integer solution to the equation. Nevertheless, modular arithmetic, and identities based on modular arithmetic, have proven a very important tool in the study of the conjecture. 
=== Modular identities === For values of n {\displaystyle n} satisfying certain congruence relations, one can find an expansion for 4 n {\displaystyle {\tfrac {4}{n}}} automatically as an instance of a polynomial identity. For instance, whenever n {\displaystyle n} is 2 modulo 3, 4 n {\displaystyle {\tfrac {4}{n}}} has the expansion 4 n = 1 n + 1 ( n + 1 ) / 3 + 1 n ( n + 1 ) / 3 . {\displaystyle {\frac {4}{n}}={\frac {1}{n}}+{\frac {1}{(n+1)/3}}+{\frac {1}{n(n+1)/3}}.} Here each of the three denominators n {\displaystyle n} , ( n + 1 ) / 3 {\displaystyle (n+1)/3} , and n ( n + 1 ) / 3 {\displaystyle n(n+1)/3} is a polynomial of n {\displaystyle n} , and each is an integer whenever n {\displaystyle n} is 2 modulo 3. The greedy algorithm for Egyptian fractions finds a solution in three or fewer terms whenever n {\displaystyle n} is not 1 or 17 mod 24, and the 17 mod 24 case is covered by the 2 mod 3 relation, so the only values of n {\displaystyle n} for which these two methods do not find expansions in three or fewer terms are those congruent to 1 mod 24. Polynomial identities listed by Mordell (1967) provide three-term Egyptian fractions for 4 n {\displaystyle {\tfrac {4}{n}}} whenever n {\displaystyle n} is one of: 2 mod 3 (above), 3 mod 4, 2 or 3 mod 5, 3, 5, or 6 mod 7, or 5 mod 8. Combinations of Mordell's identities can be used to expand 4 n {\displaystyle {\tfrac {4}{n}}} for all n {\displaystyle n} except possibly those that are 1, 121, 169, 289, 361, or 529 mod 840. The smallest prime that these identities do not cover is 1009. By combining larger classes of modular identities, Webb and others showed that the natural density of potential counterexamples to the conjecture is zero: as a parameter N {\displaystyle N} goes to infinity, the fraction of values in the interval [ 1 , N ] {\displaystyle [1,N]} that could be counterexamples tends to zero in the limit. 
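Identities such as the 2-mod-3 expansion above can be verified mechanically over a range of values (an illustrative sketch; the function name is ours):

```python
from fractions import Fraction

def mordell_2mod3(n):
    """Denominators of the three-term expansion of 4/n for n = 2 (mod 3)."""
    assert n % 3 == 2
    return n, (n + 1) // 3, n * (n + 1) // 3

# the identity holds exactly for every n = 2 (mod 3) in a sample range
for n in range(2, 500, 3):
    a, b, c = mordell_2mod3(n)
    assert Fraction(1, a) + Fraction(1, b) + Fraction(1, c) == Fraction(4, n)
```

An analogous loop can check each of the other congruence classes covered by Mordell's identities.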
=== Nonexistence of identities === If it were possible to find solutions such as the ones above for enough different moduli, forming a complete covering system of congruences, the problem would be solved. However, as Mordell (1967) showed, a polynomial identity that provides a solution for values of n {\displaystyle n} congruent to r {\displaystyle r} mod p {\displaystyle p} can exist only when r {\displaystyle r} is not congruent to a square modulo p {\displaystyle p} . (More formally, this kind of identity can exist only when r {\displaystyle r} is not a quadratic residue modulo p {\displaystyle p} .) For instance, 2 is a non-square mod 3, so Mordell's result allows the existence of an identity for n {\displaystyle n} congruent to 2 mod 3. However, 1 is a square mod 3 (equal to the square of both 1 and 2 mod 3), so there can be no similar identity for all values of n {\displaystyle n} that are congruent to 1 mod 3. More generally, as 1 is a square mod n {\displaystyle n} for all n > 1 {\displaystyle n>1} , there can be no complete covering system of modular identities for all n {\displaystyle n} , because 1 will always be uncovered. Despite Mordell's result limiting the form of modular identities for this problem, there is still some hope of using modular identities to prove the Erdős–Straus conjecture. No prime number can be a square, so by the Hasse–Minkowski theorem, whenever p {\displaystyle p} is prime, there exists a larger prime q {\displaystyle q} such that p {\displaystyle p} is not a quadratic residue modulo q {\displaystyle q} . One possible approach to proving the conjecture would be to find for each prime p {\displaystyle p} a larger prime q {\displaystyle q} and a congruence solving the 4 n {\displaystyle {\tfrac {4}{n}}} problem for n {\displaystyle n} congruent to p {\displaystyle p} mod q {\displaystyle q} . If this could be done, no prime p {\displaystyle p} could be a counterexample to the conjecture and the conjecture would be true. 
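The existence of the larger prime q can be illustrated numerically with Euler's criterion: for an odd prime q not dividing p, p is a quadratic non-residue modulo q exactly when p^((q−1)/2) ≡ −1 (mod q). A minimal sketch (function names assumed; trial-division primality for brevity):

```python
def is_prime(m):
    """Trial-division primality test (adequate for this illustration)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def nonresidue_witness(p):
    """Smallest prime q > p such that p is a quadratic non-residue mod q,
    detected via Euler's criterion: p^((q-1)/2) ≡ -1 (mod q)."""
    q = p + 1
    while True:
        if is_prime(q) and pow(p, (q - 1) // 2, q) == q - 1:
            return q
        q += 1
```

For example, 2 is a non-residue mod 3, and 3 is a non-residue mod 5, so `nonresidue_witness(2)` is 3 and `nonresidue_witness(3)` is 5.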
=== The number of solutions === Elsholtz & Tao (2013) showed that the average number of solutions to the 4 n {\displaystyle {\tfrac {4}{n}}} problem (averaged over the prime numbers up to n {\displaystyle n} ) is upper bounded polylogarithmically in n {\displaystyle n} . For some other Diophantine problems, the existence of a solution can be demonstrated through asymptotic lower bounds on the number of solutions, but this works best when the number of solutions grows at least polynomially, so the slower growth rate of Elsholtz and Tao's result makes a proof of this type less likely. Elsholtz and Tao classify solutions according to whether one or two of x {\displaystyle x} , y {\displaystyle y} , or z {\displaystyle z} is divisible by n {\displaystyle n} ; for prime n {\displaystyle n} , these are the only possibilities, although (on average) most solutions for composite n {\displaystyle n} are of other types. Their proof uses the Bombieri–Vinogradov theorem, the Brun–Titchmarsh theorem, and a system of modular identities, valid when n {\displaystyle n} is congruent to − c {\displaystyle -c} or − 1 c {\displaystyle -{\tfrac {1}{c}}} modulo 4 a b {\displaystyle 4ab} , where a {\displaystyle a} and b {\displaystyle b} are any two coprime positive integers and c {\displaystyle c} is any odd factor of a + b {\displaystyle a+b} . For instance, setting a = b = 1 {\displaystyle a=b=1} gives one of Mordell's identities, valid when n {\displaystyle n} is 3 mod 4. == Generalizations == As with fractions of the form 4 n {\displaystyle {\tfrac {4}{n}}} , it has been conjectured that every fraction 5 n {\displaystyle {\tfrac {5}{n}}} (for n > 1 {\displaystyle n>1} ) can be expressed as a sum of three positive unit fractions. A generalized version of the conjecture states that, for any positive k {\displaystyle k} , all but finitely many fractions k n {\displaystyle {\tfrac {k}{n}}} can be expressed as a sum of three positive unit fractions. 
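The divisibility classification is easy to observe experimentally: for prime n, every solution has exactly one or two denominators divisible by n (all three is impossible, since then each term would be at most 1/n and the sum at most 3/n < 4/n). The following self-contained sketch (brute-force enumeration; names illustrative) checks this for small primes:

```python
from fractions import Fraction

def expansions_of_four_over(n):
    """All (x, y, z) with x <= y <= z and 4/n = 1/x + 1/y + 1/z, bounded brute force."""
    target = Fraction(4, n)
    out = []
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        y_lo = max(x, (r1.denominator + r1.numerator - 1) // r1.numerator)
        for y in range(y_lo, 2 * r1.denominator // r1.numerator + 1):
            r2 = r1 - Fraction(1, y)
            if r2 > 0 and r2.numerator == 1:
                out.append((x, y, r2.denominator))
    return out

def divisibility_type(n, sol):
    """How many of the three denominators are divisible by n."""
    return sum(1 for d in sol if d % n == 0)

# For prime n, every solution is of type 1 or type 2.
for n in [5, 7, 11, 13, 17, 19, 23, 29, 31]:
    for sol in expansions_of_four_over(n):
        assert divisibility_type(n, sol) in (1, 2)
```

For n = 5, the solution (2, 4, 20) has type 1 and (2, 5, 10) has type 2, so both types occur already at the smallest prime case.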
The conjecture for fractions 5 n {\displaystyle {\tfrac {5}{n}}} was made by Wacław Sierpiński in a 1956 paper, which went on to credit the full conjecture to Sierpiński's student Andrzej Schinzel. Even if the generalized conjecture is false for any fixed value of k {\displaystyle k} , then the number of fractions k n {\displaystyle {\tfrac {k}{n}}} with n {\displaystyle n} in the range from 1 to N {\displaystyle N} that do not have three-term expansions must grow only sublinearly as a function of N {\displaystyle N} . In particular, if the Erdős–Straus conjecture itself (the case k = 4 {\displaystyle k=4} ) is false, then the number of counterexamples grows only sublinearly. Even more strongly, for any fixed k {\displaystyle k} , only a sublinear number of values of n {\displaystyle n} need more than two terms in their Egyptian fraction expansions. The generalized version of the conjecture is equivalent to the statement that the number of unexpandable fractions is not just sublinear but finite. When n {\displaystyle n} is an odd number, by analogy to the problem of odd greedy expansions for Egyptian fractions, one may ask for solutions to k n = 1 x + 1 y + 1 z {\displaystyle {\tfrac {k}{n}}={\tfrac {1}{x}}+{\tfrac {1}{y}}+{\tfrac {1}{z}}} in which x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} are distinct positive odd numbers. Solutions to this equation are known to always exist for the case in which k = 3. == See also == List of sums of reciprocals == Notes == == References ==
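The generalized problem can be explored with the same kind of bounded search for an arbitrary numerator k. This sketch (illustrative name; repeated denominators allowed) returns one solution or None:

```python
from fractions import Fraction

def three_unit_fractions(k, n):
    """Some (x, y, z) with x <= y <= z and k/n = 1/x + 1/y + 1/z, or None."""
    target = Fraction(k, n)
    if target > 3:  # 1/x + 1/y + 1/z <= 3, so no solution can exist
        return None
    x_lo = (n + k - 1) // k  # smallest x with 1/x <= k/n
    x_hi = 3 * n // k        # largest x with 1/x >= (k/n)/3
    for x in range(max(1, x_lo), x_hi + 1):
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        y_lo = max(x, (r1.denominator + r1.numerator - 1) // r1.numerator)
        for y in range(y_lo, 2 * r1.denominator // r1.numerator + 1):
            r2 = r1 - Fraction(1, y)
            if r2 > 0 and r2.numerator == 1:
                return (x, y, r2.denominator)
    return None
```

With k = 5 it finds a solution for every n from 2 up to any small search bound, consistent with the Sierpiński case of the conjecture; with n = 1 it correctly reports no solution, since three unit fractions sum to at most 3.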
|
Wikipedia:Eric Jakeman#0
|
Eric Jakeman (born 1939) is a British mathematical physicist specialising in the statistics and quantum statistics of waves. He is an emeritus professor at the University of Nottingham. == Education == Jakeman was educated at The Brunts School in Mansfield, England. He received a degree in mathematical physics from Birmingham University in 1960, and a PhD in superconductivity theory in 1963. == Career == He was the head of the scattering and quantum optics section at the Defence Research Agency, a visiting professor at Imperial College London, an honorary secretary of the Institute of Physics from 1994 until 2003, and finally a Professor of Applied Statistical Optics at the University of Nottingham from 1996. He was a member of the Council of the European Physical Society from 1985 until 2003. == Awards and honours == In 1977, Jakeman received the Maxwell Medal of the Institute of Physics for his work on statistical optics. He was elected a Fellow of the Royal Society (FRS) in 1990. His certificate of election reads: Dr Jakeman is an internationally recognised expert in the statistics and quantum statistics of wave fields, particularly those arising in laser scattering. His theoretical work on photon statistics and speckle has made a unique contribution to the development of the technique of photon correlation spectroscopy which is now used to investigate structure and motion in a wide range of systems of importance in engineering, medicine, physics, chemistry and biology. He has also significantly advanced the subject of non-Gaussian scattering of waves by random media and has developed new noise models which are being widely applied in optical, microwave and acoustic scattering problems. Jakeman has also made contributions to the field of heat and mass transfer, particularly on the subjects of morphological stability and oscillatory convection in crystal growth, and was jointly responsible for the notion of doubly-diffusive convection driven by the Soret Effect. 
== References ==
|
Wikipedia:Eric Stephen Barnes#0
|
Eric Stephen Barnes (1924–2000) was an Australian pure mathematician. He was awarded the Thomas Ranken Lyle Medal in 1959, and was (Sir Thomas) Elder Professor of Mathematics at the University of Adelaide. He was elected a Fellow of the Australian Academy of Science in 1954. He was born in Cardiff, Wales, on 16 January 1924 and died on 16 October 2000 in Adelaide, South Australia. He was educated at the Universities of Sydney and Cambridge. He held appointments as a Fellow of Trinity College, Cambridge 1950–1954; assistant lecturer, Cambridge 1951–1953; reader in pure mathematics, University of Sydney 1953–1958; Elder Professor of Mathematics, University of Adelaide 1959–1974; Secretary (Physical Sciences), Australian Academy of Science 1972–1976; Deputy Vice-Chancellor, University of Adelaide 1975–1980; and Professor of Pure Mathematics, University of Adelaide 1981–1983. == See also == Barnes–Wall lattice == References ==
|
Wikipedia:Eric Vanden-Eijnden#0
|
Eric Vanden-Eijnden is a professor of mathematics at the Courant Institute of Mathematical Sciences, New York University. Vanden-Eijnden earned his doctorate in 1997 from the Université libre de Bruxelles under the supervision of Radu Bălescu. In 2009 he was awarded the Germund Dahlquist Prize of the Society for Industrial and Applied Mathematics "for his work in developing mathematical tools and numerical methods for the analysis of dynamical systems that are both stochastic and multiscale", and in 2011 he won SIAM's J.D. Crawford Prize for outstanding research in nonlinear science. == References == == External links == Home page Google scholar profile
|