In mathematics, and especially gauge theory, Seiberg–Witten invariants are invariants of compact smooth oriented 4-manifolds introduced by Edward Witten (1994), using the Seiberg–Witten theory studied by Nathan Seiberg and Witten (1994a, 1994b) during their investigations of Seiberg–Witten gauge theory. Seiberg–Witten invariants are similar to Donaldson invariants and can be used to prove similar (but sometimes slightly stronger) results about smooth 4-manifolds. They are technically much easier to work with than Donaldson invariants; for example, the moduli spaces of solutions of the Seiberg–Witten equations tend to be compact, so one avoids the hard problems involved in compactifying the moduli spaces in Donaldson theory. For detailed descriptions of Seiberg–Witten invariants see (Donaldson 1996), (Moore 2001), (Morgan 1996), (Nicolaescu 2000), (Scorpan 2005, Chapter 10). For the relation to symplectic manifolds and Gromov–Witten invariants see (Taubes 2000). For the early history see (Jackson 1995).

== Spin-c structures ==

The fourth spin-c group is
$$\operatorname{Spin}^{c}(4) = (\operatorname{Spin}(4) \times \operatorname{U}(1))/\mathbb{Z}_2 \cong \operatorname{U}(2) \times_{\operatorname{U}(1)} \operatorname{U}(2),$$
where $\mathbb{Z}_2 = \mathbb{Z}/2\mathbb{Z}$ acts as a sign on both factors. The group has a natural homomorphism to $\operatorname{SO}(4) = \operatorname{Spin}(4)/\pm 1$.

Given a compact oriented 4-manifold, choose a smooth Riemannian metric $g$ with Levi-Civita connection $\nabla^g$. This reduces the structure group from the connected component $\operatorname{GL}(4)^+$ to $\operatorname{SO}(4)$ and is harmless from a homotopical point of view. A spin-c structure, or complex spin structure, on $M$ is a reduction of the structure group to $\operatorname{Spin}^c$, i.e. a lift of the $\operatorname{SO}(4)$ structure on the tangent bundle to the group $\operatorname{Spin}^c$.
By a theorem of Hirzebruch and Hopf, every smooth oriented compact 4-manifold $M$ admits a spin-c structure. The existence of a spin-c structure is equivalent to the existence of a lift of the second Stiefel–Whitney class $w_2(M) \in H^2(M, \mathbb{Z}/2\mathbb{Z})$ to a class $K \in H^2(M, \mathbb{Z})$. Conversely, such a lift determines the spin-c structure up to 2-torsion in $H^2(M, \mathbb{Z})$. A spin structure proper requires the more restrictive condition $w_2(M) = 0$.

A spin-c structure determines (and is determined by) a spinor bundle $W = W^+ \oplus W^-$ coming from the two-complex-dimensional positive and negative spinor representations of $\operatorname{Spin}(4)$, on which $\operatorname{U}(1)$ acts by multiplication. We have $K = c_1(W^+) = c_1(W^-)$. The spinor bundle $W$ comes with a graded Clifford algebra bundle representation, i.e. a map $\gamma : \mathrm{Cliff}(M,g) \to \mathrm{End}(W)$ such that for each 1-form $a$ we have $\gamma(a) : W^{\pm} \to W^{\mp}$ and $\gamma(a)^2 = -g(a,a)$. There is a unique Hermitian metric $h$ on $W$ such that $\gamma(a)$ is skew-Hermitian for real 1-forms $a$. It gives an induced action of the forms $\wedge^* M$ by anti-symmetrising. In particular this gives an isomorphism $\wedge^+ M \cong \mathrm{End}_0^{sh}(W^+)$ of the self-dual 2-forms with the traceless skew-Hermitian endomorphisms of $W^+$, which are then identified.
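These Clifford relations can be checked concretely in a matrix model. The following is a minimal numerical sketch (the particular chiral, quaternionic choice of Euclidean gamma matrices is an illustrative assumption, not fixed by the text): it verifies that $\gamma(a)^2 = -g(a,a)$, that $\gamma(a)$ is skew-Hermitian for a real 1-form $a$, and that $\gamma(a)$ exchanges the chiral summands $W^+$ and $W^-$.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Euclidean gamma matrices in a chiral basis: block off-diagonal,
# so gamma(a) maps W^+ to W^- and vice versa.
A = [I2, 1j * s1, 1j * s2, 1j * s3]
gammas = [np.block([[np.zeros((2, 2)), Am],
                    [-Am.conj().T, np.zeros((2, 2))]]) for Am in A]

def gamma(a):
    """Clifford action of the real 1-form with components a (orthonormal coframe)."""
    return sum(ai * gi for ai, gi in zip(a, gammas))

a = np.array([0.3, -1.2, 0.5, 2.0])
# gamma(a)^2 = -g(a,a) * identity
assert np.allclose(gamma(a) @ gamma(a), -np.dot(a, a) * np.eye(4))
# gamma(a) is skew-Hermitian for real a
assert np.allclose(gamma(a).conj().T, -gamma(a))
# gamma(a) is block off-diagonal: it exchanges W^+ and W^-
assert np.allclose(gamma(a)[:2, :2], 0) and np.allclose(gamma(a)[2:, 2:], 0)
```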
== Seiberg–Witten equations ==

Let $L = \det(W^+) \equiv \det(W^-)$ be the determinant line bundle, with $c_1(L) = K$. For every connection $\nabla_A = \nabla_0 + A$ with $A \in iA^1_{\mathbb{R}}(M)$ on $L$, there is a unique spinor connection $\nabla^A$ on $W$, i.e. a connection such that $\nabla^A_X(\gamma(a)) := [\nabla^A_X, \gamma(a)] = \gamma(\nabla^g_X a)$ for every 1-form $a$ and vector field $X$. The Clifford connection then defines a Dirac operator $D^A = (\gamma \otimes 1) \circ \nabla^A = \gamma(dx^\mu)\nabla^A_\mu$ on $W$. The group of maps $\mathcal{G} = \{u : M \to \operatorname{U}(1)\}$ acts as a gauge group on the set of all connections on $L$. The action of $\mathcal{G}$ can be "gauge fixed", e.g. by the condition $d^*A = 0$, leaving an effective parametrisation of the space of all such connections by $H^1(M,\mathbb{R})^{\mathrm{harm}}/H^1(M,\mathbb{Z}) \oplus d^*A^+_{\mathbb{R}}(M)$, with a residual $\operatorname{U}(1)$ gauge group action. Write $\phi$ for a spinor field of positive chirality, i.e. a section of $W^+$.
The Seiberg–Witten equations for $(\phi, \nabla^A)$ are now
$$D^A \phi = 0,$$
$$F_A^+ = \sigma(\phi) + i\omega.$$
Here $F_A \in iA^2_{\mathbb{R}}(M)$ is the closed curvature 2-form of $\nabla^A$, $F_A^+$ is its self-dual part, and $\sigma$ is the squaring map
$$\phi \mapsto \phi\, h(\phi, -) - \tfrac{1}{2} h(\phi,\phi)\, 1_{W^+}$$
from $W^+$ to the traceless Hermitian endomorphisms of $W^+$, identified with imaginary self-dual 2-forms; $\omega$ is a real self-dual 2-form, often taken to be zero or harmonic.

The gauge group $\mathcal{G}$ acts on the space of solutions. After adding the gauge fixing condition $d^*A = 0$, the residual $\operatorname{U}(1)$ acts freely, except for "reducible solutions" with $\phi = 0$. For technical reasons, the equations are in fact defined in suitable Sobolev spaces of sufficiently high regularity.

An application of the Weitzenböck formula
$$\nabla^{A*}\nabla^A \phi = (D^A)^2 \phi - \left(\tfrac{1}{2}\gamma(F_A^+) + s\right)\phi$$
and the identity
$$\Delta_g |\phi|_h^2 = 2h(\nabla^{A*}\nabla^A \phi, \phi) - 2|\nabla^A \phi|^2_{g\otimes h}$$
to solutions of the equations gives the equality
$$\Delta|\phi|^2 + |\nabla^A \phi|^2 + \tfrac{1}{4}|\phi|^4 = (-s)|\phi|^2 - \tfrac{1}{2}h(\phi, \gamma(\omega)\phi).$$
At a point where $|\phi|^2$ is maximal, $\Delta|\phi|^2 \geq 0$, so this shows that for any solution the sup norm $\|\phi\|_\infty$ is a priori bounded, with the bound depending only on the scalar curvature $s$ of $(M,g)$ and the self-dual form $\omega$. After adding the gauge fixing condition, elliptic regularity of the Dirac equation shows that solutions are in fact a priori bounded in Sobolev norms of arbitrary regularity, which shows that all solutions are smooth and that the space of all solutions up to gauge equivalence is compact. The solutions $(\phi, \nabla^A)$ of the Seiberg–Witten equations are called monopoles, as these equations are the field equations of massless magnetic monopoles on the manifold $M$.

== The moduli space of solutions ==

The space of solutions is acted on by the gauge group, and the quotient by this action is called the moduli space of monopoles. The moduli space is usually a manifold: for generic metrics, after gauge fixing, the equations cut out the solution space transversely and so define a smooth manifold. The residual "gauge fixed" gauge group $\operatorname{U}(1)$ acts freely except at reducible monopoles, i.e. solutions with $\phi = 0$. By the Atiyah–Singer index theorem the moduli space is finite-dimensional and has "virtual dimension"
$$\left(K^2 - 2\chi_{\mathrm{top}}(M) - 3\operatorname{sign}(M)\right)/4,$$
which for generic metrics is the actual dimension away from the reducibles. In particular, the moduli space is generically empty if the virtual dimension is negative.
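The virtual dimension formula is easy to evaluate on standard examples. A short sketch (the Euler characteristics and signatures of $K3$ and $\mathbb{CP}^2$ used below are the standard values; the function name is ours):

```python
def virtual_dim(K_squared, euler_char, signature):
    """Virtual dimension (K^2 - 2*chi_top - 3*sign)/4 of the Seiberg-Witten moduli space."""
    d, r = divmod(K_squared - 2 * euler_char - 3 * signature, 4)
    assert r == 0, "expected K to be characteristic, so the dimension is an integer"
    return d

# K3 surface: chi_top = 24, sign = -16; its spin structure has K = 0.
assert virtual_dim(0, 24, -16) == 0
# CP^2: chi_top = 3, sign = 1; the spin-c structure with K = ±3H has K^2 = 9.
assert virtual_dim(9, 3, 1) == 0
# CP^2 with the characteristic class K = 5H (K^2 = 25) has positive virtual dimension:
assert virtual_dim(25, 3, 1) == 4
```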
For a given self-dual 2-form $\omega$, the reducible solutions have $\phi = 0$, and so are determined by connections $\nabla_A = \nabla_0 + A$ on $L$ such that $F_0 + dA = i(\alpha + \omega)$ for some anti-self-dual 2-form $\alpha$. By the Hodge decomposition, since $F_0$ is closed, the only obstruction to solving this equation for $A$ given $\alpha$ and $\omega$ is the harmonic part of $\alpha$ and $\omega$, or equivalently the (de Rham) cohomology class of the curvature form, i.e. $[F_0] = F_0^{\mathrm{harm}} = i(\omega^{\mathrm{harm}} + \alpha^{\mathrm{harm}}) \in H^2(M,\mathbb{R})$. Thus, since $[\tfrac{1}{2\pi i}F_0] = K$, the necessary and sufficient condition for a reducible solution is
$$\omega^{\mathrm{harm}} \in 2\pi K + \mathcal{H}^- \subset H^2(M,\mathbb{R}),$$
where $\mathcal{H}^-$ is the space of harmonic anti-self-dual 2-forms. A 2-form $\omega$ is $K$-admissible if this condition is not met, so that solutions are necessarily irreducible. In particular, for $b^+ \geq 1$, the moduli space is a (possibly empty) compact manifold for generic metrics and admissible $\omega$. Note that if $b^+ \geq 2$ the space of $K$-admissible 2-forms is connected, whereas if $b^+ = 1$ it has two connected components (chambers). The moduli space can be given a natural orientation from an orientation on the space of positive harmonic 2-forms and on the first cohomology.
The a priori bound on the solutions also gives a priori bounds on $F^{\mathrm{harm}}$. There are therefore (for fixed $\omega$) only finitely many $K \in H^2(M,\mathbb{Z})$, and hence only finitely many spin-c structures, with a non-empty moduli space.

== Seiberg–Witten invariants ==

The Seiberg–Witten invariant of a four-manifold $M$ with $b_2^+(M) \geq 2$ is a map from the spin-c structures on $M$ to $\mathbb{Z}$. The value of the invariant on a spin-c structure is easiest to define when the moduli space is zero-dimensional (for a generic metric): in this case the value is the number of elements of the moduli space, counted with signs. The Seiberg–Witten invariant can also be defined when $b_2^+(M) = 1$, but then it depends on the choice of a chamber.

A manifold $M$ is said to be of simple type if the Seiberg–Witten invariant vanishes whenever the expected dimension of the moduli space is nonzero. The simple type conjecture states that if $M$ is simply connected and $b_2^+(M) \geq 2$ then the manifold is of simple type. This is true for symplectic manifolds. If the manifold $M$ has a metric of positive scalar curvature and $b_2^+(M) \geq 2$ then all Seiberg–Witten invariants of $M$ vanish. If the manifold $M$ is the connected sum of two manifolds both of which have $b_2^+ \geq 1$ then all Seiberg–Witten invariants of $M$ vanish. If the manifold $M$ is simply connected and symplectic and $b_2^+(M) \geq 2$ then it has a spin-c structure $s$ on which the Seiberg–Witten invariant is 1. In particular it cannot be split as a connected sum of manifolds with $b_2^+ \geq 1$.

== References ==

Donaldson, Simon K. (1996), "The Seiberg–Witten equations and 4-manifold topology", Bulletin of the American Mathematical Society (N.S.), 33 (1): 45–70, doi:10.1090/S0273-0979-96-00625-8, MR 1339810
Jackson, Allyn (1995), A revolution in mathematics, archived from the original on April 26, 2010
Moore, John Douglas (2001), Lectures on Seiberg–Witten invariants, Lecture Notes in Mathematics, vol. 1629 (2nd ed.), Berlin: Springer-Verlag, pp. viii+121, doi:10.1007/BFb0092948, ISBN 978-3-540-41221-2, MR 1830497
Morgan, John W. (1996), The Seiberg–Witten equations and applications to the topology of smooth four-manifolds, Mathematical Notes, vol. 44, Princeton, NJ: Princeton University Press, pp. viii+128, ISBN 978-0-691-02597-1, MR 1367507
Nash, Ch. (2001) [1994], "Seiberg–Witten equations", Encyclopedia of Mathematics, EMS Press
Nicolaescu, Liviu I. (2000), Notes on Seiberg–Witten theory, Graduate Studies in Mathematics, vol. 28, Providence, RI: American Mathematical Society, pp. xviii+484, doi:10.1090/gsm/028, ISBN 978-0-8218-2145-9, MR 1787219
Scorpan, Alexandru (2005), The wild world of 4-manifolds, American Mathematical Society, ISBN 978-0-8218-3749-8, MR 2136212
Seiberg, Nathan; Witten, Edward (1994a), "Electric-magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang-Mills theory", Nuclear Physics B, 426 (1): 19–52, arXiv:hep-th/9407087, doi:10.1016/0550-3213(94)90124-4, MR 1293681; "Erratum", Nuclear Physics B, 430 (2): 485–486, 1994, doi:10.1016/0550-3213(94)00449-8, MR 1303306
Seiberg, N.; Witten, E. (1994b), "Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD", Nuclear Physics B, 431 (3): 484–550, arXiv:hep-th/9408099, doi:10.1016/0550-3213(94)90214-3, MR 1306869
Taubes, Clifford Henry (2000), Wentworth, Richard (ed.), Seiberg Witten and Gromov invariants for symplectic 4-manifolds, First International Press Lecture Series, vol. 2, Somerville, MA: International Press, pp. vi+401, ISBN 978-1-57146-061-5, MR 1798809
Witten, Edward (1994), "Monopoles and four-manifolds", Mathematical Research Letters, 1 (6): 769–796, arXiv:hep-th/9411102, doi:10.4310/MRL.1994.v1.n6.a13, MR 1306021
Wikipedia/Seiberg–Witten_equations
In mathematics and theoretical physics, and especially gauge theory, the deformed Hermitian Yang–Mills (dHYM) equation is a differential equation describing the equations of motion for a D-brane in the B-model (commonly called a B-brane) of string theory. The equation was derived by Mariño–Minasian–Moore–Strominger in the case of Abelian gauge group (the unitary group $\operatorname{U}(1)$), and by Leung–Yau–Zaslow using mirror symmetry from the corresponding equations of motion for D-branes in the A-model of string theory.

== Definition ==

In this section we present the dHYM equation as explained in the mathematical literature by Collins–Xie–Yau. The deformed Hermitian Yang–Mills equation is a fully non-linear partial differential equation for a Hermitian metric on a line bundle over a compact Kähler manifold, or more generally for a real $(1,1)$-form. Namely, suppose $(X,\omega)$ is a Kähler manifold and $[\alpha] \in H^{1,1}(X,\mathbb{R})$ is a class. The case of a line bundle consists of setting $[\alpha] = c_1(L)$, where $c_1(L)$ is the first Chern class of a holomorphic line bundle $L \to X$. Suppose that $\dim X = n$ and consider the topological constant
$$\hat{z}([\omega],[\alpha]) = \int_X (\omega + i\alpha)^n.$$
Notice that $\hat{z}$ depends only on the classes of $\omega$ and $\alpha$. Suppose that $\hat{z} \neq 0$. Then it is a complex number
$$\hat{z}([\omega],[\alpha]) = re^{i\theta}$$
for some real $r > 0$ and angle $\theta \in [0, 2\pi)$, which is uniquely determined.
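For a Kähler surface ($n = 2$) the constant $\hat{z}$ can be computed directly from intersection numbers, since $\int_X(\omega+i\alpha)^2 = ([\omega]^2 - [\alpha]^2) + 2i\,[\omega]\cdot[\alpha]$. A small illustrative sketch, with hypothetical intersection numbers chosen only for demonstration:

```python
import cmath

def z_hat_surface(w2, a2, wa):
    """z_hat = ∫_X (ω + iα)^2 = ([ω]^2 - [α]^2) + 2i [ω].[α] on a surface (n = 2)."""
    return complex(w2 - a2, 2 * wa)

z = z_hat_surface(w2=3.0, a2=1.0, wa=2.0)   # hypothetical intersection numbers
r = abs(z)                                   # modulus r > 0
theta = cmath.phase(z) % (2 * cmath.pi)      # angle θ normalised to [0, 2π)
assert r > 0
assert abs(z - r * cmath.exp(1j * theta)) < 1e-12   # z_hat = r e^{iθ}
```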
Fix a smooth representative differential form $\alpha$ in the class $[\alpha]$. For a smooth function $\phi : X \to \mathbb{R}$, write $\alpha_\phi = \alpha + i\partial\bar\partial\phi$, and notice that $[\alpha_\phi] = [\alpha]$. The deformed Hermitian Yang–Mills equation for $(X,\omega)$ with respect to $[\alpha]$ is
$$\begin{cases} \operatorname{Im}(e^{-i\theta}(\omega + i\alpha_\phi)^n) = 0 \\ \operatorname{Re}(e^{-i\theta}(\omega + i\alpha_\phi)^n) > 0. \end{cases}$$
The second condition should be seen as a positivity condition on solutions to the first equation: one looks for solutions to the equation $\operatorname{Im}(e^{-i\theta}(\omega + i\alpha_\phi)^n) = 0$ such that $\operatorname{Re}(e^{-i\theta}(\omega + i\alpha_\phi)^n) > 0$. This is in analogy with the related problem of finding Kähler–Einstein metrics by looking for metrics $\omega + i\partial\bar\partial\phi$ solving the Einstein equation, subject to the condition that $\phi$ is a Kähler potential (which is a positivity condition on the form $\omega + i\partial\bar\partial\phi$).

== Discussion ==

=== Relation to Hermitian Yang–Mills equation ===

The dHYM equations can be transformed in several ways to illuminate their key properties. First, simple algebraic manipulation shows that the dHYM equation may be equivalently written
$$\operatorname{Im}((\omega + i\alpha)^n) = \tan\theta \,\operatorname{Re}((\omega + i\alpha)^n).$$
In this form it is possible to see the relation between the dHYM equation and the regular Hermitian Yang–Mills equation. In particular, the dHYM equation should look like the regular HYM equation in the so-called large volume limit. Precisely, one replaces the Kähler form $\omega$ by $k\omega$ for a positive integer $k$, and lets $k \to \infty$. Notice that the phase $\theta_k$ for $(X, k\omega, [\alpha])$ depends on $k$. In fact $\tan\theta_k = O(k^{-1})$, and we can expand
$$(k\omega + i\alpha)^n = k^n\omega^n + ink^{n-1}\omega^{n-1}\wedge\alpha + O(k^{n-2}).$$
Here we see that
$$\operatorname{Re}((k\omega + i\alpha)^n) = k^n\omega^n + O(k^{n-2}), \quad \operatorname{Im}((k\omega + i\alpha)^n) = nk^{n-1}\omega^{n-1}\wedge\alpha + O(k^{n-3}),$$
and the dHYM equation for $k\omega$ takes the form
$$Ck^{n-1}\omega^n + O(k^{n-3}) = nk^{n-1}\omega^{n-1}\wedge\alpha + O(k^{n-3})$$
for some topological constant $C$ determined by $\tan\theta$. Thus the leading-order term in the dHYM equation is
$$n\omega^{n-1}\wedge\alpha = C\omega^n,$$
which is just the HYM equation (replacing $\alpha$ by $F(h)$ if necessary).

=== Local form ===

The dHYM equation may also be written in local coordinates.
Fix $p \in X$ and holomorphic coordinates $(z^1, \dots, z^n)$ such that at the point $p$ we have
$$\omega = \sum_{j=1}^n i\,dz^j \wedge d\bar{z}^j, \quad \alpha = \sum_{j=1}^n \lambda_j\, i\,dz^j \wedge d\bar{z}^j.$$
Here $\lambda_j \in \mathbb{R}$ for all $j$, as we assumed $\alpha$ is a real form. Define the Lagrangian phase operator to be
$$\Theta_\omega(\alpha) = \sum_{j=1}^n \arctan(\lambda_j).$$
Then a simple computation shows that the dHYM equation in these local coordinates takes the form
$$\Theta_\omega(\alpha) = \phi,$$
where $\phi = \theta \bmod 2\pi$. In this form one sees that the dHYM equation is fully non-linear and elliptic.

== Solutions ==

It is possible to use algebraic geometry to study the existence of solutions to the dHYM equation, as demonstrated by the work of Collins–Jacob–Yau and Collins–Yau. Suppose that $V \subset X$ is any analytic subvariety of dimension $p$. Define the central charge $Z_V([\alpha])$ by
$$Z_V([\alpha]) = -\int_V e^{-i\omega + \alpha}.$$
When the dimension of $X$ is 2, Collins–Jacob–Yau show that if $\operatorname{Im}(Z_X([\alpha])) > 0$, then there exists a solution of the dHYM equation in the class $[\alpha] \in H^{1,1}(X,\mathbb{R})$ if and only if for every curve $C \subset X$ we have
$$\operatorname{Im}\left(\frac{Z_C([\alpha])}{Z_X([\alpha])}\right) > 0.$$
In the specific example where $X = \operatorname{Bl}_p\mathbb{CP}^n$, the blow-up of complex projective space at a point, Jacob–Sheu show that $[\alpha]$ admits a solution to the dHYM equation if and only if $Z_X([\alpha]) \neq 0$ and, for any $V \subset X$, we similarly have
$$\operatorname{Im}\left(\frac{Z_V([\alpha])}{Z_X([\alpha])}\right) > 0.$$
It has been shown by Gao Chen that in the so-called supercritical phase, where
$$\frac{(n-2)\pi}{2} < \theta < \frac{n\pi}{2},$$
algebraic conditions analogous to those above imply the existence of a solution to the dHYM equation. This is achieved through comparisons between the dHYM equation and the so-called J-equation in Kähler geometry. The J-equation appears as the small volume limit of the dHYM equation, where $\omega$ is replaced by $\varepsilon\omega$ for a small real number $\varepsilon > 0$ and one lets $\varepsilon \to 0$.

In general it is conjectured that the existence of solutions to the dHYM equation for a class $[\alpha] = c_1(L)$ should be equivalent to the Bridgeland stability of the line bundle $L$. This is motivated both by comparisons with similar theorems in the non-deformed case, such as the famous Kobayashi–Hitchin correspondence, which asserts that solutions exist to the HYM equations if and only if the underlying bundle is slope stable, and by physical reasoning coming from string theory, which predicts that physically realistic B-branes (those admitting solutions to the dHYM equation, for example) should correspond to Π-stability.
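The local form of the equation can be tested pointwise: where $\omega$ and $\alpha$ are simultaneously diagonal, $(\omega+i\alpha)^n/\omega^n$ contributes the factor $\prod_j(1+i\lambda_j)$, so the dHYM equation with the correct positivity holds exactly when $\sum_j \arctan(\lambda_j)$ agrees with the phase angle. A minimal sketch with hypothetical eigenvalues:

```python
import cmath
import math

lam = [0.5, 1.0, 2.0]                   # hypothetical eigenvalues λ_j of α relative to ω at a point
Theta = sum(math.atan(l) for l in lam)  # Lagrangian phase operator Θ_ω(α) = Σ arctan(λ_j)

# At such a point (ω + iα)^n / ω^n contributes the factor Π_j (1 + iλ_j):
prod = 1 + 0j
for l in lam:
    prod *= 1 + 1j * l

# When the phase angle equals Θ_ω(α), the dHYM equation holds pointwise with positivity:
val = cmath.exp(-1j * Theta) * prod
assert abs(val.imag) < 1e-12            # Im(e^{-iθ}(ω + iα)^n) = 0
assert val.real > 0                     # Re(e^{-iθ}(ω + iα)^n) > 0
```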
== Relation to string theory ==

Superstring theory predicts that spacetime is 10-dimensional, consisting of a Lorentzian manifold of dimension 4 (usually assumed to be Minkowski space, de Sitter space, or anti-de Sitter space) along with a Calabi–Yau manifold $X$ of dimension 6 (which therefore has complex dimension 3). In this string theory open strings must satisfy Dirichlet boundary conditions on their endpoints. These conditions require that the end points of the string lie on so-called D-branes (D for Dirichlet), and there is much mathematical interest in describing these branes.

In the B-model of topological string theory, homological mirror symmetry suggests D-branes should be viewed as elements of the derived category of coherent sheaves on the Calabi–Yau 3-fold $X$. This characterisation is abstract, and the case of primary importance, at least for the purpose of phrasing the dHYM equation, is when a B-brane consists of a holomorphic submanifold $Y \subset X$ and a holomorphic vector bundle $E \to Y$ over it (here $Y$ would be viewed as the support of the coherent sheaf $E$ over $X$), possibly with a compatible Chern connection on the bundle. This Chern connection arises from a choice of Hermitian metric $h$ on $E$, with corresponding connection $\nabla$ and curvature form $F(h)$. Ambient in the spacetime there is also a B-field or Kalb–Ramond field $B$ (not to be confused with the B in B-model), which is the string-theoretic analogue of the classical background electromagnetic field (hence the use of $B$, which commonly denotes the magnetic field strength).
Mathematically the B-field is a gerbe or bundle gerbe over spacetime, which means $B$ consists of a collection of 2-forms $B_i \in \Omega^2(U_i)$ for an open cover $U_i$ of spacetime; these forms may not agree on overlaps, where they must satisfy cocycle conditions in analogy with the transition functions of line bundles (0-gerbes). This B-field has the property that when pulled back along the inclusion map $\iota : Y \to X$ the gerbe is trivial, which means the B-field may be identified with a globally defined 2-form on $Y$, written $\beta$. The differential form $\alpha$ discussed above is in this context given by $\alpha = F(h) + \beta$, and studying the dHYM equation in the special case where $\alpha = F(h)$, or equivalently $[\alpha] = c_1(L)$, should be seen as turning the B-field off, i.e. setting $\beta = 0$, which in string theory corresponds to a spacetime with no background higher electromagnetic field.

The dHYM equation describes the equations of motion for this D-brane $(Y,E)$ in a spacetime equipped with a B-field $B$, and is derived from the corresponding equations of motion for A-branes through mirror symmetry. Mathematically the A-model describes D-branes as elements of the Fukaya category of $X$: special Lagrangian submanifolds of $X$ equipped with a flat unitary line bundle over them, for which the equations of motion are understood. In the section above, the dHYM equation has been phrased for the D6-brane $Y = X$.

== See also ==

Hermitian Yang–Mills connection
Yang–Mills connection
Thomas–Yau conjecture

== References ==
Wikipedia/Deformed_Hermitian_Yang–Mills_equation
In mathematics, and especially gauge theory, Donaldson theory is the study of the topology of smooth 4-manifolds using moduli spaces of anti-self-dual instantons. It was started by Simon Donaldson (1983), who proved Donaldson's theorem restricting the possible quadratic forms on the second cohomology group of a compact simply connected 4-manifold. Important consequences of this theorem include the existence of an exotic R4 and the failure of the smooth h-cobordism theorem in 4 dimensions. The results of Donaldson theory therefore depend on the manifold having a differential structure, and are largely false for topological 4-manifolds. Many of the theorems in Donaldson theory can now be proved more easily using Seiberg–Witten theory, though a number of open problems remain in Donaldson theory, such as the Witten conjecture and the Atiyah–Floer conjecture.

== See also ==

Kronheimer–Mrowka basic class
Instanton Floer homology
Yang–Mills equations

== References ==

Donaldson, Simon (1983), "An Application of Gauge Theory to Four Dimensional Topology", Journal of Differential Geometry, 18 (2): 279–315, MR 0710056
Donaldson, S. K.; Kronheimer, P. B. (1997), The Geometry of Four-Manifolds, Oxford Mathematical Monographs, Oxford: Clarendon Press, ISBN 0-19-850269-9
Freed, D. S.; Uhlenbeck, K. K. (1984), Instantons and four-manifolds, New York: Springer, ISBN 0-387-96036-8
Scorpan, A. (2005), The wild world of 4-manifolds, Providence: American Mathematical Society, ISBN 0-8218-3749-4
Wikipedia/Donaldson_theory
In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the Nahm transform – an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite order matrices are replaced by differential operators. Deep study of the Nahm equations was carried out by Nigel Hitchin and Simon Donaldson. Conceptually, the equations arise in the process of infinite-dimensional hyperkähler reduction. They can also be viewed as a dimensional reduction of the anti-self-dual Yang-Mills equations. Among their many applications we can mention: Hitchin's construction of monopoles, where this approach is critical for establishing nonsingularity of monopole solutions; Donaldson's description of the moduli space of monopoles; and the existence of hyperkähler structure on coadjoint orbits of complex semisimple Lie groups, proved by Kronheimer, Biquard, and Kovalev. == Equations == Let T 1 ( z ) , T 2 ( z ) , T 3 ( z ) {\displaystyle T_{1}(z),T_{2}(z),T_{3}(z)} be three matrix-valued meromorphic functions of a complex variable z {\displaystyle z} . The Nahm equations are a system of matrix differential equations d T 1 d z = [ T 2 , T 3 ] d T 2 d z = [ T 3 , T 1 ] d T 3 d z = [ T 1 , T 2 ] , {\displaystyle {\begin{aligned}{\frac {dT_{1}}{dz}}&=[T_{2},T_{3}]\\[3pt]{\frac {dT_{2}}{dz}}&=[T_{3},T_{1}]\\[3pt]{\frac {dT_{3}}{dz}}&=[T_{1},T_{2}],\end{aligned}}} together with certain analyticity properties, reality conditions, and boundary conditions. The three equations can be written concisely using the Levi-Civita symbol, in the form d T i d z = 1 2 ∑ j , k ϵ i j k [ T j , T k ] = ∑ j , k ϵ i j k T j T k . 
{\displaystyle {\frac {dT_{i}}{dz}}={\frac {1}{2}}\sum _{j,k}\epsilon _{ijk}[T_{j},T_{k}]=\sum _{j,k}\epsilon _{ijk}T_{j}T_{k}.} More generally, instead of considering N {\displaystyle N} by N {\displaystyle N} matrices, one can consider Nahm's equations with values in a Lie algebra g {\displaystyle g} . === Additional conditions === The variable z {\displaystyle z} is restricted to the open interval ( 0 , 2 ) {\displaystyle (0,2)} , and the following conditions are imposed: T i ∗ = − T i ; {\displaystyle T_{i}^{*}=-T_{i};} T i ( 2 − z ) = T i ( z ) T ; {\displaystyle T_{i}(2-z)=T_{i}(z)^{T};\,} T i {\displaystyle T_{i}} can be continued to a meromorphic function of z {\displaystyle z} in a neighborhood of the closed interval [ 0 , 2 ] {\displaystyle [0,2]} , analytic outside of 0 {\displaystyle 0} and 2 {\displaystyle 2} , and with simple poles at z = 0 {\displaystyle z=0} and z = 2 {\displaystyle z=2} ; and At the poles, the residues of T 1 , T 2 , T 3 {\displaystyle T_{1},T_{2},T_{3}} form an irreducible representation of the group SU(2). == Nahm–Hitchin description of monopoles == There is a natural equivalence between the monopoles of charge k {\displaystyle k} for the group S U ( 2 ) {\displaystyle SU(2)} , modulo gauge transformations, and the solutions of Nahm equations satisfying the additional conditions above, modulo the simultaneous conjugation of T 1 , T 2 , T 3 {\displaystyle T_{1},T_{2},T_{3}} by the group O ( k , R ) {\displaystyle O(k,R)} . == Lax representation == The Nahm equations can be written in the Lax form as follows.
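Before passing to the explicit Lax pair, the conserved quantities it produces can be checked numerically. The sketch below (Python with NumPy; the step size, integration interval, random seed, and 2×2 matrix size are illustrative choices, not from the article) integrates the Nahm flow with a Runge–Kutta scheme from random anti-Hermitian initial data and verifies that the trace and determinant of the matrix A(ζ) = (T₁ + iT₂) − 2iζT₃ + ζ²(T₁ − iT₂) stay constant along the flow.

```python
import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

def nahm_rhs(T1, T2, T3):
    # dT1/dz = [T2,T3], dT2/dz = [T3,T1], dT3/dz = [T1,T2]
    return comm(T2, T3), comm(T3, T1), comm(T1, T2)

def rk4_step(T, h):
    # one classical Runge-Kutta step for the triple T = (T1, T2, T3)
    k1 = nahm_rhs(*T)
    k2 = nahm_rhs(*[t + 0.5 * h * k for t, k in zip(T, k1)])
    k3 = nahm_rhs(*[t + 0.5 * h * k for t, k in zip(T, k2)])
    k4 = nahm_rhs(*[t + h * k for t, k in zip(T, k3)])
    return [t + (h / 6) * (a + 2 * b + 2 * c + d)
            for t, a, b, c, d in zip(T, k1, k2, k3, k4)]

def lax(T, zeta):
    # A(zeta) = A0 + zeta*A1 + zeta^2*A2 with A0 = T1+iT2, A1 = -2iT3, A2 = T1-iT2
    T1, T2, T3 = T
    return (T1 + 1j * T2) - 2j * zeta * T3 + zeta**2 * (T1 - 1j * T2)

rng = np.random.default_rng(0)

def random_skew_hermitian(n=2):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M - M.conj().T) / 2          # enforces the reality condition T* = -T

T = [random_skew_hermitian() for _ in range(3)]
zeta = 0.7
A_start = lax(T, zeta)
inv0 = (np.trace(A_start), np.linalg.det(A_start))
for _ in range(200):                     # integrate from z = 0 to z = 0.2
    T = rk4_step(T, 1e-3)
A_end = lax(T, zeta)
inv1 = (np.trace(A_end), np.linalg.det(A_end))
drift = max(abs(inv1[0] - inv0[0]), abs(inv1[1] - inv0[1]))
print(drift)   # conserved up to integrator error
```

Exact conservation of the spectrum of A(ζ) is the content of the Lax equation dA/dz = [A, B] discussed next; numerically it holds up to the error of the integrator.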
Set A 0 = T 1 + i T 2 , A 1 = − 2 i T 3 , A 2 = T 1 − i T 2 A ( ζ ) = A 0 + ζ A 1 + ζ 2 A 2 , B ( ζ ) = 1 2 d A d ζ = 1 2 A 1 + ζ A 2 , {\displaystyle {\begin{aligned}&A_{0}=T_{1}+iT_{2},\quad A_{1}=-2iT_{3},\quad A_{2}=T_{1}-iT_{2}\\[3pt]&A(\zeta )=A_{0}+\zeta A_{1}+\zeta ^{2}A_{2},\quad B(\zeta )={\frac {1}{2}}{\frac {dA}{d\zeta }}={\frac {1}{2}}A_{1}+\zeta A_{2},\end{aligned}}} then the system of Nahm equations is equivalent to the Lax equation d A d z = [ A , B ] . {\displaystyle {\frac {dA}{dz}}=[A,B].} As an immediate corollary, we obtain that the spectrum of the matrix A {\displaystyle A} does not depend on z {\displaystyle z} . Therefore, the characteristic equation det ( λ I + A ( ζ , z ) ) = 0 , {\displaystyle \det(\lambda I+A(\zeta ,z))=0,} which determines the so-called spectral curve in the twistor space T P 1 {\displaystyle TP^{1}} is invariant under the flow in z {\displaystyle z} . == See also == Bogomolny equation Yang–Mills–Higgs equations == References == Nahm, W. (1981). "All self-dual multimonopoles for arbitrary gauge groups". CERN, Preprint TH. 3172. Hitchin, Nigel (1983). "On the construction of monopoles". Communications in Mathematical Physics. 89 (2): 145–190. Bibcode:1983CMaPh..89..145H. doi:10.1007/BF01211826. S2CID 120823242. Donaldson, Simon (1984). "Nahm's equations and the classification of monopoles". Communications in Mathematical Physics. 96 (3): 387–407. Bibcode:1984CMaPh..96..387D. doi:10.1007/BF01214583. S2CID 119959346. Atiyah, Michael; Hitchin, N. J. (1988). The geometry and dynamics of magnetic monopoles. M. B. Porter Lectures. Princeton, NJ: Princeton University Press. ISBN 0-691-08480-7. Kronheimer, Peter B. (1990). "A hyper-Kählerian structure on coadjoint orbits of a semisimple complex group". Journal of the London Mathematical Society. 42 (2): 193–208. doi:10.1112/jlms/s2-42.2.193. Kovalev, A. G. (1996). "Nahm's equations and complex adjoint orbits". Quart. J. Math. Oxford. 47 (185): 41–58. doi:10.1093/qmath/47.1.41. 
Biquard, Olivier (1996). "Sur les équations de Nahm et la structure de Poisson des algèbres de Lie semi-simples complexes" [Nahm equations and Poisson structure of complex semisimple Lie algebras]. Math. Ann. 304 (2): 253–276. doi:10.1007/BF01446293. S2CID 73680531. == External links == Islands project – a wiki about the Nahm equations and related topics
Wikipedia/Nahm_equations
In mathematics, a vector-valued differential form on a manifold M is a differential form on M with values in a vector space V. More generally, it is a differential form with values in some vector bundle E over M. Ordinary differential forms can be viewed as R-valued differential forms. An important case of vector-valued differential forms is that of Lie algebra-valued forms. (A connection form is an example of such a form.) == Definition == Let M be a smooth manifold and E → M be a smooth vector bundle over M. We denote the space of smooth sections of a bundle E by Γ(E). An E-valued differential form of degree p is a smooth section of the tensor product bundle of E with Λp(T ∗M), the p-th exterior power of the cotangent bundle of M. The space of such forms is denoted by Ω p ( M , E ) = Γ ( E ⊗ Λ p T ∗ M ) . {\displaystyle \Omega ^{p}(M,E)=\Gamma (E\otimes \Lambda ^{p}T^{*}M).} Because Γ is a strong monoidal functor, this can also be interpreted as Γ ( E ⊗ Λ p T ∗ M ) = Γ ( E ) ⊗ Ω 0 ( M ) Γ ( Λ p T ∗ M ) = Γ ( E ) ⊗ Ω 0 ( M ) Ω p ( M ) , {\displaystyle \Gamma (E\otimes \Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Gamma (\Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M),} where the latter two tensor products are the tensor product of modules over the ring Ω0(M) of smooth R-valued functions on M. By convention, an E-valued 0-form is just a section of the bundle E. That is, Ω 0 ( M , E ) = Γ ( E ) . {\displaystyle \Omega ^{0}(M,E)=\Gamma (E).\,} Equivalently, an E-valued differential form can be defined as a bundle morphism T M ⊗ ⋯ ⊗ T M → E {\displaystyle TM\otimes \cdots \otimes TM\to E} which is totally skew-symmetric. Let V be a fixed vector space. A V-valued differential form of degree p is a differential form of degree p with values in the trivial bundle M × V. The space of such forms is denoted Ωp(M, V). When V = R one recovers the definition of an ordinary differential form.
If V is finite-dimensional, then one can show that the natural homomorphism Ω p ( M ) ⊗ R V → Ω p ( M , V ) , {\displaystyle \Omega ^{p}(M)\otimes _{\mathbb {R} }V\to \Omega ^{p}(M,V),} where the first tensor product is of vector spaces over R, is an isomorphism. == Operations on vector-valued forms == === Pullback === One can define the pullback of vector-valued forms by smooth maps just as for ordinary forms. The pullback of an E-valued form on N by a smooth map φ : M → N is an (φ*E)-valued form on M, where φ*E is the pullback bundle of E by φ. The formula is given just as in the ordinary case. For any E-valued p-form ω on N the pullback φ*ω is given by ( φ ∗ ω ) x ( v 1 , ⋯ , v p ) = ω φ ( x ) ( d φ x ( v 1 ) , ⋯ , d φ x ( v p ) ) . {\displaystyle (\varphi ^{*}\omega )_{x}(v_{1},\cdots ,v_{p})=\omega _{\varphi (x)}(\mathrm {d} \varphi _{x}(v_{1}),\cdots ,\mathrm {d} \varphi _{x}(v_{p})).} === Wedge product === Just as for ordinary differential forms, one can define a wedge product of vector-valued forms. The wedge product of an E1-valued p-form with an E2-valued q-form is naturally an (E1⊗E2)-valued (p+q)-form: ∧ : Ω p ( M , E 1 ) × Ω q ( M , E 2 ) → Ω p + q ( M , E 1 ⊗ E 2 ) . {\displaystyle \wedge :\Omega ^{p}(M,E_{1})\times \Omega ^{q}(M,E_{2})\to \Omega ^{p+q}(M,E_{1}\otimes E_{2}).} The definition is just as for ordinary forms with the exception that real multiplication is replaced with the tensor product: ( ω ∧ η ) ( v 1 , ⋯ , v p + q ) = 1 p ! q ! ∑ σ ∈ S p + q sgn ⁡ ( σ ) ω ( v σ ( 1 ) , ⋯ , v σ ( p ) ) ⊗ η ( v σ ( p + 1 ) , ⋯ , v σ ( p + q ) ) . 
{\displaystyle (\omega \wedge \eta )(v_{1},\cdots ,v_{p+q})={\frac {1}{p!q!}}\sum _{\sigma \in S_{p+q}}\operatorname {sgn}(\sigma )\omega (v_{\sigma (1)},\cdots ,v_{\sigma (p)})\otimes \eta (v_{\sigma (p+1)},\cdots ,v_{\sigma (p+q)}).} In particular, the wedge product of an ordinary (R-valued) p-form with an E-valued q-form is naturally an E-valued (p+q)-form (since the tensor product of E with the trivial bundle M × R is naturally isomorphic to E). In terms of local frames {eα} and {lβ} for E1 and E2 respectively, the wedge product of an E1-valued p-form ω = ωα eα, and an E2-valued q-form η = ηβ lβ is ω ∧ η = ∑ α , β ( ω α ∧ η β ) ( e α ⊗ l β ) , {\displaystyle \omega \wedge \eta =\sum _{\alpha ,\beta }(\omega ^{\alpha }\wedge \eta ^{\beta })(e_{\alpha }\otimes l_{\beta }),} where ωα ∧ ηβ is the ordinary wedge product of R {\displaystyle \mathbb {R} } -valued forms. For ω ∈ Ωp(M) and η ∈ Ωq(M, E) one has the usual commutativity relation: ω ∧ η = ( − 1 ) p q η ∧ ω . {\displaystyle \omega \wedge \eta =(-1)^{pq}\eta \wedge \omega .} In general, the wedge product of two E-valued forms is not another E-valued form, but rather an (E⊗E)-valued form. However, if E is an algebra bundle (i.e. a bundle of algebras rather than just vector spaces) one can compose with multiplication in E to obtain an E-valued form. If E is a bundle of commutative, associative algebras then, with this modified wedge product, the set of all E-valued differential forms Ω ( M , E ) = ⨁ p = 0 dim ⁡ M Ω p ( M , E ) {\displaystyle \Omega (M,E)=\bigoplus _{p=0}^{\dim M}\Omega ^{p}(M,E)} becomes a graded-commutative associative algebra. If the fibers of E are not commutative then Ω(M,E) will not be graded-commutative. === Exterior derivative === For any vector space V there is a natural exterior derivative on the space of V-valued forms. This is just the ordinary exterior derivative acting component-wise relative to any basis of V. 
Explicitly, if {eα} is a basis for V then the differential of a V-valued p-form ω = ωαeα is given by d ω = ( d ω α ) e α . {\displaystyle d\omega =(d\omega ^{\alpha })e_{\alpha }.\,} The exterior derivative on V-valued forms is completely characterized by the usual relations: d ( ω + η ) = d ω + d η d ( ω ∧ η ) = d ω ∧ η + ( − 1 ) p ω ∧ d η ( p = deg ⁡ ω ) d ( d ω ) = 0. {\displaystyle {\begin{aligned}&d(\omega +\eta )=d\omega +d\eta \\&d(\omega \wedge \eta )=d\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta \qquad (p=\deg \omega )\\&d(d\omega )=0.\end{aligned}}} More generally, the above remarks apply to E-valued forms where E is any flat vector bundle over M (i.e. a vector bundle whose transition functions are constant). The exterior derivative is defined as above on any local trivialization of E. If E is not flat then there is no natural notion of an exterior derivative acting on E-valued forms. What is needed is a choice of connection on E. A connection on E is a linear differential operator taking sections of E to E-valued one forms: ∇ : Ω 0 ( M , E ) → Ω 1 ( M , E ) . {\displaystyle \nabla :\Omega ^{0}(M,E)\to \Omega ^{1}(M,E).} If E is equipped with a connection ∇ then there is a unique covariant exterior derivative d ∇ : Ω p ( M , E ) → Ω p + 1 ( M , E ) {\displaystyle d_{\nabla }:\Omega ^{p}(M,E)\to \Omega ^{p+1}(M,E)} extending ∇. The covariant exterior derivative is characterized by linearity and the equation d ∇ ( ω ∧ η ) = d ∇ ω ∧ η + ( − 1 ) p ω ∧ d η {\displaystyle d_{\nabla }(\omega \wedge \eta )=d_{\nabla }\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta } where ω is a E-valued p-form and η is an ordinary q-form. In general, one need not have d∇2 = 0. In fact, this happens if and only if the connection ∇ is flat (i.e. has vanishing curvature). 
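The pointwise wedge-product formula given above lends itself to a direct implementation. The sketch below (Python with NumPy; the helper names and the sample forms are illustrative, not from the text) represents a vector-valued p-form at a point as a multilinear map on tangent vectors, implements the antisymmetrized sum over permutations, and checks the graded-commutativity relation ω ∧ η = (−1)^{pq} η ∧ ω for an ordinary 1-form against an R²-valued 1-form.

```python
import itertools
import math
import numpy as np

def perm_sign(sigma):
    # sign of a permutation given as a tuple of indices
    sign = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                sign = -sign
    return sign

def wedge(omega, p, eta, q):
    """Pointwise wedge of a p-form and a q-form, values tensored together."""
    def product(*v):
        total = 0
        for sigma in itertools.permutations(range(p + q)):
            a = np.atleast_1d(omega(*(v[i] for i in sigma[:p])))
            b = np.atleast_1d(eta(*(v[i] for i in sigma[p:])))
            total = total + perm_sign(sigma) * np.tensordot(a, b, axes=0)
        return total / (math.factorial(p) * math.factorial(q))
    return product

# omega: ordinary (R-valued) 1-form dx^0; eta: R^2-valued 1-form (dx^1, dx^2)
omega = lambda v: v[0]
eta = lambda v: np.array([v[1], v[2]])

v1, v2 = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
lhs = np.squeeze(wedge(omega, 1, eta, 1)(v1, v2))   # omega ^ eta
rhs = np.squeeze(wedge(eta, 1, omega, 1)(v1, v2))   # eta ^ omega
print(lhs, rhs)  # graded commutativity: lhs equals (-1)^{1*1} * rhs
```

The same routine works for any p and q; only the placement of the tensor factors distinguishes an (E₁⊗E₂)-valued result from an (E₂⊗E₁)-valued one.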
== Basic or tensorial forms on principal bundles == Let E → M be a smooth vector bundle of rank k over M and let π : F(E) → M be the (associated) frame bundle of E, which is a principal GLk(R) bundle over M. The pullback of E by π is canonically isomorphic to F(E) ×ρ Rk via the inverse of [u, v] →u(v), where ρ is the standard representation. Therefore, the pullback by π of an E-valued form on M determines an Rk-valued form on F(E). It is not hard to check that this pulled back form is right-equivariant with respect to the natural action of GLk(R) on F(E) × Rk and vanishes on vertical vectors (tangent vectors to F(E) which lie in the kernel of dπ). Such vector-valued forms on F(E) are important enough to warrant special terminology: they are called basic or tensorial forms on F(E). Let π : P → M be a (smooth) principal G-bundle and let V be a fixed vector space together with a representation ρ : G → GL(V). A basic or tensorial form on P of type ρ is a V-valued form ω on P that is equivariant and horizontal in the sense that ( R g ) ∗ ω = ρ ( g − 1 ) ω {\displaystyle (R_{g})^{*}\omega =\rho (g^{-1})\omega \,} for all g ∈ G, and ω ( v 1 , … , v p ) = 0 {\displaystyle \omega (v_{1},\ldots ,v_{p})=0} whenever at least one of the vi are vertical (i.e., dπ(vi) = 0). Here Rg denotes the right action of G on P for some g ∈ G. Note that for 0-forms the second condition is vacuously true. Example: If ρ is the adjoint representation of G on the Lie algebra, then the connection form ω satisfies the first condition (but not the second). The associated curvature form Ω satisfies both; hence Ω is a tensorial form of adjoint type. The "difference" of two connection forms is a tensorial form. Given P and ρ as above one can construct the associated vector bundle E = P ×ρ V. Tensorial q-forms on P are in a natural one-to-one correspondence with E-valued q-forms on M. 
As in the case of the principal bundle F(E) above, given a q-form ϕ ¯ {\displaystyle {\overline {\phi }}} on M with values in E, define φ on P fiberwise by, say at u, ϕ = u − 1 π ∗ ϕ ¯ {\displaystyle \phi =u^{-1}\pi ^{*}{\overline {\phi }}} where u is viewed as a linear isomorphism V → ≃ E π ( u ) = ( π ∗ E ) u , v ↦ [ u , v ] {\displaystyle V{\overset {\simeq }{\to }}E_{\pi (u)}=(\pi ^{*}E)_{u},v\mapsto [u,v]} . φ is then a tensorial form of type ρ. Conversely, given a tensorial form φ of type ρ, the same formula defines an E-valued form ϕ ¯ {\displaystyle {\overline {\phi }}} on M (cf. the Chern–Weil homomorphism.) In particular, there is a natural isomorphism of vector spaces Γ ( M , E ) ≃ { f : P → V | f ( u g ) = ρ ( g ) − 1 f ( u ) } , f ¯ ↔ f {\displaystyle \Gamma (M,E)\simeq \{f:P\to V|f(ug)=\rho (g)^{-1}f(u)\},\,{\overline {f}}\leftrightarrow f} . Example: Let E be the tangent bundle of M. Then identity bundle map idE: E →E is an E-valued one form on M. The tautological one-form is a unique one-form on the frame bundle of E that corresponds to idE. Denoted by θ, it is a tensorial form of standard type. Now, suppose there is a connection on P so that there is an exterior covariant differentiation D on (various) vector-valued forms on P. Through the above correspondence, D also acts on E-valued forms: define ∇ by ∇ ϕ ¯ = D ϕ ¯ . {\displaystyle \nabla {\overline {\phi }}={\overline {D\phi }}.} In particular for zero-forms, ∇ : Γ ( M , E ) → Γ ( M , T ∗ M ⊗ E ) {\displaystyle \nabla :\Gamma (M,E)\to \Gamma (M,T^{*}M\otimes E)} . This is exactly the covariant derivative for the connection on the vector bundle E. == Examples == Siegel modular forms arise as vector-valued differential forms on Siegel modular varieties. == Notes == == References == Shoshichi Kobayashi and Katsumi Nomizu (1963) Foundations of Differential Geometry, Vol. 1, Wiley Interscience.
Wikipedia/Vector_valued_differential_form
In mathematics, and in particular differential geometry and gauge theory, Hitchin's equations are a system of partial differential equations for a connection and Higgs field on a vector bundle or principal bundle over a Riemann surface, written down by Nigel Hitchin in 1987. Hitchin's equations are locally equivalent to the harmonic map equation for a surface into the symmetric space dual to the structure group. They also appear as a dimensional reduction of the self-dual Yang–Mills equations from four dimensions to two dimensions, and solutions to Hitchin's equations give examples of Higgs bundles and of holomorphic connections. The existence of solutions to Hitchin's equations on a compact Riemann surface follows from the stability of the corresponding Higgs bundle or the corresponding holomorphic connection, and this is the simplest form of the Nonabelian Hodge correspondence. The moduli space of solutions to Hitchin's equations was constructed by Hitchin in the rank two case on a compact Riemann surface and was one of the first examples of a hyperkähler manifold to be constructed. The nonabelian Hodge correspondence shows it is isomorphic to the Higgs bundle moduli space, and to the moduli space of holomorphic connections. Using the metric structure on the Higgs bundle moduli space afforded by its description in terms of Hitchin's equations, Hitchin constructed the Hitchin system, a completely integrable system whose twisted generalization over a finite field was used by Ngô Bảo Châu in his proof of the fundamental lemma in the Langlands program, for which he was awarded the 2010 Fields Medal. == Definition == The definition may be phrased for a connection on a vector bundle or principal bundle, with the two perspectives being essentially interchangeable. Here the definition for principal bundles is presented, which is the form that appears in Hitchin's work.
Let P → Σ {\displaystyle P\to \Sigma } be a principal G {\displaystyle G} -bundle for a compact real Lie group G {\displaystyle G} over a compact Riemann surface. For simplicity we will consider the case of G = SU ( 2 ) {\displaystyle G={\text{SU}}(2)} or G = SO ( 3 ) {\displaystyle G={\text{SO}}(3)} , the special unitary group or special orthogonal group. Suppose A {\displaystyle A} is a connection on P {\displaystyle P} , and let Φ {\displaystyle \Phi } be a section of the complex vector bundle ad P C ⊗ T 1 , 0 ∗ Σ {\displaystyle {\text{ad}}P^{\mathbb {C} }\otimes T_{1,0}^{*}\Sigma } , where ad P C {\displaystyle {\text{ad}}P^{\mathbb {C} }} is the complexification of the adjoint bundle of P {\displaystyle P} , with fibre given by the complexification g ⊗ C {\displaystyle {\mathfrak {g}}\otimes \mathbb {C} } of the Lie algebra g {\displaystyle {\mathfrak {g}}} of G {\displaystyle G} . That is, Φ {\displaystyle \Phi } is a complex ad P {\displaystyle {\text{ad}}P} -valued ( 1 , 0 ) {\displaystyle (1,0)} -form on Σ {\displaystyle \Sigma } . Such a Φ {\displaystyle \Phi } is called a Higgs field in analogy with the auxiliary Higgs field appearing in Yang–Mills theory. For a pair ( A , Φ ) {\displaystyle (A,\Phi )} , Hitchin's equations assert that { F A + [ Φ , Φ ∗ ] = 0 ∂ ¯ A Φ = 0. {\displaystyle {\begin{cases}F_{A}+[\Phi ,\Phi ^{*}]=0\\{\bar {\partial }}_{A}\Phi =0.\end{cases}}} where F A ∈ Ω 2 ( Σ , ad P ) {\displaystyle F_{A}\in \Omega ^{2}(\Sigma ,{\text{ad}}P)} is the curvature form of A {\displaystyle A} , ∂ ¯ A {\displaystyle {\bar {\partial }}_{A}} is the ( 0 , 1 ) {\displaystyle (0,1)} -part of the induced connection on the complexified adjoint bundle ad P ⊗ C {\displaystyle {\text{ad}}P\otimes \mathbb {C} } , and [ Φ , Φ ∗ ] {\displaystyle [\Phi ,\Phi ^{*}]} is the commutator of ad P {\displaystyle {\text{ad}}P} -valued one-forms in the sense of Lie algebra-valued differential forms. 
Since [ Φ , Φ ∗ ] {\displaystyle [\Phi ,\Phi ^{*}]} is of type ( 1 , 1 ) {\displaystyle (1,1)} , Hitchin's equations assert that the ( 0 , 2 ) {\displaystyle (0,2)} -component F A 0 , 2 = 0 {\displaystyle F_{A}^{0,2}=0} . Since ∂ ¯ A 2 = F A 0 , 2 {\displaystyle {\bar {\partial }}_{A}^{2}=F_{A}^{0,2}} , this implies that ∂ ¯ A {\displaystyle {\bar {\partial }}_{A}} is a Dolbeault operator on ad P C {\displaystyle {\text{ad}}P^{\mathbb {C} }} and gives this Lie algebra bundle the structure of a holomorphic vector bundle. Therefore, the condition ∂ ¯ A Φ = 0 {\displaystyle {\bar {\partial }}_{A}\Phi =0} means that Φ {\displaystyle \Phi } is a holomorphic ad P {\displaystyle {\text{ad}}P} -valued ( 1 , 0 ) {\displaystyle (1,0)} -form on Σ {\displaystyle \Sigma } . A pair consisting of a holomorphic vector bundle E {\displaystyle E} with a holomorphic endomorphism-valued ( 1 , 0 ) {\displaystyle (1,0)} -form Φ {\displaystyle \Phi } is called a Higgs bundle, and so every solution to Hitchin's equations produces an example of a Higgs bundle. == Derivation == Hitchin's equations can be derived as a dimensional reduction of the Yang–Mills equations from four dimensions to two dimensions. Consider a connection A {\displaystyle A} on a trivial principal G {\displaystyle G} -bundle over R 4 {\displaystyle \mathbb {R} ^{4}} . Then there exist four functions A 1 , A 2 , A 3 , A 4 : R 4 → g {\displaystyle A_{1},A_{2},A_{3},A_{4}:\mathbb {R} ^{4}\to {\mathfrak {g}}} such that A = A 1 d x 1 + A 2 d x 2 + A 3 d x 3 + A 4 d x 4 {\displaystyle A=A_{1}dx^{1}+A_{2}dx^{2}+A_{3}dx^{3}+A_{4}dx^{4}} where d x i {\displaystyle dx^{i}} are the standard coordinate differential forms on R 4 {\displaystyle \mathbb {R} ^{4}} .
The self-duality equations for the connection A {\displaystyle A} , a particular case of the Yang–Mills equations, can be written { F 12 = F 34 F 13 = F 42 F 14 = F 23 {\displaystyle {\begin{cases}F_{12}=F_{34}\\F_{13}=F_{42}\\F_{14}=F_{23}\end{cases}}} where F = ∑ i < j F i j d x i ∧ d x j {\textstyle F=\sum _{i<j}F_{ij}dx^{i}\wedge dx^{j}} is the curvature two-form of A {\displaystyle A} . To dimensionally reduce to two dimensions, one imposes that the connection forms A i {\displaystyle A_{i}} are independent of the coordinates x 3 , x 4 {\displaystyle x^{3},x^{4}} on R 4 {\displaystyle \mathbb {R} ^{4}} . Thus the components A 1 d x 1 + A 2 d x 2 {\displaystyle A_{1}dx^{1}+A_{2}dx^{2}} define a connection on the restricted bundle over R 2 {\displaystyle \mathbb {R} ^{2}} , and if one relabels A 3 = ϕ 1 {\displaystyle A_{3}=\phi _{1}} , A 4 = ϕ 2 {\displaystyle A_{4}=\phi _{2}} then these are auxiliary g {\displaystyle {\mathfrak {g}}} -valued fields over R 2 {\displaystyle \mathbb {R} ^{2}} . If one now writes ϕ = ϕ 1 − i ϕ 2 {\displaystyle \phi =\phi _{1}-i\phi _{2}} and Φ = 1 2 ϕ d z {\textstyle \Phi ={\frac {1}{2}}\phi dz} where d z = d x 1 + i d x 2 {\displaystyle dz=dx^{1}+idx^{2}} is the standard complex ( 1 , 0 ) {\displaystyle (1,0)} -form on R 2 = C {\displaystyle \mathbb {R} ^{2}=\mathbb {C} } , then the self-duality equations above become precisely Hitchin's equations. Since these equations are conformally invariant on R 2 {\displaystyle \mathbb {R} ^{2}} , they make sense on a conformal compactification of the plane, a Riemann surface. == References ==
Wikipedia/Hitchin's_equations
In mathematics, and in particular gauge theory and complex geometry, a Hermitian Yang–Mills connection (or Hermite–Einstein connection) is a Chern connection associated to an inner product on a holomorphic vector bundle over a Kähler manifold that satisfies an analogue of Einstein's equations: namely, the contraction of the curvature 2-form of the connection with the Kähler form is required to be a constant times the identity transformation. Hermitian Yang–Mills connections are special examples of Yang–Mills connections, and are often called instantons. The Kobayashi–Hitchin correspondence proved by Donaldson, Uhlenbeck and Yau asserts that a holomorphic vector bundle over a compact Kähler manifold admits a Hermitian Yang–Mills connection if and only if it is slope polystable. == Hermitian Yang–Mills equations == Hermite–Einstein connections arise as solutions of the Hermitian Yang–Mills equations. These are a system of partial differential equations on a vector bundle over a Kähler manifold, which imply the Yang–Mills equations. Let A {\displaystyle A} be a Hermitian connection on a Hermitian vector bundle E {\displaystyle E} over a Kähler manifold X {\displaystyle X} of dimension n {\displaystyle n} . Then the Hermitian Yang–Mills equations are: F A 0 , 2 = 0 F A ⋅ ω = λ ( E ) Id , {\displaystyle {\begin{aligned}&F_{A}^{0,2}=0\\&F_{A}\cdot \omega =\lambda (E)\operatorname {Id} ,\end{aligned}}} for some constant λ ( E ) ∈ C {\displaystyle \lambda (E)\in \mathbb {C} } . Here we have: F A ∧ ω n − 1 = ( F A ⋅ ω ) ω n . {\displaystyle F_{A}\wedge \omega ^{n-1}=(F_{A}\cdot \omega )\omega ^{n}.} Notice that since A {\displaystyle A} is assumed to be a Hermitian connection, the curvature F A {\displaystyle F_{A}} is skew-Hermitian, and so F A 0 , 2 = 0 {\displaystyle F_{A}^{0,2}=0} implies F A 2 , 0 = 0 {\displaystyle F_{A}^{2,0}=0} . 
When the underlying Kähler manifold X {\displaystyle X} is compact, λ ( E ) {\displaystyle \lambda (E)} may be computed using Chern–Weil theory. Namely, we have deg ⁡ ( E ) := ∫ X c 1 ( E ) ∧ ω n − 1 = i 2 π ∫ X Tr ⁡ ( F A ) ∧ ω n − 1 = i 2 π ∫ X Tr ⁡ ( F A ⋅ ω ) ω n . {\displaystyle {\begin{aligned}\deg(E)&:=\int _{X}c_{1}(E)\wedge \omega ^{n-1}\\&={\frac {i}{2\pi }}\int _{X}\operatorname {Tr} (F_{A})\wedge \omega ^{n-1}\\&={\frac {i}{2\pi }}\int _{X}\operatorname {Tr} (F_{A}\cdot \omega )\omega ^{n}.\end{aligned}}} Since F A ⋅ ω = λ ( E ) Id E {\displaystyle F_{A}\cdot \omega =\lambda (E)\operatorname {Id} _{E}} and the identity endomorphism has trace given by the rank of E {\displaystyle E} , we obtain λ ( E ) = − 2 π i n ! Vol ⁡ ( X ) μ ( E ) , {\displaystyle \lambda (E)=-{\frac {2\pi i}{n!\operatorname {Vol} (X)}}\mu (E),} where μ ( E ) {\displaystyle \mu (E)} is the slope of the vector bundle E {\displaystyle E} , given by μ ( E ) = deg ⁡ ( E ) rank ⁡ ( E ) , {\displaystyle \mu (E)={\frac {\deg(E)}{\operatorname {rank} (E)}},} and the volume of X {\displaystyle X} is taken with respect to the volume form ω n / n ! {\displaystyle \omega ^{n}/n!} . Due to the similarity of the second condition in the Hermitian Yang–Mills equations with the equations for an Einstein metric, solutions of the Hermitian Yang–Mills equations are often called Hermite–Einstein connections, as well as Hermitian Yang–Mills connections. == Examples == The Levi-Civita connection of a Kähler–Einstein metric is Hermite–Einstein with respect to the Kähler–Einstein metric. (These examples are however dangerously misleading, because there are compact Einstein manifolds, such as the Page metric on C P 2 # C P 2 ¯ {\displaystyle {\mathbb {C} P}^{2}\#{\overline {\mathbb {C} P^{2}}}} , that are Hermitian, but for which the Levi-Civita connection is not Hermite–Einstein.) 
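As a concrete illustration of the constant λ(E) computed above (a routine specialization, not taken from the references): for a holomorphic line bundle L of degree d over a compact Riemann surface X, so that n = 1 and rank(L) = 1, the formulas give

```latex
\deg(L) = \int_X c_1(L) = d, \qquad
\mu(L) = \frac{\deg(L)}{\operatorname{rank}(L)} = d, \qquad
\lambda(L) = -\frac{2\pi i}{1!\,\operatorname{Vol}(X)}\,\mu(L)
           = -\frac{2\pi i\, d}{\operatorname{Vol}(X)}.
```

In particular λ(L) vanishes exactly when the degree does, consistent with the fact that zero-degree solutions are the ones related to anti-self-dual instantons.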
When the Hermitian vector bundle E {\displaystyle E} has a holomorphic structure, there is a natural choice of Hermitian connection, the Chern connection. For the Chern connection, the condition that F A 0 , 2 = 0 {\displaystyle F_{A}^{0,2}=0} is automatically satisfied. The Kobayashi–Hitchin correspondence asserts that a holomorphic vector bundle E {\displaystyle E} admits a Hermitian metric h {\displaystyle h} such that the associated Chern connection satisfies the Hermitian Yang–Mills equations if and only if the vector bundle is polystable. From this perspective, the Hermitian Yang–Mills equations can be seen as a system of equations for the metric h {\displaystyle h} rather than the associated Chern connection, and such metrics solving the equations are called Hermite–Einstein metrics. The Hermite–Einstein condition on Chern connections was first introduced by Kobayashi (1980, section 6). These equations imply the Yang–Mills equations in any dimension, and in real dimension four are closely related to the self-dual Yang–Mills equations that define instantons. In particular, when the complex dimension of the Kähler manifold X {\displaystyle X} is 2 {\displaystyle 2} , there is a splitting of the forms into self-dual and anti-self-dual forms. The complex structure interacts with this as follows: Λ + 2 = Λ 2 , 0 ⊕ Λ 0 , 2 ⊕ ⟨ ω ⟩ , Λ − 2 = ⟨ ω ⟩ ⊥ ⊂ Λ 1 , 1 {\displaystyle \Lambda _{+}^{2}=\Lambda ^{2,0}\oplus \Lambda ^{0,2}\oplus \langle \omega \rangle ,\qquad \Lambda _{-}^{2}=\langle \omega \rangle ^{\perp }\subset \Lambda ^{1,1}} When the degree of the vector bundle E {\displaystyle E} vanishes, then the Hermitian Yang–Mills equations become F A 0 , 2 = F A 2 , 0 = F A ⋅ ω = 0 {\displaystyle F_{A}^{0,2}=F_{A}^{2,0}=F_{A}\cdot \omega =0} . By the above representation, this is precisely the condition that F A + = 0 {\displaystyle F_{A}^{+}=0} . That is, A {\displaystyle A} is an ASD instanton.
Notice that when the degree does not vanish, solutions of the Hermitian Yang–Mills equations cannot be anti-self-dual, and in fact there are no solutions to the ASD equations in this case. == See also == Einstein manifold Deformed Hermitian Yang–Mills equation Gauge theory (mathematics) == References == Kobayashi, Shoshichi (1980), "First Chern class and holomorphic tensor fields", Nagoya Mathematical Journal, 77: 5–11, doi:10.1017/S0027763000018602, ISSN 0027-7630, MR 0556302, S2CID 118228189 Kobayashi, Shoshichi (1987), Differential geometry of complex vector bundles, Publications of the Mathematical Society of Japan, vol. 15, Princeton University Press, ISBN 978-0-691-08467-1, MR 0909698
Wikipedia/Hermitian_Yang–Mills_equations
In mathematics, and especially gauge theory, the Bogomolny equation for magnetic monopoles is the equation F A = ⋆ d A Φ , {\displaystyle F_{A}=\star d_{A}\Phi ,} where F A {\displaystyle F_{A}} is the curvature of a connection A {\displaystyle A} on a principal G {\displaystyle G} -bundle over a 3-manifold M {\displaystyle M} , Φ {\displaystyle \Phi } is a section of the corresponding adjoint bundle, d A {\displaystyle d_{A}} is the exterior covariant derivative induced by A {\displaystyle A} on the adjoint bundle, and ⋆ {\displaystyle \star } is the Hodge star operator on M {\displaystyle M} . These equations are named after E. B. Bogomolny and were studied extensively by Michael Atiyah and Nigel Hitchin. The equations are a dimensional reduction of the self-dual Yang–Mills equations from four dimensions to three dimensions, and correspond to global minima of the appropriate action. If M {\displaystyle M} is closed, there are only trivial (i.e. flat) solutions. == See also == Monopole moduli space Ginzburg–Landau theory Seiberg–Witten theory Bogomol'nyi–Prasad–Sommerfield bound == References == == External links == Bogomolny equation on nLab "Magnetic_monopole", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Bogomolny_equations
In general relativity, a vacuum solution is a Lorentzian manifold whose Einstein tensor vanishes identically. According to the Einstein field equation, this means that the stress–energy tensor also vanishes identically, so that no matter or non-gravitational fields are present. These are distinct from the electrovacuum solutions, which take into account the electromagnetic field in addition to the gravitational field. Vacuum solutions are also distinct from the lambdavacuum solutions, where the only term in the stress–energy tensor is the cosmological constant term (and thus, the lambdavacuums can be taken as cosmological models). More generally, a vacuum region in a Lorentzian manifold is a region in which the Einstein tensor vanishes. Vacuum solutions are a special case of the more general exact solutions in general relativity. == Equivalent conditions == It is a mathematical fact that the Einstein tensor vanishes if and only if the Ricci tensor vanishes. This follows from the fact that these two second rank tensors stand in a kind of dual relationship; they are the trace reverse of each other: G a b = R a b − R 2 g a b , R a b = G a b − G 2 g a b {\displaystyle G_{ab}=R_{ab}-{\frac {R}{2}}\,g_{ab},\;\;R_{ab}=G_{ab}-{\frac {G}{2}}\,g_{ab}} where the traces are R = R a a , G = G a a = − R {\displaystyle R={R^{a}}_{a},\;\;G={G^{a}}_{a}=-R} . A third equivalent condition follows from the Ricci decomposition of the Riemann curvature tensor as a sum of the Weyl curvature tensor plus terms built out of the Ricci tensor: the Weyl and Riemann tensors agree, R a b c d = C a b c d {\displaystyle R_{abcd}=C_{abcd}} , in some region if and only if it is a vacuum region. == Gravitational energy == Since T a b = 0 {\displaystyle T^{ab}=0} in a vacuum region, it might seem that according to general relativity, vacuum regions must contain no energy. But the gravitational field can do work, so we must expect the gravitational field itself to possess energy, and it does. 
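The trace-reversal relation of the previous section is easy to verify numerically. The sketch below (Python with NumPy; the mock symmetric matrix standing in for the Ricci tensor is illustrative data, not a solution of any field equation) checks that trace reversal is an involution in four dimensions, which is exactly why the Einstein tensor vanishes if and only if the Ricci tensor does.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric (illustrative choice)
g_inv = np.linalg.inv(g)

def trace(T):
    # the trace T^a_a taken with the metric
    return np.einsum('ab,ab->', g_inv, T)

def trace_reverse(T):
    return T - 0.5 * trace(T) * g

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
ricci = (M + M.T) / 2                     # mock symmetric "Ricci tensor"

einstein = trace_reverse(ricci)           # G_ab = R_ab - (R/2) g_ab
recovered = trace_reverse(einstein)       # R_ab = G_ab - (G/2) g_ab

print(np.isclose(trace(einstein), -trace(ricci)))  # G = -R
print(np.allclose(recovered, ricci))               # involution in 4 dimensions
```

Since each tensor is the trace reverse of the other, setting either one to zero forces the other to vanish as well.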
However, determining the precise location of this gravitational field energy is technically problematical in general relativity, by its very nature of the clean separation into a universal gravitational interaction and "all the rest". The fact that the gravitational field itself possesses energy yields a way to understand the nonlinearity of the Einstein field equation: this gravitational field energy itself produces more gravity. (This is described as "the gravity of gravity", or by saying that "gravity gravitates".) This means that the gravitational field outside the Sun is a bit stronger according to general relativity than it is according to Newton's theory. == Examples == Well-known examples of explicit vacuum solutions include: Minkowski spacetime (which describes empty space with no cosmological constant) Milne model (which is a model developed by E. A. Milne describing an empty universe which has no curvature) Schwarzschild vacuum (which describes the spacetime geometry around a spherical mass), Kerr vacuum (which describes the geometry around a rotating object), Taub–NUT vacuum (a famous counterexample describing the exterior gravitational field of an isolated object with strange properties), Kerns–Wild vacuum (Robert M. Kerns and Walter J. Wild 1982) (a Schwarzschild object immersed in an ambient "almost uniform" gravitational field), double Kerr vacuum (two Kerr objects sharing the same axis of rotation, but held apart by unphysical zero active gravitational mass "cables" going out to suspension points infinitely removed), Khan–Penrose vacuum (K. A. Khan and Roger Penrose 1971) (a simple colliding plane wave model), Oszváth–Schücking vacuum (the circularly polarized sinusoidal gravitational wave, another famous counterexample). Kasner metric (An anisotropic solution, used to study gravitational chaos in three or more dimensions). 
These all belong to one or more general families of solutions: the Weyl vacua (Hermann Weyl) (the family of all static vacuum solutions), the Beck vacua (Guido Beck 1925) (the family of all cylindrically symmetric nonrotating vacuum solutions), the Ernst vacua (Frederick J. Ernst 1968) (the family of all stationary axisymmetric vacuum solutions), the Ehlers vacua (Jürgen Ehlers) (the family of all cylindrically symmetric vacuum solutions), the Szekeres vacua (George Szekeres) (the family of all colliding gravitational plane wave models), the Gowdy vacua (Robert H. Gowdy) (cosmological models constructed using gravitational waves). Several of the families mentioned here, members of which are obtained by solving an appropriate linear or nonlinear, real or complex partial differential equation, turn out to be very closely related, in perhaps surprising ways. In addition to these, we also have the vacuum pp-wave spacetimes, which include the gravitational plane waves. == See also == Introduction to the mathematics of general relativity Topological defect == References == === Sources === Stephani, Hans, ed. (2003). Exact solutions of Einstein's field equations (PDF). Cambridge monographs on mathematical physics (2nd ed.). Cambridge, UK; New York: Cambridge University Press. ISBN 978-0-521-46136-8.
Wikipedia/Vacuum_solutions
In physics and mathematics, and especially differential geometry and gauge theory, the Yang–Mills equations are a system of partial differential equations for a connection on a vector bundle or principal bundle. They arise in physics as the Euler–Lagrange equations of the Yang–Mills action functional. They have also found significant use in mathematics. Solutions of the equations are called Yang–Mills connections or instantons. The moduli space of instantons was used by Simon Donaldson to prove Donaldson's theorem. == Motivation == === Physics === In their foundational paper on the topic of gauge theories, Robert Mills and Chen-Ning Yang developed (essentially independent of the mathematical literature) the theory of principal bundles and connections in order to explain the concept of gauge symmetry and gauge invariance as it applies to physical theories. The gauge theories Yang and Mills discovered, now called Yang–Mills theories, generalised the classical work of James Maxwell on Maxwell's equations, which had been phrased in the language of a U ⁡ ( 1 ) {\displaystyle \operatorname {U} (1)} gauge theory by Wolfgang Pauli and others. The novelty of the work of Yang and Mills was to define gauge theories for an arbitrary choice of Lie group G {\displaystyle G} , called the structure group (or in physics the gauge group, see Gauge group (mathematics) for more details). This group could be non-Abelian as opposed to the case G = U ⁡ ( 1 ) {\displaystyle G=\operatorname {U} (1)} corresponding to electromagnetism, and the right framework to discuss such objects is the theory of principal bundles. The essential points of the work of Yang and Mills are as follows. 
One assumes that the fundamental description of a physical model is through the use of fields, and derives that under a local gauge transformation (change of local trivialisation of principal bundle), these physical fields must transform in precisely the way that a connection A {\displaystyle A} (in physics, a gauge field) on a principal bundle transforms. The gauge field strength is the curvature F A {\displaystyle F_{A}} of the connection, and the energy of the gauge field is given (up to a constant) by the Yang–Mills action functional YM ⁡ ( A ) = ∫ X ‖ F A ‖ 2 d v o l g . {\displaystyle \operatorname {YM} (A)=\int _{X}\|F_{A}\|^{2}\,d\mathrm {vol} _{g}.} The principle of least action dictates that the correct equations of motion for this physical theory should be given by the Euler–Lagrange equations of this functional, which are the Yang–Mills equations derived below: d A ⋆ F A = 0. {\displaystyle d_{A}\star F_{A}=0.} === Mathematics === In addition to the physical origins of the theory, the Yang–Mills equations are of important geometric interest. There is in general no natural choice of connection on a vector bundle or principal bundle. In the special case where this bundle is the tangent bundle to a Riemannian manifold, there is such a natural choice, the Levi-Civita connection, but in general there is an infinite-dimensional space of possible choices. A Yang–Mills connection gives some kind of natural choice of a connection for a general fibre bundle, as we now describe. A connection is defined by its local forms A α ∈ Ω 1 ( U α , ad ⁡ ( P ) ) {\displaystyle A_{\alpha }\in \Omega ^{1}(U_{\alpha },\operatorname {ad} (P))} for a trivialising open cover { U α } {\displaystyle \{U_{\alpha }\}} for the bundle P → X {\displaystyle P\to X} . The first attempt at choosing a canonical connection might be to demand that these forms vanish. 
However, this is not possible unless the trivialisation is flat, in the sense that the transition functions g α β : U α ∩ U β → G {\displaystyle g_{\alpha \beta }:U_{\alpha }\cap U_{\beta }\to G} are constant functions. Not every bundle is flat, so this is not possible in general. Instead one might ask that the local connection forms A α {\displaystyle A_{\alpha }} are themselves constant. On a principal bundle the correct way to phrase this condition is that the curvature F A = d A + 1 2 [ A , A ] {\displaystyle F_{A}=dA+{\frac {1}{2}}[A,A]} vanishes. However, by Chern–Weil theory if the curvature F A {\displaystyle F_{A}} vanishes (that is to say, A {\displaystyle A} is a flat connection), then the underlying principal bundle must have trivial Chern classes, which is a topological obstruction to the existence of flat connections: not every principal bundle can have a flat connection. The best one can hope for is then to ask that instead of vanishing curvature, the bundle has curvature as small as possible. The Yang–Mills action functional described above is precisely (the square of) the L 2 {\displaystyle L^{2}} -norm of the curvature, and its Euler–Lagrange equations describe the critical points of this functional, either the absolute minima or local minima. That is to say, Yang–Mills connections are precisely those that minimize their curvature. In this sense they are the natural choice of connection on a principal or vector bundle over a manifold from a mathematical point of view. == Definition == Let X {\displaystyle X} be a compact, oriented, Riemannian manifold. The Yang–Mills equations can be phrased for a connection on a vector bundle or principal G {\displaystyle G} -bundle over X {\displaystyle X} , for some compact Lie group G {\displaystyle G} . Here the latter convention is presented. Let P {\displaystyle P} denote a principal G {\displaystyle G} -bundle over X {\displaystyle X} . 
Then a connection on P {\displaystyle P} may be specified by a Lie algebra-valued differential form A {\displaystyle A} on the total space of the principal bundle. This connection has a curvature form F A {\displaystyle F_{A}} , which is a two-form on X {\displaystyle X} with values in the adjoint bundle ad ⁡ ( P ) {\displaystyle \operatorname {ad} (P)} of P {\displaystyle P} . Associated to the connection A {\displaystyle A} is an exterior covariant derivative d A {\displaystyle d_{A}} , defined on the adjoint bundle. Additionally, since G {\displaystyle G} is compact, its associated compact Lie algebra admits an invariant inner product under the adjoint representation. Since X {\displaystyle X} is Riemannian, there is an inner product on the cotangent bundle, and combined with the invariant inner product on ad ⁡ ( P ) {\displaystyle \operatorname {ad} (P)} there is an inner product on the bundle ad ⁡ ( P ) ⊗ Λ 2 T ∗ X {\displaystyle \operatorname {ad} (P)\otimes \Lambda ^{2}T^{*}X} of ad ⁡ ( P ) {\displaystyle \operatorname {ad} (P)} -valued two-forms on X {\displaystyle X} . Since X {\displaystyle X} is oriented, there is an L 2 {\displaystyle L^{2}} -inner product on the sections of this bundle. Namely, ⟨ s , t ⟩ L 2 = ∫ X ⟨ s , t ⟩ d v o l g {\displaystyle \langle s,t\rangle _{L^{2}}=\int _{X}\langle s,t\rangle \,dvol_{g}} where inside the integral the fiber-wise inner product is being used, and d v o l g {\displaystyle dvol_{g}} is the Riemannian volume form of X {\displaystyle X} . Using this L 2 {\displaystyle L^{2}} -inner product, the formal adjoint operator of d A {\displaystyle d_{A}} is defined by ⟨ d A s , t ⟩ L 2 = ⟨ s , d A ∗ t ⟩ L 2 {\displaystyle \langle d_{A}s,t\rangle _{L^{2}}=\langle s,d_{A}^{*}t\rangle _{L^{2}}} . Explicitly this is given by d A ∗ = ± ⋆ d A ⋆ {\displaystyle d_{A}^{*}=\pm \star d_{A}\star } where ⋆ {\displaystyle \star } is the Hodge star operator acting on two-forms. 
Assuming the above set up, the Yang–Mills equations are a system of (in general non-linear) partial differential equations given by d A ∗ F A = 0. {\displaystyle d_{A}^{*}F_{A}=0.} (1) Since the Hodge star is an isomorphism, by the explicit formula for d A ∗ {\displaystyle d_{A}^{*}} the Yang–Mills equations can equivalently be written d A ⋆ F A = 0. {\displaystyle d_{A}\star F_{A}=0.} (2) A connection satisfying (1) or (2) is called a Yang–Mills connection. Every connection automatically satisfies the Bianchi identity d A F A = 0 {\displaystyle d_{A}F_{A}=0} , so Yang–Mills connections can be seen as a non-linear analogue of harmonic differential forms, which satisfy d ω = d ∗ ω = 0 {\displaystyle d\omega =d^{*}\omega =0} . In this sense the search for Yang–Mills connections can be compared to Hodge theory, which seeks a harmonic representative in the de Rham cohomology class of a differential form. The analogy is that a Yang–Mills connection is like a harmonic representative in the set of all possible connections on a principal bundle. == Derivation == The Yang–Mills equations are the Euler–Lagrange equations of the Yang–Mills functional, defined by YM ⁡ ( A ) = ∫ X ‖ F A ‖ 2 d v o l g . {\displaystyle \operatorname {YM} (A)=\int _{X}\|F_{A}\|^{2}\,d\mathrm {vol} _{g}.} (3) To derive the equations from the functional, recall that the space A {\displaystyle {\mathcal {A}}} of all connections on P {\displaystyle P} is an affine space modelled on the vector space Ω 1 ( P ; g ) {\displaystyle \Omega ^{1}(P;{\mathfrak {g}})} . Given a small deformation A + t a {\displaystyle A+ta} of a connection A {\displaystyle A} in this affine space, the curvatures are related by F A + t a = F A + t d A a + t 2 a ∧ a . {\displaystyle F_{A+ta}=F_{A}+td_{A}a+t^{2}a\wedge a.} To determine the critical points of (3), compute d d t ( YM ⁡ ( A + t a ) ) t = 0 = d d t ( ∫ X ⟨ F A + t d A a + t 2 a ∧ a , F A + t d A a + t 2 a ∧ a ⟩ d v o l g ) t = 0 = d d t ( ∫ X ‖ F A ‖ 2 + 2 t ⟨ F A , d A a ⟩ + 2 t 2 ⟨ F A , a ∧ a ⟩ + t 4 ‖ a ∧ a ‖ 2 d v o l g ) t = 0 = 2 ∫ X ⟨ d A ∗ F A , a ⟩ d v o l g . 
{\displaystyle {\begin{aligned}{\frac {d}{dt}}\left(\operatorname {YM} (A+ta)\right)_{t=0}&={\frac {d}{dt}}\left(\int _{X}\langle F_{A}+t\,d_{A}a+t^{2}a\wedge a,F_{A}+t\,d_{A}a+t^{2}a\wedge a\rangle \,d\mathrm {vol} _{g}\right)_{t=0}\\&={\frac {d}{dt}}\left(\int _{X}\|F_{A}\|^{2}+2t\langle F_{A},d_{A}a\rangle +2t^{2}\langle F_{A},a\wedge a\rangle +t^{4}\|a\wedge a\|^{2}\,d\mathrm {vol} _{g}\right)_{t=0}\\&=2\int _{X}\langle d_{A}^{*}F_{A},a\rangle \,d\mathrm {vol} _{g}.\end{aligned}}} The connection A {\displaystyle A} is a critical point of the Yang–Mills functional if and only if this vanishes for every a {\displaystyle a} , and this occurs precisely when (1) is satisfied. == Moduli space of Yang–Mills connections == The Yang–Mills equations are gauge invariant. Mathematically, a gauge transformation is an automorphism g {\displaystyle g} of the principal bundle P {\displaystyle P} , and since the inner product on ad ⁡ ( P ) {\displaystyle \operatorname {ad} (P)} is invariant, the Yang–Mills functional satisfies YM ⁡ ( g ⋅ A ) = ∫ X ‖ g F A g − 1 ‖ 2 d v o l g = ∫ X ‖ F A ‖ 2 d v o l g = YM ⁡ ( A ) {\displaystyle \operatorname {YM} (g\cdot A)=\int _{X}\|gF_{A}g^{-1}\|^{2}\,d\mathrm {vol} _{g}=\int _{X}\|F_{A}\|^{2}\,d\mathrm {vol} _{g}=\operatorname {YM} (A)} and so if A {\displaystyle A} satisfies (1), so does g ⋅ A {\displaystyle g\cdot A} . There is a moduli space of Yang–Mills connections modulo gauge transformations. Denote by G {\displaystyle {\mathcal {G}}} the gauge group of automorphisms of P {\displaystyle P} . The set B = A / G {\displaystyle {\mathcal {B}}={\mathcal {A}}/{\mathcal {G}}} classifies all connections modulo gauge transformations, and the moduli space M {\displaystyle {\mathcal {M}}} of Yang–Mills connections is a subset. In general neither B {\displaystyle {\mathcal {B}}} nor M {\displaystyle {\mathcal {M}}} is Hausdorff or a smooth manifold. 
However, by restricting to irreducible connections, that is, connections A {\displaystyle A} whose holonomy group is given by all of G {\displaystyle G} , one does obtain Hausdorff spaces. The space of irreducible connections is denoted A ∗ {\displaystyle {\mathcal {A}}^{*}} , and so the moduli spaces are denoted B ∗ {\displaystyle {\mathcal {B}}^{*}} and M ∗ {\displaystyle {\mathcal {M}}^{*}} . Moduli spaces of Yang–Mills connections have been intensively studied in specific circumstances. Michael Atiyah and Raoul Bott studied the Yang–Mills equations for bundles over compact Riemann surfaces. There the moduli space obtains an alternative description as a moduli space of holomorphic vector bundles. This is the Narasimhan–Seshadri theorem, which was proved in this form relating Yang–Mills connections to holomorphic vector bundles by Donaldson. In this setting the moduli space has the structure of a compact Kähler manifold. Moduli of Yang–Mills connections have been most studied when the dimension of the base manifold X {\displaystyle X} is four. Here the Yang–Mills equations admit a simplification from a second-order PDE to a first-order PDE, the anti-self-duality equations. == Anti-self-duality equations == When the dimension of the base manifold X {\displaystyle X} is four, a coincidence occurs: the Hodge star operator maps two-forms to two-forms, ⋆ : Ω 2 ( X ) → Ω 2 ( X ) {\displaystyle \star :\Omega ^{2}(X)\to \Omega ^{2}(X)} . The Hodge star operator squares to the identity in this case, and so has eigenvalues 1 {\displaystyle 1} and − 1 {\displaystyle -1} . In particular, there is a decomposition Ω 2 ( X ) = Ω + ( X ) ⊕ Ω − ( X ) {\displaystyle \Omega ^{2}(X)=\Omega _{+}(X)\oplus \Omega _{-}(X)} into the positive and negative eigenspaces of ⋆ {\displaystyle \star } , the self-dual and anti-self-dual two-forms. 
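The four-dimensional coincidence just described can be made concrete with a small numerical sketch (an illustration using the standard Euclidean metric on R⁴, not taken from the literature): build the matrix of ⋆ on the six-dimensional space of 2-forms and check that it squares to the identity, with eigenvalues ±1 of multiplicity three each:

```python
import numpy as np

# Basis of 2-forms on Euclidean R^4: e_i ∧ e_j with i < j (six of them).
basis = [(i, j) for i in range(4) for j in range(i + 1, 4)]

def perm_sign(p):
    """Sign of a permutation of (0, 1, 2, 3), by counting inversions."""
    return (-1) ** sum(p[i] > p[j] for i in range(len(p))
                       for j in range(i + 1, len(p)))

# Hodge star on 2-forms: ⋆(e_i ∧ e_j) = sign(i, j, k, l) e_k ∧ e_l,
# where {k, l} is the complement of {i, j} in {0, 1, 2, 3}.
star = np.zeros((6, 6))
for col, (i, j) in enumerate(basis):
    k, l = sorted({0, 1, 2, 3} - {i, j})
    star[basis.index((k, l)), col] = perm_sign((i, j, k, l))

assert np.allclose(star @ star, np.eye(6))   # ⋆² = id on 2-forms in 4d
eigenvalues = np.linalg.eigvalsh(star)        # star is symmetric in this basis
# Ω² splits into 3-dimensional self-dual and anti-self-dual eigenspaces.
assert np.allclose(np.sort(eigenvalues), [-1, -1, -1, 1, 1, 1])
```

The two three-dimensional eigenspaces are precisely Ω₊ and Ω₋ from the decomposition above.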
If a connection A {\displaystyle A} on a principal G {\displaystyle G} -bundle over a four-manifold X {\displaystyle X} satisfies either F A = ⋆ F A {\displaystyle F_{A}={\star F_{A}}} or F A = − ⋆ F A {\displaystyle F_{A}=-{\star F_{A}}} , then by (2), the connection is a Yang–Mills connection. These connections are called either self-dual connections or anti-self-dual connections, and the equations the self-duality (SD) equations and the anti-self-duality (ASD) equations. The spaces of self-dual and anti-self-dual connections are denoted by A + {\displaystyle {\mathcal {A}}^{+}} and A − {\displaystyle {\mathcal {A}}^{-}} , and similarly for B ± {\displaystyle {\mathcal {B}}^{\pm }} and M ± {\displaystyle {\mathcal {M}}^{\pm }} . The moduli space of ASD connections, or instantons, was most intensively studied by Donaldson in the case where G = SU ⁡ ( 2 ) {\displaystyle G=\operatorname {SU} (2)} and X {\displaystyle X} is simply-connected. In this setting, the principal SU ⁡ ( 2 ) {\displaystyle \operatorname {SU} (2)} -bundle is classified by its second Chern class, c 2 ( P ) ∈ H 4 ( X , Z ) ≅ Z {\displaystyle c_{2}(P)\in H^{4}(X,\mathbb {Z} )\cong \mathbb {Z} } . For various choices of principal bundle, one obtains moduli spaces with interesting properties. These spaces are Hausdorff, even when allowing reducible connections, and are generically smooth. It was shown by Donaldson that the smooth part is orientable. 
By the Atiyah–Singer index theorem, one may compute the dimension of M k − {\displaystyle {\mathcal {M}}_{k}^{-}} , the moduli space of ASD connections when c 2 ( P ) = k {\displaystyle c_{2}(P)=k} , to be dim ⁡ M k − = 8 k − 3 ( 1 − b 1 ( X ) + b + ( X ) ) {\displaystyle \dim {\mathcal {M}}_{k}^{-}=8k-3(1-b_{1}(X)+b_{+}(X))} where b 1 ( X ) {\displaystyle b_{1}(X)} is the first Betti number of X {\displaystyle X} , and b + ( X ) {\displaystyle b_{+}(X)} is the dimension of the positive-definite subspace of H 2 ( X , R ) {\displaystyle H_{2}(X,\mathbb {R} )} with respect to the intersection form on X {\displaystyle X} . For example, when X = S 4 {\displaystyle X=S^{4}} and k = 1 {\displaystyle k=1} , the intersection form is trivial and the moduli space has dimension dim ⁡ M 1 − ( S 4 ) = 8 − 3 = 5 {\displaystyle \dim {\mathcal {M}}_{1}^{-}(S^{4})=8-3=5} . This agrees with the existence of the BPST instanton, which is the unique ASD instanton on S 4 {\displaystyle S^{4}} up to a 5-parameter family defining its centre in R 4 {\displaystyle \mathbb {R} ^{4}} and its scale. Such instantons on R 4 {\displaystyle \mathbb {R} ^{4}} may be extended across the point at infinity using Uhlenbeck's removable singularity theorem. More generally, for positive k , {\displaystyle k,} the moduli space has dimension 8 k − 3. {\displaystyle 8k-3.} == Applications == === Donaldson's theorem === The moduli space of Yang–Mills equations was used by Donaldson to prove Donaldson's theorem about the intersection form of simply-connected four-manifolds. Using analytical results of Clifford Taubes and Karen Uhlenbeck, Donaldson was able to show that in specific circumstances (when the intersection form is definite) the moduli space of ASD instantons on a smooth, compact, oriented, simply-connected four-manifold X {\displaystyle X} gives a cobordism between a copy of the manifold itself, and a disjoint union of copies of the complex projective plane C P 2 {\displaystyle \mathbb {CP} ^{2}} . 
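The index-theoretic dimension formula above is simple enough to wrap in a helper (a hypothetical convenience function, shown only to make the S⁴ examples explicit):

```python
def dim_asd_moduli(k, b1, b_plus):
    """dim M_k^- = 8k - 3(1 - b_1(X) + b_+(X)) for SU(2) ASD connections."""
    return 8 * k - 3 * (1 - b1 + b_plus)

# X = S^4: b_1 = 0 and b_+ = 0, so the formula reduces to 8k - 3.
assert dim_asd_moduli(1, 0, 0) == 5    # the 5-parameter BPST family
assert dim_asd_moduli(2, 0, 0) == 13   # 8k - 3 for k = 2
```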
We can count the number of copies of C P 2 {\displaystyle \mathbb {CP} ^{2}} in two ways: once using that signature is a cobordism invariant, and another using a Hodge-theoretic interpretation of reducible connections. Interpreting these counts carefully, one can conclude that such a smooth manifold has diagonalisable intersection form. The moduli space of ASD instantons may be used to define further invariants of four-manifolds. Donaldson defined polynomials on the second homology group of a suitably restricted class of four-manifolds, arising from pairings of cohomology classes on the moduli space. This work has subsequently been surpassed by Seiberg–Witten invariants. === Dimensional reduction and other moduli spaces === Through the process of dimensional reduction, the Yang–Mills equations may be used to derive other important equations in differential geometry and gauge theory. Dimensional reduction is the process of taking the Yang–Mills equations over a four-manifold, typically R 4 {\displaystyle \mathbb {R} ^{4}} , and imposing that the solutions be invariant under a symmetry group. For example: By requiring the anti-self-duality equations to be invariant under translations in a single direction of R 4 {\displaystyle \mathbb {R} ^{4}} , one obtains the Bogomolny equations which describe magnetic monopoles on R 3 {\displaystyle \mathbb {R} ^{3}} . By requiring the self-duality equations to be invariant under translation in two directions, one obtains Hitchin's equations first investigated by Hitchin. These equations naturally lead to the study of Higgs bundles and the Hitchin system. By requiring the anti-self-duality equations to be invariant in three directions, one obtains the Nahm equations on an interval. 
There is a duality between solutions of the dimensionally reduced ASD equations on R 3 {\displaystyle \mathbb {R} ^{3}} and R {\displaystyle \mathbb {R} } called the Nahm transform, after Werner Nahm, who first described how to construct monopoles from Nahm equation data. Hitchin showed the converse, and Donaldson proved that solutions to the Nahm equations could further be linked to moduli spaces of rational maps from the complex projective line to itself. The duality observed for these solutions is theorized to hold for arbitrary dual groups of symmetries of a four-manifold. Indeed, there is a similar duality between instantons invariant under dual lattices inside R 4 {\displaystyle \mathbb {R} ^{4}} , instantons on dual four-dimensional tori, and the ADHM construction can be thought of as a duality between instantons on R 4 {\displaystyle \mathbb {R} ^{4}} and dual algebraic data over a single point. Symmetry reductions of the ASD equations also lead to a number of integrable systems, and Ward's conjecture is that in fact all known integrable ODEs and PDEs come from symmetry reduction of ASDYM. For example, reductions of SU(2) ASDYM give the sine-Gordon and Korteweg–de Vries equations, a reduction of S L ( 3 , R ) {\displaystyle \mathrm {SL} (3,\mathbb {R} )} ASDYM gives the Tzitzeica equation, and a particular reduction to 2 + 1 {\displaystyle 2+1} dimensions gives the integrable chiral model of Ward. In this sense it is a 'master theory' for integrable systems, allowing many known systems to be recovered by picking appropriate parameters, such as choice of gauge group and symmetry reduction scheme. Other such master theories are four-dimensional Chern–Simons theory and the affine Gaudin model. === Chern–Simons theory === The moduli space of Yang–Mills equations over a compact Riemann surface Σ {\displaystyle \Sigma } can be viewed as the configuration space of Chern–Simons theory on a cylinder Σ × [ 0 , 1 ] {\displaystyle \Sigma \times [0,1]} . 
In this case the moduli space admits a geometric quantization, discovered independently by Nigel Hitchin and Axelrod–Della Pietra–Witten. == See also == Connection (vector bundle) Connection (principal bundle) Donaldson theory Stable Yang–Mills connection F-Yang–Mills equations Bi-Yang–Mills equations Hermitian Yang–Mills equations Deformed Hermitian Yang–Mills equations Yang–Mills–Higgs equations == Notes == == References ==
Wikipedia/Yang–Mills_functional
In theoretical physics, the Weyl transformation, named after German mathematician Hermann Weyl, is a local rescaling of the metric tensor: g a b → e − 2 ω ( x ) g a b {\displaystyle g_{ab}\rightarrow e^{-2\omega (x)}g_{ab}} which produces another metric in the same conformal class. A theory or an expression invariant under this transformation is called conformally invariant, or is said to possess Weyl invariance or Weyl symmetry. The Weyl symmetry is an important symmetry in conformal field theory. It is, for example, a symmetry of the Polyakov action. When quantum mechanical effects break the conformal invariance of a theory, it is said to exhibit a conformal anomaly or Weyl anomaly. The ordinary Levi-Civita connection and associated spin connections are not invariant under Weyl transformations. Weyl connections are a class of affine connections that is invariant as a class, although no individual Weyl connection is invariant under Weyl transformations. == Conformal weight == A quantity φ {\displaystyle \varphi } has conformal weight k {\displaystyle k} if, under the Weyl transformation, it transforms via φ → φ e k ω . {\displaystyle \varphi \to \varphi e^{k\omega }.} Thus conformally weighted quantities belong to certain density bundles; see also conformal dimension. Let A μ {\displaystyle A_{\mu }} be the connection one-form associated to the Levi-Civita connection of g {\displaystyle g} . Introduce a connection that depends also on an initial one-form ∂ μ ω {\displaystyle \partial _{\mu }\omega } via B μ = A μ + ∂ μ ω . {\displaystyle B_{\mu }=A_{\mu }+\partial _{\mu }\omega .} Then D μ φ ≡ ∂ μ φ + k B μ φ {\displaystyle D_{\mu }\varphi \equiv \partial _{\mu }\varphi +kB_{\mu }\varphi } is covariant and has conformal weight k − 1 {\displaystyle k-1} . 
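A minimal symbolic check of Weyl rescaling can be done in a special case (my own sketch, assuming D = 2, a flat background metric δ, and conformal factor e^{2ω}): the scalar curvature of g = e^{2ω}δ comes out as R = −2e^{−2ω}Δω, the standard two-dimensional result, and a special case of the general transformation formulas in the next section with R̄ = 0:

```python
import sympy as sp

x, y = sp.symbols('x y')
w = sp.Function('omega')(x, y)     # the Weyl factor omega(x)
X = [x, y]
n = 2

g = sp.exp(2*w) * sp.eye(2)        # g_ab = e^{2 omega} delta_ab
ginv = g.inv()

def Gamma(c, a, b):
    """Christoffel symbols of the rescaled metric."""
    return sp.Rational(1, 2)*sum(ginv[c, d]*(sp.diff(g[d, b], X[a])
                                             + sp.diff(g[d, a], X[b])
                                             - sp.diff(g[a, b], X[d]))
                                 for d in range(n))

def ricci(a, b):
    e = sum(sp.diff(Gamma(c, a, b), X[c]) - sp.diff(Gamma(c, c, b), X[a])
            for c in range(n))
    e += sum(Gamma(c, c, d)*Gamma(d, a, b) - Gamma(c, a, d)*Gamma(d, c, b)
             for c in range(n) for d in range(n))
    return e

R = sp.simplify(sum(ginv[a, b]*ricci(a, b) for a in range(n) for b in range(n)))
laplacian = sp.diff(w, x, 2) + sp.diff(w, y, 2)   # flat-background Laplacian of omega
assert sp.simplify(R + 2*sp.exp(-2*w)*laplacian) == 0   # R = -2 e^{-2w} Δw
```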
== Formulas == For the transformation g a b = f ( ϕ ( x ) ) g ¯ a b {\displaystyle g_{ab}=f(\phi (x)){\bar {g}}_{ab}} We can derive the following formulas g a b = 1 f ( ϕ ( x ) ) g ¯ a b − g = − g ¯ f D / 2 Γ a b c = Γ ¯ a b c + f ′ 2 f ( δ b c ∂ a ϕ + δ a c ∂ b ϕ − g ¯ a b ∂ c ϕ ) ≡ Γ ¯ a b c + γ a b c R a b = R ¯ a b + f ″ f − f ′ 2 2 f 2 ( ( 2 − D ) ∂ a ϕ ∂ b ϕ − g ¯ a b ∂ c ϕ ∂ c ϕ ) + f ′ 2 f ( ( 2 − D ) ∇ ¯ a ∂ b ϕ − g ¯ a b ◻ ¯ ϕ ) + 1 4 f ′ 2 f 2 ( D − 2 ) ( ∂ a ϕ ∂ b ϕ − g ¯ a b ∂ c ϕ ∂ c ϕ ) R = 1 f R ¯ + 1 − D f ( f ″ f − f ′ 2 f 2 ∂ c ϕ ∂ c ϕ + f ′ f ◻ ¯ ϕ ) + 1 4 f f ′ 2 f 2 ( D − 2 ) ( 1 − D ) ∂ c ϕ ∂ c ϕ {\displaystyle {\begin{aligned}g^{ab}&={\frac {1}{f(\phi (x))}}{\bar {g}}^{ab}\\{\sqrt {-g}}&={\sqrt {-{\bar {g}}}}f^{D/2}\\\Gamma _{ab}^{c}&={\bar {\Gamma }}_{ab}^{c}+{\frac {f'}{2f}}\left(\delta _{b}^{c}\partial _{a}\phi +\delta _{a}^{c}\partial _{b}\phi -{\bar {g}}_{ab}\partial ^{c}\phi \right)\equiv {\bar {\Gamma }}_{ab}^{c}+\gamma _{ab}^{c}\\R_{ab}&={\bar {R}}_{ab}+{\frac {f''f-f^{\prime 2}}{2f^{2}}}\left((2-D)\partial _{a}\phi \partial _{b}\phi -{\bar {g}}_{ab}\partial ^{c}\phi \partial _{c}\phi \right)+{\frac {f'}{2f}}\left((2-D){\bar {\nabla }}_{a}\partial _{b}\phi -{\bar {g}}_{ab}{\bar {\Box }}\phi \right)+{\frac {1}{4}}{\frac {f^{\prime 2}}{f^{2}}}(D-2)\left(\partial _{a}\phi \partial _{b}\phi -{\bar {g}}_{ab}\partial _{c}\phi \partial ^{c}\phi \right)\\R&={\frac {1}{f}}{\bar {R}}+{\frac {1-D}{f}}\left({\frac {f''f-f^{\prime 2}}{f^{2}}}\partial ^{c}\phi \partial _{c}\phi +{\frac {f'}{f}}{\bar {\Box }}\phi \right)+{\frac {1}{4f}}{\frac {f^{\prime 2}}{f^{2}}}(D-2)(1-D)\partial _{c}\phi \partial ^{c}\phi \end{aligned}}} Note that the Weyl tensor is invariant under a Weyl rescaling. == References == Weyl, Hermann (1993) [1921]. Raum, Zeit, Materie [Space, Time, Matter]. Lectures on General Relativity (in German). Berlin: Springer. ISBN 3-540-56978-2.
Wikipedia/Weyl_transformation
The Kelvin transform is a device used in classical potential theory to extend the concept of a harmonic function, by allowing the definition of a function which is 'harmonic at infinity'. This technique is also used in the study of subharmonic and superharmonic functions. In order to define the Kelvin transform f* of a function f, it is necessary to first consider the concept of inversion in a sphere in Rn as follows. It is possible to use inversion in any sphere, but the ideas are clearest when considering a sphere with centre at the origin. Given a fixed sphere S(0, R) with centre 0 and radius R, the inversion of a point x in Rn is defined to be x ∗ = R 2 | x | 2 x . {\displaystyle x^{*}={\frac {R^{2}}{|x|^{2}}}x.} A useful effect of this inversion is that the origin 0 is the image of ∞ {\displaystyle \infty } , and ∞ {\displaystyle \infty } is the image of 0. Under this inversion, spheres are transformed into spheres, and the exterior of a sphere is transformed to the interior, and vice versa. The Kelvin transform of a function is then defined by: If D is an open subset of Rn which does not contain 0, then for any function f defined on D, the Kelvin transform f* of f with respect to the sphere S(0, R) is f ∗ ( x ∗ ) = | x | n − 2 R 2 n − 4 f ( x ) = 1 | x ∗ | n − 2 f ( x ) = 1 | x ∗ | n − 2 f ( R 2 | x ∗ | 2 x ∗ ) . {\displaystyle f^{*}(x^{*})={\frac {|x|^{n-2}}{R^{2n-4}}}f(x)={\frac {1}{|x^{*}|^{n-2}}}f(x)={\frac {1}{|x^{*}|^{n-2}}}f\left({\frac {R^{2}}{|x^{*}|^{2}}}x^{*}\right).} One of the important properties of the Kelvin transform, and the main reason behind its creation, is the following result: Let D be an open subset in Rn which does not contain the origin 0. Then a function u is harmonic, subharmonic or superharmonic in D if and only if the Kelvin transform u* with respect to the sphere S(0, R) is harmonic, subharmonic or superharmonic in D*. This follows from the formula Δ u ∗ ( x ∗ ) = R 4 | x ∗ | n + 2 ( Δ u ) ( R 2 | x ∗ | 2 x ∗ ) . 
{\displaystyle \Delta u^{*}(x^{*})={\frac {R^{4}}{|x^{*}|^{n+2}}}(\Delta u)\left({\frac {R^{2}}{|x^{*}|^{2}}}x^{*}\right).} == See also == William Thomson, 1st Baron Kelvin Inversive geometry Spherical wave transformation == References == William Thomson, Lord Kelvin (1845) "Extrait d'une lettre de M. William Thomson à M. Liouville", Journal de Mathématiques Pures et Appliquées 10: 364–7 William Thompson (1847) "Extraits deux lettres adressees à M. Liouville, par M. William Thomson", Journal de Mathématiques Pures et Appliquées 12: 556–64 J. L. Doob (2001). Classical Potential Theory and Its Probabilistic Counterpart. Springer-Verlag. p. 26. ISBN 3-540-41206-9. L. L. Helms (1975). Introduction to potential theory. R. E. Krieger. ISBN 0-88275-224-3. O. D. Kellogg (1953). Foundations of potential theory. Dover. ISBN 0-486-60144-7. {{cite book}}: ISBN / Date incompatibility (help) John Wermer (1981) Potential Theory 2nd edition, page 84, Lecture Notes in Mathematics #408 ISBN 3-540-10276-0
Wikipedia/Kelvin_transform
In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory. Intuitively, subharmonic functions are related to convex functions of one variable as follows. If the graph of a convex function and a line intersect at two points, then the graph of the convex function is below the line between those points. In the same way, if the values of a subharmonic function are no larger than the values of a harmonic function on the boundary of a ball, then the values of the subharmonic function are no larger than the values of the harmonic function also inside the ball. Superharmonic functions can be defined by the same description, only replacing "no larger" with "no smaller". Alternatively, a superharmonic function is just the negative of a subharmonic function, and for this reason any property of subharmonic functions can be easily transferred to superharmonic functions. == Formal definition == Formally, the definition can be stated as follows. Let G {\displaystyle G} be a subset of the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} and let φ : G → R ∪ { − ∞ } {\displaystyle \varphi \colon G\to \mathbb {R} \cup \{-\infty \}} be an upper semi-continuous function. Then, φ {\displaystyle \varphi } is called subharmonic if for any closed ball B ( x , r ) ¯ {\displaystyle {\overline {B(x,r)}}} of center x {\displaystyle x} and radius r {\displaystyle r} contained in G {\displaystyle G} and every real-valued continuous function h {\displaystyle h} on B ( x , r ) ¯ {\displaystyle {\overline {B(x,r)}}} that is harmonic in B ( x , r ) {\displaystyle B(x,r)} and satisfies φ ( y ) ≤ h ( y ) {\displaystyle \varphi (y)\leq h(y)} for all y {\displaystyle y} on the boundary ∂ B ( x , r ) {\displaystyle \partial B(x,r)} of B ( x , r ) {\displaystyle B(x,r)} , we have φ ( y ) ≤ h ( y ) {\displaystyle \varphi (y)\leq h(y)} for all y ∈ B ( x , r ) . 
{\displaystyle y\in B(x,r).} Note that by the above, the function which is identically −∞ is subharmonic, but some authors exclude this function by definition. A function u {\displaystyle u} is called superharmonic if − u {\displaystyle -u} is subharmonic. == Properties == A function is harmonic if and only if it is both subharmonic and superharmonic. If ϕ {\displaystyle \phi } is C2 (twice continuously differentiable) on an open set G {\displaystyle G} in R n {\displaystyle \mathbb {R} ^{n}} , then ϕ {\displaystyle \phi } is subharmonic if and only if one has Δ ϕ ≥ 0 {\displaystyle \Delta \phi \geq 0} on G {\displaystyle G} , where Δ {\displaystyle \Delta } is the Laplacian. The maximum of a subharmonic function cannot be achieved in the interior of its domain unless the function is constant; this is called the maximum principle. However, the minimum of a subharmonic function can be achieved in the interior of its domain. Subharmonic functions make a convex cone, that is, a linear combination of subharmonic functions with positive coefficients is also subharmonic. The pointwise maximum of two subharmonic functions is subharmonic. If the pointwise maximum of a countable number of subharmonic functions is upper semi-continuous, then it is also subharmonic. The limit of a decreasing sequence of subharmonic functions is subharmonic (or identically equal to − ∞ {\displaystyle -\infty } ). Subharmonic functions are not necessarily continuous in the usual topology; however, one can introduce the fine topology which makes them continuous. == Examples == If f {\displaystyle f} is analytic then log ⁡ | f | {\displaystyle \log |f|} is subharmonic. More examples can be constructed by using the properties listed above, by taking maxima, convex combinations and limits. In dimension 1, all subharmonic functions can be obtained in this way. 
== Riesz Representation Theorem == If u {\displaystyle u} is subharmonic in a region D {\displaystyle D} , in Euclidean space of dimension n {\displaystyle n} , v {\displaystyle v} is harmonic in D {\displaystyle D} , and u ≤ v {\displaystyle u\leq v} , then v {\displaystyle v} is called a harmonic majorant of u {\displaystyle u} . If a harmonic majorant exists, then there exists a least harmonic majorant, and u ( x ) = v ( x ) − ∫ D d μ ( y ) | x − y | n − 2 , n ≥ 3 {\displaystyle u(x)=v(x)-\int _{D}{\frac {d\mu (y)}{|x-y|^{n-2}}},\quad n\geq 3} while in dimension 2, u ( x ) = v ( x ) + ∫ D log ⁡ | x − y | d μ ( y ) , {\displaystyle u(x)=v(x)+\int _{D}\log |x-y|d\mu (y),} where v {\displaystyle v} is the least harmonic majorant, and μ {\displaystyle \mu } is a Borel measure in D {\displaystyle D} . This is called the Riesz representation theorem. == Subharmonic functions in the complex plane == Subharmonic functions are of particular importance in complex analysis, where they are intimately connected to holomorphic functions. One can show that a real-valued, continuous function φ {\displaystyle \varphi } of a complex variable (that is, of two real variables) defined on a set G ⊂ C {\displaystyle G\subset \mathbb {C} } is subharmonic if and only if for any closed disc D ( z , r ) ⊂ G {\displaystyle D(z,r)\subset G} of center z {\displaystyle z} and radius r {\displaystyle r} one has φ ( z ) ≤ 1 2 π ∫ 0 2 π φ ( z + r e i θ ) d θ . {\displaystyle \varphi (z)\leq {\frac {1}{2\pi }}\int _{0}^{2\pi }\varphi (z+re^{i\theta })\,d\theta .} Intuitively, this means that a subharmonic function is at any point no greater than the average of the values in a circle around that point, a fact which can be used to derive the maximum principle.
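The sub-mean-value characterization above can be illustrated numerically by approximating the circle average with a uniform sample of angles. In this sketch the test function φ(z) = |z|², the center and the radius are illustrative choices; for this φ the circle average works out to |z₀|² + r², strictly larger than the center value.

```python
import cmath, math

def circle_average(phi, z, r, n=2000):
    """Approximate (1/2pi) * integral of phi(z + r e^{i theta}) d theta."""
    return sum(phi(z + r * cmath.exp(2j * math.pi * k / n))
               for k in range(n)) / n

phi = lambda z: abs(z) ** 2        # subharmonic: its Laplacian is 4
z0, r = 0.5 + 0.25j, 0.3

center = phi(z0)
avg = circle_average(phi, z0, r)
print(center, avg)                 # the center value does not exceed the average
```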
If f {\displaystyle f} is a holomorphic function, then φ ( z ) = log ⁡ | f ( z ) | {\displaystyle \varphi (z)=\log \left|f(z)\right|} is a subharmonic function if we define the value of φ ( z ) {\displaystyle \varphi (z)} at the zeros of f {\displaystyle f} to be − ∞ {\displaystyle -\infty } . It follows that ψ α ( z ) = | f ( z ) | α {\displaystyle \psi _{\alpha }(z)=\left|f(z)\right|^{\alpha }} is subharmonic for every α > 0. This observation plays a role in the theory of Hardy spaces, especially for the study of Hp when 0 < p < 1. In the context of the complex plane, the connection to the convex functions can be realized as well by the fact that a subharmonic function f {\displaystyle f} on a domain G ⊂ C {\displaystyle G\subset \mathbb {C} } that is constant in the imaginary direction is convex in the real direction and vice versa. === Harmonic majorants of subharmonic functions === If u {\displaystyle u} is subharmonic in a region Ω {\displaystyle \Omega } of the complex plane, and h {\displaystyle h} is harmonic on Ω {\displaystyle \Omega } , then h {\displaystyle h} is a harmonic majorant of u {\displaystyle u} in Ω {\displaystyle \Omega } if u ≤ h {\displaystyle u\leq h} in Ω {\displaystyle \Omega } . Such an inequality can be viewed as a growth condition on u {\displaystyle u} . === Subharmonic functions in the unit disc. Radial maximal function === Let φ be subharmonic, continuous and non-negative in an open subset Ω of the complex plane containing the closed unit disc D(0, 1). The radial maximal function for the function φ (restricted to the unit disc) is defined on the unit circle by ( M φ ) ( e i θ ) = sup 0 ≤ r < 1 φ ( r e i θ ) . {\displaystyle (M\varphi )(e^{i\theta })=\sup _{0\leq r<1}\varphi (re^{i\theta }).} If Pr denotes the Poisson kernel, it follows from the subharmonicity that 0 ≤ φ ( r e i θ ) ≤ 1 2 π ∫ 0 2 π P r ( θ − t ) φ ( e i t ) d t , r < 1. 
{\displaystyle 0\leq \varphi (re^{i\theta })\leq {\frac {1}{2\pi }}\int _{0}^{2\pi }P_{r}\left(\theta -t\right)\varphi \left(e^{it}\right)\,dt,\ \ \ r<1.} It can be shown that the last integral is less than the value at eiθ of the Hardy–Littlewood maximal function φ∗ of the restriction of φ to the unit circle T, φ ∗ ( e i θ ) = sup 0 < α ≤ π 1 2 α ∫ θ − α θ + α φ ( e i t ) d t , {\displaystyle \varphi ^{*}(e^{i\theta })=\sup _{0<\alpha \leq \pi }{\frac {1}{2\alpha }}\int _{\theta -\alpha }^{\theta +\alpha }\varphi \left(e^{it}\right)\,dt,} so that 0 ≤ M φ ≤ φ∗. It is known that the Hardy–Littlewood operator is bounded on Lp(T) when 1 < p < ∞. It follows that for some universal constant C, ‖ M φ ‖ L 2 ( T ) 2 ≤ C 2 ∫ 0 2 π φ ( e i θ ) 2 d θ . {\displaystyle \|M\varphi \|_{L^{2}(\mathbf {T} )}^{2}\leq C^{2}\,\int _{0}^{2\pi }\varphi (e^{i\theta })^{2}\,d\theta .} If f is a function holomorphic in Ω and 0 < p < ∞, then the preceding inequality applies to φ = |f |p/2. It can be deduced from these facts that any function F in the classical Hardy space Hp satisfies ∫ 0 2 π ( sup 0 ≤ r < 1 | F ( r e i θ ) | ) p d θ ≤ C 2 sup 0 ≤ r < 1 ∫ 0 2 π | F ( r e i θ ) | p d θ . {\displaystyle \int _{0}^{2\pi }\left(\sup _{0\leq r<1}\left|F(re^{i\theta })\right|\right)^{p}\,d\theta \leq C^{2}\,\sup _{0\leq r<1}\int _{0}^{2\pi }\left|F(re^{i\theta })\right|^{p}\,d\theta .} With more work, it can be shown that F has radial limits F(eiθ) almost everywhere on the unit circle, and (by the dominated convergence theorem) that Fr, defined by Fr(eiθ) = F(r eiθ) tends to F in Lp(T). == Subharmonic functions on Riemannian manifolds == Subharmonic functions can be defined on an arbitrary Riemannian manifold. Definition: Let M be a Riemannian manifold, and f : M → R {\displaystyle f:\;M\to \mathbb {R} } an upper semicontinuous function. 
Assume that for any open subset U ⊂ M {\displaystyle U\subset M} , and any harmonic function f1 on U, such that f 1 ≥ f {\displaystyle f_{1}\geq f} on the boundary of U, the inequality f 1 ≥ f {\displaystyle f_{1}\geq f} holds on all of U. Then f is called subharmonic. This definition is equivalent to the one given above. Also, for twice differentiable functions, subharmonicity is equivalent to the inequality Δ f ≥ 0 {\displaystyle \Delta f\geq 0} , where Δ {\displaystyle \Delta } is the usual Laplacian. == See also == Plurisubharmonic function — generalization to several complex variables Classical fine topology == Notes == == References == Conway, John B. (1978). Functions of one complex variable. New York: Springer-Verlag. ISBN 0-387-90328-3. Krantz, Steven G. (1992). Function Theory of Several Complex Variables. Providence, Rhode Island: AMS Chelsea Publishing. ISBN 0-8218-2724-3. Doob, Joseph Leo (1984). Classical Potential Theory and Its Probabilistic Counterpart. Berlin Heidelberg New York: Springer-Verlag. ISBN 3-540-41206-9. Rosenblum, Marvin; Rovnyak, James (1994). Topics in Hardy classes and univalent functions. Birkhauser Advanced Texts: Basel Textbooks. Basel: Birkhauser Verlag. This article incorporates material from Subharmonic and superharmonic functions on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Subharmonic_function
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function f : U → R , {\displaystyle f\colon U\to \mathbb {R} ,} where U is an open subset of ⁠ R n , {\displaystyle \mathbb {R} ^{n},} ⁠ that satisfies Laplace's equation, that is, ∂ 2 f ∂ x 1 2 + ∂ 2 f ∂ x 2 2 + ⋯ + ∂ 2 f ∂ x n 2 = 0 {\displaystyle {\frac {\partial ^{2}f}{\partial x_{1}^{2}}}+{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}+\cdots +{\frac {\partial ^{2}f}{\partial x_{n}^{2}}}=0} everywhere on U. This is usually written as ∇ 2 f = 0 {\displaystyle \nabla ^{2}f=0} or Δ f = 0 {\displaystyle \Delta f=0} == Etymology of the term "harmonic" == The descriptor "harmonic" in the name "harmonic function" originates from a point on a taut string which is undergoing harmonic motion. The solution to the differential equation for this type of motion can be written in terms of sines and cosines, functions which are thus referred to as "harmonics." Fourier analysis involves expanding functions on the unit circle in terms of a series of these harmonics. Considering higher dimensional analogues of the harmonics on the unit n-sphere, one arrives at the spherical harmonics. These functions satisfy Laplace's equation and, over time, "harmonic" was used to refer to all functions satisfying Laplace's equation. == Examples == Examples of harmonic functions of two variables are: The real or imaginary part of any holomorphic function. The function f ( x , y ) = e x sin ⁡ y ; {\displaystyle \,\!f(x,y)=e^{x}\sin y;} this is a special case of the example above, as f ( x , y ) = Im ⁡ ( e x + i y ) , {\displaystyle f(x,y)=\operatorname {Im} \left(e^{x+iy}\right),} and e x + i y {\displaystyle e^{x+iy}} is a holomorphic function. The second derivative with respect to x is e x sin ⁡ y , {\displaystyle \,\!e^{x}\sin y,} while the second derivative with respect to y is − e x sin ⁡ y . 
{\displaystyle \,\!-e^{x}\sin y.} The function f ( x , y ) = ln ⁡ ( x 2 + y 2 ) {\displaystyle \,\!f(x,y)=\ln \left(x^{2}+y^{2}\right)} defined on R 2 ∖ { 0 } . {\displaystyle \mathbb {R} ^{2}\smallsetminus \lbrace 0\rbrace .} This can describe the electric potential due to a line charge or the gravity potential due to a long cylindrical mass. Examples of harmonic functions of three variables are given in the table below with r 2 = x 2 + y 2 + z 2 : {\displaystyle r^{2}=x^{2}+y^{2}+z^{2}:} Harmonic functions that arise in physics are determined by their singularities and boundary conditions (such as Dirichlet boundary conditions or Neumann boundary conditions). On regions without boundaries, adding the real or imaginary part of any entire function will produce a harmonic function with the same singularity, so in this case the harmonic function is not determined by its singularities; however, we can make the solution unique in physical situations by requiring that the solution approaches 0 as r approaches infinity. In this case, uniqueness follows by Liouville's theorem. The singular points of the harmonic functions above are expressed as "charges" and "charge densities" using the terminology of electrostatics, and so the corresponding harmonic function will be proportional to the electrostatic potential due to these charge distributions. Each function above will yield another harmonic function when it is multiplied by a constant, rotated, or has a constant added. The inversion of each function will yield another harmonic function whose singularities are the images of the original singularities in a spherical "mirror". Also, the sum of any two harmonic functions will yield another harmonic function.
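Both two-variable examples above can be verified numerically with a finite-difference Laplacian; this is a sketch in which the step size and sample point are arbitrary choices.

```python
import math

def laplacian(f, x, y, h=1e-4):
    """Five-point finite-difference approximation of the Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

f1 = lambda x, y: math.exp(x) * math.sin(y)   # Im(e^{x+iy})
f2 = lambda x, y: math.log(x * x + y * y)     # defined away from the origin

print(laplacian(f1, 0.4, 1.1))   # ~ 0
print(laplacian(f2, 0.4, 1.1))   # ~ 0
```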
Finally, examples of harmonic functions of n variables are: The constant, linear and affine functions on all of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠ (for example, the electric potential between the plates of a capacitor, and the gravity potential of a slab) The function f ( x 1 , … , x n ) = ( x 1 2 + ⋯ + x n 2 ) 1 − n / 2 {\displaystyle f(x_{1},\dots ,x_{n})=\left({x_{1}}^{2}+\cdots +{x_{n}}^{2}\right)^{1-n/2}} on R n ∖ { 0 } {\displaystyle \mathbb {R} ^{n}\smallsetminus \lbrace 0\rbrace } for n > 2. == Properties == The set of harmonic functions on a given open set U can be seen as the kernel of the Laplace operator Δ and is therefore a vector space over ⁠ R : {\displaystyle \mathbb {R} \!:} ⁠ linear combinations of harmonic functions are again harmonic. If f is a harmonic function on U, then all partial derivatives of f are also harmonic functions on U. The Laplace operator Δ and the partial derivative operator will commute on this class of functions. In several ways, the harmonic functions are real analogues to holomorphic functions. All harmonic functions are analytic, that is, they can be locally expressed as power series. This is a general fact about elliptic operators, of which the Laplacian is a major example. The uniform limit of a convergent sequence of harmonic functions is still harmonic. This is true because every continuous function satisfying the mean value property is harmonic. Consider the sequence on ⁠ ( − ∞ , 0 ) × R {\displaystyle (-\infty ,0)\times \mathbb {R} } ⁠ defined by f n ( x , y ) = 1 n exp ⁡ ( n x ) cos ⁡ ( n y ) ; {\textstyle f_{n}(x,y)={\frac {1}{n}}\exp(nx)\cos(ny);} this sequence is harmonic and converges uniformly to the zero function; however note that the partial derivatives are not uniformly convergent to the zero function (the derivative of the zero function). This example shows the importance of relying on the mean value property and continuity to argue that the limit is harmonic. 
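The counterexample sequence above can be made concrete. Evaluating fₙ and its x-derivative along the points (−1/n, 0) of the half-plane shows the values shrinking like e⁻¹/n while the derivative stays fixed at e⁻¹; the sample points are chosen for illustration.

```python
import math

def f(n, x, y):
    """f_n(x, y) = (1/n) e^{n x} cos(n y), harmonic for every n."""
    return math.exp(n * x) * math.cos(n * y) / n

def dfdx(n, x, y):
    """Partial derivative of f_n with respect to x."""
    return math.exp(n * x) * math.cos(n * y)

for n in (10, 100, 1000):
    print(n, f(n, -1.0 / n, 0.0), dfdx(n, -1.0 / n, 0.0))
# f_n -> 0 (indeed |f_n| <= 1/n on x < 0), but the derivative does not
```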
== Connections with complex function theory == The real and imaginary part of any holomorphic function yield harmonic functions on ⁠ R 2 {\displaystyle \mathbb {R} ^{2}} ⁠ (these are said to be a pair of harmonic conjugate functions). Conversely, any harmonic function u on an open subset Ω of ⁠ R 2 {\displaystyle \mathbb {R} ^{2}} ⁠ is locally the real part of a holomorphic function. This is immediately seen by observing that, writing z = x + i y , {\displaystyle z=x+iy,} the complex function g ( z ) := u x − i u y {\displaystyle g(z):=u_{x}-iu_{y}} is holomorphic in Ω because it satisfies the Cauchy–Riemann equations. Therefore, g locally has a primitive f, and u is the real part of f up to a constant, as ux is the real part of f ′ = g . {\displaystyle f'=g.} Although the above correspondence with holomorphic functions only holds for functions of two real variables, harmonic functions in n variables still enjoy a number of properties typical of holomorphic functions. They are (real) analytic; they have a maximum principle and a mean-value principle; a theorem of removal of singularities as well as a Liouville theorem holds for them in analogy to the corresponding theorems in complex function theory. == Properties of harmonic functions == Some important properties of harmonic functions can be deduced from Laplace's equation. === Regularity theorem for harmonic functions === Harmonic functions are infinitely differentiable in open sets. In fact, harmonic functions are real analytic. === Maximum principle === Harmonic functions satisfy the following maximum principle: if K is a nonempty compact subset of U, then f restricted to K attains its maximum and minimum on the boundary of K. If U is connected, this means that f cannot have local maxima or minima, other than the exceptional case where f is constant. Similar properties can be shown for subharmonic functions.
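The construction g = uₓ − i u_y described earlier in this section can be tested numerically. In this sketch the choice u = Re(exp z) is illustrative, so g should reproduce f′(z) = eᶻ; the derivatives are taken by central differences.

```python
import cmath, math

def u(x, y):
    """u = Re(exp(z)), harmonic on all of R^2."""
    return math.exp(x) * math.cos(y)

def g(x, y, h=1e-6):
    """u_x - i u_y by central differences; should equal f'(z) = e^z."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux - 1j * uy

z = 0.3 + 0.8j
print(g(z.real, z.imag))   # ~ exp(z)
print(cmath.exp(z))
```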
=== The mean value property === If B(x, r) is a ball with center x and radius r which is completely contained in the open set Ω ⊂ R n , {\displaystyle \Omega \subset \mathbb {R} ^{n},} then the value u(x) of a harmonic function u : Ω → R {\displaystyle u:\Omega \to \mathbb {R} } at the center of the ball is given by the average value of u on the surface of the ball; this average value is also equal to the average value of u in the interior of the ball. In other words, u ( x ) = 1 n ω n r n − 1 ∫ ∂ B ( x , r ) u d σ = 1 ω n r n ∫ B ( x , r ) u d V {\displaystyle u(x)={\frac {1}{n\omega _{n}r^{n-1}}}\int _{\partial B(x,r)}u\,d\sigma ={\frac {1}{\omega _{n}r^{n}}}\int _{B(x,r)}u\,dV} where ωn is the volume of the unit ball in n dimensions and σ is the (n − 1)-dimensional surface measure. Conversely, all locally integrable functions satisfying the (volume) mean-value property are both infinitely differentiable and harmonic. In terms of convolutions, if χ r := 1 | B ( 0 , r ) | χ B ( 0 , r ) = n ω n r n χ B ( 0 , r ) {\displaystyle \chi _{r}:={\frac {1}{|B(0,r)|}}\chi _{B(0,r)}={\frac {n}{\omega _{n}r^{n}}}\chi _{B(0,r)}} denotes the characteristic function of the ball with radius r about the origin, normalized so that ∫ R n χ r d x = 1 , {\textstyle \int _{\mathbb {R} ^{n}}\chi _{r}\,dx=1,} the function u is harmonic on Ω if and only if u ( x ) = u ∗ χ r ( x ) {\displaystyle u(x)=u*\chi _{r}(x)\;} for all x and r such that B ( x , r ) ⊂ Ω . {\displaystyle B(x,r)\subset \Omega .} Sketch of the proof. The proof of the mean-value property of the harmonic functions and its converse follows immediately observing that the non-homogeneous equation, for any 0 < s < r Δ w = χ r − χ s {\displaystyle \Delta w=\chi _{r}-\chi _{s}\;} admits an easy explicit solution wr,s of class C1,1 with compact support in B(0, r). 
Thus, if u is harmonic in Ω, then 0 = Δ u ∗ w r , s = u ∗ Δ w r , s = u ∗ χ r − u ∗ χ s {\displaystyle 0=\Delta u*w_{r,s}=u*\Delta w_{r,s}=u*\chi _{r}-u*\chi _{s}\;} holds in the set Ωr of all points x in Ω with dist ⁡ ( x , ∂ Ω ) > r . {\displaystyle \operatorname {dist} (x,\partial \Omega )>r.} Since u is continuous in Ω, u ∗ χ s {\displaystyle u*\chi _{s}} converges to u as s → 0, showing the mean value property for u in Ω. Conversely, if u is any L l o c 1 {\displaystyle L_{\mathrm {loc} }^{1}\;} function satisfying the mean-value property in Ω, that is, u ∗ χ r = u ∗ χ s {\displaystyle u*\chi _{r}=u*\chi _{s}\;} holds in Ωr for all 0 < s < r, then, iterating the convolution with χr m times, one has: u = u ∗ χ r = u ∗ χ r ∗ ⋯ ∗ χ r , x ∈ Ω m r , {\displaystyle u=u*\chi _{r}=u*\chi _{r}*\cdots *\chi _{r}\,,\qquad x\in \Omega _{mr},} so that u is C m − 1 ( Ω m r ) {\displaystyle C^{m-1}(\Omega _{mr})\;} because the m-fold iterated convolution of χr is of class C m − 1 {\displaystyle C^{m-1}\;} with support B(0, mr). Since r and m are arbitrary, u is C ∞ ( Ω ) {\displaystyle C^{\infty }(\Omega )\;} too. Moreover, Δ u ∗ w r , s = u ∗ Δ w r , s = u ∗ χ r − u ∗ χ s = 0 {\displaystyle \Delta u*w_{r,s}=u*\Delta w_{r,s}=u*\chi _{r}-u*\chi _{s}=0\;} for all 0 < s < r so that Δu = 0 in Ω by the fundamental theorem of the calculus of variations, proving the equivalence between harmonicity and mean-value property. This statement of the mean value property can be generalized as follows: If h is any spherically symmetric function supported in B(x, r) such that ∫ h = 1 , {\textstyle \int h=1,} then u ( x ) = h ∗ u ( x ) . {\displaystyle u(x)=h*u(x).} In other words, we can take the weighted average of u about a point and recover u(x). In particular, by taking h to be a C∞ function, we can recover the value of u at any point even if we only know how u acts as a distribution. See Weyl's lemma.
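The mean value property itself is easy to observe numerically in two dimensions: the average of a harmonic function over a circle, approximated here by a uniform sample of angles, equals its value at the center. The choice u = Re(z³) and the particular ball are illustrative.

```python
import math

def u(x, y):
    """Harmonic: the real part of z^3."""
    return x ** 3 - 3.0 * x * y ** 2

def circle_average(x0, y0, r, n=1000):
    """Average of u over the circle of radius r centered at (x0, y0)."""
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        total += u(x0 + r * math.cos(t), y0 + r * math.sin(t))
    return total / n

print(u(0.5, -0.2))                     # value at the center
print(circle_average(0.5, -0.2, 0.7))   # circle average, same value
```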
=== Harnack's inequality === Let V ⊂ V ¯ ⊂ Ω {\displaystyle V\subset {\overline {V}}\subset \Omega } be a connected set in a bounded domain Ω. Then for every non-negative harmonic function u, Harnack's inequality sup V u ≤ C inf V u {\displaystyle \sup _{V}u\leq C\inf _{V}u} holds for some constant C that depends only on V and Ω. === Removal of singularities === The following principle of removal of singularities holds for harmonic functions. If f is a harmonic function defined on a dotted open subset Ω ∖ { x 0 } {\displaystyle \Omega \smallsetminus \{x_{0}\}} of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠, which is less singular at x0 than the fundamental solution (for n > 2), that is f ( x ) = o ( | x − x 0 | 2 − n ) , as x → x 0 , {\displaystyle f(x)=o\left(\vert x-x_{0}\vert ^{2-n}\right),\qquad {\text{as }}x\to x_{0},} then f extends to a harmonic function on Ω (compare Riemann's theorem for functions of a complex variable). === Liouville's theorem === Theorem: If f is a harmonic function defined on all of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠ which is bounded above or bounded below, then f is constant. (Compare Liouville's theorem for functions of a complex variable). Edward Nelson gave a particularly short proof of this theorem for the case of bounded functions, using the mean value property mentioned above: Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since f is bounded, the averages of it over the two balls are arbitrarily close, and so f assumes the same value at any two points. The proof can be adapted to the case where the harmonic function f is merely bounded above or below. By adding a constant and possibly multiplying by –1, we may assume that f is non-negative. Then for any two points x and y, and any positive number R, we let r = R + d ( x , y ) . 
{\displaystyle r=R+d(x,y).} We then consider the balls BR(x) and Br(y) where by the triangle inequality, the first ball is contained in the second. By the averaging property and the monotonicity of the integral, we have f ( x ) = 1 vol ⁡ ( B R ) ∫ B R ( x ) f ( z ) d z ≤ 1 vol ⁡ ( B R ) ∫ B r ( y ) f ( z ) d z . {\displaystyle f(x)={\frac {1}{\operatorname {vol} (B_{R})}}\int _{B_{R}(x)}f(z)\,dz\leq {\frac {1}{\operatorname {vol} (B_{R})}}\int _{B_{r}(y)}f(z)\,dz.} (Note that since vol BR(x) is independent of x, we denote it merely as vol BR.) In the last expression, we may multiply and divide by vol Br and use the averaging property again, to obtain f ( x ) ≤ vol ⁡ ( B r ) vol ⁡ ( B R ) f ( y ) . {\displaystyle f(x)\leq {\frac {\operatorname {vol} (B_{r})}{\operatorname {vol} (B_{R})}}f(y).} But as R → ∞ , {\displaystyle R\rightarrow \infty ,} the quantity vol ⁡ ( B r ) vol ⁡ ( B R ) = ( R + d ( x , y ) ) n R n {\displaystyle {\frac {\operatorname {vol} (B_{r})}{\operatorname {vol} (B_{R})}}={\frac {\left(R+d(x,y)\right)^{n}}{R^{n}}}} tends to 1. Thus, f ( x ) ≤ f ( y ) . {\displaystyle f(x)\leq f(y).} The same argument with the roles of x and y reversed shows that f ( y ) ≤ f ( x ) {\displaystyle f(y)\leq f(x)} , so that f ( x ) = f ( y ) . {\displaystyle f(x)=f(y).} Another proof uses the fact that given a Brownian motion Bt in ⁠ R n , {\displaystyle \mathbb {R} ^{n},} ⁠ such that B 0 = x 0 , {\displaystyle B_{0}=x_{0},} we have E [ f ( B t ) ] = f ( x 0 ) {\displaystyle E[f(B_{t})]=f(x_{0})} for all t ≥ 0. In words, it says that a harmonic function defines a martingale for the Brownian motion. Then a probabilistic coupling argument finishes the proof. == Generalizations == === Weakly harmonic function === A function (or, more generally, a distribution) is weakly harmonic if it satisfies Laplace's equation Δ f = 0 {\displaystyle \Delta f=0\,} in a weak sense (or, equivalently, in the sense of distributions). 
A weakly harmonic function coincides almost everywhere with a strongly harmonic function, and is in particular smooth. A weakly harmonic distribution is precisely the distribution associated to a strongly harmonic function, and so also is smooth. This is Weyl's lemma. There are other weak formulations of Laplace's equation that are often useful. One of these is Dirichlet's principle, which represents harmonic functions in the Sobolev space H1(Ω) as the minimizers of the Dirichlet energy integral J ( u ) := ∫ Ω | ∇ u | 2 d x {\displaystyle J(u):=\int _{\Omega }|\nabla u|^{2}\,dx} with respect to local variations, that is, all functions u ∈ H 1 ( Ω ) {\displaystyle u\in H^{1}(\Omega )} such that J ( u ) ≤ J ( u + v ) {\displaystyle J(u)\leq J(u+v)} holds for all v ∈ C c ∞ ( Ω ) , {\displaystyle v\in C_{c}^{\infty }(\Omega ),} or equivalently, for all v ∈ H 0 1 ( Ω ) . {\displaystyle v\in H_{0}^{1}(\Omega ).} === Harmonic functions on manifolds === Harmonic functions can be defined on an arbitrary Riemannian manifold, using the Laplace–Beltrami operator Δ. In this context, a function is called harmonic if Δ f = 0. {\displaystyle \ \Delta f=0.} Many of the properties of harmonic functions on domains in Euclidean space carry over to this more general setting, including the mean value theorem (over geodesic balls), the maximum principle, and the Harnack inequality. With the exception of the mean value theorem, these are easy consequences of the corresponding results for general linear elliptic partial differential equations of the second order. === Subharmonic functions === A C2 function that satisfies Δf ≥ 0 is called subharmonic. This condition guarantees that the maximum principle will hold, although other properties of harmonic functions may fail. More generally, a function is subharmonic if and only if, in the interior of any ball in its domain, its graph lies below that of the harmonic function interpolating its boundary values on the ball.
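Dirichlet's principle has a simple discrete analogue: Jacobi relaxation of the five-point Laplacian drives a grid function, with its boundary values held fixed, toward the minimizer of the discrete Dirichlet energy. This is only a sketch; the grid size, boundary data and number of sweeps are arbitrary choices.

```python
N = 20                            # (N+1) x (N+1) grid on the unit square
h = 1.0 / N
g = lambda x, y: x * x - y * y    # boundary data (itself harmonic)

# start from the boundary values with zeros inside
grid = [[g(i * h, j * h) if i in (0, N) or j in (0, N) else 0.0
         for j in range(N + 1)] for i in range(N + 1)]

def dirichlet_energy(u):
    """Sum of squared differences over all horizontal and vertical edges."""
    e = 0.0
    for i in range(N):
        for j in range(N + 1):
            e += (u[i + 1][j] - u[i][j]) ** 2   # horizontal edges
            e += (u[j][i + 1] - u[j][i]) ** 2   # vertical edges
    return e

e_start = dirichlet_energy(grid)
for _ in range(2000):             # Jacobi sweeps toward the discrete minimizer
    new = [row[:] for row in grid]
    for i in range(1, N):
        for j in range(1, N):
            new[i][j] = 0.25 * (grid[i + 1][j] + grid[i - 1][j]
                                + grid[i][j + 1] + grid[i][j - 1])
    grid = new
e_end = dirichlet_energy(grid)
print(e_start, e_end)             # the energy drops to the constrained minimum
```

Because this boundary data is the restriction of a harmonic polynomial, the relaxed grid also reproduces g in the interior.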
=== Harmonic forms === One generalization of the study of harmonic functions is the study of harmonic forms on Riemannian manifolds, and it is related to the study of cohomology. Also, it is possible to define harmonic vector-valued functions, or harmonic maps of two Riemannian manifolds, which are critical points of a generalized Dirichlet energy functional (this includes harmonic functions as a special case, a result known as Dirichlet principle). This kind of harmonic map appears in the theory of minimal surfaces. For example, a curve, that is, a map from an interval in ⁠ R {\displaystyle \mathbb {R} } ⁠ to a Riemannian manifold, is a harmonic map if and only if it is a geodesic. === Harmonic maps between manifolds === If M and N are two Riemannian manifolds, then a harmonic map u : M → N {\displaystyle u:M\to N} is defined to be a critical point of the Dirichlet energy D [ u ] = 1 2 ∫ M ‖ d u ‖ 2 d Vol {\displaystyle D[u]={\frac {1}{2}}\int _{M}\left\|du\right\|^{2}\,d\operatorname {Vol} } in which d u : T M → T N {\displaystyle du:TM\to TN} is the differential of u, and the norm is that induced by the metric on M and that on N on the tensor product bundle T ∗ M ⊗ u − 1 T N . {\displaystyle T^{\ast }M\otimes u^{-1}TN.} Important special cases of harmonic maps between manifolds include minimal surfaces, which are precisely the harmonic immersions of a surface into three-dimensional Euclidean space. More generally, minimal submanifolds are harmonic immersions of one manifold in another. Harmonic coordinates are a harmonic diffeomorphism from a manifold to an open subset of a Euclidean space of the same dimension. == See also == == Notes == == References == == External links == "Harmonic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Harmonic Function". MathWorld. Harmonic Function Theory by S.Axler, Paul Bourdon, and Wade Ramey
Wikipedia/Harmonic_functions
The method of images (or method of mirror images) is a mathematical tool for solving differential equations, in which boundary conditions are satisfied by combining a solution not restricted by the boundary conditions with its possibly weighted mirror image. Generally, original singularities are inside the domain of interest but the function is made to satisfy boundary conditions by placing additional singularities outside the domain of interest. Typically the locations of these additional singularities are determined as the virtual location of the original singularities as viewed in a mirror placed at the location of the boundary conditions. Most typically, the mirror is a hyperplane or hypersphere. The method of images can also be used in solving discrete problems with boundary conditions, such as counting the number of restricted discrete random walks. == Method of image charges == The method of image charges is used in electrostatics to simply calculate or visualize the distribution of the electric field of a charge in the vicinity of a conducting surface. It is based on the fact that the tangential component of the electric field on the surface of a conductor is zero, and that an electric field E in some region is uniquely defined by its normal component over the surface that confines this region (the uniqueness theorem). == Magnet-superconductor systems == The method of images may also be used in magnetostatics for calculating the magnetic field of a magnet that is close to a superconducting surface. The superconductor in the so-called Meissner state is an ideal diamagnet into which the magnetic field does not penetrate. Therefore, the normal component of the magnetic field on its surface should be zero. Then the image of the magnet should be mirrored. The force between the magnet and the superconducting surface is therefore repulsive.
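The electrostatic case is the classic illustration: a point charge q at height a above a grounded plane, together with an image charge −q at the mirror position, gives a potential that vanishes everywhere on the plane. The sketch below uses units in which the Coulomb prefactor is 1, and the charge height and sample points are illustrative.

```python
import math

def potential(x, y, z, q=1.0, a=2.0):
    """Charge q at (0, 0, a) plus image -q at (0, 0, -a); prefactor set to 1."""
    d_charge = math.sqrt(x * x + y * y + (z - a) ** 2)   # distance to the charge
    d_image = math.sqrt(x * x + y * y + (z + a) ** 2)    # distance to the image
    return q / d_charge - q / d_image

print(potential(1.3, -0.4, 0.0))   # on the grounded plane z = 0: zero
print(potential(1.3, -0.4, 1.0))   # above the plane: nonzero
```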
Compared with the case of the charge dipole above a flat conducting surface, the mirrored magnetization vector can be thought of as due to an additional sign change of an axial vector. In order to take into account the magnetic flux pinning phenomenon in type-II superconductors, the frozen mirror image method can be used. == Mass transport in environmental flows with non-infinite domains == Environmental engineers are often interested in the reflection (and sometimes the absorption) of a contaminant plume off of an impenetrable (no-flux) boundary. A quick way to model this reflection is with the method of images. The reflections, or images, are oriented in space such that they perfectly replace any mass (from the real plume) passing through a given boundary. A single boundary will necessitate a single image. Two or more boundaries produce infinite images. However, for the purposes of modeling mass transport—such as the spread of a contaminant spill in a lake—it may be unnecessary to include an infinite set of images when there are multiple relevant boundaries. For example, to represent the reflection within a certain threshold of physical accuracy, one might choose to include only the primary and secondary images. The simplest case is a single boundary in 1-dimensional space. In this case, only one image is possible. If, as time elapses, a mass approaches the boundary, then an image can appropriately describe the reflection of that mass back across the boundary. Another simple example is a single boundary in 2-dimensional space. Again, since there is only a single boundary, only one image is necessary. This describes a smokestack, whose effluent "reflects" in the atmosphere off of the impenetrable ground, and is otherwise approximately unbounded. Finally, we consider a mass release in 1-dimensional space bounded to its left and right by impenetrable boundaries.
There are two primary images, each replacing the mass of the original release reflecting through each boundary. There are two secondary images, each replacing the mass of one of the primary images flowing through the opposite boundary. There are also two tertiary images (replacing the mass lost by the secondary images), two quaternary images (replacing the mass lost by the tertiary images), and so on ad infinitum. For a given system, once all of the images are carefully oriented, the concentration field is given by summing the mass releases (the true plume in addition to all of the images) within the specified boundaries. This concentration field is only physically accurate within the boundaries; the field outside the boundaries is non-physical and irrelevant for most engineering purposes. == Mathematics for continuous cases == This method is a specific application of Green's functions. The method of images works well when the boundary is a flat surface and the distribution has a geometric center. This allows for simple mirror-like reflection of the distribution to satisfy a variety of boundary conditions. Consider the simple 1D case illustrated in the graphic where there is a distribution of ⟨ c ⟩ {\displaystyle \langle c\rangle } as a function of x {\displaystyle x} and a single boundary located at x b {\displaystyle x_{b}} with the real domain such that x ≥ x b {\displaystyle x\geq x_{b}} and the image domain x < x b {\displaystyle x<x_{b}} . Consider a solution f ( ± x + x 0 , t ) {\displaystyle f(\pm x+x_{0},t)} that satisfies the linear differential equation for any x 0 {\displaystyle x_{0}} , but not necessarily the boundary condition. Note these distributions are typical in models that assume a Gaussian distribution. This is particularly common in environmental engineering, especially in atmospheric flows that use Gaussian plume models.
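The two-wall image system described above can be sketched numerically: for a release at x₀ between reflecting walls at 0 and L, the images sit at 2mL + x₀ and 2mL − x₀ for integer m, and truncating the sum at a finite order already conserves the mass between the walls. The domain size, diffusivity, release point and truncation order below are illustrative choices.

```python
import math

L, x0, D, t = 1.0, 0.3, 0.01, 2.0     # domain [0, L], release at x0

def gaussian(x, center):
    """Free-space 1-D diffusion solution (unit mass) at time t."""
    var = 2.0 * D * t
    return math.exp(-(x - center) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def concentration(x, order=20):
    """Real source plus images reflected through both walls, truncated."""
    total = 0.0
    for m in range(-order, order + 1):
        total += gaussian(x, 2.0 * m * L + x0)   # even number of reflections
        total += gaussian(x, 2.0 * m * L - x0)   # odd number of reflections
    return total

# the mass between the walls is conserved (midpoint rule)
dx = L / 1000
mass = sum(concentration((i + 0.5) * dx) * dx for i in range(1000))
print(mass)   # ~ 1
```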
=== Perfectly reflecting boundary conditions === The mathematical statement of a perfectly reflecting boundary condition is as follows: ∇ y ( x ) ⋅ n = 0 {\displaystyle \nabla y(\mathbf {x} )\cdot \mathbf {n} =0} This states that the derivative of our scalar function y {\displaystyle y} will have no derivative in the normal direction to a wall. In the 1D case, this simplifies to: d ⟨ c ⟩ d x = 0 {\displaystyle {\frac {d\langle c\rangle }{dx}}=0} This condition is enforced with positive images so that: ⟨ c ⟩ = f ( x − x 0 , t ) + f ( − x + ( x b − ( x 0 − x b ) ) , t ) {\displaystyle \langle c\rangle =f(x-x_{0},t)+f(-x+(x_{b}-(x_{0}-x_{b})),t)} where the − x + ( x b − ( x 0 − x b ) ) {\displaystyle -x+(x_{b}-(x_{0}-x_{b}))} translates and reflects the image into place. Taking the derivative with respect to x {\displaystyle x} : d ⟨ c ⟩ d x | x b = d f ( x − x 0 , t ) d x | x b + d f ( − x + ( x b − ( x 0 − x b ) ) , t ) d x | x b = d f ( x , t ) d x | x b − x 0 − d f ( x , t ) d x | x b − x 0 = 0 {\displaystyle \left.{\frac {d\langle c\rangle }{dx}}\right|_{x_{b}}=\left.{\frac {df(x-x_{0},t)}{dx}}\right|_{x_{b}}+\left.{\frac {df(-x+(x_{b}-(x_{0}-x_{b})),t)}{dx}}\right|_{x_{b}}=\left.{\frac {df(x,t)}{dx}}\right|_{x_{b}-x_{0}}-\left.{\frac {df(x,t)}{dx}}\right|_{x_{b}-x_{0}}=0} Thus, the perfectly reflecting boundary condition is satisfied. 
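The zero-slope conclusion above can be checked directly: a Gaussian solution shape placed at x₀ plus its positive image reflected through x_b has vanishing derivative at the boundary. The Gaussian width and positions in this sketch are illustrative.

```python
import math

xb, x0, sigma = 0.0, 1.0, 0.5     # boundary at xb, source centered at x0

def f(x):
    """An unbounded-domain solution shape (Gaussian), parameters illustrative."""
    return math.exp(-x * x / (2.0 * sigma ** 2))

def c(x):
    """f placed at x0 plus the positive image reflected through xb."""
    return f(x - x0) + f(-x + (xb - (x0 - xb)))

h = 1e-6
slope_at_wall = (c(xb + h) - c(xb - h)) / (2.0 * h)
print(slope_at_wall)   # ~ 0: the reflecting condition holds
```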
=== Perfectly absorbing boundary conditions === The statement of a perfectly absorbing boundary condition is as follows: y ( x b ) = 0 {\displaystyle y(x_{b})=0} This condition is enforced using a negative mirror image: ⟨ c ⟩ = f ( x − x 0 , t ) − f ( − x + ( x b − ( x 0 − x b ) ) , t ) {\displaystyle \langle c\rangle =f(x-x_{0},t)-f(-x+(x_{b}-(x_{0}-x_{b})),t)} And: ⟨ c ⟩ | x b = f ( x b − x 0 , t ) − f ( − x b + ( x b − ( x 0 − x b ) ) , t ) = f ( x b − x 0 , t ) − f ( x b − x 0 , t ) = 0 {\displaystyle \langle c\rangle {\bigg |}_{x_{b}}=f(x_{b}-x_{0},t)-f(-x_{b}+(x_{b}-(x_{0}-x_{b})),t)=f(x_{b}-x_{0},t)-f(x_{b}-x_{0},t)=0} Thus this boundary condition is also satisfied. == Mathematics for discrete cases == The method of images can be used in discrete cases. For example, the number of random walks that start at position 0, take steps of size ±1, continue for a total of n steps, and end at position k is given by the binomial coefficient ( n ( n + k ) / 2 ) {\displaystyle {\binom {n}{(n+k)/2}}} assuming that |k| ≤ n and n + k is even. Suppose we have the boundary condition that walks are prohibited from stepping to −1 during any part of the walk. The number of restricted walks can be calculated by starting with the number of unrestricted walks that start at position 0 and end at position k and subtracting the number of unrestricted walks that start at position −2 and end at position k. This is because, for any given number of steps, exactly as many unrestricted positively weighted walks as unrestricted negatively weighted walks will reach −1; they are mirror images of each other. As such, these negatively weighted walks cancel out precisely those positively weighted walks that our boundary condition has prohibited. For example, if the number of steps is n = 2m and the final location is k = 0 then the number of restricted walks is the Catalan number C m = ( 2 m m ) − ( 2 m m + 1 ) . {\displaystyle C_{m}={\binom {2m}{m}}-{\binom {2m}{m+1}}\,.} == References ==
Wikipedia/Method_of_images
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation x 2 d 2 y d x 2 + x d y d x + ( x 2 − α 2 ) y = 0 {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0} for an arbitrary complex number α {\displaystyle \alpha } , which represents the order of the Bessel function. Although α {\displaystyle \alpha } and − α {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α {\displaystyle \alpha } . The most important cases are when α {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer α {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates. == Applications == Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). 
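As a sanity check on the defining equation, it can be rewritten as a first-order system, integrated numerically, and compared against a library implementation (SciPy is used here purely as a reference; the order α = 1 and the interval are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, jvp

alpha = 1.0

def bessel_rhs(x, u):
    # Bessel's equation x^2 y'' + x y' + (x^2 - alpha^2) y = 0 as a system in (y, y')
    y, dy = u
    return [dy, -dy / x - (1 - alpha**2 / x**2) * y]

# start away from the regular singular point x = 0, seeded with J_alpha and J'_alpha
x0, x1 = 0.5, 10.0
sol = solve_ivp(bessel_rhs, (x0, x1), [jv(alpha, x0), jvp(alpha, x0)],
                rtol=1e-10, atol=1e-12, dense_output=True)
err = abs(sol.sol(x1)[0] - jv(alpha, x1))
```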
For example:
- Electromagnetic waves in a cylindrical waveguide
- Pressure amplitudes of inviscid rotational flows
- Heat conduction in a cylindrical object
- Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
- Diffusion problems on a lattice
- Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
- Position space representation of the Feynman propagator in quantum field theory
- Solving for patterns of acoustical radiation
- Frequency-dependent friction in circular pipelines
- Dynamics of floating bodies
- Angular resolution
- Diffraction from helical objects, including DNA
- Probability density function of the product of two normally distributed random variables
- Analysis of surface waves generated by microtremors, in geophysics and seismology
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter). == Definitions == Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of α {\displaystyle \alpha } when α {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn. === Bessel functions of the first kind: Jα === Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by x α {\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation: J α ( x ) = ∑ m = 0 ∞ ( − 1 ) m m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , {\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },} where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by 2 {\displaystyle 2} in x / 2 {\displaystyle x/2} ; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x − 1 / 2 {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.) 
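The power series above is straightforward to evaluate directly. A short sketch, with SciPy's jv as the reference implementation (truncation at 40 terms and the sample orders are arbitrary choices):

```python
from math import factorial, gamma
from scipy.special import jv

def J_series(alpha, x, terms=40):
    # truncated series: sum_m (-1)^m / (m! * Gamma(m + alpha + 1)) * (x/2)^(2m + alpha)
    return sum((-1)**m / (factorial(m) * gamma(m + alpha + 1)) * (x / 2)**(2 * m + alpha)
               for m in range(terms))

# compare against the library values for a few orders
err = max(abs(J_series(a, 3.0) - jv(a, 3.0)) for a in (0, 1, 2.5))
```

For moderate arguments the factorials in the denominator make the series converge very quickly, which is why 40 terms are far more than enough here.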
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers): J − n ( x ) = ( − 1 ) n J n ( x ) . {\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).} This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below. ==== Bessel's integrals ==== Another definition of the Bessel function, for integer values of n, is possible using an integral representation: J n ( x ) = 1 π ∫ 0 π cos ⁡ ( n τ − x sin ⁡ τ ) d τ = 1 π Re ⁡ ( ∫ 0 π e i ( n τ − x sin ⁡ τ ) d τ ) , {\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),} which is also called Hansen-Bessel formula. This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0: J α ( x ) = 1 π ∫ 0 π cos ⁡ ( α τ − x sin ⁡ τ ) d τ − sin ⁡ ( α π ) π ∫ 0 ∞ e − x sinh ⁡ t − α t d t . {\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.} ==== Relation to hypergeometric series ==== The Bessel functions can be expressed in terms of the generalized hypergeometric series as J α ( x ) = ( x 2 ) α Γ ( α + 1 ) 0 F 1 ( α + 1 ; − x 2 4 ) . 
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).} This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function. ==== Relation to Laguerre polynomials ==== In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as J α ( x ) ( x 2 ) α = e − t Γ ( α + 1 ) ∑ k = 0 ∞ L k ( α ) ( x 2 4 t ) ( k + α k ) t k k ! . {\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.} === Bessel functions of the second kind: Yα === The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann. For non-integer α, Yα(x) is related to Jα(x) by Y α ( x ) = J α ( x ) cos ⁡ ( α π ) − J − α ( x ) sin ⁡ ( α π ) . {\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.} In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n: Y n ( x ) = lim α → n Y α ( x ) . {\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).} If n is a nonnegative integer, we have the series Y n ( z ) = − ( z 2 ) − n π ∑ k = 0 n − 1 ( n − k − 1 ) ! k ! ( z 2 4 ) k + 2 π J n ( z ) ln ⁡ z 2 − ( z 2 ) n π ∑ k = 0 ∞ ( ψ ( k + 1 ) + ψ ( n + k + 1 ) ) ( − z 2 4 ) k k ! ( n + k ) ! 
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}} where ψ ( z ) {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function. There is also a corresponding integral formula (for Re(x) > 0): Y n ( x ) = 1 π ∫ 0 π sin ⁡ ( x sin ⁡ θ − n θ ) d θ − 1 π ∫ 0 ∞ ( e n t + ( − 1 ) n e − n t ) e − x sinh ⁡ t d t . {\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.} In the case where n = 0: (with γ {\displaystyle \gamma } being Euler's constant) Y 0 ( x ) = 4 π 2 ∫ 0 1 2 π cos ⁡ ( x cos ⁡ θ ) ( γ + ln ⁡ ( 2 x sin 2 ⁡ θ ) ) d θ . {\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .} Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below. When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid: Y − n ( x ) = ( − 1 ) n Y n ( x ) . {\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).} Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α. 
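Both the Hansen–Bessel integral for J_n given earlier and the integral formula for Y_n above can be verified by numerical quadrature (SciPy's quad and jv/yv serve as references; n and x are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, yv

n, x = 1, 2.0

# Hansen–Bessel representation of J_n
J_int, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0, np.pi)
J_int /= np.pi

# integral formula for Y_n, valid for Re(x) > 0
osc, _ = quad(lambda th: np.sin(x * np.sin(th) - n * th), 0, np.pi)
tail, _ = quad(lambda t: (np.exp(n * t) + (-1)**n * np.exp(-n * t))
               * np.exp(-x * np.sinh(t)), 0, np.inf)
Y_int = osc / np.pi - tail / np.pi

err = max(abs(J_int - jv(n, x)), abs(Y_int - yv(n, x)))
```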
The Bessel function of the second kind with integer α is an example of the second kind of solution in Fuchs's theorem. === Hankel functions: H(1)α, H(2)α === Another important formulation of the two linearly independent solutions to Bessel's equation is the pair of Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as H α ( 1 ) ( x ) = J α ( x ) + i Y α ( x ) , H α ( 2 ) ( x ) = J α ( x ) − i Y α ( x ) , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}} where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel. These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form ei f(x). For real x > 0 {\displaystyle x>0} where J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e ± i x {\displaystyle e^{\pm ix}} and J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} for cos ⁡ ( x ) {\displaystyle \cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , as explicitly shown in the asymptotic expansion. The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
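The defining combinations are easy to confirm numerically against library Hankel functions (the order and argument below are arbitrary sample values; SciPy is the reference):

```python
from scipy.special import hankel1, hankel2, jv, yv

alpha, x = 0.7, 2.0
# H^(1) = J + iY and H^(2) = J - iY
err = max(abs((jv(alpha, x) + 1j * yv(alpha, x)) - hankel1(alpha, x)),
          abs((jv(alpha, x) - 1j * yv(alpha, x)) - hankel2(alpha, x)))
```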
Using the previous relationships, they can be expressed as H α ( 1 ) ( x ) = J − α ( x ) − e − α π i J α ( x ) i sin ⁡ α π , H α ( 2 ) ( x ) = J − α ( x ) − e α π i J α ( x ) − i sin ⁡ α π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}} If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not: H − α ( 1 ) ( x ) = e α π i H α ( 1 ) ( x ) , H − α ( 2 ) ( x ) = e − α π i H α ( 2 ) ( x ) . {\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}} In particular, if α = m + ⁠1/2⁠ with m a nonnegative integer, the above relations imply directly that J − ( m + 1 2 ) ( x ) = ( − 1 ) m + 1 Y m + 1 2 ( x ) , Y − ( m + 1 2 ) ( x ) = ( − 1 ) m J m + 1 2 ( x ) . {\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}} These are useful in developing the spherical Bessel functions (see below). 
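A quick numerical confirmation of the negative-order relations and their half-integer consequences stated above (sample values of α, x, and m; SciPy as reference):

```python
import numpy as np
from scipy.special import jv, yv, hankel1, hankel2

alpha, x, m = 0.6, 2.3, 1
# H^(1)_{-a} = e^{a pi i} H^(1)_a  and  H^(2)_{-a} = e^{-a pi i} H^(2)_a
err_h1 = abs(hankel1(-alpha, x) - np.exp(1j * np.pi * alpha) * hankel1(alpha, x))
err_h2 = abs(hankel2(-alpha, x) - np.exp(-1j * np.pi * alpha) * hankel2(alpha, x))
# half-integer consequences: J_{-(m+1/2)} = (-1)^{m+1} Y_{m+1/2}, etc.
err_j = abs(jv(-(m + 0.5), x) - (-1)**(m + 1) * yv(m + 0.5, x))
err_y = abs(yv(-(m + 0.5), x) - (-1)**m * jv(m + 0.5, x))
```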
The Hankel functions admit the following integral representations for Re(x) > 0: H α ( 1 ) ( x ) = 1 π i ∫ − ∞ + ∞ + π i e x sinh ⁡ t − α t d t , H α ( 2 ) ( x ) = − 1 π i ∫ − ∞ + ∞ − π i e x sinh ⁡ t − α t d t , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}} where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis. === Modified Bessel functions: Iα, Kα === The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as I α ( x ) = i − α J α ( i x ) = ∑ m = 0 ∞ 1 m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , K α ( x ) = π 2 I − α ( x ) − I α ( x ) sin ⁡ α π , {\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}} when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor. 
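The series for Iα and the definition of Kα in terms of I±α can be checked in the same way as for Jα (SciPy's iv and kv as references; the non-integer order and argument are illustrative):

```python
from math import factorial, gamma, pi, sin
from scipy.special import iv, kv

def I_series(alpha, x, terms=40):
    # same series as J_alpha but without the alternating (-1)^m factor
    return sum((x / 2)**(2 * m + alpha) / (factorial(m) * gamma(m + alpha + 1))
               for m in range(terms))

alpha, x = 0.3, 1.8
# K_alpha = (pi/2) (I_{-alpha} - I_alpha) / sin(alpha pi) for non-integer alpha
K_from_I = pi / 2 * (I_series(-alpha, x) - I_series(alpha, x)) / sin(alpha * pi)
err_I = abs(I_series(alpha, x) - iv(alpha, x))
err_K = abs(K_from_I - kv(alpha, x))
```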
K α {\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions: K α ( x ) = { π 2 i α + 1 H α ( 1 ) ( i x ) − π < arg ⁡ x ≤ π 2 π 2 ( − i ) α + 1 H α ( 2 ) ( − i x ) − π 2 < arg ⁡ x ≤ π {\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}} Using these two formulae, the following result for J α 2 ( z ) {\displaystyle J_{\alpha }^{2}(z)} + Y α 2 ( z ) {\displaystyle Y_{\alpha }^{2}(z)} , commonly known as Nicholson's integral or Nicholson's formula, can be obtained: J α 2 ( x ) + Y α 2 ( x ) = 8 π 2 ∫ 0 ∞ cosh ⁡ ( 2 α t ) K 0 ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,} given that the condition Re(x) > 0 is met. It can also be shown that J α 2 ( x ) + Y α 2 ( x ) = 8 cos ⁡ ( α π ) π 2 ∫ 0 ∞ K 2 α ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,} only when |Re(α)| < ⁠1/2⁠ and Re(x) ≥ 0 but not when x = 0. We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ ⁠π/2⁠): J α ( i z ) = e α π i 2 I α ( z ) , Y α ( i z ) = e ( α + 1 ) π i 2 I α ( z ) − 2 π e − α π i 2 K α ( z ) . {\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}} Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation: x 2 d 2 y d x 2 + x d y d x − ( x 2 + α 2 ) y = 0.
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.} Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and ⁠1/2⁠Γ(|α|)(2/x)|α| otherwise. Two integral formulas for the modified Bessel functions are (for Re(x) > 0): I α ( x ) = 1 π ∫ 0 π e x cos ⁡ θ cos ⁡ α θ d θ − sin ⁡ α π π ∫ 0 ∞ e − x cosh ⁡ t − α t d t , K α ( x ) = ∫ 0 ∞ e − x cosh ⁡ t cosh ⁡ α t d t . {\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}} Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0): 2 K 0 ( ω ) = ∫ − ∞ ∞ e i ω t t 2 + 1 d t . {\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.} It can be proven by showing equality to the above integral definition for K0. This is done by integrating a closed curve in the first quadrant of the complex plane. Modified Bessel functions of the second kind may be represented with Bassett's integral K n ( x z ) = Γ ( n + 1 2 ) ( 2 z ) n π x n ∫ 0 ∞ cos ⁡ ( x t ) d t ( t 2 + z 2 ) n + 1 2 . 
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.} Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals K 1 3 ( ξ ) = 3 ∫ 0 ∞ exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x , K 2 3 ( ξ ) = 1 3 ∫ 0 ∞ 3 + 2 x 2 1 + x 2 3 exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x . {\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}} The modified Bessel function K 1 2 ( ξ ) = ( 2 ξ / π ) − 1 / 2 exp ⁡ ( − ξ ) {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an Exponential-scale mixture of normal distributions. The modified Bessel function of the second kind has also been called by the following names (now rare): Basset function after Alfred Barnard Basset Modified Bessel function of the third kind Modified Hankel function Macdonald function after Hector Munro Macdonald === Spherical Bessel functions: jn, yn === When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form x 2 d 2 y d x 2 + 2 x d y d x + ( x 2 − n ( n + 1 ) ) y = 0. {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.} The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by j n ( x ) = π 2 x J n + 1 2 ( x ) , y n ( x ) = π 2 x Y n + 1 2 ( x ) = ( − 1 ) n + 1 π 2 x J − n − 1 2 ( x ) . 
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}} yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions. From the relations to the ordinary Bessel functions it is directly seen that: j n ( x ) = ( − 1 ) n y − n − 1 ( x ) y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) {\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}} The spherical Bessel functions can also be written as (Rayleigh's formulas) j n ( x ) = ( − x ) n ( 1 x d d x ) n sin ⁡ x x , y n ( x ) = − ( − x ) n ( 1 x d d x ) n cos ⁡ x x . {\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}} The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are: j 0 ( x ) = sin ⁡ x x . j 1 ( x ) = sin ⁡ x x 2 − cos ⁡ x x , j 2 ( x ) = ( 3 x 2 − 1 ) sin ⁡ x x − 3 cos ⁡ x x 2 , j 3 ( x ) = ( 15 x 3 − 6 x ) sin ⁡ x x − ( 15 x 2 − 1 ) cos ⁡ x x {\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}}.\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}} and y 0 ( x ) = − j − 1 ( x ) = − cos ⁡ x x , y 1 ( x ) = j − 2 ( x ) = − cos ⁡ x x 2 − sin ⁡ x x , y 2 ( x ) = − j − 3 ( x ) = ( − 3 x 2 + 1 ) cos ⁡ x x − 3 sin ⁡ x x 2 , y 3 ( x ) = j − 4 ( x ) = ( − 15 x 3 + 6 x ) cos ⁡ x x − ( 15 x 2 − 1 ) sin ⁡ x x . 
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}} The first few non-zero roots of the first few spherical Bessel functions are: ==== Generating function ==== The spherical Bessel functions have the generating functions 1 z cos ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! j n − 1 ( z ) , 1 z sin ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! y n − 1 ( z ) . {\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}} ==== Finite series expansions ==== In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression: j n ( x ) = π 2 x J n + 1 2 ( x ) = = 1 2 x [ e i x ∑ r = 0 n i r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = 1 x [ sin ⁡ ( x − n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r + cos ⁡ ( x − n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! ( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) = ( − 1 ) n + 1 π 2 x J − ( n + 1 2 ) ( x ) = = ( − 1 ) n + 1 2 x [ e i x ∑ r = 0 n i r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = = ( − 1 ) n + 1 x [ cos ⁡ ( x + n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r − sin ⁡ ( x + n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! 
( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] {\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}} ==== Differential relations ==== In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ... ( 1 z d d z ) m ( z n + 1 f n ( z ) ) = z n − m + 1 f n − m ( z ) , ( 1 z d d z ) m ( z − n f n ( z ) ) = ( − 1 ) m z − n − m f n + m ( z ) . {\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}} === Spherical Hankel functions: h(1)n, h(2)n === There are also spherical analogues of the Hankel functions: h n ( 1 ) ( x ) = j n ( x ) + i y n ( x ) , h n ( 2 ) ( x ) = j n ( x ) − i y n ( x ) .
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}} There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n: h n ( 1 ) ( x ) = ( − i ) n + 1 e i x x ∑ m = 0 n i m m ! ( 2 x ) m ( n + m ) ! ( n − m ) ! , {\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},} and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = ⁠sin x/x⁠ and y0(x) = −⁠cos x/x⁠, and so on. The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field. === Riccati–Bessel functions: Sn, Cn, ξn, ζn === Riccati–Bessel functions only slightly differ from spherical Bessel functions: S n ( x ) = x j n ( x ) = π x 2 J n + 1 2 ( x ) C n ( x ) = − x y n ( x ) = − π x 2 Y n + 1 2 ( x ) ξ n ( x ) = x h n ( 1 ) ( x ) = π x 2 H n + 1 2 ( 1 ) ( x ) = S n ( x ) − i C n ( x ) ζ n ( x ) = x h n ( 2 ) ( x ) = π x 2 H n + 1 2 ( 2 ) ( x ) = S n ( x ) + i C n ( x ) {\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}} They satisfy the differential equation x 2 d 2 y d x 2 + ( x 2 − n ( n + 1 ) ) y = 0. 
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.} For example, this kind of differential equation appears in quantum mechanics while solving the radial component of the Schrödinger equation with a hypothetical spherical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arises in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See e.g., Du (2004) for recent developments and references. Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn. == Asymptotic forms == The Bessel functions have the following asymptotic forms. For small arguments 0 < z ≪ α + 1 {\displaystyle 0<z\ll {\sqrt {\alpha +1}}} , one obtains, when α {\displaystyle \alpha } is not a negative integer: J α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α . {\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.} When α is a negative integer, we have J α ( z ) ∼ ( − 1 ) α ( − α ) ! ( 2 z ) α .
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.} For the Bessel function of the second kind we have three cases: Y α ( z ) ∼ { 2 π ( ln ⁡ ( z 2 ) + γ ) if α = 0 − Γ ( α ) π ( 2 z ) α + 1 Γ ( α + 1 ) ( z 2 ) α cot ⁡ ( α π ) if α is a positive integer (one term dominates unless α is imaginary) , − ( − 1 ) α Γ ( − α ) π ( z 2 ) α if α is a negative integer, {\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}} where γ is the Euler–Mascheroni constant (0.5772...). For large real arguments z ≫ |α2 − ⁠1/4⁠|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1: J α ( z ) = 2 π z ( cos ⁡ ( z − α π 2 − π 4 ) + e | Im ⁡ ( z ) | O ( | z | − 1 ) ) for | arg ⁡ z | < π , Y α ( z ) = 2 π z ( sin ⁡ ( z − α π 2 − π 4 ) + e | Im ⁡ ( z ) | O ( | z | − 1 ) ) for | arg ⁡ z | < π . 
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}} (For α = ⁠1/2⁠, the last terms in these formulas drop out completely; see the spherical Bessel functions above.) The asymptotic forms for the Hankel functions are: H α ( 1 ) ( z ) ∼ 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 2 π , H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − α π 2 − π 4 ) for − 2 π < arg ⁡ z < π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}} These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z). It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). 
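The leading large-argument forms above are easy to check numerically. The sketch below is illustrative only (the helper names are ours, not from the references): it evaluates Jn for integer n via Bessel's integral Jn(x) = (1/π)∫₀^π cos(nt − x sin t) dt with the trapezoidal rule, which converges very rapidly here because the integrand extends to a smooth periodic function, and compares the result with the leading asymptotic term at x = 50:

```python
import math

def bessel_j(n, x, steps=2000):
    # Bessel's integral for integer order:
    # J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin(t)) dt
    # (trapezoidal rule; spectrally accurate for this integrand)
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

def j_large_x(alpha, x):
    # Leading term of the large-argument form quoted above:
    # sqrt(2/(pi x)) * cos(x - alpha*pi/2 - pi/4)
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - alpha * math.pi / 2 - math.pi / 4)

for n in (0, 1):
    # The two columns agree up to the O(|z|^-1) correction
    print(n, bessel_j(n, 50.0), j_large_x(n, 50.0))
```

At x = 50 the neglected correction is of relative order 1/(8x), so the two values agree to roughly three decimal places.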
But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part): J α ( z ) ∼ 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , J α ( z ) ∼ 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π , Y α ( z ) ∼ − i 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , Y α ( z ) ∼ i 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π . {\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}} For the modified Bessel functions, Hankel developed asymptotic expansions as well: I α ( z ) ∼ e z 2 π z ( 1 − 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 − ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < π 2 , K α ( z ) ∼ π 2 z e − z ( 1 + 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < 3 π 2 . 
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}} There is also the asymptotic form (for large real z {\displaystyle z} ) I α ( z ) = 1 2 π z 1 + α 2 z 2 4 exp ⁡ ( − α arcsinh ⁡ ( α z ) + z 1 + α 2 z 2 ) ( 1 + O ( 1 z 1 + α 2 z 2 ) ) . {\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}} When α = ⁠1/2⁠, all the terms except the first vanish, and we have I 1 / 2 ( z ) = 2 π sinh ⁡ ( z ) z ∼ e z 2 π z for | arg ⁡ z | < π 2 , K 1 / 2 ( z ) = π 2 e − z z . 
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}} For small arguments 0 < | z | ≪ α + 1 {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}} , we have I α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α , K α ( z ) ∼ { − ln ⁡ ( z 2 ) − γ if α = 0 Γ ( α ) 2 ( 2 z ) α if α > 0 {\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}} == Properties == For integer order α = n, Jn is often defined via a Laurent series for a generating function: e x 2 ( t − 1 t ) = ∑ n = − ∞ ∞ J n ( x ) t n {\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}} an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.) Infinite series of Bessel functions in the form ∑ ν = − ∞ ∞ J N ν + p ( x ) {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where ν , p ∈ Z , N ∈ Z + \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: ∑ ν = − ∞ ∞ J 3 ν + p ( x ) = 1 3 [ 1 + 2 cos ⁡ ( x 3 / 2 − 2 π p / 3 ) ] {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]} . 
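The generating function above can be verified numerically with a truncated Laurent series. This is an illustrative sketch (stdlib only, names are ours); Bessel's integral Jn(x) = (1/π)∫₀^π cos(nt − x sin t) dt is used, which is valid for negative integer n as well, consistent with J₋n = (−1)ⁿJn:

```python
import math

def bessel_j(n, x, steps=2000):
    # Bessel's integral: J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin(t)) dt,
    # valid for any integer n (positive or negative).
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

x, t = 1.5, 0.7
lhs = math.exp(0.5 * x * (t - 1.0 / t))                   # e^{(x/2)(t - 1/t)}
rhs = sum(bessel_j(n, x) * t**n for n in range(-20, 21))  # truncated Laurent series
print(lhs, rhs)
```

The truncation at |n| = 20 is harmless because Jn(1.5) decays super-exponentially in n, faster than tⁿ can grow.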
More generally, the Sung series and the alternating Sung series are written as: ∑ ν = − ∞ ∞ J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ 2 π q / N e − i 2 π p q / N {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}} ∑ ν = − ∞ ∞ ( − 1 ) ν J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ ( 2 q + 1 ) π / N e − i ( 2 q + 1 ) π p / N {\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}} A series expansion using Bessel functions (Kapteyn series) is 1 1 − z = 1 + 2 ∑ n = 1 ∞ J n ( n z ) . {\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).} Another important relation for integer orders is the Jacobi–Anger expansion: e i z cos ⁡ ϕ = ∑ n = − ∞ ∞ i n J n ( z ) e i n ϕ {\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }} and e ± i z sin ⁡ ϕ = J 0 ( z ) + 2 ∑ n = 1 ∞ J 2 n ( z ) cos ⁡ ( 2 n ϕ ) ± 2 i ∑ n = 0 ∞ J 2 n + 1 ( z ) sin ⁡ ( ( 2 n + 1 ) ϕ ) {\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )} which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal. More generally, a series f ( z ) = a 0 ν J ν ( z ) + 2 ⋅ ∑ k = 1 ∞ a k ν J ν + k ( z ) {\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)} is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form a k 0 = 1 2 π i ∫ | z | = c f ( z ) O k ( z ) d z {\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz} where Ok is Neumann's polynomial. 
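The Jacobi–Anger expansion above lends itself to the same kind of numerical spot-check. In this illustrative sketch (helper names are ours), the sum is truncated at |n| = 25, beyond which Jn(z) is negligible for moderate z:

```python
import cmath
import math

def bessel_j(n, x, steps=2000):
    # Bessel's integral: J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin(t)) dt
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

z, phi = 2.0, 0.9
lhs = cmath.exp(1j * z * math.cos(phi))
# Truncated Jacobi-Anger expansion: sum over n of i^n J_n(z) e^{i n phi}
rhs = sum((1j)**n * bessel_j(n, z) * cmath.exp(1j * n * phi) for n in range(-25, 26))
print(abs(lhs - rhs))  # should be close to zero
```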
Selected functions admit the special representation f ( z ) = ∑ k = 0 ∞ a k ν J ν + 2 k ( z ) {\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)} with a k ν = 2 ( ν + 2 k ) ∫ 0 ∞ f ( z ) J ν + 2 k ( z ) z d z {\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz} due to the orthogonality relation ∫ 0 ∞ J α ( z ) J β ( z ) d z z = 2 π sin ⁡ ( π 2 ( α − β ) ) α 2 − β 2 {\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}} More generally, if f has a branch-point near the origin of such a nature that f ( z ) = ∑ k = 0 a k J ν + k ( z ) {\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)} then L { ∑ k = 0 a k J ν + k } ( s ) = 1 1 + s 2 ∑ k = 0 a k ( s + 1 + s 2 ) ν + k {\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}} or ∑ k = 0 a k ξ ν + k = 1 + ξ 2 2 ξ L { f } ( 1 − ξ 2 2 ξ ) {\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)} where L { f } {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f. 
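The Laplace-transform relation above can be spot-checked in its simplest instance: a single term with ν = 0, where it reduces to the classical result L{J₀}(s) = 1/√(1 + s²). The sketch below is illustrative only (constants chosen for convenience); it truncates the Laplace integral at t = 25, where the factor e^(−st) has decayed to negligible size:

```python
import math

def bessel_j0(x, steps=200):
    # J_0(x) = (1/pi) * Integral_0^pi cos(x*sin(t)) dt (trapezoidal rule)
    h = math.pi / steps
    s = 1.0  # combined half-weight endpoints, where the integrand equals 1
    for k in range(1, steps):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

def laplace_j0(s, T=25.0, steps=5000):
    # Truncated Laplace transform Integral_0^T e^{-s*t} J_0(t) dt
    h = T / steps
    g = lambda t: math.exp(-s * t) * bessel_j0(t)
    total = 0.5 * (g(0.0) + g(T))
    for k in range(1, steps):
        total += g(k * h)
    return total * h

s = 1.5
# Compare against the closed form 1/sqrt(1 + s^2)
print(laplace_j0(s), 1.0 / math.sqrt(1.0 + s * s))
```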
Another way to define the Bessel functions is the Poisson representation formula and the Mehler-Sonine formula: J ν ( z ) = ( z 2 ) ν Γ ( ν + 1 2 ) π ∫ − 1 1 e i z s ( 1 − s 2 ) ν − 1 2 d s = 2 ( z 2 ) ν ⋅ π ⋅ Γ ( 1 2 − ν ) ∫ 1 ∞ sin ⁡ z u ( u 2 − 1 ) ν + 1 2 d u {\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}} where ν > −⁠1/2⁠ and z ∈ C. This formula is useful especially when working with Fourier transforms. Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that: ∫ 0 1 x J α ( x u α , m ) J α ( x u α , n ) d x = δ m , n 2 [ J α + 1 ( u α , m ) ] 2 = δ m , n 2 [ J α ′ ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}} where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m. 
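For a concrete instance of the Poisson representation above, take ν = 3/2, where Γ(ν + 1/2) = Γ(2) = 1 and the weight (1 − s²)^(ν−1/2) is just the polynomial 1 − s²; the result can then be compared against the elementary closed form J₃⁄₂(z) = √(2/(πz))(sin z/z − cos z). An illustrative sketch (stdlib only, helper names are ours):

```python
import math

def j32_closed(z):
    # Elementary closed form: J_{3/2}(z) = sqrt(2/(pi z)) * (sin z / z - cos z)
    return math.sqrt(2.0 / (math.pi * z)) * (math.sin(z) / z - math.cos(z))

def j32_poisson(z, steps=20000):
    # Poisson representation with nu = 3/2 (Gamma(2) = 1):
    # J_{3/2}(z) = (z/2)^{3/2} / sqrt(pi) * Integral_{-1}^{1} e^{izs} (1 - s^2) ds
    # The imaginary part of the integrand is odd in s, so only cos survives.
    h = 2.0 / steps
    f = lambda s: math.cos(z * s) * (1.0 - s * s)
    total = 0.5 * (f(-1.0) + f(1.0))
    for k in range(1, steps):
        total += f(-1.0 + k * h)
    return (z / 2.0) ** 1.5 / math.sqrt(math.pi) * total * h

for z in (2.0, 5.0):
    print(z, j32_closed(z), j32_poisson(z))
```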
An analogous relationship for the spherical Bessel functions follows immediately: ∫ 0 1 x 2 j α ( x u α , m ) j α ( x u α , n ) d x = δ m , n 2 [ j α + 1 ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}} If one defines a boxcar function of x that depends on a small parameter ε as: f ε ( x ) = 1 ε rect ⁡ ( x − 1 ε ) {\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)} (where rect is the rectangle function) then the Hankel transform of it (of any given order α > −⁠1/2⁠), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x): ∫ 0 ∞ k J α ( k x ) g ε ( k ) d k = f ε ( x ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)} which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense): ∫ 0 ∞ k J α ( k x ) J α ( k ) d k = δ ( x − 1 ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)} A change of variables then yields the closure equation: ∫ 0 ∞ x J α ( u x ) J α ( v x ) d x = 1 u δ ( u − v ) {\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)} for α > −⁠1/2⁠. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is: ∫ 0 ∞ x 2 j α ( u x ) j α ( v x ) d x = π 2 u v δ ( u − v ) {\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)} for α > −1. 
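The finite-interval spherical orthogonality relation above is easy to check for α = 0, since j₀(x) = sin x/x has its positive zeros exactly at u₀,m = mπ and j₁(mπ) = −(−1)ᵐ/(mπ), so the diagonal value is 1/(2m²π²). A numerical sketch (illustrative only):

```python
import math

def j0s(x):
    # Spherical Bessel function j_0(x) = sin(x)/x, with j_0(0) = 1
    return math.sin(x) / x if x != 0.0 else 1.0

def overlap(m, n, steps=20000):
    # Integral_0^1 x^2 j_0(m*pi*x) j_0(n*pi*x) dx (trapezoidal rule);
    # m*pi and n*pi are the m-th and n-th positive zeros of j_0.
    h = 1.0 / steps
    f = lambda x: x * x * j0s(m * math.pi * x) * j0s(n * math.pi * x)
    s = 0.5 * (f(0.0) + f(1.0))
    for k in range(1, steps):
        s += f(k * h)
    return s * h

m = 2
# Diagonal entry should equal (1/2) * j_1(m*pi)^2 = 1/(2 m^2 pi^2)
print(overlap(m, m), 1.0 / (2.0 * (m * math.pi) ** 2))
print(overlap(2, 3))  # off-diagonal entry: should vanish
```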
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions: A α ( x ) d B α d x − d A α d x B α ( x ) = C α x {\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}} where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular, J α ( x ) d Y α d x − d J α d x Y α ( x ) = 2 π x {\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}} and I α ( x ) d K α d x − d I α d x K α ( x ) = − 1 x , {\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},} for α > −1. For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let 0 < j α , 1 < j α , 2 < ⋯ < j α , n < ⋯ {\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots } be all its positive zeros, then J α ( z ) = ( z 2 ) α Γ ( α + 1 ) ∏ n = 1 ∞ ( 1 − z 2 j α , n 2 ) {\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)} (There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.) === Recurrence relations === The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations 2 α x Z α ( x ) = Z α − 1 ( x ) + Z α + 1 ( x ) {\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)} and 2 d Z α ( x ) d x = Z α − 1 ( x ) − Z α + 1 ( x ) , {\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),} where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. 
In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that ( 1 x d d x ) m [ x α Z α ( x ) ] = x α − m Z α − m ( x ) , ( 1 x d d x ) m [ Z α ( x ) x α ] = ( − 1 ) m Z α + m ( x ) x α + m . {\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}} Using the previous relations one can arrive to similar relations for the Spherical Bessel functions: 2 α + 1 x j α ( x ) = j α − 1 + j α + 1 {\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}} and d j α ( x ) d x = j α − 1 − α + 1 x j α {\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }} Modified Bessel functions follow similar relations: e ( x 2 ) ( t + 1 t ) = ∑ n = − ∞ ∞ I n ( x ) t n {\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}} and e z cos ⁡ θ = I 0 ( z ) + 2 ∑ n = 1 ∞ I n ( z ) cos ⁡ n θ {\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta } and 1 2 π ∫ 0 2 π e z cos ⁡ ( m θ ) + y cos ⁡ θ d θ = I 0 ( z ) I 0 ( y ) + 2 ∑ n = 1 ∞ I n ( z ) I m n ( y ) . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).} The recurrence relation reads C α − 1 ( x ) − C α + 1 ( x ) = 2 α x C α ( x ) , C α − 1 ( x ) + C α + 1 ( x ) = 2 d d x C α ( x ) , {\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}} where Cα denotes Iα or eαiπKα. 
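The Zα recurrences above can be verified directly for Z = J and integer order, again using Bessel's integral, with the derivative relation checked by a central difference. An illustrative sketch (stdlib only):

```python
import math

def bessel_j(n, x, steps=2000):
    # Bessel's integral: J_n(x) = (1/pi) * Integral_0^pi cos(n*t - x*sin(t)) dt
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

x = 3.0
for n in (1, 2, 5):
    # Three-term recurrence: (2n/x) J_n(x) = J_{n-1}(x) + J_{n+1}(x)
    print(n, 2.0 * n / x * bessel_j(n, x), bessel_j(n - 1, x) + bessel_j(n + 1, x))

# Derivative relation: 2 J_n'(x) = J_{n-1}(x) - J_{n+1}(x),
# with J_n'(x) approximated by a central difference
eps = 1e-6
deriv = (bessel_j(2, x + eps) - bessel_j(2, x - eps)) / (2.0 * eps)
print(deriv, 0.5 * (bessel_j(1, x) - bessel_j(3, x)))
```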
These recurrence relations are useful for discrete diffusion problems. === Transcendence === In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative ⁠J'ν(x)/Jν(x)⁠ are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that Γ ( v + 1 ) ( 2 / x ) v J v ( x ) {\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)} is transcendental under the same assumptions. === Sums with Bessel functions === The product of two Bessel functions admits the following sum: ∑ ν = − ∞ ∞ J ν ( x ) J n − ν ( y ) = J n ( x + y ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),} ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( y ) = J n ( y − x ) . {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).} From these equalities it follows that ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( x ) = δ n , 0 {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}} and as a consequence ∑ ν = − ∞ ∞ J ν 2 ( x ) = 1. {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.} These sums can be extended to include a term multiplier that is a polynomial function of the index. For example, ∑ ν = − ∞ ∞ ν J ν ( x ) J ν + n ( x ) = x 2 ( δ n , 1 + δ n , − 1 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),} ∑ ν = − ∞ ∞ ν J ν 2 ( x ) = 0 , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,} ∑ ν = − ∞ ∞ ν 2 J ν ( x ) J ν + n ( x ) = x 2 ( δ n , − 1 − δ n , 1 ) + x 2 4 ( δ n , − 2 + 2 δ n , 0 + δ n , 2 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),} ∑ ν = − ∞ ∞ ν 2 J ν 2 ( x ) = x 2 2 . 
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.} == Multiplication theorem == The Bessel functions obey a multiplication theorem λ − ν J ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( 1 − λ 2 ) z 2 ) n J ν + n ( z ) , {\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),} where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are λ − ν I ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( λ 2 − 1 ) z 2 ) n I ν + n ( z ) {\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)} and λ − ν K ν ( λ z ) = ∑ n = 0 ∞ ( − 1 ) n n ! ( ( λ 2 − 1 ) z 2 ) n K ν + n ( z ) . {\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).} == Zeros of the Bessel function == === Bourget's hypothesis === Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929. === Transcendence === Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). 
It is also known that all roots of the higher derivatives J ν ( n ) ( x ) {\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values J 1 ( 3 ) ( ± 3 ) = 0 {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and J 0 ( 4 ) ( ± 3 ) = 0 {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0} . === Numerical approaches === For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004). === Numerical values === The first zeros of J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively. == History == === Waves and elasticity problems === A Bessel function first appears in the work of Daniel Bernoulli in 1732, in his analysis of a vibrating string, a problem tackled earlier by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now identified as J 0 ( x ) {\displaystyle J_{0}(x)} . Bernoulli also developed a method to find the zeros of the function. In 1736, Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions I n ( x ) {\displaystyle I_{n}(x)} . In the middle of the eighteenth century, Jean le Rond d'Alembert had found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions for vibrating strings. Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for J ± 1 / 3 ( x ) {\displaystyle J_{\pm 1/3}(x)} .
Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with J n ( x ) {\displaystyle J_{n}(x)} , for integer n. Toward the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents of the Bessel functions. Parseval, for example, found an integral representation of J 0 ( x ) {\displaystyle J_{0}(x)} using the cosine. At the beginning of the 1800s, Joseph Fourier used J 0 ( x ) {\displaystyle J_{0}(x)} to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions including Bessel functions of half-integer order (now known as spherical Bessel functions). === Astronomical problems === In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of the work of Fourier, which was published later. In 1824, Bessel carried out a systematic investigation of the functions, which earned them his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
The adiabatic theorem is a concept in quantum mechanics. Its original form, due to Max Born and Vladimir Fock (1928), was stated as follows: A physical system remains in its instantaneous eigenstate if a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum. In simpler terms, a quantum mechanical system subjected to gradually changing external conditions adapts its functional form, but when subjected to rapidly varying conditions there is insufficient time for the functional form to adapt, so the spatial probability density remains unchanged. == Adiabatic pendulum == At the 1911 Solvay conference, Einstein gave a lecture on the quantum hypothesis, which states that E = n h ν {\displaystyle E=nh\nu } for atomic oscillators. After Einstein's lecture, Hendrik Lorentz commented that, classically, if a simple pendulum is shortened by holding the wire between two fingers and sliding down, it seems that its energy will change smoothly as the pendulum is shortened. This seems to show that the quantum hypothesis is invalid for macroscopic systems, and if macroscopic systems do not follow the quantum hypothesis, then as the macroscopic system becomes microscopic, it seems the quantum hypothesis would be invalidated. Einstein replied that although both the energy E {\displaystyle E} and the frequency ν {\displaystyle \nu } would change, their ratio E ν {\displaystyle {\frac {E}{\nu }}} would still be conserved, thus saving the quantum hypothesis. Before the conference, Einstein had just read a paper by Paul Ehrenfest on the adiabatic hypothesis. We know that he had read it because he mentioned it in a letter to Michele Besso written before the conference. == Diabatic vs. 
adiabatic processes == At some initial time t 0 {\displaystyle t_{0}} a quantum-mechanical system has an energy given by the Hamiltonian H ^ ( t 0 ) {\displaystyle {\hat {H}}(t_{0})} ; the system is in an eigenstate of H ^ ( t 0 ) {\displaystyle {\hat {H}}(t_{0})} labelled ψ ( x , t 0 ) {\displaystyle \psi (x,t_{0})} . Changing conditions modify the Hamiltonian in a continuous manner, resulting in a final Hamiltonian H ^ ( t 1 ) {\displaystyle {\hat {H}}(t_{1})} at some later time t 1 {\displaystyle t_{1}} . The system will evolve according to the time-dependent Schrödinger equation, to reach a final state ψ ( x , t 1 ) {\displaystyle \psi (x,t_{1})} . The adiabatic theorem states that the modification to the system depends critically on the time τ = t 1 − t 0 {\displaystyle \tau =t_{1}-t_{0}} during which the modification takes place. For a truly adiabatic process we require τ → ∞ {\displaystyle \tau \to \infty } ; in this case the final state ψ ( x , t 1 ) {\displaystyle \psi (x,t_{1})} will be an eigenstate of the final Hamiltonian H ^ ( t 1 ) {\displaystyle {\hat {H}}(t_{1})} , with a modified configuration: | ψ ( x , t 1 ) | 2 ≠ | ψ ( x , t 0 ) | 2 . {\displaystyle |\psi (x,t_{1})|^{2}\neq |\psi (x,t_{0})|^{2}.} The degree to which a given change approximates an adiabatic process depends on both the energy separation between ψ ( x , t 0 ) {\displaystyle \psi (x,t_{0})} and adjacent states, and the ratio of the interval τ {\displaystyle \tau } to the characteristic timescale of the evolution of ψ ( x , t 0 ) {\displaystyle \psi (x,t_{0})} for a time-independent Hamiltonian, τ int = 2 π ℏ / E 0 {\displaystyle \tau _{\text{int}}=2\pi \hbar /E_{0}} , where E 0 {\displaystyle E_{0}} is the energy of ψ ( x , t 0 ) {\displaystyle \psi (x,t_{0})} . Conversely, in the limit τ → 0 {\displaystyle \tau \to 0} we have infinitely rapid, or diabatic passage; the configuration of the state remains unchanged: | ψ ( x , t 1 ) | 2 = | ψ ( x , t 0 ) | 2 . 
{\displaystyle |\psi (x,t_{1})|^{2}=|\psi (x,t_{0})|^{2}.} The so-called "gap condition" included in Born and Fock's original definition given above refers to a requirement that the spectrum of H ^ {\displaystyle {\hat {H}}} is discrete and nondegenerate, such that there is no ambiguity in the ordering of the states (one can easily establish which eigenstate of H ^ ( t 1 ) {\displaystyle {\hat {H}}(t_{1})} corresponds to ψ ( t 0 ) {\displaystyle \psi (t_{0})} ). In 1999 J. E. Avron and A. Elgart reformulated the adiabatic theorem to adapt it to situations without a gap. === Comparison with the adiabatic concept in thermodynamics === The term "adiabatic" is traditionally used in thermodynamics to describe processes without the exchange of heat between system and environment (see adiabatic process); more precisely, these processes are usually faster than the timescale of heat exchange. (For example, a pressure wave is adiabatic with respect to a heat wave, which is not adiabatic.) "Adiabatic" in the context of thermodynamics is often used as a synonym for a fast process. The classical and quantum mechanical definition is instead closer to the thermodynamical concept of a quasistatic process: a process that is almost always at equilibrium, i.e. slower than the time scales of the internal energy-exchange interactions (a "normal" atmospheric heat wave is quasi-static, while a pressure wave is not). "Adiabatic" in the context of mechanics is often used as a synonym for a slow process. In the quantum world, adiabatic means, for example, that the time scale of electron–photon interactions is much faster, almost instantaneous, with respect to the average time scale of electron and photon propagation. Therefore, we can model the interactions as stretches of continuous propagation of electrons and photons (i.e. states at equilibrium) plus quantum jumps between states (i.e. instantaneous transitions).
The adiabatic theorem in this heuristic context essentially says that quantum jumps are preferably avoided, and that the system tries to conserve its state and quantum numbers. The quantum mechanical concept of adiabatic is related to the adiabatic invariant; it is often used in the old quantum theory and has no direct relation to heat exchange. == Example systems == === Simple pendulum === As an example, consider a pendulum oscillating in a vertical plane. If the support is moved, the mode of oscillation of the pendulum will change. If the support is moved sufficiently slowly, the motion of the pendulum relative to the support will remain unchanged. A gradual change in external conditions allows the system to adapt, such that it retains its initial character. The detailed classical example is available on the Adiabatic invariant page. === Quantum harmonic oscillator === The classical nature of a pendulum precludes a full description of the effects of the adiabatic theorem. As a further example, consider a quantum harmonic oscillator as the spring constant k {\displaystyle k} is increased. Classically this is equivalent to increasing the stiffness of a spring; quantum-mechanically the effect is a narrowing of the potential energy curve in the system Hamiltonian. If k {\displaystyle k} is increased adiabatically ( d k d t → 0 ) {\textstyle \left({\frac {dk}{dt}}\to 0\right)} then the system at time t {\displaystyle t} will be in an instantaneous eigenstate ψ ( t ) {\displaystyle \psi (t)} of the current Hamiltonian H ^ ( t ) {\displaystyle {\hat {H}}(t)} , corresponding to the initial eigenstate of H ^ ( 0 ) {\displaystyle {\hat {H}}(0)} . For the special case of a system like the quantum harmonic oscillator described by a single quantum number, this means the quantum number will remain unchanged.
Figure 1 shows how a harmonic oscillator, initially in its ground state, n = 0 {\displaystyle n=0} , remains in the ground state as the potential energy curve is compressed; the functional form of the state adapting to the slowly varying conditions. For a rapidly increased spring constant, the system undergoes a diabatic process ( d k d t → ∞ ) {\textstyle \left({\frac {dk}{dt}}\to \infty \right)} in which the system has no time to adapt its functional form to the changing conditions. While the final state must look identical to the initial state ( | ψ ( t ) | 2 = | ψ ( 0 ) | 2 ) {\displaystyle \left(|\psi (t)|^{2}=|\psi (0)|^{2}\right)} for a process occurring over a vanishing time period, there is no eigenstate of the new Hamiltonian, H ^ ( t ) {\displaystyle {\hat {H}}(t)} , that resembles the initial state. The final state is composed of a linear superposition of many different eigenstates of H ^ ( t ) {\displaystyle {\hat {H}}(t)} which sum to reproduce the form of the initial state. === Avoided curve crossing === For a more widely applicable example, consider a 2-level atom subjected to an external magnetic field. The states, labelled | 1 ⟩ {\displaystyle |1\rangle } and | 2 ⟩ {\displaystyle |2\rangle } using bra–ket notation, can be thought of as atomic angular-momentum states, each with a particular geometry. For reasons that will become clear these states will henceforth be referred to as the diabatic states. The system wavefunction can be represented as a linear combination of the diabatic states: | Ψ ⟩ = c 1 ( t ) | 1 ⟩ + c 2 ( t ) | 2 ⟩ . 
{\displaystyle |\Psi \rangle =c_{1}(t)|1\rangle +c_{2}(t)|2\rangle .} With the field absent, the energetic separation of the diabatic states is equal to ℏ ω 0 {\displaystyle \hbar \omega _{0}} ; the energy of state | 1 ⟩ {\displaystyle |1\rangle } increases with increasing magnetic field (a low-field-seeking state), while the energy of state | 2 ⟩ {\displaystyle |2\rangle } decreases with increasing magnetic field (a high-field-seeking state). Assuming the magnetic-field dependence is linear, the Hamiltonian matrix for the system with the field applied can be written H = ( μ B ( t ) − ℏ ω 0 / 2 a a ∗ ℏ ω 0 / 2 − μ B ( t ) ) {\displaystyle \mathbf {H} ={\begin{pmatrix}\mu B(t)-\hbar \omega _{0}/2&a\\a^{*}&\hbar \omega _{0}/2-\mu B(t)\end{pmatrix}}} where μ {\displaystyle \mu } is the magnetic moment of the atom, assumed to be the same for the two diabatic states, and a {\displaystyle a} is some time-independent coupling between the two states. The diagonal elements are the energies of the diabatic states ( E 1 ( t ) {\displaystyle E_{1}(t)} and E 2 ( t ) {\displaystyle E_{2}(t)} ), however, as H {\displaystyle \mathbf {H} } is not a diagonal matrix, it is clear that these states are not eigenstates of H {\displaystyle \mathbf {H} } due to the off-diagonal coupling constant. The eigenvectors of the matrix H {\displaystyle \mathbf {H} } are the eigenstates of the system, which we will label | ϕ 1 ( t ) ⟩ {\displaystyle |\phi _{1}(t)\rangle } and | ϕ 2 ( t ) ⟩ {\displaystyle |\phi _{2}(t)\rangle } , with corresponding eigenvalues ε 1 ( t ) = − 1 2 4 a 2 + ( ℏ ω 0 − 2 μ B ( t ) ) 2 ε 2 ( t ) = + 1 2 4 a 2 + ( ℏ ω 0 − 2 μ B ( t ) ) 2 . 
{\displaystyle {\begin{aligned}\varepsilon _{1}(t)&=-{\frac {1}{2}}{\sqrt {4a^{2}+(\hbar \omega _{0}-2\mu B(t))^{2}}}\\[4pt]\varepsilon _{2}(t)&=+{\frac {1}{2}}{\sqrt {4a^{2}+(\hbar \omega _{0}-2\mu B(t))^{2}}}.\end{aligned}}} It is important to realise that the eigenvalues ε 1 ( t ) {\displaystyle \varepsilon _{1}(t)} and ε 2 ( t ) {\displaystyle \varepsilon _{2}(t)} are the only allowed outputs for any individual measurement of the system energy, whereas the diabatic energies E 1 ( t ) {\displaystyle E_{1}(t)} and E 2 ( t ) {\displaystyle E_{2}(t)} correspond to the expectation values for the energy of the system in the diabatic states | 1 ⟩ {\displaystyle |1\rangle } and | 2 ⟩ {\displaystyle |2\rangle } . Figure 2 shows the dependence of the diabatic and adiabatic energies on the value of the magnetic field; note that for non-zero coupling the eigenvalues of the Hamiltonian cannot be degenerate, and thus we have an avoided crossing. If an atom is initially in state | ϕ 2 ( t 0 ) ⟩ {\displaystyle |\phi _{2}(t_{0})\rangle } in zero magnetic field (on the red curve, at the extreme left), an adiabatic increase in magnetic field ( d B d t → 0 ) {\textstyle \left({\frac {dB}{dt}}\to 0\right)} will ensure the system remains in an eigenstate of the Hamiltonian | ϕ 2 ( t ) ⟩ {\displaystyle |\phi _{2}(t)\rangle } throughout the process (follows the red curve). A diabatic increase in magnetic field ( d B d t → ∞ ) {\textstyle \left({\frac {dB}{dt}}\to \infty \right)} will ensure the system follows the diabatic path (the dotted blue line), such that the system undergoes a transition to state | ϕ 1 ( t 1 ) ⟩ {\displaystyle |\phi _{1}(t_{1})\rangle } . For finite magnetic field slew rates ( 0 < d B d t < ∞ ) {\textstyle \left(0<{\frac {dB}{dt}}<\infty \right)} there will be a finite probability of finding the system in either of the two eigenstates. See below for approaches to calculating these probabilities. 
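The avoided crossing can be verified numerically by diagonalizing the matrix H directly. A minimal sketch (illustrative parameter values assumed, in units where ħω₀ = μ = 1 and with a real coupling a):

```python
import numpy as np

# Two-level Hamiltonian from the text, hbar*omega0 = 1, mu = 1 (assumed units),
# real coupling a:
#     H = [[mu*B - 1/2,  a],
#          [a,  1/2 - mu*B]]
# Diagonalize numerically and compare with the closed-form eigenvalues
# +/- (1/2) * sqrt(4 a^2 + (hbar*omega0 - 2 mu B)^2).

hbar_omega0, mu, a = 1.0, 1.0, 0.1

def eigenvalues(B):
    H = np.array([[mu * B - hbar_omega0 / 2, a],
                  [a, hbar_omega0 / 2 - mu * B]])
    return np.linalg.eigvalsh(H)        # ascending order: (epsilon_1, epsilon_2)

for B in np.linspace(0.0, 1.0, 11):
    e1, e2 = eigenvalues(B)
    gap_exact = np.sqrt(4 * a**2 + (hbar_omega0 - 2 * mu * B)**2)
    assert np.isclose(e2 - e1, gap_exact)
    assert e2 - e1 >= 2 * abs(a) - 1e-12   # the gap never closes: avoided crossing

print("minimum gap:", 2 * abs(a))          # attained at the crossing mu*B = hbar*omega0/2
```

The assertions confirm that for any nonzero coupling a the two eigenvalues stay at least 2|a| apart, which is exactly the avoided-crossing behaviour shown in Figure 2.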
These results are extremely important in atomic and molecular physics for control of the energy-state distribution in a population of atoms or molecules. == Mathematical statement == Under a slowly changing Hamiltonian H ( t ) {\displaystyle H(t)} with instantaneous eigenstates | n ( t ) ⟩ {\displaystyle |n(t)\rangle } and corresponding energies E n ( t ) {\displaystyle E_{n}(t)} , a quantum system evolves from the initial state | ψ ( 0 ) ⟩ = ∑ n c n ( 0 ) | n ( 0 ) ⟩ {\displaystyle |\psi (0)\rangle =\sum _{n}c_{n}(0)|n(0)\rangle } to the final state | ψ ( t ) ⟩ = ∑ n c n ( t ) | n ( t ) ⟩ , {\displaystyle |\psi (t)\rangle =\sum _{n}c_{n}(t)|n(t)\rangle ,} where the coefficients undergo the change of phase c n ( t ) = c n ( 0 ) e i θ n ( t ) e i γ n ( t ) {\displaystyle c_{n}(t)=c_{n}(0)e^{i\theta _{n}(t)}e^{i\gamma _{n}(t)}} with the dynamical phase θ m ( t ) = − 1 ℏ ∫ 0 t E m ( t ′ ) d t ′ {\displaystyle \theta _{m}(t)=-{\frac {1}{\hbar }}\int _{0}^{t}E_{m}(t')dt'} and geometric phase γ m ( t ) = i ∫ 0 t ⟨ m ( t ′ ) | m ˙ ( t ′ ) ⟩ d t ′ . {\displaystyle \gamma _{m}(t)=i\int _{0}^{t}\langle m(t')|{\dot {m}}(t')\rangle dt'.} In particular, | c n ( t ) | 2 = | c n ( 0 ) | 2 {\displaystyle |c_{n}(t)|^{2}=|c_{n}(0)|^{2}} , so if the system begins in an eigenstate of H ( 0 ) {\displaystyle H(0)} , it remains in an eigenstate of H ( t ) {\displaystyle H(t)} during the evolution with a change of phase only. == Example applications == Often a solid crystal is modeled as a set of independent valence electrons moving in a mean perfectly periodic potential generated by a rigid lattice of ions. Using the adiabatic theorem we can also include the motion of the valence electrons across the crystal and the thermal motion of the ions, as in the Born–Oppenheimer approximation.
This explains many phenomena in: thermodynamics (the temperature dependence of specific heat, thermal expansion, and melting); transport phenomena (the temperature dependence of the electric resistivity of conductors, the temperature dependence of the electric conductivity in insulators, and some properties of low-temperature superconductivity); and optics (optical absorption in the infrared for ionic crystals, Brillouin scattering, and Raman scattering). == Deriving conditions for diabatic vs adiabatic passage == We will now pursue a more rigorous analysis. Making use of bra–ket notation, the state vector of the system at time t {\displaystyle t} can be written | ψ ( t ) ⟩ = ∑ n c n A ( t ) e − i E n t / ℏ | ϕ n ⟩ , {\displaystyle |\psi (t)\rangle =\sum _{n}c_{n}^{A}(t)e^{-iE_{n}t/\hbar }|\phi _{n}\rangle ,} where the spatial wavefunction alluded to earlier is the projection of the state vector onto the eigenstates of the position operator ψ ( x , t ) = ⟨ x | ψ ( t ) ⟩ . {\displaystyle \psi (x,t)=\langle x|\psi (t)\rangle .} It is instructive to examine the limiting cases, in which τ {\displaystyle \tau } is very large (adiabatic, or gradual change) and very small (diabatic, or sudden change). Consider a system Hamiltonian undergoing continuous change from an initial value H ^ 0 {\displaystyle {\hat {H}}_{0}} , at time t 0 {\displaystyle t_{0}} , to a final value H ^ 1 {\displaystyle {\hat {H}}_{1}} , at time t 1 {\displaystyle t_{1}} , where τ = t 1 − t 0 {\displaystyle \tau =t_{1}-t_{0}} . The evolution of the system can be described in the Schrödinger picture by the time-evolution operator, defined by the integral equation U ^ ( t , t 0 ) = 1 − i ℏ ∫ t 0 t H ^ ( t ′ ) U ^ ( t ′ , t 0 ) d t ′ , {\displaystyle {\hat {U}}(t,t_{0})=1-{\frac {i}{\hbar }}\int _{t_{0}}^{t}{\hat {H}}(t'){\hat {U}}(t',t_{0})dt',} which is equivalent to the Schrödinger equation.
i ℏ ∂ ∂ t U ^ ( t , t 0 ) = H ^ ( t ) U ^ ( t , t 0 ) , {\displaystyle i\hbar {\frac {\partial }{\partial t}}{\hat {U}}(t,t_{0})={\hat {H}}(t){\hat {U}}(t,t_{0}),} along with the initial condition U ^ ( t 0 , t 0 ) = 1 {\displaystyle {\hat {U}}(t_{0},t_{0})=1} . Given knowledge of the system wave function at t 0 {\displaystyle t_{0}} , the evolution of the system up to a later time t {\displaystyle t} can be obtained using | ψ ( t ) ⟩ = U ^ ( t , t 0 ) | ψ ( t 0 ) ⟩ . {\displaystyle |\psi (t)\rangle ={\hat {U}}(t,t_{0})|\psi (t_{0})\rangle .} The problem of determining the adiabaticity of a given process is equivalent to establishing the dependence of U ^ ( t 1 , t 0 ) {\displaystyle {\hat {U}}(t_{1},t_{0})} on τ {\displaystyle \tau } . To determine the validity of the adiabatic approximation for a given process, one can calculate the probability of finding the system in a state other than that in which it started. Using bra–ket notation and using the definition | 0 ⟩ ≡ | ψ ( t 0 ) ⟩ {\displaystyle |0\rangle \equiv |\psi (t_{0})\rangle } , we have: ζ = ⟨ 0 | U ^ † ( t 1 , t 0 ) U ^ ( t 1 , t 0 ) | 0 ⟩ − ⟨ 0 | U ^ † ( t 1 , t 0 ) | 0 ⟩ ⟨ 0 | U ^ ( t 1 , t 0 ) | 0 ⟩ . {\displaystyle \zeta =\langle 0|{\hat {U}}^{\dagger }(t_{1},t_{0}){\hat {U}}(t_{1},t_{0})|0\rangle -\langle 0|{\hat {U}}^{\dagger }(t_{1},t_{0})|0\rangle \langle 0|{\hat {U}}(t_{1},t_{0})|0\rangle .} We can expand U ^ ( t 1 , t 0 ) {\displaystyle {\hat {U}}(t_{1},t_{0})} U ^ ( t 1 , t 0 ) = 1 + 1 i ℏ ∫ t 0 t 1 H ^ ( t ) d t + 1 ( i ℏ ) 2 ∫ t 0 t 1 d t ′ ∫ t 0 t ′ d t ″ H ^ ( t ′ ) H ^ ( t ″ ) + ⋯ . 
{\displaystyle {\hat {U}}(t_{1},t_{0})=1+{1 \over i\hbar }\int _{t_{0}}^{t_{1}}{\hat {H}}(t)dt+{1 \over (i\hbar )^{2}}\int _{t_{0}}^{t_{1}}dt'\int _{t_{0}}^{t'}dt''{\hat {H}}(t'){\hat {H}}(t'')+\cdots .} In the perturbative limit we can take just the first two terms and substitute them into our equation for ζ {\displaystyle \zeta } . Recognizing that 1 τ ∫ t 0 t 1 H ^ ( t ) d t ≡ H ¯ {\displaystyle {1 \over \tau }\int _{t_{0}}^{t_{1}}{\hat {H}}(t)dt\equiv {\bar {H}}} is the system Hamiltonian averaged over the interval t 0 → t 1 {\displaystyle t_{0}\to t_{1}} , we have: ζ = ⟨ 0 | ( 1 + i ℏ τ H ¯ ) ( 1 − i ℏ τ H ¯ ) | 0 ⟩ − ⟨ 0 | ( 1 + i ℏ τ H ¯ ) | 0 ⟩ ⟨ 0 | ( 1 − i ℏ τ H ¯ ) | 0 ⟩ . {\displaystyle \zeta =\langle 0|(1+{\tfrac {i}{\hbar }}\tau {\bar {H}})(1-{\tfrac {i}{\hbar }}\tau {\bar {H}})|0\rangle -\langle 0|(1+{\tfrac {i}{\hbar }}\tau {\bar {H}})|0\rangle \langle 0|(1-{\tfrac {i}{\hbar }}\tau {\bar {H}})|0\rangle .} After expanding the products and making the appropriate cancellations, we are left with: ζ = τ 2 ℏ 2 ( ⟨ 0 | H ¯ 2 | 0 ⟩ − ⟨ 0 | H ¯ | 0 ⟩ ⟨ 0 | H ¯ | 0 ⟩ ) , {\displaystyle \zeta ={\frac {\tau ^{2}}{\hbar ^{2}}}\left(\langle 0|{\bar {H}}^{2}|0\rangle -\langle 0|{\bar {H}}|0\rangle \langle 0|{\bar {H}}|0\rangle \right),} giving ζ = τ 2 Δ H ¯ 2 ℏ 2 , {\displaystyle \zeta ={\frac {\tau ^{2}\Delta {\bar {H}}^{2}}{\hbar ^{2}}},} where Δ H ¯ {\displaystyle \Delta {\bar {H}}} is the root mean square deviation of the system Hamiltonian averaged over the interval of interest. The sudden approximation is valid when ζ ≪ 1 {\displaystyle \zeta \ll 1} (the probability of finding the system in a state other than that in which it started approaches zero), thus the validity condition is given by τ ≪ ℏ Δ H ¯ , {\displaystyle \tau \ll {\hbar \over \Delta {\bar {H}}},} which is a statement of the time-energy form of the Heisenberg uncertainty principle.
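This validity condition can be illustrated numerically. For a constant Hamiltonian the evolution operator Û(t₁,t₀) = e^{−iĤτ/ħ} is known exactly, so ζ can be computed both directly and from the small-τ formula ζ = τ²ΔH̄²/ħ². A sketch with an assumed two-level Hamiltonian, in units where ħ = 1:

```python
import numpy as np
from numpy.linalg import eigh

# Check zeta ~ tau^2 * (Delta H)^2 for small tau (hbar = 1), using a constant
# two-level Hamiltonian so that U(t1, t0) = exp(-i H tau) is exact and, since
# U is unitary, zeta = 1 - |<0| U |0>|^2.

H = np.array([[0.3, 0.2], [0.2, -0.1]])   # assumed illustrative Hamiltonian
psi0 = np.array([1.0, 0.0])               # the initial state |0>

evals, evecs = eigh(H)

def zeta_exact(tau):
    U = evecs @ np.diag(np.exp(-1j * evals * tau)) @ evecs.conj().T
    amp = psi0.conj() @ U @ psi0
    return 1.0 - abs(amp) ** 2

mean_H = psi0 @ H @ psi0
var_H = psi0 @ (H @ H) @ psi0 - mean_H**2  # (Delta H)^2 in the state |0>

for tau in (0.01, 0.02, 0.05):
    approx = tau**2 * var_H
    assert abs(zeta_exact(tau) - approx) < 5 * tau**3  # agreement to O(tau^3)

print("sudden approximation valid for tau <<", 1 / np.sqrt(var_H))
```

The loop confirms that the exact transition probability matches τ²ΔH̄² up to higher-order corrections, so the sudden approximation is controlled by the energy spread of the initial state, as the uncertainty-principle statement above says.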
=== Diabatic passage === In the limit τ → 0 {\displaystyle \tau \to 0} we have infinitely rapid, or diabatic passage: lim τ → 0 U ^ ( t 1 , t 0 ) = 1. {\displaystyle \lim _{\tau \to 0}{\hat {U}}(t_{1},t_{0})=1.} The functional form of the system remains unchanged: | ⟨ x | ψ ( t 1 ) ⟩ | 2 = | ⟨ x | ψ ( t 0 ) ⟩ | 2 . {\displaystyle |\langle x|\psi (t_{1})\rangle |^{2}=\left|\langle x|\psi (t_{0})\rangle \right|^{2}.} This is sometimes referred to as the sudden approximation. The validity of the approximation for a given process can be characterized by the probability that the state of the system remains unchanged: P D = 1 − ζ . {\displaystyle P_{D}=1-\zeta .} === Adiabatic passage === In the limit τ → ∞ {\displaystyle \tau \to \infty } we have infinitely slow, or adiabatic passage. The system evolves, adapting its form to the changing conditions, | ⟨ x | ψ ( t 1 ) ⟩ | 2 ≠ | ⟨ x | ψ ( t 0 ) ⟩ | 2 . {\displaystyle |\langle x|\psi (t_{1})\rangle |^{2}\neq |\langle x|\psi (t_{0})\rangle |^{2}.} If the system is initially in an eigenstate of H ^ ( t 0 ) {\displaystyle {\hat {H}}(t_{0})} , after a period τ {\displaystyle \tau } it will have passed into the corresponding eigenstate of H ^ ( t 1 ) {\displaystyle {\hat {H}}(t_{1})} . This is referred to as the adiabatic approximation. The validity of the approximation for a given process can be determined from the probability that the final state of the system is different from the initial state: P A = ζ . {\displaystyle P_{A}=\zeta .} == Calculating adiabatic passage probabilities == === The Landau–Zener formula === In 1932 an analytic solution to the problem of calculating adiabatic transition probabilities was published separately by Lev Landau and Clarence Zener, for the special case of a linearly changing perturbation in which the time-varying component does not couple the relevant states (hence the coupling in the diabatic Hamiltonian matrix is independent of time). 
The key figure of merit in this approach is the Landau–Zener velocity: v LZ = ∂ ∂ t | E 2 − E 1 | ∂ ∂ q | E 2 − E 1 | ≈ d q d t , {\displaystyle v_{\text{LZ}}={{\frac {\partial }{\partial t}}|E_{2}-E_{1}| \over {\frac {\partial }{\partial q}}|E_{2}-E_{1}|}\approx {\frac {dq}{dt}},} where q {\displaystyle q} is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} are the energies of the two diabatic (crossing) states. A large v LZ {\displaystyle v_{\text{LZ}}} results in a large diabatic transition probability and vice versa. Using the Landau–Zener formula the probability, P D {\displaystyle P_{\rm {D}}} , of a diabatic transition is given by P D = e − 2 π Γ Γ = a 2 / ℏ | ∂ ∂ t ( E 2 − E 1 ) | = a 2 / ℏ | d q d t ∂ ∂ q ( E 2 − E 1 ) | = a 2 ℏ | α | {\displaystyle {\begin{aligned}P_{\rm {D}}&=e^{-2\pi \Gamma }\\\Gamma &={a^{2}/\hbar \over \left|{\frac {\partial }{\partial t}}(E_{2}-E_{1})\right|}={a^{2}/\hbar \over \left|{\frac {dq}{dt}}{\frac {\partial }{\partial q}}(E_{2}-E_{1})\right|}\\&={a^{2} \over \hbar |\alpha |}\\\end{aligned}}} === The numerical approach === For a transition involving a nonlinear change in perturbation variable or time-dependent coupling between the diabatic states, the equations of motion for the system dynamics cannot be solved analytically. The diabatic transition probability can still be obtained using one of the wide varieties of numerical solution algorithms for ordinary differential equations. 
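Both the analytic formula and the numerical route can be sketched together. The example below (illustrative parameters, ħ = 1; a fixed-step RK4 integrator is used so no external ODE library is needed) integrates the two-level Schrödinger equation iċ = H(t)c through a linear sweep in the diabatic basis and compares the final diabatic population with e^{−2πΓ}:

```python
import numpy as np

# Landau-Zener sweep in the diabatic basis (hbar = 1, assumed parameters):
#     H(t) = [[ alpha*t/2,  a        ],
#             [ a,         -alpha*t/2]]
# Starting far before the crossing in diabatic state |1>, the population left
# in |1> far after the crossing should approach P_D = exp(-2*pi*a^2/alpha).

def lz_numeric(a, alpha, T=60.0, dt=1e-3):
    """Fixed-step RK4 integration of i dc/dt = H(t) c from t = -T to t = +T."""
    def deriv(t, c):
        H = np.array([[alpha * t / 2, a], [a, -alpha * t / 2]])
        return -1j * (H @ c)

    c = np.array([1.0 + 0j, 0.0 + 0j])
    steps = int(round(2 * T / dt))
    for i in range(steps):
        t = -T + i * dt
        k1 = deriv(t, c)
        k2 = deriv(t + dt / 2, c + dt / 2 * k1)
        k3 = deriv(t + dt / 2, c + dt / 2 * k2)
        k4 = deriv(t + dt, c + dt * k3)
        c = c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return abs(c[0]) ** 2              # diabatic transition probability

a, alpha = 0.5, 1.0
p_formula = np.exp(-2 * np.pi * a**2 / alpha)   # ~0.208
p_numeric = lz_numeric(a, alpha)
print(p_formula, p_numeric)            # agree to within a few percent at finite T
```

At finite sweep length the numerical result oscillates around the Landau–Zener value, converging as T grows; this is the generic behaviour of the numerical approach described next.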
The equations to be solved can be obtained from the time-dependent Schrödinger equation: i ℏ c _ ˙ A ( t ) = H A ( t ) c _ A ( t ) , {\displaystyle i\hbar {\dot {\underline {c}}}^{A}(t)=\mathbf {H} _{A}(t){\underline {c}}^{A}(t),} where c _ A ( t ) {\displaystyle {\underline {c}}^{A}(t)} is a vector containing the adiabatic state amplitudes, H A ( t ) {\displaystyle \mathbf {H} _{A}(t)} is the time-dependent adiabatic Hamiltonian, and the overdot represents a time derivative. Comparison of the initial conditions used with the values of the state amplitudes following the transition can yield the diabatic transition probability. In particular, for a two-state system: P D = | c 2 A ( t 1 ) | 2 {\displaystyle P_{D}=|c_{2}^{A}(t_{1})|^{2}} for a system that began with | c 1 A ( t 0 ) | 2 = 1 {\displaystyle |c_{1}^{A}(t_{0})|^{2}=1} . == See also == Landau–Zener formula Berry phase Quantum stirring, ratchets, and pumping Adiabatic quantum motor Born–Oppenheimer approximation Eigenstate thermalization hypothesis Adiabatic process == References ==
Wikipedia/Adiabatic_process_(quantum_mechanics)
An adiabatic process (adiabatic from Ancient Greek ἀδιάβατος (adiábatos) 'impassable') is a type of thermodynamic process that occurs without transferring heat between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work and/or mass flow. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics. The opposite term to "adiabatic" is diabatic. Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings. In meteorology, adiabatic expansion and cooling of moist air, which can be triggered by winds flowing up and over a mountain for example, can cause the water vapor pressure to exceed the saturation vapor pressure. Expansion and cooling beyond the saturation vapor pressure is often idealized as a pseudo-adiabatic process whereby excess vapor instantly precipitates into water droplets. The change in temperature of air undergoing pseudo-adiabatic expansion differs from that of air undergoing adiabatic expansion because latent heat is released by precipitation. == Description == A process without transfer of heat to or from a system, so that Q = 0, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic.
The same can be said of the expansion process of such a system. The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the relevant modulus of elasticity (the adiabatic bulk modulus) can be expressed as E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume (γ = ⁠Cp/Cv⁠) and P is the pressure of the gas. === Various applications of the adiabatic assumption === For a closed system, one may write the first law of thermodynamics as ΔU = Q − W, where ΔU denotes the change of the system's internal energy, Q the quantity of energy added to it as heat, and W the work done by the system on its surroundings. If the system has such rigid walls that work cannot be transferred in or out (W = 0), and the walls are not adiabatic and energy is added in the form of heat (Q > 0), and there is no phase change, then the temperature of the system will rise. If the system has such rigid walls that pressure–volume work cannot be done, but the walls are adiabatic (Q = 0), and energy is added as isochoric (constant volume) work in the form of friction or the stirring of a viscous fluid within the system (W < 0), and there is no phase change, then the temperature of the system will rise. If the system walls are adiabatic (Q = 0) but not rigid (W ≠ 0), and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure–volume work (W < 0), and there is no phase change, then the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Ideally, if the process were reversed the energy could be recovered entirely as work done by the system.
If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic (ΔS = 0). Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work. If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0, and ΔS > 0 according to the second law of thermodynamics. Naturally occurring adiabatic processes are irreversible (entropy is produced). The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure-volume work (denoted by P dV). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation. The other extreme kind of work is isochoric work (dV = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. 
Every natural process, adiabatic or not, is irreversible, with ΔS > 0, as friction or viscosity are always present to some extent. == Adiabatic compression and expansion == The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic compression occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising the temperature where in many practical situations heat conduction through walls can be slow compared with the compression time. This finds practical application in diesel engines which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it. Adiabatic compression occurs in the Earth's atmosphere when an air mass descends, for example, in a Katabatic wind, Foehn wind, or Chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process. Adiabatic expansion occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand in size, thus causing it to do work on its surroundings. When the pressure applied on a parcel of gas is reduced, the gas in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. 
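The compressional warming of a descending air parcel can be estimated from the ideal-gas adiabatic relation T ∝ P^((γ−1)/γ). A rough sketch (the pressure levels and starting temperature are illustrative assumptions, not values from the text), for dry air with γ = 7/5:

```python
# Dry air parcel descending from the 800 hPa level (T = 270 K, assumed) to
# 1000 hPa, compressed adiabatically: T2 = T1 * (P2/P1)**((gamma - 1)/gamma).

gamma = 7 / 5                       # diatomic gas (dry air)
T1, P1, P2 = 270.0, 800e2, 1000e2   # K, Pa, Pa (illustrative values)

T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
print(round(T2, 1))                 # ~287.8 K: roughly 18 K of compressional warming
```

This order of magnitude, a warming of a few kelvin per 10 kPa of descent, is what makes foehn and chinook winds noticeably warm and dry.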
Adiabatic expansion occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pilei or lenticular clouds. Due in part to adiabatic expansion in mountainous areas, snowfall infrequently occurs in some parts of the Sahara desert. Adiabatic expansion does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic expansion. Also, the contents of an expanding universe can be described (to first order) as an adiabatically expanding fluid. (See heat death of the universe.) Rising magma also undergoes adiabatic expansion before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites. In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The slight decrease in temperature with shallowing depth is due to the decrease in pressure the shallower the material is in the Earth. Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes. In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist. == Ideal gas (reversible process) == The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation P V γ = constant , {\displaystyle PV^{\gamma }={\text{constant}},} where P is pressure, V is volume, and γ is the adiabatic index or heat capacity ratio defined as γ = C P C V = f + 2 f . 
{\displaystyle \gamma ={\frac {C_{P}}{C_{V}}}={\frac {f+2}{f}}.} Here CP is the specific heat for constant pressure, CV is the specific heat for constant volume, and f is the number of degrees of freedom (3 for a monatomic gas, 5 for a diatomic gas or a gas of linear molecules such as carbon dioxide). For a monatomic ideal gas, γ = ⁠5/3⁠, and for a diatomic gas (such as nitrogen and oxygen, the main components of air), γ = ⁠7/5⁠. Note that the above formula is only applicable to classical ideal gases (that is, gases far above absolute zero temperature) and not Bose–Einstein or Fermi gases. One can also use the ideal gas law to rewrite the above relationship between P and V as P 1 − γ T γ = constant , T V γ − 1 = constant {\displaystyle {\begin{aligned}P^{1-\gamma }T^{\gamma }&={\text{constant}},\\TV^{\gamma -1}&={\text{constant}}\end{aligned}}} where T is the absolute or thermodynamic temperature. === Example of adiabatic compression === The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm3 = 0.001 m3); the gas within is the air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so γ = ⁠7/5⁠); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure). 
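The numbers in this worked example can be reproduced directly from the stated assumptions; a short sketch:

```python
# Reproducing the engine example (values from the text): gamma = (f + 2)/f with
# f = 5 degrees of freedom, then P1 V1**gamma = P2 V2**gamma and the ideal gas
# law give the final pressure and temperature.

f = 5
gamma = (f + 2) / f                 # 7/5 for a diatomic gas
assert gamma == 1.4

P1, V1, T1 = 1.0e5, 1.0e-3, 300.0   # Pa, m^3, K (1 bar, 1 L, ~27 degC)
V2 = 1.0e-4                         # m^3 (10:1 compression)

const = P1 * V1**gamma              # the adiabatic constant, ~6.31 Pa m^(21/5)
P2 = P1 * (V1 / V2) ** gamma        # ~2.51e6 Pa, i.e. ~25.1 bar
T2 = T1 * P2 * V2 / (P1 * V1)       # ideal gas law, PV/T = const: ~753.6 K
                                    # (matches the text's 753 K up to rounding)
print(const, P2, T2)
```

The few-line calculation anticipates the step-by-step evaluation that follows.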
P 1 V 1 γ = c o n s t a n t 1 = 100 000 Pa × ( 0.001 m 3 ) 7 5 = 10 5 × 6.31 × 10 − 5 Pa m 21 / 5 = 6.31 Pa m 21 / 5 , {\displaystyle {\begin{aligned}P_{1}V_{1}^{\gamma }&=\mathrm {constant} _{1}\\&=100\,000~{\text{Pa}}\times (0.001~{\text{m}}^{3})^{\frac {7}{5}}\\&=10^{5}\times 6.31\times 10^{-5}~{\text{Pa}}\,{\text{m}}^{21/5}\\&=6.31~{\text{Pa}}\,{\text{m}}^{21/5},\end{aligned}}} so the adiabatic constant for this example is about 6.31 Pa m4.2. The gas is now compressed to a 0.1 L (0.0001 m3) volume, which we assume happens quickly enough that no heat enters or leaves the gas through the walls. The adiabatic constant remains the same, but with the resulting pressure unknown P 2 V 2 γ = c o n s t a n t 1 = 6.31 Pa m 21 / 5 = P × ( 0.0001 m 3 ) 7 5 , {\displaystyle {\begin{aligned}P_{2}V_{2}^{\gamma }&=\mathrm {constant} _{1}\\&=6.31~{\text{Pa}}\,{\text{m}}^{21/5}\\&=P\times (0.0001~{\text{m}}^{3})^{\frac {7}{5}},\end{aligned}}} We can now solve for the final pressure P 2 = P 1 ( V 1 V 2 ) γ = 100 000 Pa × 10 7 / 5 = 2.51 × 10 6 Pa {\displaystyle {\begin{aligned}P_{2}&=P_{1}\left({\frac {V_{1}}{V_{2}}}\right)^{\gamma }\\&=100\,000~{\text{Pa}}\times {\text{10}}^{7/5}\\&=2.51\times 10^{6}~{\text{Pa}}\end{aligned}}} or 25.1 bar. This pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure. We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = nRT (n is amount of gas in moles and R the gas constant for that gas). 
Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant (nR) is: P V T = c o n s t a n t 2 = 10 5 Pa × 10 − 3 m 3 300 K = 0.333 Pa m 3 K − 1 . {\displaystyle {\begin{aligned}{\frac {PV}{T}}&=\mathrm {constant} _{2}\\&={\frac {10^{5}~{\text{Pa}}\times 10^{-3}~{\text{m}}^{3}}{300~{\text{K}}}}\\&=0.333~{\text{Pa}}\,{\text{m}}^{3}{\text{K}}^{-1}.\end{aligned}}} We know the compressed gas has V = 0.1 L and P = 2.51×106 Pa, so we can solve for temperature: T = P V c o n s t a n t 2 = 2.51 × 10 6 Pa × 10 − 4 m 3 0.333 Pa m 3 K − 1 = 753 K . {\displaystyle {\begin{aligned}T&={\frac {PV}{\mathrm {constant} _{2}}}\\&={\frac {2.51\times 10^{6}~{\text{Pa}}\times 10^{-4}~{\text{m}}^{3}}{0.333~{\text{Pa}}\,{\text{m}}^{3}{\text{K}}^{-1}}}\\&=753~{\text{K}}.\end{aligned}}} That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or that a supercharger with an intercooler to provide a pressure boost but with a lower temperature rise would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas pressure, which ensures immediate ignition of the injected fuel. === Adiabatic free expansion of a gas === For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. 
For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature the entropy increases with volume, the entropy increases in this case; therefore this process is irreversible. === Derivation of P–V relation for adiabatic compression and expansion === The definition of an adiabatic process is that heat transfer to the system is zero, δQ = 0. Then, according to the first law of thermodynamics, d U + δ W = δ Q = 0 , (a1) {\displaystyle dU+\delta W=\delta Q=0,\qquad {\text{(a1)}}} where dU is the change in the internal energy of the system and δW is work done by the system. Any work (δW) done must be done at the expense of internal energy U, since no heat δQ is being supplied from the surroundings. Pressure–volume work δW done by the system is defined as δ W = P d V . (a2) {\displaystyle \delta W=P\,dV.\qquad {\text{(a2)}}} However, P does not remain constant during an adiabatic process but instead changes along with V. It is desired to know how the values of dP and dV relate to each other as the adiabatic process proceeds. For an ideal gas (recall ideal gas law PV = nRT) the internal energy is given by U = α n R T = α P V , (a3) {\displaystyle U=\alpha nRT=\alpha PV,\qquad {\text{(a3)}}} where α is the number of degrees of freedom divided by 2, R is the universal gas constant and n is the number of moles in the system (a constant). Differentiating equation (a3) yields d U = α n R d T = α d ( P V ) = α ( P d V + V d P ) . (a4) {\displaystyle dU=\alpha nR\,dT=\alpha \,d(PV)=\alpha (P\,dV+V\,dP).\qquad {\text{(a4)}}} Equation (a4) is often expressed as dU = nCV dT because CV = αR. Now substitute equations (a2) and (a4) into equation (a1) to obtain − P d V = α P d V + α V d P , {\displaystyle -P\,dV=\alpha P\,dV+\alpha V\,dP,} factorize −P dV: − ( α + 1 ) P d V = α V d P , {\displaystyle -(\alpha +1)P\,dV=\alpha V\,dP,} and divide both sides by PV: − ( α + 1 ) d V V = α d P P . {\displaystyle -(\alpha +1){\frac {dV}{V}}=\alpha {\frac {dP}{P}}.} After integrating the left and right sides from V0 to V and from P0 to P and changing the sides respectively, ln ⁡ ( P P 0 ) = − α + 1 α ln ⁡ ( V V 0 ) . 
{\displaystyle \ln \left({\frac {P}{P_{0}}}\right)=-{\frac {\alpha +1}{\alpha }}\ln \left({\frac {V}{V_{0}}}\right).} Exponentiate both sides, substitute ⁠α + 1/α⁠ with γ, the heat capacity ratio ( P P 0 ) = ( V V 0 ) − γ , {\displaystyle \left({\frac {P}{P_{0}}}\right)=\left({\frac {V}{V_{0}}}\right)^{-\gamma },} and eliminate the negative sign to obtain ( P P 0 ) = ( V 0 V ) γ . {\displaystyle \left({\frac {P}{P_{0}}}\right)=\left({\frac {V_{0}}{V}}\right)^{\gamma }.} Therefore, ( P P 0 ) ( V V 0 ) γ = 1 , {\displaystyle \left({\frac {P}{P_{0}}}\right)\left({\frac {V}{V_{0}}}\right)^{\gamma }=1,} and P 0 V 0 γ = P V γ = c o n s t a n t . {\displaystyle P_{0}V_{0}^{\gamma }=PV^{\gamma }=\mathrm {constant} .} At the same time, the work done by the pressure–volume changes as a result from this process, is equal to Since we require the process to be adiabatic, the following equation needs to be true By the previous derivation, Rearranging (b4) gives P = P 1 ( V 1 V ) γ . {\displaystyle P=P_{1}\left({\frac {V_{1}}{V}}\right)^{\gamma }.} Substituting this into (b2) gives W = ∫ V 1 V 2 P 1 ( V 1 V ) γ d V . {\displaystyle W=\int _{V_{1}}^{V_{2}}P_{1}\left({\frac {V_{1}}{V}}\right)^{\gamma }\,dV.} Integrating, we obtain the expression for work, W = P 1 V 1 γ V 2 1 − γ − V 1 1 − γ 1 − γ = P 2 V 2 − P 1 V 1 1 − γ . {\displaystyle {\begin{aligned}W=P_{1}V_{1}^{\gamma }{\frac {V_{2}^{1-\gamma }-V_{1}^{1-\gamma }}{1-\gamma }}\\&={\frac {P_{2}V_{2}-P_{1}V_{1}}{1-\gamma }}.\end{aligned}}} Substituting γ = ⁠α + 1/α⁠ in the second term, W = − α P 1 V 1 γ ( V 2 1 − γ − V 1 1 − γ ) . {\displaystyle W=-\alpha P_{1}V_{1}^{\gamma }\left(V_{2}^{1-\gamma }-V_{1}^{1-\gamma }\right).} Rearranging, W = − α P 1 V 1 ( ( V 2 V 1 ) 1 − γ − 1 ) . {\displaystyle W=-\alpha P_{1}V_{1}\left(\left({\frac {V_{2}}{V_{1}}}\right)^{1-\gamma }-1\right).} Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases), W = − α n R T 1 ( ( V 2 V 1 ) 1 − γ − 1 ) . 
{\displaystyle W=-\alpha nRT_{1}\left(\left({\frac {V_{2}}{V_{1}}}\right)^{1-\gamma }-1\right).} By the continuous formula, P 2 P 1 = ( V 2 V 1 ) − γ , {\displaystyle {\frac {P_{2}}{P_{1}}}=\left({\frac {V_{2}}{V_{1}}}\right)^{-\gamma },} or ( P 2 P 1 ) − 1 γ = V 2 V 1 . {\displaystyle \left({\frac {P_{2}}{P_{1}}}\right)^{-{\frac {1}{\gamma }}}={\frac {V_{2}}{V_{1}}}.} Substituting into the previous expression for W, W = − α n R T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) . {\displaystyle W=-\alpha nRT_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right).} Substituting this expression and (b1) in (b3) gives α n R ( T 2 − T 1 ) = α n R T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) . {\displaystyle \alpha nR(T_{2}-T_{1})=\alpha nRT_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right).} Simplifying, T 2 − T 1 = T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) , T 2 T 1 − 1 = ( P 2 P 1 ) γ − 1 γ − 1 , T 2 = T 1 ( P 2 P 1 ) γ − 1 γ . {\displaystyle {\begin{aligned}T_{2}-T_{1}&=T_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right),\\{\frac {T_{2}}{T_{1}}}-1&=\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1,\\T_{2}&=T_{1}\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}.\end{aligned}}} === Derivation of discrete formula and work expression === The change in internal energy of a system, measured from state 1 to state 2, is equal to At the same time, the work done by the pressure–volume changes as a result from this process, is equal to Since we require the process to be adiabatic, the following equation needs to be true By the previous derivation, Rearranging (c4) gives P = P 1 ( V 1 V ) γ . {\displaystyle P=P_{1}\left({\frac {V_{1}}{V}}\right)^{\gamma }.} Substituting this into (c2) gives W = ∫ V 1 V 2 P 1 ( V 1 V ) γ d V . 
{\displaystyle W=\int _{V_{1}}^{V_{2}}P_{1}\left({\frac {V_{1}}{V}}\right)^{\gamma }\,dV.} Integrating we obtain the expression for work, W = P 1 V 1 γ V 2 1 − γ − V 1 1 − γ 1 − γ = P 2 V 2 − P 1 V 1 1 − γ . {\displaystyle W=P_{1}V_{1}^{\gamma }{\frac {V_{2}^{1-\gamma }-V_{1}^{1-\gamma }}{1-\gamma }}={\frac {P_{2}V_{2}-P_{1}V_{1}}{1-\gamma }}.} Substituting γ = ⁠α + 1/α⁠ in second term, W = − α P 1 V 1 γ ( V 2 1 − γ − V 1 1 − γ ) . {\displaystyle W=-\alpha P_{1}V_{1}^{\gamma }\left(V_{2}^{1-\gamma }-V_{1}^{1-\gamma }\right).} Rearranging, W = − α P 1 V 1 ( ( V 2 V 1 ) 1 − γ − 1 ) . {\displaystyle W=-\alpha P_{1}V_{1}\left(\left({\frac {V_{2}}{V_{1}}}\right)^{1-\gamma }-1\right).} Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases), W = − α n R T 1 ( ( V 2 V 1 ) 1 − γ − 1 ) . {\displaystyle W=-\alpha nRT_{1}\left(\left({\frac {V_{2}}{V_{1}}}\right)^{1-\gamma }-1\right).} By the continuous formula, P 2 P 1 = ( V 2 V 1 ) − γ , {\displaystyle {\frac {P_{2}}{P_{1}}}=\left({\frac {V_{2}}{V_{1}}}\right)^{-\gamma },} or ( P 2 P 1 ) − 1 γ = V 2 V 1 . {\displaystyle \left({\frac {P_{2}}{P_{1}}}\right)^{-{\frac {1}{\gamma }}}={\frac {V_{2}}{V_{1}}}.} Substituting into the previous expression for W, W = − α n R T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) . {\displaystyle W=-\alpha nRT_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right).} Substituting this expression and (c1) in (c3) gives α n R ( T 2 − T 1 ) = α n R T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) . {\displaystyle \alpha nR(T_{2}-T_{1})=\alpha nRT_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right).} Simplifying, T 2 − T 1 = T 1 ( ( P 2 P 1 ) γ − 1 γ − 1 ) , T 2 T 1 − 1 = ( P 2 P 1 ) γ − 1 γ − 1 , T 2 = T 1 ( P 2 P 1 ) γ − 1 γ . 
{\displaystyle {\begin{aligned}T_{2}-T_{1}&=T_{1}\left(\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1\right),\\{\frac {T_{2}}{T_{1}}}-1&=\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}-1,\\T_{2}&=T_{1}\left({\frac {P_{2}}{P_{1}}}\right)^{\frac {\gamma -1}{\gamma }}.\end{aligned}}} == Graphing adiabats == An adiabat is a curve of constant entropy in a diagram. Some properties of adiabats on a P–V diagram are indicated. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important. Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms). Each adiabat intersects each isotherm exactly once. An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical). If isotherms are concave towards the north-east direction (45° from V-axis), then adiabats are concave towards the east north-east (31° from V-axis). If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem). == Etymology == The term adiabatic () is an anglicization of the Greek term ἀδιάβατος "impassable" (used by Xenophon of rivers). It is used in the thermodynamic sense by Rankine (1866), and adopted by Maxwell in 1871 (explicitly attributing the term to Rankine). The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall. 
The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come"). Furthermore, in atmospheric thermodynamics, a diabatic process is one in which heat is exchanged. An adiabatic process is the opposite – a process in which no heat is exchanged. == Conceptual significance in thermodynamic theory == The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work. Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs. For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis. 
In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view. == Divergent usages of the word adiabatic == This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic, even when the gas is not adiabatically isolated by a definite wall. Some authors, like Pippard, recommend using "adiathermal" to refer to processes where no heat-exchange occurs (such as Joule expansion), and "adiabatic" to reversible quasi-static adiathermal processes (so that rapid compression of a gas is not "adiabatic"). And Laidler has summarized the complicated etymology of "adiabatic". 
Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines. On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes Actually, it is usually the 'adiabatic' case with which we have to do: i.e. the limiting case where the external force (or the reaction of the parts of the system on each other) acts very slowly. In this case, to a very high approximation c 1 2 = 1 , c 2 2 = 0 , c 3 2 = 0 , . . . , {\displaystyle c_{1}^{2}=1,\,\,c_{2}^{2}=0,\,\,c_{3}^{2}=0,\,...\,,} that is, there is no probability for a transition, and the system is in the initial state after cessation of the perturbation. Such a slow perturbation is therefore reversible, as it is classically. On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it changes the occupation numbers and energies of the eigenstates in proportion to the transition moment integral and in accordance with time-dependent perturbation theory, as well as perturbing the functional form of the eigenstates themselves. 
In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it. Recent research suggests that the power absorbed from the perturbation corresponds to the rate of these non-adiabatic transitions. This corresponds to the classical process of energy transfer in the form of heat, but with the relative time scales reversed in the quantum case. Quantum adiabatic processes occur over relatively long time scales, while classical adiabatic processes occur over relatively short time scales. It should also be noted that the concept of 'heat' (in reference to the quantity of thermal energy transferred) breaks down at the quantum level, and the specific form of energy (typically electromagnetic) must be considered instead. The small or negligible absorption of energy from the perturbation in a quantum adiabatic process provides a good justification for identifying it as the quantum analogue of adiabatic processes in classical thermodynamics, and for the reuse of the term. In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage. Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid significant heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above. 
== See also == Fire piston Heat burst Related physics topics First law of thermodynamics Entropy (classical thermodynamics) Adiabatic conductivity Adiabatic lapse rate Total air temperature Magnetic refrigeration Berry phase Related thermodynamic processes Cyclic process Isobaric process Isenthalpic process Isentropic process Isochoric process Isothermal process Polytropic process Quasistatic process == References == General Silbey, Robert J.; et al. (2004). Physical chemistry. Hoboken, New Jersey: Wiley. p. 55. ISBN 978-0-471-21504-2. Nave, Carl Rod. "Adiabatic Processes". HyperPhysics. Thorngren, Dr. Jane R. "Adiabatic Processes". Daphne – A Palomar College Web Server, 21 July 1995. Archived 2011-05-09 at the Wayback Machine. == External links == Media related to Adiabatic processes at Wikimedia Commons Article in HyperPhysics Encyclopaedia
In statistical mechanics, an Ursell function, or connected correlation function, is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions). The Ursell function was named after Harold Ursell, who introduced it in 1927. == Definition == If X is a random variable, the moments sn and cumulants (same as the Ursell functions) un are functions of X related by the exponential formula: E ⁡ ( exp ⁡ ( z X ) ) = ∑ n s n z n n ! = exp ⁡ ( ∑ n u n z n n ! ) {\displaystyle \operatorname {E} (\exp(zX))=\sum _{n}s_{n}{\frac {z^{n}}{n!}}=\exp \left(\sum _{n}u_{n}{\frac {z^{n}}{n!}}\right)} (where E {\displaystyle \operatorname {E} } is the expectation). The Ursell functions for multivariate random variables are defined analogously to the above, and in the same way as multivariate cumulants. u n ( X 1 , … , X n ) = ∂ ∂ z 1 ⋯ ∂ ∂ z n log ⁡ E ⁡ ( exp ⁡ ∑ z i X i ) | z i = 0 {\displaystyle u_{n}\left(X_{1},\ldots ,X_{n}\right)=\left.{\frac {\partial }{\partial z_{1}}}\cdots {\frac {\partial }{\partial z_{n}}}\log \operatorname {E} \left(\exp \sum z_{i}X_{i}\right)\right|_{z_{i}=0}} The Ursell functions of a single random variable X are obtained from these by setting X = X1 = … = Xn. 
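As a quick numerical illustration (ours, not from the source): for a single random variable, the first three Ursell functions reduce to the mean, the variance, and the third central moment. This can be checked with the exact moments of a small discrete distribution:

```python
# X uniform on {0, 1, 3}; moments computed exactly (no sampling)
vals = [0.0, 1.0, 3.0]
E = lambda f: sum(f(x) for x in vals) / len(vals)

m1 = E(lambda x: x)        # E(X)
m2 = E(lambda x: x**2)     # E(X^2)
m3 = E(lambda x: x**3)     # E(X^3)

u1 = m1                           # first Ursell function: the mean
u2 = m2 - m1**2                   # second: the variance
u3 = m3 - 3*m1*m2 + 2*m1**3       # third: the formulas below with X1 = X2 = X3

# u2 and u3 agree with the central moments E((X - u1)**2) and E((X - u1)**3)
```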
The first few are given by u 1 ( X 1 ) = E ⁡ ( X 1 ) u 2 ( X 1 , X 2 ) = E ⁡ ( X 1 X 2 ) − E ⁡ ( X 1 ) E ⁡ ( X 2 ) u 3 ( X 1 , X 2 , X 3 ) = E ⁡ ( X 1 X 2 X 3 ) − E ⁡ ( X 1 ) E ⁡ ( X 2 X 3 ) − E ⁡ ( X 2 ) E ⁡ ( X 3 X 1 ) − E ⁡ ( X 3 ) E ⁡ ( X 1 X 2 ) + 2 E ⁡ ( X 1 ) E ⁡ ( X 2 ) E ⁡ ( X 3 ) u 4 ( X 1 , X 2 , X 3 , X 4 ) = E ⁡ ( X 1 X 2 X 3 X 4 ) − E ⁡ ( X 1 ) E ⁡ ( X 2 X 3 X 4 ) − E ⁡ ( X 2 ) E ⁡ ( X 1 X 3 X 4 ) − E ⁡ ( X 3 ) E ⁡ ( X 1 X 2 X 4 ) − E ⁡ ( X 4 ) E ⁡ ( X 1 X 2 X 3 ) − E ⁡ ( X 1 X 2 ) E ⁡ ( X 3 X 4 ) − E ⁡ ( X 1 X 3 ) E ⁡ ( X 2 X 4 ) − E ⁡ ( X 1 X 4 ) E ⁡ ( X 2 X 3 ) + 2 E ⁡ ( X 1 X 2 ) E ⁡ ( X 3 ) E ⁡ ( X 4 ) + 2 E ⁡ ( X 1 X 3 ) E ⁡ ( X 2 ) E ⁡ ( X 4 ) + 2 E ⁡ ( X 1 X 4 ) E ⁡ ( X 2 ) E ⁡ ( X 3 ) + 2 E ⁡ ( X 2 X 3 ) E ⁡ ( X 1 ) E ⁡ ( X 4 ) + 2 E ⁡ ( X 2 X 4 ) E ⁡ ( X 1 ) E ⁡ ( X 3 ) + 2 E ⁡ ( X 3 X 4 ) E ⁡ ( X 1 ) E ⁡ ( X 2 ) − 6 E ⁡ ( X 1 ) E ⁡ ( X 2 ) E ⁡ ( X 3 ) E ⁡ ( X 4 ) {\displaystyle {\begin{aligned}u_{1}(X_{1})={}&\operatorname {E} (X_{1})\\u_{2}(X_{1},X_{2})={}&\operatorname {E} (X_{1}X_{2})-\operatorname {E} (X_{1})\operatorname {E} (X_{2})\\u_{3}(X_{1},X_{2},X_{3})={}&\operatorname {E} (X_{1}X_{2}X_{3})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3})-\operatorname {E} (X_{2})\operatorname {E} (X_{3}X_{1})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2})+2\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\\u_{4}\left(X_{1},X_{2},X_{3},X_{4}\right)={}&\operatorname {E} (X_{1}X_{2}X_{3}X_{4})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3}X_{4})-\operatorname {E} (X_{2})\operatorname {E} (X_{1}X_{3}X_{4})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2}X_{4})-\operatorname {E} (X_{4})\operatorname {E} (X_{1}X_{2}X_{3})\\&-\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3}X_{4})-\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2}X_{4})-\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2}X_{3})\\&+2\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3})\operatorname {E} 
(X_{4})+2\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2})\operatorname {E} (X_{4})+2\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2})\operatorname {E} (X_{3})+2\operatorname {E} (X_{2}X_{3})\operatorname {E} (X_{1})\operatorname {E} (X_{4})\\&+2\operatorname {E} (X_{2}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{3})+2\operatorname {E} (X_{3}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{2})-6\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\operatorname {E} (X_{4})\end{aligned}}} == Characterization == Percus (1975) showed that the Ursell functions, considered as multilinear functions of several random variables, are uniquely determined up to a constant by the fact that they vanish whenever the variables Xi can be divided into two nonempty independent sets. == See also == Cumulant == References == Glimm, James; Jaffe, Arthur (1987), Quantum physics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96476-8, MR 0887102 Percus, J. K. (1975), "Correlation inequalities for Ising spin lattices" (PDF), Comm. Math. Phys., 40 (3): 283–308, Bibcode:1975CMaPh..40..283P, doi:10.1007/bf01610004, MR 0378683, S2CID 120940116 Ursell, H. D. (1927), "The evaluation of Gibbs phase-integral for imperfect gases", Proc. Cambridge Philos. Soc., 23 (6): 685–697, Bibcode:1927PCPS...23..685U, doi:10.1017/S0305004100011191, S2CID 123023251
In mathematics, an nth root of a number x is a number r which, when raised to the power of n, yields x: r n = r × r × ⋯ × r ⏟ n factors = x . {\displaystyle r^{n}=\underbrace {r\times r\times \dotsb \times r} _{n{\text{ factors}}}=x.} The positive integer n is called the index or degree, and the number x of which the root is taken is the radicand. A root of degree 2 is called a square root and a root of degree 3, a cube root. Roots of higher degree are referred by using ordinal numbers, as in fourth root, twentieth root, etc. The computation of an nth root is a root extraction. For example, 3 is a square root of 9, since 32 = 9, and −3 is also a square root of 9, since (−3)2 = 9. The nth root of x is written as x n {\displaystyle {\sqrt[{n}]{x}}} using the radical symbol x {\displaystyle {\sqrt {\phantom {x}}}} . The square root is usually written as ⁠ x {\displaystyle {\sqrt {x}}} ⁠, with the degree omitted. Taking the nth root of a number, for fixed ⁠ n {\displaystyle n} ⁠, is the inverse of raising a number to the nth power, and can be written as a fractional exponent: x n = x 1 / n . {\displaystyle {\sqrt[{n}]{x}}=x^{1/n}.} For a positive real number x, x {\displaystyle {\sqrt {x}}} denotes the positive square root of x and x n {\displaystyle {\sqrt[{n}]{x}}} denotes the positive real nth root. A negative real number −x has no real-valued square roots, but when x is treated as a complex number it has two imaginary square roots, ⁠ + i x {\displaystyle +i{\sqrt {x}}} ⁠ and ⁠ − i x {\displaystyle -i{\sqrt {x}}} ⁠, where i is the imaginary unit. In general, any non-zero complex number has n distinct complex-valued nth roots, equally distributed around a complex circle of constant absolute value. (The nth root of 0 is zero with multiplicity n, and this circle degenerates to a point.) Extracting the nth roots of a complex number x can thus be taken to be a multivalued function. 
By convention the principal value of this function, called the principal root and denoted ⁠ x n {\displaystyle {\sqrt[{n}]{x}}} ⁠, is taken to be the nth root with the greatest real part and in the special case when x is a negative real number, the one with a positive imaginary part. The principal root of a positive real number is thus also a positive real number. As a function, the principal root is continuous in the whole complex plane, except along the negative real axis. An unresolved root, especially one using the radical symbol, is sometimes referred to as a surd or a radical. Any expression containing a radical, whether it is a square root, a cube root, or a higher root, is called a radical expression, and if it contains no transcendental functions or transcendental numbers it is called an algebraic expression. Roots are used for determining the radius of convergence of a power series with the root test. The nth roots of 1 are called roots of unity and play a fundamental role in various areas of mathematics, such as number theory, theory of equations, and Fourier transform. == History == An archaic term for the operation of taking nth roots is radication. == Definition and notation == An nth root of a number x, where n is a positive integer, is any of the n real or complex numbers r whose nth power is x: r n = x . {\displaystyle r^{n}=x.} Every positive real number x has a single positive nth root, called the principal nth root, which is written x n {\displaystyle {\sqrt[{n}]{x}}} . For n equal to 2 this is called the principal square root and the n is omitted. The nth root can also be represented using exponentiation as x1/n. For even values of n, positive numbers also have a negative nth root, while negative numbers do not have a real nth root. For odd values of n, every negative number x has a real negative nth root. 
For example, −2 has a real 5th root, − 2 5 = − 1.148698354 … {\displaystyle {\sqrt[{5}]{-2}}=-1.148698354\ldots } but −2 does not have any real 6th roots. Every non-zero number x, real or complex, has n different complex number nth roots. (In the case x is real, this count includes any real nth roots.) The only complex root of 0 is 0. The nth roots of almost all numbers (all integers except the nth powers, and all rationals except the quotients of two nth powers) are irrational. For example, 2 = 1.414213562 … {\displaystyle {\sqrt {2}}=1.414213562\ldots } All nth roots of rational numbers are algebraic numbers, and all nth roots of integers are algebraic integers. The term "surd" traces back to Al-Khwarizmi (c. 825), who referred to rational and irrational numbers as audible and inaudible, respectively. This later led to the Arabic word أصم (asamm, meaning "deaf" or "dumb") for irrational number being translated into Latin as surdus (meaning "deaf" or "mute"). Gerard of Cremona (c. 1150), Fibonacci (1202), and then Robert Recorde (1551) all used the term to refer to unresolved irrational roots, that is, expressions of the form r n {\displaystyle {\sqrt[{n}]{r}}} , in which n {\displaystyle n} and r {\displaystyle r} are integer numerals and the whole expression denotes an irrational number. Irrational numbers of the form ± a , {\displaystyle \pm {\sqrt {a}},} where a {\displaystyle a} is rational, are called pure quadratic surds; irrational numbers of the form a ± b {\displaystyle a\pm {\sqrt {b}}} , where a {\displaystyle a} and b {\displaystyle b} are rational, are called mixed quadratic surds. === Square roots === A square root of a number x is a number r which, when squared, becomes x: r 2 = x . {\displaystyle r^{2}=x.} Every positive real number has two square roots, one positive and one negative. For example, the two square roots of 25 are 5 and −5. 
The positive square root is also known as the principal square root, and is denoted with a radical sign: 25 = 5. {\displaystyle {\sqrt {25}}=5.} Since the square of every real number is nonnegative, negative numbers do not have real square roots. However, for every negative real number there are two imaginary square roots. For example, the square roots of −25 are 5i and −5i, where i represents a number whose square is −1. === Cube roots === A cube root of a number x is a number r whose cube is x: r 3 = x . {\displaystyle r^{3}=x.} Every real number x has exactly one real cube root, written x 3 {\displaystyle {\sqrt[{3}]{x}}} . For example, 8 3 = 2 − 8 3 = − 2. {\displaystyle {\begin{aligned}{\sqrt[{3}]{8}}&=2\\{\sqrt[{3}]{-8}}&=-2.\end{aligned}}} Every real number has two additional complex cube roots. == Identities and properties == Expressing the degree of an nth root in its exponent form, as in x 1 / n {\displaystyle x^{1/n}} , makes it easier to manipulate powers and roots. If a {\displaystyle a} is a non-negative real number, a m n = ( a m ) 1 / n = a m / n = ( a 1 / n ) m = ( a n ) m . {\displaystyle {\sqrt[{n}]{a^{m}}}=(a^{m})^{1/n}=a^{m/n}=(a^{1/n})^{m}=({\sqrt[{n}]{a}})^{m}.} Every non-negative number has exactly one non-negative real nth root, and so the rules for operations with surds involving non-negative radicands a {\displaystyle a} and b {\displaystyle b} are straightforward within the real numbers: a b n = a n b n a b n = a n b n {\displaystyle {\begin{aligned}{\sqrt[{n}]{ab}}&={\sqrt[{n}]{a}}{\sqrt[{n}]{b}}\\{\sqrt[{n}]{\frac {a}{b}}}&={\frac {\sqrt[{n}]{a}}{\sqrt[{n}]{b}}}\end{aligned}}} Subtleties can occur when taking the nth roots of negative or complex numbers. For instance: − 1 × − 1 ≠ − 1 × − 1 = 1 , {\displaystyle {\sqrt {-1}}\times {\sqrt {-1}}\neq {\sqrt {-1\times -1}}=1,\quad } but, rather, − 1 × − 1 = i × i = i 2 = − 1. 
{\displaystyle \quad {\sqrt {-1}}\times {\sqrt {-1}}=i\times i=i^{2}=-1.} Since the rule a n × b n = a b n {\displaystyle {\sqrt[{n}]{a}}\times {\sqrt[{n}]{b}}={\sqrt[{n}]{ab}}} strictly holds for non-negative real radicands only, its application leads to the inequality in the first step above. == Simplified form of a radical expression == A non-nested radical expression is said to be in simplified form if no factor of the radicand can be written as a power greater than or equal to the index; there are no fractions inside the radical sign; and there are no radicals in the denominator. For example, to write the radical expression 32 / 5 {\displaystyle \textstyle {\sqrt {32/5}}} in simplified form, we can proceed as follows. First, look for a perfect square under the square root sign and remove it: 32 5 = 16 ⋅ 2 5 = 16 ⋅ 2 5 = 4 2 5 {\displaystyle {\sqrt {\frac {32}{5}}}={\sqrt {\frac {16\cdot 2}{5}}}={\sqrt {16}}\cdot {\sqrt {\frac {2}{5}}}=4{\sqrt {\frac {2}{5}}}} Next, there is a fraction under the radical sign, which we change as follows: 4 2 5 = 4 2 5 {\displaystyle 4{\sqrt {\frac {2}{5}}}={\frac {4{\sqrt {2}}}{\sqrt {5}}}} Finally, we remove the radical from the denominator as follows: 4 2 5 = 4 2 5 ⋅ 5 5 = 4 10 5 = 4 5 10 {\displaystyle {\frac {4{\sqrt {2}}}{\sqrt {5}}}={\frac {4{\sqrt {2}}}{\sqrt {5}}}\cdot {\frac {\sqrt {5}}{\sqrt {5}}}={\frac {4{\sqrt {10}}}{5}}={\frac {4}{5}}{\sqrt {10}}} When there is a denominator involving surds it is always possible to find a factor to multiply both numerator and denominator by to simplify the expression. For instance using the factorization of the sum of two cubes: 1 a 3 + b 3 = a 2 3 − a b 3 + b 2 3 ( a 3 + b 3 ) ( a 2 3 − a b 3 + b 2 3 ) = a 2 3 − a b 3 + b 2 3 a + b . 
{\displaystyle {\frac {1}{{\sqrt[{3}]{a}}+{\sqrt[{3}]{b}}}}={\frac {{\sqrt[{3}]{a^{2}}}-{\sqrt[{3}]{ab}}+{\sqrt[{3}]{b^{2}}}}{\left({\sqrt[{3}]{a}}+{\sqrt[{3}]{b}}\right)\left({\sqrt[{3}]{a^{2}}}-{\sqrt[{3}]{ab}}+{\sqrt[{3}]{b^{2}}}\right)}}={\frac {{\sqrt[{3}]{a^{2}}}-{\sqrt[{3}]{ab}}+{\sqrt[{3}]{b^{2}}}}{a+b}}.} Simplifying radical expressions involving nested radicals can be quite difficult. In particular, denesting is not always possible, and when possible, it may involve advanced Galois theory. Moreover, when complete denesting is impossible, there is no general canonical form such that the equality of two numbers can be tested by simply looking at their canonical expressions. For example, it is not obvious that 3 + 2 2 = 1 + 2 . {\displaystyle {\sqrt {3+2{\sqrt {2}}}}=1+{\sqrt {2}}.} The above can be derived through: 3 + 2 2 = 1 + 2 2 + 2 = 1 2 + 2 2 + 2 2 = ( 1 + 2 ) 2 = 1 + 2 {\displaystyle {\sqrt {3+2{\sqrt {2}}}}={\sqrt {1+2{\sqrt {2}}+2}}={\sqrt {1^{2}+2{\sqrt {2}}+{\sqrt {2}}^{2}}}={\sqrt {\left(1+{\sqrt {2}}\right)^{2}}}=1+{\sqrt {2}}} Let r = p / q {\displaystyle r=p/q} , with p and q coprime and positive integers. Then r n = p n / q n {\displaystyle {\sqrt[{n}]{r}}={\sqrt[{n}]{p}}/{\sqrt[{n}]{q}}} is rational if and only if both p n {\displaystyle {\sqrt[{n}]{p}}} and q n {\displaystyle {\sqrt[{n}]{q}}} are integers, which means that both p and q are nth powers of some integer. == Infinite series == The radical or root may be represented by the infinite series: ( 1 + x ) s t = ∑ n = 0 ∞ ∏ k = 0 n − 1 ( s − k t ) n ! t n x n {\displaystyle (1+x)^{\frac {s}{t}}=\sum _{n=0}^{\infty }{\frac {\prod _{k=0}^{n-1}(s-kt)}{n!t^{n}}}x^{n}} with | x | < 1 {\displaystyle |x|<1} . This expression can be derived from the binomial series. 
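For |x| < 1 the series converges to the root; a short sketch (with illustrative values of our choosing) compares its partial sums with the exact value:

```python
def root_series(x, s, t, terms=60):
    """Partial sum of the binomial series for (1 + x)**(s/t), valid for |x| < 1."""
    total, coeff = 0.0, 1.0   # coeff = prod_{k<n}(s - k*t) / (n! * t**n), starting at n = 0
    for n in range(terms):
        total += coeff * x**n
        coeff *= (s - n * t) / ((n + 1) * t)   # step the coefficient from n to n + 1
    return total

# cube root of 1.3, i.e. (1 + 0.3)**(1/3); agrees with 1.3**(1/3) to machine precision
approx = root_series(0.3, 1, 3)
```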
== Computing principal roots == === Using Newton's method === The nth root of a number A can be computed with Newton's method, which starts with an initial guess x0 and then iterates using the recurrence relation x k + 1 = x k − x k n − A n x k n − 1 {\displaystyle x_{k+1}=x_{k}-{\frac {x_{k}^{n}-A}{nx_{k}^{n-1}}}} until the desired precision is reached. For computational efficiency, the recurrence relation is commonly rewritten x k + 1 = n − 1 n x k + A n 1 x k n − 1 . {\displaystyle x_{k+1}={\frac {n-1}{n}}\,x_{k}+{\frac {A}{n}}\,{\frac {1}{x_{k}^{n-1}}}.} This form requires only one exponentiation per iteration, and the first factor of each term can be computed once and reused across iterations. For example, to find the fifth root of 34, we plug in n = 5, A = 34 and x0 = 2 (initial guess). The iterates converge rapidly: the approximation x4 is accurate to 25 decimal places and x5 is good for 51. Newton's method can be modified to produce various generalized continued fractions for the nth root. For example, z n = x n + y n = x + y n x n − 1 + ( n − 1 ) y 2 x + ( n + 1 ) y 3 n x n − 1 + ( 2 n − 1 ) y 2 x + ( 2 n + 1 ) y 5 n x n − 1 + ( 3 n − 1 ) y 2 x + ⋱ . {\displaystyle {\sqrt[{n}]{z}}={\sqrt[{n}]{x^{n}+y}}=x+{\cfrac {y}{nx^{n-1}+{\cfrac {(n-1)y}{2x+{\cfrac {(n+1)y}{3nx^{n-1}+{\cfrac {(2n-1)y}{2x+{\cfrac {(2n+1)y}{5nx^{n-1}+{\cfrac {(3n-1)y}{2x+\ddots }}}}}}}}}}}}.} === Digit-by-digit calculation of principal roots of decimal (base 10) numbers === Building on the digit-by-digit calculation of a square root, it can be seen that the formula used there, x ( 20 p + x ) ≤ c {\displaystyle x(20p+x)\leq c} , or x 2 + 20 x p ≤ c {\displaystyle x^{2}+20xp\leq c} , follows a pattern involving Pascal's triangle.
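The rewritten Newton recurrence above translates directly into code; a minimal Python sketch (`nth_root_newton` is an illustrative name, and the tolerance and iteration cap are arbitrary choices):

```python
def nth_root_newton(A, n, x0=1.0, tol=1e-12, max_iter=100):
    """Approximate the principal nth root of A > 0 by Newton's method."""
    x = x0
    for _ in range(max_iter):
        # x_{k+1} = ((n-1)/n) * x_k + (A/n) * 1/x_k^(n-1)
        x_next = (n - 1) / n * x + A / n / x ** (n - 1)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# The fifth root of 34, starting from the guess x0 = 2 as in the text
root = nth_root_newton(34, 5, x0=2.0)
assert abs(root ** 5 - 34) < 1e-9
```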
For the nth root of a number P ( n , i ) {\displaystyle P(n,i)} is defined as the value of element i {\displaystyle i} in row n {\displaystyle n} of Pascal's Triangle such that P ( 4 , 1 ) = 4 {\displaystyle P(4,1)=4} , we can rewrite the expression as ∑ i = 0 n − 1 10 i P ( n , i ) p i x n − i {\displaystyle \sum _{i=0}^{n-1}10^{i}P(n,i)p^{i}x^{n-i}} . For convenience, call the result of this expression y {\displaystyle y} . Using this more general expression, any positive principal root can be computed, digit-by-digit, as follows. Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into groups of digits equating to the root being taken, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the radicand. One digit of the root will appear above each group of digits of the original number. Beginning with the left-most group of digits, do the following procedure for each group: Starting on the left, bring down the most significant (leftmost) group of digits not yet used (if all the digits have been used, write "0" the number of times required to make a group) and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 10 n {\displaystyle 10^{n}} and add the digits from the next group. This will be the current value c. Find p and x, as follows: Let p {\displaystyle p} be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0 {\displaystyle p=0} and 0 0 = 1 {\displaystyle 0^{0}=1} ). Determine the greatest digit x {\displaystyle x} such that y ≤ c {\displaystyle y\leq c} . Place the digit x {\displaystyle x} as the next digit of the root, i.e., above the group of digits you just brought down. 
Thus the next p will be the old p times 10 plus x. Subtract y {\displaystyle y} from c {\displaystyle c} to form a new remainder. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration. ==== Examples ==== Find the square root of 152.2756. 1 2. 3 4 / \/ 01 52.27 56 (Results) (Explanations) 01 x = 1 100·1·00·12 + 101·2·01·11 ≤ 1 < 100·1·00·22 + 101·2·01·21 01 y = 1 y = 100·1·00·12 + 101·2·01·11 = 1 + 0 = 1 00 52 x = 2 100·1·10·22 + 101·2·11·21 ≤ 52 < 100·1·10·32 + 101·2·11·31 00 44 y = 44 y = 100·1·10·22 + 101·2·11·21 = 4 + 40 = 44 08 27 x = 3 100·1·120·32 + 101·2·121·31 ≤ 827 < 100·1·120·42 + 101·2·121·41 07 29 y = 729 y = 100·1·120·32 + 101·2·121·31 = 9 + 720 = 729 98 56 x = 4 100·1·1230·42 + 101·2·1231·41 ≤ 9856 < 100·1·1230·52 + 101·2·1231·51 98 56 y = 9856 y = 100·1·1230·42 + 101·2·1231·41 = 16 + 9840 = 9856 00 00 Algorithm terminates: Answer is 12.34 Find the cube root of 4192 truncated to the nearest thousandth. 1 6. 
1 2 4 3 / \/ 004 192.000 000 000 (Results) (Explanations) 004 x = 1 100·1·00·13 + 101·3·01·12 + 102·3·02·11 ≤ 4 < 100·1·00·23 + 101·3·01·22 + 102·3·02·21 001 y = 1 y = 100·1·00·13 + 101·3·01·12 + 102·3·02·11 = 1 + 0 + 0 = 1 003 192 x = 6 100·1·10·63 + 101·3·11·62 + 102·3·12·61 ≤ 3192 < 100·1·10·73 + 101·3·11·72 + 102·3·12·71 003 096 y = 3096 y = 100·1·10·63 + 101·3·11·62 + 102·3·12·61 = 216 + 1,080 + 1,800 = 3,096 096 000 x = 1 100·1·160·13 + 101·3·161·12 + 102·3·162·11 ≤ 96000 < 100·1·160·23 + 101·3·161·22 + 102·3·162·21 077 281 y = 77281 y = 100·1·160·13 + 101·3·161·12 + 102·3·162·11 = 1 + 480 + 76,800 = 77,281 018 719 000 x = 2 100·1·1610·23 + 101·3·1611·22 + 102·3·1612·21 ≤ 18719000 < 100·1·1610·33 + 101·3·1611·32 + 102·3·1612·31 015 571 928 y = 15571928 y = 100·1·1610·23 + 101·3·1611·22 + 102·3·1612·21 = 8 + 19,320 + 15,552,600 = 15,571,928 003 147 072 000 x = 4 100·1·16120·43 + 101·3·16121·42 + 102·3·16122·41 ≤ 3147072000 < 100·1·16120·53 + 101·3·16121·52 + 102·3·16122·51 The desired precision is achieved. The cube root of 4192 is 16.124... === Logarithmic calculation === The principal nth root of a positive number can be computed using logarithms. Starting from the equation that defines r as an nth root of x, namely r n = x , {\displaystyle r^{n}=x,} with x positive and therefore its principal root r also positive, one takes logarithms of both sides (any base of the logarithm will do) to obtain n log b ⁡ r = log b ⁡ x hence log b ⁡ r = log b ⁡ x n . {\displaystyle n\log _{b}r=\log _{b}x\quad \quad {\text{hence}}\quad \quad \log _{b}r={\frac {\log _{b}x}{n}}.} The root r is recovered from this by taking the antilog: r = b 1 n log b ⁡ x . {\displaystyle r=b^{{\frac {1}{n}}\log _{b}x}.} (Note: That formula shows b raised to the power of the result of the division, not b multiplied by the result of the division.) For the case in which x is negative and n is odd, there is one real root r which is also negative. 
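The digit-by-digit procedure above can be mechanized; a compact Python sketch (`digit_by_digit_root` is an illustrative name) that uses the closed form y = (10p + x)^n − (10p)^n, which the Pascal's-triangle sum in the text expands term by term:

```python
def digit_by_digit_root(radicand, n):
    """Integer part of the principal nth root, one base-10 digit at a time."""
    s = str(radicand)
    if len(s) % n:                     # pad so the digits split into groups of n
        s = "0" * (n - len(s) % n) + s
    p, remainder = 0, 0
    for i in range(0, len(s), n):
        c = remainder * 10 ** n + int(s[i:i + n])   # bring down the next group
        # greatest digit x with y = (10p + x)^n - (10p)^n <= c
        x = max(d for d in range(10)
                if (10 * p + d) ** n - (10 * p) ** n <= c)
        remainder = c - ((10 * p + x) ** n - (10 * p) ** n)
        p = 10 * p + x
    return p

assert digit_by_digit_root(1522756, 2) == 1234   # sqrt(152.2756) = 12.34
assert digit_by_digit_root(4192, 3) == 16        # cube root of 4192 is 16.124...
```

Decimal digits follow by scaling the radicand by 10^n per extra digit, as in the cube-root worked example.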
This can be found by first multiplying both sides of the defining equation by −1 to obtain | r | n = | x | , {\displaystyle |r|^{n}=|x|,} then proceeding as before to find |r|, and using r = −|r|. == Geometric constructibility == The ancient Greek mathematicians knew how to use compass and straightedge to construct a length equal to the square root of a given length, when an auxiliary line of unit length is given. In 1837 Pierre Wantzel proved that an nth root of a given length cannot be constructed if n is not a power of 2. == Complex roots == Every complex number other than 0 has n different nth roots. === Square roots === The two square roots of a complex number are always negatives of each other. For example, the square roots of −4 are 2i and −2i, and the square roots of i are 1 2 ( 1 + i ) and − 1 2 ( 1 + i ) . {\displaystyle {\tfrac {1}{\sqrt {2}}}(1+i)\quad {\text{and}}\quad -{\tfrac {1}{\sqrt {2}}}(1+i).} If we express a complex number in polar form, then the square root can be obtained by taking the square root of the radius and halving the angle: r e i θ = ± r ⋅ e i θ / 2 . {\displaystyle {\sqrt {re^{i\theta }}}=\pm {\sqrt {r}}\cdot e^{i\theta /2}.} A principal root of a complex number may be chosen in various ways, for example r e i θ = r ⋅ e i θ / 2 {\displaystyle {\sqrt {re^{i\theta }}}={\sqrt {r}}\cdot e^{i\theta /2}} which introduces a branch cut in the complex plane along the positive real axis with the condition 0 ≤ θ < 2π, or along the negative real axis with −π < θ ≤ π. With the first branch cut, the principal square root z {\displaystyle \scriptstyle {\sqrt {z}}} maps z {\displaystyle \scriptstyle z} to the half plane with non-negative imaginary part; with the second, to the half plane with non-negative real part. The second branch cut is the one presupposed in mathematical software such as Matlab or Scilab.
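These conventions can be observed in Python's cmath module, which, like the software mentioned above, takes the branch cut along the negative real axis; a small sketch:

```python
import cmath
import math

# Principal square roots with the branch cut along the negative real axis
assert cmath.isclose(cmath.sqrt(-4), 2j)          # the other root is -2j
root_i = cmath.sqrt(1j)                           # sqrt(i) = (1 + i)/sqrt(2)
expected = (1 + 1j) / math.sqrt(2)
assert cmath.isclose(root_i, expected)

# Logarithmic computation of a principal root of a positive real:
# r = exp((1/n) * log(x))
x, n = 34.0, 5
r = math.exp(math.log(x) / n)
assert math.isclose(r ** n, x)

# Odd n, negative x: the single real root is r = -|x|**(1/n)
assert math.isclose(-(abs(-8.0) ** (1 / 3)), -2.0)
```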
=== Roots of unity === The number 1 has n different nth roots in the complex plane, namely 1 , ω , ω 2 , … , ω n − 1 , {\displaystyle 1,\;\omega ,\;\omega ^{2},\;\ldots ,\;\omega ^{n-1},} where ω = e 2 π i n = cos ⁡ ( 2 π n ) + i sin ⁡ ( 2 π n ) . {\displaystyle \omega =e^{\frac {2\pi i}{n}}=\cos \left({\frac {2\pi }{n}}\right)+i\sin \left({\frac {2\pi }{n}}\right).} These roots are evenly spaced around the unit circle in the complex plane, at angles which are multiples of 2 π / n {\displaystyle 2\pi /n} . For example, the square roots of unity are 1 and −1, and the fourth roots of unity are 1, i {\displaystyle i} , −1, and − i {\displaystyle -i} . === nth roots === Every complex number has n different nth roots in the complex plane. These are η , η ω , η ω 2 , … , η ω n − 1 , {\displaystyle \eta ,\;\eta \omega ,\;\eta \omega ^{2},\;\ldots ,\;\eta \omega ^{n-1},} where η is a single nth root, and 1, ω, ω2, ... ωn−1 are the nth roots of unity. For example, the four different fourth roots of 2 are 2 4 , i 2 4 , − 2 4 , and − i 2 4 . {\displaystyle {\sqrt[{4}]{2}},\quad i{\sqrt[{4}]{2}},\quad -{\sqrt[{4}]{2}},\quad {\text{and}}\quad -i{\sqrt[{4}]{2}}.} In polar form, a single nth root may be found by the formula r e i θ n = r n ⋅ e i θ / n . {\displaystyle {\sqrt[{n}]{re^{i\theta }}}={\sqrt[{n}]{r}}\cdot e^{i\theta /n}.} Here r is the magnitude (the modulus, also called the absolute value) of the number whose root is to be taken; if the number can be written as a+bi then r = a 2 + b 2 {\displaystyle r={\sqrt {a^{2}+b^{2}}}} . Also, θ {\displaystyle \theta } is the angle formed as one pivots on the origin counterclockwise from the positive horizontal axis to a ray going from the origin to the number; it has the properties that cos ⁡ θ = a / r , {\displaystyle \cos \theta =a/r,} sin ⁡ θ = b / r , {\displaystyle \sin \theta =b/r,} and tan ⁡ θ = b / a . {\displaystyle \tan \theta =b/a.} Thus finding nth roots in the complex plane can be segmented into two steps. 
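The polar-form recipe can be sketched in a few lines of Python, generating all n roots from a single root η and the powers of the primitive nth root of unity ω (`all_nth_roots` is an illustrative name):

```python
import cmath

def all_nth_roots(z, n):
    """All n complex nth roots of z, as eta * omega**k for k = 0..n-1."""
    r, theta = abs(z), cmath.phase(z)
    eta = r ** (1 / n) * cmath.exp(1j * theta / n)   # one nth root, polar form
    omega = cmath.exp(2j * cmath.pi / n)             # primitive nth root of unity
    return [eta * omega ** k for k in range(n)]

# The four fourth roots of 2: 2**(1/4), i*2**(1/4), -2**(1/4), -i*2**(1/4)
roots = all_nth_roots(2, 4)
for w in roots:
    assert cmath.isclose(w ** 4, 2)
```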
First, the magnitude of all the nth roots is the nth root of the magnitude of the original number. Second, the angle between the positive horizontal axis and a ray from the origin to one of the nth roots is θ / n {\displaystyle \theta /n} , where θ {\displaystyle \theta } is the angle defined in the same way for the number whose root is being taken. Furthermore, all n of the nth roots are at equally spaced angles from each other. If n is even, a complex number's nth roots, of which there are an even number, come in additive inverse pairs, so that if a number r1 is one of the nth roots then r2 = −r1 is another. This is because raising the latter's coefficient −1 to the nth power for even n yields 1: that is, (−r1)n = (−1)n × r1n = r1n. As with square roots, the formula above does not define a continuous function over the entire complex plane, but instead has a branch cut at points where θ / n is discontinuous. == Solving polynomials == It was once conjectured that all polynomial equations could be solved algebraically (that is, that all roots of a polynomial could be expressed in terms of a finite number of radicals and elementary operations). However, while this is true for third degree polynomials (cubics) and fourth degree polynomials (quartics), the Abel–Ruffini theorem (1824) shows that this is not true in general when the degree is 5 or greater. For example, the solutions of the equation x 5 = x + 1 {\displaystyle x^{5}=x+1} cannot be expressed in terms of radicals. (cf. quintic equation) == Proof of irrationality for non-perfect nth power x == Assume that x n {\displaystyle {\sqrt[{n}]{x}}} is rational. That is, it can be reduced to a fraction a b {\displaystyle {\frac {a}{b}}} , where a and b are integers without a common factor. This means that x = a n b n {\displaystyle x={\frac {a^{n}}{b^{n}}}} . Since x is an integer, a n {\displaystyle a^{n}} and b n {\displaystyle b^{n}} must share a common factor if b ≠ 1 {\displaystyle b\neq 1} . 
This means that if b ≠ 1 {\displaystyle b\neq 1} , a n b n {\displaystyle {\frac {a^{n}}{b^{n}}}} is not in simplest form. Thus b should equal 1. Since 1 n = 1 {\displaystyle 1^{n}=1} and n 1 = n {\displaystyle {\frac {n}{1}}=n} , a n b n = a n {\displaystyle {\frac {a^{n}}{b^{n}}}=a^{n}} . This means that x = a n {\displaystyle x=a^{n}} and thus, x n = a {\displaystyle {\sqrt[{n}]{x}}=a} . This implies that x n {\displaystyle {\sqrt[{n}]{x}}} is an integer. Since x is not a perfect nth power, this is impossible. Thus x n {\displaystyle {\sqrt[{n}]{x}}} is irrational. == See also == Geometric mean Twelfth root of two == References == == External links ==
The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action. Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology. 
== History == The calculus of variations began with the work of Isaac Newton, notably Newton's minimal resistance problem, which he formulated and solved in 1685 and published in his Principia in 1687. It was the first problem in the field to be formulated and correctly solved, and was also one of the most difficult problems tackled by variational methods prior to the twentieth century. This problem was followed by the brachistochrone curve problem raised by Johann Bernoulli (1696), which was similar to one raised by Galileo Galilei in 1638, though Galileo neither solved the problem explicitly nor used methods based on calculus. Bernoulli solved the problem using the principle of least time rather than the calculus of variations, whereas Newton solved it in 1697 using variational methods; with his work on these two problems, Newton pioneered the field. The problem would immediately occupy the attention of Jacob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Joseph-Louis Lagrange was influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum. Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject. To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Pierre Frédéric Sarrus (1842) which was condensed and improved by Augustin-Louis Cauchy (1844).
Other valuable treatises and memoirs have been written by Strauch (1849), John Hewitt Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and the 23rd Hilbert problem published in 1900 encouraged further development. In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an alternative to the calculus of variations. == Extrema == The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements y {\displaystyle y} of a given function space defined over a given domain. A functional J [ y ] {\displaystyle J[y]} is said to have an extremum at the function f {\displaystyle f} if Δ J = J [ y ] − J [ f ] {\displaystyle \Delta J=J[y]-J[f]} has the same sign for all y {\displaystyle y} in an arbitrarily small neighborhood of f . {\displaystyle f.} The function f {\displaystyle f} is called an extremal function or extremal. The extremum J [ f ] {\displaystyle J[f]} is called a local maximum if Δ J ≤ 0 {\displaystyle \Delta J\leq 0} everywhere in an arbitrarily small neighborhood of f , {\displaystyle f,} and a local minimum if Δ J ≥ 0 {\displaystyle \Delta J\geq 0} there. 
For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not. Both strong and weak extrema of functionals are for a space of continuous functions but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. == Euler–Lagrange equation == Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x , {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L\left(x,y(x),y'(x)\right)\,dx,} where x 1 , x 2 {\displaystyle x_{1},x_{2}} are constants, y ( x ) {\displaystyle y(x)} is twice continuously differentiable, y ′ ( x ) = d y d x , {\displaystyle y'(x)={\frac {dy}{dx}},} L ( x , y ( x ) , y ′ ( x ) ) {\displaystyle L\left(x,y(x),y'(x)\right)} is twice continuously differentiable with respect to its arguments x , y , {\displaystyle x,y,} and y ′ . 
{\displaystyle y'.} If the functional J [ y ] {\displaystyle J[y]} attains a local minimum at f , {\displaystyle f,} and η ( x ) {\displaystyle \eta (x)} is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 {\displaystyle x_{1}} and x 2 , {\displaystyle x_{2},} then for any number ε {\displaystyle \varepsilon } close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term ε η {\displaystyle \varepsilon \eta } is called the variation of the function f {\displaystyle f} and is denoted by δ f . {\displaystyle \delta f.} Substituting f + ε η {\displaystyle f+\varepsilon \eta } for y {\displaystyle y} in the functional J [ y ] , {\displaystyle J[y],} the result is a function of ε , {\displaystyle \varepsilon ,} Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J [ y ] {\displaystyle J[y]} has a minimum for y = f {\displaystyle y=f} the function Φ ( ε ) {\displaystyle \Phi (\varepsilon )} has a minimum at ε = 0 {\displaystyle \varepsilon =0} and thus, Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . 
{\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L [ x , y , y ′ ] , {\displaystyle L\left[x,y,y'\right],} where y = f + ε η {\displaystyle y=f+\varepsilon \eta } and y ′ = f ′ + ε η ′ {\displaystyle y'=f'+\varepsilon \eta '} are considered as functions of ε {\displaystyle \varepsilon } rather than x , {\displaystyle x,} yields d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and because d y d ε = η {\displaystyle {\frac {dy}{d\varepsilon }}=\eta } and d y ′ d ε = η ′ , {\displaystyle {\frac {dy'}{d\varepsilon }}=\eta ',} d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ . {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '.} Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ∂ L ∂ f η d x + ∂ L ∂ f ′ η | x 1 x 2 − ∫ x 1 x 2 η d d x ∂ L ∂ f ′ d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}{\frac {\partial L}{\partial f}}\eta \,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}-\int _{x_{1}}^{x_{2}}\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx\\\end{aligned}}} where L [ x , y , y ′ ] → L [ x , f , f ′ ] {\displaystyle L\left[x,y,y'\right]\to L\left[x,f,f'\right]} when ε = 0 {\displaystyle \varepsilon =0} and we have used 
integration by parts on the second term. The second term on the second line vanishes because η = 0 {\displaystyle \eta =0} at x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( x ) ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta (x)\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J [ f ] {\displaystyle J[f]} and is denoted δ J {\displaystyle \delta J} or δ f ( x ) . {\displaystyle \delta f(x).} In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f ( x ) . {\displaystyle f(x).} The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J [ f ] . {\displaystyle J[f].} A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. === Example === In order to illustrate this process, consider the problem of finding the extremal function y = f ( x ) , {\displaystyle y=f(x),} which is the shortest curve that connects two points ( x 1 , y 1 ) {\displaystyle \left(x_{1},y_{1}\right)} and ( x 2 , y 2 ) . {\displaystyle \left(x_{2},y_{2}\right).} The arc length of the curve is given by A [ y ] = ∫ x 1 x 2 1 + [ y ′ ( x ) ] 2 d x , {\displaystyle A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,} with y ′ ( x ) = d y d x , y 1 = f ( x 1 ) , y 2 = f ( x 2 ) . 
{\displaystyle y'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.} Note that assuming y is a function of x loses generality; ideally both should be a function of some other parameter. This approach is good solely for instructive purposes. The Euler–Lagrange equation will now be used to find the extremal function f ( x ) {\displaystyle f(x)} that minimizes the functional A [ y ] . {\displaystyle A[y].} ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} with L = 1 + [ f ′ ( x ) ] 2 . {\displaystyle L={\sqrt {1+[f'(x)]^{2}}}\,.} Since f {\displaystyle f} does not appear explicitly in L , {\displaystyle L,} the first term in the Euler–Lagrange equation vanishes for all f ( x ) {\displaystyle f(x)} and thus, d d x ∂ L ∂ f ′ = 0 . {\displaystyle {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.} Substituting for L {\displaystyle L} and taking the derivative, d d x f ′ ( x ) 1 + [ f ′ ( x ) ] 2 = 0 . {\displaystyle {\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.} Thus f ′ ( x ) 1 + [ f ′ ( x ) ] 2 = c , {\displaystyle {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}=c\,,} for some constant c {\displaystyle c} . Then [ f ′ ( x ) ] 2 1 + [ f ′ ( x ) ] 2 = c 2 , {\displaystyle {\frac {[f'(x)]^{2}}{1+[f'(x)]^{2}}}=c^{2}\,,} where 0 ≤ c 2 < 1. 
{\displaystyle 0\leq c^{2}<1.} Solving, we get [ f ′ ( x ) ] 2 = c 2 1 − c 2 {\displaystyle [f'(x)]^{2}={\frac {c^{2}}{1-c^{2}}}} which implies that f ′ ( x ) = m {\displaystyle f'(x)=m} is a constant and therefore that the shortest curve that connects two points ( x 1 , y 1 ) {\displaystyle \left(x_{1},y_{1}\right)} and ( x 2 , y 2 ) {\displaystyle \left(x_{2},y_{2}\right)} is f ( x ) = m x + b with m = y 2 − y 1 x 2 − x 1 and b = x 2 y 1 − x 1 y 2 x 2 − x 1 {\displaystyle f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}} and we have thus found the extremal function f ( x ) {\displaystyle f(x)} that minimizes the functional A [ y ] {\displaystyle A[y]} so that A [ f ] {\displaystyle A[f]} is a minimum. The equation for a straight line is y = m x + b . {\displaystyle y=mx+b.} In other words, the shortest distance between two points is a straight line. == Beltrami's identity == In physics problems it may be the case that ∂ L ∂ x = 0 , {\displaystyle {\frac {\partial L}{\partial x}}=0,} meaning the integrand is a function of f ( x ) {\displaystyle f(x)} and f ′ ( x ) {\displaystyle f'(x)} but x {\displaystyle x} does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity L − f ′ ∂ L ∂ f ′ = C , {\displaystyle L-f'{\frac {\partial L}{\partial f'}}=C\,,} where C {\displaystyle C} is a constant. The left hand side is the Legendre transformation of L {\displaystyle L} with respect to f ′ ( x ) . {\displaystyle f'(x).} The intuition behind this result is that, if the variable x {\displaystyle x} is actually time, then the statement ∂ L ∂ x = 0 {\displaystyle {\frac {\partial L}{\partial x}}=0} implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. 
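The conclusion of the shortest-curve example can be checked numerically: discretizing the arc-length functional as a polyline, the straight line beats every perturbed curve with the same endpoints, and a finite-difference estimate of Φ′(0) vanishes at the minimizer. A rough Python sketch (step counts and test curves are arbitrary choices):

```python
import math

def arc_length(f, x1, x2, steps=2000):
    """Chord-sum (polyline) approximation of A[f] = integral of sqrt(1 + f'(x)^2) dx."""
    h = (x2 - x1) / steps
    return sum(math.hypot(h, f(x1 + (k + 1) * h) - f(x1 + k * h))
               for k in range(steps))

def line(x):          # straight line joining (0, 1) and (1, 3)
    return 2 * x + 1

def eta(x):           # admissible variation: vanishes at both endpoints
    return math.sin(math.pi * x)

# Any nonzero perturbation of the straight line increases the length ...
for eps in (0.5, 0.1, 0.01):
    assert arc_length(lambda x: line(x) + eps * eta(x), 0, 1) >= arc_length(line, 0, 1)

# ... and the first variation Phi'(0) vanishes at the minimizer.
def phi(e):
    return arc_length(lambda x: line(x) + e * eta(x), 0, 1)

derivative = (phi(1e-5) - phi(-1e-5)) / 2e-5
assert abs(derivative) < 1e-6
```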
In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity. == Euler–Poisson equation == If S {\displaystyle S} depends on higher-derivatives of y ( x ) {\displaystyle y(x)} , that is, if S = ∫ a b f ( x , y ( x ) , y ′ ( x ) , … , y ( n ) ( x ) ) d x , {\displaystyle S=\int _{a}^{b}f(x,y(x),y'(x),\dots ,y^{(n)}(x))dx,} then y {\displaystyle y} must satisfy the Euler–Poisson equation, ∂ f ∂ y − d d x ( ∂ f ∂ y ′ ) + ⋯ + ( − 1 ) n d n d x n [ ∂ f ∂ y ( n ) ] = 0. {\displaystyle {\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+\dots +(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.} == Du Bois-Reymond's theorem == The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral J {\displaystyle J} requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If L {\displaystyle L} has continuous first and second derivatives with respect to all of its arguments, and if ∂ 2 L ∂ f ′ 2 ≠ 0 , {\displaystyle {\frac {\partial ^{2}L}{\partial f'^{2}}}\neq 0,} then f {\displaystyle f} has two continuous derivatives, and it satisfies the Euler–Lagrange equation. == Lavrentiev phenomenon == Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior. 
However Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934: L [ x ] = ∫ 0 1 ( x 3 − t ) 2 x ′ 6 , {\displaystyle L[x]=\int _{0}^{1}(x^{3}-t)^{2}x'^{6},} A = { x ∈ W 1 , 1 ( 0 , 1 ) : x ( 0 ) = 0 , x ( 1 ) = 1 } . {\displaystyle {A}=\{x\in W^{1,1}(0,1):x(0)=0,\ x(1)=1\}.} Clearly, x ( t ) = t 1 3 {\displaystyle x(t)=t^{\frac {1}{3}}} minimizes the functional, but we find that any function x ∈ W 1 , ∞ {\displaystyle x\in W^{1,\infty }} gives a value bounded away from the infimum. Examples (in one dimension) are traditionally manifested across W 1 , 1 {\displaystyle W^{1,1}} and W 1 , ∞ , {\displaystyle W^{1,\infty },} but Ball and Mizel procured the first functional that displayed Lavrentiev's Phenomenon across W 1 , p {\displaystyle W^{1,p}} and W 1 , q {\displaystyle W^{1,q}} for 1 ≤ p < q < ∞ . {\displaystyle 1\leq p<q<\infty .} There are several results that give criteria under which the phenomenon does not occur - for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D) - but results are often particular, and applicable to a small class of functionals. Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property. == Functions of several variables == For example, if φ ( x , y ) {\displaystyle \varphi (x,y)} denotes the displacement of a membrane above the domain D {\displaystyle D} in the x , y {\displaystyle x,y} plane, then its potential energy is proportional to its surface area: U [ φ ] = ∬ D 1 + ∇ φ ⋅ ∇ φ d x d y .
{\displaystyle U[\varphi ]=\iint _{D}{\sqrt {1+\nabla \varphi \cdot \nabla \varphi }}\,dx\,dy.} Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of D {\displaystyle D} ; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear: φ x x ( 1 + φ y 2 ) + φ y y ( 1 + φ x 2 ) − 2 φ x φ y φ x y = 0. {\displaystyle \varphi _{xx}(1+\varphi _{y}^{2})+\varphi _{yy}(1+\varphi _{x}^{2})-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.} See Courant (1950) for details. === Dirichlet's principle === It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by V [ φ ] = 1 2 ∬ D ∇ φ ⋅ ∇ φ d x d y . {\displaystyle V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.} The functional V {\displaystyle V} is to be minimized among all trial functions φ {\displaystyle \varphi } that assume prescribed values on the boundary of D {\displaystyle D} . If u {\displaystyle u} is the minimizing function and v {\displaystyle v} is an arbitrary smooth function that vanishes on the boundary of D {\displaystyle D} , then the first variation of V [ u + ε v ] {\displaystyle V[u+\varepsilon v]} must vanish: d d ε V [ u + ε v ] | ε = 0 = ∬ D ∇ u ⋅ ∇ v d x d y = 0. 
{\displaystyle \left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.} Provided that u {\displaystyle u} has two derivatives, we may apply the divergence theorem to obtain ∬ D ∇ ⋅ ( v ∇ u ) d x d y = ∬ D ∇ u ⋅ ∇ v + v ∇ ⋅ ∇ u d x d y = ∫ C v ∂ u ∂ n d s , {\displaystyle \iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,} where C {\displaystyle C} is the boundary of D , {\displaystyle D,} s {\displaystyle s} is arclength along C {\displaystyle C} and ∂ u / ∂ n {\displaystyle \partial u/\partial n} is the normal derivative of u {\displaystyle u} on C . {\displaystyle C.} Since v {\displaystyle v} vanishes on C {\displaystyle C} and the first variation vanishes, the result is ∬ D v ∇ ⋅ ∇ u d x d y = 0 {\displaystyle \iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0} for all smooth functions v {\displaystyle v} that vanish on the boundary of D {\displaystyle D} . The proof for the case of one dimensional integrals may be adapted to this case to show that ∇ ⋅ ∇ u = 0 {\displaystyle \nabla \cdot \nabla u=0} in D . {\displaystyle D.} The difficulty with this reasoning is the assumption that the minimizing function u {\displaystyle u} must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimize W [ φ ] = ∫ − 1 1 ( x φ ′ ) 2 d x {\displaystyle W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx} among all functions φ {\displaystyle \varphi } that satisfy φ ( − 1 ) = − 1 {\displaystyle \varphi (-1)=-1} and φ ( 1 ) = 1. 
{\displaystyle \varphi (1)=1.} W {\displaystyle W} can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makes W = 0. {\displaystyle W=0.} Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998). === Generalization to other boundary value problems === A more general expression for the potential energy of a membrane is V [ φ ] = ∬ D [ 1 2 ∇ φ ⋅ ∇ φ + f ( x , y ) φ ] d x d y + ∫ C [ 1 2 σ ( s ) φ 2 + g ( s ) φ ] d s . {\displaystyle V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy\,+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.} This corresponds to an external force density f ( x , y ) {\displaystyle f(x,y)} in D , {\displaystyle D,} an external force g ( s ) {\displaystyle g(s)} on the boundary C , {\displaystyle C,} and elastic forces with modulus σ ( s ) {\displaystyle \sigma (s)} acting on C {\displaystyle C} . The function that minimizes the potential energy with no restriction on its boundary values will be denoted by u {\displaystyle u} . Provided that f {\displaystyle f} and g {\displaystyle g} are continuous, regularity theory implies that the minimizing function u {\displaystyle u} will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment v {\displaystyle v} . The first variation of V [ u + ε v ] {\displaystyle V[u+\varepsilon v]} is given by ∬ D [ ∇ u ⋅ ∇ v + f v ] d x d y + ∫ C [ σ u v + g v ] d s = 0. {\displaystyle \iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.} If we apply the divergence theorem, the result is ∬ D [ − v ∇ ⋅ ∇ u + v f ] d x d y + ∫ C v [ ∂ u ∂ n + σ u + g ] d s = 0. 
{\displaystyle \iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.} If we first set v = 0 {\displaystyle v=0} on C , {\displaystyle C,} the boundary integral vanishes, and we conclude as before that − ∇ ⋅ ∇ u + f = 0 {\displaystyle -\nabla \cdot \nabla u+f=0} in D {\displaystyle D} . Then if we allow v {\displaystyle v} to assume arbitrary boundary values, this implies that u {\displaystyle u} must satisfy the boundary condition ∂ u ∂ n + σ u + g = 0 , {\displaystyle {\frac {\partial u}{\partial n}}+\sigma u+g=0,} on C {\displaystyle C} . This boundary condition is a consequence of the minimizing property of u {\displaystyle u} : it is not imposed beforehand. Such conditions are called natural boundary conditions. The preceding reasoning is not valid if σ {\displaystyle \sigma } vanishes identically on C . {\displaystyle C.} In such a case, we could allow a trial function φ ≡ c {\displaystyle \varphi \equiv c} , where c {\displaystyle c} is a constant. For such a trial function, V [ c ] = c [ ∬ D f d x d y + ∫ C g d s ] . {\displaystyle V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].} By appropriate choice of c {\displaystyle c} , V {\displaystyle V} can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless ∬ D f d x d y + ∫ C g d s = 0. {\displaystyle \iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.} This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953). == Eigenvalue problems == Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems. 
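The natural boundary condition derived above can be observed numerically in a one-dimensional analogue: minimizing the discrete energy V[u] = ∫₀¹ ½u′² dx + ½σu(1)² + g·u(1) with u(0) = 0 fixed, by plain gradient descent, produces the condition u′(1) + σu(1) + g = 0 without it ever being imposed. The grid size, parameters, and step size below are illustrative choices, not part of the original discussion.

```python
# Hedged sketch (1D analogue): minimize the discrete energy
#   V[u] = sum_i (u_{i+1}-u_i)^2 / (2h) + sigma*u_N^2/2 + g*u_N,   u_0 = 0 fixed,
# by gradient descent. No condition is imposed at x = 1; the natural boundary
# condition u'(1) + sigma*u(1) + g = 0 emerges at the minimizer u(x) = -g*x/(1+sigma).

N, sigma, g = 20, 2.0, 1.0
h = 1.0 / N
u = [0.0] * (N + 1)                      # u[0] stays 0 (essential boundary condition)

for _ in range(40000):
    grad = [0.0] * (N + 1)
    for i in range(1, N):
        grad[i] = (2*u[i] - u[i-1] - u[i+1]) / h      # interior energy gradient
    grad[N] = (u[N] - u[N-1]) / h + sigma*u[N] + g    # free-end gradient
    for i in range(1, N + 1):
        u[i] -= 0.01 * grad[i]                        # small step keeps descent stable

exact_slope = -g / (1 + sigma)                        # minimizer is u(x) = exact_slope * x
assert abs(u[N] - exact_slope) < 1e-3                 # value at x = 1
assert abs((u[N] - u[N-1]) / h + sigma*u[N] + g) < 1e-3   # natural condition holds
```

The minimization alone selects the boundary behaviour, which is exactly the point of calling the condition "natural".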
=== Sturm–Liouville problems === The Sturm–Liouville eigenvalue problem involves a general quadratic form Q [ y ] = ∫ x 1 x 2 [ p ( x ) y ′ ( x ) 2 + q ( x ) y ( x ) 2 ] d x , {\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,} where y {\displaystyle y} is restricted to functions that satisfy the boundary conditions y ( x 1 ) = 0 , y ( x 2 ) = 0. {\displaystyle y(x_{1})=0,\quad y(x_{2})=0.} Let R {\displaystyle R} be a normalization integral R [ y ] = ∫ x 1 x 2 r ( x ) y ( x ) 2 d x . {\displaystyle R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.} The functions p ( x ) {\displaystyle p(x)} and r ( x ) {\displaystyle r(x)} are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio Q / R {\displaystyle Q/R} among all y {\displaystyle y} satisfying the endpoint conditions, which is equivalent to minimizing Q [ y ] {\displaystyle Q[y]} under the constraint that R [ y ] {\displaystyle R[y]} is constant. It is shown below that the Euler–Lagrange equation for the minimizing u {\displaystyle u} is − ( p u ′ ) ′ + q u − λ r u = 0 , {\displaystyle -(pu')'+qu-\lambda ru=0,} where λ {\displaystyle \lambda } is the quotient λ = Q [ u ] R [ u ] . {\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.} It can be shown (see Gelfand and Fomin 1963) that the minimizing u {\displaystyle u} has two derivatives and satisfies the Euler–Lagrange equation. The associated λ {\displaystyle \lambda } will be denoted by λ 1 {\displaystyle \lambda _{1}} ; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by u 1 ( x ) {\displaystyle u_{1}(x)} . This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating u {\displaystyle u} as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. 
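As a concrete sketch of the Rayleigh–Ritz idea, consider −u″ = λu on (0, π) with u(0) = u(π) = 0, i.e. p = r = 1 and q = 0, for which the exact lowest eigenvalue is λ₁ = 1 with u₁ = sin x. A single polynomial trial function already gives a close upper bound; the trial function and the quadrature rule below are illustrative choices.

```python
# Hedged sketch: Rayleigh quotient Q/R for -u'' = lambda*u on (0, pi), u(0)=u(pi)=0.
# The trial function u(x) = x*(pi - x) gives Q/R = 10/pi^2 ≈ 1.0132 >= lambda_1 = 1.
import math

def quad(f, a, b, n=2000):
    # composite Simpson rule (n even); an arbitrary quadrature choice
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3

u  = lambda x: x * (math.pi - x)
du = lambda x: math.pi - 2*x

Q = quad(lambda x: du(x)**2, 0, math.pi)   # stiffness integral, exactly pi^3/3
R = quad(lambda x: u(x)**2, 0, math.pi)    # normalization integral, exactly pi^5/30
quotient = Q / R

assert quotient >= 1.0                              # variational upper bound on lambda_1
assert abs(quotient - 10 / math.pi**2) < 1e-6       # matches the closed form 10/pi^2
```

Enlarging the trial space to a linear combination of several basis functions and minimizing over the coefficients reduces this bound further, which is the finite-dimensional minimization described above.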
This method is often surprisingly accurate. The next smallest eigenvalue and eigenfunction can be obtained by minimizing Q {\displaystyle Q} under the additional constraint ∫ x 1 x 2 r ( x ) u 1 ( x ) y ( x ) d x = 0. {\displaystyle \int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.} This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem. The variational problem also applies to more general boundary conditions. Instead of requiring that y {\displaystyle y} vanish at the endpoints, we may not impose any condition at the endpoints, and set Q [ y ] = ∫ x 1 x 2 [ p ( x ) y ′ ( x ) 2 + q ( x ) y ( x ) 2 ] d x + a 1 y ( x 1 ) 2 + a 2 y ( x 2 ) 2 , {\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},} where a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are arbitrary. If we set y = u + ε v {\displaystyle y=u+\varepsilon v} , the first variation for the ratio Q / R {\displaystyle Q/R} is V 1 = 2 R [ u ] ( ∫ x 1 x 2 [ p ( x ) u ′ ( x ) v ′ ( x ) + q ( x ) u ( x ) v ( x ) − λ r ( x ) u ( x ) v ( x ) ] d x + a 1 u ( x 1 ) v ( x 1 ) + a 2 u ( x 2 ) v ( x 2 ) ) , {\displaystyle V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),} where λ {\displaystyle \lambda } is given by the ratio Q [ u ] / R [ u ] {\displaystyle Q[u]/R[u]} as previously. After integration by parts, R [ u ] 2 V 1 = ∫ x 1 x 2 v ( x ) [ − ( p u ′ ) ′ + q u − λ r u ] d x + v ( x 1 ) [ − p ( x 1 ) u ′ ( x 1 ) + a 1 u ( x 1 ) ] + v ( x 2 ) [ p ( x 2 ) u ′ ( x 2 ) + a 2 u ( x 2 ) ] . 
{\displaystyle {\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].} If we first require that v {\displaystyle v} vanish at the endpoints, the first variation will vanish for all such v {\displaystyle v} only if − ( p u ′ ) ′ + q u − λ r u = 0 for x 1 < x < x 2 . {\displaystyle -(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.} If u {\displaystyle u} satisfies this condition, then the first variation will vanish for arbitrary v {\displaystyle v} only if − p ( x 1 ) u ′ ( x 1 ) + a 1 u ( x 1 ) = 0 , and p ( x 2 ) u ′ ( x 2 ) + a 2 u ( x 2 ) = 0. {\displaystyle -p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.} These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization. === Eigenvalue problems in several dimensions === Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain D {\displaystyle D} with boundary B {\displaystyle B} in three dimensions we may define Q [ φ ] = ∭ D p ( X ) ∇ φ ⋅ ∇ φ + q ( X ) φ 2 d x d y d z + ∬ B σ ( S ) φ 2 d S , {\displaystyle Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,} and R [ φ ] = ∭ D r ( X ) φ ( X ) 2 d x d y d z . {\displaystyle R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.} Let u {\displaystyle u} be the function that minimizes the quotient Q [ φ ] / R [ φ ] {\displaystyle Q[\varphi ]/R[\varphi ]} , with no condition prescribed on the boundary B . 
{\displaystyle B.} The Euler–Lagrange equation satisfied by u {\displaystyle u} is − ∇ ⋅ ( p ( X ) ∇ u ) + q ( X ) u − λ r ( X ) u = 0 , {\displaystyle -\nabla \cdot (p(X)\nabla u)+q(X)u-\lambda r(X)u=0,} where λ = Q [ u ] R [ u ] . {\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.} The minimizing u {\displaystyle u} must also satisfy the natural boundary condition p ( S ) ∂ u ∂ n + σ ( S ) u = 0 , {\displaystyle p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,} on the boundary B . {\displaystyle B.} This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953). == Applications == === Optics === Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the x {\displaystyle x} -coordinate is chosen as the parameter along the path, and y = f ( x ) {\displaystyle y=f(x)} along the path, then the optical length is given by A [ f ] = ∫ x 0 x 1 n ( x , f ( x ) ) 1 + f ′ ( x ) 2 d x , {\displaystyle A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}dx,} where the refractive index n ( x , y ) {\displaystyle n(x,y)} depends upon the material. If we try f ( x ) = f 0 ( x ) + ε f 1 ( x ) {\displaystyle f(x)=f_{0}(x)+\varepsilon f_{1}(x)} then the first variation of A {\displaystyle A} (the derivative of A {\displaystyle A} with respect to ε {\displaystyle \varepsilon } ) is δ A [ f 0 , f 1 ] = ∫ x 0 x 1 [ n ( x , f 0 ) f 0 ′ ( x ) f 1 ′ ( x ) 1 + f 0 ′ ( x ) 2 + n y ( x , f 0 ) f 1 1 + f 0 ′ ( x ) 2 ] d x .
{\displaystyle \delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.} After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation − d d x [ n ( x , f 0 ) f 0 ′ 1 + f 0 ′ 2 ] + n y ( x , f 0 ) 1 + f 0 ′ ( x ) 2 = 0. {\displaystyle -{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.} The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics. ==== Snell's law ==== There is a discontinuity of the refractive index when light enters or leaves a lens. Let n ( x , y ) = { n ( − ) if x < 0 , n ( + ) if x > 0 , {\displaystyle n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}} where n ( − ) {\displaystyle n_{(-)}} and n ( + ) {\displaystyle n_{(+)}} are constants. Then the Euler–Lagrange equation holds as before in the region where x < 0 {\displaystyle x<0} or x > 0 {\displaystyle x>0} , and in fact the path is a straight line there, since the refractive index is constant. At x = 0 {\displaystyle x=0} , f {\displaystyle f} must be continuous, but f ′ {\displaystyle f'} may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form δ A [ f 0 , f 1 ] = f 1 ( 0 ) [ n ( − ) f 0 ′ ( 0 − ) 1 + f 0 ′ ( 0 − ) 2 − n ( + ) f 0 ′ ( 0 + ) 1 + f 0 ′ ( 0 + ) 2 ] .
{\displaystyle \delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].} The factor multiplying n ( − ) {\displaystyle n_{(-)}} is the sine of the angle of the incident ray with the x {\displaystyle x} axis, and the factor multiplying n ( + ) {\displaystyle n_{(+)}} is the sine of the angle of the refracted ray with the x {\displaystyle x} axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to the vanishing of the first variation of the optical path length. ==== Fermat's principle in three dimensions ==== It is expedient to use vector notation: let X = ( x 1 , x 2 , x 3 ) , {\displaystyle X=(x_{1},x_{2},x_{3}),} let t {\displaystyle t} be a parameter, let X ( t ) {\displaystyle X(t)} be the parametric representation of a curve C , {\displaystyle C,} and let X ˙ ( t ) {\displaystyle {\dot {X}}(t)} be its tangent vector. The optical length of the curve is given by A [ C ] = ∫ t 0 t 1 n ( X ) X ˙ ⋅ X ˙ d t . {\displaystyle A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.} Note that this integral is invariant with respect to changes in the parametric representation of C . {\displaystyle C.} The Euler–Lagrange equations for a minimizing curve have the symmetric form d d t P = X ˙ ⋅ X ˙ ∇ n , {\displaystyle {\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,} where P = n ( X ) X ˙ X ˙ ⋅ X ˙ . {\displaystyle P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.} It follows from the definition that P {\displaystyle P} satisfies P ⋅ P = n ( X ) 2 . {\displaystyle P\cdot P=n(X)^{2}.} Therefore, the integral may also be written as A [ C ] = ∫ t 0 t 1 P ⋅ X ˙ d t .
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.} This form suggests that if we can find a function ψ {\displaystyle \psi } whose gradient is given by P , {\displaystyle P,} then the integral A {\displaystyle A} is given by the difference of ψ {\displaystyle \psi } at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of ψ {\displaystyle \psi } . In order to find such a function, we turn to the wave equation, which governs the propagation of light. ===== Connection with the wave equation ===== The wave equation for an inhomogeneous medium is u t t = c 2 ∇ ⋅ ∇ u , {\displaystyle u_{tt}=c^{2}\nabla \cdot \nabla u,} where c {\displaystyle c} is the velocity, which generally depends upon X {\displaystyle X} . Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy φ t 2 = c ( X ) 2 ∇ φ ⋅ ∇ φ . {\displaystyle \varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .} We may look for solutions in the form φ ( t , X ) = t − ψ ( X ) . {\displaystyle \varphi (t,X)=t-\psi (X).} In that case, ψ {\displaystyle \psi } satisfies ∇ ψ ⋅ ∇ ψ = n 2 , {\displaystyle \nabla \psi \cdot \nabla \psi =n^{2},} where n = 1 / c {\displaystyle n=1/c} . According to the theory of first-order partial differential equations, if P = ∇ ψ , {\displaystyle P=\nabla \psi ,} then P {\displaystyle P} satisfies d P d s = n ∇ n , {\displaystyle {\frac {dP}{ds}}=n\,\nabla n,} along a system of curves (the light rays) that are given by d X d s = P . {\displaystyle {\frac {dX}{ds}}=P.} These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification d s d t = X ˙ ⋅ X ˙ n .
{\displaystyle {\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.} We conclude that the function ψ {\displaystyle \psi } is the value of the minimizing integral A {\displaystyle A} as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems. === Mechanics === In classical mechanics, the action, S , {\displaystyle S,} is defined as the time integral of the Lagrangian, L {\displaystyle L} . The Lagrangian is the difference of energies, L = T − U , {\displaystyle L=T-U,} where T {\displaystyle T} is the kinetic energy of a mechanical system and U {\displaystyle U} its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral S = ∫ t 0 t 1 L ( x , x ˙ , t ) d t {\displaystyle S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt} is stationary with respect to variations in the path x ( t ) {\displaystyle x(t)} . The Euler–Lagrange equations for this system are known as Lagrange's equations: d d t ∂ L ∂ x ˙ = ∂ L ∂ x , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},} and they are equivalent to Newton's equations of motion (for such systems). The conjugate momenta p {\displaystyle p} are defined by p = ∂ L ∂ x ˙ . {\displaystyle p={\frac {\partial L}{\partial {\dot {x}}}}.} For example, if T = 1 2 m x ˙ 2 , {\displaystyle T={\frac {1}{2}}m{\dot {x}}^{2},} then p = m x ˙ .
{\displaystyle p=m{\dot {x}}.} Hamiltonian mechanics results if the conjugate momenta are introduced in place of x ˙ {\displaystyle {\dot {x}}} by a Legendre transformation of the Lagrangian L {\displaystyle L} into the Hamiltonian H {\displaystyle H} defined by H ( x , p , t ) = p x ˙ − L ( x , x ˙ , t ) . {\displaystyle H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).} The Hamiltonian is the total energy of the system: H = T + U {\displaystyle H=T+U} . Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of X {\displaystyle X} . This function is a solution of the Hamilton–Jacobi equation: ∂ ψ ∂ t + H ( x , ∂ ψ ∂ x , t ) = 0. {\displaystyle {\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.} === Further applications === Further applications of the calculus of variations include the following: The derivation of the catenary shape Solution to Newton's minimal resistance problem Solution to the brachistochrone problem Solution to the tautochrone problem Solution to isoperimetric problems Calculating geodesics Finding minimal surfaces and solving Plateau's problem Optimal control Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics; Geometric optics, especially Lagrangian and Hamiltonian optics; Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states; Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning; Variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity; Finite element method is a variational method for finding numerical solutions to boundary-value problems in 
differential equations; Total variation denoising, an image processing method for filtering high variance or noisy signals. == Variations and sufficient condition for a minimum == Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. The first variation is defined as the linear part of the change in the functional, and the second variation is defined as the quadratic part. For example, if J [ y ] {\displaystyle J[y]} is a functional with the function y = y ( x ) {\displaystyle y=y(x)} as its argument, and there is a small change in its argument from y {\displaystyle y} to y + h , {\displaystyle y+h,} where h = h ( x ) {\displaystyle h=h(x)} is a function in the same function space as y {\displaystyle y} , then the corresponding change in the functional is Δ J [ h ] = J [ y + h ] − J [ y ] . {\displaystyle \Delta J[h]=J[y+h]-J[y].} The functional J [ y ] {\displaystyle J[y]} is said to be differentiable if Δ J [ h ] = φ [ h ] + ε ‖ h ‖ , {\displaystyle \Delta J[h]=\varphi [h]+\varepsilon \|h\|,} where φ [ h ] {\displaystyle \varphi [h]} is a linear functional, ‖ h ‖ {\displaystyle \|h\|} is the norm of h , {\displaystyle h,} and ε → 0 {\displaystyle \varepsilon \to 0} as ‖ h ‖ → 0. {\displaystyle \|h\|\to 0.} The linear functional φ [ h ] {\displaystyle \varphi [h]} is the first variation of J [ y ] {\displaystyle J[y]} and is denoted by, δ J [ h ] = φ [ h ] . {\displaystyle \delta J[h]=\varphi [h].} The functional J [ y ] {\displaystyle J[y]} is said to be twice differentiable if Δ J [ h ] = φ 1 [ h ] + φ 2 [ h ] + ε ‖ h ‖ 2 , {\displaystyle \Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},} where φ 1 [ h ] {\displaystyle \varphi _{1}[h]} is a linear functional (the first variation), φ 2 [ h ] {\displaystyle \varphi _{2}[h]} is a quadratic functional, and ε → 0 {\displaystyle \varepsilon \to 0} as ‖ h ‖ → 0. 
{\displaystyle \|h\|\to 0.} The quadratic functional φ 2 [ h ] {\displaystyle \varphi _{2}[h]} is the second variation of J [ y ] {\displaystyle J[y]} and is denoted by, δ 2 J [ h ] = φ 2 [ h ] . {\displaystyle \delta ^{2}J[h]=\varphi _{2}[h].} The second variation δ 2 J [ h ] {\displaystyle \delta ^{2}J[h]} is said to be strongly positive if δ 2 J [ h ] ≥ k ‖ h ‖ 2 , {\displaystyle \delta ^{2}J[h]\geq k\|h\|^{2},} for all h {\displaystyle h} and for some constant k > 0 {\displaystyle k>0} . Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated. == See also == == Notes == == References == == Further reading == Benesova, B. and Kruzik, M.: "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review 59(4) (2017), 703–766. Bolza, O.: Lectures on the Calculus of Variations. Chelsea Publishing Company, 1904, available on Digital Mathematics library. 2nd edition republished in 1961, paperback in 2005, ISBN 978-1-4181-8201-4. Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968. Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950. Dacorogna, Bernard: "Introduction" Introduction to the Calculus of Variations, 3rd edition. 2014, World Scientific Publishing, ISBN 978-1-78326-551-0. Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962. Forsyth, A.R.: Calculus of Variations, Dover, 1960. Fox, Charles: An Introduction to the Calculus of Variations, Dover Publ., 1987. Giaquinta, Mariano; Hildebrandt, Stefan: Calculus of Variations I and II, Springer-Verlag, ISBN 978-3-662-03278-7 and ISBN 978-3-662-06201-2 Jost, J. and X. Li-Jost: Calculus of Variations. Cambridge University Press, 1998. Lebedev, L.P. 
and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1–98. Logan, J. David: Applied Mathematics, 3rd edition. Wiley-Interscience, 2006 Pike, Ralph W. "Chapter 8: Calculus of Variations". Optimization for Engineering Systems. Louisiana State University. Archived from the original on 2007-07-05. Roubicek, T.: "Calculus of variations". Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588. Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992. Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974 (reprint of 1952 ed.). == External links == Variational calculus. Encyclopedia of Mathematics. calculus of variations. PlanetMath. Calculus of Variations. MathWorld. Calculus of variations. Example problems. Mathematics - Calculus of Variations and Integral Equations. Lectures on YouTube. Selected papers on Geodesic Fields. Part I, Part II.
Wikipedia/Variational_methods
A multiresolution analysis (MRA) or multiscale approximation (MSA) is the design method of most of the practically relevant discrete wavelet transforms (DWT) and the justification for the algorithm of the fast wavelet transform (FWT). It was introduced in this context in 1988/89 by Stephane Mallat and Yves Meyer and has predecessors in the microlocal analysis in the theory of differential equations (the ironing method) and the pyramid methods of image processing as introduced in 1981/83 by Peter J. Burt, Edward H. Adelson and James L. Crowley. == Definition == A multiresolution analysis of the Lebesgue space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} consists of a sequence of nested subspaces { 0 } ⊂ ⋯ ⊂ V 1 ⊂ V 0 ⊂ V − 1 ⊂ ⋯ ⊂ V − n ⊂ V − ( n + 1 ) ⊂ ⋯ ⊂ L 2 ( R ) {\displaystyle \{0\}\subset \dots \subset V_{1}\subset V_{0}\subset V_{-1}\subset \dots \subset V_{-n}\subset V_{-(n+1)}\subset \dots \subset L^{2}(\mathbb {R} )} that satisfies certain self-similarity relations in time-space and scale-frequency, as well as completeness and regularity relations. Self-similarity in time demands that each subspace Vk is invariant under shifts by integer multiples of 2^k. That is, for each f ∈ V k , m ∈ Z {\displaystyle f\in V_{k},\;m\in \mathbb {Z} } the function g defined as g ( x ) = f ( x − m 2 k ) {\displaystyle g(x)=f(x-m2^{k})} is also contained in V k {\displaystyle V_{k}} . Self-similarity in scale demands that all subspaces V k ⊂ V l , k > l , {\displaystyle V_{k}\subset V_{l},\;k>l,} are time-scaled versions of each other, with scaling (dilation) factor 2^(k−l). I.e., for each f ∈ V k {\displaystyle f\in V_{k}} there is a g ∈ V l {\displaystyle g\in V_{l}} with ∀ x ∈ R : g ( x ) = f ( 2 k − l x ) {\displaystyle \forall x\in \mathbb {R} :\;g(x)=f(2^{k-l}x)} . In the sequence of subspaces, for k > l the space resolution 2^l of the l-th subspace is higher than the resolution 2^k of the k-th subspace.
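These nesting and self-similarity relations can be illustrated with the Haar system, the classical example in which V_k consists of functions constant on the dyadic intervals [n·2^k, (n+1)·2^k); this concrete choice is an assumption of the sketch, not part of the definition above. A step function in V_0 embeds into the finer space V_{-1} by splitting each sample, and averages/differences of adjacent fine samples recover the coarse part and the (here vanishing) detail:

```python
# Hedged sketch: the nesting V_0 ⊂ V_{-1} for the Haar system, where a function
# in V_k is represented by one value per interval of length 2^k.

def refine(samples):
    # embed a V_0 function (one value per interval [n, n+1)) into V_{-1}
    # (one value per half interval) -- each value is simply duplicated
    return [v for v in samples for _ in range(2)]

f_v0 = [3.0, 1.0, 4.0, 1.0]           # a step function in V_0
f_v1 = refine(f_v0)                    # the same function, written in V_{-1}

# pairwise averages recover the coarse function; pairwise differences are the detail
avg = [(f_v1[2*i] + f_v1[2*i+1]) / 2 for i in range(4)]
det = [(f_v1[2*i] - f_v1[2*i+1]) / 2 for i in range(4)]
assert avg == f_v0 and det == [0.0] * 4   # no detail: f was already in V_0
```

A genuine V_{-1} function that is not in V_0 would produce nonzero differences; separating those differences from the averages is exactly the role played by the wavelet spaces W_k.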
Regularity demands that the model subspace V0 be generated as the linear hull (algebraically or even topologically closed) of the integer shifts of one or a finite number of generating functions ϕ {\displaystyle \phi } or ϕ 1 , … , ϕ r {\displaystyle \phi _{1},\dots ,\phi _{r}} . Those integer shifts should at least form a frame for the subspace V 0 ⊂ L 2 ( R ) {\displaystyle V_{0}\subset L^{2}(\mathbb {R} )} , which imposes certain conditions on the decay at infinity. The generating functions are also known as scaling functions or father wavelets. In most cases one demands that those functions be piecewise continuous with compact support. Completeness demands that those nested subspaces fill the whole space, i.e., their union should be dense in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} , and that they are not too redundant, i.e., their intersection should only contain the zero element. == Important conclusions == In the case of one continuous (or at least with bounded variation) compactly supported scaling function with orthogonal shifts, one may make a number of deductions. The proof of existence of this class of functions is due to Ingrid Daubechies. Assuming the scaling function has compact support, then V 0 ⊂ V − 1 {\displaystyle V_{0}\subset V_{-1}} implies that there is a finite sequence of coefficients a k = 2 ⟨ ϕ ( x ) , ϕ ( 2 x − k ) ⟩ {\displaystyle a_{k}=2\langle \phi (x),\phi (2x-k)\rangle } for | k | ≤ N {\displaystyle |k|\leq N} , and a k = 0 {\displaystyle a_{k}=0} for | k | > N {\displaystyle |k|>N} , such that ϕ ( x ) = ∑ k = − N N a k ϕ ( 2 x − k ) .
{\displaystyle \phi (x)=\sum _{k=-N}^{N}a_{k}\phi (2x-k).} Defining another function, known as mother wavelet or just the wavelet ψ ( x ) := ∑ k = − N N ( − 1 ) k a 1 − k ϕ ( 2 x − k ) , {\displaystyle \psi (x):=\sum _{k=-N}^{N}(-1)^{k}a_{1-k}\phi (2x-k),} one can show that the space W 0 ⊂ V − 1 {\displaystyle W_{0}\subset V_{-1}} , which is defined as the (closed) linear hull of the mother wavelet's integer shifts, is the orthogonal complement to V 0 {\displaystyle V_{0}} inside V − 1 {\displaystyle V_{-1}} . Or put differently, V − 1 {\displaystyle V_{-1}} is the orthogonal sum (denoted by ⊕ {\displaystyle \oplus } ) of W 0 {\displaystyle W_{0}} and V 0 {\displaystyle V_{0}} . By self-similarity, there are scaled versions W k {\displaystyle W_{k}} of W 0 {\displaystyle W_{0}} and by completeness one has L 2 ( R ) = closure of ⨁ k ∈ Z W k , {\displaystyle L^{2}(\mathbb {R} )={\mbox{closure of }}\bigoplus _{k\in \mathbb {Z} }W_{k},} thus the set { ψ k , n ( x ) = 2 − k ψ ( 2 − k x − n ) : k , n ∈ Z } {\displaystyle \{\psi _{k,n}(x)={\sqrt {2}}^{-k}\psi (2^{-k}x-n):\;k,n\in \mathbb {Z} \}} is a countable complete orthonormal wavelet basis in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . == See also == Multigrid method Multiscale modeling Scale space Time–frequency analysis Wavelet == References == Chui, Charles K. (1992). An Introduction to Wavelets. San Diego: Academic Press. ISBN 0-585-47090-1. Akansu, A.N.; Haddad, R.A. (1992). Multiresolution signal decomposition: transforms, subbands, and wavelets. Academic Press. ISBN 978-0-12-047141-6. Crowley, J. L. (1982). A Representation for Visual Information, Doctoral Thesis, Carnegie-Mellon University, 1982. Burrus, C.S.; Gopinath, R.A.; Guo, H. (1997). Introduction to Wavelets and Wavelet Transforms: A Primer. Prentice-Hall. ISBN 0-13-489600-9. Mallat, S.G. (1999). A Wavelet Tour of Signal Processing. Academic Press. ISBN 0-12-466606-X.
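As a concrete check of the refinement equation and the wavelet construction above, here is a short numerical sketch (not from the article) using the Haar scaling function, the indicator of [0, 1); with the normalization a_k = 2⟨φ(x), φ(2x − k)⟩ one gets N = 1 and a_0 = a_1 = 1, and the formula for ψ reduces to φ(2x) − φ(2x − 1):

```python
import numpy as np

# Haar scaling function: phi = indicator of [0, 1).
def phi(x):
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def psi(x):
    # psi(x) = sum_k (-1)^k a_{1-k} phi(2x - k) = phi(2x) - phi(2x - 1)
    return phi(2 * x) - phi(2 * x - 1)

# midpoint grid on [-1, 2) avoids sampling exactly at the jump points
dx = 1e-3
x = -1 + (np.arange(3000) + 0.5) * dx

# refinement equation: phi(x) = a_0 phi(2x) + a_1 phi(2x - 1)
assert np.allclose(phi(x), phi(2 * x) + phi(2 * x - 1))

# <phi, psi> = 0: the wavelet space W_0 is orthogonal to V_0
inner = np.sum(phi(x) * psi(x)) * dx
assert abs(inner) < 1e-12
```

The same shifted-grid check works for any compactly supported piecewise-constant scaling function; smoother families (e.g. Daubechies) need their own coefficient sequences a_k.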
Wikipedia/Multiresolution_analysis
In numerical analysis, mortar methods are discretization methods for partial differential equations, which use separate finite element discretizations on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to the solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints. Similar to penalty methods, mortar methods are explicit in their nature, i.e. they require the contacting surfaces to be defined. This is in contrast to fully implicit methods, such as the third medium contact method, where contacting surfaces do not need to be defined. == References ==
Wikipedia/Mortar_method
In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems. Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations. They are also used for the solution of linear equations for linear least-squares problems and also for systems of linear inequalities, such as those arising in linear programming. They have also been developed for solving nonlinear systems of equations. Relaxation methods are important especially in the solution of linear systems used to model elliptic partial differential equations, such as Laplace's equation and its generalization, Poisson's equation. These equations describe boundary-value problems, in which the solution-function's values are specified on the boundary of a domain; the problem is to compute a solution also on its interior. Relaxation methods are used to solve the linear equations resulting from a discretization of the differential equation, for example by finite differences. Iterative relaxation of solutions is commonly dubbed smoothing because with certain equations, such as Laplace's equation, it resembles repeated application of a local smoothing filter to the solution vector. These are not to be confused with relaxation methods in mathematical optimization, which approximate a difficult problem by a simpler problem whose "relaxed" solution provides information about the solution of the original problem. == Model problem of potential theory == When φ is a smooth real-valued function on the real numbers, its second derivative can be approximated by: d 2 φ ( x ) d x 2 = φ ( x − h ) − 2 φ ( x ) + φ ( x + h ) h 2 + O ( h 2 ) .
{\displaystyle {\frac {d^{2}\varphi (x)}{{dx}^{2}}}={\frac {\varphi (x{-}h)-2\varphi (x)+\varphi (x{+}h)}{h^{2}}}\,+\,{\mathcal {O}}(h^{2})\,.} Using this in both dimensions for a function φ of two arguments at the point (x, y), and solving for φ(x, y), results in: φ ( x , y ) = 1 4 ( φ ( x + h , y ) + φ ( x , y + h ) + φ ( x − h , y ) + φ ( x , y − h ) − h 2 ∇ 2 φ ( x , y ) ) + O ( h 4 ) . {\displaystyle \varphi (x,y)={\tfrac {1}{4}}\left(\varphi (x{+}h,y)+\varphi (x,y{+}h)+\varphi (x{-}h,y)+\varphi (x,y{-}h)\,-\,h^{2}{\nabla }^{2}\varphi (x,y)\right)\,+\,{\mathcal {O}}(h^{4})\,.} To approximate the solution of the Poisson equation: ∇ 2 φ = f {\displaystyle {\nabla }^{2}\varphi =f\,} numerically on a two-dimensional grid with grid spacing h, the relaxation method assigns the given values of function φ to the grid points near the boundary and arbitrary values to the interior grid points, and then repeatedly performs the assignment φ := φ* on the interior points, where φ* is defined by: φ ∗ ( x , y ) = 1 4 ( φ ( x + h , y ) + φ ( x , y + h ) + φ ( x − h , y ) + φ ( x , y − h ) − h 2 f ( x , y ) ) , {\displaystyle \varphi ^{*}(x,y)={\tfrac {1}{4}}\left(\varphi (x{+}h,y)+\varphi (x,y{+}h)+\varphi (x{-}h,y)+\varphi (x,y{-}h)\,-\,h^{2}f(x,y)\right)\,,} until convergence. The method is easily generalized to other numbers of dimensions. == Convergence and acceleration == While the method converges under general conditions, it typically makes slower progress than competing methods. Nonetheless, the study of relaxation methods remains a core part of linear algebra, because the transformations of relaxation theory provide excellent preconditioners for new methods. Indeed, the choice of preconditioner is often more important than the choice of iterative method. Multigrid methods may be used to accelerate the methods. 
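The assignment φ := φ* can be sketched directly in a few lines (an illustrative setup, not from the article: the right-hand side f is manufactured from a known exact solution so the error can be measured):

```python
import numpy as np

# Relaxation (Jacobi) sweeps for the Poisson equation  ∇²φ = f  on the
# unit square with Dirichlet boundary values, using
#   φ*(x,y) = ¼(φ(x+h,y) + φ(x,y+h) + φ(x-h,y) + φ(x,y-h) - h² f(x,y)).
n = 33                       # grid points per side, spacing h = 1/(n-1)
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)   # manufactured solution
f = -2 * np.pi**2 * exact                        # so that ∇²exact = f

phi = np.zeros((n, n))       # zero boundary values are exact here
for _ in range(2000):        # repeatedly perform φ := φ* on interior points
    # the whole right-hand side is evaluated before assignment,
    # so this is a Jacobi (simultaneous) update, not Gauss–Seidel
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              - h**2 * f[1:-1, 1:-1])

err = np.max(np.abs(phi - exact))   # dominated by the O(h²) discretization error
```

Sweeping the interior points in place with the latest values instead would give the Gauss–Seidel variant mentioned later in the See also section.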
One can first compute an approximation on a coarser grid – usually the double spacing 2h – and use that solution with interpolated values for the other grid points as the initial assignment. This can then also be done recursively for the coarser computation. == See also == In linear systems, the two main classes of relaxation methods are stationary iterative methods, and the more general Krylov subspace methods. The Jacobi method is a simple relaxation method. The Gauss–Seidel method is an improvement upon the Jacobi method. Successive over-relaxation can be applied to either of the Jacobi and Gauss–Seidel methods to speed convergence. Multigrid methods == Notes == == References == Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. ISBN 0-89871-321-8. Ortega, J. M.; Rheinboldt, W. C. (2000). Iterative solution of nonlinear equations in several variables. Classics in Applied Mathematics. Vol. 30 (Reprint of the 1970 Academic Press ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM). pp. xxvi+572. ISBN 0-89871-461-3. MR 1744713. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 18.3. Relaxation Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Yousef Saad, Iterative Methods for Sparse Linear Systems, 1st edition, PWS, 1996. Richard S. Varga 2002 Matrix Iterative Analysis, Second ed. (of 1962 Prentice Hall edition), Springer-Verlag. David M. Young, Jr. Iterative Solution of Large Linear Systems, Academic Press, 1971. (reprinted by Dover, 2003) == Further reading == Southwell, R.V. (1940) Relaxation Methods in Engineering Science. Oxford University Press, Oxford. Southwell, R.V. (1946) Relaxation Methods in Theoretical Physics. Oxford University Press, Oxford. John. D. Jackson (1999). Classical Electrodynamics. New Jersey: Wiley. ISBN 0-471-30932-X. M.N.O. Sadiku (1992). 
Numerical Techniques in Electromagnetics. Boca Raton: CRC Press. P.-B. Zhou (1993). Numerical Analysis of Electromagnetic Fields. New York: Springer. P. Grivet, P.W. Hawkes, A. Septier (1972). Electron Optics, 2nd edition. Pergamon Press. ISBN 9781483137858. D. W. O. Heddle (2000). Electrostatic Lens Systems, 2nd edition. CRC Press. ISBN 9781420034394. Erwin Kasper (2001). Advances in Imaging and Electron Physics, Vol. 116, Numerical Field Calculation for Charged Particle Optics. Academic Press. ISBN 978-0-12-014758-8.
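The coarse-grid initialization described in the multigrid paragraph above can be sketched in one dimension (an illustrative setup: the coarse problem on spacing 2h is small enough to solve directly, and plain Jacobi sweeps play the role of the relaxation on the fine grid):

```python
import numpy as np

def jacobi(phi, f, h, sweeps):
    """Jacobi sweeps for phi'' = f with fixed (Dirichlet) endpoints."""
    phi = phi.copy()
    for _ in range(sweeps):
        phi[1:-1] = 0.5 * (phi[2:] + phi[:-2] - h**2 * f[1:-1])
    return phi

n = 65                                  # fine grid: x_i = i*h, h = 1/(n-1)
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
exact = np.sin(np.pi * x)               # solves phi'' = -pi^2 sin(pi x)
f = -np.pi**2 * exact

# Coarse grid (double spacing 2h): solve the small tridiagonal system directly.
xc, fc = x[::2], f[::2]
m = xc.size - 2
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / (2 * h) ** 2
coarse = np.zeros(xc.size)
coarse[1:-1] = np.linalg.solve(A, fc[1:-1])

# Interpolate the coarse solution to the fine grid as the initial assignment.
guess = np.interp(x, xc, coarse)

err_zero = np.max(np.abs(jacobi(np.zeros(n), f, h, 100) - exact))
err_nest = np.max(np.abs(jacobi(guess, f, h, 100) - exact))
assert err_nest < err_zero      # the coarse start converges from much closer
```

Applying the same idea recursively to the coarse solve, with relaxation at every level, is essentially the nested-iteration form of a multigrid method.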
Wikipedia/Relaxation_method
The Hitchin functional is a mathematical concept with applications in string theory that was introduced by the British mathematician Nigel Hitchin. Hitchin (2000) and Hitchin (2001) are the original articles of the Hitchin functional. As with Hitchin's introduction of generalized complex manifolds, this is an example of a mathematical tool found useful in mathematical physics. == Formal definition == This is the definition for 6-manifolds. The definition in Hitchin's article is more general, but more abstract. Let M {\displaystyle M} be a compact, oriented 6-manifold with trivial canonical bundle. Then the Hitchin functional is a functional on 3-forms defined by the formula: Φ ( Ω ) = ∫ M Ω ∧ ∗ Ω , {\displaystyle \Phi (\Omega )=\int _{M}\Omega \wedge *\Omega ,} where Ω {\displaystyle \Omega } is a 3-form and * denotes the Hodge star operator. == Properties == The Hitchin functional plays a role for six-manifolds analogous to that of the Yang–Mills functional for four-manifolds. The Hitchin functional is manifestly invariant under the action of the group of orientation-preserving diffeomorphisms. Theorem. Suppose that M {\displaystyle M} is a three-dimensional complex manifold and Ω {\displaystyle \Omega } is the real part of a non-vanishing holomorphic 3-form, then Ω {\displaystyle \Omega } is a critical point of the functional Φ {\displaystyle \Phi } restricted to the cohomology class [ Ω ] ∈ H 3 ( M , R ) {\displaystyle [\Omega ]\in H^{3}(M,R)} . Conversely, if Ω {\displaystyle \Omega } is a critical point of the functional Φ {\displaystyle \Phi } in a given cohomology class and Ω ∧ ∗ Ω < 0 {\displaystyle \Omega \wedge *\Omega <0} , then Ω {\displaystyle \Omega } defines the structure of a complex manifold, such that Ω {\displaystyle \Omega } is the real part of a non-vanishing holomorphic 3-form on M {\displaystyle M} . The proof of the theorem in Hitchin's articles Hitchin (2000) and Hitchin (2001) is relatively straightforward.
The power of this concept is in the converse statement: if the exact form Φ ( Ω ) {\displaystyle \Phi (\Omega )} is known, we only have to look at its critical points to find the possible complex structures. == Stable forms == Action functionals often determine geometric structure on M {\displaystyle M} , and geometric structures are often characterized by the existence of particular differential forms on M {\displaystyle M} that obey some integrability conditions. If a 2-form ω {\displaystyle \omega } can be written with local coordinates ω = d p 1 ∧ d q 1 + ⋯ + d p m ∧ d q m {\displaystyle \omega =dp_{1}\wedge dq_{1}+\cdots +dp_{m}\wedge dq_{m}} and d ω = 0 {\displaystyle d\omega =0} , then ω {\displaystyle \omega } defines a symplectic structure. A p-form ω ∈ Ω p ( M , R ) {\displaystyle \omega \in \Omega ^{p}(M,\mathbb {R} )} is stable if it lies in an open orbit of the local G L ( n , R ) {\displaystyle GL(n,\mathbb {R} )} action where n=dim(M), namely if any small perturbation ω ↦ ω + δ ω {\displaystyle \omega \mapsto \omega +\delta \omega } can be undone by a local G L ( n , R ) {\displaystyle GL(n,\mathbb {R} )} action. So any nowhere-vanishing 1-form is stable; for a 2-form (or a p-form with p even), stability is equivalent to non-degeneracy. What about p=3? For large n, 3-forms are difficult to handle, because the dimension of ∧ 3 ( R n ) {\displaystyle \wedge ^{3}(\mathbb {R} ^{n})} , which is of the order of n 3 {\displaystyle n^{3}} , grows faster than the dimension of G L ( n , R ) {\displaystyle GL(n,\mathbb {R} )} , which is n 2 {\displaystyle n^{2}} . But there is a very lucky exceptional case, namely n = 6 {\displaystyle n=6} , when dim ∧ 3 ( R 6 ) = 20 {\displaystyle \wedge ^{3}(\mathbb {R} ^{6})=20} and dim G L ( 6 , R ) = 36 {\displaystyle GL(6,\mathbb {R} )=36} . Let ρ {\displaystyle \rho } be a stable real 3-form in dimension 6.
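The dimension count behind this "lucky case" can be checked directly (a trivial sketch; the threshold at which C(n,3) first exceeds n² is included only for illustration):

```python
from math import comb

# dim Λ³(Rⁿ) = C(n,3) grows like n³, while dim GL(n,R) = n², so an open
# GL(n)-orbit of 3-forms is impossible once C(n,3) exceeds n².
assert comb(6, 3) == 20 and 6 * 6 == 36      # the exceptional case n = 6
assert 36 - 20 == 16                         # dimension of the stabilizer of ρ

# smallest n with C(n,3) > n²: beyond it stable 3-forms cannot exist
first_bad = next(n for n in range(1, 50) if comb(n, 3) > n * n)
assert first_bad == 9                        # C(9,3) = 84 > 81 = 9²
```

For n = 7, C(7,3) = 35 < 49 still leaves room, which is related to the exceptional geometry of G₂ mentioned at the end of the article.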
Then the stabilizer of ρ {\displaystyle \rho } under G L ( 6 , R ) {\displaystyle GL(6,\mathbb {R} )} has real dimension 36-20=16, in fact either S L ( 3 , R ) × S L ( 3 , R ) {\displaystyle SL(3,\mathbb {R} )\times SL(3,\mathbb {R} )} or S L ( 3 , C ) {\displaystyle SL(3,\mathbb {C} )} . Focus on the case of S L ( 3 , C ) {\displaystyle SL(3,\mathbb {C} )} : if ρ {\displaystyle \rho } has stabilizer S L ( 3 , C ) {\displaystyle SL(3,\mathbb {C} )} , then it can be written with local coordinates as follows: ρ = 1 2 ( ζ 1 ∧ ζ 2 ∧ ζ 3 + ζ 1 ¯ ∧ ζ 2 ¯ ∧ ζ 3 ¯ ) {\displaystyle \rho ={\frac {1}{2}}(\zeta _{1}\wedge \zeta _{2}\wedge \zeta _{3}+{\bar {\zeta _{1}}}\wedge {\bar {\zeta _{2}}}\wedge {\bar {\zeta _{3}}})} where ζ 1 = e 1 + i e 2 , ζ 2 = e 3 + i e 4 , ζ 3 = e 5 + i e 6 {\displaystyle \zeta _{1}=e_{1}+ie_{2},\zeta _{2}=e_{3}+ie_{4},\zeta _{3}=e_{5}+ie_{6}} and e i {\displaystyle e_{i}} are bases of T ∗ M {\displaystyle T^{*}M} . Then ζ i {\displaystyle \zeta _{i}} determines an almost complex structure on M {\displaystyle M} . Moreover, if there exist local coordinates ( z 1 , z 2 , z 3 ) {\displaystyle (z_{1},z_{2},z_{3})} such that ζ i = d z i {\displaystyle \zeta _{i}=dz_{i}} , then it determines a complex structure on M {\displaystyle M} . Given the stable ρ ∈ Ω 3 ( M , R ) {\displaystyle \rho \in \Omega ^{3}(M,\mathbb {R} )} : ρ = 1 2 ( ζ 1 ∧ ζ 2 ∧ ζ 3 + ζ 1 ¯ ∧ ζ 2 ¯ ∧ ζ 3 ¯ ) {\displaystyle \rho ={\frac {1}{2}}(\zeta _{1}\wedge \zeta _{2}\wedge \zeta _{3}+{\bar {\zeta _{1}}}\wedge {\bar {\zeta _{2}}}\wedge {\bar {\zeta _{3}}})} , we can define another real 3-form ρ ~ ( ρ ) = 1 2 ( ζ 1 ∧ ζ 2 ∧ ζ 3 − ζ 1 ¯ ∧ ζ 2 ¯ ∧ ζ 3 ¯ ) {\displaystyle {\tilde {\rho }}(\rho )={\frac {1}{2}}(\zeta _{1}\wedge \zeta _{2}\wedge \zeta _{3}-{\bar {\zeta _{1}}}\wedge {\bar {\zeta _{2}}}\wedge {\bar {\zeta _{3}}})} .
Then Ω = ρ + i ρ ~ ( ρ ) {\displaystyle \Omega =\rho +i{\tilde {\rho }}(\rho )} is a holomorphic 3-form in the almost complex structure determined by ρ {\displaystyle \rho } . Furthermore, that almost complex structure is an actual complex structure precisely when d Ω = 0 {\displaystyle d\Omega =0} , i.e. d ρ = 0 {\displaystyle d\rho =0} and d ρ ~ ( ρ ) = 0 {\displaystyle d{\tilde {\rho }}(\rho )=0} . This Ω {\displaystyle \Omega } is exactly the 3-form Ω {\displaystyle \Omega } in the formal definition of the Hitchin functional. These ideas lead to the notion of a generalized complex structure. == Use in string theory == Hitchin functionals arise in many areas of string theory. An example is the compactifications of the 10-dimensional string with a subsequent orientifold projection κ {\displaystyle \kappa } using an involution ν {\displaystyle \nu } . In this case, M {\displaystyle M} is the internal 6 (real) dimensional Calabi-Yau space. The couplings to the complexified Kähler coordinates τ {\displaystyle \tau } are given by g i j = τ im ∫ τ i ∗ ( ν ⋅ κ τ ) . {\displaystyle g_{ij}=\tau {\text{im}}\int \tau i^{*}(\nu \cdot \kappa \tau ).} The potential function is the functional V [ J ] = ∫ J ∧ J ∧ J {\displaystyle V[J]=\int J\wedge J\wedge J} , where J is the almost complex structure. Both are Hitchin functionals.Grimm & Louis (2005) As an application to string theory, the famous OSV conjecture Ooguri, Strominger & Vafa (2004) used the Hitchin functional in order to relate the topological string to 4-dimensional black hole entropy. Using similar techniques in the case of G 2 {\displaystyle G_{2}} holonomy, Dijkgraaf et al. (2005) argued for topological M-theory, and in the case of S p i n ( 7 ) {\displaystyle Spin(7)} holonomy a topological F-theory might be argued for. More recently, E. Witten discussed the mysterious superconformal field theory in six dimensions, the 6D (2,0) superconformal field theory Witten (2007); the Hitchin functional gives one of its bases. == Notes == == References == Hitchin, Nigel (2000).
"The geometry of three-forms in six and seven dimensions". arXiv:math/0010054. Hitchin, Nigel (2001). "Stable forms and special metrics". arXiv:math/0107101. Grimm, Thomas; Louis, Jan (2005). "The effective action of Type IIA Calabi-Yau orientifolds". Nuclear Physics B. 718 (1–2): 153–202. arXiv:hep-th/0412277. Bibcode:2005NuPhB.718..153G. CiteSeerX 10.1.1.268.839. doi:10.1016/j.nuclphysb.2005.04.007. S2CID 119502508. Dijkgraaf, Robbert; Gukov, Sergei; Neitzke, Andrew; Vafa, Cumrun (2005). "Topological M-theory as Unification of Form Theories of Gravity". Adv. Theor. Math. Phys. 9 (4): 603–665. arXiv:hep-th/0411073. Bibcode:2004hep.th...11073D. doi:10.4310/ATMP.2005.v9.n4.a5. S2CID 1204839. Ooguri, Hiroshi; Strominger, Andrew; Vafa, Cumrun (2004). "Black Hole Attractors and the Topological String". Physical Review D. 70 (10): 6007. arXiv:hep-th/0405146. Bibcode:2004PhRvD..70j6007O. doi:10.1103/PhysRevD.70.106007. S2CID 6289773. Witten, Edward (2007). "Conformal Field Theory In Four And Six Dimensions". arXiv:0712.0157 [math.RT].
Wikipedia/Hitchin_functional
In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics. == History == The index problem for elliptic differential operators was posed by Israel Gel'fand. He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961). The Atiyah–Singer theorem was announced in 1963. The proof sketched in this announcement was never published by them, though it appears in Palais's book. It appears also in the "Séminaire Cartan-Schwartz 1963/64" that was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary. Their first published proof replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers. 1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds. Robion Kirby and Laurent C. Siebenmann's results, combined with René Thom's paper, proved the existence of rational Pontryagin classes on topological manifolds.
The rational Pontryagin classes are essential ingredients of the index theorem on smooth and topological manifolds. 1969: Michael Atiyah defines abstract elliptic operators on arbitrary metric spaces. Abstract elliptic operators became protagonists in Kasparov's theory and Connes's noncommutative differential geometry. 1971: Isadore Singer proposes a comprehensive program for future extensions of index theory. 1972: Gennadi G. Kasparov publishes his work on the realization of K-homology by abstract elliptic operators. 1973: Atiyah, Raoul Bott, and Vijay Patodi gave a new proof of the index theorem using the heat equation, described in a paper by Melrose. 1977: Dennis Sullivan establishes his theorem on the existence and uniqueness of Lipschitz and quasiconformal structures on topological manifolds of dimension different from 4. 1983: Ezra Getzler motivated by ideas of Edward Witten and Luis Alvarez-Gaume, gave a short proof of the local index theorem for operators that are locally Dirac operators; this covers many of the useful cases. 1983: Nicolae Teleman proves that the analytical indices of signature operators with values in vector bundles are topological invariants. 1984: Teleman establishes the index theorem on topological manifolds. 1986: Alain Connes publishes his fundamental paper on noncommutative geometry. 1989: Simon K. Donaldson and Sullivan study Yang–Mills theory on quasiconformal manifolds of dimension 4. They introduce the signature operator S defined on differential forms of degree two. 1990: Connes and Henri Moscovici prove the local index formula in the context of non-commutative geometry. 1994: Connes, Sullivan, and Teleman prove the index theorem for signature operators on quasiconformal manifolds. == Notation == X is a compact smooth manifold (without boundary). E and F are smooth vector bundles over X. D is an elliptic differential operator from E to F. 
So in local coordinates it acts as a differential operator, taking smooth sections of E to smooth sections of F. == Symbol of a differential operator == If D is a differential operator on a Euclidean space of order n in k variables x 1 , … , x k {\displaystyle x_{1},\dots ,x_{k}} , then its symbol is the function of 2k variables x 1 , … , x k , y 1 , … , y k {\displaystyle x_{1},\dots ,x_{k},y_{1},\dots ,y_{k}} , given by dropping all terms of order less than n and replacing ∂ / ∂ x i {\displaystyle \partial /\partial x_{i}} by y i {\displaystyle y_{i}} . So the symbol is homogeneous in the variables y, of degree n. The symbol is well defined even though ∂ / ∂ x i {\displaystyle \partial /\partial x_{i}} does not commute with x i {\displaystyle x_{i}} because we keep only the highest order terms and differential operators commute "up to lower-order terms". The operator is called elliptic if the symbol is nonzero whenever at least one y is nonzero. Example: The Laplace operator in k variables has symbol y 1 2 + ⋯ + y k 2 {\displaystyle y_{1}^{2}+\cdots +y_{k}^{2}} , and so is elliptic as this is nonzero whenever any of the y i {\displaystyle y_{i}} 's are nonzero. The wave operator has symbol − y 1 2 + ⋯ + y k 2 {\displaystyle -y_{1}^{2}+\cdots +y_{k}^{2}} , which is not elliptic if k ≥ 2 {\displaystyle k\geq 2} , as the symbol vanishes for some non-zero values of the ys. The symbol of a differential operator of order n on a smooth manifold X is defined in much the same way using local coordinate charts, and is a function on the cotangent bundle of X, homogeneous of degree n on each cotangent space. (In general, differential operators transform in a rather complicated way under coordinate transforms (see jet bundle); however, the highest order terms transform like tensors so we get well defined homogeneous functions on the cotangent spaces that are independent of the choice of local charts.) 
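The two worked examples of symbols above can be sketched in a few lines (function names are illustrative; note that sampling covectors can only refute ellipticity — a single zero disproves it, but a genuine proof must cover all nonzero y):

```python
import itertools

# Principal symbols in k variables: drop lower-order terms and replace
# ∂/∂x_i by y_i.  Ellipticity: symbol ≠ 0 for every nonzero covector y.
def laplace_symbol(y):            # Laplacian  ->  y₁² + ... + y_k²
    return sum(c * c for c in y)

def wave_symbol(y):               # wave operator  ->  -y₁² + y₂² + ...
    return -y[0] * y[0] + sum(c * c for c in y[1:])

def elliptic_on_samples(symbol, k):
    # check the symbol on all nonzero integer covectors with entries in -3..3
    pts = [y for y in itertools.product(range(-3, 4), repeat=k) if any(y)]
    return all(symbol(y) != 0 for y in pts)

assert elliptic_on_samples(laplace_symbol, 2)       # Laplacian: elliptic
assert not elliptic_on_samples(wave_symbol, 2)      # vanishes at y = (1, 1)
```

For the Laplacian the sample check reflects the genuine positivity y₁² + ... + y_k² > 0 for y ≠ 0, while the wave symbol already vanishes on the integer light-cone covector (1, 1), matching the text.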
More generally, the symbol of a differential operator between two vector bundles E and F is a section of the pullback of the bundle Hom(E, F) to the cotangent space of X. The differential operator is called elliptic if the element of Hom(Ex, Fx) is invertible for all non-zero cotangent vectors at any point x of X. A key property of elliptic operators is that they are almost invertible; this is closely related to the fact that their symbols are almost invertible. More precisely, an elliptic operator D on a compact manifold has a (non-unique) parametrix (or pseudoinverse) D′ such that DD′ − 1 and D′D − 1 are both compact operators. An important consequence is that the kernel of D is finite-dimensional, because all eigenspaces of compact operators, other than the kernel, are finite-dimensional. (The pseudoinverse of an elliptic differential operator is almost never a differential operator. However, it is an elliptic pseudodifferential operator.) == Analytical index == As the elliptic differential operator D has a pseudoinverse, it is a Fredholm operator. Any Fredholm operator has an index, defined as the difference between the (finite) dimension of the kernel of D (solutions of Df = 0), and the (finite) dimension of the cokernel of D (the constraints on the right-hand-side of an inhomogeneous equation like Df = g, or equivalently the kernel of the adjoint operator). In other words, Index(D) = dim Ker(D) − dim Coker(D) = dim Ker(D) − dim Ker(D*). This is sometimes called the analytical index of D. Example: Suppose that the manifold is the circle (thought of as R/Z), and D is the operator d/dx − λ for some complex constant λ. (This is the simplest example of an elliptic operator.) Then the kernel is the space of multiples of exp(λx) if λ is an integral multiple of 2πi and is 0 otherwise, and the kernel of the adjoint is a similar space with λ replaced by its complex conjugate. So D has index 0.
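The circle example can be checked in the Fourier basis, where d/dx acts diagonally on the modes e^{2πinx} (a sketch; the truncation to finitely many modes is an assumption of the illustration):

```python
import numpy as np

# D = d/dx − λ on the circle R/Z, in the Fourier basis {e^{2πinx}}:
# D multiplies the n-th mode by 2πin − λ, and the adjoint
# D* = −d/dx − conj(λ) multiplies it by −2πin − conj(λ).
def index(lam, modes=range(-50, 51), tol=1e-9):
    eig_D = np.array([2j * np.pi * n - lam for n in modes])
    eig_Dstar = np.array([-2j * np.pi * n - np.conj(lam) for n in modes])
    dim_ker = int(np.sum(np.abs(eig_D) < tol))        # solutions of Df = 0
    dim_coker = int(np.sum(np.abs(eig_Dstar) < tol))  # kernel of D*
    return dim_ker - dim_coker

# λ = 2πi·3: kernel and cokernel are both 1-dimensional (spanned by e^{2πi3x})
assert index(2j * np.pi * 3) == 0
# generic λ: both are trivial; the index is 0 in every case
assert index(1.0 + 0j) == 0
```

This also illustrates the jump phenomenon described next: as λ crosses 2πiZ the kernel and cokernel dimensions both jump, but their difference stays constant.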
This example shows that the kernel and cokernel of elliptic operators can jump discontinuously as the elliptic operator varies, so there is no nice formula for their dimensions in terms of continuous topological data. However the jumps in the dimensions of the kernel and cokernel are the same, so the index, given by the difference of their dimensions, does indeed vary continuously, and can be given in terms of topological data by the index theorem. == Topological index == The topological index of an elliptic differential operator D {\displaystyle D} between smooth vector bundles E {\displaystyle E} and F {\displaystyle F} on an n {\displaystyle n} -dimensional compact manifold X {\displaystyle X} is given by ( − 1 ) n ch ⁡ ( D ) Td ⁡ ( X ) [ X ] = ( − 1 ) n ∫ X ch ⁡ ( D ) Td ⁡ ( X ) {\displaystyle (-1)^{n}\operatorname {ch} (D)\operatorname {Td} (X)[X]=(-1)^{n}\int _{X}\operatorname {ch} (D)\operatorname {Td} (X)} in other words the value of the top dimensional component of the mixed cohomology class ch ⁡ ( D ) Td ⁡ ( X ) {\displaystyle \operatorname {ch} (D)\operatorname {Td} (X)} on the fundamental homology class of the manifold X {\displaystyle X} up to a difference of sign. Here, Td ⁡ ( X ) {\displaystyle \operatorname {Td} (X)} is the Todd class of the complexified tangent bundle of X {\displaystyle X} . 
ch ⁡ ( D ) {\displaystyle \operatorname {ch} (D)} is equal to φ − 1 ( ch ⁡ ( d ( p ∗ E , p ∗ F , σ ( D ) ) ) ) {\displaystyle \varphi ^{-1}(\operatorname {ch} (d(p^{*}E,p^{*}F,\sigma (D))))} , where φ : H k ( X ; Q ) → H n + k ( B ( X ) / S ( X ) ; Q ) {\displaystyle \varphi :H^{k}(X;\mathbb {Q} )\to H^{n+k}(B(X)/S(X);\mathbb {Q} )} is the Thom isomorphism for the sphere bundle p : B ( X ) / S ( X ) → X {\displaystyle p:B(X)/S(X)\to X} ch : K ( X ) ⊗ Q → H ∗ ( X ; Q ) {\displaystyle \operatorname {ch} :K(X)\otimes \mathbb {Q} \to H^{*}(X;\mathbb {Q} )} is the Chern character d ( p ∗ E , p ∗ F , σ ( D ) ) {\displaystyle d(p^{*}E,p^{*}F,\sigma (D))} is the "difference element" in K ( B ( X ) / S ( X ) ) {\displaystyle K(B(X)/S(X))} associated to two vector bundles p ∗ E {\displaystyle p^{*}E} and p ∗ F {\displaystyle p^{*}F} on B ( X ) {\displaystyle B(X)} and an isomorphism σ ( D ) {\displaystyle \sigma (D)} between them on the subspace S ( X ) {\displaystyle S(X)} . σ ( D ) {\displaystyle \sigma (D)} is the symbol of D {\displaystyle D} In some situations, it is possible to simplify the above formula for computational purposes. In particular, if X {\displaystyle X} is a 2 m {\displaystyle 2m} -dimensional orientable (compact) manifold with non-zero Euler class e ( T X ) {\displaystyle e(TX)} , then applying the Thom isomorphism and dividing by the Euler class, the topological index may be expressed as ( − 1 ) m ∫ X ch ⁡ ( E ) − ch ⁡ ( F ) e ( T X ) Td ⁡ ( X ) {\displaystyle (-1)^{m}\int _{X}{\frac {\operatorname {ch} (E)-\operatorname {ch} (F)}{e(TX)}}\operatorname {Td} (X)} where division makes sense by pulling e ( T X ) − 1 {\displaystyle e(TX)^{-1}} back from the cohomology ring of the classifying space B S O {\displaystyle BSO} . One can also define the topological index using only K-theory (and this alternative definition is compatible in a certain sense with the Chern-character construction above). 
If X is a compact submanifold of a manifold Y then there is a pushforward (or "shriek") map from K(TX) to K(TY). The topological index of an element of K(TX) is defined to be the image of this operation with Y some Euclidean space, for which K(TY) can be naturally identified with the integers Z (as a consequence of Bott-periodicity). This map is independent of the embedding of X in Euclidean space. Now a differential operator as above naturally defines an element of K(TX), and the image in Z under this map "is" the topological index. As usual, D is an elliptic differential operator between vector bundles E and F over a compact manifold X. The index problem is the following: compute the (analytical) index of D using only the symbol s and topological data derived from the manifold and the vector bundle. The Atiyah–Singer index theorem solves this problem, and states: The analytical index of D is equal to its topological index. In spite of its formidable definition, the topological index is usually straightforward to evaluate explicitly. So this makes it possible to evaluate the analytical index. (The cokernel and kernel of an elliptic operator are in general extremely hard to evaluate individually; the index theorem shows that we can usually at least evaluate their difference.) Many important invariants of a manifold (such as the signature) can be given as the index of suitable differential operators, so the index theorem allows us to evaluate these invariants in terms of topological data. Although the analytical index is usually hard to evaluate directly, it is at least obviously an integer. The topological index is by definition a rational number, but it is usually not at all obvious from the definition that it is also integral. So the Atiyah–Singer index theorem implies some deep integrality properties, as it implies that the topological index is integral. The index of an elliptic differential operator obviously vanishes if the operator is self adjoint. 
It also vanishes if the manifold X has odd dimension, though there are pseudodifferential elliptic operators whose index does not vanish in odd dimensions. === Relation to Grothendieck–Riemann–Roch === The Grothendieck–Riemann–Roch theorem was one of the main motivations behind the index theorem because the index theorem is the counterpart of this theorem in the setting of real manifolds. Now, if there's a map f : X → Y {\displaystyle f:X\to Y} of compact stably almost complex manifolds, then there is a commutative diagram K ( X ) → Td ( X ) ⋅ ch H ( X ; Q ) f ∗ ↓ ↓ f ∗ K ( Y ) → Td ( Y ) ⋅ ch H ( Y ; Q ) {\displaystyle {\begin{array}{ccc}&&&\\&K(X)&{\xrightarrow[{}]{{\text{Td}}(X)\cdot {\text{ch}}}}&H(X;\mathbb {Q} )&\\&f_{*}{\Bigg \downarrow }&&{\Bigg \downarrow }f_{*}\\&K(Y)&{\xrightarrow[{{\text{Td}}(Y)\cdot {\text{ch}}}]{}}&H(Y;\mathbb {Q} )&\\&&&\\\end{array}}} if Y = ∗ {\displaystyle Y=*} is a point, then we recover the statement above. Here K ( X ) {\displaystyle K(X)} is the Grothendieck group of complex vector bundles. This commutative diagram is formally very similar to the GRR theorem because the cohomology groups on the right are replaced by the Chow ring of a smooth variety, and the Grothendieck group on the left is given by the Grothendieck group of algebraic vector bundles. == Extensions of the Atiyah–Singer index theorem == === Teleman index theorem === Due to (Teleman 1983), (Teleman 1984): For any abstract elliptic operator (Atiyah 1970) on a closed, oriented, topological manifold, the analytical index equals the topological index. The proof of this result goes through specific considerations, including the extension of Hodge theory on combinatorial and Lipschitz manifolds (Teleman 1980), (Teleman 1983), the extension of Atiyah–Singer's signature operator to Lipschitz manifolds (Teleman 1983), Kasparov's K-homology (Kasparov 1972) and topological cobordism (Kirby & Siebenmann 1977). 
This result shows that the index theorem is not merely a differentiability statement, but rather a topological statement. === Connes–Donaldson–Sullivan–Teleman index theorem === Due to (Donaldson & Sullivan 1989), (Connes, Sullivan & Teleman 1994): For any quasiconformal manifold there exists a local construction of the Hirzebruch–Thom characteristic classes. This theory is based on a signature operator S, defined on middle-degree differential forms on even-dimensional quasiconformal manifolds (compare (Donaldson & Sullivan 1989)). Using topological cobordism and K-homology one may provide a full statement of an index theorem on quasiconformal manifolds (see page 678 of (Connes, Sullivan & Teleman 1994)). The work (Connes, Sullivan & Teleman 1994) "provides local constructions for characteristic classes based on higher dimensional relatives of the measurable Riemann mapping in dimension two and the Yang–Mills theory in dimension four." These results constitute significant advances along the lines of Singer's program Prospects in Mathematics (Singer 1971). At the same time, they also provide an effective construction of the rational Pontrjagin classes on topological manifolds. The paper (Teleman 1985) provides a link between Thom's original construction of the rational Pontrjagin classes (Thom 1956) and index theory. It is important to mention that the index formula is a topological statement. The obstruction theories due to Milnor, Kervaire, Kirby, Siebenmann, Sullivan, Donaldson show that only a minority of topological manifolds possess differentiable structures and these are not necessarily unique. Sullivan's result on Lipschitz and quasiconformal structures (Sullivan 1979) shows that any topological manifold of dimension different from 4 possesses such a structure, which is unique (up to isotopy close to identity). The quasiconformal structures (Connes, Sullivan & Teleman 1994) and more generally the Lp-structures, p > n(n+1)/2, introduced by M.
Hilsum (Hilsum 1999), are the weakest analytical structures on topological manifolds of dimension n for which the index theorem is known to hold. === Other extensions === The Atiyah–Singer theorem applies to elliptic pseudodifferential operators in much the same way as for elliptic differential operators. In fact, for technical reasons most of the early proofs worked with pseudodifferential rather than differential operators: their extra flexibility made some steps of the proofs easier. Instead of working with an elliptic operator between two vector bundles, it is sometimes more convenient to work with an elliptic complex 0 → E 0 → E 1 → E 2 → ⋯ → E m → 0 {\displaystyle 0\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \dotsm \rightarrow E_{m}\rightarrow 0} of vector bundles. The difference is that the symbols now form an exact sequence (off the zero section). In the case when there are just two non-zero bundles in the complex this implies that the symbol is an isomorphism off the zero section, so an elliptic complex with 2 terms is essentially the same as an elliptic operator between two vector bundles. Conversely the index theorem for an elliptic complex can easily be reduced to the case of an elliptic operator: the two vector bundles are given by the sums of the even or odd terms of the complex, and the elliptic operator is the sum of the operators of the elliptic complex and their adjoints, restricted to the sum of the even bundles. If the manifold is allowed to have boundary, then some restrictions must be put on the domain of the elliptic operator in order to ensure a finite index. These conditions can be local (like demanding that the sections in the domain vanish at the boundary) or more complicated global conditions (like requiring that the sections in the domain solve some differential equation). 
The local case was worked out by Atiyah and Bott, but they showed that many interesting operators (e.g., the signature operator) do not admit local boundary conditions. To handle these operators, Atiyah, Patodi and Singer introduced global boundary conditions equivalent to attaching a cylinder to the manifold along the boundary and then restricting the domain to those sections that are square integrable along the cylinder. This point of view is adopted in Melrose's (1993) proof of the Atiyah–Patodi–Singer index theorem. Instead of just one elliptic operator, one can consider a family of elliptic operators parameterized by some space Y. In this case the index is an element of the K-theory of Y, rather than an integer. If the operators in the family are real, then the index lies in the real K-theory of Y. This gives a little extra information, as the map from the real K-theory of Y to the complex K-theory is not always injective. If a group G acts on the compact manifold X, commuting with the elliptic operator, then one replaces ordinary K-theory with equivariant K-theory. Moreover, one gets generalizations of the Lefschetz fixed-point theorem, with terms coming from fixed-point submanifolds of the group G. See also: equivariant index theorem. Atiyah (1976) showed how to extend the index theorem to some non-compact manifolds, acted on by a discrete group with compact quotient. The kernel of the elliptic operator is in general infinite-dimensional in this case, but it is possible to get a finite index using the dimension of a module over a von Neumann algebra; this index is in general real rather than integer-valued. This version is called the L2 index theorem, and was used by Atiyah & Schmid (1977) to rederive properties of the discrete series representations of semisimple Lie groups. The Callias index theorem is an index theorem for a Dirac operator on a noncompact odd-dimensional space.
The Atiyah–Singer index is only defined on compact spaces, and vanishes when their dimension is odd. In 1978 Constantine Callias, at the suggestion of his Ph.D. advisor Roman Jackiw, used the axial anomaly to derive this index theorem on spaces equipped with a Hermitian matrix called the Higgs field. The index of the Dirac operator is a topological invariant which measures the winding of the Higgs field on a sphere at infinity. If U is the unit matrix in the direction of the Higgs field, then the index is proportional to the integral of U(dU)n−1 over the (n−1)-sphere at infinity. If n is even, it is always zero. The topological interpretation of this invariant and its relation to the Hörmander index proposed by Boris Fedosov, as generalized by Lars Hörmander, was published by Raoul Bott and Robert Thomas Seeley. == Examples == === Chern-Gauss-Bonnet theorem === Suppose that M {\displaystyle M} is a compact oriented manifold of dimension n = 2 r {\displaystyle n=2r} . If we take Λ even {\displaystyle \Lambda ^{\text{even}}} to be the sum of the even exterior powers of the cotangent bundle, and Λ odd {\displaystyle \Lambda ^{\text{odd}}} to be the sum of the odd powers, define D = d + d ∗ {\displaystyle D=d+d^{*}} , considered as a map from Λ even {\displaystyle \Lambda ^{\text{even}}} to Λ odd {\displaystyle \Lambda ^{\text{odd}}} . Then the analytical index of D {\displaystyle D} is the Euler characteristic χ ( M ) {\displaystyle \chi (M)} of the Hodge cohomology of M {\displaystyle M} , and the topological index is the integral of the Euler class over the manifold. The index formula for this operator yields the Chern–Gauss–Bonnet theorem. 
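For a surface (r = 1), the statement above reduces to the classical Gauss–Bonnet theorem: (1/2π)∫_M K dA = χ(M). This can be checked numerically; the sketch below uses an ellipsoid (diffeomorphic to S², so χ = 2) with its standard closed-form Gaussian curvature — the surface, grid sizes, and semiaxes are illustrative choices, not from the text:

```python
import numpy as np

# Numerical Gauss-Bonnet check on an ellipsoid (diffeomorphic to S^2, chi = 2).
# Gaussian curvature of x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 (standard formula):
#   K = 1 / (a^2 b^2 c^2 * (x^2/a^4 + y^2/b^4 + z^2/c^4)^2)
a, b, c = 1.0, 1.3, 0.7
n_th, n_ph = 400, 400
th = (np.arange(n_th) + 0.5) * np.pi / n_th          # midpoint rule avoids poles
ph = (np.arange(n_ph) + 0.5) * 2 * np.pi / n_ph
T, P = np.meshgrid(th, ph, indexing="ij")

x, y, z = a*np.sin(T)*np.cos(P), b*np.sin(T)*np.sin(P), c*np.cos(T)
K = 1.0 / (a*b*c)**2 / (x**2/a**4 + y**2/b**4 + z**2/c**4)**2

# Surface element |r_theta x r_phi| dtheta dphi from the analytic partials.
rt = np.stack([a*np.cos(T)*np.cos(P), b*np.cos(T)*np.sin(P), -c*np.sin(T)])
rp = np.stack([-a*np.sin(T)*np.sin(P), b*np.sin(T)*np.cos(P), np.zeros_like(T)])
dA = np.linalg.norm(np.cross(rt, rp, axis=0), axis=0) * (np.pi/n_th) * (2*np.pi/n_ph)

chi = (K * dA).sum() / (2 * np.pi)
print(chi)   # ~ 2.0, the Euler characteristic of the sphere
```

Note that the total curvature is independent of the semiaxes, as the theorem predicts.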
The concrete computation goes as follows: according to one variation of the splitting principle, if E {\displaystyle E} is a real vector bundle of dimension n = 2 r {\displaystyle n=2r} , in order to prove assertions involving characteristic classes, we may suppose that there are complex line bundles l 1 , … , l r {\displaystyle l_{1},\,\ldots ,\,l_{r}} such that E ⊗ C = l 1 ⊕ l 1 ¯ ⊕ ⋯ l r ⊕ l r ¯ {\displaystyle E\otimes \mathbb {C} =l_{1}\oplus {\overline {l_{1}}}\oplus \dotsm l_{r}\oplus {\overline {l_{r}}}} . Therefore, we can consider the Chern roots x i ( E ⊗ C ) = c 1 ( l i ) {\displaystyle x_{i}(E\otimes \mathbb {C} )=c_{1}(l_{i})} , x r + i ( E ⊗ C ) = c 1 ( l i ¯ ) = − x i ( E ⊗ C ) {\displaystyle x_{r+i}(E\otimes \mathbb {C} )=c_{1}{\mathord {\left({\overline {l_{i}}}\right)}}=-x_{i}(E\otimes \mathbb {C} )} , i = 1 , … , r {\displaystyle i=1,\,\ldots ,\,r} . Using Chern roots as above and the standard properties of the Euler class, we have that e ( T M ) = ∏ i r x i ( T M ⊗ C ) {\textstyle e(TM)=\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )} . 
As for the Chern character and the Todd class, ch ⁡ ( Λ even − Λ odd ) = 1 − ch ⁡ ( T ∗ M ⊗ C ) + ch ⁡ ( Λ 2 T ∗ M ⊗ C ) − … + ( − 1 ) n ch ⁡ ( Λ n T ∗ M ⊗ C ) = 1 − ∑ i n e − x i ( T M ⊗ C ) + ∑ i < j e − x i e − x j ( T M ⊗ C ) + … + ( − 1 ) n e − x 1 ⋯ e − x n ( T M ⊗ C ) = ∏ i n ( 1 − e − x i ) ( T M ⊗ C ) Td ⁡ ( T M ⊗ C ) = ∏ i n x i 1 − e − x i ( T M ⊗ C ) {\displaystyle {\begin{aligned}\operatorname {ch} {\mathord {\left(\Lambda ^{\text{even}}-\Lambda ^{\text{odd}}\right)}}&=1-\operatorname {ch} (T^{*}M\otimes \mathbb {C} )+\operatorname {ch} {\mathord {\left(\Lambda ^{2}T^{*}M\otimes \mathbb {C} \right)}}-\ldots +(-1)^{n}\operatorname {ch} {\mathord {\left(\Lambda ^{n}T^{*}M\otimes \mathbb {C} \right)}}\\&=1-\sum _{i}^{n}e^{-x_{i}}(TM\otimes \mathbb {C} )+\sum _{i<j}e^{-x_{i}}e^{-x_{j}}(TM\otimes \mathbb {C} )+\ldots +(-1)^{n}e^{-x_{1}}\dotsm e^{-x_{n}}(TM\otimes \mathbb {C} )\\&=\prod _{i}^{n}\left(1-e^{-x_{i}}\right)(TM\otimes \mathbb {C} )\\[3pt]\operatorname {Td} (TM\otimes \mathbb {C} )&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )\end{aligned}}} Applying the index theorem, χ ( M ) = ( − 1 ) r ∫ M ∏ i n ( 1 − e − x i ) ∏ i r x i ∏ i n x i 1 − e − x i ( T M ⊗ C ) = ( − 1 ) r ∫ M ( − 1 ) r ∏ i r x i ( T M ⊗ C ) = ∫ M e ( T M ) {\displaystyle \chi (M)=(-1)^{r}\int _{M}{\frac {\prod _{i}^{n}\left(1-e^{-x_{i}}\right)}{\prod _{i}^{r}x_{i}}}\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )=(-1)^{r}\int _{M}(-1)^{r}\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )=\int _{M}e(TM)} which is the "topological" version of the Chern-Gauss-Bonnet theorem (the geometric one being obtained by applying the Chern-Weil homomorphism). === Hirzebruch–Riemann–Roch theorem === Take X to be a complex manifold of (complex) dimension n with a holomorphic vector bundle V. 
We let the vector bundles E and F be the sums of the bundles of differential forms with coefficients in V of type (0, i) with i even or odd, and we let the differential operator D be the sum ∂ ¯ + ∂ ¯ ∗ {\displaystyle {\overline {\partial }}+{\overline {\partial }}^{*}} restricted to E. This derivation of the Hirzebruch–Riemann–Roch theorem is more natural if we use the index theorem for elliptic complexes rather than elliptic operators. We can take the complex to be 0 → V → V ⊗ Λ 0 , 1 T ∗ ( X ) → V ⊗ Λ 0 , 2 T ∗ ( X ) → ⋯ {\displaystyle 0\rightarrow V\rightarrow V\otimes \Lambda ^{0,1}T^{*}(X)\rightarrow V\otimes \Lambda ^{0,2}T^{*}(X)\rightarrow \dotsm } with the differential given by ∂ ¯ {\displaystyle {\overline {\partial }}} . Then the i'th cohomology group is just the coherent cohomology group Hi(X, V), so the analytical index of this complex is the holomorphic Euler characteristic of V: index ⁡ ( D ) = ∑ p ( − 1 ) p dim ⁡ H p ( X , V ) = χ ( X , V ) {\displaystyle \operatorname {index} (D)=\sum _{p}(-1)^{p}\dim H^{p}(X,V)=\chi (X,V)} Since we are dealing with complex bundles, the computation of the topological index is simpler. 
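In the simplest case X = CP¹ and V = O(d) with d ≥ 0, both sides are explicit: the analytic index is h⁰ − h¹ = (d + 1) − 0, while the topological index ∫ ch(V) Td(TX) can be computed from ch(O(d)) = e^{dx} and Td(TCP¹) = 2x/(1 − e^{−2x}), using T CP¹ ≅ O(2) and the hyperplane class x with x² = 0. A hedged symbolic sketch (the series bookkeeping, not taken from the text):

```python
import sympy as sp

# Hirzebruch-Riemann-Roch sanity check for X = CP^1, V = O(d):
# integrating over CP^1 picks out the coefficient of x (since x^2 = 0), so the
# topological index is the degree-1 coefficient of ch(O(d)) * Td(T CP^1).
x, d = sp.symbols('x d')

ch = sp.series(sp.exp(d * x), x, 0, 2).removeO()                 # 1 + d*x
td = sp.series(2 * x / (1 - sp.exp(-2 * x)), x, 0, 2).removeO()  # 1 + x
topological_index = sp.expand(ch * td).coeff(x, 1)

print(topological_index)   # d + 1, matching dim H^0 - dim H^1 for d >= 0
```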
Using Chern roots and doing similar computations as in the previous example, the Euler class is given by e ( T X ) = ∏ i n x i ( T X ) {\textstyle e(TX)=\prod _{i}^{n}x_{i}(TX)} and ch ⁡ ( ∑ j n ( − 1 ) j V ⊗ Λ j T ∗ X ¯ ) = ch ⁡ ( V ) ∏ j n ( 1 − e x j ) ( T X ) Td ⁡ ( T X ⊗ C ) = Td ⁡ ( T X ) Td ⁡ ( T X ¯ ) = ∏ i n x i 1 − e − x i ∏ j n − x j 1 − e x j ( T X ) {\displaystyle {\begin{aligned}\operatorname {ch} \left(\sum _{j}^{n}(-1)^{j}V\otimes \Lambda ^{j}{\overline {T^{*}X}}\right)&=\operatorname {ch} (V)\prod _{j}^{n}\left(1-e^{x_{j}}\right)(TX)\\\operatorname {Td} (TX\otimes \mathbb {C} )=\operatorname {Td} (TX)\operatorname {Td} \left({\overline {TX}}\right)&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}\prod _{j}^{n}{\frac {-x_{j}}{1-e^{x_{j}}}}(TX)\end{aligned}}} Applying the index theorem, we obtain the Hirzebruch-Riemann-Roch theorem: χ ( X , V ) = ∫ X ch ⁡ ( V ) Td ⁡ ( T X ) {\displaystyle \chi (X,V)=\int _{X}\operatorname {ch} (V)\operatorname {Td} (TX)} In fact we get a generalization of it to all complex manifolds: Hirzebruch's proof only worked for projective complex manifolds X. === Hirzebruch signature theorem === The Hirzebruch signature theorem states that the signature of a compact oriented manifold X of dimension 4k is given by the L genus of the manifold. This follows from the Atiyah–Singer index theorem applied to the following signature operator. The bundles E and F are given by the +1 and −1 eigenspaces of the operator on the bundle of differential forms of X, that acts on k-forms as i k ( k − 1 ) {\displaystyle i^{k(k-1)}} times the Hodge star operator. The operator D is the Hodge Laplacian D ≡ Δ := ( d + d ∗ ) 2 {\displaystyle D\equiv \Delta \mathrel {:=} \left(\mathbf {d} +\mathbf {d^{*}} \right)^{2}} restricted to E, where d is the Cartan exterior derivative and d* is its adjoint. The analytic index of D is the signature of the manifold X, and its topological index is the L genus of X, so these are equal. 
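In dimension 4 the L genus is L₁ = p₁/3, and the signature theorem becomes pure arithmetic once the characteristic numbers are known. As a hedged illustration, take a K3 surface; its values c₁² = 0 and c₂ = χ = 24 are standard facts, not drawn from the passage above:

```python
# Signature of a K3 surface via the Hirzebruch signature theorem.
# Standard characteristic numbers of K3 (well-known values, used here
# purely as an illustration): c1^2 = 0, c2 = Euler characteristic = 24.
c1_squared = 0
c2 = 24

p1 = c1_squared - 2 * c2   # Chern-Pontryagin relation for complex surfaces
signature = p1 // 3        # L genus in dimension 4 is p1 / 3
print(p1, signature)       # -48 -16
```

The result σ = −16 matches the intersection form of K3 (three positive and nineteen negative eigenvalues).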
=== Â genus and Rochlin's theorem === The Â genus is a rational number defined for any manifold, but is in general not an integer. Borel and Hirzebruch showed that it is integral for spin manifolds, and an even integer if in addition the dimension is 4 mod 8. This can be deduced from the index theorem, which implies that the Â genus for spin manifolds is the index of a Dirac operator. The extra factor of 2 in dimensions 4 mod 8 comes from the fact that in this case the kernel and cokernel of the Dirac operator have a quaternionic structure, so as complex vector spaces they have even dimensions, so the index is even. In dimension 4 this result implies Rochlin's theorem that the signature of a 4-dimensional spin manifold is divisible by 16: this follows because in dimension 4 the Â genus is minus one eighth of the signature. == Proof techniques == === Pseudodifferential operators === Pseudodifferential operators can be explained easily in the case of constant coefficient operators on Euclidean space. In this case, constant coefficient differential operators are just the Fourier transforms of multiplication by polynomials, and constant coefficient pseudodifferential operators are just the Fourier transforms of multiplication by more general functions. Many proofs of the index theorem use pseudodifferential operators rather than differential operators. The reason for this is that for many purposes there are not enough differential operators. For example, a pseudoinverse of an elliptic differential operator of positive order is not a differential operator, but is a pseudodifferential operator. Also, there is a direct correspondence between data representing elements of K(B(X), S(X)) (clutching functions) and symbols of elliptic pseudodifferential operators.
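The Fourier-multiplier description of constant-coefficient pseudodifferential operators can be made concrete on a periodic grid: one multiplies Fourier coefficients by a symbol that need not be a polynomial, e.g. |k| for the operator (−d²/dx²)^{1/2}. A hedged discrete sketch (the grid size, FFT discretization, and test function are illustrative choices):

```python
import numpy as np

# A constant-coefficient pseudodifferential operator as a Fourier multiplier:
# (-d^2/dx^2)^(1/2) acts on periodic functions by multiplying the k-th Fourier
# coefficient by |k|.  The symbol |k| is not a polynomial, so this is not a
# differential operator -- but applying it twice multiplies by k^2, which
# recovers the differential operator -d^2/dx^2.
N = 64
xgrid = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers 0, 1, ..., -1

def sqrt_minus_laplacian(f):
    return np.fft.ifft(np.abs(k) * np.fft.fft(f)).real

f = np.sin(3 * xgrid)
g = sqrt_minus_laplacian(sqrt_minus_laplacian(f))
print(np.allclose(g, 9 * np.sin(3 * xgrid)))   # True: equals -f'' = 9 sin(3x)
```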
Pseudodifferential operators have an order, which can be any real number or even −∞, and have symbols (which are no longer polynomials on the cotangent space), and elliptic pseudodifferential operators are those whose symbols are invertible for sufficiently large cotangent vectors. Most versions of the index theorem can be extended from elliptic differential operators to elliptic pseudodifferential operators. === Cobordism === The initial proof was based on that of the Hirzebruch–Riemann–Roch theorem (1954), and involved cobordism theory and pseudodifferential operators. The idea of this first proof is roughly as follows. Consider the ring generated by pairs (X, V) where V is a smooth vector bundle on the compact smooth oriented manifold X, with relations that the sum and product of the ring on these generators are given by disjoint union and product of manifolds (with the obvious operations on the vector bundles), and any boundary of a manifold with vector bundle is 0. This is similar to the cobordism ring of oriented manifolds, except that the manifolds also have a vector bundle. The topological and analytical indices are both reinterpreted as functions from this ring to the integers. Then one checks that these two functions are in fact both ring homomorphisms. In order to prove they are the same, it is then only necessary to check they are the same on a set of generators of this ring. Thom's cobordism theory gives a set of generators; for example, complex projective spaces with the trivial bundle together with certain bundles over even-dimensional spheres. So the index theorem can be proved by checking it on these particularly simple cases. === K-theory === Atiyah and Singer's first published proof used K-theory rather than cobordism. If i is any inclusion of compact manifolds from X to Y, they defined a 'pushforward' operation i! on elliptic operators of X to elliptic operators of Y that preserves the index.
By taking Y to be some sphere that X embeds in, this reduces the index theorem to the case of spheres. If Y is a sphere and X is some point embedded in Y, then any elliptic operator on Y is the image under i! of some elliptic operator on the point. This reduces the index theorem to the case of a point, where it is trivial. === Heat equation === Atiyah, Bott, and Patodi (1973) gave a new proof of the index theorem using the heat equation; see, e.g., Berline, Getzler & Vergne (1992). The proof is also published in (Melrose 1993) and (Gilkey 1994). If D is a differential operator with adjoint D*, then D*D and DD* are self-adjoint operators whose non-zero eigenvalues have the same multiplicities. However, their zero eigenspaces may have different multiplicities, as these multiplicities are the dimensions of the kernels of D and D*. Therefore, the index of D is given by index ⁡ ( D ) = dim ⁡ Ker ⁡ ( D ) − dim ⁡ Ker ⁡ ( D ∗ ) = dim ⁡ Ker ⁡ ( D ∗ D ) − dim ⁡ Ker ⁡ ( D D ∗ ) = Tr ⁡ ( e − t D ∗ D ) − Tr ⁡ ( e − t D D ∗ ) {\displaystyle \operatorname {index} (D)=\dim \operatorname {Ker} (D)-\dim \operatorname {Ker} (D^{*})=\dim \operatorname {Ker} (D^{*}D)-\dim \operatorname {Ker} (DD^{*})=\operatorname {Tr} \left(e^{-tD^{*}D}\right)-\operatorname {Tr} \left(e^{-tDD^{*}}\right)} for any positive t. The right hand side is given by the trace of the difference of the kernels of two heat operators. These have an asymptotic expansion for small positive t, which can be used to evaluate the limit as t tends to 0, giving a proof of the Atiyah–Singer index theorem. The asymptotic expansions for small t appear very complicated, but invariant theory shows that there are huge cancellations between the terms, which makes it possible to find the leading terms explicitly. These cancellations were later explained using supersymmetry.
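The cancellation behind the heat-trace formula (often called the McKean–Singer formula) can be watched directly in finite dimensions, where D is just a rectangular matrix: the nonzero spectra of D*D and DD* coincide with multiplicity, so the difference of heat traces is independent of t and equals dim ker D − dim ker D*. A hedged numerical sketch (the matrix shape and the sampled values of t are arbitrary choices):

```python
import numpy as np

# McKean-Singer in finite dimensions: for a real matrix D: R^7 -> R^4,
# Tr exp(-t D*D) - Tr exp(-t D D*) is independent of t and equals
# dim ker(D) - dim ker(D*) = 7 - 4, because the nonzero eigenvalues of
# D*D and D D* agree with multiplicity and cancel in the difference.
rng = np.random.default_rng(1)
D = rng.standard_normal((4, 7))

def heat_supertrace(t):
    ev1 = np.linalg.eigvalsh(D.T @ D)   # spectrum of D*D  (7 eigenvalues)
    ev2 = np.linalg.eigvalsh(D @ D.T)   # spectrum of DD*  (4 eigenvalues)
    return np.exp(-t * ev1).sum() - np.exp(-t * ev2).sum()

vals = [heat_supertrace(t) for t in (0.1, 1.0, 10.0)]
print(vals)   # each value is (numerically) 3 = 7 - 4, for every t
```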
== See also == (-1)F – Term in quantum field theory Witten index – Modified partition function == Citations == == References == The papers by Atiyah are reprinted in volumes 3 and 4 of his collected works, (Atiyah 1988a, 1988b) == External links == === Links on the theory === Mazzeo, Rafe. "The Atiyah–Singer Index Theorem: What it is and why you should care" (PDF). Archived from the original on June 24, 2006. Retrieved January 3, 2006. PDF presentation. Voitsekhovskii, M.I.; Shubin, M.A. (2001) [1994], "Index formulas", Encyclopedia of Mathematics, EMS Press Wassermann, Antony. "Lecture notes on the Atiyah–Singer Index Theorem". Archived from the original on March 29, 2017. === Links of interviews === Raussen, Martin; Skau, Christian (2005), "Interview with Michael Atiyah and Isadore Singer" (PDF), Notices of AMS, pp. 223–231 R. R. Seeley and others (1999) Recollections from the early days of index theory and pseudo-differential operators - A partial transcript of informal post-dinner conversation during a symposium held in Roskilde, Denmark, in September 1998.
Wikipedia/Index_theory
In mathematics, the associated graded ring of a ring R with respect to a proper ideal I is the graded ring: gr I ⁡ R = ⨁ n = 0 ∞ I n / I n + 1 {\displaystyle \operatorname {gr} _{I}R=\bigoplus _{n=0}^{\infty }I^{n}/I^{n+1}} . Similarly, if M is a left R-module, then the associated graded module is the graded module over gr I ⁡ R {\displaystyle \operatorname {gr} _{I}R} : gr I ⁡ M = ⨁ n = 0 ∞ I n M / I n + 1 M {\displaystyle \operatorname {gr} _{I}M=\bigoplus _{n=0}^{\infty }I^{n}M/I^{n+1}M} . == Basic definitions and properties == For a ring R and ideal I, multiplication in gr I ⁡ R {\displaystyle \operatorname {gr} _{I}R} is defined as follows: First, consider homogeneous elements a ∈ I i / I i + 1 {\displaystyle a\in I^{i}/I^{i+1}} and b ∈ I j / I j + 1 {\displaystyle b\in I^{j}/I^{j+1}} and suppose a ′ ∈ I i {\displaystyle a'\in I^{i}} is a representative of a and b ′ ∈ I j {\displaystyle b'\in I^{j}} is a representative of b. Then define a b {\displaystyle ab} to be the equivalence class of a ′ b ′ {\displaystyle a'b'} in I i + j / I i + j + 1 {\displaystyle I^{i+j}/I^{i+j+1}} . Note that this is well-defined modulo I i + j + 1 {\displaystyle I^{i+j+1}} . Multiplication of inhomogeneous elements is defined by using the distributive property. A ring or module may be related to its associated graded ring or module through the initial form map. Let M be an R-module and I an ideal of R. Given f ∈ M {\displaystyle f\in M} , the initial form of f in gr I ⁡ M {\displaystyle \operatorname {gr} _{I}M} , written i n ( f ) {\displaystyle \mathrm {in} (f)} , is the equivalence class of f in I m M / I m + 1 M {\displaystyle I^{m}M/I^{m+1}M} where m is the maximum integer such that f ∈ I m M {\displaystyle f\in I^{m}M} . If f ∈ I m M {\displaystyle f\in I^{m}M} for every m, then set i n ( f ) = 0 {\displaystyle \mathrm {in} (f)=0} . The initial form map is only a map of sets and generally not a homomorphism. 
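The failure of the initial form map to be a homomorphism is already visible in R = Q[x, y] with I = (x, y): there gr_I R can be identified with R itself graded by total degree, and in(f) is the lowest-degree homogeneous part of f. A hedged symbolic sketch (the helper `initial_form` and the sample polynomials are illustrative choices, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

def initial_form(f, gens=(x, y)):
    """Lowest-total-degree homogeneous part of f: the initial form of f with
    respect to I = (x, y) in R = Q[x, y], under the illustrative identification
    of gr_I R with R graded by total degree."""
    p = sp.Poly(sp.expand(f), *gens)
    if p.is_zero:
        return sp.Integer(0)
    d = min(sum(m) for m in p.monoms())
    return sp.Add(*[c * sp.Mul(*[g**e for g, e in zip(gens, m)])
                    for m, c in p.terms() if sum(m) == d])

# The map is not additive: in(f + g) need not equal in(f) + in(g).
f, g = x, -x + y**2
print(initial_form(f), initial_form(g), initial_form(f + g))
# in(f) = x and in(g) = -x, so in(f) + in(g) = 0, yet in(f + g) = y**2.
```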
For a submodule N ⊂ M {\displaystyle N\subset M} , i n ( N ) {\displaystyle \mathrm {in} (N)} is defined to be the submodule of gr I ⁡ M {\displaystyle \operatorname {gr} _{I}M} generated by { i n ( f ) | f ∈ N } {\displaystyle \{\mathrm {in} (f)|f\in N\}} . This may not be the same as the submodule of gr I ⁡ M {\displaystyle \operatorname {gr} _{I}M} generated by the initial forms of the generators of N alone. A ring inherits some "good" properties from its associated graded ring. For example, if R is a noetherian local ring, and gr I ⁡ R {\displaystyle \operatorname {gr} _{I}R} is an integral domain, then R is itself an integral domain. == gr of a quotient module == Let N ⊂ M {\displaystyle N\subset M} be left modules over a ring R and I an ideal of R. Since I n ( M / N ) I n + 1 ( M / N ) ≃ I n M + N I n + 1 M + N ≃ I n M I n M ∩ ( I n + 1 M + N ) = I n M I n M ∩ N + I n + 1 M {\displaystyle {I^{n}(M/N) \over I^{n+1}(M/N)}\simeq {I^{n}M+N \over I^{n+1}M+N}\simeq {I^{n}M \over I^{n}M\cap (I^{n+1}M+N)}={I^{n}M \over I^{n}M\cap N+I^{n+1}M}} (the last equality holds by the modular law), there is a canonical identification: gr I ⁡ ( M / N ) = gr I ⁡ M / in ⁡ ( N ) {\displaystyle \operatorname {gr} _{I}(M/N)=\operatorname {gr} _{I}M/\operatorname {in} (N)} where in ⁡ ( N ) = ⨁ n = 0 ∞ I n M ∩ N + I n + 1 M I n + 1 M , {\displaystyle \operatorname {in} (N)=\bigoplus _{n=0}^{\infty }{I^{n}M\cap N+I^{n+1}M \over I^{n+1}M},} called the submodule generated by the initial forms of the elements of N {\displaystyle N} . == Examples == Let U be the universal enveloping algebra of a Lie algebra g {\displaystyle {\mathfrak {g}}} over a field k; it is filtered by degree. The Poincaré–Birkhoff–Witt theorem implies that gr ⁡ U {\displaystyle \operatorname {gr} U} is a polynomial ring; in fact, it is the coordinate ring k [ g ∗ ] {\displaystyle k[{\mathfrak {g}}^{*}]} .
The associated graded algebra of a Clifford algebra is an exterior algebra; i.e., a Clifford algebra degenerates to an exterior algebra. == Generalization to multiplicative filtrations == The associated graded can also be defined more generally for multiplicative descending filtrations of R (see also filtered ring). Let F be a descending chain of ideals of the form R = I 0 ⊃ I 1 ⊃ I 2 ⊃ ⋯ {\displaystyle R=I_{0}\supset I_{1}\supset I_{2}\supset \dotsb } such that I j I k ⊂ I j + k {\displaystyle I_{j}I_{k}\subset I_{j+k}} . The graded ring associated with this filtration is gr F ⁡ R = ⨁ n = 0 ∞ I n / I n + 1 {\displaystyle \operatorname {gr} _{F}R=\bigoplus _{n=0}^{\infty }I_{n}/I_{n+1}} . Multiplication and the initial form map are defined as above. == See also == Graded (mathematics) Rees algebra == References == Eisenbud, David (1995). Commutative Algebra. Graduate Texts in Mathematics. Vol. 150. New York: Springer-Verlag. doi:10.1007/978-1-4612-5350-1. ISBN 0-387-94268-8. MR 1322960. Matsumura, Hideyuki (1989). Commutative ring theory. Cambridge Studies in Advanced Mathematics. Vol. 8. Translated from the Japanese by M. Reid (Second ed.). Cambridge: Cambridge University Press. ISBN 0-521-36764-6. MR 1011461. Zariski, Oscar; Samuel, Pierre (1975), Commutative algebra. Vol. II, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90171-8, MR 0389876
Wikipedia/Associated_graded_algebra
In mathematics, a generalized Clifford algebra (GCA) is a unital associative algebra that generalizes the Clifford algebra, and goes back to the work of Hermann Weyl, who utilized and formalized the clock-and-shift operators introduced by J. J. Sylvester (1882), and organized by Cartan (1898) and Schwinger. Clock and shift matrices find routine applications in numerous areas of mathematical physics, providing the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces. The concept of a spinor can further be linked to these algebras. The term generalized Clifford algebra can also refer to associative algebras that are constructed using forms of higher degree instead of quadratic forms. == Definition and properties == === Abstract definition === The n-dimensional generalized Clifford algebra is defined as an associative algebra over a field F, generated by elements e j {\displaystyle e_{j}} and ω j k {\displaystyle \omega _{jk}} subject to the relations e j e k = ω j k e k e j ω j k e ℓ = e ℓ ω j k ω j k ω ℓ m = ω ℓ m ω j k {\displaystyle {\begin{aligned}e_{j}e_{k}&=\omega _{jk}e_{k}e_{j}\\\omega _{jk}e_{\ell }&=e_{\ell }\omega _{jk}\\\omega _{jk}\omega _{\ell m}&=\omega _{\ell m}\omega _{jk}\end{aligned}}} and e j N j = 1 = ω j k N j = ω j k N k {\displaystyle e_{j}^{N_{j}}=1=\omega _{jk}^{N_{j}}=\omega _{jk}^{N_{k}}\,} ∀ j,k,ℓ,m = 1, . . . ,n. Moreover, in any irreducible matrix representation, relevant for physical applications, it is required that ω j k = ω k j − 1 = e 2 π i ν k j / N k j {\displaystyle \omega _{jk}=\omega _{kj}^{-1}=e^{2\pi i\nu _{kj}/N_{kj}}} ∀ j,k = 1, . . . ,n, and N k j = {\displaystyle N_{kj}={}} gcd ( N j , N k ) {\displaystyle (N_{j},N_{k})} . The field F is usually taken to be the complex numbers C. === More specific definition === In the more common cases of GCA, the n-dimensional generalized Clifford algebra of order p has the property ωkj = ω, N k = p {\displaystyle N_{k}=p} for all j,k, and ν k j = 1 {\displaystyle \nu _{kj}=1} .
It follows that e j e k = ω e k e j ω e ℓ = e ℓ ω {\displaystyle {\begin{aligned}e_{j}e_{k}&=\omega \,e_{k}e_{j}\,\\\omega e_{\ell }&=e_{\ell }\omega \,\end{aligned}}} and e j p = 1 = ω p {\displaystyle e_{j}^{p}=1=\omega ^{p}\,} for all j,k,ℓ = 1, . . . ,n, and ω = e 2 π i / p {\displaystyle \omega =e^{2\pi i/p}} is a primitive pth root of unity. There exist several definitions of a generalized Clifford algebra in the literature. In the (orthogonal) Clifford algebra, the elements follow an anticommutation rule, with ω = −1, and p = 2. == Matrix representation == The Clock and Shift matrices can be represented by n×n matrices in Schwinger's canonical notation as V = ( 0 1 0 ⋯ 0 0 0 1 ⋯ 0 0 0 ⋱ 1 0 ⋮ ⋮ ⋮ ⋱ ⋮ 1 0 0 ⋯ 0 ) , U = ( 1 0 0 ⋯ 0 0 ω 0 ⋯ 0 0 0 ω 2 ⋯ 0 ⋮ ⋮ ⋮ ⋱ ⋮ 0 0 0 ⋯ ω ( n − 1 ) ) , W = ( 1 1 1 ⋯ 1 1 ω ω 2 ⋯ ω n − 1 1 ω 2 ( ω 2 ) 2 ⋯ ω 2 ( n − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 ω n − 1 ω 2 ( n − 1 ) ⋯ ω ( n − 1 ) 2 ) {\displaystyle {\begin{aligned}V&={\begin{pmatrix}0&1&0&\cdots &0\\0&0&1&\cdots &0\\0&0&\ddots &1&0\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&0&0&\cdots &0\end{pmatrix}},&U&={\begin{pmatrix}1&0&0&\cdots &0\\0&\omega &0&\cdots &0\\0&0&\omega ^{2}&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &\omega ^{(n-1)}\end{pmatrix}},&W&={\begin{pmatrix}1&1&1&\cdots &1\\1&\omega &\omega ^{2}&\cdots &\omega ^{n-1}\\1&\omega ^{2}&(\omega ^{2})^{2}&\cdots &\omega ^{2(n-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\omega ^{n-1}&\omega ^{2(n-1)}&\cdots &\omega ^{(n-1)^{2}}\end{pmatrix}}\end{aligned}}} . Notably, Vn = 1, VU = ωUV (the Weyl braiding relations), and W−1VW = U (the discrete Fourier transform). With e1 = V , e2 = VU, and e3 = U, one has three basis elements which, together with ω, fulfil the above conditions of the Generalized Clifford Algebra (GCA). These matrices, V and U, normally referred to as "shift and clock matrices", were introduced by J. J. Sylvester in the 1880s.
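The relations above are straightforward to verify numerically. A hedged sketch (n = 5 is an arbitrary choice) checking Vⁿ = 1, the braiding VU = ωUV, the Fourier conjugation W⁻¹VW = U, and the GCA relation e₁e₂ = ωe₂e₁ for e₁ = V, e₂ = VU:

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)

# Shift matrix V (cyclic permutation) and clock matrix U, as in the text.
V = np.roll(np.eye(n), -1, axis=0)                  # V[i, (i+1) mod n] = 1
U = np.diag(omega ** np.arange(n))
W = omega ** np.outer(np.arange(n), np.arange(n))   # DFT-type matrix

assert np.allclose(np.linalg.matrix_power(V, n), np.eye(n))  # V^n = 1
assert np.allclose(V @ U, omega * (U @ V))                   # Weyl braiding
assert np.allclose(np.linalg.inv(W) @ V @ W, U)              # W^-1 V W = U

e1, e2 = V, V @ U
assert np.allclose(e1 @ e2, omega * (e2 @ e1))               # GCA relation
print("all clock/shift relations hold")
```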
(Note that the matrices V are cyclic permutation matrices that perform a circular shift; they are not to be confused with upper and lower shift matrices which have ones only either above or below the diagonal, respectively). === Specific examples === ==== Case n = p = 2 ==== In this case, we have ω = −1, and V = ( 0 1 1 0 ) , U = ( 1 0 0 − 1 ) , W = ( 1 1 1 − 1 ) {\displaystyle {\begin{aligned}V&={\begin{pmatrix}0&1\\1&0\end{pmatrix}},&U&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}},&W&={\begin{pmatrix}1&1\\1&-1\end{pmatrix}}\end{aligned}}} thus e 1 = ( 0 1 1 0 ) , e 2 = ( 0 − 1 1 0 ) , e 3 = ( 1 0 0 − 1 ) , {\displaystyle {\begin{aligned}e_{1}&={\begin{pmatrix}0&1\\1&0\end{pmatrix}},&e_{2}&={\begin{pmatrix}0&-1\\1&0\end{pmatrix}},&e_{3}&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}},\end{aligned}}} which constitute the Pauli matrices. ==== Case n = p = 4 ==== In this case we have ω = i, and V = ( 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 ) , U = ( 1 0 0 0 0 i 0 0 0 0 − 1 0 0 0 0 − i ) , W = ( 1 1 1 1 1 i − 1 − i 1 − 1 1 − 1 1 − i − 1 i ) {\displaystyle {\begin{aligned}V&={\begin{pmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\1&0&0&0\end{pmatrix}},&U&={\begin{pmatrix}1&0&0&0\\0&i&0&0\\0&0&-1&0\\0&0&0&-i\end{pmatrix}},&W&={\begin{pmatrix}1&1&1&1\\1&i&-1&-i\\1&-1&1&-1\\1&-i&-1&i\end{pmatrix}}\end{aligned}}} and e1, e2, e3 may be determined accordingly. == See also == Clifford algebra Generalizations of Pauli matrices DFT matrix Circulant matrix == References == == Further reading == Fairlie, D. B.; Fletcher, P.; Zachos, C. K. (1990). "Infinite-dimensional algebras and a trigonometric basis for the classical Lie algebras". Journal of Mathematical Physics. 31 (5): 1088. Bibcode:1990JMP....31.1088F. doi:10.1063/1.528788. Jagannathan, R. (2010). "On generalized Clifford algebras and their physical applications". arXiv:1005.4300 [math-ph]. (In The legacy of Alladi Ramakrishnan in the mathematical sciences (pp. 465–489). Springer, New York, NY.) Morinaga, K.; Nono, T. (1952). 
"On the linearization of a form of higher degree and its representation". J. Sci. Hiroshima Univ. Ser. A. 16: 13–41. doi:10.32917/hmj/1557367250. Morris, A.O. (1967). "On a Generalized Clifford Algebra". Quart. J. Math. (Oxford). 18 (1): 7–12. Bibcode:1967QJMat..18....7M. doi:10.1093/qmath/18.1.7. Morris, A.O. (1968). "On a Generalized Clifford Algebra II". Quart. J. Math. (Oxford). 19 (1): 289–299. Bibcode:1968QJMat..19..289M. doi:10.1093/qmath/19.1.289.
Wikipedia/Generalized_Clifford_algebra
In mathematics and physics CCR algebras (after canonical commutation relations) and CAR algebras (after canonical anticommutation relations) arise from the quantum mechanical study of bosons and fermions, respectively. They play a prominent role in quantum statistical mechanics and quantum field theory. == CCR and CAR as *-algebras == Let V {\displaystyle V} be a real vector space equipped with a nonsingular real antisymmetric bilinear form ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} (i.e. a symplectic vector space). The unital *-algebra generated by elements of V {\displaystyle V} subject to the relations f g − g f = i ( f , g ) {\displaystyle fg-gf=i(f,g)\,} f ∗ = f , {\displaystyle f^{*}=f,\,} for any f , g {\displaystyle f,~g} in V {\displaystyle V} is called the canonical commutation relations (CCR) algebra. The uniqueness of the representations of this algebra when V {\displaystyle V} is finite dimensional is discussed in the Stone–von Neumann theorem. If V {\displaystyle V} is equipped with a nonsingular real symmetric bilinear form ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} instead, the unital *-algebra generated by the elements of V {\displaystyle V} subject to the relations f g + g f = ( f , g ) , {\displaystyle fg+gf=(f,g),\,} f ∗ = f , {\displaystyle f^{*}=f,\,} for any f , g {\displaystyle f,~g} in V {\displaystyle V} is called the canonical anticommutation relations (CAR) algebra. == The C*-algebra of CCR == There is a distinct, but closely related meaning of CCR algebra, called the CCR C*-algebra. Let H {\displaystyle H} be a real symplectic vector space with nonsingular symplectic form ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} . In the theory of operator algebras, the CCR algebra over H {\displaystyle H} is the unital C*-algebra generated by elements { W ( f ) : f ∈ H } {\displaystyle \{W(f):~f\in H\}} subject to W ( f ) W ( g ) = e − i ( f , g ) W ( f + g ) , {\displaystyle W(f)W(g)=e^{-i(f,g)}W(f+g),\,} W ( f ) ∗ = W ( − f ) . 
{\displaystyle W(f)^{*}=W(-f).\,} These are called the Weyl form of the canonical commutation relations and, in particular, they imply that each W ( f ) {\displaystyle W(f)} is unitary and W ( 0 ) = 1 {\displaystyle W(0)=1} . The CCR algebra is a simple (provided the symplectic form is nondegenerate), non-separable algebra, and it is unique up to isomorphism. When H {\displaystyle H} is a complex Hilbert space and ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} is given by the imaginary part of the inner product, the CCR algebra is faithfully represented on the symmetric Fock space over H {\displaystyle H} by setting W ( f ) ( 1 , g , g ⊗ 2 2 ! , g ⊗ 3 3 ! , … ) = e − 1 2 ‖ f ‖ 2 − ⟨ f , g ⟩ ( 1 , f + g , ( f + g ) ⊗ 2 2 ! , ( f + g ) ⊗ 3 3 ! , … ) , {\displaystyle W(f)\left(1,g,{\frac {g^{\otimes 2}}{2!}},{\frac {g^{\otimes 3}}{3!}},\ldots \right)=e^{-{\frac {1}{2}}\|f\|^{2}-\langle f,g\rangle }\left(1,f+g,{\frac {(f+g)^{\otimes 2}}{2!}},{\frac {(f+g)^{\otimes 3}}{3!}},\ldots \right),} for any f , g ∈ H {\displaystyle f,g\in H} . The field operators B ( f ) {\displaystyle B(f)} are defined for each f ∈ H {\displaystyle f\in H} as the generator of the one-parameter unitary group ( W ( t f ) ) t ∈ R {\displaystyle (W(tf))_{t\in \mathbb {R} }} on the symmetric Fock space. These are self-adjoint unbounded operators; however, they formally satisfy B ( f ) B ( g ) − B ( g ) B ( f ) = 2 i Im ⁡ ⟨ f , g ⟩ . {\displaystyle B(f)B(g)-B(g)B(f)=2i\operatorname {Im} \langle f,g\rangle .} As the assignment f ↦ B ( f ) {\displaystyle f\mapsto B(f)} is real-linear, the operators B ( f ) {\displaystyle B(f)} define a CCR algebra over ( H , 2 Im ⁡ ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (H,2\operatorname {Im} \langle \cdot ,\cdot \rangle )} in the sense of Section 1. == The C*-algebra of CAR == Let H {\displaystyle H} be a Hilbert space.
In the theory of operator algebras, the CAR algebra is the unique C*-completion of the complex unital *-algebra generated by elements { b ( f ) , b ∗ ( f ) : f ∈ H } {\displaystyle \{b(f),b^{*}(f):~f\in H\}} subject to the relations b ( f ) b ∗ ( g ) + b ∗ ( g ) b ( f ) = ⟨ f , g ⟩ , {\displaystyle b(f)b^{*}(g)+b^{*}(g)b(f)=\langle f,g\rangle ,\,} b ( f ) b ( g ) + b ( g ) b ( f ) = 0 , {\displaystyle b(f)b(g)+b(g)b(f)=0,\,} λ b ∗ ( f ) = b ∗ ( λ f ) , {\displaystyle \lambda b^{*}(f)=b^{*}(\lambda f),\,} b ( f ) ∗ = b ∗ ( f ) , {\displaystyle b(f)^{*}=b^{*}(f),\,} for any f , g ∈ H {\displaystyle f,g\in H} , λ ∈ C {\displaystyle \lambda \in \mathbb {C} } . When H {\displaystyle H} is separable, the CAR algebra is an AF algebra, and when H {\displaystyle H} is infinite-dimensional, it is often written as M 2 ∞ ( C ) {\displaystyle {M_{2^{\infty }}(\mathbb {C} )}} . Let F a ( H ) {\displaystyle F_{a}(H)} be the antisymmetric Fock space over H {\displaystyle H} and let P a {\displaystyle P_{a}} be the orthogonal projection onto antisymmetric vectors: P a : ⨁ n = 0 ∞ H ⊗ n → F a ( H ) . {\displaystyle P_{a}:\bigoplus _{n=0}^{\infty }H^{\otimes n}\to F_{a}(H).\,} The CAR algebra is faithfully represented on F a ( H ) {\displaystyle F_{a}(H)} by setting b ∗ ( f ) P a ( g 1 ⊗ g 2 ⊗ ⋯ ⊗ g n ) = n + 1 P a ( f ⊗ g 1 ⊗ g 2 ⊗ ⋯ ⊗ g n ) {\displaystyle b^{*}(f)P_{a}(g_{1}\otimes g_{2}\otimes \cdots \otimes g_{n})={\sqrt {n+1}}P_{a}(f\otimes g_{1}\otimes g_{2}\otimes \cdots \otimes g_{n})\,} for all f , g 1 , … , g n ∈ H {\displaystyle f,g_{1},\ldots ,g_{n}\in H} and n ∈ N {\displaystyle n\in \mathbb {N} } . These form a C*-algebra because creation and annihilation operators on antisymmetric Fock space are bona fide bounded operators.
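The anticommutation relations above can be checked concretely in a small matrix model. The following sketch is an illustration, not from the article: it uses the standard Jordan–Wigner construction to represent two fermionic modes on a 4-dimensional space, with made-up variable names, and verifies the CAR relations and the field-operator relation numerically.

```python
import numpy as np

# Jordan-Wigner representation of two fermionic modes on C^2 (x) C^2.
# (Illustrative sketch; a standard construction, not from the article.)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                 # parity operator on one mode
b = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation operator

b1 = np.kron(b, I2)                      # annihilates mode 1
b2 = np.kron(Z, b)                       # the Z "string" enforces anticommutation

def anti(A, B):
    """Anticommutator AB + BA."""
    return A @ B + B @ A

# CAR relations: {b_i, b_j*} = delta_ij, {b_i, b_j} = 0
assert np.allclose(anti(b1, b1.conj().T), np.eye(4))
assert np.allclose(anti(b2, b2.conj().T), np.eye(4))
assert np.allclose(anti(b1, b2.conj().T), 0)
assert np.allclose(anti(b1, b2), 0)

# Field operators B(e_i) = b_i* + b_i satisfy B(f)B(g) + B(g)B(f) = 2 Re<f,g>
B1, B2 = b1 + b1.conj().T, b2 + b2.conj().T
assert np.allclose(anti(B1, B1), 2 * np.eye(4))   # 2 Re<e1,e1> = 2
assert np.allclose(anti(B1, B2), 0)               # 2 Re<e1,e2> = 0
```

Here the orthonormal basis vectors e1, e2 of a two-dimensional H play the role of f and g; extending to n modes inserts longer strings of Z factors before each single-mode annihilation operator.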
Moreover, the field operators B ( f ) := b ∗ ( f ) + b ( f ) {\displaystyle B(f):=b^{*}(f)+b(f)} satisfy B ( f ) B ( g ) + B ( g ) B ( f ) = 2 R e ⟨ f , g ⟩ , {\displaystyle B(f)B(g)+B(g)B(f)=2\mathrm {Re} \langle f,g\rangle ,\,} giving the relationship with Section 1. == Superalgebra generalization == Let V {\displaystyle V} be a real Z 2 {\displaystyle \mathbb {Z} _{2}} -graded vector space equipped with a nonsingular antisymmetric bilinear superform ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} (i.e. ( g , f ) = − ( − 1 ) | f | | g | ( f , g ) {\displaystyle (g,f)=-(-1)^{|f||g|}(f,g)} ) such that ( f , g ) {\displaystyle (f,g)} is real if either f {\displaystyle f} or g {\displaystyle g} is an even element and imaginary if both of them are odd. The unital *-algebra generated by the elements of V {\displaystyle V} subject to the relations f g − ( − 1 ) | f | | g | g f = i ( f , g ) {\displaystyle fg-(-1)^{|f||g|}gf=i(f,g)\,} f ∗ = f , g ∗ = g {\displaystyle f^{*}=f,~g^{*}=g\,} for any two pure elements f , g {\displaystyle f,~g} in V {\displaystyle V} is the natural superalgebra generalization unifying the CCR and CAR algebras: if all pure elements are even, one obtains a CCR algebra, while if all pure elements are odd, one obtains a CAR algebra. In mathematics, the abstract structure of the CCR and CAR algebras, over any field, not just the complex numbers, is studied under the name of Weyl and Clifford algebras, where many significant results have accrued. One of these is that the graded generalizations of Weyl and Clifford algebras allow the basis-free formulation of the canonical commutation and anticommutation relations in terms of a symplectic and a symmetric non-degenerate bilinear form. In addition, the binary elements in this graded Weyl algebra give a basis-free version of the commutation relations of the symplectic and indefinite orthogonal Lie algebras.
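In the purely even case the relation fg − gf = i(f, g) reduces to the ordinary CCR, which admits no finite-dimensional representation but is realized by unbounded operators: the familiar Schrödinger picture takes position as multiplication by x and momentum as −i d/dx, so that [Q, P] = i. The sketch below (an illustration, not from the article; the coefficient-list encoding of polynomials is an assumption of the example) checks this identity exactly on a test polynomial.

```python
# Polynomials are lists of complex coefficients: p[k] is the coefficient of x**k.
# Schrodinger-type representation of one canonical pair (illustrative sketch).

def Q(p):
    """Position operator: multiply by x (shift coefficients up by one degree)."""
    return [0j] + list(p)

def P(p):
    """Momentum operator: -i d/dx."""
    return [-1j * k * p[k] for k in range(1, len(p))]

def sub(p, q):
    """Coefficient-wise difference, padding to a common length."""
    n = max(len(p), len(q))
    p = list(p) + [0j] * (n - len(p))
    q = list(q) + [0j] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

f = [2 + 1j, -4j, 3j, 1 + 0j]        # an arbitrary test polynomial
comm = sub(Q(P(f)), P(Q(f)))         # [Q, P] applied to f

# The commutator acts as multiplication by i, matching (Q, P) = 1 in fg - gf = i(f, g).
expected = [1j * c for c in f]
assert all(abs(a - b) < 1e-12 for a, b in zip(comm, expected))
```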
== See also == Bose–Einstein statistics Fermi–Dirac statistics Glossary of string theory Heisenberg group Bogoliubov transformation (−1)F == References ==
Wikipedia/CCR_and_CAR_algebras
Algorithm is the first studio album from My Heart to Fear. Solid State Records released the album on July 9, 2013. == Critical reception == Awarding the album three stars from Alternative Press, Jason Schreurs writes, "As is the case with the bulk of this musical style, the vocals bring it back down to a near-mediocre level." Bradley Zorgdrager, rating the album a five out of ten for Exclaim!, says, "Unfortunately, a lack of inspiration causes the songs to come undone, as many of the parts sound only like a means to get to the next." Giving the album four stars at About.com, Todd Lyons states, "everything binds together into one masterful meditation." Tim Dodderidge, in an 8.5 out of ten review for Mind Equals Blown, writes, "From start to finish, My Heart to Fear’s debut full-length is an energetic, ferocious, cathartic and inspiring metal album." Kevin Hoskins, giving the album three and a half stars for Jesus Freak Hideout, writes, "this is just metal done well ... but any hardcore fan will be digging this release all summer long." Awarding the album four and a half stars from HM Magazine, Sean Huncherick states, "One good thing about Algorithm is that the band realizes they don’t need to constantly play as fast as they can." Brody B., rating the album four stars at Indie Vision Music, writes, "With a few minor tweaks here and there that could have made songs feel more fleshed out I would have had a hard time finding fault with this debut record." == Track listing == == References ==
Wikipedia/Algorithm_(My_Heart_to_Fear_album)
"Algorithm" is a song by English rock band Muse. It was released as the first track from the band's eighth studio album, Simulation Theory, on 9 November 2018. "Algorithm" is a retro-futuristic and industrial sounding song, in common with the overall theme of Simulation Theory. == Release == The name of the song was first mentioned by lead singer Matt Bellamy while speaking with Matt Wilkinson on Beats 1 Radio on 16 February 2018. According to Bellamy, the song blends classical romantic piano with 1980s synths and computer game music. Q magazine published a preview of Simulation Theory on 23 October, in which they stated that "opener "Algorithm" sounds like it could be from Daft Punk's Tron: Legacy soundtrack, a fusion of dramatic strings and industrial electro." == Writing and recording == Lead singer Bellamy said regarding "Algorithm" that "it’s about an intelligence, be it human or artificial, that realises that it lives in a simulated reality, and it is controlled by its creator. It feels betrayed, finds this situation unfair and tries to escape". In another interview, Bellamy said that "Algorithm" and "The Dark Side" deal with the struggle to get away from a dystopian world and anxieties about technology. Bellamy has stated that "Algorithm" is his favourite track from Simulation Theory "because it's an interesting combination of retro-synth and futuristic stuff". The song has been compared by various reviewers to "Apocalypse Please" and "Supremacy", and by Muse fans to "Take A Bow" and "The Dark Side". == Music video == A short preview of the "Algorithm" music video was first shown by Bellamy on his smartphone during an interview on Virgin Radio in Milan two days prior to the release. Actor Terry Crews makes a second appearance in a Simulation Theory music video, following on from his role as protagonist of the music video for "Pressure". == Charts == == References == == External links == "Algorithm" music video on YouTube
Wikipedia/Algorithm_(song)
The Algorithm is the musical project of French musician Rémi Gallego (born 7 October 1989) from Perpignan. His style is characterised by an unusual combination of electronic music with progressive metal. Gallego chose the name The Algorithm to highlight the music's complex and electronic nature. == History == === Early years (2009–2010) === After the demise of his band Dying Breath, Rémi Gallego decided in 2009 to look for potential members for a band focused on mathcore, inspired by The Dillinger Escape Plan. After a futile search for new members, Gallego began to produce his own music with the help of his guitar and a DAW. In December 2009 and July 2010, he published the two demos The Doppler Effect and Critical Error, released through his own website for free download. Towards the end of 2010, he announced that he was working on a new EP named Identity (it was never completed) and that he was preparing for his first live appearances. === First live shows (2011) === In August 2011, The Algorithm released the compilation Method_, which collected the songs from his two previous demos and was likewise available as a free download. An appearance followed in October 2011 at the Euroblast Festival in Cologne, where The Algorithm featured alongside bands such as Textures, TesseracT and Vildhjarta. A month later Mike Malyan, drummer for the band Monuments, uploaded a drum cover of the song "Isometry" on YouTube. After seeing this, Gallego was convinced that it would be possible to play his songs on a real drum set, and Malyan joined him as drummer for live performances. "Before he put his Isometry drum cover online, one year ago, I had no idea that someone could ever play my drum programmings with such dedication, musicality and tightness. He was really thrilled to play live with me and so we decided to make it happen. I can't be more happy to work with such a great friend/musician."
In the same month, The Algorithm signed a record deal with the British label Basick Records. === Signing to Basick Records and Polymorphic Code (2012–2013) === In January 2012, The Algorithm released the single "Trojans" via Basick Records, which was only available digitally. Appearances followed at festivals such as Djentival in Karlsruhe, Germany, and the UK Tech-Metal Fest in Alton, United Kingdom, where he performed alongside Uneven Structure and Chimp Spanner. On 19 November 2012, the debut album Polymorphic Code was released through Basick Records, which included seven previously unreleased songs as well as the song "Trojans". In January 2013, The Algorithm played alongside Enter Shikari and Cancer Bats at a concert in Paris. In April 2013, The Algorithm played their first live shows in the UK with a new live member, guitarist Max Michel. On 17 June 2013, The Algorithm received the Metal Hammer Golden Gods Award for best underground artist of the year, as voted by Metal Hammer readers. From September to October 2013, The Algorithm toured mainland Europe on the French Connection Tour with Uneven Structure and Weaksaw. However, Mike Malyan was not able to perform on this tour; Boris Le Gal of NeonFly filled in for him instead. The live line-up also performed on a UK tour with Hacktivist from November to December 2013. === Octopus4 and video games (2013–2014) === In December 2013, the band played a show in Paris with Uneven Structure, Kadinja and Cycles. A week afterwards, it was announced that Max Michel would no longer be performing with Rémi, as he had been accepted into the Berklee College of Music and could no longer tour regularly. The Algorithm's second album, Octopus4, was released on 2 June 2014. Along with the release of the album, a crowdfunding campaign was launched for a video game named RogueStar: Pirates vs. Privateers, which features music composed by Rémi.
=== Brute Force, Compiler Optimization Techniques and Data Renaissance (2016–present) === The third album, Brute Force, was released on 1 April 2016 through the label FiXT. In 2017, an expansion named Hacknet Labyrinths was released for the video game Hacknet, featuring music composed by Rémi. The same year, Rémi released an EP titled "直線移動" under a new alias, Boucle Infinie. In 2018, The Algorithm released his fourth studio album, Compiler Optimization Techniques. In 2022, the project's fifth studio album, Data Renaissance, was published. == Musical style == The Algorithm melds several styles of electronic and electronic dance music with progressive metal (including djent and mathcore). For live performances Rémi Gallego uses an Akai APC40, a MIDI controller produced by the company Akai Professional, co-developed with the German company Ableton, connected to a laptop running Ableton Live. In addition, a distorted female voice can be heard on almost all the releases, provided by Florent Latorre, a friend of Gallego's.
== Members == Rémi Gallego – electronics, guitars, bass (2009–present) Touring members Mike Malyan – drums (2012–2014) Jean Ferry – drums, electronic drums (2013–present) Max Michel – guitars (2013) == Discography == === Studio albums === Polymorphic Code (2012) Octopus4 (2014) Brute Force (2016) Compiler Optimization Techniques (2018) Data Renaissance (2022) === Compilations === Method_ (2011) === EPs === Identity (2010) Brute Force: Overclock (2016) Brute Force: Source Code (2017) === Singles === "Trojans" (2012) "Synthesizer" (2014) "Terminal" (2014) "Neotokyo" (2015) "Floating Point" (2016) "Pointers" (2016) "Collapse" (2018) "People from the Dark Hill" (2020) "Among the Wolves" (2021) "Protocols" (2021) "Interrupt Handler" (2021) "Segmentation Fault" (2021) "Run Away" (2021) "Decompilation" (2021) "Readonly" (2021) "Cryptographic Memory" (2021) "Object Resurrection" (2022) "Cosmic Rays and Flipped Bits" (2022) "Latent Noise" (2023) === Demos === The Doppler Effect (2009) Critical Error (2010) == References == == External links == Official site The Algorithm at Myspace The Algorithm at YouTube
Wikipedia/The_Algorithm
Algorithmic may refer to: Algorithm, step-by-step instructions for a calculation Algorithmic art, art made by an algorithm Algorithmic composition, music made by an algorithm Algorithmic trading, trading decisions made by an algorithm Algorithmic patent, an intellectual property right in an algorithm Algorithmics, the science of algorithms Algorithmica, an academic journal for algorithm research Algorithmic efficiency, the computational resources used by an algorithm Algorithmic information theory, study of relationships between computation and information Algorithmic mechanism design, the design of economic systems from an algorithmic point of view Algorithmic number theory, algorithms for number-theoretic computation Algorithmic game theory, game-theoretic techniques for algorithm design and analysis Algorithmic cooling, a phenomenon in quantum computation Algorithmic probability, a universal choice of prior probabilities in Solomonoff's theory of inductive inference == See also == Algorithmic complexity (disambiguation)
Wikipedia/Algorithmic_(disambiguation)
Snoop Dogg Presents Algorithm (or simply titled Algorithm) is a compilation album by American rapper Snoop Dogg. Some publications described the recording as a compilation album, but the rapper's official website describes it as a studio album. It was released on November 19, 2021, by Doggy Style Records and Def Jam Recordings, and features contributions from various artists including Method Man & Redman, Eric Bellinger, Usher, Blxst, Fabolous, and Dave East. == Background == Following his appointment as executive creative consultant at Def Jam Recordings in June, Snoop Dogg officially announced the album on October 26, 2021. He subsequently released the singles "Big Subwoofer" on October 20 and "Murder Music" on November 5. He appeared on The Tonight Show Starring Jimmy Fallon on September 27 to tease the album, and he also appeared on the podcast The Joe Rogan Experience on November 12 in promotion of the album. == Critical reception == Algorithm received positive reviews from critics. At Metacritic, which assigns a normalized rating out of 100 to reviews from critics, the album received an average score of 70, which indicates "generally favorable reviews", based on nine reviews. == Commercial performance == Algorithm debuted at number 166 on the US Billboard 200, becoming his 26th entry on the Billboard 200. The album debuted at number 8 on the US Compilation Albums chart, marking Snoop Dogg's first album on that chart. == Track listing == Track listing adapted from Genius. == Charts == == References ==
Wikipedia/Snoop_Dogg_Presents_Algorithm
A recommender system (RecSys), or a recommendation system (sometimes called a recommendation platform or engine, and colloquially referred to simply as "the algorithm"), is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer. Modern recommendation systems such as those used on large social media sites make extensive use of AI, machine learning and related techniques to learn the behavior and preferences of each user and categorize content to tailor their feed individually. Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read. Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders. These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts, collaborators, and financial services. A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes.
A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles to television. As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content. == Overview == Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (e.g., items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties. === Example === The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio. Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique. Pandora uses the properties of a song or artist (a subset of the 450 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. 
User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach. Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems. Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed). === Alternative implementations === Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data. In some cases, as in the Gonzalez v. Google Supreme Court case, parties may argue that search and recommendation algorithms are different technologies. Recommender systems have been the focus of several granted patents, and there are more than 50 software libraries that support the development of recommender systems including LensKit, RecBole, ReChorus and RecPack. == History == Elaine Rich created the first recommender system in 1979, called Grundy. She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University, and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS, and research groups led by Pattie Maes at MIT, Will Hill at Bellcore, and Paul Resnick, also at MIT, whose work with GroupLens was awarded the 2010 ACM Software Systems Award. Montaner provided the first overview of recommender systems from an intelligent agent perspective. Adomavicius provided a new, alternate overview of recommender systems. Herlocker provides an additional overview of evaluation techniques for recommender systems, and Beel et al. discussed the problems of offline evaluations. Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges. == Approaches == === Collaborative filtering === One approach to the design of recommender systems that has wide use is collaborative filtering. Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm, while that of model-based approaches is matrix factorization. A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself.
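The model-based matrix factorization approach mentioned above can be sketched with plain stochastic gradient descent: observed ratings are approximated by the dot product of per-user and per-item latent factor vectors. The rating matrix, learning rate and regularization constant below are invented for illustration; real systems use far larger data and tuned hyperparameters.

```python
import random

random.seed(7)

# Toy user-item rating matrix; 0 marks a missing rating (hypothetical data).
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
]
n_users, n_items, k = len(R), len(R[0]), 2

# Latent factor matrices, initialized randomly.
P = [[random.random() for _ in range(k)] for _ in range(n_users)]
Q = [[random.random() for _ in range(k)] for _ in range(n_items)]

lr, reg = 0.002, 0.02
for _ in range(5000):
    for u in range(n_users):
        for i in range(n_items):
            if R[u][i] == 0:
                continue  # only observed ratings contribute to the loss
            err = R[u][i] - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # gradient step with L2 penalty
                Q[i][f] += lr * (err * pu - reg * qi)

def predict(u, i):
    return sum(P[u][f] * Q[i][f] for f in range(k))
```

After training, predict(u, i) reconstructs the observed entries closely, while its values at the zero entries are genuinely inferred from the learned latent structure.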
Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, the k-nearest neighbor (k-NN) approach and the Pearson Correlation as first implemented by Allen. When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection. Examples of explicit data collection include the following: Asking a user to rate an item on a sliding scale. Asking a user to search. Asking a user to rank a collection of items from favorite to least favorite. Presenting two items to a user and asking him/her to choose the better one of them. Asking a user to create a list of items that he/she likes (see Rocchio classification or other similar techniques). Examples of implicit data collection include the following: Observing the items that a user views in an online store. Analyzing item/user viewing times. Keeping a record of the items that a user purchases online. Obtaining a list of items that a user has listened to or watched on his/her computer. Analyzing the user's social network and discovering similar likes and dislikes. Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity. Cold start: For a new user or item, there is not enough data to make accurate recommendations. Note: one commonly implemented solution to this problem is the multi-armed bandit algorithm. Scalability: There are millions of users and products in many of the environments in which these systems make recommendations. Thus, a large amount of computation power is often necessary to calculate recommendations. Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings. 
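The memory-based, user-based neighborhood method with Pearson correlation described above can be sketched as follows. The rating data and the simple mean-centred weighting scheme are illustrative assumptions, not a reference implementation.

```python
import math

# Hypothetical explicit ratings on a 1-5 scale.
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 4, "d": 5},
    "carol": {"a": 1, "b": 5, "d": 2},
}

def mean(user):
    r = ratings[user]
    return sum(r.values()) / len(r)

def pearson(u, v):
    """Pearson correlation over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    pairs = [(ratings[u][i], ratings[v][i]) for i in common]
    mu = sum(a for a, _ in pairs) / len(pairs)
    mv = sum(b for _, b in pairs) / len(pairs)
    num = sum((a - mu) * (b - mv) for a, b in pairs)
    den = math.sqrt(sum((a - mu) ** 2 for a, _ in pairs)
                    * sum((b - mv) ** 2 for _, b in pairs))
    return num / den if den else 0.0

def predict(user, item):
    """Mean-centred, similarity-weighted average over neighbours who rated the item."""
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        w = pearson(user, other)
        num += w * (ratings[other][item] - mean(other))
        den += abs(w)
    return mean(user) + num / den if den else mean(user)
```

For instance, predict("alice", "d") comes out just above 4.9: bob, who correlates positively with alice, rated d above his mean, and carol, who correlates negatively, rated d below hers, so both neighbours push the prediction upward.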
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system. Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends. Collaborative filtering is still used as part of hybrid systems. === Content-based filtering === Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences. These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features. In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research. To create a user profile, the system mostly focuses on two types of information: A model of the user's preference. A history of the user's interaction with the recommender system. Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. 
A widely used algorithm is the tf–idf representation (also called vector space representation). The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item. A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system. Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation/sentiment toward the item. Features extracted from user-generated reviews improve on item metadata because, like metadata, they reflect aspects of the item, while also capturing the aspects that users actually care about.
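Returning to the tf–idf item representation described above, the following toy sketch (hypothetical item names and keyword descriptions) builds a weighted vector per item, treats the vector of one liked item as the user profile, and ranks the remaining items by cosine similarity.

```python
import math
from collections import Counter

# Hypothetical keyword descriptions standing in for item features.
items = {
    "i1": "guitar rock concert live",
    "i2": "piano classical concert",
    "i3": "rock guitar amplifier",
}

def tfidf_vectors(docs):
    """Term frequency x inverse document frequency: one sparse vector per item."""
    n = len(docs)
    tokenized = {k: v.split() for k, v in docs.items()}
    df = Counter()                       # document frequency per term
    for toks in tokenized.values():
        df.update(set(toks))
    return {
        k: {t: (c / len(toks)) * math.log(n / df[t])
            for t, c in Counter(toks).items()}
        for k, toks in tokenized.items()
    }

def cosine(u, v):
    num = sum(w * v.get(t, 0.0) for t, w in u.items())
    den = (math.sqrt(sum(w * w for w in u.values()))
           * math.sqrt(sum(w * w for w in v.values())))
    return num / den if den else 0.0

vecs = tfidf_vectors(items)
profile = vecs["i1"]   # user profile: the vector of an item the user liked
scores = {k: cosine(profile, v) for k, v in vecs.items() if k != "i1"}
best = max(scores, key=scores.get)   # "i3" shares two weighted terms with the profile
```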
Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning. === Hybrid recommendations approaches === Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model. Several studies have empirically compared the performance of hybrid methods with the pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches. Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering). Some hybridization techniques include: Weighted: Combining the score of different recommendation components numerically. Switching: Choosing among recommendation components and applying the selected one. Mixed: Recommendations from different recommenders are presented together to give the recommendation.
Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones. Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique. == Technologies == === Session-based recommender systems === These recommender systems use the interactions of a user within a session to generate recommendations. Session-based recommender systems are used at YouTube and Amazon. These are particularly useful when a user's history (such as past clicks or purchases) is not available or not relevant in the current session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks, transformers, and other deep-learning-based approaches. === Reinforcement learning for recommender systems === The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment on which the agent, the recommendation system, acts in order to receive a reward, for instance a click or other engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional supervised learning techniques, which are less flexible, reinforcement learning recommendation techniques potentially allow models to be trained and optimized directly on metrics of engagement and user interest.
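The reward-driven training loop described above can be illustrated with a minimal epsilon-greedy bandit, a deliberately simple stand-in for the reinforcement-learning techniques mentioned (real systems use much richer state, policies, and reward signals; all names here are hypothetical):

```python
import random

def epsilon_greedy_recommender(items, get_reward, rounds=500, epsilon=0.1, seed=0):
    """Toy reward-driven recommender: explore a random item with probability
    epsilon, otherwise exploit the item with the best observed mean reward."""
    rng = random.Random(seed)
    counts = {i: 0 for i in items}
    totals = {i: 0.0 for i in items}

    def mean(i):
        return totals[i] / counts[i] if counts[i] else 0.0

    for _ in range(rounds):
        if rng.random() < epsilon or all(c == 0 for c in counts.values()):
            choice = rng.choice(items)          # explore
        else:
            choice = max(items, key=mean)       # exploit
        reward = get_reward(choice)             # e.g. 1.0 for a click, 0.0 otherwise
        counts[choice] += 1
        totals[choice] += reward
    return max(items, key=mean)
```

Here the reward callback plays the role of the user's engagement signal; the agent converges toward the item with the highest observed reward without ever seeing explicit rating labels.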
=== Multi-criteria recommender systems === Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS. See this chapter for an extended introduction. === Risk-aware recommender systems === The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm. === Mobile recommender systems === Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than the data recommender systems often have to deal with. It is heterogeneous, noisy, exhibits spatial and temporal autocorrelation, and has validation and generality problems.
There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy. Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available). One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city. This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits. === Generative recommenders === Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units), high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling models that scale to trillions of parameters and that handle user action histories orders of magnitude longer than before. By turning all of the system’s varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions.
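The central framing of generative recommenders, treating user actions as tokens and predicting the next one from past sequences, can be illustrated with a deliberately tiny count-based model. HSTU itself is a large self-attention architecture; this sketch, with hypothetical names, only shows the sequential-transduction framing, not the method:

```python
from collections import Counter, defaultdict

def train_bigram(action_sequences):
    """Count action-to-next-action transitions across user histories,
    treating each action as a 'token' in a sequence."""
    model = defaultdict(Counter)
    for seq in action_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, last_action):
    """Recommend the most frequent continuation of the last action token."""
    if last_action not in model:
        return None
    return model[last_action].most_common(1)[0][0]
```

A real generative recommender replaces these frequency counts with a learned sequence model over the full action history, but the interface is the same: given tokens so far, emit the next.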
Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations. == The Netflix Prize == One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules. The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.: Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods. Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members of the team that finished in second place founded Gravity R&D, a recommendation-engine company active in the RecSys community. 4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites. A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb).
As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets. This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010. == Evaluation == === Performance measures === Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations. The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. Information retrieval metrics such as precision and recall or discounted cumulative gain (DCG) are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered important aspects in evaluation. However, many of the classic evaluation measures are highly criticized. Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm on offline data will be imprecise. User studies are rather small scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best. In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
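The accuracy metrics named above can be computed directly; `rmse` is the error measure used in the Netflix Prize, and `precision_at_k` is a standard information-retrieval measure (function names are illustrative):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over paired rating predictions."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are in the relevant set."""
    return sum(1 for item in recommended[:k] if item in relevant) / k
```

In offline evaluation, `predicted` would come from the recommender and `actual` from held-out ratings; the Netflix Prize target was an RMSE 10% lower than that of the incumbent system.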
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies. The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, a recommender system may be assumed to be effective if it is able to recommend as many as possible of the articles contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers. For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests. A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms. Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction. This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module. Researchers have concluded that the results of offline evaluations should be viewed critically. === Beyond accuracy === Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of other factors that are also important. Diversity – Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.
Recommender persistence – In some situations, it is more effective to re-show recommendations, or let users re-rate items, than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully. Privacy – Recommender systems usually have to deal with privacy concerns because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. Much research has been conducted on ongoing privacy issues in this space. The Netflix Prize is particularly notable for the detailed personal information released in its dataset. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset. User demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper they show that elderly users tend to be more interested in recommendations than younger users. Robustness – When users can participate in the recommender system, the issue of fraud must be addressed. Serendipity – Serendipity is a measure of "how surprising the recommendations are". For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. "[Serendipity] serves two purposes: First, the chance that users lose interest because the choice set is too uniform decreases. 
Second, these items are needed for algorithms to learn and improve themselves". Trust – A recommender system is of little value for a user if the user does not trust the system. Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item. Labelling – User satisfaction with recommendations may be influenced by the labeling of the recommendations. For instance, in the cited study the click-through rate (CTR) for recommendations labeled as "Sponsored" was lower (CTR=5.93%) than the CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Recommendations with no label performed best (CTR=9.87%) in that study. === Reproducibility === Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some Machine Learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper that surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area. More recent work on benchmarking a set of the same methods came to qualitatively very different results, whereby neural methods were found to be among the best performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges (WSDM, RecSys Challenge).
Moreover, neural and deep learning methods are widely used in industry where they are extensively tested. The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently". Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions." As a consequence, much research about recommender systems can be considered not reproducible. Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used. Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation: "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
== Artificial intelligence applications in recommendation == Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions. The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods. These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions. Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing. These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections will introduce specific AI models utilized by a recommendation system by illustrating their theories and functionalities. === KNN-based collaborative filters === Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions. Specifically, it relies on external feedback such as star ratings, purchasing history and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements.
Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C." There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors. The ideas are as follows: Data Representation: Create an n-dimensional space where each axis represents a user's traits (ratings, purchases, etc.), and represent the user as a point in that space. Statistical Distance: 'Distance' measures how far apart users are in this space; see statistical distance for computational details. Identifying Neighbors: Based on the computed distances, find the k nearest neighbors of the user to whom we want to make recommendations. Forming Predictive Recommendations: The system analyzes the preferences of the k neighbors and makes recommendations based on that similarity. === Neural networks === An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. It comprises a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons. Similar to a human brain, these neurons will change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually a black-box model: unlike regular machine learning, where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs. ANNs are widely used in recommendation systems for their power to utilize various data. Other than feedback data, ANNs can incorporate non-feedback data which are too intricate for collaborative filtering to learn, and this unique structure allows ANNs to identify extra signals from non-feedback data to boost user experience.
Following are some examples: Time and Seasonality: the specific time, date, or season at which a user interacts with the platform. User Navigation Patterns: the sequence of pages visited, time spent on different parts of a website, mouse movements, etc. External Social Trends: information from external social media. ==== Two-Tower Model ==== The Two-Tower model is a neural architecture commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks. It consists of two neural networks: User Tower: Encodes user-specific features, such as interaction history or demographic data. Item Tower: Encodes item-specific features, such as metadata or content embeddings. The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item. This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines. === Natural language processing === Natural language processing (NLP) is a family of AI techniques that make natural human language accessible and analyzable to a machine. It is a fairly modern field spurred by the growing amount of textual information. For applications in recommendation systems, a common case is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
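At inference time, the Two-Tower retrieval step described in this section reduces to scoring a user embedding against pre-computed item embeddings with a similarity metric such as the dot product. A minimal sketch with toy two-dimensional embeddings (all names hypothetical; real systems use learned, high-dimensional embeddings and approximate nearest-neighbor search):

```python
def dot(u, v):
    """Dot-product similarity between two dense embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def retrieve(user_embedding, item_embeddings, top_n=2):
    """Score the user embedding against every pre-computed item embedding
    (output of the item tower) and return the top-n item ids."""
    ranked = sorted(item_embeddings,
                    key=lambda item: dot(user_embedding, item_embeddings[item]),
                    reverse=True)
    return ranked[:top_n]
```

Because the item embeddings are computed once by the item tower, only the cheap scoring step runs per request, which is what makes this architecture efficient for candidate retrieval at scale.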
== Specific applications == === Academic content discovery === An emerging market for content discovery platforms is academic content. Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research. Though traditional academic search tools such as Google Scholar or PubMed provide readily accessible databases of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors. Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input. Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations. === Decision-making === In contrast to an engagement-based ranking system employed by social media and other digital platforms, bridging-based ranking optimizes for content that is unifying instead of polarizing. Examples include Polis and Remesh, which have been used around the world to help find more consensus around specific political issues. Twitter has also used this approach for managing its community notes, which YouTube planned to pilot in 2024. Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm. === Television === As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.
With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location. == See also == == References == == Further reading == Books Kim Falk (2019), Practical Recommender Systems, Manning Publications, ISBN 9781617292705 Bharat Bhasker; K. Srikumar (2010). Recommender Systems in E-Commerce. CUP. ISBN 978-0-07-068067-8. Archived from the original on September 1, 2010. Jannach, Dietmar; Markus Zanker; Alexander Felfernig; Gerhard Friedrich (2010). Recommender Systems: An Introduction. CUP. ISBN 978-0-521-49336-9. Archived from the original on August 31, 2015. Seaver, Nick (2022). Computing Taste: Algorithms and the Makers of Music Recommendation. University of Chicago Press. Scientific articles Robert M. Bell; Jim Bennett; Yehuda Koren & Chris Volinsky (May 2009). "The Million Dollar Programming Prize". IEEE Spectrum. Archived from the original on May 11, 2009. Retrieved December 10, 2018. Prem Melville, Raymond J. Mooney, and Ramadass Nagarajan. (2002) Content-Boosted Collaborative Filtering for Improved Recommendations. Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), pp. 187–192, Edmonton, Canada, July 2002.
Wikipedia/Recommendation_algorithm
Algorithms is a monthly peer-reviewed open-access scientific journal of mathematics, covering design, analysis, and experiments on algorithms. The journal is published by MDPI and was established in 2008. The founding editor-in-chief was Kazuo Iwama (Kyoto University). From May 2014 to September 2019, the editor-in-chief was Henning Fernau (Universität Trier). The current editor-in-chief is Frank Werner (Otto-von-Guericke-Universität Magdeburg). == Abstracting and indexing == According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3. The journal is abstracted and indexed in: == See also == Journals with similar scope include: ACM Transactions on Algorithms Algorithmica Journal of Algorithms (Elsevier) == References == == External links == Official website
Wikipedia/Algorithms_(journal)
In the C++ Standard Library, the algorithms library provides various functions that perform algorithmic operations on containers and other sequences, represented by Iterators. The C++ standard provides some standard algorithms collected in the <algorithm> standard header. A handful of algorithms are also in the <numeric> header. All algorithms are in the std namespace. == Execution policies == C++17 provides the ability for many algorithms to optionally take an execution policy, which may allow implementations to execute the algorithm in parallel (i.e. by using threads or SIMD instructions). There are four different policy types, each indicating different semantics about the order in which element accesses are allowed to be observed relative to each other:
sequenced_policy, which indicates that the execution of the algorithm must happen on the thread which invokes the function, and that element accesses execute in sequence; it is equivalent to calling the function without an execution policy.
parallel_policy, which indicates that the execution of the algorithm may happen across multiple threads, however within each thread the order of element accesses is made in sequence (i.e. element accesses may not be done concurrently within a thread).
parallel_unsequenced_policy, which indicates that the execution of the algorithm may happen across multiple threads, and element accesses do not have to be performed in order even within the same thread.
unsequenced_policy, which indicates that the execution of the algorithm must happen on the thread which invokes the function, however the order of element accesses may be performed out of sequence.
It is up to the user to ensure that the operations performed by the function are thread safe when using policies which may execute across different threads. == Ranges == C++20 adds versions of the algorithms defined in the <algorithm> header which operate on ranges rather than pairs of iterators.
The ranges versions of algorithm functions are scoped within the ranges namespace. They extend the functionality of the basic algorithms by allowing iterator-sentinel pairs to be used instead of requiring that both iterators be of the same type and also allowing interoperability with the objects provided by the ranges header without requiring the user to manually extract the iterators. == Non-modifying sequence operations == === Predicate checking algorithms === Checks whether a given predicate evaluates to true for some number of elements in the range, or returns the number of elements that do all_of any_of none_of count count_if contains === Comparison algorithms === Compares two ranges for some property mismatch equal lexicographical_compare contains_subrange starts_with ends_with is_permutation === Searching algorithms === Finds the first or last position in a range where the subsequent elements satisfy some predicate find find_if find_if_not find_last find_last_if find_last_if_not find_end find_first_of adjacent_find search search_n partition_point === Binary search algorithms === Provides binary search operations on ranges. It is undefined behaviour to use these on ranges which are not sorted.
binary_search upper_bound lower_bound equal_range === Maximum/Minimum search algorithms === Finds the maximum or minimum element in a range, as defined by some comparison predicate max_element min_element minmax_element === Property checking algorithms === Checks if an entire range satisfies some property is_partitioned is_sorted is_heap == Modifying sequence operations == === Copying algorithms === Transfers the elements from one range into another copy copy_if copy_backward move move_backward reverse_copy rotate_copy unique_copy sample === Partitioning algorithms === Moves the elements of a range in-place so the range is partitioned with respect to some property unique remove remove_if partition partition_copy stable_partition === Sorting algorithms === Sorts or partially sorts a range in-place sort partial_sort stable_sort nth_element === Populating algorithms === Populates a given range without reading the values contained within fill generate iota === Transforming algorithms === Transforms each element of a given range in-place for_each transform replace replace_if clamp === Reordering algorithms === Changes the order of elements within a range in-place shuffle shift_left shift_right reverse rotate === Heap algorithms === Provides algorithms to create, insert, and remove elements from a max heap make_heap push_heap pop_heap sort_heap == References == == External links == C++ reference for standard algorithms
Wikipedia/Algorithm_(C++)
The Algorithm is the eighth studio album by American rock band Filter. It was released on August 25, 2023. Originally conceived in 2018 as Rebus, a follow-up to the band's first album, Short Bus (1995), the project changed course after the collapse of the PledgeMusic crowdfunding platform. Despite this, some material from those sessions still appears on the final release, while two other tracks were released as singles in 2020. The Algorithm is the band's first album in seven years, following Crazy Eyes (2016). == Background and recording == After releasing their seventh studio album, Crazy Eyes (2016), and touring in support of it in 2017, frontman Richard Patrick turned to making new music in 2018. The start of the project was spurred by a particular event: Patrick was attending a Veruca Salt concert that was also attended by original Filter member Brian Liesegang, who had left the band shortly after the release of their first album, the platinum-selling Short Bus (1995), due to creative differences with Patrick. Knowing both were there, Veruca Salt member Louise Post stopped mid-concert to call both out, stating that they needed to "bury any bullshit, forget the crap, and get their shit together" with regard to making new music together. The two took the message to heart and decided to work on a new album together. By October 2018, they had announced the concept; the two decided to call the album Rebus, an allusion to the only Filter album the two had worked on together, and centered the album's conception around the idea of recording a follow-up to that album, but with more modern sounds and concepts. The band planned to fund the album's creation through the crowdfunding platform PledgeMusic.
However, the band went quiet about the project's progress through mid-2019. In July 2019, Patrick announced that the collaboration with Liesegang had been cancelled due to the bankruptcy of the PledgeMusic company and "a variety of other reasons". He announced that the scope of the album would be changing: Liesegang would not be working on the album moving forward, and it had been retitled They've Got Us Right Where They Want Us, at Each Other's Throats. Despite this, Patrick noted that he still hoped to include three of the songs that he had written with Liesegang on the album, titled "Murica", "Thoughts and Prayers", and "(Command-Z) High as a Muv Fucka". Patrick once again went silent on the project until the release of the single "Thoughts and Prayers" in June 2020, when he announced it had been retitled again, to Murica, and that it was scheduled for release by the end of 2020. On October 29, the title track was released as a single along with its music video and album cover art. The video depicts Patrick as a far-right Republican Party supporter, causing tension among the band's fan base. In 2022, Patrick announced in interviews on his Facebook page that he had changed the name back to They've Got Us Right Where They Want Us, at Each Other's Throats, that the album was now scheduled for a 2023 release on Golden Robot Records, and that the two singles released in 2020 would not be on the album. In October 2022, the single "For the Beaten" was released, which Patrick now described as the album's first single. May 5, 2023 saw the release of the second single, "Face Down", along with a reveal of the album's title.
== Track listing == == Personnel == Filter Richard Patrick – lead vocals, guitars, bass, programming Jonny Radtke – guitars, backing vocals Bobby Miller – bass, backing vocals Elias Mallin – drums Additional personnel Zach Munowitz – guitars on "For the Beaten", "Up Against the Wall", and "Say It Again" Sam Tinnesz – guitars on "Obliteration" and "Burn Out the Sun" Mark Jackson – guitars on "Obliteration" Brian Liesegang – programming on "Command Z" Ray Luzier – drums on "Summer Child" Seth Mosley – production on "Threshing Floor" Brian Virtue – mixing, production Howie Weinberg – mastering == Charts == == References ==
Wikipedia/The_Algorithm_(Filter_album)
In Einstein's theory of general relativity, the Schwarzschild metric (also known as the Schwarzschild solution) is an exact solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. It was found by Karl Schwarzschild in 1916. According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum (non-rotating). A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass. The Schwarzschild black hole is characterized by a surrounding spherical boundary, called the event horizon, which is situated at the Schwarzschild radius ( r s {\displaystyle r_{\text{s}}} ), often called the radius of a black hole. The boundary is not a physical surface, and a person who fell through the event horizon (before being torn apart by tidal forces) would not notice any physical surface at that position; it is a mathematical surface which is significant in determining the black hole's properties. Any non-rotating and non-charged mass that is smaller than its Schwarzschild radius forms a black hole. The solution of the Einstein field equations is valid for any mass M, so in principle (within the theory of general relativity) a Schwarzschild black hole of any mass could exist if conditions became sufficiently favorable to allow for its formation. 
In the vicinity of a Schwarzschild black hole, space curves so much that even light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. == Formulation == The Schwarzschild metric is a spherically symmetric Lorentzian metric (here, with signature convention (+, -, -, -)), defined on (a subset of) R × ( E 3 − O ) ≅ R × ( 0 , ∞ ) × S 2 {\displaystyle \mathbb {R} \times \left(E^{3}-O\right)\cong \mathbb {R} \times (0,\infty )\times S^{2}} where E 3 {\displaystyle E^{3}} is 3 dimensional Euclidean space, and S 2 ⊂ E 3 {\displaystyle S^{2}\subset E^{3}} is the two sphere. The rotation group S O ( 3 ) = S O ( E 3 ) {\displaystyle \mathrm {SO} (3)=\mathrm {SO} (E^{3})} acts on the E 3 − O {\displaystyle E^{3}-O} or S 2 {\displaystyle S^{2}} factor as rotations around the center O {\displaystyle O} , while leaving the first R {\displaystyle \mathbb {R} } factor unchanged. The Schwarzschild metric is a solution of Einstein's field equations in empty space, meaning that it is valid only outside the gravitating body. That is, for a spherical body of radius R {\displaystyle R} the solution is valid for r > R {\displaystyle r>R} . To describe the gravitational field both inside and outside the gravitating body the Schwarzschild solution must be matched with some suitable interior solution at ⁠ r = R {\displaystyle r=R} ⁠, such as the interior Schwarzschild metric. In Schwarzschild coordinates ( t , r , θ , ϕ ) {\displaystyle (t,r,\theta ,\phi )} the Schwarzschild metric (or equivalently, the line element for proper time) has the form d s 2 = c 2 d τ 2 = ( 1 − r s r ) c 2 d t 2 − ( 1 − r s r ) − 1 d r 2 − r 2 d Ω 2 , {\displaystyle {ds}^{2}=c^{2}\,{d\tau }^{2}=\left(1-{\frac {r_{\mathrm {s} }}{r}}\right)c^{2}\,dt^{2}-\left(1-{\frac {r_{\mathrm {s} }}{r}}\right)^{-1}\,dr^{2}-r^{2}{d\Omega }^{2},} where d Ω 2 {\displaystyle {d\Omega }^{2}} is the metric on the two sphere, i.e. 
⁠ d Ω 2 = ( d θ 2 + sin 2 ⁡ θ d ϕ 2 ) {\displaystyle {d\Omega }^{2}=\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)} ⁠. Furthermore, d τ 2 {\displaystyle d\tau ^{2}} is positive for timelike curves, in which case τ {\displaystyle \tau } is the proper time (time measured by a clock moving along the same world line with a test particle), c {\displaystyle c} is the speed of light, t {\displaystyle t} is, for ⁠ r > r s {\displaystyle r>r_{\text{s}}} ⁠, the time coordinate (measured by a clock located infinitely far from the massive body and stationary with respect to it), r {\displaystyle r} is, for ⁠ r > r s {\displaystyle r>r_{\text{s}}} ⁠, the radial coordinate (measured as the circumference, divided by 2π, of a sphere centered around the massive body), Ω {\displaystyle \Omega } is a point on the two sphere ⁠ S 2 {\displaystyle S^{2}} ⁠, θ {\displaystyle \theta } is the colatitude of Ω {\displaystyle \Omega } (angle from north, in units of radians) defined after arbitrarily choosing a z-axis, ϕ {\displaystyle \phi } is the longitude of Ω {\displaystyle \Omega } (also in radians) around the chosen z-axis, and r s {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the massive body, a scale factor which is related to its mass M {\displaystyle M} by ⁠ r s = 2 G M / c 2 {\displaystyle r_{\text{s}}={2GM}/{c^{2}}} ⁠, where G {\displaystyle G} is the gravitational constant. The Schwarzschild metric has a singularity for r = 0, which is an intrinsic curvature singularity. It also seems to have a singularity on the event horizon r = rs. Depending on the point of view, the metric is therefore defined only on the exterior region r > r s {\displaystyle r>r_{\text{s}}} , only on the interior region r < r s {\displaystyle r<r_{\text{s}}} or their disjoint union. However, the metric is actually non-singular across the event horizon, as one sees in suitable coordinates (see below). 
For ⁠ r ≫ r s {\displaystyle r\gg r_{\text{s}}} ⁠, the Schwarzschild metric is asymptotic to the standard Lorentz metric on Minkowski space. For almost all astrophysical objects, the ratio r s R {\displaystyle {\frac {r_{\text{s}}}{R}}} is extremely small. For example, the Schwarzschild radius r s ( Earth ) {\displaystyle r_{\text{s}}^{({\text{Earth}})}} of the Earth is roughly 8.9 mm, while the Sun, which is 3.3 × 10⁵ times as massive, has a Schwarzschild radius r s ( Sun ) {\displaystyle r_{\text{s}}^{({\text{Sun}})}} of approximately 3.0 km. The ratio becomes large only in close proximity to black holes and other ultra-dense objects such as neutron stars. The radial coordinate turns out to have physical significance as the "proper distance between two events that occur simultaneously relative to the radially moving geodesic clocks, the two events lying on the same radial coordinate line". The Schwarzschild solution is analogous to a classical Newtonian theory of gravity that corresponds to the gravitational field around a point particle. Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. == History == The Schwarzschild solution is named in honour of Karl Schwarzschild, who found the exact solution in 1915 and published it in January 1916, a little more than a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution. Schwarzschild died shortly after his paper was published, as a result of a disease (thought to be pemphigus) he developed while serving in the German army during World War I. Johannes Droste in 1916 independently produced the same solution as Schwarzschild, using a simpler, more direct derivation. In the early years of general relativity there was a lot of confusion about the nature of the singularities found in the Schwarzschild and other solutions of the Einstein field equations.
In Schwarzschild's original paper, he put what we now call the event horizon at the origin of his coordinate system. In this paper he also introduced what is now known as the Schwarzschild radial coordinate (r in the equations above), as an auxiliary variable. In his equations, Schwarzschild was using a different radial coordinate that was zero at the Schwarzschild radius. A more complete analysis of the singularity structure was given by David Hilbert in the following year, identifying the singularities both at r = 0 and r = rs. Although there was general consensus that the singularity at r = 0 was a 'genuine' physical singularity, the nature of the singularity at r = rs remained unclear. In 1921, Paul Painlevé and in 1922 Allvar Gullstrand independently produced a metric, a spherically symmetric solution of Einstein's equations, which we now know is a coordinate transformation of the Schwarzschild metric, Gullstrand–Painlevé coordinates, in which there was no singularity at r = rs. They, however, did not recognize that their solutions were just coordinate transforms, and in fact used their solution to argue that Einstein's theory was wrong. In 1924 Arthur Eddington produced the first coordinate transformation (Eddington–Finkelstein coordinates) that showed that the singularity at r = rs was a coordinate artifact, although he also seems to have been unaware of the significance of this discovery. Later, in 1932, Georges Lemaître gave a different coordinate transformation (Lemaître coordinates) to the same effect and was the first to recognize that this implied that the singularity at r = rs was not physical. In 1939 Howard Robertson showed that a free-falling observer descending in the Schwarzschild metric would cross the r = rs singularity in a finite amount of proper time, even though this would take an infinite amount of time in terms of coordinate time t.
In 1950, John Synge produced a paper that showed the maximal analytic extension of the Schwarzschild metric, again showing that the singularity at r = rs was a coordinate artifact and that it represented two horizons. A similar result was later rediscovered by George Szekeres, and independently by Martin Kruskal. The new coordinates, nowadays known as Kruskal–Szekeres coordinates, were much simpler than Synge's, but both provided a single set of coordinates that covered the entire spacetime. However, perhaps due to the obscurity of the journals in which the papers of Lemaître and Synge were published, their conclusions went unnoticed, with many of the major players in the field, including Einstein, believing that the singularity at the Schwarzschild radius was physical. Synge's later derivation of the Kruskal–Szekeres metric solution, which was motivated by a desire to avoid "using 'bad' [Schwarzschild] coordinates to obtain 'good' [Kruskal–Szekeres] coordinates", has been generally under-appreciated in the literature, but was adopted by Chandrasekhar in his black hole monograph. Real progress was made in the 1960s when the mathematically rigorous formulation cast in terms of differential geometry entered the field of general relativity, allowing more exact definitions of what it means for a Lorentzian manifold to be singular. This led to the definitive identification of the r = rs singularity in the Schwarzschild metric as an event horizon, i.e., a hypersurface in spacetime that can be crossed in only one direction. == Singularities and black holes == The Schwarzschild solution appears to have singularities at r = 0 and r = rs; some of the metric components "blow up" (entail division by zero or multiplication by infinity) at these radii. Since the Schwarzschild metric is expected to be valid only for those radii larger than the radius R of the gravitating body, there is no problem as long as R > rs. For ordinary stars and planets this is always the case.
For example, the radius of the Sun is approximately 700,000 km, while its Schwarzschild radius is only 3 km. The singularity at r = rs divides the Schwarzschild coordinates into two disconnected patches. The exterior Schwarzschild solution with r > rs is the one that is related to the gravitational fields of stars and planets. The interior Schwarzschild solution with 0 ≤ r < rs, which contains the singularity at r = 0, is completely separated from the outer patch by the singularity at r = rs. The Schwarzschild coordinates therefore give no physical connection between the two patches, which may be viewed as separate solutions. The singularity at r = rs is an illusion, however; it is an instance of what is called a coordinate singularity. As the name implies, the singularity arises from a bad choice of coordinates or coordinate conditions. When changing to a different coordinate system (for example Lemaître coordinates, Eddington–Finkelstein coordinates, Kruskal–Szekeres coordinates, Novikov coordinates, or Gullstrand–Painlevé coordinates) the metric becomes regular at r = rs, and the external patch can be extended to values of r smaller than rs. Using a different coordinate transformation one can then relate the extended external patch to the inner patch. The case r = 0 is different, however. If one asks that the solution be valid for all r one runs into a true physical singularity, or gravitational singularity, at the origin. To see that this is a true singularity one must look at quantities that are independent of the choice of coordinates. One such important quantity is the Kretschmann invariant, which is given by R α β γ δ R α β γ δ = 12 r s 2 r 6 = 48 G 2 M 2 c 4 r 6 . {\displaystyle R^{\alpha \beta \gamma \delta }R_{\alpha \beta \gamma \delta }={\frac {12r_{\mathrm {s} }^{2}}{r^{6}}}={\frac {48G^{2}M^{2}}{c^{4}r^{6}}}\,.} At r = 0 the curvature becomes infinite, indicating the presence of a singularity.
At this point the metric cannot be extended in a smooth manner (the Kretschmann invariant involves second derivatives of the metric); spacetime itself is then no longer well-defined. Furthermore, Sbierski showed that the metric cannot be extended even in a continuous manner. For a long time it was thought that such a solution was non-physical. However, a greater understanding of general relativity led to the realization that such singularities were a generic feature of the theory and not just an exotic special case. The Schwarzschild solution, taken to be valid for all r > 0, is called a Schwarzschild black hole. It is a perfectly valid solution of the Einstein field equations, although (like other black holes) it has rather bizarre properties. For r < rs the Schwarzschild radial coordinate r becomes timelike and the time coordinate t becomes spacelike. A curve at constant r is no longer a possible worldline of a particle or observer, not even if a force is exerted to try to keep it there; this occurs because spacetime has been curved so much that the direction of cause and effect (the particle's future light cone) points into the singularity. The surface r = rs demarcates what is called the event horizon of the black hole. It represents the point past which light can no longer escape the gravitational field. Any physical object whose radius R becomes less than or equal to the Schwarzschild radius has undergone gravitational collapse and become a black hole. == Alternative coordinates == The Schwarzschild solution can be expressed in a range of different choices of coordinates besides the Schwarzschild coordinates used above. Different choices tend to highlight different features of the solution. The table below shows some popular choices. In the table above, some shorthand has been introduced for brevity. The speed of light c has been set to one.
The notation g Ω = d θ 2 + sin 2 ⁡ θ d φ 2 {\displaystyle g_{\Omega }=d\theta ^{2}+\sin ^{2}\theta \,d\varphi ^{2}} is used for the metric of a unit radius 2-dimensional sphere. Moreover, in each entry R and T denote alternative choices of radial and time coordinate for the particular coordinates. Note, the R or T may vary from entry to entry. The Kruskal–Szekeres coordinates have the form to which the Belinski–Zakharov transform can be applied. This implies that the Schwarzschild black hole is a form of gravitational soliton. == Flamm's paraboloid == The spatial curvature of the Schwarzschild solution for r > rs can be visualized as the graphic shows. Consider a constant time equatorial slice H through the Schwarzschild solution by fixing θ = ⁠π/2⁠, t = constant, and letting the remaining Schwarzschild coordinates (r, φ) vary. Imagine now that there is an additional Euclidean dimension w, which has no physical reality (it is not part of spacetime). Then replace the (r, φ) plane with a surface dimpled in the w direction according to the equation (Flamm's paraboloid) w = 2 r s ( r − r s ) . {\displaystyle w=2{\sqrt {r_{\text{s}}\left(r-r_{\text{s}}\right)}}.} This surface has the property that distances measured within it match distances in the Schwarzschild metric, because with the definition of w above, d w 2 + d r 2 + r 2 d φ 2 = d r 2 1 − r s r + r 2 d φ 2 = − d s 2 . {\displaystyle dw^{2}+dr^{2}+r^{2}\,d\varphi ^{2}={\frac {dr^{2}}{1-{\frac {r_{\text{s}}}{r}}}}+r^{2}\,d\varphi ^{2}=-ds^{2}.} Thus, Flamm's paraboloid is useful for visualizing the spatial curvature of the Schwarzschild metric. It should not, however, be confused with a gravity well. No ordinary (massive or massless) particle can have a worldline lying on the paraboloid, since all distances on it are spacelike (this is a cross-section at one moment of time, so any particle moving on it would have an infinite velocity). 
A tachyon could have a spacelike worldline that lies entirely on a single paraboloid. However, even in that case its geodesic path is not the trajectory one gets through a "rubber sheet" analogy of gravitational well: in particular, if the dimple is drawn pointing upward rather than downward, the tachyon's geodesic path still curves toward the central mass, not away. See the gravity well article for more information. Flamm's paraboloid may be derived as follows. The Euclidean metric in the cylindrical coordinates (r, φ, w) is written − d s 2 = d w 2 + d r 2 + r 2 d φ 2 . {\displaystyle -ds^{2}=dw^{2}+dr^{2}+r^{2}\,d\varphi ^{2}.} Letting the surface be described by the function w = w(r), the Euclidean metric can be written as − d s 2 = ( 1 + ( d w d r ) 2 ) d r 2 + r 2 d φ 2 . {\displaystyle -ds^{2}=\left(1+\left({\frac {dw}{dr}}\right)^{2}\right)\,dr^{2}+r^{2}\,d\varphi ^{2}.} Comparing this with the Schwarzschild metric in the equatorial plane (θ = π/2) at a fixed time (t = constant, dt = 0), − d s 2 = ( 1 − r s r ) − 1 d r 2 + r 2 d φ 2 , {\displaystyle -ds^{2}=\left(1-{\frac {r_{\text{s}}}{r}}\right)^{-1}\,dr^{2}+r^{2}\,d\varphi ^{2},} yields an integral expression for w(r): w ( r ) = ∫ d r r r s − 1 = 2 r s r r s − 1 + constant , {\displaystyle w(r)=\int {\frac {dr}{\sqrt {{\frac {r}{r_{\text{s}}}}-1}}}=2r_{\text{s}}{\sqrt {{\frac {r}{r_{\text{s}}}}-1}}+{\text{constant}},} whose solution is Flamm's paraboloid. == Orbital motion == A particle orbiting in the Schwarzschild metric can have a stable circular orbit with r > 3rs. Circular orbits with r between 1.5rs and 3rs are unstable, and no circular orbits exist for r < 1.5rs. The circular orbit of minimum radius 1.5rs corresponds to an orbital velocity approaching the speed of light. It is possible for a particle to have a constant value of r between rs and 1.5rs, but only if some force acts to keep it there. 
Noncircular orbits, such as Mercury's, dwell longer at small radii than would be expected in Newtonian gravity. This can be seen as a less extreme version of the more dramatic case in which a particle passes through the event horizon and dwells inside it forever. Intermediate between the case of Mercury and the case of an object falling past the event horizon, there are exotic possibilities such as knife-edge orbits, in which the satellite can be made to execute an arbitrarily large number of nearly circular orbits, after which it flies back outward. == Symmetries == The isometry group of the Schwarzschild metric is ⁠ R × O ( 3 ) × { ± 1 } {\displaystyle \mathbb {R} \times \mathrm {O} (3)\times \{\pm 1\}} ⁠, where O ( 3 ) {\displaystyle \mathrm {O} (3)} is the orthogonal group of rotations and reflections in three dimensions, R {\displaystyle \mathbb {R} } comprises the time translations, and { ± 1 } {\displaystyle \{\pm 1\}} is the group generated by time reversal. This is thus the subgroup of the ten-dimensional Poincaré group which takes the time axis (trajectory of the star) to itself. It omits the spatial translations (three dimensions) and boosts (three dimensions). It retains the time translations (one dimension) and rotations (three dimensions). Thus it has four dimensions. Like the Poincaré group, it has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time reversed and spatially inverted. == Curvatures == The Ricci curvature scalar and the Ricci curvature tensor are both zero.
Non-zero components of the Riemann curvature tensor are given by − R t r t r = 2 R θ r θ r = 2 R ϕ r ϕ r = r s r 2 ( r s − r ) , {\displaystyle -R^{t}{}_{rtr}=2R^{\theta }{}_{r\theta r}=2R^{\phi }{}_{r\phi r}={\frac {r_{\text{s}}}{r^{2}(r_{\text{s}}-r)}},} 2 R t θ t θ = 2 R r θ r θ = − R ϕ θ ϕ θ = − r s r , {\displaystyle 2R^{t}{}_{\theta t\theta }=2R^{r}{}_{\theta r\theta }=-R^{\phi }{}_{\theta \phi \theta }=-{\frac {r_{\text{s}}}{r}},} 2 R t ϕ t ϕ = 2 R r ϕ r ϕ = − R θ ϕ θ ϕ = − r s sin 2 ⁡ ( θ ) r , {\displaystyle 2R^{t}{}_{\phi t\phi }=2R^{r}{}_{\phi r\phi }=-R^{\theta }{}_{\phi \theta \phi }=-{\frac {r_{\text{s}}\sin ^{2}(\theta )}{r}},} R r t r t = − 2 R θ t θ t = − 2 R ϕ t ϕ t = c 2 r s ( r s − r ) r 4 , {\displaystyle R^{r}{}_{trt}=-2R^{\theta }{}_{t\theta t}=-2R^{\phi }{}_{t\phi t}=c^{2}{\frac {r_{\text{s}}(r_{\text{s}}-r)}{r^{4}}},} from which one can see that R γ α γ β = 0 {\displaystyle R^{\gamma }{}_{\alpha \gamma \beta }=0} . Six of these formulas are Eq. 5.13 in Carroll and imply the other 6 by R α β γ δ = g α κ g β λ R λ κ δ γ {\displaystyle R^{\alpha }{}_{\beta \gamma \delta }=g^{\alpha \kappa }g_{\beta \lambda }R^{\lambda }{}_{\kappa \delta \gamma }} . Components which are obtainable by other symmetries of the Riemann tensor are not displayed. To understand the physical meaning of these quantities, it is useful to express the curvature tensor in an orthonormal basis. In an orthonormal basis of an observer the non-zero components in geometric units are R r ^ t ^ r ^ t ^ = − R θ ^ ϕ ^ θ ^ ϕ ^ = − r s r 3 , {\displaystyle R^{\hat {r}}{}_{{\hat {t}}{\hat {r}}{\hat {t}}}=-R^{\hat {\theta }}{}_{{\hat {\phi }}{\hat {\theta }}{\hat {\phi }}}=-{\frac {r_{\text{s}}}{r^{3}}},} R θ ^ t ^ θ ^ t ^ = R ϕ ^ t ^ ϕ ^ t ^ = − R r ^ θ ^ r ^ θ ^ = − R r ^ ϕ ^ r ^ ϕ ^ = r s 2 r 3 . 
{\displaystyle R^{\hat {\theta }}{}_{{\hat {t}}{\hat {\theta }}{\hat {t}}}=R^{\hat {\phi }}{}_{{\hat {t}}{\hat {\phi }}{\hat {t}}}=-R^{\hat {r}}{}_{{\hat {\theta }}{\hat {r}}{\hat {\theta }}}=-R^{\hat {r}}{}_{{\hat {\phi }}{\hat {r}}{\hat {\phi }}}={\frac {r_{\text{s}}}{2r^{3}}}.} Again, components which are obtainable by the symmetries of the Riemann tensor are not displayed. These results are invariant to any Lorentz boost, thus the components do not change for non-static observers. The geodesic deviation equation shows that the tidal acceleration between two observers separated by ξ j ^ {\displaystyle \xi ^{\hat {j}}} is D 2 ξ j ^ / D τ 2 = − R j ^ t ^ k ^ t ^ ξ k ^ {\displaystyle D^{2}\xi ^{\hat {j}}/D\tau ^{2}=-R^{\hat {j}}{}_{{\hat {t}}{\hat {k}}{\hat {t}}}\xi ^{\hat {k}}} , so a body of length L {\displaystyle L} is stretched in the radial direction by an apparent acceleration ( r s / r 3 ) c 2 L {\displaystyle (r_{\text{s}}/r^{3})c^{2}L} and squeezed in the perpendicular directions by − ( r s / ( 2 r 3 ) ) c 2 L {\displaystyle -(r_{\text{s}}/(2r^{3}))c^{2}L} . == See also == Derivation of the Schwarzschild solution Reissner–Nordström metric (charged, non-rotating solution) Kerr metric (uncharged, rotating solution) Kerr–Newman metric (charged, rotating solution) Black hole, a general review Schwarzschild coordinates Kruskal–Szekeres coordinates Eddington–Finkelstein coordinates Gullstrand–Painlevé coordinates Lemaître coordinates (Schwarzschild solution in synchronous coordinates) Frame fields in general relativity (Lemaître observers in the Schwarzschild vacuum) Tolman–Oppenheimer–Volkoff equation (metric and pressure equations of a static and spherically symmetric body of isotropic material) Planck length == Notes == == References == Schwarzschild, K. (1916). "Über das Gravitationsfeld eines Massenpunktes nach der Einsteinschen Theorie". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften. 7: 189–196. Bibcode:1916AbhKP1916..189S. 
Text of the original paper, in Wikisource Translation: Antoci, S.; Loinger, A. (1999). "On the gravitational field of a mass point according to Einstein's theory". arXiv:physics/9905030. A commentary on the paper, giving a simpler derivation: Bel, L. (2007). "Über das Gravitationsfeld eines Massenpunktesnach der Einsteinschen Theorie". arXiv:0709.2257 [gr-qc]. Schwarzschild, K. (1916). "Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften. 1: 424. Text of the original paper, in Wikisource Translation: Antoci, S. (1999). "On the gravitational field of a sphere of incompressible fluid according to Einstein's theory". arXiv:physics/9912033. Flamm, L. (1916). "Beiträge zur Einstein'schen Gravitationstheorie". Physikalische Zeitschrift. 17: 448. Adler, R.; Bazin, M.; Schiffer, M. (1975). Introduction to General Relativity (2nd ed.). McGraw-Hill. Chapter 6. ISBN 0-07-000423-4. Landau, L. D.; Lifshitz, E. M. (1975). The Classical Theory of Fields. Course of Theoretical Physics. Vol. 2 (4th Revised English ed.). Pergamon Press. Chapter 12. ISBN 0-08-025072-6. Misner, C. W.; Thorne, K. S.; Wheeler, J. A. (1970). Gravitation. W.H. Freeman. Chapters 31 and 32. ISBN 0-7167-0344-0. Weinberg, S. (1972). Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. John Wiley & Sons. Chapter 8. ISBN 0-471-92567-5. Taylor, E. F.; Wheeler, J. A. (2000). Exploring Black Holes: Introduction to General Relativity. Addison-Wesley. ISBN 0-201-38423-X. Heinzle, J. M.; Steinbauer, R. (2002). "Remarks on the distributional Schwarzschild geometry". Journal of Mathematical Physics. 43 (3): 1493–1508. arXiv:gr-qc/0112047. Bibcode:2002JMP....43.1493H. doi:10.1063/1.1448684. S2CID 119677857. Sanchez, Norma (15 August 1978). "Absorption and emission spectra of a Schwarzschild black hole". Physical Review D. 18 (4): 1030–1036. doi:10.1103/PhysRevD.18.1030. Persides, S. 
(1 August 1973). "On the radial wave equation in Schwarzschild's space-time". Journal of Mathematical Physics. 14 (8): 1017–1021. doi:10.1063/1.1666431. ISSN 0022-2488.
Wikipedia/Schwarzschild_solution
In ring theory, a branch of mathematics, an idempotent element or simply idempotent of a ring is an element a such that a² = a. That is, the element is idempotent under the ring's multiplication. Inductively then, one can also conclude that a = a² = a³ = a⁴ = ... = aⁿ for any positive integer n. For example, an idempotent element of a matrix ring is precisely an idempotent matrix. For general rings, elements idempotent under multiplication are involved in decompositions of modules, and connected to homological properties of the ring. In Boolean algebra, the main objects of study are rings in which all elements are idempotent under both addition and multiplication. == Examples == === Quotients of Z === One may consider the ring of integers modulo n, where n is square-free. By the Chinese remainder theorem, this ring factors into the product of rings of integers modulo p, where p is prime. Now each of these factors is a field, so it is clear that the factors' only idempotents will be 0 and 1. That is, each factor has two idempotents. So if there are m factors, there will be 2ᵐ idempotents. We can check this for the integers mod 6, R = Z / 6Z. Since 6 has two prime factors (2 and 3) it should have 2² = 4 idempotents. 0² = 0 ≡ 0 (mod 6) 1² = 1 ≡ 1 (mod 6) 2² = 4 ≡ 4 (mod 6) 3² = 9 ≡ 3 (mod 6) 4² = 16 ≡ 4 (mod 6) 5² = 25 ≡ 1 (mod 6) From these computations, 0, 1, 3, and 4 are idempotents of this ring, while 2 and 5 are not. This also demonstrates the decomposition properties described below: because 3 + 4 ≡ 1 (mod 6), there is a ring decomposition 3Z / 6Z ⊕ 4Z / 6Z. In 3Z / 6Z the multiplicative identity is 3 + 6Z and in 4Z / 6Z the multiplicative identity is 4 + 6Z. === Quotient of polynomial ring === Given a ring R and an element f ∈ R such that f² ≠ 0, the quotient ring R / (f² − f) has the idempotent f. For example, this could be applied to x ∈ Z[x], or any polynomial f ∈ k[x₁, ..., xₙ].
=== Idempotents in the ring of split-quaternions === There is a circle of idempotents in the ring of split-quaternions. Split quaternions have the structure of a real algebra, so elements can be written w + xi + yj + zk over a basis {1, i, j, k}, with j² = k² = +1. For any θ, s = j cos ⁡ θ + k sin ⁡ θ {\displaystyle s=j\cos \theta +k\sin \theta } satisfies s² = +1 since j and k anticommute. Now ( (1 + s)/2 )² = (1 + 2s + s²)/4 = (1 + s)/2 , {\displaystyle ({\frac {1+s}{2}})^{2}={\frac {1+2s+s^{2}}{4}}={\frac {1+s}{2}},} the idempotent property. The element s is called a hyperbolic unit and so far, the i-coordinate has been taken as zero. When this coordinate is non-zero, then there is a hyperboloid of one sheet of hyperbolic units in split-quaternions. The same equality shows the idempotent property of (1 + s)/2 {\displaystyle {\frac {1+s}{2}}} where s is on the hyperboloid. == Types of ring idempotents == A partial list of important types of idempotents includes: Two idempotents a and b are called orthogonal if ab = ba = 0. If a is idempotent in the ring R (with unity), then so is b = 1 − a; moreover, a and b are orthogonal. An idempotent a in R is called a central idempotent if ax = xa for all x in R, that is, if a is in the center of R. A trivial idempotent refers to either of the elements 0 and 1, which are always idempotent. A primitive idempotent of a ring R is a nonzero idempotent a such that aR is indecomposable as a right R-module; that is, such that aR is not a direct sum of two nonzero submodules. Equivalently, a is a primitive idempotent if it cannot be written as a = e + f, where e and f are nonzero orthogonal idempotents in R. A local idempotent is an idempotent a such that aRa is a local ring. This implies that aR is directly indecomposable, so local idempotents are also primitive. A right irreducible idempotent is an idempotent a for which aR is a simple module.
By Schur's lemma, EndR(aR) = aRa is a division ring, and hence is a local ring, so right (and left) irreducible idempotents are local. A centrally primitive idempotent is a central idempotent a that cannot be written as the sum of two nonzero orthogonal central idempotents. An idempotent a + I in the quotient ring R / I is said to lift modulo I if there is an idempotent b in R such that b + I = a + I. An idempotent a of R is called a full idempotent if RaR = R. A separability idempotent; see Separable algebra. Any non-trivial idempotent a is a zero divisor (because ab = 0 with neither a nor b being zero, where b = 1 − a). This shows that integral domains and division rings do not have such idempotents. Local rings also do not have such idempotents, but for a different reason. The only idempotent contained in the Jacobson radical of a ring is 0. == Rings characterized by idempotents == A ring in which all elements are idempotent is called a Boolean ring. Some authors use the term "idempotent ring" for this type of ring. In such a ring, multiplication is commutative and every element is its own additive inverse. A ring is semisimple if and only if every right (or every left) ideal is generated by an idempotent. A ring is von Neumann regular if and only if every finitely generated right (or every finitely generated left) ideal is generated by an idempotent. A ring for which the annihilator r.Ann(S) of every subset S of R is generated by an idempotent is called a Baer ring. If the condition only holds for all singleton subsets of R, then the ring is a right Rickart ring. Both of these types of rings are interesting even when they lack a multiplicative identity. A ring in which all idempotents are central is called an abelian ring. Such rings need not be commutative. A ring is directly irreducible if and only if 0 and 1 are the only central idempotents. A ring R can be written as e₁R ⊕ e₂R ⊕ ... ⊕ eₙR with each eᵢ a local idempotent if and only if R is a semiperfect ring.
A ring is called an SBI ring or Lift/rad ring if all idempotents of R lift modulo the Jacobson radical. A ring satisfies the ascending chain condition on right direct summands if and only if the ring satisfies the descending chain condition on left direct summands if and only if every set of pairwise orthogonal idempotents is finite. If a is idempotent in the ring R, then aRa is again a ring, with multiplicative identity a. The ring aRa is often referred to as a corner ring of R. The corner ring arises naturally since the ring of endomorphisms EndR(aR) ≅ aRa. == Role in decompositions == The idempotents of R have an important connection to decomposition of R-modules. If M is an R-module and E = EndR(M) is its ring of endomorphisms, then A ⊕ B = M if and only if there is a unique idempotent e in E such that A = eM and B = (1 − e)M. Clearly then, M is directly indecomposable if and only if 0 and 1 are the only idempotents in E. In the case when M = R (assumed unital), the endomorphism ring EndR(R) = R, where each endomorphism arises as left multiplication by a fixed ring element. With this modification of notation, A ⊕ B = R as right modules if and only if there exists a unique idempotent e such that eR = A and (1 − e)R = B. Thus every direct summand of R is generated by an idempotent. If a is a central idempotent, then the corner ring aRa = Ra is a ring with multiplicative identity a. Just as idempotents determine the direct decompositions of R as a module, the central idempotents of R determine the decompositions of R as a direct sum of rings. If R is the direct sum of the rings R1, ..., Rn, then the identity elements of the rings Ri are central idempotents in R, pairwise orthogonal, and their sum is 1. Conversely, given central idempotents a1, ..., an in R that are pairwise orthogonal and have sum 1, then R is the direct sum of the rings Ra1, ..., Ran. 
So in particular, every central idempotent a in R gives rise to a decomposition of R as a direct sum of the corner rings aRa and (1 − a)R(1 − a). As a result, a ring R is directly indecomposable as a ring if and only if the identity 1 is centrally primitive. Working inductively, one can attempt to decompose 1 into a sum of centrally primitive elements. If 1 is centrally primitive, we are done. If not, it is a sum of central orthogonal idempotents, which in turn are primitive or sums of more central idempotents, and so on. The problem that may occur is that this may continue without end, producing an infinite family of central orthogonal idempotents. The condition "R does not contain infinite sets of central orthogonal idempotents" is a type of finiteness condition on the ring. It can be achieved in many ways, such as requiring the ring to be right Noetherian. If a decomposition R = c₁R ⊕ c₂R ⊕ ... ⊕ cₙR exists with each cᵢ a centrally primitive idempotent, then R is a direct sum of the corner rings cᵢRcᵢ, each of which is ring irreducible. For associative algebras or Jordan algebras over a field, the Peirce decomposition is a decomposition of an algebra as a sum of eigenspaces of commuting idempotent elements. == Relation with involutions == If a is an idempotent of the endomorphism ring EndR(M), then the endomorphism f = 1 − 2a is an R-module involution of M. That is, f is an R-module homomorphism such that f² is the identity endomorphism of M. An idempotent element a of R and its associated involution f gives rise to two involutions of the module R, depending on viewing R as a left or right module. If r represents an arbitrary element of R, f can be viewed as a right R-module homomorphism r ↦ fr so that ffr = r, or f can also be viewed as a left R-module homomorphism r ↦ rf, where rff = r. This process can be reversed if 2 is an invertible element of R: if b is an involution, then 2⁻¹(1 − b) and 2⁻¹(1 + b) are orthogonal idempotents, corresponding to a and 1 − a.
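This correspondence can be spot-checked in a ring where 2 is invertible, for instance Z/15Z, where 2⁻¹ = 8; a small Python sketch (illustrative, not from the article):

```python
n = 15
inv2 = pow(2, -1, n)                     # 8, since 2 * 8 = 16 ≡ 1 (mod 15)

# the idempotents of Z/15Z
idempotents = [a for a in range(n) if (a * a - a) % n == 0]
for a in idempotents:
    b = (1 - 2 * a) % n                  # the associated involution 1 - 2a
    assert (b * b) % n == 1              # b squares to the identity
    assert (inv2 * (1 - b)) % n == a     # 2⁻¹(1 - b) recovers a

print(idempotents)                       # [0, 1, 6, 10]
```

The four idempotents 0, 1, 6, 10 pair off with the four involutions 1, 14, 4, 11, as the one-to-one correspondence predicts.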
Thus for a ring in which 2 is invertible, the idempotent elements correspond to involutions in a one-to-one manner. == Category of R-modules == Lifting idempotents also has major consequences for the category of R-modules. All idempotents lift modulo I if and only if every direct summand of R/I has a projective cover as an R-module. Idempotents always lift modulo nil ideals, and modulo ideals I for which R is I-adically complete. Lifting is most important when I = J(R), the Jacobson radical of R. Yet another characterization of semiperfect rings is that they are semilocal rings whose idempotents lift modulo J(R). == Lattice of idempotents == One may define a partial order on the idempotents of a ring as follows: if a and b are idempotents, we write a ≤ b if and only if ab = ba = a. With respect to this order, 0 is the smallest and 1 the largest idempotent. For orthogonal idempotents a and b, a + b is also idempotent, and we have a ≤ a + b and b ≤ a + b. The atoms of this partial order are precisely the primitive idempotents. When the above partial order is restricted to the central idempotents of R, a lattice structure, or even a Boolean algebra structure, can be given. For two central idempotents e and f, the complement is given by ¬e = 1 − e, the meet is given by e ∧ f = ef, and the join is given by e ∨ f = ¬(¬e ∧ ¬f) = e + f − ef. The ordering now becomes simply e ≤ f if and only if eR ⊆ fR, and the join and meet satisfy (e ∨ f)R = eR + fR and (e ∧ f)R = eR ∩ fR = (eR)(fR). It is shown in Goodearl 1991, p. 99 that if R is von Neumann regular and right self-injective, then the lattice is a complete lattice. == Notes == == Citations == == References ==
Wikipedia/Idempotent_(ring_theory)
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange. Because a differentiable functional is stationary at its local extrema, the Euler–Lagrange equation is useful for solving optimization problems in which, given some functional, one seeks the function minimizing or maximizing it. This is analogous to Fermat's theorem in calculus, stating that at any point where a differentiable function attains a local extremum its derivative is zero. In Lagrangian mechanics, according to Hamilton's principle of stationary action, the evolution of a physical system is described by the solutions to the Euler equation for the action of the system. In this context Euler equations are usually called Lagrange equations. In classical mechanics, this formulation is equivalent to Newton's laws of motion; indeed, the Euler–Lagrange equations produce the same equations as Newton's laws. This is particularly useful when analyzing systems whose force vectors are especially complicated. It has the advantage that it takes the same form in any system of generalized coordinates, and it is better suited to generalizations. In classical field theory there is an analogous equation to calculate the dynamics of a field. == History == The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. The two further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.
Their correspondence ultimately led to the calculus of variations, a term coined by Euler himself in 1766. == Statement == Let ( X , L ) {\displaystyle (X,L)} be a real dynamical system with n {\displaystyle n} degrees of freedom. Here X {\displaystyle X} is the configuration space and L = L ( t , q ( t ) , v ( t ) ) {\displaystyle L=L(t,{\boldsymbol {q}}(t),{\boldsymbol {v}}(t))} the Lagrangian, i.e. a smooth real-valued function such that q ( t ) ∈ X , {\displaystyle {\boldsymbol {q}}(t)\in X,} and v ( t ) {\displaystyle {\boldsymbol {v}}(t)} is an n {\displaystyle n} -dimensional "vector of speed". (For those familiar with differential geometry, X {\displaystyle X} is a smooth manifold, and L : R t × X × T X → R , {\displaystyle L:{\mathbb {R} }_{t}\times X\times TX\to {\mathbb {R} },} where T X {\displaystyle TX} is the tangent bundle of X ) . {\displaystyle X).} Let P ( a , b , x a , x b ) {\displaystyle {\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} be the set of smooth paths q : [ a , b ] → X {\displaystyle {\boldsymbol {q}}:[a,b]\to X} for which q ( a ) = x a {\displaystyle {\boldsymbol {q}}(a)={\boldsymbol {x}}_{a}} and q ( b ) = x b . {\displaystyle {\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}.} The action functional S : P ( a , b , x a , x b ) → R {\displaystyle S:{\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R} } is defined via S [ q ] = ∫ a b L ( t , q ( t ) , q ˙ ( t ) ) d t . {\displaystyle S[{\boldsymbol {q}}]=\int _{a}^{b}L(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt.} A path q ∈ P ( a , b , x a , x b ) {\displaystyle {\boldsymbol {q}}\in {\cal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} is a stationary point of S {\displaystyle S} if and only if the Euler–Lagrange equations hold: ∂ L ∂ q i ( t , q ( t ) , q ˙ ( t ) ) − d d t ∂ L ∂ v i ( t , q ( t ) , q ˙ ( t ) ) = 0 , i = 1 , … , n . {\displaystyle {\frac {\partial L}{\partial q_{i}}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial v_{i}}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))=0,\quad i=1,\dots ,n.} Here, q ˙ ( t ) {\displaystyle {\dot {\boldsymbol {q}}}(t)} is the time derivative of q ( t ) .
{\displaystyle {\boldsymbol {q}}(t).} When we say stationary point, we mean a stationary point of S {\displaystyle S} with respect to any small perturbation in q {\displaystyle {\boldsymbol {q}}} . See proofs below for more rigorous detail. == Example == A standard example is finding the real-valued function y(x) on the interval [a, b], such that y(a) = c and y(b) = d, for which the path length along the curve traced by y is as short as possible. s = ∫ a b d x 2 + d y 2 = ∫ a b 1 + y ′ 2 d x , {\displaystyle {\text{s}}=\int _{a}^{b}{\sqrt {\mathrm {d} x^{2}+\mathrm {d} y^{2}}}=\int _{a}^{b}{\sqrt {1+y'^{2}}}\,\mathrm {d} x,} the integrand function being L ( x , y , y ′ ) = 1 + y ′ 2 {\textstyle L(x,y,y')={\sqrt {1+y'^{2}}}} . The partial derivatives of L are: ∂ L ( x , y , y ′ ) ∂ y ′ = y ′ 1 + y ′ 2 and ∂ L ( x , y , y ′ ) ∂ y = 0. {\displaystyle {\frac {\partial L(x,y,y')}{\partial y'}}={\frac {y'}{\sqrt {1+y'^{2}}}}\quad {\text{and}}\quad {\frac {\partial L(x,y,y')}{\partial y}}=0.} By substituting these into the Euler–Lagrange equation, we obtain d d x y ′ ( x ) 1 + ( y ′ ( x ) ) 2 = 0 y ′ ( x ) 1 + ( y ′ ( x ) ) 2 = C = constant ⇒ y ′ ( x ) = C 1 − C 2 =: A ⇒ y ( x ) = A x + B {\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} x}}{\frac {y'(x)}{\sqrt {1+(y'(x))^{2}}}}&=0\\{\frac {y'(x)}{\sqrt {1+(y'(x))^{2}}}}&=C={\text{constant}}\\\Rightarrow y'(x)&={\frac {C}{\sqrt {1-C^{2}}}}=:A\\\Rightarrow y(x)&=Ax+B\end{aligned}}} that is, the function must have a constant first derivative, and thus its graph is a straight line. 
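The conclusion of this example (the straight line minimizes arc length) can be checked numerically by discretizing the functional and comparing the line against perturbed paths with the same endpoints. A Python sketch, where the grid size and perturbations are arbitrary illustrative choices:

```python
import math
import random

N = 200
xs = [i / N for i in range(N + 1)]       # grid on [0, 1]

def length(ys):
    """Discretized arc length of the polyline through (xs[i], ys[i])."""
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(N))

straight = list(xs)                      # y(x) = x, the Euler-Lagrange solution
L0 = length(straight)                    # equals sqrt(2) up to rounding

random.seed(0)
for _ in range(20):
    bump = [0.05 * math.sin(math.pi * x) * random.random() for x in xs]
    perturbed = [y + b for y, b in zip(straight, bump)]
    perturbed[0], perturbed[-1] = 0.0, 1.0   # keep the endpoints fixed
    assert length(perturbed) >= L0           # no perturbed path beats the line
```

By the triangle inequality any polyline between the fixed endpoints is at least as long as the chord, so the assertions are guaranteed to hold; the check illustrates that the stationary path is in fact a minimizer here.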
== Generalizations == === Single function of single variable with higher derivatives === The stationary values of the functional I [ f ] = ∫ x 0 x 1 L ( x , f , f ′ , f ″ , … , f ( k ) ) d x ; f ′ := d f d x , f ″ := d 2 f d x 2 , f ( k ) := d k f d x k {\displaystyle I[f]=\int _{x_{0}}^{x_{1}}{\mathcal {L}}(x,f,f',f'',\dots ,f^{(k)})~\mathrm {d} x~;~~f':={\cfrac {\mathrm {d} f}{\mathrm {d} x}},~f'':={\cfrac {\mathrm {d} ^{2}f}{\mathrm {d} x^{2}}},~f^{(k)}:={\cfrac {\mathrm {d} ^{k}f}{\mathrm {d} x^{k}}}} can be obtained from the Euler–Lagrange equation ∂ L ∂ f − d d x ( ∂ L ∂ f ′ ) + d 2 d x 2 ( ∂ L ∂ f ″ ) − ⋯ + ( − 1 ) k d k d x k ( ∂ L ∂ f ( k ) ) = 0 {\displaystyle {\cfrac {\partial {\mathcal {L}}}{\partial f}}-{\cfrac {\mathrm {d} }{\mathrm {d} x}}\left({\cfrac {\partial {\mathcal {L}}}{\partial f'}}\right)+{\cfrac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}\left({\cfrac {\partial {\mathcal {L}}}{\partial f''}}\right)-\dots +(-1)^{k}{\cfrac {\mathrm {d} ^{k}}{\mathrm {d} x^{k}}}\left({\cfrac {\partial {\mathcal {L}}}{\partial f^{(k)}}}\right)=0} under fixed boundary conditions for the function itself as well as for the first k − 1 {\displaystyle k-1} derivatives (i.e. for all f ( i ) , i ∈ { 0 , . . . , k − 1 } {\displaystyle f^{(i)},i\in \{0,...,k-1\}} ). The endpoint values of the highest derivative f ( k ) {\displaystyle f^{(k)}} remain flexible. 
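As a numerical illustration of this higher-derivative case, take L = (f'')², whose Euler–Lagrange equation reduces to f'''' = 0; with f and f' fixed at both endpoints, the minimizer is the cubic Hermite interpolant. A Python sketch, with boundary data and test perturbations chosen arbitrarily for illustration:

```python
import math

N = 100
h = 1.0 / N
xs = [i * h for i in range(N + 1)]

def bending_energy(ys):
    """Sum of squared central second differences: a discrete integral of (f'')^2."""
    return sum(((ys[i + 1] - 2 * ys[i] + ys[i - 1]) / h ** 2) ** 2 * h
               for i in range(1, N))

# cubic Hermite interpolant of f(0)=0, f'(0)=0, f(1)=1, f'(1)=0
cubic = [3 * x ** 2 - 2 * x ** 3 for x in xs]
E0 = bending_energy(cubic)               # close to the continuum value 12

# smooth competitors whose value and first derivative vanish at both endpoints
for k in (1, 2, 3):
    p = [0.3 * (x * (1 - x)) ** 2 * math.sin(k * math.pi * x) for x in xs]
    assert bending_energy([c + q for c, q in zip(cubic, p)]) >= E0
```

The envelope (x(1 − x))² keeps both f and f' fixed at the endpoints, matching the boundary conditions under which the higher-order Euler–Lagrange equation applies.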
=== Several functions of single variable with single derivative === If the problem involves finding several functions ( f 1 , f 2 , … , f m {\displaystyle f_{1},f_{2},\dots ,f_{m}} ) of a single independent variable ( x {\displaystyle x} ) that define an extremum of the functional I [ f 1 , f 2 , … , f m ] = ∫ x 0 x 1 L ( x , f 1 , f 2 , … , f m , f 1 ′ , f 2 ′ , … , f m ′ ) d x ; f i ′ := d f i d x {\displaystyle I[f_{1},f_{2},\dots ,f_{m}]=\int _{x_{0}}^{x_{1}}{\mathcal {L}}(x,f_{1},f_{2},\dots ,f_{m},f_{1}',f_{2}',\dots ,f_{m}')~\mathrm {d} x~;~~f_{i}':={\cfrac {\mathrm {d} f_{i}}{\mathrm {d} x}}} then the corresponding Euler–Lagrange equations are ∂ L ∂ f i − d d x ( ∂ L ∂ f i ′ ) = 0 ; i = 1 , 2 , . . . , m {\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f_{i}}}-{\frac {\mathrm {d} }{\mathrm {d} x}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i}'}}\right)=0;\quad i=1,2,...,m\end{aligned}}} === Single function of several variables with single derivative === A multi-dimensional generalization comes from considering a function on n variables. If Ω {\displaystyle \Omega } is some surface, then I [ f ] = ∫ Ω L ( x 1 , … , x n , f , f 1 , … , f n ) d x ; f j := ∂ f ∂ x j {\displaystyle I[f]=\int _{\Omega }{\mathcal {L}}(x_{1},\dots ,x_{n},f,f_{1},\dots ,f_{n})\,\mathrm {d} \mathbf {x} \,\!~;~~f_{j}:={\cfrac {\partial f}{\partial x_{j}}}} is extremized only if f satisfies the partial differential equation ∂ L ∂ f − ∑ j = 1 n ∂ ∂ x j ( ∂ L ∂ f j ) = 0. {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{j}}}\right)=0.} When n = 2 and functional I {\displaystyle {\mathcal {I}}} is the energy functional, this leads to the soap-film minimal surface problem. 
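For n = 2 and L = (f₁² + f₂²)/2 the equation above becomes Laplace's equation ∇²f = 0, which can be illustrated by relaxing the discretized Dirichlet energy on a grid. A Python sketch (grid size, boundary data, and iteration count are arbitrary illustrative choices):

```python
N = 20
h = 1.0 / N

# boundary data f = x^2 - y^2, itself harmonic (also for the 5-point stencil)
f = [[(i * h) ** 2 - (j * h) ** 2 for j in range(N + 1)] for i in range(N + 1)]
for i in range(1, N):
    for j in range(1, N):
        f[i][j] = 0.0                    # wipe the interior, keep the boundary

for _ in range(2000):                    # Gauss-Seidel sweeps minimize the energy
    for i in range(1, N):
        for j in range(1, N):
            f[i][j] = 0.25 * (f[i + 1][j] + f[i - 1][j]
                              + f[i][j + 1] + f[i][j - 1])

# the interior converges back to x^2 - y^2, e.g. at (x, y) = (0.5, 0.25)
assert abs(f[N // 2][N // 4] - (0.5 ** 2 - 0.25 ** 2)) < 1e-6
```

Each sweep replaces an interior value by the average of its neighbors, which is exactly the stationarity condition of the discretized energy, so the relaxation converges to the discrete harmonic function with the given boundary values.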
=== Several functions of several variables with single derivative === If there are several unknown functions to be determined and several variables such that I [ f 1 , f 2 , … , f m ] = ∫ Ω L ( x 1 , … , x n , f 1 , … , f m , f 1 , 1 , … , f 1 , n , … , f m , 1 , … , f m , n ) d x ; f i , j := ∂ f i ∂ x j {\displaystyle I[f_{1},f_{2},\dots ,f_{m}]=\int _{\Omega }{\mathcal {L}}(x_{1},\dots ,x_{n},f_{1},\dots ,f_{m},f_{1,1},\dots ,f_{1,n},\dots ,f_{m,1},\dots ,f_{m,n})\,\mathrm {d} \mathbf {x} \,\!~;~~f_{i,j}:={\cfrac {\partial f_{i}}{\partial x_{j}}}} the system of Euler–Lagrange equations is ∂ L ∂ f 1 − ∑ j = 1 n ∂ ∂ x j ( ∂ L ∂ f 1 , j ) = 0 1 ∂ L ∂ f 2 − ∑ j = 1 n ∂ ∂ x j ( ∂ L ∂ f 2 , j ) = 0 2 ⋮ ⋮ ⋮ ∂ L ∂ f m − ∑ j = 1 n ∂ ∂ x j ( ∂ L ∂ f m , j ) = 0 m . {\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f_{1}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{1,j}}}\right)&=0_{1}\\{\frac {\partial {\mathcal {L}}}{\partial f_{2}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{2,j}}}\right)&=0_{2}\\\vdots \qquad \vdots \qquad &\quad \vdots \\{\frac {\partial {\mathcal {L}}}{\partial f_{m}}}-\sum _{j=1}^{n}{\frac {\partial }{\partial x_{j}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{m,j}}}\right)&=0_{m}.\end{aligned}}} === Single function of two variables with higher derivatives === If there is a single unknown function f to be determined that is dependent on two variables x1 and x2 and if the functional depends on higher derivatives of f up to n-th order such that I [ f ] = ∫ Ω L ( x 1 , x 2 , f , f 1 , f 2 , f 11 , f 12 , f 22 , … , f 22 … 2 ) d x f i := ∂ f ∂ x i , f i j := ∂ 2 f ∂ x i ∂ x j , … {\displaystyle {\begin{aligned}I[f]&=\int _{\Omega }{\mathcal {L}}(x_{1},x_{2},f,f_{1},f_{2},f_{11},f_{12},f_{22},\dots ,f_{22\dots 2})\,\mathrm {d} \mathbf {x} \\&\qquad \quad f_{i}:={\cfrac {\partial f}{\partial x_{i}}}\;,\quad 
f_{ij}:={\cfrac {\partial ^{2}f}{\partial x_{i}\partial x_{j}}}\;,\;\;\dots \end{aligned}}} then the Euler–Lagrange equation is ∂ L ∂ f − ∂ ∂ x 1 ( ∂ L ∂ f 1 ) − ∂ ∂ x 2 ( ∂ L ∂ f 2 ) + ∂ 2 ∂ x 1 2 ( ∂ L ∂ f 11 ) + ∂ 2 ∂ x 1 ∂ x 2 ( ∂ L ∂ f 12 ) + ∂ 2 ∂ x 2 2 ( ∂ L ∂ f 22 ) − ⋯ + ( − 1 ) n ∂ n ∂ x 2 n ( ∂ L ∂ f 22 … 2 ) = 0 {\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial f}}&-{\frac {\partial }{\partial x_{1}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{1}}}\right)-{\frac {\partial }{\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{2}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{11}}}\right)+{\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{12}}}\right)+{\frac {\partial ^{2}}{\partial x_{2}^{2}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22}}}\right)\\&-\dots +(-1)^{n}{\frac {\partial ^{n}}{\partial x_{2}^{n}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{22\dots 2}}}\right)=0\end{aligned}}} which can be represented shortly as: ∂ L ∂ f + ∑ j = 1 n ∑ μ 1 ≤ … ≤ μ j ( − 1 ) j ∂ j ∂ x μ 1 … ∂ x μ j ( ∂ L ∂ f μ 1 … μ j ) = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{\mu _{1}\dots \mu _{j}}}}\right)=0} wherein μ 1 … μ j {\displaystyle \mu _{1}\dots \mu _{j}} are indices that span the number of variables, that is, here they go from 1 to 2. Here summation over the μ 1 … μ j {\displaystyle \mu _{1}\dots \mu _{j}} indices is only over μ 1 ≤ μ 2 ≤ … ≤ μ j {\displaystyle \mu _{1}\leq \mu _{2}\leq \ldots \leq \mu _{j}} in order to avoid counting the same partial derivative multiple times, for example f 12 = f 21 {\displaystyle f_{12}=f_{21}} appears only once in the previous equation. 
=== Several functions of several variables with higher derivatives === If there are p unknown functions fi to be determined that are dependent on m variables x1 ... xm and if the functional depends on higher derivatives of the fi up to n-th order such that I [ f 1 , … , f p ] = ∫ Ω L ( x 1 , … , x m ; f 1 , … , f p ; f 1 , 1 , … , f p , m ; f 1 , 11 , … , f p , m m ; … ; f p , 1 … 1 , … , f p , m … m ) d x f i , μ := ∂ f i ∂ x μ , f i , μ 1 μ 2 := ∂ 2 f i ∂ x μ 1 ∂ x μ 2 , … {\displaystyle {\begin{aligned}I[f_{1},\ldots ,f_{p}]&=\int _{\Omega }{\mathcal {L}}(x_{1},\ldots ,x_{m};f_{1},\ldots ,f_{p};f_{1,1},\ldots ,f_{p,m};f_{1,11},\ldots ,f_{p,mm};\ldots ;f_{p,1\ldots 1},\ldots ,f_{p,m\ldots m})\,\mathrm {d} \mathbf {x} \\&\qquad \quad f_{i,\mu }:={\cfrac {\partial f_{i}}{\partial x_{\mu }}}\;,\quad f_{i,\mu _{1}\mu _{2}}:={\cfrac {\partial ^{2}f_{i}}{\partial x_{\mu _{1}}\partial x_{\mu _{2}}}}\;,\;\;\dots \end{aligned}}} where μ 1 … μ j {\displaystyle \mu _{1}\dots \mu _{j}} are indices that span the number of variables, that is they go from 1 to m. Then the Euler–Lagrange equation is ∂ L ∂ f i + ∑ j = 1 n ∑ μ 1 ≤ … ≤ μ j ( − 1 ) j ∂ j ∂ x μ 1 … ∂ x μ j ( ∂ L ∂ f i , μ 1 … μ j ) = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial f_{i}}}+\sum _{j=1}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}{\frac {\partial ^{j}}{\partial x_{\mu _{1}}\dots \partial x_{\mu _{j}}}}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0} where the summation over the μ 1 … μ j {\displaystyle \mu _{1}\dots \mu _{j}} is avoiding counting the same derivative f i , μ 1 μ 2 = f i , μ 2 μ 1 {\displaystyle f_{i,\mu _{1}\mu _{2}}=f_{i,\mu _{2}\mu _{1}}} several times, just as in the previous subsection. 
This can be expressed more compactly as ∑ j = 0 n ∑ μ 1 ≤ … ≤ μ j ( − 1 ) j ∂ μ 1 … μ j j ( ∂ L ∂ f i , μ 1 … μ j ) = 0 {\displaystyle \sum _{j=0}^{n}\sum _{\mu _{1}\leq \ldots \leq \mu _{j}}(-1)^{j}\partial _{\mu _{1}\ldots \mu _{j}}^{j}\left({\frac {\partial {\mathcal {L}}}{\partial f_{i,\mu _{1}\dots \mu _{j}}}}\right)=0} === Field theories === == Generalization to manifolds == Let M {\displaystyle M} be a smooth manifold, and let C ∞ ( [ a , b ] ) {\displaystyle C^{\infty }([a,b])} denote the space of smooth functions f : [ a , b ] → M {\displaystyle f\colon [a,b]\to M} . Then, for functionals S : C ∞ ( [ a , b ] ) → R {\displaystyle S\colon C^{\infty }([a,b])\to \mathbb {R} } of the form S [ f ] = ∫ a b ( L ∘ f ˙ ) ( t ) d t {\displaystyle S[f]=\int _{a}^{b}(L\circ {\dot {f}})(t)\,\mathrm {d} t} where L : T M → R {\displaystyle L\colon TM\to \mathbb {R} } is the Lagrangian, the statement d S f = 0 {\displaystyle \mathrm {d} S_{f}=0} is equivalent to the statement that, for all t ∈ [ a , b ] {\displaystyle t\in [a,b]} , each coordinate frame trivialization ( x i , X i ) {\displaystyle (x^{i},X^{i})} of a neighborhood of f ˙ ( t ) {\displaystyle {\dot {f}}(t)} yields the following dim ⁡ M {\displaystyle \dim M} equations: ∀ i : d d t ∂ L ∂ X i | f ˙ ( t ) = ∂ L ∂ x i | f ˙ ( t ) . {\displaystyle \forall i:{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial X^{i}}}{\bigg |}_{{\dot {f}}(t)}={\frac {\partial L}{\partial x^{i}}}{\bigg |}_{{\dot {f}}(t)}.} Euler–Lagrange equations can also be written in a coordinate-free form as L Δ θ L = d L {\displaystyle {\mathcal {L}}_{\Delta }\theta _{L}=dL} where θ L {\displaystyle \theta _{L}} is the canonical momentum 1-form corresponding to the Lagrangian L {\displaystyle L} . The vector field generating time translations is denoted by Δ {\displaystyle \Delta } and the Lie derivative is denoted by L {\displaystyle {\mathcal {L}}} .
One can use local charts ( q α , q ˙ α ) {\displaystyle (q^{\alpha },{\dot {q}}^{\alpha })} in which θ L = ∂ L ∂ q ˙ α d q α {\displaystyle \theta _{L}={\frac {\partial L}{\partial {\dot {q}}^{\alpha }}}dq^{\alpha }} and Δ := d d t = q ˙ α ∂ ∂ q α + q ¨ α ∂ ∂ q ˙ α {\displaystyle \Delta :={\frac {d}{dt}}={\dot {q}}^{\alpha }{\frac {\partial }{\partial q^{\alpha }}}+{\ddot {q}}^{\alpha }{\frac {\partial }{\partial {\dot {q}}^{\alpha }}}} and use coordinate expressions for the Lie derivative to see equivalence with coordinate expressions of the Euler–Lagrange equation. The coordinate-free form is particularly suitable for geometrical interpretation of the Euler–Lagrange equations. == See also == Lagrangian mechanics Hamiltonian mechanics Analytical mechanics Beltrami identity Functional derivative == Notes == == References == "Lagrange equations (in mechanics)", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Euler-Lagrange Differential Equation". MathWorld. Calculus of Variations at PlanetMath. Gelfand, Izrail Moiseevich (1963). Calculus of Variations. Dover. ISBN 0-486-41448-5. Roubicek, T.: Calculus of variations. Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
Wikipedia/Euler-Lagrange_equations
In physical cosmology, cosmological perturbation theory is the theory by which the evolution of structure is understood in the Big Bang model. Cosmological perturbation theory may be broken into two categories: Newtonian or general relativistic. Each case uses its governing equations to compute gravitational and pressure forces which cause small perturbations to grow and eventually seed the formation of stars, quasars, galaxies and clusters. Both cases apply only to situations where the universe is predominantly homogeneous, such as during cosmic inflation and large parts of the Big Bang. The universe is believed to still be homogeneous enough that the theory is a good approximation on the largest scales, but on smaller scales more involved techniques, such as N-body simulations, must be used. When deciding whether to use general relativity for perturbation theory, note that Newtonian physics is only applicable in some cases such as for scales smaller than the Hubble horizon, where spacetime is sufficiently flat, and for which speeds are non-relativistic. Because of the gauge invariance of general relativity, the correct formulation of cosmological perturbation theory is subtle. In particular, when describing an inhomogeneous spacetime, there is often not a preferred coordinate choice. There are currently two distinct approaches to perturbation theory in classical general relativity: gauge-invariant perturbation theory based on foliating a space-time with hyper-surfaces, and 1+3 covariant gauge-invariant perturbation theory based on threading a space-time with frames. == Newtonian perturbation theory == In this section, we will focus on the effect of matter on structure formation in the hydrodynamical fluid regime. This regime is useful because dark matter has dominated structure growth for most of the universe's history. 
In this regime, we are on sub-Hubble scales < H − 1 , {\displaystyle <H^{-1}~,} (where H {\displaystyle H} is the Hubble parameter) so we can take spacetime to be flat, and ignore general relativistic corrections. But these scales are above a cut-off, such that perturbations in pressure and density are sufficiently linear δ P , δ ρ ≪ 1 . {\displaystyle \delta P~,~\delta \rho \ll 1~.} Next we assume low pressure P ≪ ρ , {\displaystyle P\ll \rho ~,} so that we can ignore radiative effects, and low speeds u ≪ c , {\displaystyle u\ll c~,} so we are in the non-relativistic regime. The first governing equation follows from matter conservation – the continuity equation ∂ ρ ∂ t + 3 H ρ + 1 a ∇ ⋅ ( ρ v → ) = 0 , {\displaystyle {\frac {\partial \rho }{\partial t}}+3H\rho +{\frac {1}{a}}\nabla \cdot \left(\rho {\vec {v}}\right)=0~,} where a {\displaystyle a} is the scale factor and v → {\displaystyle {\vec {v}}} is the peculiar velocity. Although we don't explicitly write it, all variables are evaluated at time t {\displaystyle t} and the divergence ∇ {\displaystyle \nabla } is in comoving coordinates. Second, momentum conservation gives us the Euler equation ρ d u → d t = ρ ( ∂ ∂ t + 1 a v → ⋅ ∇ ) u → = − 1 a ∇ P − 1 a ρ ∇ Φ , {\displaystyle \rho {\frac {{\text{d}}{\vec {u}}}{{\text{d}}t}}=\rho \left({\frac {\partial }{\partial t}}+{\frac {1}{a}}{\vec {v}}\cdot \nabla \right){\vec {u}}=-{\frac {1}{a}}\nabla P-{\frac {1}{a}}\rho \nabla \Phi ~,} where Φ {\displaystyle \Phi } is the gravitational potential. Lastly, we know that for Newtonian gravity, the potential obeys the Poisson equation 1 a 2 ∇ 2 Φ = 4 π G ρ . {\displaystyle {\frac {1}{a^{2}}}\nabla ^{2}\Phi =4\pi G\rho ~.} So far, our equations are fully nonlinear, and can be hard to interpret intuitively. It's therefore useful to consider a perturbative expansion and examine each order separately.
We use the following decomposition ρ = ρ ¯ ( 1 + δ ) , u → = H a x → + v → , P = P ¯ + δ P , Φ = Φ ¯ + δ Φ {\displaystyle \rho ={\bar {\rho }}(1+\delta )~,~{\vec {u}}=Ha{\vec {x}}+{\vec {v}}~,~P={\bar {P}}+\delta P~,~\Phi ={\bar {\Phi }}+\delta \Phi ~} where x → {\displaystyle {\vec {x}}} is a comoving coordinate. At linear order, the continuity equation becomes δ ˙ = − 1 a θ , {\displaystyle {\dot {\delta }}=-{\frac {1}{a}}\theta ~,} where θ ≡ ∇ ⋅ v → {\displaystyle \theta \equiv \nabla \cdot {\vec {v}}} is the velocity divergence. And the linear Euler equation is ρ ¯ ( v → ˙ + H v → ) = − 1 a ∇ δ P − 1 a ρ ¯ ∇ δ Φ . {\displaystyle {\bar {\rho }}\left({\dot {\vec {v}}}+H{\vec {v}}\right)=-{\frac {1}{a}}\nabla \delta P-{\frac {1}{a}}{\bar {\rho }}\nabla \delta \Phi ~.} By combining the linear continuity, Euler, and Poisson equations, we arrive at a simple master equation governing evolution ( ∂ 2 ∂ t 2 + 2 H ∂ ∂ t − c s 2 1 a 2 ∇ 2 − 4 π G ρ ¯ ) δ = 0 , {\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}+2H{\frac {\partial }{\partial t}}-c_{s}^{2}{\frac {1}{a^{2}}}\nabla ^{2}-4\pi G{\bar {\rho }}\right)\delta =0~,} where we defined a sound speed c s 2 ≡ δ P / ρ ¯ δ {\displaystyle c_{s}^{2}\equiv \delta P/{\bar {\rho }}\delta ~} to give us a closure relation. This master equation admits wave solutions in δ ( x → , t ) {\displaystyle \delta ({\vec {x}},t)} which tell us how matter fluctuations grow over time due to a combination of competing effects – the fluctuation's self-gravity, pressure forces, the universe's expansion, and the background gravitational field. == Gauge-invariant perturbation theory == The gauge-invariant perturbation theory is based on developments by Bardeen (1980), Kodama and Sasaki (1984) building on the work of Lifshitz (1946). This is the standard approach to perturbation theory of general relativity for cosmology.
This approach is widely used for the computation of anisotropies in the cosmic microwave background radiation as part of the physical cosmology program, and focuses on predictions arising from linearisations that preserve gauge invariance with respect to Friedmann-Lemaître-Robertson-Walker (FLRW) models. This approach draws heavily on Newtonian-like analogues and usually has as its starting point the FLRW background around which perturbations are developed. The approach is non-local and coordinate-dependent but gauge-invariant, as the resulting linear framework is built from a specified family of background hypersurfaces which are linked by gauge-preserving mappings to foliate the space-time. Although intuitive, this approach does not deal well with the nonlinearities natural to general relativity. == 1+3 covariant gauge-invariant perturbation theory == In relativistic cosmology using the Lagrangian threading dynamics of Ehlers (1971) and Ellis (1971), it is usual to use the gauge-invariant covariant perturbation theory developed by Hawking (1966) and Ellis and Bruni (1989). Here, rather than starting with a background and perturbing away from that background, one starts with full general relativity and systematically reduces the theory down to one that is linear around a particular background. The approach is local and both covariant and gauge-invariant, but can be non-linear because the approach is built around the local comoving observer frame (see frame bundle), which is used to thread the entire space-time. This approach to perturbation theory produces differential equations that are of just the right order needed to describe the true physical degrees of freedom, and as such no non-physical gauge modes exist. It is usual to express the theory in a coordinate-free manner. For applications of kinetic theory, because one is required to use the full tangent bundle, it becomes convenient to use the tetrad formulation of relativistic cosmology.
The application of this approach to the computation of anisotropies in the cosmic microwave background radiation requires the linearization of the full relativistic kinetic theory developed by Thorne (1980) and Ellis, Matravers and Treciokas (1983). == Gauge freedom and frame fixing == In relativistic cosmology there is a freedom associated with the choice of threading frame; this frame choice is distinct from the choice associated with coordinates. Picking this frame is equivalent to fixing the choice of timelike world lines mapped into each other. This reduces the gauge freedom: it does not fix the gauge, but the theory remains gauge-invariant under the remaining gauge freedoms. In order to fix the gauge, a specification of correspondences between the time surfaces in the real (perturbed) universe and the background universe is required, along with the correspondences between points on the initial spacelike surfaces in the background and in the real universe. This is the link between the gauge-invariant perturbation theory and the gauge-invariant covariant perturbation theory. Gauge invariance is only guaranteed if the choice of frame coincides exactly with that of the background; usually this is trivial to ensure because physical frames have this property. == Newtonian-like equations == Newtonian-like equations emerge from perturbative general relativity with the choice of the Newtonian gauge; the Newtonian gauge provides the direct link between the variables typically used in the gauge-invariant perturbation theory and those arising from the more general gauge-invariant covariant perturbation theory. == See also == Primordial fluctuations Cosmic microwave background spectral distortions == References == == Bibliography == See physical cosmology textbooks. == External links == Ellis, George F. R.; van Elst, Henk (1999). "Cosmological models". In Marc Lachièze-Rey (ed.).
Theoretical and Observational Cosmology: Proceedings of the NATO Advanced Study Institute on Theoretical and Observational Cosmology. Cargèse Lectures 1998. NATO Science Series: Series C. Vol. 541. Kluwer Academic. pp. 1–116. arXiv:gr-qc/9812046. Bibcode:1999ASIC..541....1E.
Wikipedia/Cosmological_perturbation_theory
In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system A x = λ x {\displaystyle Ax=\lambda x} that is perturbed from one with known eigenvectors and eigenvalues A 0 x 0 = λ 0 x 0 {\displaystyle A_{0}x_{0}=\lambda _{0}x_{0}} . This is useful for studying how sensitive the original system's eigenvectors and eigenvalues x 0 i , λ 0 i , i = 1 , … n {\displaystyle x_{0i},\lambda _{0i},i=1,\dots n} are to changes in the system. This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article is focused on the case of the perturbation of a simple eigenvalue (see multiplicity of eigenvalues). == Why generalized eigenvalues? == In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh-Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The finite element method is a widespread particular case. In classical mechanics, generalized eigenvalues may crop up when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix M {\displaystyle M} , and the potential strain energy provides the rigidity matrix K {\displaystyle K} .
For further details, see the first section of the article of Weinstein (1941, in French). With both methods, we obtain a system of differential equations (a matrix differential equation) M x ¨ + B x ˙ + K x = 0 {\displaystyle M{\ddot {x}}+B{\dot {x}}+Kx=0} with the mass matrix M {\displaystyle M} , the damping matrix B {\displaystyle B} and the rigidity matrix K {\displaystyle K} . If we neglect the damping effect, we set B = 0 {\displaystyle B=0} and can look for a solution of the form x = e i ω t u {\displaystyle x=e^{i\omega t}u} ; we obtain that u {\displaystyle u} and ω 2 {\displaystyle \omega ^{2}} are solutions of the generalized eigenvalue problem − ω 2 M u + K u = 0. {\displaystyle -\omega ^{2}Mu+Ku=0.} == Setting of perturbation for a generalized eigenvalue problem == Suppose we have solutions to the generalized eigenvalue problem, K 0 x 0 i = λ 0 i M 0 x 0 i . ( 0 ) {\displaystyle \mathbf {K} _{0}\mathbf {x} _{0i}=\lambda _{0i}\mathbf {M} _{0}\mathbf {x} _{0i}.\qquad (0)} where K 0 {\displaystyle \mathbf {K} _{0}} and M 0 {\displaystyle \mathbf {M} _{0}} are matrices. That is, we know the eigenvalues λ0i and eigenvectors x0i for i = 1, ..., N. It is also required that the eigenvalues are distinct. Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of K x i = λ i M x i ( 1 ) {\displaystyle \mathbf {K} \mathbf {x} _{i}=\lambda _{i}\mathbf {M} \mathbf {x} _{i}\qquad (1)} where K = K 0 + δ K M = M 0 + δ M {\displaystyle {\begin{aligned}\mathbf {K} &=\mathbf {K} _{0}+\delta \mathbf {K} \\\mathbf {M} &=\mathbf {M} _{0}+\delta \mathbf {M} \end{aligned}}} with the perturbations δ K {\displaystyle \delta \mathbf {K} } and δ M {\displaystyle \delta \mathbf {M} } much smaller than K {\displaystyle \mathbf {K} } and M {\displaystyle \mathbf {M} } respectively.
Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations: λ i = λ 0 i + δ λ i x i = x 0 i + δ x i {\displaystyle {\begin{aligned}\lambda _{i}&=\lambda _{0i}+\delta \lambda _{i}\\\mathbf {x} _{i}&=\mathbf {x} _{0i}+\delta \mathbf {x} _{i}\end{aligned}}} == Steps == We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that x 0 j ⊤ M 0 x 0 i = δ i j , {\displaystyle \mathbf {x} _{0j}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}=\delta _{ij},\quad } x i T M x j = δ i j ( 2 ) {\displaystyle \mathbf {x} _{i}^{T}\mathbf {M} \mathbf {x} _{j}=\delta _{ij}\qquad (2)} where δij is the Kronecker delta. Now we want to solve the equation K x i − λ i M x i = 0. {\displaystyle \mathbf {K} \mathbf {x} _{i}-\lambda _{i}\mathbf {M} \mathbf {x} _{i}=0.} In this article we restrict the study to first order perturbation. === First order expansion of the equation === Substituting in (1), we get ( K 0 + δ K ) ( x 0 i + δ x i ) = ( λ 0 i + δ λ i ) ( M 0 + δ M ) ( x 0 i + δ x i ) , {\displaystyle (\mathbf {K} _{0}+\delta \mathbf {K} )(\mathbf {x} _{0i}+\delta \mathbf {x} _{i})=\left(\lambda _{0i}+\delta \lambda _{i}\right)\left(\mathbf {M} _{0}+\delta \mathbf {M} \right)\left(\mathbf {x} _{0i}+\delta \mathbf {x} _{i}\right),} which expands to K 0 x 0 i + δ K x 0 i + K 0 δ x i + δ K δ x i = λ 0 i M 0 x 0 i + λ 0 i M 0 δ x i + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i + λ 0 i δ M δ x i + δ λ i δ M x 0 i + δ λ i M 0 δ x i + δ λ i δ M δ x i . 
{\displaystyle {\begin{aligned}\mathbf {K} _{0}\mathbf {x} _{0i}&+\delta \mathbf {K} \mathbf {x} _{0i}+\mathbf {K} _{0}\delta \mathbf {x} _{i}+\delta \mathbf {K} \delta \mathbf {x} _{i}=\\[6pt]&\lambda _{0i}\mathbf {M} _{0}\mathbf {x} _{0i}+\lambda _{0i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}+\\&\quad \lambda _{0i}\delta \mathbf {M} \delta \mathbf {x} _{i}+\delta \lambda _{i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\delta \lambda _{i}\delta \mathbf {M} \delta \mathbf {x} _{i}.\end{aligned}}} Canceling from (0) ( K 0 x 0 i = λ 0 i M 0 x 0 i {\displaystyle \mathbf {K} _{0}\mathbf {x} _{0i}=\lambda _{0i}\mathbf {M} _{0}\mathbf {x} _{0i}} ) leaves δ K x 0 i + K 0 δ x i + δ K δ x i = λ 0 i M 0 δ x i + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i + λ 0 i δ M δ x i + δ λ i δ M x 0 i + δ λ i M 0 δ x i + δ λ i δ M δ x i . {\displaystyle {\begin{aligned}\delta \mathbf {K} \mathbf {x} _{0i}+&\mathbf {K} _{0}\delta \mathbf {x} _{i}+\delta \mathbf {K} \delta \mathbf {x} _{i}=\lambda _{0i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}+\\&\lambda _{0i}\delta \mathbf {M} \delta \mathbf {x} _{i}+\delta \lambda _{i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\delta \lambda _{i}\delta \mathbf {M} \delta \mathbf {x} _{i}.\end{aligned}}} Removing the higher-order terms, this simplifies to K 0 δ x i + δ K x 0 i = λ 0 i M 0 δ x i + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i . 
( 3 ) {\displaystyle \mathbf {K} _{0}\delta \mathbf {x} _{i}+\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\delta \mathbf {M} \mathrm {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}.\qquad (3)} In other words, δ λ i {\displaystyle \delta \lambda _{i}} no longer denotes the exact variation of the eigenvalue but its first order approximation. As the matrix M 0 {\displaystyle \mathbf {M} _{0}} is symmetric, the unperturbed eigenvectors are M 0 {\displaystyle \mathbf {M} _{0}} -orthogonal and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct δ x i = ∑ j = 1 N ε i j x 0 j ( 4 ) {\displaystyle \delta \mathbf {x} _{i}=\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {x} _{0j}\qquad (4)\quad } with ε i j = x 0 j T M 0 δ x i {\displaystyle \varepsilon _{ij}=\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}} , where the εij are small constants that are to be determined. In the same way, substituting in (2), and removing higher order terms, we get δ x j T M 0 x 0 i + x 0 j T M 0 δ x i + x 0 j T δ M x 0 i = 0 ( 5 ) {\displaystyle \delta \mathbf {x} _{j}^{T}\mathbf {M} _{0}\mathbf {x} _{0i}+\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathbf {x} _{0i}=0\quad {(5)}} The derivation can proceed along two forks.
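Both forks lead to the same first-order formulas; before following them, the setup (0)–(2) can be reproduced numerically. A minimal sketch (assuming Python with SciPy, not part of the source material; `scipy.linalg.eigh` solves the symmetric-definite generalized problem and returns eigenvectors normalized so that XᵀM₀X = I, which is exactly the normalization (2)):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def spd(n):
    # Random symmetric positive definite matrix.
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

n = 5
K0, M0 = spd(n), spd(n)

# Solve K0 x = lambda M0 x (equation (0)); eigh returns eigenvectors
# satisfying X^T M0 X = I, i.e. the normalization (2).
lam0, X0 = eigh(K0, M0)

print(np.allclose(X0.T @ M0 @ X0, np.eye(n)))          # True
print(np.allclose(K0 @ X0, M0 @ X0 @ np.diag(lam0)))   # True
```

The M₀-orthonormality of the columns of X0 is what justifies using the unperturbed eigenvectors as a basis in (4).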
==== First fork: get first eigenvalue perturbation ==== ===== Eigenvalue perturbation ===== We start with (3) K 0 δ x i + δ K x 0 i = λ 0 i M 0 δ x i + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i ; {\displaystyle \quad \mathbf {K} _{0}\delta \mathbf {x} _{i}+\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\delta \mathbf {M} \mathrm {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i};} we left multiply with x 0 i T {\displaystyle \mathbf {x} _{0i}^{T}} and use (2) as well as its first order variation (5); we get x 0 i T δ K x 0 i = λ 0 i x 0 i T δ M x 0 i + δ λ i {\displaystyle \mathbf {x} _{0i}^{T}\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0i}^{T}\delta \mathbf {M} \mathrm {x} _{0i}+\delta \lambda _{i}} or δ λ i = x 0 i T δ K x 0 i − λ 0 i x 0 i T δ M x 0 i {\displaystyle \delta \lambda _{i}=\mathbf {x} _{0i}^{T}\delta \mathbf {K} \mathbf {x} _{0i}-\lambda _{0i}\mathbf {x} _{0i}^{T}\delta \mathbf {M} \mathrm {x} _{0i}} We notice that it is the first order perturbation of the generalized Rayleigh quotient with fixed x 0 i {\displaystyle x_{0i}} : R ( K , M ; x 0 i ) = x 0 i T K x 0 i / x 0 i T M x 0 i , with x 0 i T M x 0 i = 1 {\displaystyle R(K,M;x_{0i})=x_{0i}^{T}Kx_{0i}/x_{0i}^{T}Mx_{0i},{\text{ with }}x_{0i}^{T}Mx_{0i}=1} Moreover, for M = I {\displaystyle M=I} , the formula δ λ i = x 0 i T δ K x 0 i {\displaystyle \delta \lambda _{i}=x_{0i}^{T}\delta Kx_{0i}} should be compared with Bauer-Fike theorem which provides a bound for eigenvalue perturbation. ===== Eigenvector perturbation ===== We left multiply (3) with x 0 j T {\displaystyle x_{0j}^{T}} for j ≠ i {\displaystyle j\neq i} and get x 0 j T K 0 δ x i + x 0 j T δ K x 0 i = λ 0 i x 0 j T M 0 δ x i + λ 0 i x 0 j T δ M x 0 i + δ λ i x 0 j T M 0 x 0 i . 
{\displaystyle \mathbf {x} _{0j}^{T}\mathbf {K} _{0}\delta \mathbf {x} _{i}+\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}+\delta \lambda _{i}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\mathbf {x} _{0i}.} We use x 0 j T K = λ 0 j x 0 j T M and x 0 j T M 0 x 0 i = 0 , {\displaystyle \mathbf {x} _{0j}^{T}K=\lambda _{0j}\mathbf {x} _{0j}^{T}M{\text{ and }}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\mathbf {x} _{0i}=0,} for j ≠ i {\displaystyle j\neq i} . λ 0 j x 0 j T M 0 δ x i + x 0 j T δ K x 0 i = λ 0 i x 0 j T M 0 δ x i + λ 0 i x 0 j T δ M x 0 i . {\displaystyle \lambda _{0j}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}.} or ( λ 0 j − λ 0 i ) x 0 j T M 0 δ x i + x 0 j T δ K x 0 i = λ 0 i x 0 j T δ M x 0 i . {\displaystyle (\lambda _{0j}-\lambda _{0i})\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}+\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}.} As the eigenvalues are assumed to be simple, for j ≠ i {\displaystyle j\neq i} ϵ i j = x 0 j T M 0 δ x i = − x 0 j T δ K x 0 i + λ 0 i x 0 j T δ M x 0 i ( λ 0 j − λ 0 i ) , i = 1 , … N ; j = 1 , … N ; j ≠ i . {\displaystyle \epsilon _{ij}=\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}={\frac {-\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}+\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}}{(\lambda _{0j}-\lambda _{0i})}},i=1,\dots N;j=1,\dots N;j\neq i.} Moreover (5) (the first order variation of (2) ) yields 2 ϵ i i = 2 x 0 i T M 0 δ x i = − x 0 i T δ M x 0 i . 
{\displaystyle 2\epsilon _{ii}=2\mathbf {x} _{0i}^{T}\mathbf {M} _{0}\delta x_{i}=-\mathbf {x} _{0i}^{T}\delta M\mathbf {x} _{0i}.} We have obtained all the components of δ x i {\displaystyle \delta x_{i}} . ==== Second fork: Straightforward manipulations ==== Substituting (4) into (3) and rearranging gives K 0 ∑ j = 1 N ε i j x 0 j + δ K x 0 i = λ 0 i M 0 ∑ j = 1 N ε i j x 0 j + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i ( 5 ) ∑ j = 1 N ε i j K 0 x 0 j + δ K x 0 i = λ 0 i M 0 ∑ j = 1 N ε i j x 0 j + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i ( applying K 0 to the sum ) ∑ j = 1 N ε i j λ 0 j M 0 x 0 j + δ K x 0 i = λ 0 i M 0 ∑ j = 1 N ε i j x 0 j + λ 0 i δ M x 0 i + δ λ i M 0 x 0 i ( using Eq. ( 1 ) ) {\displaystyle {\begin{aligned}\mathbf {K} _{0}\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {x} _{0j}+\delta \mathbf {K} \mathbf {x} _{0i}&=\lambda _{0i}\mathbf {M} _{0}\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {x} _{0j}+\lambda _{0i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}&&(5)\\\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {K} _{0}\mathbf {x} _{0j}+\delta \mathbf {K} \mathbf {x} _{0i}&=\lambda _{0i}\mathbf {M} _{0}\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {x} _{0j}+\lambda _{0i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}&&\\({\text{applying }}\mathbf {K} _{0}{\text{ to the sum}})\\\sum _{j=1}^{N}\varepsilon _{ij}\lambda _{0j}\mathbf {M} _{0}\mathbf {x} _{0j}+\delta \mathbf {K} \mathbf {x} _{0i}&=\lambda _{0i}\mathbf {M} _{0}\sum _{j=1}^{N}\varepsilon _{ij}\mathbf {x} _{0j}+\lambda _{0i}\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {M} _{0}\mathbf {x} _{0i}&&({\text{using Eq. 
}}(1))\end{aligned}}} Because the eigenvectors are M0-orthogonal when M0 is positive definite, we can remove the summations by left-multiplying by x 0 i ⊤ {\displaystyle \mathbf {x} _{0i}^{\top }} : x 0 i ⊤ ε i i λ 0 i M 0 x 0 i + x 0 i ⊤ δ K x 0 i = λ 0 i x 0 i ⊤ M 0 ε i i x 0 i + λ 0 i x 0 i ⊤ δ M x 0 i + δ λ i x 0 i ⊤ M 0 x 0 i . {\displaystyle \mathbf {x} _{0i}^{\top }\varepsilon _{ii}\lambda _{0i}\mathbf {M} _{0}\mathbf {x} _{0i}+\mathbf {x} _{0i}^{\top }\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\varepsilon _{ii}\mathbf {x} _{0i}+\lambda _{0i}\mathbf {x} _{0i}^{\top }\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}.} By use of equation (1) again: x 0 i ⊤ K 0 ε i i x 0 i + x 0 i ⊤ δ K x 0 i = λ 0 i x 0 i ⊤ M 0 ε i i x 0 i + λ 0 i x 0 i ⊤ δ M x 0 i + δ λ i x 0 i ⊤ M 0 x 0 i . ( 6 ) {\displaystyle \mathbf {x} _{0i}^{\top }\mathbf {K} _{0}\varepsilon _{ii}\mathbf {x} _{0i}+\mathbf {x} _{0i}^{\top }\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\varepsilon _{ii}\mathbf {x} _{0i}+\lambda _{0i}\mathbf {x} _{0i}^{\top }\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}.\qquad (6)} The two terms containing εii are equal because left-multiplying (1) by x 0 i ⊤ {\displaystyle \mathbf {x} _{0i}^{\top }} gives x 0 i ⊤ K 0 x 0 i = λ 0 i x 0 i ⊤ M 0 x 0 i . {\displaystyle \mathbf {x} _{0i}^{\top }\mathbf {K} _{0}\mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}.} Canceling those terms in (6) leaves x 0 i ⊤ δ K x 0 i = λ 0 i x 0 i ⊤ δ M x 0 i + δ λ i x 0 i ⊤ M 0 x 0 i . 
{\displaystyle \mathbf {x} _{0i}^{\top }\delta \mathbf {K} \mathbf {x} _{0i}=\lambda _{0i}\mathbf {x} _{0i}^{\top }\delta \mathbf {M} \mathbf {x} _{0i}+\delta \lambda _{i}\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}.} Rearranging gives δ λ i = x 0 i ⊤ ( δ K − λ 0 i δ M ) x 0 i x 0 i ⊤ M 0 x 0 i {\displaystyle \delta \lambda _{i}={\frac {\mathbf {x} _{0i}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}}{\mathbf {x} _{0i}^{\top }\mathbf {M} _{0}\mathbf {x} _{0i}}}} But by (2), this denominator is equal to 1. Thus δ λ i = x 0 i ⊤ ( δ K − λ 0 i δ M ) x 0 i . {\displaystyle \delta \lambda _{i}=\mathbf {x} _{0i}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}.} Then, as λ i ≠ λ k {\displaystyle \lambda _{i}\neq \lambda _{k}} for i ≠ k {\displaystyle i\neq k} (assumption simple eigenvalues) by left-multiplying equation (5) by x 0 k ⊤ {\displaystyle \mathbf {x} _{0k}^{\top }} : ε i k = x 0 k ⊤ ( δ K − λ 0 i δ M ) x 0 i λ 0 i − λ 0 k , i ≠ k . {\displaystyle \varepsilon _{ik}={\frac {\mathbf {x} _{0k}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}}{\lambda _{0i}-\lambda _{0k}}},\qquad i\neq k.} Or by changing the name of the indices: ε i j = x 0 j ⊤ ( δ K − λ 0 i δ M ) x 0 i λ 0 i − λ 0 j , i ≠ j . {\displaystyle \varepsilon _{ij}={\frac {\mathbf {x} _{0j}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}}{\lambda _{0i}-\lambda _{0j}}},\qquad i\neq j.} To find εii, use the fact that: x i ⊤ M x i = 1 {\displaystyle \mathbf {x} _{i}^{\top }\mathbf {M} \mathbf {x} _{i}=1} implies: ε i i = − 1 2 x 0 i ⊤ δ M x 0 i . 
{\displaystyle \varepsilon _{ii}=-{\tfrac {1}{2}}\mathbf {x} _{0i}^{\top }\delta \mathbf {M} \mathbf {x} _{0i}.} == Summary of the first order perturbation result == In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct, λ i = λ 0 i + x 0 i ⊤ ( δ K − λ 0 i δ M ) x 0 i x i = x 0 i ( 1 − 1 2 x 0 i ⊤ δ M x 0 i ) + ∑ j = 1 j ≠ i N x 0 j ⊤ ( δ K − λ 0 i δ M ) x 0 i λ 0 i − λ 0 j x 0 j {\displaystyle {\begin{aligned}\lambda _{i}&=\lambda _{0i}+\mathbf {x} _{0i}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}\\\mathbf {x} _{i}&=\mathbf {x} _{0i}\left(1-{\tfrac {1}{2}}\mathbf {x} _{0i}^{\top }\delta \mathbf {M} \mathbf {x} _{0i}\right)+\sum _{j=1 \atop j\neq i}^{N}{\frac {\mathbf {x} _{0j}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}}{\lambda _{0i}-\lambda _{0j}}}\mathbf {x} _{0j}\end{aligned}}} for infinitesimal δ K {\displaystyle \delta \mathbf {K} } and δ M {\displaystyle \delta \mathbf {M} } (the higher order terms in (3) being neglected). So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in the next section, we summarize the use of this theorem in order to obtain a first order expansion. == Theoretical derivation == === Perturbation of an implicit function.
=== In the next paragraph, we shall use the Implicit function theorem (Statement of the theorem); we notice that for a continuously differentiable function f : R n + m → R m , f : ( x , y ) ↦ f ( x , y ) {\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m},\;f:(x,y)\mapsto f(x,y)} , with an invertible Jacobian matrix J f , y ( x 0 , y 0 ) {\displaystyle J_{f,y}(x_{0},y_{0})} , from a point ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} solution of f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} , we get solutions of f ( x , y ) = 0 {\displaystyle f(x,y)=0} with x {\displaystyle x} close to x 0 {\displaystyle x_{0}} in the form y = g ( x ) {\displaystyle y=g(x)} where g {\displaystyle g} is a continuously differentiable function; moreover the Jacobian matrix of g {\displaystyle g} is provided by the linear system J f , y ( x , g ( x ) ) J g , x ( x ) + J f , x ( x , g ( x ) ) = 0 ( 6 ) {\displaystyle J_{f,y}(x,g(x))J_{g,x}(x)+J_{f,x}(x,g(x))=0\quad (6)} . As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of g {\displaystyle g} may be computed with a first order expansion of f ( x 0 + δ x , y 0 + δ y ) = 0 {\displaystyle f(x_{0}+\delta x,y_{0}+\delta y)=0} ; we get J f , x ( x , g ( x ) ) δ x + J f , y ( x , g ( x ) ) δ y = 0 {\displaystyle J_{f,x}(x,g(x))\delta x+J_{f,y}(x,g(x))\delta y=0} ; as δ y = J g , x ( x ) δ x {\displaystyle \delta y=J_{g,x}(x)\delta x} , it is equivalent to equation ( 6 ) {\displaystyle (6)} . === Eigenvalue perturbation: a theoretical basis.
=== We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce f ~ : R 2 n 2 × R n + 1 → R n + 1 {\displaystyle {\tilde {f}}:\mathbb {R} ^{2n^{2}}\times \mathbb {R} ^{n+1}\to \mathbb {R} ^{n+1}} , with f ~ ( K , M , λ , x ) = ( f ( K , M , λ , x ) f n + 1 ( x ) ) {\displaystyle {\tilde {f}}(K,M,\lambda ,x)={\binom {f(K,M,\lambda ,x)}{f_{n+1}(x)}}} with f ( K , M , λ , x ) = K x − λ x , f n + 1 ( M , x ) = x T M x − 1 {\displaystyle f(K,M,\lambda ,x)=Kx-\lambda x,f_{n+1}(M,x)=x^{T}Mx-1} . In order to use the Implicit function theorem, we study the invertibility of the Jacobian J f ~ ; λ , x ( K , M ; λ 0 i , x 0 i ) {\displaystyle J_{{\tilde {f}};\lambda ,x}(K,M;\lambda _{0i},x_{0i})} with J f ~ ; λ , x ( K , M ; λ i , x i ) ( δ λ , δ x ) = ( − M x i 0 ) δ λ + ( K − λ M 2 x i T M ) δ x i {\displaystyle J_{{\tilde {f}};\lambda ,x}(K,M;\lambda _{i},x_{i})(\delta \lambda ,\delta x)={\binom {-Mx_{i}}{0}}\delta \lambda +{\binom {K-\lambda M}{2x_{i}^{T}M}}\delta x_{i}} . Indeed, the solution of J f ~ ; λ 0 i , x 0 i ( K , M ; λ 0 i , x 0 i ) ( δ λ i , δ x i ) = {\displaystyle J_{{\tilde {f}};\lambda _{0i},x_{0i}}(K,M;\lambda _{0i},x_{0i})(\delta \lambda _{i},\delta x_{i})=} ( y y n + 1 ) {\displaystyle {\binom {y}{y_{n+1}}}} may be derived with computations similar to the derivation of the expansion. 
δ λ i = − x 0 i T y , and ( λ 0 i − λ 0 j ) x 0 j T M δ x i = x j T y , j = 1 , … , n , j ≠ i ; {\displaystyle \delta \lambda _{i}=-x_{0i}^{T}y,\;{\text{ and }}(\lambda _{0i}-\lambda _{0j})x_{0j}^{T}M\delta x_{i}=x_{j}^{T}y,j=1,\dots ,n,j\neq i\;;} or x 0 j T M δ x i = x j T y / ( λ 0 i − λ 0 j ) , and 2 x 0 i T M δ x i = y n + 1 {\displaystyle {\text{ or }}x_{0j}^{T}M\delta x_{i}=x_{j}^{T}y/(\lambda _{0i}-\lambda _{0j}),{\text{ and }}\;2x_{0i}^{T}M\delta x_{i}=y_{n+1}} When λ i {\displaystyle \lambda _{i}} is a simple eigenvalue, as the eigenvectors x 0 j , j = 1 , … , n {\displaystyle x_{0j},j=1,\dots ,n} form an M {\displaystyle M} -orthonormal basis, for any right-hand side we have obtained one solution; therefore, the Jacobian is invertible. The implicit function theorem provides a continuously differentiable function ( K , M ) ↦ ( λ i ( K , M ) , x i ( K , M ) ) {\displaystyle (K,M)\mapsto (\lambda _{i}(K,M),x_{i}(K,M))} , hence the expansion with little o notation: λ i = λ 0 i + δ λ i + o ( ‖ δ K ‖ + ‖ δ M ‖ ) {\displaystyle \lambda _{i}=\lambda _{0i}+\delta \lambda _{i}+o(\|\delta K\|+\|\delta M\|)} x i = x 0 i + δ x i + o ( ‖ δ K ‖ + ‖ δ M ‖ ) {\displaystyle x_{i}=x_{0i}+\delta x_{i}+o(\|\delta K\|+\|\delta M\|)} , with δ λ i = x 0 i T δ K x 0 i − λ 0 i x 0 i T δ M x 0 i ; {\displaystyle \delta \lambda _{i}=\mathbf {x} _{0i}^{T}\delta \mathbf {K} \mathbf {x} _{0i}-\lambda _{0i}\mathbf {x} _{0i}^{T}\delta \mathbf {M} \mathrm {x} _{0i};} δ x i = ∑ j = 1 n ( x 0 j T M 0 δ x i ) x 0 j with {\displaystyle \delta x_{i}=\sum _{j=1}^{n}\left(\mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}\right)\mathbf {x} _{0j}{\text{ with}}} x 0 j T M 0 δ x i = − x 0 j T δ K x 0 i + λ 0 i x 0 j T δ M x 0 i ( λ 0 j − λ 0 i ) , i = 1 , … n ; j = 1 , … n ; j ≠ i .
{\displaystyle \mathbf {x} _{0j}^{T}\mathbf {M} _{0}\delta \mathbf {x} _{i}={\frac {-\mathbf {x} _{0j}^{T}\delta \mathbf {K} \mathbf {x} _{0i}+\lambda _{0i}\mathbf {x} _{0j}^{T}\delta \mathbf {M} \mathrm {x} _{0i}}{(\lambda _{0j}-\lambda _{0i})}},i=1,\dots n;j=1,\dots n;j\neq i.} This establishes the first order expansion of the perturbed eigenvalues and eigenvectors. == Results of sensitivity analysis with respect to the entries of the matrices == === The results === The expansions above mean it is possible to efficiently do a sensitivity analysis on λi as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric and so changing Kkℓ will also change Kℓk, hence the (2 − δkℓ) term.) ∂ λ i ∂ K ( k ℓ ) = ∂ ∂ K ( k ℓ ) ( λ 0 i + x 0 i ⊤ ( δ K − λ 0 i δ M ) x 0 i ) = x 0 i ( k ) x 0 i ( ℓ ) ( 2 − δ k ℓ ) ∂ λ i ∂ M ( k ℓ ) = ∂ ∂ M ( k ℓ ) ( λ 0 i + x 0 i ⊤ ( δ K − λ 0 i δ M ) x 0 i ) = − λ i x 0 i ( k ) x 0 i ( ℓ ) ( 2 − δ k ℓ ) . {\displaystyle {\begin{aligned}{\frac {\partial \lambda _{i}}{\partial \mathbf {K} _{(k\ell )}}}&={\frac {\partial }{\partial \mathbf {K} _{(k\ell )}}}\left(\lambda _{0i}+\mathbf {x} _{0i}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}\right)=x_{0i(k)}x_{0i(\ell )}\left(2-\delta _{k\ell }\right)\\{\frac {\partial \lambda _{i}}{\partial \mathbf {M} _{(k\ell )}}}&={\frac {\partial }{\partial \mathbf {M} _{(k\ell )}}}\left(\lambda _{0i}+\mathbf {x} _{0i}^{\top }\left(\delta \mathbf {K} -\lambda _{0i}\delta \mathbf {M} \right)\mathbf {x} _{0i}\right)=-\lambda _{i}x_{0i(k)}x_{0i(\ell )}\left(2-\delta _{k\ell }\right).\end{aligned}}} Similarly ∂ x i ∂ K ( k ℓ ) = ∑ j = 1 j ≠ i N x 0 j ( k ) x 0 i ( ℓ ) ( 2 − δ k ℓ ) λ 0 i − λ 0 j x 0 j ∂ x i ∂ M ( k ℓ ) = − x 0 i x 0 i ( k ) x 0 i ( ℓ ) 2 ( 2 − δ k ℓ ) − ∑ j = 1 j ≠ i N λ 0 i x 0 j ( k ) x 0 i ( ℓ ) λ 0 i − λ 0 j x 0 j ( 2 − δ k ℓ ) .
{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {x} _{i}}{\partial \mathbf {K} _{(k\ell )}}}&=\sum _{j=1 \atop j\neq i}^{N}{\frac {x_{0j(k)}x_{0i(\ell )}\left(2-\delta _{k\ell }\right)}{\lambda _{0i}-\lambda _{0j}}}\mathbf {x} _{0j}\\{\frac {\partial \mathbf {x} _{i}}{\partial \mathbf {M} _{(k\ell )}}}&=-\mathbf {x} _{0i}{\frac {x_{0i(k)}x_{0i(\ell )}}{2}}(2-\delta _{k\ell })-\sum _{j=1 \atop j\neq i}^{N}{\frac {\lambda _{0i}x_{0j(k)}x_{0i(\ell )}}{\lambda _{0i}-\lambda _{0j}}}\mathbf {x} _{0j}\left(2-\delta _{k\ell }\right).\end{aligned}}} === Eigenvalue sensitivity, a small example === A simple case is K = [ 0 b b − 2 ] {\displaystyle K={\begin{bmatrix}0&b\\b&-2\end{bmatrix}}} ; eigenvalues and eigenvectors can also be computed with online tools such as WIMS, or with software such as SageMath. The smallest eigenvalue is λ = − [ b 2 + 1 + 1 ] {\displaystyle \lambda =-\left[{\sqrt {b^{2}+1}}+1\right]} and an explicit computation gives ∂ λ ∂ b = − b b 2 + 1 {\displaystyle {\frac {\partial \lambda }{\partial b}}={\frac {-b}{\sqrt {b^{2}+1}}}} ; moreover, an associated eigenvector is x ~ 0 = [ b , − ( b 2 + 1 + 1 ) ] T {\displaystyle {\tilde {x}}_{0}=[b,-({\sqrt {b^{2}+1}}+1)]^{T}} ; it is not a unit vector, so x 01 x 02 = x ~ 01 x ~ 02 / ‖ x ~ 0 ‖ 2 {\displaystyle x_{01}x_{02}={\tilde {x}}_{01}{\tilde {x}}_{02}/\|{\tilde {x}}_{0}\|^{2}} ; we get ‖ x ~ 0 ‖ 2 = 2 b 2 + 1 ( b 2 + 1 + 1 ) {\displaystyle \|{\tilde {x}}_{0}\|^{2}=2{\sqrt {b^{2}+1}}({\sqrt {b^{2}+1}}+1)} and x ~ 01 x ~ 02 = − b ( b 2 + 1 + 1 ) {\displaystyle {\tilde {x}}_{01}{\tilde {x}}_{02}=-b({\sqrt {b^{2}+1}}+1)} ; hence x 01 x 02 = − b 2 b 2 + 1 {\displaystyle x_{01}x_{02}=-{\frac {b}{2{\sqrt {b^{2}+1}}}}} ; for this example, we have checked that ∂ λ ∂ b = 2 x 01 x 02 {\displaystyle {\frac {\partial \lambda }{\partial b}}=2x_{01}x_{02}} or δ λ = 2 x 01 x 02 δ b {\displaystyle \delta \lambda =2x_{01}x_{02}\delta b} .
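More generally, the first-order eigenvalue formula can be checked against an exact computation. A minimal numerical sketch (assuming Python with NumPy and SciPy; the random test matrices here are illustrative and not from the source):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def sym(n, scale=1.0):
    # Random symmetric matrix with entries of the given magnitude.
    a = scale * rng.standard_normal((n, n))
    return (a + a.T) / 2.0

n = 6
K0 = sym(n) + n * np.eye(n)            # symmetric positive definite
M0 = sym(n) + n * np.eye(n)
dK, dM = sym(n, 1e-6), sym(n, 1e-6)    # small symmetric perturbations

lam0, X0 = eigh(K0, M0)                # columns satisfy x0j^T M0 x0i = delta_ij
lam, _ = eigh(K0 + dK, M0 + dM)        # exact perturbed eigenvalues

# First-order prediction: lambda_i ~ lambda_0i + x0i^T (dK - lambda_0i dM) x0i
pred = lam0 + np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i]
                        for i in range(n)])

err = np.max(np.abs(lam - pred))
print(err)  # second order in the perturbation, far smaller than 1e-6
```

The residual left by the first-order prediction is quadratic in the size of the perturbation, consistent with the neglected higher-order terms in (3).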
== Existence of eigenvectors == Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of N {\displaystyle N} linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have N {\displaystyle N} linearly independent eigenvectors, though a sufficient condition is that K {\displaystyle \mathbf {K} } and M {\displaystyle \mathbf {M} } be simultaneously diagonalizable. == The case of repeated eigenvalues == A technical report of Rellich for perturbation of eigenvalue problems provides several examples. The elementary examples are in chapter 2. The report may be downloaded from archive.org. We reproduce an example in which the eigenvectors have a nasty behavior. === Example 1 === Consider the following matrix B ( ϵ ) = [ cos ⁡ ( 2 / ϵ ) sin ⁡ ( 2 / ϵ ) sin ⁡ ( 2 / ϵ ) − cos ⁡ ( 2 / ϵ ) ] {\displaystyle B(\epsilon )={\begin{bmatrix}\cos(2/\epsilon )&\sin(2/\epsilon )\\\sin(2/\epsilon )&-\cos(2/\epsilon )\end{bmatrix}}} and A ( ϵ ) = I − e − 1 / ϵ 2 B ; {\displaystyle A(\epsilon )=I-e^{-1/\epsilon ^{2}}B;} A ( 0 ) = I . {\displaystyle A(0)=I.} For ϵ ≠ 0 {\displaystyle \epsilon \neq 0} , the matrix A ( ϵ ) {\displaystyle A(\epsilon )} has eigenvectors Φ 1 = [ cos ⁡ ( 1 / ϵ ) , sin ⁡ ( 1 / ϵ ) ] T ; Φ 2 = [ sin ⁡ ( 1 / ϵ ) , − cos ⁡ ( 1 / ϵ ) ] T {\displaystyle \Phi ^{1}=[\cos(1/\epsilon ),\sin(1/\epsilon )]^{T};\Phi ^{2}=[\sin(1/\epsilon ),-\cos(1/\epsilon )]^{T}} belonging to eigenvalues λ 1 = 1 − e − 1 / ϵ 2 , λ 2 = 1 + e − 1 / ϵ 2 {\displaystyle \lambda _{1}=1-e^{-1/\epsilon ^{2}},\lambda _{2}=1+e^{-1/\epsilon ^{2}}} .
Since λ 1 ≠ λ 2 {\displaystyle \lambda _{1}\neq \lambda _{2}} for ϵ ≠ 0 {\displaystyle \epsilon \neq 0} , if u j ( ϵ ) , j = 1 , 2 , {\displaystyle u^{j}(\epsilon ),j=1,2,} are any normalized eigenvectors belonging to λ j ( ϵ ) , j = 1 , 2 {\displaystyle \lambda _{j}(\epsilon ),j=1,2} respectively, then u j = e i α j ( ϵ ) Φ j ( ϵ ) {\displaystyle u^{j}=e^{i\alpha _{j}(\epsilon )}\Phi ^{j}(\epsilon )} where α j , j = 1 , 2 {\displaystyle \alpha _{j},j=1,2} are real for ϵ ≠ 0. {\displaystyle \epsilon \neq 0.} It is impossible to define α 1 ( ϵ ) {\displaystyle \alpha _{1}(\epsilon )} , say, in such a way that u 1 ( ϵ ) {\displaystyle u^{1}(\epsilon )} tends to a limit as ϵ → 0 , {\displaystyle \epsilon \rightarrow 0,} because the magnitude of its first component is | cos ⁡ ( 1 / ϵ ) | {\displaystyle |\cos(1/\epsilon )|} , which has no limit as ϵ → 0. {\displaystyle \epsilon \rightarrow 0.} Note in this example that A j k ( ϵ ) {\displaystyle A_{jk}(\epsilon )} is not only continuous but also has continuous derivatives of all orders. Rellich draws the following important consequence: "Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator A ( ϵ ) {\displaystyle A(\epsilon )} does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue." === Example 2 === This example is less nasty than the previous one. Suppose [ K 0 ] {\displaystyle [K_{0}]} is the 2×2 identity matrix; then any vector is an eigenvector, and u 0 = [ 1 , 1 ] T / 2 {\displaystyle u_{0}=[1,1]^{T}/{\sqrt {2}}} is one possible eigenvector. 
But if one makes a small perturbation, such as [ K ] = [ K 0 ] + [ ϵ 0 0 0 ] {\displaystyle [K]=[K_{0}]+{\begin{bmatrix}\epsilon &0\\0&0\end{bmatrix}}} then the eigenvectors are v 1 = [ 1 , 0 ] T {\displaystyle v_{1}=[1,0]^{T}} and v 2 = [ 0 , 1 ] T {\displaystyle v_{2}=[0,1]^{T}} ; they are constant with respect to ϵ {\displaystyle \epsilon } , so that ‖ u 0 − v 1 ‖ {\displaystyle \|u_{0}-v_{1}\|} is constant and does not go to zero. == See also == Perturbation theory (quantum mechanics) Bauer–Fike theorem == References == == Further reading == === Books === Ren-Cang Li (2014). "Matrix Perturbation Theory". In Hogben, Leslie (ed.). Handbook of Linear Algebra (Second ed.). CRC Press. ISBN 978-1466507289. Rellich, F., & Berkowitz, J. (1969). Perturbation Theory of Eigenvalue Problems. CRC Press. Bhatia, R. (1987). Perturbation Bounds for Matrix Eigenvalues. SIAM. === Report === Rellich, Franz (1954). Perturbation Theory of Eigenvalue Problems. New York: Courant Institute of Mathematical Sciences, New York University. === Journal papers === Simon, B. (1982). Large orders and summability of eigenvalue perturbation theory: a mathematical overview. International Journal of Quantum Chemistry, 21(1), 3-25. Crandall, M. G., & Rabinowitz, P. H. (1973). Bifurcation, perturbation of simple eigenvalues, and linearized stability. Archive for Rational Mechanics and Analysis, 52(2), 161-180. Stewart, G. W. (1973). Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Review, 15(4), 727-764. Löwdin, P. O. (1962). Studies in perturbation theory. IV. Solution of eigenvalue problem by projection operator formalism. Journal of Mathematical Physics, 3(5), 969-982.
Wikipedia/Eigenvalue_perturbation
In mathematics, the method of dominant balance approximates the solution to an equation by solving a simplified form of the equation containing two or more of the equation's terms that most influence (dominate) the solution, and excluding terms contributing only small modifications to this approximate solution. Following an initial solution, iteration of the procedure may generate additional terms of an asymptotic expansion providing a more accurate solution. An early example of the dominant balance method is the Newton polygon method. Newton developed this method to find an explicit approximation for an algebraic function. Newton expressed the function as proportional to the independent variable raised to a power, retained only the lowest-degree polynomial terms (dominant terms), and solved this simplified reduced equation to obtain an approximate solution. Dominant balance has a broad range of applications, solving differential equations arising in fluid mechanics, plasma physics, turbulence, combustion, nonlinear optics, geophysical fluid dynamics, and neuroscience. == Asymptotic relations == Consider functions f ( z ) {\textstyle f(z)} and g ( z ) {\displaystyle g(z)} of a parameter or independent variable z {\textstyle z} such that the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} has a limit as z {\textstyle z} approaches the limit L {\textstyle L} . The function f ( z ) {\textstyle f(z)} is much less than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} , written as f ( z ) ≪ g ( z ) ( z → L ) {\textstyle f(z)\ll g(z)\ (z\to L)} , if the limit of the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} is zero as z {\textstyle z} approaches L {\textstyle L} . 
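In a computer algebra system these limit-based comparisons are direct to evaluate. A small sketch (the functions compared are illustrative choices, not taken from the text):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
# f << g as z -> 0 means limit(f/g, z -> 0) = 0
assert sp.limit(z**2 / z, z, 0) == 0               # z^2 << z       (z -> 0)
assert sp.limit(sp.exp(-1 / z) / z**5, z, 0) == 0  # e^(-1/z) << z^5 (z -> 0+)
```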
The relation " f ( z ) {\textstyle f(z)} is of lower order than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} ", written using little-o notation f ( z ) = o ( g ( z ) ) ( z → L ) {\textstyle f(z)=o(g(z))\ (z\to L)} , is identical to the relation " f ( z ) {\textstyle f(z)} is much less than g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} ". The function f ( z ) {\textstyle f(z)} is equivalent to g ( z ) {\textstyle g(z)} as z {\textstyle z} approaches L {\textstyle L} , written as f ( z ) ∼ g ( z ) ( z → L ) {\textstyle f(z)\sim g(z)\ (z\to L)} , if the limit of the quotient f ( z ) / g ( z ) {\textstyle f(z)/g(z)} is 1 as z {\textstyle z} approaches L {\textstyle L} . This definition implies that the zero function, f ( z ) = 0 {\textstyle f(z)=0} for all values of z {\textstyle z} , can never be equivalent to any other function. Asymptotically equivalent functions remain asymptotically equivalent under integration if requirements related to convergence are met. There are more specific requirements for asymptotically equivalent functions to remain asymptotically equivalent under differentiation. == Equation properties == An equation's approximate solution is s ( z ) {\textstyle s(z)} as z {\textstyle z} approaches limit L {\textstyle L} . The equation's terms that may be constants or contain this solution are T 0 ( s ) , T 1 ( s ) , … , T n ( s ) {\textstyle T_{0}(s),T_{1}(s),\ldots ,T_{n}(s)} . If the approximate solution is fully correct, the equation's terms sum to zero in this equation: T 0 ( s ) + T 1 ( s ) + … + T n ( s ) = 0. {\displaystyle T_{0}(s)+T_{1}(s)+\ldots +T_{n}(s)=0.} For distinct integer indices i , j {\textstyle i,j} , this equation is a sum of two terms and a remainder R i j ( s ) {\textstyle R_{ij}(s)} expressed as T i ( s ) + T j ( s ) + R i j ( s ) = 0 R i j ( s ) = ∑ k = 0 k ≠ i , k ≠ j n T k ( s ) . 
{\displaystyle {\begin{aligned}&T_{i}(s)+T_{j}(s)+R_{ij}(s)=0\\&R_{ij}(s)=\sum _{{k=0} \atop {k\neq i,k\neq j}}^{n}T_{k}(s).\end{aligned}}} Balance equation terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} means make these terms equal and asymptotically equivalent by finding the function s ( z ) {\textstyle s(z)} that solves the reduced equation T i ( s ) + T j ( s ) = 0 {\textstyle T_{i}(s)+T_{j}(s)=0} with T i ( s ) ≠ 0 {\textstyle T_{i}(s)\neq 0} and T j ( s ) ≠ 0 {\textstyle T_{j}(s)\neq 0} . This solution s ( z ) {\textstyle s(z)} is consistent if terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} are dominant; dominant means the remaining equation terms R i j ( s ) {\textstyle R_{ij}(s)} are much less than terms T i ( s ) {\textstyle T_{i}(s)} and T j ( s ) {\textstyle T_{j}(s)} as z {\textstyle z} approaches L {\textstyle L} . A consistent solution that balances two equation terms may generate an accurate approximation to the full equation's solution for z {\textstyle z} values approaching L {\textstyle L} . Approximate solutions arising from balancing different terms of an equation may generate distinct approximate solutions e.g. inner and outer layer solutions. Substituting the scaled function s ( z ) = ( z − L ) p s ~ ( z ) {\textstyle s(z)=(z-L)^{p}{\tilde {s}}(z)} into the equation and taking the limit as z {\textstyle z} approaches L {\textstyle L} may generate simplified reduced equations for distinct exponent values of p {\textstyle p} . These simplified equations are called distinguished limits and identify balanced dominant equation terms. The scale transformation generates the scaled functions. The dominant balance method applies scale transformations to balance equation terms whose factors contain distinct exponents. 
For example, T i ( s ) {\textstyle T_{i}(s)} contains factor ( z − L ) q {\textstyle (z-L)^{q}} and term T j ( s ) {\textstyle T_{j}(s)} contains factor ( z − L ) r {\textstyle (z-L)^{r}} with q ≠ r {\textstyle q\neq r} . Scaled functions are applied to differential equations when z {\textstyle z} is an equation parameter, not the differential equation's independent variable. The Kruskal-Newton diagram facilitates identifying the scaled functions needed for dominant balance of algebraic and differential equations. For differential equation solutions containing an irregular singularity, the leading behavior is the first term of an asymptotic series solution that remains when the independent variable z {\textstyle z} approaches an irregular singularity L {\textstyle L} . The controlling factor is the fastest changing part of the leading behavior. It is advised to "show that the equation for the function obtained by factoring off the dominant balance solution from the exact solution itself has a solution that varies less rapidly than the dominant balance solution." == Algorithm == The input is the set of equation terms and the limit L. The output is the set of approximate solutions. For each pair of distinct equation terms T i ( s ) , T j ( s ) {\textstyle T_{i}(s),T_{j}(s)} the algorithm applies a scale transformation if needed, balances the selected terms by finding a function that solves the reduced equation, and then determines whether this function is consistent. If the function balances the terms and is consistent, the algorithm adds the function to the set of approximate solutions; otherwise the algorithm rejects the function. The process is repeated for each pair of distinct equation terms. 
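The output of such a balance analysis can be spot-checked numerically. For the algebraic example analyzed below, 1 − 16s + zs⁵ = 0 as z → 0, the five approximate solutions found by dominant balance should track the exact roots when z is small (a numpy sketch; the value of z and the tolerance are illustrative):

```python
import numpy as np

z = 1e-8
# exact roots of z*s^5 - 16*s + 1 = 0
roots = np.roots([z, 0, 0, 0, -16.0, 1.0])
# dominant-balance approximations: 1/16 and the four fourth-roots of 16/z
approx = [1 / 16] + [2 * w / z**0.25 for w in (1, -1, 1j, -1j)]
for a in approx:
    assert min(abs(roots - a)) / abs(a) < 1e-3  # each approximation is near a root
```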
Inputs Set of equation terms { T 0 ( s ) , T 1 ( s ) , … , T n ( s ) } {\textstyle \{T_{0}(s),T_{1}(s),\ldots ,T_{n}(s)\}} and limit L {\textstyle L} Output Set of approximate solutions { s 0 ( z ) , s 1 ( z ) , … } {\textstyle \{s_{0}(z),s_{1}(z),\dots \}} For each pair of distinct equation terms T i ( s ) , T j ( s ) {\textstyle T_{i}(s),T_{j}(s)} do: Apply a scale transformation if needed. Solve the reduced equation: T i ( s ) + T j ( s ) = 0 {\textstyle T_{i}(s)+T_{j}(s)=0} with T i ( s ) ≠ 0 {\textstyle T_{i}(s)\neq 0} and T j ( s ) ≠ 0 {\textstyle T_{j}(s)\neq 0} . Verify consistency: R i j ( s ) ≪ T i ( s ) ( z → L ) {\textstyle R_{ij}(s)\ll T_{i}(s)\ (z\to L)} and R i j ( s ) ≪ T j ( s ) ( z → L ) . {\textstyle R_{ij}(s)\ll T_{j}(s)\ (z\to L).} If function s ( z ) {\textstyle s(z)} is consistent and solves the reduced equation, add this function to the set of approximate solutions, otherwise reject the function. == Improved accuracy == The method may be iterated to generate additional terms of an asymptotic expansion to provide a more accurate solution. Iterative methods such as the Newton-Raphson method may generate a more accurate solution. A perturbation series, using the approximate solution as the first term, may also generate a more accurate solution. == Examples == === Algebraic function === The dominant balance method will find an explicit approximate expression for the multi-valued function s = s ( z ) {\textstyle s=s(z)} defined by the equation 1 − 16 s + z s 5 = 0 {\textstyle 1-16s+zs^{5}=0} as z {\textstyle z} approaches zero. ==== Input ==== The set of equation terms is { 1 , − 16 s , z s 5 } {\textstyle \{1,-16s,zs^{5}\}} and the limit is zero. ==== First term pair ==== Select the terms 1 {\textstyle 1} and − 16 s {\textstyle -16s} . The scale transformation is not required. Solve the reduced equation: 1 − 16 s = 0 , s ( z ) = 1 16 {\displaystyle 1-16s=0,s(z)={\tfrac {1}{16}}} . 
Verify consistency: z s 5 ≪ 1 ( z → 0 ) , z s 5 ≪ 16 s ( z → 0 ) {\displaystyle zs^{5}\ll 1\ (z\to 0),\ zs^{5}\ll 16s\ (z\to 0)\ } for s ( z ) = 1 16 . {\displaystyle s(z)={\tfrac {1}{16}}.} Add this function to the set of approximate solutions: s 0 ( z ) = 1 16 {\displaystyle s_{0}(z)={\tfrac {1}{16}}} . ==== Second term pair ==== Select the terms − 16 s {\displaystyle -16s} and z s 5 {\displaystyle zs^{5}} . Apply the scale transformation s = z − 1 / 4 s ~ {\displaystyle s=z^{-1/4}{\tilde {s}}} . The transformed equation is z 1 / 4 − 16 s ~ + s ~ 5 = 0 {\displaystyle z^{1/4}-16{\tilde {s}}+{\tilde {s}}^{5}=0} . Solve the reduced equation: − 16 s ~ + s ~ 5 = 0 , s ~ = 2 , − 2 , 2 i , − 2 i {\displaystyle -16{\tilde {s}}+{\tilde {s}}^{5}=0,\ {\tilde {s}}=2,-2,2i,-2i} . Verify consistency: z 1 / 4 ≪ 16 s ~ ( z → 0 ) , z 1 / 4 ≪ s ~ 5 ( z → 0 ) {\displaystyle z^{1/4}\ll 16{\tilde {s}}\ (z\to 0),\ z^{1/4}\ll {\tilde {s}}^{5}\ (z\to 0)\ } for s ~ = 2 , − 2 , 2 i , − 2 i . {\displaystyle {\tilde {s}}=2,-2,2i,-2i.} Add these functions to the set of approximate solutions: s 1 ( z ) = 2 z 1 / 4 , s 2 ( z ) = − 2 z 1 / 4 , s 3 ( z ) = 2 i z 1 / 4 , s 4 ( z ) = − 2 i z 1 / 4 . {\displaystyle s_{1}(z)={\frac {2}{z^{1/4}}},s_{2}(z)={\frac {-2}{z^{1/4}}},s_{3}(z)={\frac {2i}{z^{1/4}}},s_{4}(z)={\frac {-2i}{z^{1/4}}}.} ==== Third term pair ==== Select the terms 1 {\displaystyle 1} and z s 5 {\displaystyle zs^{5}} . Apply the scale transformation s = z − 1 / 5 s ~ {\displaystyle s=z^{-1/5}{\tilde {s}}} . The transformed equation is 1 − 16 z − 1 / 5 s ~ + s ~ 5 = 0. {\displaystyle 1-16z^{-1/5}{\tilde {s}}+{\tilde {s}}^{5}=0.} Solve the reduced equation: 1 + s ~ 5 = 0 , s ~ = ( − 1 ) 1 / 5 . 
{\displaystyle 1+{\tilde {s}}^{5}=0,\ {\tilde {s}}=(-1)^{1/5}.} The function is not consistent: − 16 z − 1 / 5 s ~ ≫ 1 ( z → 0 ) , z − 1 / 5 s ~ ≫ s ~ 5 ( z → 0 ) {\displaystyle -16z^{-1/5}{\tilde {s}}\gg 1\ (z\to 0),\ z^{-1/5}{\tilde {s}}\gg {\tilde {s}}^{5}\ (z\to 0)\ } for s ~ = ( − 1 ) 1 / 5 . {\displaystyle {\tilde {s}}=(-1)^{1/5}.} Reject this function: s = z − 1 / 5 ( − 1 ) 1 / 5 . {\displaystyle s=z^{-1/5}(-1)^{1/5}.} ==== Output ==== The set of approximate solutions has 5 functions: { 1 16 , 2 z 1 / 4 , − 2 z 1 / 4 , 2 i z 1 / 4 , − 2 i z 1 / 4 } . {\displaystyle \left\{{\frac {1}{16}},{\frac {2}{z^{1/4}}},{\frac {-2}{z^{1/4}}},{\frac {2i}{z^{1/4}}},{\frac {-2i}{z^{1/4}}}\right\}.} ==== Perturbation series solution ==== The approximate solutions are the first terms in the perturbation series solutions. s 0 ( z ) = 1 16 + 1 16777216 z 1 + 5 17592186044416 z 2 + … , s 1 ( z ) = 2 z 1 / 4 − 1 64 − 5 16384 z 1 4 − 5 524288 z 1 2 − … , s 2 ( z ) = − 2 z 1 / 4 − 1 64 + 5 16384 z 1 4 − 5 524288 z 1 2 + … , s 3 ( z ) = 2 i z 1 / 4 − 1 64 + 5 i 16384 z 1 4 + 5 524288 z 1 2 − … s 4 ( z ) = − 2 i z 1 / 4 − 1 64 − 5 i 16384 z 1 4 + 5 524288 z 1 2 + … , {\displaystyle {\begin{aligned}&s_{0}(z)={\frac {1}{16}}+{\frac {1}{16777216}}z^{1}+{\frac {5}{17592186044416}}z^{2}+\ldots ,\\&s_{1}(z)={\frac {2}{z^{1/4}}}-{\frac {1}{64}}-{\frac {5}{16384}}z^{\frac {1}{4}}-{\frac {5}{524288}}z^{\frac {1}{2}}-\ldots ,\\&s_{2}(z)=-{\frac {2}{z^{1/4}}}-{\frac {1}{64}}+{\frac {5}{16384}}z^{\frac {1}{4}}-{\frac {5}{524288}}z^{\frac {1}{2}}+\ldots ,\\&s_{3}(z)={\frac {2i}{z^{1/4}}}-{\frac {1}{64}}+{\frac {5i}{16384}}z^{\frac {1}{4}}+{\frac {5}{524288}}z^{\frac {1}{2}}-\ldots \\&s_{4}(z)=-{\frac {2i}{z^{1/4}}}-{\frac {1}{64}}-{\frac {5i}{16384}}z^{\frac {1}{4}}+{\frac {5}{524288}}z^{\frac {1}{2}}+\ldots ,\\\end{aligned}}} === Differential equation === The differential equation z 3 w ′ ′ − w = 0 {\textstyle z^{3}w^{\prime \prime }-w=0} is known to have a solution with an exponential leading 
term. The transformation w ( z ) = e s ( z ) {\textstyle w(z)=e^{s(z)}} leads to the differential equation 1 − z 3 ( s ′ ) 2 − z 3 s ′ ′ = 0 {\textstyle 1-z^{3}(s^{\prime })^{2}-z^{3}s^{\prime \prime }=0} . The dominant balance method will find an approximate solution as z {\textstyle z} approaches zero. Scaled functions will not be used because z {\textstyle z} is the differential equation's independent variable, not a differential equation parameter. ==== Input ==== The set of equation terms is { 1 , − z 3 ( s ′ ) 2 , − z 3 s ′ ′ } {\textstyle \{1,-z^{3}(s^{\prime })^{2},-z^{3}s^{\prime \prime }\}} and the limit is zero. ===== First term pair ===== Select 1 {\displaystyle 1} and − z 3 ( s ′ ) 2 {\displaystyle -z^{3}(s^{\prime })^{2}} . The scale transformation is not required. Solve the reduced equation: 1 − z 3 ( s ′ ) 2 = 0 , s ( z ) = ± 2 z − 1 / 2 {\displaystyle 1-z^{3}(s^{\prime })^{2}=0,\ s(z)=\pm 2z^{-1/2}} Verify consistency: z 3 s ′ ′ ≪ 1 ( z → 0 ) , z 3 s ′ ′ ≪ z 3 ( s ′ ) 2 ( z → 0 ) {\displaystyle z^{3}s^{\prime \prime }\ll 1\ (z\to 0),\ z^{3}s^{\prime \prime }\ll z^{3}(s^{\prime })^{2}\ (z\to 0)} for s ( z ) = ± 2 z − 1 / 2 . {\displaystyle s(z)=\pm 2z^{-1/2}.} Add these 2 functions to the set of approximate solutions: s + ( z ) = + 2 z − 1 / 2 , s − ( z ) = − 2 z − 1 / 2 . {\displaystyle s_{+}(z)=+2z^{-1/2},\ s_{-}(z)=-2z^{-1/2}.} ==== Second term pair ==== Select 1 {\displaystyle 1} and − z 3 s ′ ′ {\displaystyle -z^{3}s^{\prime \prime }} The scale transformation is not required. Solve the reduced equation: 1 − z 3 s ′ ′ = 0 , s ( z ) = 1 2 z − 1 {\displaystyle 1-z^{3}s^{\prime \prime }=0,\ s(z)={\tfrac {1}{2}}z^{-1}} The function is not consistent: z 3 ( s ′ ) 2 ≫ 1 ( z → 0 ) , z 3 ( s ′ ) 2 ≫ z 3 s ′ ′ ( z → 0 ) {\displaystyle z^{3}(s^{\prime })^{2}\gg 1\ (z\to 0),\ z^{3}(s^{\prime })^{2}\gg z^{3}s^{\prime \prime }\ (z\to 0)} for s ( z ) = 1 2 z − 1 . {\displaystyle s(z)={\tfrac {1}{2}}z^{-1}.} Reject this function: s ( z ) = 1 2 z − 1 . 
{\displaystyle s(z)={\tfrac {1}{2}}z^{-1}.} . ==== Third term pair ==== Select − z 3 ( s ′ ) 2 {\displaystyle -z^{3}(s^{\prime })^{2}} and − z 3 s ′ ′ {\displaystyle -z^{3}s^{\prime \prime }} . The scale transformation is not required. Solve the reduced equation: z 3 ( s ′ ) 2 + z 3 s ′ ′ = 0 , s ( z ) = ln ⁡ z {\displaystyle z^{3}(s^{\prime })^{2}+z^{3}s^{\prime \prime }=0,\ s(z)=\ln z} . The function is not consistent: 1 ≫ z 3 ( s ′ ) 2 ( z → 0 ) {\displaystyle 1\gg z^{3}(s^{\prime })^{2}\ (z\to 0)\ } and 1 ≫ z 3 s ′ ′ ( z → 0 ) {\displaystyle \ 1\gg \ z^{3}s^{\prime \prime }\ (z\to 0)} for s ( z ) = ln ⁡ z . {\displaystyle s(z)=\ln z.} Reject this function: s ( z ) = ln ⁡ z . {\displaystyle s(z)=\ln z.} ==== Output ==== The set of approximate solutions has 2 functions: { + 2 z − 1 / 2 , − 2 z − 1 / 2 } . {\displaystyle \left\{+2z^{-1/2},-2z^{-1/2}\right\}.} ==== Find 2-term solutions ==== Using the 1-term solution, a 2-term solution is s 2 ± ( z ) = ± 2 z − 1 / 2 + s ( z ) . {\displaystyle s_{2\pm }(z)=\pm 2z^{-1/2}+s(z).} Substitution of this 2-term solution into the original differential equation generates a new differential equation: 1 − z 3 ( s 2 ± ′ ) 2 − z 3 s 2 ± ′ ′ = 0 ± 1 ∓ 4 3 z s ′ + 2 3 z 5 / 2 ( s ′ ) 2 + 2 3 z 5 / 2 s ′ ′ = 0. {\displaystyle {\begin{aligned}1-z^{3}(s_{2\pm }^{\prime })^{2}-z^{3}s_{2\pm }^{\prime \prime }&=0\\\pm 1\mp {\frac {4}{3}}zs^{\prime }+{\frac {2}{3}}z^{5/2}(s^{\prime })^{2}+{\frac {2}{3}}z^{5/2}s^{\prime \prime }&=0.\end{aligned}}} ==== Input ==== The set of equation terms is { ± 1 , ∓ 4 3 z s ′ , 2 3 z 5 / 2 ( s ′ ) 2 , 2 3 z 5 / 2 s ′ ′ } {\textstyle \{\pm 1,\mp {\frac {4}{3}}zs^{\prime },{\frac {2}{3}}z^{5/2}(s^{\prime })^{2},{\frac {2}{3}}z^{5/2}s^{\prime \prime }\}} and the limit is zero. ===== First term pair ===== 1. Select 1 {\displaystyle 1} and − 4 3 z s ′ {\displaystyle -{\tfrac {4}{3}}zs^{\prime }} . 2. The scale transformation is not required. 3. 
Solve the reduced equation: 1 − 4 3 z s ′ = 0 , s ( z ) = 3 4 ln ⁡ z {\displaystyle 1-{\tfrac {4}{3}}zs^{\prime }=0,\ s(z)={\tfrac {3}{4}}\ln z} . 4. Verify consistency: 2 3 z 5 / 2 ( s ′ ) 2 + 2 3 z 5 / 2 s ′ ′ ≪ 1 ( z → 0 ) , for s ( z ) = 3 4 ln ⁡ z {\displaystyle {\tfrac {2}{3}}z^{5/2}(s^{\prime })^{2}+{\tfrac {2}{3}}z^{5/2}s^{\prime \prime }\ll 1\ (z\to 0),{\text{for}}\ s(z)={\tfrac {3}{4}}\ln z} 2 3 z 5 / 2 ( s ′ ) 2 + 2 3 z 5 / 2 s ′ ′ ≪ 4 3 z s ′ ( z → 0 ) for s ( z ) = 3 4 ln ⁡ z . {\displaystyle {\tfrac {2}{3}}z^{5/2}(s^{\prime })^{2}+{\tfrac {2}{3}}z^{5/2}s^{\prime \prime }\ll {\tfrac {4}{3}}zs^{\prime }\ (z\to 0)\ {\text{for}}\ s(z)={\tfrac {3}{4}}\ln z.} 5. Add these functions to the set of approximate solutions: s 2 + ( z ) = + 2 z − 1 / 2 + 3 4 ln ⁡ z {\textstyle s_{2+}(z)=+2z^{-1/2}+{\tfrac {3}{4}}\ln z} s 2 − ( z ) = − 2 z − 1 / 2 + 3 4 ln ⁡ z {\textstyle s_{2-}(z)=-2z^{-1/2}+{\tfrac {3}{4}}\ln z} . ==== Other term pairs ==== For other term pairs, the functions that solve the reduced equations are not consistent. ==== Output ==== The set of approximate solutions has 2 functions: { + 2 z − 1 / 2 + 3 4 ln ⁡ z , − 2 z − 1 / 2 + 3 4 ln ⁡ z } . {\displaystyle \left\{+2z^{-1/2}+{\tfrac {3}{4}}\ln z,-2z^{-1/2}+{\tfrac {3}{4}}\ln z\right\}.} ==== Asymptotic expansion ==== The next iteration generates a 3-term solution s 3 ± ( z ) = ± 2 z − 1 / 2 + 3 4 ln ⁡ ( z ) + h ( z ) {\textstyle s_{3\pm }(z)=\pm 2z^{-1/2}+{\tfrac {3}{4}}\operatorname {ln} (z)+h(z)} with h ( z ) ≪ 1 ( z → 0 ) {\textstyle h(z)\ll 1\ (z\to 0)} and this means that a power series expansion can represent the remainder of the solution. 
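The 2-term solution can also be checked symbolically: substituting s(z) = ±2z^(−1/2) + (3/4)ln z into 1 − z³(s′)² − z³s″ leaves a residual of (3/16)z, which vanishes as z → 0 as required. A sympy sketch for the minus branch (the plus branch yields the same residual):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
s = -2 * z**sp.Rational(-1, 2) + sp.Rational(3, 4) * sp.log(z)
residual = 1 - z**3 * sp.diff(s, z)**2 - z**3 * sp.diff(s, z, 2)
assert sp.simplify(residual - 3 * z / 16) == 0  # residual is exactly 3z/16
```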
The dominant balance method generates the leading term to this asymptotic expansion with constant A {\textstyle A} and expansion coefficients determined by substitution into the full differential equation: w ( z ) = A z 3 / 4 e ± 2 z − 1 / 2 ( ∑ n = 0 m a n z n / 2 ) {\displaystyle w(z)=Az^{3/4}e^{\pm 2z^{-1/2}}\left(\sum _{n=0}^{m}\ a_{n}z^{n/2}\right)} a n + 1 = ± ( n − 1 / 2 ) ( n + 3 / 2 ) a n 4 ( n + 1 ) . {\displaystyle a_{n+1}=\pm {\frac {(n-1/2)(n+3/2)a_{n}}{4(n+1)}}.} A partial sum of this non-convergent series generates an approximate solution. The leading term corresponds to the Liouville-Green (LG) or Wentzel–Kramers–Brillouin (WKB) approximation. == Citations == == References == == See also == Asymptotic analysis
Wikipedia/Method_of_dominant_balance
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics. This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation often as a relativistic wave equation. == Introduction == The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions u = u (x, y, z, t) of a time variable t (a variable representing time) and one or more spatial variables x, y, z (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for ( E x , E y , E z ) {\displaystyle (E_{x},E_{y},E_{z})} as the representation of an electric vector field wave E → {\displaystyle {\vec {E}}} in the absence of wave sources, each coordinate axis component E i {\displaystyle E_{i}} (i = x, y, z) must satisfy the scalar wave equation. 
Other scalar wave equation solutions u are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions. The scalar wave equation is where c is a fixed non-negative real coefficient representing the propagation speed of the wave u is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density) x, y and z are the three spatial coordinates and t being the time coordinate. The equation states that, at any given point, the second derivative of u {\displaystyle u} with respect to time is proportional to the sum of the second derivatives of u {\displaystyle u} with respect to space, with the constant of proportionality being the square of the speed of the wave. Using notations from vector calculus, the wave equation can be written compactly as u t t = c 2 Δ u , {\displaystyle u_{tt}=c^{2}\Delta u,} or ◻ u = 0 , {\displaystyle \Box u=0,} where the double subscript denotes the second-order partial derivative with respect to time, Δ {\displaystyle \Delta } is the Laplace operator and ◻ {\displaystyle \Box } the d'Alembert operator, defined as: u t t = ∂ 2 u ∂ t 2 , Δ = ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 , ◻ = 1 c 2 ∂ 2 ∂ t 2 − Δ . {\displaystyle u_{tt}={\frac {\partial ^{2}u}{\partial t^{2}}},\qquad \Delta ={\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}},\qquad \Box ={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\Delta .} A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed c. 
This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics. The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments. == Wave equation in one space dimension == The wave equation in one spatial dimension can be written as follows: ∂ 2 u ∂ t 2 = c 2 ∂ 2 u ∂ x 2 . {\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}=c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}.} This equation is typically described as having only one spatial dimension x, because the only other independent variable is the time t. === Derivation === The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension. Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress). ==== Hooke's law ==== The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h. 
The springs have a spring constant of k: Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass m at the location x + h is: F Hooke = F x + 2 h − F x = k [ u ( x + 2 h , t ) − u ( x + h , t ) ] − k [ u ( x + h , t ) − u ( x , t ) ] . {\displaystyle {\begin{aligned}F_{\text{Hooke}}&=F_{x+2h}-F_{x}=k[u(x+2h,t)-u(x+h,t)]-k[u(x+h,t)-u(x,t)].\end{aligned}}} By equating the latter equation with F Newton = m a ( t ) = m ∂ 2 ∂ t 2 u ( x + h , t ) , {\displaystyle {\begin{aligned}F_{\text{Newton}}&=m\,a(t)=m\,{\frac {\partial ^{2}}{\partial t^{2}}}u(x+h,t),\end{aligned}}} the equation of motion for the weight at the location x + h is obtained: ∂ 2 ∂ t 2 u ( x + h , t ) = k m [ u ( x + 2 h , t ) − u ( x + h , t ) − u ( x + h , t ) + u ( x , t ) ] . {\displaystyle {\frac {\partial ^{2}}{\partial t^{2}}}u(x+h,t)={\frac {k}{m}}[u(x+2h,t)-u(x+h,t)-u(x+h,t)+u(x,t)].} If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array K = k/N, we can write the above equation as ∂ 2 ∂ t 2 u ( x + h , t ) = K L 2 M [ u ( x + 2 h , t ) − 2 u ( x + h , t ) + u ( x , t ) ] h 2 . {\displaystyle {\frac {\partial ^{2}}{\partial t^{2}}}u(x+h,t)={\frac {KL^{2}}{M}}{\frac {[u(x+2h,t)-2u(x+h,t)+u(x,t)]}{h^{2}}}.} Taking the limit N → ∞, h → 0 and assuming smoothness, one gets ∂ 2 u ( x , t ) ∂ t 2 = K L 2 M ∂ 2 u ( x , t ) ∂ x 2 , {\displaystyle {\frac {\partial ^{2}u(x,t)}{\partial t^{2}}}={\frac {KL^{2}}{M}}{\frac {\partial ^{2}u(x,t)}{\partial x^{2}}},} which is from the definition of a second derivative. KL2/M is the square of the propagation speed in this particular case. 
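The limit just taken can be watched numerically: the scaled second difference on the right-hand side converges to the second spatial derivative as h shrinks. A minimal sketch, using sin as an arbitrary smooth displacement profile:

```python
import numpy as np

u = np.sin   # arbitrary smooth test profile
x = 1.2
for h in (1e-2, 1e-3):
    second_diff = (u(x + 2 * h) - 2 * u(x + h) + u(x)) / h**2
    exact = -np.sin(x + h)                  # u''(x + h) for u = sin
    assert abs(second_diff - exact) < h**2  # truncation error is O(h^2)
```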
==== Stress pulse in a bar ==== In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness K given by K = E A L , {\displaystyle K={\frac {EA}{L}},} where A is the cross-sectional area, and E is the Young's modulus of the material. The wave equation becomes ∂ 2 u ( x , t ) ∂ t 2 = E A L M ∂ 2 u ( x , t ) ∂ x 2 . {\displaystyle {\frac {\partial ^{2}u(x,t)}{\partial t^{2}}}={\frac {EAL}{M}}{\frac {\partial ^{2}u(x,t)}{\partial x^{2}}}.} AL is equal to the volume of the bar, and therefore A L M = 1 ρ , {\displaystyle {\frac {AL}{M}}={\frac {1}{\rho }},} where ρ is the density of the material. The wave equation reduces to ∂ 2 u ( x , t ) ∂ t 2 = E ρ ∂ 2 u ( x , t ) ∂ x 2 . {\displaystyle {\frac {\partial ^{2}u(x,t)}{\partial t^{2}}}={\frac {E}{\rho }}{\frac {\partial ^{2}u(x,t)}{\partial x^{2}}}.} The speed of a stress wave in a bar is therefore E / ρ {\displaystyle {\sqrt {E/\rho }}} . === General solution === ==== Algebraic approach ==== For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables ξ = x − c t , η = x + c t {\displaystyle {\begin{aligned}\xi &=x-ct,\\\eta &=x+ct\end{aligned}}} changes the wave equation into ∂ 2 u ∂ ξ ∂ η ( x , t ) = 0 , {\displaystyle {\frac {\partial ^{2}u}{\partial \xi \partial \eta }}(x,t)=0,} which leads to the general solution u ( x , t ) = F ( ξ ) + G ( η ) = F ( x − c t ) + G ( x + c t ) . {\displaystyle u(x,t)=F(\xi )+G(\eta )=F(x-ct)+G(x+ct).} In other words, the solution is the sum of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant, however, the functions are translated left and right with time at the speed c. 
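That any such superposition of a right- and a left-traveling waveform solves the equation can be confirmed symbolically for arbitrary twice-differentiable F and G (a sympy sketch):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
F, G = sp.Function('F'), sp.Function('G')
u = F(x - c * t) + G(x + c * t)
# u_tt - c^2 u_xx vanishes identically for any F and G
assert sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)) == 0
```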
This was derived by Jean le Rond d'Alembert. Another way to arrive at this result is to factor the wave equation using two first-order differential operators: [ ∂ ∂ t − c ∂ ∂ x ] [ ∂ ∂ t + c ∂ ∂ x ] u = 0. {\displaystyle \left[{\frac {\partial }{\partial t}}-c{\frac {\partial }{\partial x}}\right]\left[{\frac {\partial }{\partial t}}+c{\frac {\partial }{\partial x}}\right]u=0.} Then, for our original equation, we can define v ≡ ∂ u ∂ t + c ∂ u ∂ x , {\displaystyle v\equiv {\frac {\partial u}{\partial t}}+c{\frac {\partial u}{\partial x}},} and find that we must have ∂ v ∂ t − c ∂ v ∂ x = 0. {\displaystyle {\frac {\partial v}{\partial t}}-c{\frac {\partial v}{\partial x}}=0.} This advection equation can be solved by interpreting it as telling us that the directional derivative of v in the (1, -c) direction is 0. This means that the value of v is constant on characteristic lines of the form x + ct = x0, and thus that v must depend only on x + ct, that is, have the form H(x + ct). Then, to solve the first (inhomogeneous) equation relating v to u, we can note that its homogeneous solution must be a function of the form F(x - ct), by logic similar to the above. Guessing a particular solution of the form G(x + ct), we find that [ ∂ ∂ t + c ∂ ∂ x ] G ( x + c t ) = H ( x + c t ) . {\displaystyle \left[{\frac {\partial }{\partial t}}+c{\frac {\partial }{\partial x}}\right]G(x+ct)=H(x+ct).} Expanding out the left side, rearranging terms, then using the change of variables s = x + ct simplifies the equation to G ′ ( s ) = H ( s ) 2 c . {\displaystyle G'(s)={\frac {H(s)}{2c}}.} This means we can find a particular solution G of the desired form by integration. Thus, we have again shown that u obeys u(x, t) = F(x - ct) + G(x + ct). For an initial-value problem, the arbitrary functions F and G can be determined to satisfy initial conditions: u ( x , 0 ) = f ( x ) , {\displaystyle u(x,0)=f(x),} u t ( x , 0 ) = g ( x ) .
{\displaystyle u_{t}(x,0)=g(x).} The result is d'Alembert's formula: u ( x , t ) = f ( x − c t ) + f ( x + c t ) 2 + 1 2 c ∫ x − c t x + c t g ( s ) d s . {\displaystyle u(x,t)={\frac {f(x-ct)+f(x+ct)}{2}}+{\frac {1}{2c}}\int _{x-ct}^{x+ct}g(s)\,ds.} In the classical sense, if f(x) ∈ Ck, and g(x) ∈ Ck−1, then u(t, x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components. ==== Plane-wave eigenmodes ==== Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e−iωt = cos(ωt) − i sin(ωt), and the amplitude is a function f(x) of the spatial variable x, giving a separation of variables for the wave function: u ω ( x , t ) = e − i ω t f ( x ) . {\displaystyle u_{\omega }(x,t)=e^{-i\omega t}f(x).} This produces an ordinary differential equation for the spatial part f(x): ∂ 2 u ω ∂ t 2 = ∂ 2 ∂ t 2 ( e − i ω t f ( x ) ) = − ω 2 e − i ω t f ( x ) = c 2 ∂ 2 ∂ x 2 ( e − i ω t f ( x ) ) . 
{\displaystyle {\frac {\partial ^{2}u_{\omega }}{\partial t^{2}}}={\frac {\partial ^{2}}{\partial t^{2}}}\left(e^{-i\omega t}f(x)\right)=-\omega ^{2}e^{-i\omega t}f(x)=c^{2}{\frac {\partial ^{2}}{\partial x^{2}}}\left(e^{-i\omega t}f(x)\right).} Therefore, d 2 d x 2 f ( x ) = − ( ω c ) 2 f ( x ) , {\displaystyle {\frac {d^{2}}{dx^{2}}}f(x)=-\left({\frac {\omega }{c}}\right)^{2}f(x),} which is precisely an eigenvalue equation for f(x), hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions f ( x ) = A e ± i k x , {\displaystyle f(x)=Ae^{\pm ikx},} with wave number k = ω/c. The total wave function for this eigenmode is then the linear combination u ω ( x , t ) = e − i ω t ( A e − i k x + B e i k x ) = A e − i ( k x + ω t ) + B e i ( k x − ω t ) , {\displaystyle u_{\omega }(x,t)=e^{-i\omega t}\left(Ae^{-ikx}+Be^{ikx}\right)=Ae^{-i(kx+\omega t)}+Be^{i(kx-\omega t)},} where complex numbers A, B depend in general on any initial and boundary conditions of the problem. 
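These eigenmodes are the basis of spectral solvers: on a periodic interval, each Fourier mode e^{ikx} simply acquires the phase factors e^{∓ikct} under time evolution. A sketch with NumPy's FFT (the grid size, box length, and Gaussian pulse are arbitrary illustrative choices), checked against the traveling-wave form of the solution:

```python
import numpy as np

# Spectral propagation on a periodic interval: with zero initial
# velocity, each mode evolves as u_hat(k, t) = u_hat(k, 0) cos(k c t).
N, Lbox, c, t = 512, 20.0, 1.0, 3.0
x = np.linspace(0, Lbox, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)

u0 = np.exp(-((x - Lbox / 2) ** 2))        # initial displacement
u_hat = np.fft.fft(u0) * np.cos(c * k * t)
u = np.real(np.fft.ifft(u_hat))

# Check: zero initial velocity splits the pulse equally into
# left- and right-movers (pulse stays well inside the box here).
exact = 0.5 * (np.exp(-((x - Lbox / 2 - c * t) ** 2))
               + np.exp(-((x - Lbox / 2 + c * t) ** 2)))
print(np.max(np.abs(u - exact)))  # near machine precision
```

The factor cos(kct) is just (e^{ikct} + e^{−ikct})/2, so multiplying by it in Fourier space translates the pulse by ±ct in physical space.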
Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor e − i ω t , {\displaystyle e^{-i\omega t},} so that a full solution can be decomposed into an eigenmode expansion: u ( x , t ) = ∫ − ∞ ∞ s ( ω ) u ω ( x , t ) d ω , {\displaystyle u(x,t)=\int _{-\infty }^{\infty }s(\omega )u_{\omega }(x,t)\,d\omega ,} or in terms of the plane waves, u ( x , t ) = ∫ − ∞ ∞ s + ( ω ) e − i ( k x + ω t ) d ω + ∫ − ∞ ∞ s − ( ω ) e i ( k x − ω t ) d ω = ∫ − ∞ ∞ s + ( ω ) e − i k ( x + c t ) d ω + ∫ − ∞ ∞ s − ( ω ) e i k ( x − c t ) d ω = F ( x − c t ) + G ( x + c t ) , {\displaystyle {\begin{aligned}u(x,t)&=\int _{-\infty }^{\infty }s_{+}(\omega )e^{-i(kx+\omega t)}\,d\omega +\int _{-\infty }^{\infty }s_{-}(\omega )e^{i(kx-\omega t)}\,d\omega \\&=\int _{-\infty }^{\infty }s_{+}(\omega )e^{-ik(x+ct)}\,d\omega +\int _{-\infty }^{\infty }s_{-}(\omega )e^{ik(x-ct)}\,d\omega \\&=F(x-ct)+G(x+ct),\end{aligned}}} which is exactly in the same form as in the algebraic approach. The functions s±(ω) are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as the FDTD method, of the wave packet u(x, t), which is complete for representing waves in the absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.
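Whichever route is taken, the result must agree with d'Alembert's formula from the algebraic approach, which is easy to evaluate directly. A sketch (SciPy's `quad` handles the velocity integral; the test data are arbitrary closed-form cases):

```python
import numpy as np
from scipy.integrate import quad

def dalembert(f, g, x, t, c=1.0):
    """u(x,t) = (f(x-ct) + f(x+ct))/2 + (1/2c) * integral of g over [x-ct, x+ct]."""
    avg = 0.5 * (f(x - c * t) + f(x + c * t))
    integral, _ = quad(g, x - c * t, x + c * t)
    return avg + integral / (2.0 * c)

# Two closed-form checks:
#   f = sin, g = 0  ->  u = sin(x) cos(ct)      (standing wave)
#   f = 0, g = cos  ->  u = cos(x) sin(ct) / c
x, t, c = 0.7, 1.3, 2.0
u1 = dalembert(np.sin, lambda s: 0.0, x, t, c)
u2 = dalembert(lambda s: 0.0, np.cos, x, t, c)
print(u1 - np.sin(x) * np.cos(c * t), u2 - np.cos(x) * np.sin(c * t) / c)
```

Both differences vanish to quadrature accuracy, since in each case the sum/integral of traveling components collapses to a product by elementary trigonometric identities.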
== Vectorial wave equation in three space dimensions == The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity E {\displaystyle E} that is homogeneous (i.e. independent of x {\displaystyle \mathbf {x} } ) within the volume element, then its stress tensor is given by T = E ∇ u {\displaystyle \mathbf {T} =E\nabla \mathbf {u} } , for a vectorial elastic deflection u ( x , t ) {\displaystyle \mathbf {u} (\mathbf {x} ,t)} . The local equilibrium of the tension force div ⁡ T = ∇ ⋅ ( E ∇ u ) = E Δ u {\displaystyle \operatorname {div} \mathbf {T} =\nabla \cdot (E\nabla \mathbf {u} )=E\Delta \mathbf {u} } due to the deflection u {\displaystyle \mathbf {u} } and the inertial force ρ ∂ 2 u / ∂ t 2 {\displaystyle \rho \partial ^{2}\mathbf {u} /\partial t^{2}} caused by the local acceleration ∂ 2 u / ∂ t 2 {\displaystyle \partial ^{2}\mathbf {u} /\partial t^{2}} can be written as ρ ∂ 2 u ∂ t 2 − E Δ u = 0 . {\displaystyle \rho {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-E\Delta \mathbf {u} =\mathbf {0} .} Combining the density ρ {\displaystyle \rho } and the elastic modulus E {\displaystyle E} gives the sound velocity c = E / ρ {\displaystyle c={\sqrt {E/\rho }}} (a material law). After inserting this, the well-known governing wave equation for a homogeneous medium follows: ∂ 2 u ∂ t 2 − c 2 Δ u = 0 . {\displaystyle {\frac {\partial ^{2}\mathbf {u} }{\partial t^{2}}}-c^{2}\Delta \mathbf {u} ={\boldsymbol {0}}.} (Note: Instead of the vectorial u ( x , t ) , {\displaystyle \mathbf {u} (\mathbf {x} ,t),} only the scalar u ( x , t ) {\displaystyle u(x,t)} can be used, i.e. waves are travelling only along the x {\displaystyle x} axis, and the scalar wave equation follows as ∂ 2 u ∂ t 2 − c 2 ∂ 2 u ∂ x 2 = 0 {\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}=0} .)
The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term c 2 = ( + c ) 2 = ( − c ) 2 {\displaystyle c^{2}=(+c)^{2}=(-c)^{2}} it can be seen that two waves travelling in the opposite directions + c {\displaystyle +c} and − c {\displaystyle -c} are possible; hence the designation "two-way wave equation". It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation. For ∇ c = 0 , {\displaystyle \nabla \mathbf {c} =\mathbf {0} ,} the special two-way wave equation with the d'Alembert operator results, since for a spatially constant c {\displaystyle \mathbf {c} } the mixed terms of the product cancel: ( ∂ ∂ t − c ⋅ ∇ ) ( ∂ ∂ t + c ⋅ ∇ ) u = ( ∂ 2 ∂ t 2 − ( c ⋅ ∇ ) 2 ) u = 0 . {\displaystyle \left({\frac {\partial }{\partial t}}-\mathbf {c} \cdot \nabla \right)\left({\frac {\partial }{\partial t}}+\mathbf {c} \cdot \nabla \right)\mathbf {u} =\left({\frac {\partial ^{2}}{\partial t^{2}}}-(\mathbf {c} \cdot \nabla )^{2}\right)\mathbf {u} =\mathbf {0} .} For plane waves travelling along the propagation direction c {\displaystyle \mathbf {c} } , where ( c ⋅ ∇ ) 2 u = c 2 Δ u {\displaystyle (\mathbf {c} \cdot \nabla )^{2}\mathbf {u} =c^{2}\Delta \mathbf {u} } , this simplifies to ( ∂ 2 ∂ t 2 − c 2 Δ ) u = 0 . {\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-c^{2}\Delta \right)\mathbf {u} =\mathbf {0} .} Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction c {\displaystyle \mathbf {c} } results as ∂ u ∂ t − c ⋅ ∇ u = 0 . {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}-\mathbf {c} \cdot \nabla \mathbf {u} =\mathbf {0} .} == Scalar wave equation in three space dimensions == A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave.
The result can then also be used to obtain the same solution in two space dimensions. === Spherical waves === To obtain a solution with constant frequencies, apply the Fourier transform Ψ ( r , t ) = ∫ − ∞ ∞ Ψ ( r , ω ) e − i ω t d ω , {\displaystyle \Psi (\mathbf {r} ,t)=\int _{-\infty }^{\infty }\Psi (\mathbf {r} ,\omega )e^{-i\omega t}\,d\omega ,} which transforms the wave equation into an elliptic partial differential equation of the form: ( ∇ 2 + ω 2 c 2 ) Ψ ( r , ω ) = 0. {\displaystyle \left(\nabla ^{2}+{\frac {\omega ^{2}}{c^{2}}}\right)\Psi (\mathbf {r} ,\omega )=0.} This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as: Ψ ( r , ω ) = ∑ l , m f l m ( r ) Y l m ( θ , ϕ ) . {\displaystyle \Psi (\mathbf {r} ,\omega )=\sum _{l,m}f_{lm}(r)Y_{lm}(\theta ,\phi ).} The angular part of the solution takes the form of spherical harmonics, and the radial function satisfies [ d 2 d r 2 + 2 r d d r + k 2 − l ( l + 1 ) r 2 ] f l ( r ) = 0 , {\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {2}{r}}{\frac {d}{dr}}+k^{2}-{\frac {l(l+1)}{r^{2}}}\right]f_{l}(r)=0,} independent of m {\displaystyle m} , with k 2 = ω 2 / c 2 {\displaystyle k^{2}=\omega ^{2}/c^{2}} . Substituting f l ( r ) = 1 r u l ( r ) , {\displaystyle f_{l}(r)={\frac {1}{\sqrt {r}}}u_{l}(r),} transforms the equation into [ d 2 d r 2 + 1 r d d r + k 2 − ( l + 1 2 ) 2 r 2 ] u l ( r ) = 0 , {\displaystyle \left[{\frac {d^{2}}{dr^{2}}}+{\frac {1}{r}}{\frac {d}{dr}}+k^{2}-{\frac {(l+{\frac {1}{2}})^{2}}{r^{2}}}\right]u_{l}(r)=0,} which is the Bessel equation. ==== Example ==== Consider the case l = 0. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., Ψ(r, t) → u(r, t).
In this case, the wave equation reduces to ( ∇ 2 − 1 c 2 ∂ 2 ∂ t 2 ) Ψ ( r , t ) = 0 , {\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)\Psi (\mathbf {r} ,t)=0,} or ( ∂ 2 ∂ r 2 + 2 r ∂ ∂ r − 1 c 2 ∂ 2 ∂ t 2 ) u ( r , t ) = 0. {\displaystyle \left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial }{\partial r}}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)u(r,t)=0.} This equation can be rewritten as ∂ 2 ( r u ) ∂ t 2 − c 2 ∂ 2 ( r u ) ∂ r 2 = 0 , {\displaystyle {\frac {\partial ^{2}(ru)}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}(ru)}{\partial r^{2}}}=0,} where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form u ( r , t ) = 1 r F ( r − c t ) + 1 r G ( r + c t ) , {\displaystyle u(r,t)={\frac {1}{r}}F(r-ct)+{\frac {1}{r}}G(r+ct),} where F and G are general solutions to the one-dimensional wave equation and can be interpreted as an outgoing and an incoming spherical wave, respectively. Outgoing waves can be generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as r increases. Such waves exist only in spaces of odd dimension. For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation. ==== Monochromatic spherical wave ==== Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions.
Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency ω, then the transformed function ru(r, t) simply has plane-wave solutions: r u ( r , t ) = A e i ( ω t ± k r ) , {\displaystyle ru(r,t)=Ae^{i(\omega t\pm kr)},} or u ( r , t ) = A r e i ( ω t ± k r ) . {\displaystyle u(r,t)={\frac {A}{r}}e^{i(\omega t\pm kr)}.} From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude I = | u ( r , t ) | 2 = | A | 2 r 2 , {\displaystyle I=|u(r,t)|^{2}={\frac {|A|^{2}}{r^{2}}},} drops at a rate proportional to 1/r2, an example of the inverse-square law. === Solution of a general initial-value problem === The wave equation is linear in u and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta function. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus r 2 = ( x − ξ ) 2 + ( y − η ) 2 + ( z − ζ ) 2 . {\displaystyle r^{2}=(x-\xi )^{2}+(y-\eta )^{2}+(z-\zeta )^{2}.} If u is a superposition of such waves with weighting function φ, then u ( t , x , y , z ) = 1 4 π c ∭ φ ( ξ , η , ζ ) δ ( r − c t ) r d ξ d η d ζ ; {\displaystyle u(t,x,y,z)={\frac {1}{4\pi c}}\iiint \varphi (\xi ,\eta ,\zeta ){\frac {\delta (r-ct)}{r}}\,d\xi \,d\eta \,d\zeta ;} the denominator 4πc is a convenience.
From the definition of the delta function, u may also be written as u ( t , x , y , z ) = t 4 π ∬ S φ ( x + c t α , y + c t β , z + c t γ ) d ω , {\displaystyle u(t,x,y,z)={\frac {t}{4\pi }}\iint _{S}\varphi (x+ct\alpha ,y+ct\beta ,z+ct\gamma )\,d\omega ,} where α, β, and γ are coordinates on the unit sphere S, and ω is the area element on S. This result has the interpretation that u(t, x) is t times the mean value of φ on a sphere of radius ct centered at x: u ( t , x , y , z ) = t M c t [ φ ] . {\displaystyle u(t,x,y,z)=tM_{ct}[\varphi ].} It follows that u ( 0 , x , y , z ) = 0 , u t ( 0 , x , y , z ) = φ ( x , y , z ) . {\displaystyle u(0,x,y,z)=0,\quad u_{t}(0,x,y,z)=\varphi (x,y,z).} The mean value is an even function of t, and hence if v ( t , x , y , z ) = ∂ ∂ t ( t M c t [ φ ] ) , {\displaystyle v(t,x,y,z)={\frac {\partial }{\partial t}}{\big (}tM_{ct}[\varphi ]{\big )},} then v ( 0 , x , y , z ) = φ ( x , y , z ) , v t ( 0 , x , y , z ) = 0. {\displaystyle v(0,x,y,z)=\varphi (x,y,z),\quad v_{t}(0,x,y,z)=0.} These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P = (t, x, y, z) depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimensions, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. == Scalar wave equation in two space dimensions == In two space dimensions, the wave equation is u t t = c 2 ( u x x + u y y ) . {\displaystyle u_{tt}=c^{2}\left(u_{xx}+u_{yy}\right).} We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension.
If u ( 0 , x , y ) = 0 , u t ( 0 , x , y ) = ϕ ( x , y ) , {\displaystyle u(0,x,y)=0,\quad u_{t}(0,x,y)=\phi (x,y),} then the three-dimensional solution formula becomes u ( t , x , y ) = t M c t [ ϕ ] = t 4 π ∬ S ϕ ( x + c t α , y + c t β ) d ω , {\displaystyle u(t,x,y)=tM_{ct}[\phi ]={\frac {t}{4\pi }}\iint _{S}\phi (x+ct\alpha ,\,y+ct\beta )\,d\omega ,} where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x, y) and radius ct: u ( t , x , y ) = 1 2 π c ∬ D ϕ ( x + ξ , y + η ) ( c t ) 2 − ξ 2 − η 2 d ξ d η . {\displaystyle u(t,x,y)={\frac {1}{2\pi c}}\iint _{D}{\frac {\phi (x+\xi ,y+\eta )}{\sqrt {(ct)^{2}-\xi ^{2}-\eta ^{2}}}}d\xi \,d\eta .} It is apparent that the solution at (t, x, y) depends not only on the data on the light cone where ( x − ξ ) 2 + ( y − η ) 2 = c 2 t 2 , {\displaystyle (x-\xi )^{2}+(y-\eta )^{2}=c^{2}t^{2},} but also on data that are interior to that cone. == Scalar wave equation in general dimension and Kirchhoff's formulae == We want to find solutions to utt − Δu = 0 for u : Rn × (0, ∞) → R with u(x, 0) = g(x) and ut(x, 0) = h(x). === Odd dimensions === Assume n ≥ 3 is an odd integer, and g ∈ Cm+1(Rn), h ∈ Cm(Rn) for m = (n + 1)/2. 
Let γn = 1 × 3 × 5 × ⋯ × (n − 2) and let u ( x , t ) = 1 γ n [ ∂ t ( 1 t ∂ t ) n − 3 2 ( t n − 2 1 | ∂ B t ( x ) | ∫ ∂ B t ( x ) g d S ) + ( 1 t ∂ t ) n − 3 2 ( t n − 2 1 | ∂ B t ( x ) | ∫ ∂ B t ( x ) h d S ) ] {\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}g\,dS\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-3}{2}}\left(t^{n-2}{\frac {1}{|\partial B_{t}(x)|}}\int _{\partial B_{t}(x)}h\,dS\right)\right]} Then u ∈ C 2 ( R n × [ 0 , ∞ ) ) {\displaystyle u\in C^{2}{\big (}\mathbf {R} ^{n}\times [0,\infty ){\big )}} , u t t − Δ u = 0 {\displaystyle u_{tt}-\Delta u=0} in R n × ( 0 , ∞ ) {\displaystyle \mathbf {R} ^{n}\times (0,\infty )} , lim ( x , t ) → ( x 0 , 0 ) u ( x , t ) = g ( x 0 ) {\displaystyle \lim _{(x,t)\to (x^{0},0)}u(x,t)=g(x^{0})} , lim ( x , t ) → ( x 0 , 0 ) u t ( x , t ) = h ( x 0 ) {\displaystyle \lim _{(x,t)\to (x^{0},0)}u_{t}(x,t)=h(x^{0})} . === Even dimensions === Assume n ≥ 2 is an even integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn), for m = (n + 2)/2. 
Let γn = 2 × 4 × ⋯ × n and let u ( x , t ) = 1 γ n [ ∂ t ( 1 t ∂ t ) n − 2 2 ( t n 1 | B t ( x ) | ∫ B t ( x ) g ( t 2 − | y − x | 2 ) 1 2 d y ) + ( 1 t ∂ t ) n − 2 2 ( t n 1 | B t ( x ) | ∫ B t ( x ) h ( t 2 − | y − x | 2 ) 1 2 d y ) ] {\displaystyle u(x,t)={\frac {1}{\gamma _{n}}}\left[\partial _{t}\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {g}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)+\left({\frac {1}{t}}\partial _{t}\right)^{\frac {n-2}{2}}\left(t^{n}{\frac {1}{|B_{t}(x)|}}\int _{B_{t}(x)}{\frac {h}{(t^{2}-|y-x|^{2})^{\frac {1}{2}}}}dy\right)\right]} then u ∈ C2(Rn × [0, ∞)) utt − Δu = 0 in Rn × (0, ∞) lim ( x , t ) → ( x 0 , 0 ) u ( x , t ) = g ( x 0 ) {\displaystyle \lim _{(x,t)\to (x^{0},0)}u(x,t)=g(x^{0})} lim ( x , t ) → ( x 0 , 0 ) u t ( x , t ) = h ( x 0 ) {\displaystyle \lim _{(x,t)\to (x^{0},0)}u_{t}(x,t)=h(x^{0})} == Green's function == Consider the inhomogeneous wave equation in 1 + D {\displaystyle 1+D} dimensions ( ∂ t t − c 2 ∇ 2 ) u = s ( t , x ) {\displaystyle (\partial _{tt}-c^{2}\nabla ^{2})u=s(t,x)} By rescaling time, we can set wave speed c = 1 {\displaystyle c=1} . Since the wave equation ( ∂ t t − ∇ 2 ) u = s ( t , x ) {\displaystyle (\partial _{tt}-\nabla ^{2})u=s(t,x)} has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity ∂ t u {\displaystyle \partial _{t}u} . The effect of inflicting a velocity impulse is to suddenly change the wave displacement u {\displaystyle u} . For acceleration impulse, s ( t , x ) = δ D + 1 ( t , x ) {\displaystyle s(t,x)=\delta ^{D+1}(t,x)} where δ {\displaystyle \delta } is the Dirac delta function. The solution to this case is called the Green's function G {\displaystyle G} for the wave equation. 
For velocity impulse, s ( t , x ) = ∂ t δ D + 1 ( t , x ) {\displaystyle s(t,x)=\partial _{t}\delta ^{D+1}(t,x)} , so if we solve the Green function G {\displaystyle G} , the solution for this case is just ∂ t G {\displaystyle \partial _{t}G} . === Duhamel's principle === The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case. Given the Green function G {\displaystyle G} , and initial conditions u ( 0 , x ) , ∂ t u ( 0 , x ) {\displaystyle u(0,x),\partial _{t}u(0,x)} , the solution to the homogeneous wave equation is u = ( ∂ t G ) ∗ u + G ∗ ∂ t u {\displaystyle u=(\partial _{t}G)\ast u+G\ast \partial _{t}u} where the asterisk is convolution in space. More explicitly, u ( t , x ) = ∫ ( ∂ t G ) ( t , x − x ′ ) u ( 0 , x ′ ) d x ′ + ∫ G ( t , x − x ′ ) ( ∂ t u ) ( 0 , x ′ ) d x ′ . {\displaystyle u(t,x)=\int (\partial _{t}G)(t,x-x')u(0,x')dx'+\int G(t,x-x')(\partial _{t}u)(0,x')dx'.} For the inhomogeneous case, the solution has one additional term by convolution over spacetime: ∬ t ′ < t G ( t − t ′ , x − x ′ ) s ( t ′ , x ′ ) d t ′ d x ′ . {\displaystyle \iint _{t'<t}G(t-t',x-x')s(t',x')dt'dx'.} === Solution by Fourier transform === By a Fourier transform, G ^ ( ω ) = 1 − ω 0 2 + ω 1 2 + ⋯ + ω D 2 , G ( t , x ) = 1 ( 2 π ) D + 1 ∫ G ^ ( ω ) e + i ω 0 t + i ω → ⋅ x → d ω 0 d ω → . {\displaystyle {\hat {G}}(\omega )={\frac {1}{-\omega _{0}^{2}+\omega _{1}^{2}+\cdots +\omega _{D}^{2}}},\quad G(t,x)={\frac {1}{(2\pi )^{D+1}}}\int {\hat {G}}(\omega )e^{+i\omega _{0}t+i{\vec {\omega }}\cdot {\vec {x}}}d\omega _{0}d{\vec {\omega }}.} The ω 0 {\displaystyle \omega _{0}} term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by + i ϵ {\displaystyle +i\epsilon } or by − i ϵ {\displaystyle -i\epsilon } , because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. 
The forward solution gives G ( t , x ) = 1 ( 2 π ) D ∫ sin ⁡ ( ‖ ω → ‖ t ) ‖ ω → ‖ e i ω → ⋅ x → d ω → , ∂ t G ( t , x ) = 1 ( 2 π ) D ∫ cos ⁡ ( ‖ ω → ‖ t ) e i ω → ⋅ x → d ω → . {\displaystyle G(t,x)={\frac {1}{(2\pi )^{D}}}\int {\frac {\sin(\|{\vec {\omega }}\|t)}{\|{\vec {\omega }}\|}}e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }},\quad \partial _{t}G(t,x)={\frac {1}{(2\pi )^{D}}}\int \cos(\|{\vec {\omega }}\|t)e^{i{\vec {\omega }}\cdot {\vec {x}}}d{\vec {\omega }}.} The integral can be solved by analytically continuing the Poisson kernel, giving G ( t , x ) = lim ϵ → 0 + C D D − 1 Im ⁡ [ ‖ x ‖ 2 − ( t − i ϵ ) 2 ] − ( D − 1 ) / 2 {\displaystyle G(t,x)=\lim _{\epsilon \rightarrow 0^{+}}{\frac {C_{D}}{D-1}}\operatorname {Im} \left[\|x\|^{2}-(t-i\epsilon )^{2}\right]^{-(D-1)/2}} where C D = π − ( D + 1 ) / 2 Γ ( ( D + 1 ) / 2 ) {\displaystyle C_{D}=\pi ^{-(D+1)/2}\Gamma ((D+1)/2)} is half the surface area of a ( D + 1 ) {\displaystyle (D+1)} -dimensional hypersphere. === Solutions in particular dimensions === We can relate the Green's function in D {\displaystyle D} dimensions to the Green's function in D + n {\displaystyle D+n} dimensions. ==== Lowering dimensions ==== Given a function s ( t , x ) {\displaystyle s(t,x)} and a solution u ( t , x ) {\displaystyle u(t,x)} of a differential equation in ( 1 + D ) {\displaystyle (1+D)} dimensions, we can trivially extend it to ( 1 + D + n ) {\displaystyle (1+D+n)} dimensions by setting the additional n {\displaystyle n} dimensions to be constant: s ( t , x 1 : D , x D + 1 : D + n ) = s ( t , x 1 : D ) , u ( t , x 1 : D , x D + 1 : D + n ) = u ( t , x 1 : D ) . 
{\displaystyle s(t,x_{1:D},x_{D+1:D+n})=s(t,x_{1:D}),\quad u(t,x_{1:D},x_{D+1:D+n})=u(t,x_{1:D}).} Since the Green's function is constructed from s {\displaystyle s} and u {\displaystyle u} , the Green's function in ( 1 + D + n ) {\displaystyle (1+D+n)} dimensions integrates to the Green's function in ( 1 + D ) {\displaystyle (1+D)} dimensions: G D ( t , x 1 : D ) = ∫ R n G D + n ( t , x 1 : D , x D + 1 : D + n ) d n x D + 1 : D + n . {\displaystyle G_{D}(t,x_{1:D})=\int _{\mathbb {R} ^{n}}G_{D+n}(t,x_{1:D},x_{D+1:D+n})d^{n}x_{D+1:D+n}.} ==== Raising dimensions ==== The Green's function in D {\displaystyle D} dimensions can be related to the Green's function in D + 2 {\displaystyle D+2} dimensions. By spherical symmetry, G D ( t , r ) = ∫ R 2 G D + 2 ( t , r 2 + y 2 + z 2 ) d y d z . {\displaystyle G_{D}(t,r)=\int _{\mathbb {R} ^{2}}G_{D+2}(t,{\sqrt {r^{2}+y^{2}+z^{2}}})dydz.} Integrating in polar coordinates, G D ( t , r ) = 2 π ∫ 0 ∞ G D + 2 ( t , r 2 + q 2 ) q d q = 2 π ∫ r ∞ G D + 2 ( t , q ′ ) q ′ d q ′ , {\displaystyle G_{D}(t,r)=2\pi \int _{0}^{\infty }G_{D+2}(t,{\sqrt {r^{2}+q^{2}}})qdq=2\pi \int _{r}^{\infty }G_{D+2}(t,q')q'dq',} where in the last equality we made the change of variables q ′ = r 2 + q 2 {\displaystyle q'={\sqrt {r^{2}+q^{2}}}} . Thus, we obtain the recurrence relation G D + 2 ( t , r ) = − 1 2 π r ∂ r G D ( t , r ) .
{\displaystyle G_{D+2}(t,r)=-{\frac {1}{2\pi r}}\partial _{r}G_{D}(t,r).} === Solutions in D = 1, 2, 3 === When D = 1 {\displaystyle D=1} , the integrand in the Fourier transform is the sinc function G 1 ( t , x ) = 1 2 π ∫ R sin ⁡ ( | ω | t ) | ω | e i ω x d ω = 1 2 π ∫ sinc ⁡ ( ω ) e i ω x t d ω = sgn ⁡ ( t − x ) + sgn ⁡ ( t + x ) 4 = { 1 2 θ ( t − | x | ) t > 0 − 1 2 θ ( − t − | x | ) t < 0 {\displaystyle {\begin{aligned}G_{1}(t,x)&={\frac {1}{2\pi }}\int _{\mathbb {R} }{\frac {\sin(|\omega |t)}{|\omega |}}e^{i\omega x}d\omega \\&={\frac {1}{2\pi }}\int \operatorname {sinc} (\omega )e^{i\omega {\frac {x}{t}}}d\omega \\&={\frac {\operatorname {sgn}(t-x)+\operatorname {sgn}(t+x)}{4}}\\&={\begin{cases}{\frac {1}{2}}\theta (t-|x|)\quad t>0\\-{\frac {1}{2}}\theta (-t-|x|)\quad t<0\end{cases}}\end{aligned}}} where sgn {\displaystyle \operatorname {sgn} } is the sign function and θ {\displaystyle \theta } is the unit step function. One solution is the forward solution, the other is the backward solution. The dimension can be raised to give the D = 3 {\displaystyle D=3} case G 3 ( t , r ) = δ ( t − r ) 4 π r {\displaystyle G_{3}(t,r)={\frac {\delta (t-r)}{4\pi r}}} and similarly for the backward solution. This can be integrated down by one dimension to give the D = 2 {\displaystyle D=2} case G 2 ( t , r ) = ∫ R δ ( t − r 2 + z 2 ) 4 π r 2 + z 2 d z = θ ( t − r ) 2 π t 2 − r 2 {\displaystyle G_{2}(t,r)=\int _{\mathbb {R} }{\frac {\delta (t-{\sqrt {r^{2}+z^{2}}})}{4\pi {\sqrt {r^{2}+z^{2}}}}}dz={\frac {\theta (t-r)}{2\pi {\sqrt {t^{2}-r^{2}}}}}} === Wavefronts and wakes === In D = 1 {\displaystyle D=1} case, the Green's function solution is the sum of two wavefronts sgn ⁡ ( t − x ) 4 + sgn ⁡ ( t + x ) 4 {\displaystyle {\frac {\operatorname {sgn}(t-x)}{4}}+{\frac {\operatorname {sgn}(t+x)}{4}}} moving in opposite directions. In odd dimensions, the forward solution is nonzero only at t = r {\displaystyle t=r} . 
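The raising-dimensions recurrence can be verified symbolically: applying it to the D = 1 forward solution should reproduce G₃ above, since differentiating the step function produces a delta function. A sketch with SymPy:

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)

G1 = sp.Heaviside(t - r) / 2              # D = 1 forward solution (t > 0 branch)
G3 = -sp.diff(G1, r) / (2 * sp.pi * r)    # recurrence G_{D+2} = -(1/(2 pi r)) d/dr G_D
print(sp.simplify(G3 - sp.DiracDelta(t - r) / (4 * sp.pi * r)))  # 0
```

The derivative of the Heaviside step θ(t − r) with respect to r is −δ(t − r), so the recurrence yields G₃(t, r) = δ(t − r)/(4πr), matching the expression above.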
As the dimensions increase, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example, G 1 = 1 2 c θ ( τ ) G 3 = 1 4 π c 2 δ ( τ ) r G 5 = 1 8 π 2 c 2 ( δ ( τ ) r 3 + δ ′ ( τ ) c r 2 ) G 7 = 1 16 π 3 c 2 ( 3 δ ( τ ) r 4 + 3 δ ′ ( τ ) c r 3 + δ ′ ′ ( τ ) c 2 r 2 ) {\displaystyle {\begin{aligned}&G_{1}={\frac {1}{2c}}\theta (\tau )\\&G_{3}={\frac {1}{4\pi c^{2}}}{\frac {\delta (\tau )}{r}}\\&G_{5}={\frac {1}{8\pi ^{2}c^{2}}}\left({\frac {\delta (\tau )}{r^{3}}}+{\frac {\delta ^{\prime }(\tau )}{cr^{2}}}\right)\\&G_{7}={\frac {1}{16\pi ^{3}c^{2}}}\left(3{\frac {\delta (\tau )}{r^{4}}}+3{\frac {\delta ^{\prime }(\tau )}{cr^{3}}}+{\frac {\delta ^{\prime \prime }(\tau )}{c^{2}r^{2}}}\right)\end{aligned}}} where τ = t − r {\displaystyle \tau =t-r} , and the wave speed c {\displaystyle c} is restored. In even dimensions, the forward solution is nonzero for r ≤ t {\displaystyle r\leq t} : the entire region behind the wavefront is nonzero, and is called a wake. The wake is given by G D ( t , x ) = ( − 1 ) 1 + D / 2 1 ( 2 π ) D / 2 1 c D θ ( t − r / c ) ( t 2 − r 2 / c 2 ) ( D − 1 ) / 2 {\displaystyle G_{D}(t,x)=(-1)^{1+D/2}{\frac {1}{(2\pi )^{D/2}}}{\frac {1}{c^{D}}}{\frac {\theta (t-r/c)}{\left(t^{2}-r^{2}/c^{2}\right)^{(D-1)/2}}}} The wavefront itself also involves increasingly higher derivatives of the Dirac delta function. This means that a general Huygens' principle – the wave displacement at a point ( t , x ) {\displaystyle (t,x)} in spacetime depends only on the state at points on characteristic rays passing ( t , x ) {\displaystyle (t,x)} – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but distorted in even dimensions. Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant.
It is not strictly correct, but it does hold for certain families of coefficients. == Problems with boundaries == === One space dimension === ==== Reflection and transmission at the boundary of two media ==== For an incident wave traveling from one medium (where the wave speed is c1) to another medium (where the wave speed is c2), one part of the wave will transmit into the second medium, while another part reflects back in the opposite direction and stays in the first medium. The amplitudes of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary. Consider the component of the incident wave with an angular frequency of ω, which has the waveform u inc ( x , t ) = A e i ( k 1 x − ω t ) , A ∈ C . {\displaystyle u^{\text{inc}}(x,t)=Ae^{i(k_{1}x-\omega t)},\quad A\in \mathbb {C} .} At t = 0, the incident wave reaches the boundary between the two media at x = 0. Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms u refl ( x , t ) = B e i ( − k 1 x − ω t ) , u trans ( x , t ) = C e i ( k 2 x − ω t ) , B , C ∈ C . {\displaystyle u^{\text{refl}}(x,t)=Be^{i(-k_{1}x-\omega t)},\quad u^{\text{trans}}(x,t)=Ce^{i(k_{2}x-\omega t)},\quad B,C\in \mathbb {C} .} The continuity condition at the boundary is u inc ( 0 , t ) + u refl ( 0 , t ) = u trans ( 0 , t ) , u x inc ( 0 , t ) + u x ref ( 0 , t ) = u x trans ( 0 , t ) . {\displaystyle u^{\text{inc}}(0,t)+u^{\text{refl}}(0,t)=u^{\text{trans}}(0,t),\quad u_{x}^{\text{inc}}(0,t)+u_{x}^{\text{ref}}(0,t)=u_{x}^{\text{trans}}(0,t).} This gives the equations A + B = C , A − B = k 2 k 1 C = c 1 c 2 C , {\displaystyle A+B=C,\quad A-B={\frac {k_{2}}{k_{1}}}C={\frac {c_{1}}{c_{2}}}C,} and we have the reflectivity and transmissivity B A = c 2 − c 1 c 2 + c 1 , C A = 2 c 2 c 2 + c 1 .
{\displaystyle {\frac {B}{A}}={\frac {c_{2}-c_{1}}{c_{2}+c_{1}}},\quad {\frac {C}{A}}={\frac {2c_{2}}{c_{2}+c_{1}}}.} When c2 < c1, the reflected wave has a reflection phase change of 180°, since B/A < 0. Energy conservation can be verified by B 2 c 1 + C 2 c 2 = A 2 c 1 . {\displaystyle {\frac {B^{2}}{c_{1}}}+{\frac {C^{2}}{c_{2}}}={\frac {A^{2}}{c_{1}}}.} The above discussion holds true for any component, regardless of its angular frequency ω. The limiting case of c2 = 0 corresponds to a "fixed end" that does not move, whereas the limiting case of c2 → ∞ corresponds to a "free end". ==== The Sturm–Liouville formulation ==== A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is − u x ( t , 0 ) + a u ( t , 0 ) = 0 , u x ( t , L ) + b u ( t , L ) = 0 , {\displaystyle {\begin{aligned}-u_{x}(t,0)+au(t,0)&=0,\\u_{x}(t,L)+bu(t,L)&=0,\end{aligned}}} where a and b are non-negative. The case where u is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form u ( t , x ) = T ( t ) v ( x ) . {\displaystyle u(t,x)=T(t)v(x).} A consequence is that T ″ c 2 T = v ″ v = − λ . {\displaystyle {\frac {T''}{c^{2}T}}={\frac {v''}{v}}=-\lambda .} The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem v ″ + λ v = 0 , − v ′ ( 0 ) + a v ( 0 ) = 0 , v ′ ( L ) + b v ( L ) = 0. {\displaystyle {\begin{aligned}v''+\lambda v&=0,\\-v'(0)+av(0)&=0,\\v'(L)+bv(L)&=0.\end{aligned}}} This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions.
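As a numerical illustration (code not part of the article; the sample points and step size are arbitrary choices), one can check that in the fixed-end limit a, b → ∞ the eigenfunctions are v(x) = sin(nπx/L) with eigenvalues λ = (nπ/L)²:

```python
import math

# Sketch: check that v(x) = sin(n*pi*x/L) solves v'' + lambda*v = 0 with
# v(0) = v(L) = 0, where lambda = (n*pi/L)**2. These boundary conditions
# are the "fixed end" limit a, b -> infinity of the conditions above.

L, n = 1.0, 3
lam = (n * math.pi / L) ** 2
v = lambda x: math.sin(n * math.pi * x / L)

h = 1e-4
def residual(x):
    # central-difference approximation of v''(x) + lambda * v(x)
    vpp = (v(x + h) - 2 * v(x) + v(x - h)) / h**2
    return vpp + lam * v(x)

max_res = max(abs(residual(0.1 * i)) for i in range(1, 10))
print(max_res)           # small discretization error
print(v(0.0), v(L))      # both boundary values vanish (up to rounding)
```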
A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series. === Several space dimensions === The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D, and t > 0. On the boundary of D, the solution u shall satisfy ∂ u ∂ n + a u = 0 , {\displaystyle {\frac {\partial u}{\partial n}}+au=0,} where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are u ( 0 , x ) = f ( x ) , u t ( 0 , x ) = g ( x ) , {\displaystyle u(0,x)=f(x),\quad u_{t}(0,x)=g(x),} where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies ∇ ⋅ ∇ v + λ v = 0 {\displaystyle \nabla \cdot \nabla v+\lambda v=0} in D, and ∂ v ∂ n + a v = 0 {\displaystyle {\frac {\partial v}{\partial n}}+av=0} on B. In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation. If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order. 
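As a concrete sketch of the circular-drumhead case (illustrative code, not from the article; the 40-term series truncation is an assumption adequate for small arguments): the radially symmetric fundamental mode of a drumhead of radius R vanishes at the rim where J0 has its first zero α, giving angular frequency ω = cα/R.

```python
# Sketch: locate the first zero of the Bessel function J0 by bisection,
# evaluating J0 from its power series J0(x) = sum_m (-1)^m (x/2)^{2m} / (m!)^2.
# For a circular drumhead of radius R, the fundamental radial mode has
# angular frequency omega = c * alpha / R, where alpha is this zero.

def j0(x):
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x / 2) ** 2 / m**2   # ratio of consecutive series terms
        total += term
    return total

lo, hi = 2.0, 3.0                      # J0(2) > 0 > J0(3), so a zero lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = (lo + hi) / 2
print(alpha)   # ~2.404826
```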
== Inhomogeneous wave equation in one dimension == The inhomogeneous wave equation in one dimension is u t t ( x , t ) − c 2 u x x ( x , t ) = s ( x , t ) {\displaystyle u_{tt}(x,t)-c^{2}u_{xx}(x,t)=s(x,t)} with initial conditions u ( x , 0 ) = f ( x ) , {\displaystyle u(x,0)=f(x),} u t ( x , 0 ) = g ( x ) . {\displaystyle u_{t}(x,0)=g(x).} The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point (xi, ti), the value of u(xi, ti) depends only on the values of f(xi + cti) and f(xi − cti) and the values of the function g(x) between (xi − cti) and (xi + cti). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is c, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time. In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point (xi, ti) as RC. Suppose we integrate the inhomogeneous wave equation over this region: ∬ R C ( c 2 u x x ( x , t ) − u t t ( x , t ) ) d x d t = ∬ R C s ( x , t ) d x d t . 
{\displaystyle \iint _{R_{C}}{\big (}c^{2}u_{xx}(x,t)-u_{tt}(x,t){\big )}\,dx\,dt=\iint _{R_{C}}s(x,t)\,dx\,dt.} To simplify this greatly, we can use Green's theorem to simplify the left side to get the following: ∫ L 0 + L 1 + L 2 ( − c 2 u x ( x , t ) d t − u t ( x , t ) d x ) = ∬ R C s ( x , t ) d x d t . {\displaystyle \int _{L_{0}+L_{1}+L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}=\iint _{R_{C}}s(x,t)\,dx\,dt.} The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute: ∫ x i − c t i x i + c t i − u t ( x , 0 ) d x = − ∫ x i − c t i x i + c t i g ( x ) d x . {\displaystyle \int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}-u_{t}(x,0)\,dx=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.} In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0. For the other two sides of the region, it is worth noting that x ± ct is a constant, namely xi ± cti, where the sign is chosen appropriately. Using this, we can get the relation dx ± cdt = 0, again choosing the right sign: ∫ L 1 ( − c 2 u x ( x , t ) d t − u t ( x , t ) d x ) = ∫ L 1 ( c u x ( x , t ) d x + c u t ( x , t ) d t ) = c ∫ L 1 d u ( x , t ) = c u ( x i , t i ) − c f ( x i + c t i ) . {\displaystyle {\begin{aligned}\int _{L_{1}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=\int _{L_{1}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=c\int _{L_{1}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}+ct_{i}).\end{aligned}}} And similarly for the final boundary segment: ∫ L 2 ( − c 2 u x ( x , t ) d t − u t ( x , t ) d x ) = − ∫ L 2 ( c u x ( x , t ) d x + c u t ( x , t ) d t ) = − c ∫ L 2 d u ( x , t ) = c u ( x i , t i ) − c f ( x i − c t i ) . 
{\displaystyle {\begin{aligned}\int _{L_{2}}{\big (}{-}c^{2}u_{x}(x,t)\,dt-u_{t}(x,t)\,dx{\big )}&=-\int _{L_{2}}{\big (}cu_{x}(x,t)\,dx+cu_{t}(x,t)\,dt{\big )}\\&=-c\int _{L_{2}}\,du(x,t)\\&=cu(x_{i},t_{i})-cf(x_{i}-ct_{i}).\end{aligned}}} Adding the three results together and putting them back in the original integral gives ∬ R C s ( x , t ) d x d t = − ∫ x i − c t i x i + c t i g ( x ) d x + c u ( x i , t i ) − c f ( x i + c t i ) + c u ( x i , t i ) − c f ( x i − c t i ) = 2 c u ( x i , t i ) − c f ( x i + c t i ) − c f ( x i − c t i ) − ∫ x i − c t i x i + c t i g ( x ) d x . {\displaystyle {\begin{aligned}\iint _{R_{C}}s(x,t)\,dx\,dt&=-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+cu(x_{i},t_{i})-cf(x_{i}+ct_{i})+cu(x_{i},t_{i})-cf(x_{i}-ct_{i})\\&=2cu(x_{i},t_{i})-cf(x_{i}+ct_{i})-cf(x_{i}-ct_{i})-\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx.\end{aligned}}} Solving for u(xi, ti), we arrive at u ( x i , t i ) = f ( x i + c t i ) + f ( x i − c t i ) 2 + 1 2 c ∫ x i − c t i x i + c t i g ( x ) d x + 1 2 c ∫ 0 t i ∫ x i − c ( t i − t ) x i + c ( t i − t ) s ( x , t ) d x d t . {\displaystyle u(x_{i},t_{i})={\frac {f(x_{i}+ct_{i})+f(x_{i}-ct_{i})}{2}}+{\frac {1}{2c}}\int _{x_{i}-ct_{i}}^{x_{i}+ct_{i}}g(x)\,dx+{\frac {1}{2c}}\int _{0}^{t_{i}}\int _{x_{i}-c(t_{i}-t)}^{x_{i}+c(t_{i}-t)}s(x,t)\,dx\,dt.} In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (xi, ti) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source. == Further generalizations == === Elastic waves === The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. 
Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion: ρ u ¨ = f + ( λ + 2 μ ) ∇ ( ∇ ⋅ u ) − μ ∇ × ( ∇ × u ) , {\displaystyle \rho {\ddot {\mathbf {u} }}=\mathbf {f} +(\lambda +2\mu )\nabla (\nabla \cdot \mathbf {u} )-\mu \nabla \times (\nabla \times \mathbf {u} ),} where: λ and μ are the so-called Lamé parameters describing the elastic properties of the medium, ρ is the density, f is the source function (driving force), u is the displacement vector. By using ∇ × (∇ × u) = ∇(∇ ⋅ u) − ∇ ⋅ ∇ u = ∇(∇ ⋅ u) − ∆u, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation. Note that in the elastic wave equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if f and ∇ ⋅ u are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field E, which has only transverse waves. === Dispersion relation === In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation ω = ω ( k ) , {\displaystyle \omega =\omega (\mathbf {k} ),} where ω is the angular frequency, and k is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is ω = ±c |k|, but in general, the constant speed c gets replaced by a variable phase velocity: v p = ω ( k ) k . {\displaystyle v_{\text{p}}={\frac {\omega (k)}{k}}.} == See also == == Notes == == References == Flint, H.T. (1929) "Wave Mechanics" Methuen & Co. Ltd. London. Atiyah, M. F.; Bott, R.; Gårding, L. (1970). 
"Lacunas for hyperbolic differential operators with constant coefficients I". Acta Mathematica. 124: 109–189. doi:10.1007/BF02394570. ISSN 0001-5962. Atiyah, M. F.; Bott, R.; Gårding, L. (1973). "Lacunas for hyperbolic differential operators with constant coefficients. II". Acta Mathematica. 131: 145–206. doi:10.1007/BF02392039. ISSN 0001-5962. R. Courant, D. Hilbert, Methods of Mathematical Physics, vol II. Interscience (Wiley) New York, 1962. Evans, Lawrence C. (2010). Partial Differential Equations. Providence (R.I.): American Mathematical Soc. ISBN 978-0-8218-4974-3. "Linear Wave Equations", EqWorld: The World of Mathematical Equations. "Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations. William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET. == External links == Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp, Nonlinear Wave Equation Explorer by Wolfram Demonstrations Project. Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki Archived 2007-04-25 at the Wayback Machine. Graham W Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308
Wikipedia/Linear_wave_equation
The lossy count algorithm is an algorithm to identify elements in a data stream whose frequency exceeds a user-given threshold. The algorithm works by dividing the data stream into buckets and filling as many buckets as possible in main memory at one time. The frequency computed by this algorithm is not always accurate, but it has an error threshold that can be specified by the user. The run time and space required by the algorithm are inversely proportional to the specified error threshold; hence, the larger the error, the smaller the footprint. The algorithm was created by computer scientists Rajeev Motwani and Gurmeet Singh Manku. It finds applications in computations where data takes the form of a continuous data stream instead of a finite data set, such as network traffic measurements, web server logs, and clickstreams. == Algorithm == The general algorithm is as follows: Step 1: Divide the incoming data stream into buckets of width w = 1 / ϵ {\displaystyle w=1/\epsilon } , where ϵ {\displaystyle \epsilon } is the user-specified error bound (along with a minimum support threshold σ {\displaystyle \sigma } ). Step 2: For each bucket, increment the frequency count of each incoming item; at the end of the bucket, decrement all counters by 1 and remove counters that drop to 0. Step 3: Repeat step 2 for every subsequent bucket. == References ==
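The bucket-based counting described above can be sketched as follows (illustrative code, not from the original description; the published algorithm also stores a per-entry maximum-error term, omitted here, and each surviving count undercounts the true frequency by at most ϵn):

```python
# Sketch of the lossy counting steps: buckets of width w = 1/epsilon,
# counting within each bucket, then decrementing every counter by 1 at
# each bucket boundary and dropping counters that reach zero.

def lossy_count(stream, epsilon):
    w = int(1 / epsilon)          # bucket width
    counts = {}
    for i, item in enumerate(stream, start=1):
        counts[item] = counts.get(item, 0) + 1
        if i % w == 0:            # end of a bucket: decrement everything
            for key in list(counts):
                counts[key] -= 1
                if counts[key] == 0:
                    del counts[key]
    return counts                 # each count is low by at most epsilon * n

stream = ["a"] * 60 + ["b"] * 30 + ["c"] * 10
counts = lossy_count(stream, epsilon=0.1)
print(counts)   # {'a': 50, 'b': 26, 'c': 9}; true counts are 60, 30, 10
```

With n = 100 and ϵ = 0.1 the undercount is bounded by ϵn = 10, as the output shows.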
Wikipedia/Lossy_Count_Algorithm
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. In operations research, the area in which online algorithms are developed is called online optimization. As an example, consider the sorting algorithms selection sort and insertion sort: selection sort repeatedly selects the minimum element from the unsorted remainder and places it at the front, which requires access to the entire input; it is thus an offline algorithm. On the other hand, insertion sort considers one input element per iteration and produces a partial solution without considering future elements. Thus insertion sort is an online algorithm. Note that the final result of an insertion sort is optimum, i.e., a correctly sorted list. For many problems, online algorithms cannot match the performance of offline algorithms. If the ratio between the performance of an online algorithm and an optimal offline algorithm is bounded, the online algorithm is called competitive. Not every offline algorithm has an efficient online counterpart. In grammar theory they are associated with straight-line grammars. == Definition == Because it does not know the whole input, an online algorithm is forced to make decisions that may later turn out not to be optimal, and the study of online algorithms has focused on the quality of decision-making that is possible in this setting. Competitive analysis formalizes this idea by comparing the relative performance of an online and offline algorithm for the same problem instance. Specifically, the competitive ratio of an algorithm is defined as the worst-case ratio of its cost divided by the optimal cost, over all possible inputs.
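A worked illustration of this definition (a sketch under an assumed toy cost model, not part of the article): in the classic ski rental problem, renting costs 1 per day and buying costs B; the break-even strategy rents for B − 1 days and then buys, and its worst-case cost ratio against the offline optimum min(d, B) is 2 − 1/B.

```python
# Sketch: competitive ratio of the break-even strategy for ski rental.
# The offline optimum knows the true number of ski days d in advance and
# pays min(d, B); the online strategy rents for B - 1 days, then buys.

def online_cost(d, B):
    return d if d < B else (B - 1) + B   # rented d days, or rented B-1 then bought

def offline_cost(d, B):
    return min(d, B)

B = 10
worst = max(online_cost(d, B) / offline_cost(d, B) for d in range(1, 100))
print(worst)   # 1.9, i.e. 2 - 1/B
```

The worst case occurs exactly at d = B, where the online player has paid for B − 1 days of rental and a purchase while the optimum simply bought on day one.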
The competitive ratio of an online problem is the best competitive ratio achieved by an online algorithm. Intuitively, the competitive ratio of an algorithm gives a measure of the quality of solutions produced by this algorithm, while the competitive ratio of a problem shows the importance of knowing the future for this problem. === Other interpretations === For other points of view on online inputs to algorithms, see streaming algorithm: focusing on the amount of memory needed to accurately represent past inputs; dynamic algorithm: focusing on the time complexity of maintaining solutions to problems with online inputs. === Examples === Some online algorithms: Insertion sort Perceptron Reservoir sampling Greedy algorithm Adversary model Metrical task systems Odds algorithm Page replacement algorithm Algorithms for calculating variance Ukkonen's algorithm == Online problems == A problem exemplifying the concepts of online algorithms is the Canadian traveller problem. The goal of this problem is to minimize the cost of reaching a target in a weighted graph where some of the edges are unreliable and may have been removed from the graph. However, that an edge has been removed (failed) is only revealed to the traveller when they reach one of the edge's endpoints. The worst case for this problem is simply that all of the unreliable edges fail and the problem reduces to the usual shortest path problem. An alternative analysis of the problem can be made with the help of competitive analysis. For this method of analysis, the offline algorithm knows in advance which edges will fail and the goal is to minimize the ratio between the online and offline algorithms' performance. This problem is PSPACE-complete.
There are many formal problems that offer more than one online algorithm as solution: k-server problem Job shop scheduling problem List update problem Bandit problem Secretary problem Search games Ski rental problem Linear search problem Portfolio selection problem == See also == Dynamic algorithm Prophet inequality Real-time computing Streaming algorithm Sequential algorithm Online machine learning/Offline learning == References == Borodin, A.; El-Yaniv, R. (1998). Online Computation and Competitive Analysis. Cambridge University Press. ISBN 0-521-56392-5. == External links == Bibliography of papers on online algorithms
Wikipedia/Online_algorithms
Misra and Gries defined the heavy-hitters problem (though they did not introduce the term heavy-hitters) and described the first algorithm for it in the paper Finding repeated elements. Their algorithm extends the Boyer-Moore majority finding algorithm in a significant way. One version of the heavy-hitters problem is as follows: Given is a bag b of n elements and an integer k ≥ 2. Find the values that occur more than n ÷ k times in b. The Misra-Gries algorithm solves the problem by making two passes over the values in b, while storing at most k values from b and their number of occurrences during the course of the algorithm. Misra-Gries is one of the earliest streaming algorithms, and it is described below in those terms in section #Summaries. == Misra–Gries algorithm == A bag is like a set in which the same value may occur multiple times. Assume that a bag is available as an array b[0:n – 1] of n elements. In the abstract description of the algorithm, we treat b and its segments also as bags. Henceforth, a heavy hitter of bag b is a value that occurs more than n ÷ k times in it, for some integer k, k≥2. A k-reduced bag for bag b is derived from b by repeating the following operation until no longer possible: Delete k distinct elements from b. From its definition, a k-reduced bag contains fewer than k different values. The following theorem is easy to prove: Theorem 1. Each heavy-hitter of b is an element of a k-reduced bag for b. The first pass of the heavy-hitters computation constructs a k-reduced bag t. The second pass declares an element of t to be a heavy-hitter if it occurs more than n ÷ k times in b. According to Theorem 1, this procedure determines all and only the heavy-hitters. The second pass is easy to program, so we describe only the first pass. In order to construct t, scan the values in b in arbitrary order, for specificity the following algorithm scans them in the order of increasing indices. 
Invariant P of the algorithm is that t is a k-reduced bag for the scanned values and d is the number of distinct values in t. Initially, no value has been scanned, t is the empty bag, and d is zero. Whenever element b[i] is scanned, in order to preserve the invariant: (1) if b[i] is not in t, add it to t and increase d by 1, (2) if b[i] is in t, add it to t but don't modify d, and (3) if d becomes equal to k, reduce t by deleting k distinct values from it and update d appropriately.

algorithm Misra–Gries is
    t, d := { }, 0
    for i from 0 to n-1 do
        if b[i] ∉ t then
            t, d := t ∪ {b[i]}, d+1
        else
            t, d := t ∪ {b[i]}, d
        endif
        if d = k then
            Delete k distinct values from t; update d
        endif
    endfor

A possible implementation of t is as a set of pairs of the form (vi, ci) where each vi is a distinct value in t and ci is the number of occurrences of vi in t. Then d is the size of this set. The step "Delete k distinct values from t" amounts to reducing each ci by 1 and then removing any pair (vi, ci) from the set if ci becomes 0. Using an AVL tree implementation of t, the algorithm has a running time of O(n log k). In order to assess the space requirement, assume that the elements of b can have m possible values, so the storage of a value vi needs O(log m) bits. Since each counter ci may have a value as high as n, its storage needs O(log n) bits. Therefore, for O(k) value-counter pairs, the space requirement is O(k (log n + log m)).
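The value–counter implementation described above can be sketched in Python (a hash map stands in for the AVL tree, so the per-item cost is expected O(1) rather than worst-case O(log k); variable names are illustrative):

```python
# Sketch of the first pass using (value, counter) pairs in a dict.
# "Delete k distinct values from t" becomes: decrement every counter
# and drop the pairs that reach zero. When a new value would make
# d = k, adding it and immediately reducing is equivalent to just
# decrementing the existing counters.

def misra_gries(b, k):
    t = {}                                 # value -> number of occurrences in t
    for x in b:
        if x in t or len(t) < k - 1:
            t[x] = t.get(x, 0) + 1
        else:                              # d would reach k: reduce instead
            for key in list(t):
                t[key] -= 1
                if t[key] == 0:
                    del t[key]
    return t

# Second pass: count actual occurrences of the surviving candidates.
b = ["a", "a", "b", "a", "c", "a", "b"]    # n = 7, k = 2: heavy hitter occurs > 3.5 times
candidates = misra_gries(b, k=2)
heavy = [v for v in candidates if b.count(v) > len(b) / 2]
print(heavy)   # ['a']
```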
== References ==
Wikipedia/Misra–Gries_heavy_hitters_algorithm
Robert Morris (July 25, 1932 – June 26, 2011) was an American cryptographer and computer scientist. His name sometimes appears with a middle initial H that he adopted informally. == Family and education == Morris was born in Boston, Massachusetts. His parents were Walter W. Morris, a salesman, and Helen Kelly Morris, a homemaker. He received a bachelor's degree in mathematics from Harvard University in 1957 and a master's degree in applied mathematics from Harvard in 1958. He married Anne Farlow, and they had three children together: Robert Tappan Morris (author of the 1988 Morris worm), Meredith Morris, and Benjamin Morris. == Bell Labs == From 1960 until 1986, Morris was a researcher at Bell Labs and worked on Multics and later Unix. Using the TMG compiler-compiler, Morris, together with Douglas McIlroy, developed the early implementation of the PL/I compiler called EPL for the Multics project. The pair also contributed a version of the runoff text-formatting program for Multics. Morris's contributions to early versions of Unix include the math library, the dc programming language, the program crypt, and the password encryption scheme used for user authentication. The encryption scheme (invented by Roger Needham) was based on using a trapdoor function (now called a key derivation function) to compute hashes of user passwords which were stored in the file /etc/passwd; analogous techniques, relying on different functions, are still in use today. == National Security Agency == In 1986, Morris began work at the National Security Agency (NSA). He served as chief scientist of the NSA's National Computer Security Center, where he was involved in the production of the Rainbow Series of computer security standards, and retired from the NSA in 1994. He once told a reporter that, while at the NSA, he helped the FBI decode encrypted evidence. There is a description of Morris in Clifford Stoll's book The Cuckoo's Egg.
Many readers of Stoll's book remember Morris for giving Stoll a challenging mathematical puzzle (originally due to John H. Conway) in the course of their discussions on computer security: What is the next number in the sequence 1 11 21 1211 111221? (known as the look-and-say sequence). Stoll was unaware of the answer to this puzzle at the time and remained unaware when writing The Cuckoo's Egg and thus did not reveal the answer in his book. Robert Morris died in Lebanon, New Hampshire. == Quotes == Rule 1 of cryptanalysis: check for plaintext. Never underestimate the attention, risk, money, and time that an opponent will put into reading traffic. It is easy to run a secure computer system. You merely have to disconnect all dial-up connections and permit only direct-wired terminals, put the machine and its terminals in a shielded room, and post a guard at the door. == Selected publications == (with Fred T. Grampp) UNIX Operating System Security, AT&T Bell Laboratories Technical Journal, 63, part 2, #8 (October 1984), pp. 1649–1672. == References == == External links == Dennis Ritchie: "Dabbling in the Cryptographic World" tells the story of cryptographic research he performed with Morris and why that research was never published.
Wikipedia/Robert_Morris_(cryptographer)
In computing, a one-pass algorithm or single-pass algorithm is a streaming algorithm which reads its input exactly once. It does so by processing items in order, without unbounded buffering; it reads a block into an input buffer, processes it, and moves the result into an output buffer for each step in the process. A one-pass algorithm generally requires O(n) (see 'big O' notation) time and less than O(n) storage (typically O(1)), where n is the size of the input. An example of a one-pass algorithm is the Sondik partially observable Markov decision process. == Example problems solvable by one-pass algorithms == Given any list as an input: Count the number of elements. Given a list of numbers: Find the k largest or smallest elements, k given in advance. Find the sum, mean, variance and standard deviation of the elements of the list. See also Algorithms for calculating variance. Given a list of symbols from an alphabet of k symbols given in advance: Count the number of times each symbol appears in the input. Find the most or least frequent elements. Sort the list according to some order on the symbols (possible since the number of symbols is limited). Find the maximum gap between two appearances of a given symbol. == Example problems not solvable by one-pass algorithms == Given any list as an input: Find the nth element from the end (or report that the list has fewer than n elements). Find the middle element of the list. However, this is solvable with two passes: Pass 1 counts the elements and pass 2 picks out the middle one. Given a list of numbers: Find the median. Find the modes (this is not the same as finding the most frequent symbol from a limited alphabet). Sort the list. Count the number of items greater than or less than the mean. However, this can be done in constant memory with two passes: Pass 1 finds the average and pass 2 does the counting. The two-pass algorithms above are still streaming algorithms but not one-pass algorithms.
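For instance, the count, mean, and variance from the list above can all be maintained in O(1) space with a single pass. A sketch using Welford's standard one-pass update (the example data is arbitrary):

```python
# Sketch: one-pass computation of count, mean and (population) variance
# using Welford's update; each element is read exactly once and only
# O(1) state (n, mean, m2) is kept, regardless of input length.

def one_pass_stats(stream):
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)     # note: uses the already-updated mean
    variance = m2 / n if n else 0.0
    return n, mean, variance

n, mean, var = one_pass_stats([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(n, mean, var)   # 8, mean ~5.0, variance ~4.0
```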
== References ==
Wikipedia/One-pass_algorithm
In mathematics, a Poisson superalgebra is a Z2-graded generalization of a Poisson algebra. Specifically, a Poisson superalgebra is an (associative) superalgebra A together with a second product, a Lie superbracket [ ⋅ , ⋅ ] : A ⊗ A → A {\displaystyle [\cdot ,\cdot ]:A\otimes A\to A} such that (A, [·,·]) is a Lie superalgebra and the operator [ x , ⋅ ] : A → A {\displaystyle [x,\cdot ]:A\to A} is a superderivation of A: [ x , y z ] = [ x , y ] z + ( − 1 ) | x | | y | y [ x , z ] . {\displaystyle [x,yz]=[x,y]z+(-1)^{|x||y|}y[x,z].} Here, | a | = deg ⁡ a {\displaystyle |a|=\deg a} is the grading of a (pure) element a {\displaystyle a} . A supercommutative Poisson algebra is one for which the (associative) product is supercommutative. This is one of two possible ways of "super"izing the Poisson algebra. This gives the classical dynamics of fermion fields and classical spin-1/2 particles. The other way is to define an antibracket algebra or Gerstenhaber algebra, used in the BRST and Batalin-Vilkovisky formalism. The difference between these two is in the grading of the Lie bracket. In the Poisson superalgebra, the grading of the bracket is zero: | [ a , b ] | = | a | + | b | {\displaystyle |[a,b]|=|a|+|b|} whereas in the Gerstenhaber algebra, the bracket decreases the grading by one: | [ a , b ] | = | a | + | b | − 1 {\displaystyle |[a,b]|=|a|+|b|-1} == Examples == If A {\displaystyle A} is any associative Z2 graded algebra, then, defining a new product [ ⋅ , ⋅ ] {\displaystyle [\cdot ,\cdot ]} , called the super-commutator, by [ x , y ] := x y − ( − 1 ) | x | | y | y x {\displaystyle [x,y]:=xy-(-1)^{|x||y|}yx} for any pure graded x, y, turns A {\displaystyle A} into a Poisson superalgebra. == See also == Poisson supermanifold == References == Y. Kosmann-Schwarzbach (2001) [1994], "Poisson algebra", Encyclopedia of Mathematics, EMS Press
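The superderivation property in the example above can be verified directly (a routine expansion, not part of the original article):

```latex
% For pure elements x, y, z and the super-commutator
% [a,b] = ab - (-1)^{|a||b|} ba, expand the right-hand side:
[x,y]z + (-1)^{|x||y|}\, y[x,z]
  = \bigl(xy - (-1)^{|x||y|}\, yx\bigr) z
    + (-1)^{|x||y|}\, y \bigl(xz - (-1)^{|x||z|}\, zx\bigr)
  = xyz - (-1)^{|x|(|y|+|z|)}\, yzx
  = [x, yz],
% since the two yxz terms cancel and |yz| = |y| + |z|.
```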
Wikipedia/Poisson_superalgebra
In computer programming, digraphs and trigraphs are sequences of two and three characters, respectively, that appear in source code and, according to a programming language's specification, should be treated as if they were single characters. Various reasons exist for using digraphs and trigraphs: keyboards may not have keys to cover the entire character set of the language, input of special characters may be difficult, text editors may reserve some characters for special use and so on. Trigraphs might also be used for some EBCDIC code pages that lack characters such as { and }. == History == The basic character set of the C programming language is a subset of the ASCII character set that includes nine characters which lie outside the ISO 646 invariant character set. This can pose a problem for writing source code when the encoding (and possibly keyboard) being used does not support one or more of these nine characters. The ANSI C committee invented trigraphs as a way of entering source code using keyboards that support any national version of the ISO 646 character set. With the widespread adoption of ASCII and Unicode/UTF-8, trigraph use is limited today, and trigraph support has been removed from C as of C23. == Implementations == Trigraphs are not commonly encountered outside compiler test suites. Some compilers support an option to turn recognition of trigraphs off, or disable trigraphs by default and require an option to turn them on. Some can issue warnings when they encounter trigraphs in source files. Borland supplied a separate program, the trigraph preprocessor (TRIGRAPH.EXE), to be used only when trigraph processing is desired (the rationale was to maximise speed of compilation). == Language support == Different systems define different sets of digraphs and trigraphs, as described below. 
=== ALGOL === Early versions of ALGOL predated the standardized ASCII and EBCDIC character sets, and were typically implemented using a manufacturer-specific six-bit character code. A number of ALGOL operations either lacked codepoints in the available character set or were not supported by peripherals, leading to a number of substitutions including := for ← (assignment) and >= for ≥ (greater than or equal). === Pascal === The Pascal programming language supports the digraphs (. and .) for [ and ], and (* and *) for { and }, respectively. Unlike all other cases mentioned here, (* and *) were and still are in wide use. However, many compilers treat them as a different type of commenting block rather than as actual digraphs, that is, a comment started with (* cannot be closed with } and vice versa. === J === The J programming language is a descendant of APL but uses the ASCII character set rather than APL symbols. Because the printable range of ASCII is smaller than APL's specialized set of symbols, . (dot) and : (colon) characters are used to inflect ASCII symbols, effectively interpreting unigraphs, digraphs or rarely trigraphs as standalone "symbols". Unlike the use of digraphs and trigraphs in C and C++, there are no single-character equivalents to these in J. === C === The C preprocessor (used for C and with slight differences in C++; see below) replaces all occurrences of the nine trigraph sequences (??= for #, ??/ for \, ??' for ^, ??( for [, ??) for ], ??! for |, ??< for {, ??> for }, and ??- for ~) by their single-character equivalents before any other processing (until C23). A programmer may want to place two question marks together yet not have the compiler treat them as introducing a trigraph. The C grammar does not permit two consecutive ? tokens, so the only places in a C file where two question marks in a row may be used are in multi-character constants, string literals, and comments. This is particularly a problem for the classic Mac OS, where the constant '????' may be used as a file type or creator.
To safely place two consecutive question marks within a string literal, the programmer can use string concatenation "...?""?..." or an escape sequence "...?\?...". ??? is not itself a trigraph sequence, but when followed by a character such as -, the last two question marks form a trigraph: ???- is interpreted as ? followed by the trigraph ??-, yielding ?~. The ??/ trigraph can be used to introduce an escaped newline for line splicing; this must be taken into account for correct and efficient handling of trigraphs within the preprocessor. It can also cause surprises, particularly within comments: a line comment (used in C++ and C99) that ends in ??/ splices the following source line onto itself, silently commenting that line out, while careful placement of ??/ line splices can assemble a correctly formed block comment across several lines. This behavior can be used to check whether a compiler translates trigraphs, by writing C99 code in which a trigraph determines which of two return statements is executed. In 1994, a normative amendment to the C standard, C95, included in C99, supplied digraphs as more readable alternatives to five of the trigraphs. Unlike trigraphs, digraphs are handled during tokenization, and any digraph must always represent a full token by itself, or compose the token %:%: replacing the preprocessor concatenation token ##. If a digraph sequence occurs inside another token, for example a quoted string or a character constant, it will not be replaced.
The C++ Standard makes this comment with regards to the term "digraph": The term "digraph" (token consisting of two characters) is not perfectly descriptive, since one of the alternative preprocessing-tokens is %:%: and of course several primary tokens contain two characters. Nonetheless, those alternative tokens that aren't lexical keywords are colloquially known as "digraphs". Trigraphs were proposed for deprecation in C++0x, which was released as C++11. This was opposed by IBM, speaking on behalf of itself and other users of C++, and as a result trigraphs were retained in C++11. Trigraphs were then proposed again for removal (not only deprecation) in C++17. This passed a committee vote, and trigraphs (but not the additional tokens) are removed from C++17 despite the opposition from IBM. Existing code that uses trigraphs can be supported by translating from the source files (parsing trigraphs) to the basic source character set that does not include trigraphs. === RPL === Hewlett-Packard calculators supporting the RPL language and input method provide support for a large number of trigraphs (also called TIO codes) to reliably transcribe non-seven-bit ASCII characters of the calculators' extended character set on foreign platforms, and to ease keyboard input without using the CHARS application. The first character of all TIO codes is a \, followed by two other ASCII characters vaguely resembling the glyph to be substituted. All other characters can be entered using the special \nnn TIO code syntax with nnn being a three-digit decimal number (with leading zeros if necessary) of the corresponding code point (thereby formally representing a tetragraph). == Application support == === Vim === The Vim text editor supports digraphs for actual entry of text characters, following RFC 1345. The entry of digraphs is bound to Ctrl+K by default. The list of all possible digraphs in Vim can be displayed by typing :dig. 
=== GNU Screen === GNU Screen has a digraph command, bound to Ctrl+A Ctrl+V by default. === Lotus === Lotus 1-2-3 for DOS uses Alt+F1 as compose key to allow easier input of many special characters of the Lotus International Character Set (LICS) and Lotus Multi-Byte Character Set (LMBCS). == See also == Compose key List of XML and HTML character entity references Escape sequence Escape sequences in C C alternative tokens == References == == External links == RFC 1345
Wikipedia/Digraph_(computing)
In computer engineering, an execution unit (E-unit or EU) is a part of a processing unit that performs the operations and calculations forwarded from the instruction unit. It may have its own internal control sequence unit (not to be confused with a CPU's main control unit), some registers, and other internal units such as an arithmetic logic unit, address generation unit, floating-point unit, load–store unit, branch execution unit or other smaller and more specific components, and can be tailored to support a certain data type, such as integers or floating-point numbers. It is common for modern processing units to have multiple parallel functional units within their execution units, which is referred to as superscalar design. The simplest arrangement is to use a single bus manager unit to manage the memory interface and the others to perform calculations. Additionally, modern execution units are usually pipelined. == References ==
Wikipedia/Functional_unit
The PJW hash function is a non-cryptographic hash function created by Peter J. Weinberger of AT&T Bell Labs. == Other versions == A variant of the PJW hash was used to create the ElfHash (or Elf64) hash that is used in Unix object files in the ELF format. Allen Holub created a portable version of the PJW hash algorithm that had a bug and ended up in several textbooks, as the author of one of these textbooks later admitted. == Algorithm == The PJW hash algorithm involves shifting the previous hash, adding the current byte, and then folding the high bits back in:

algorithm PJW_hash(s) is
    uint h := 0
    bits := uint size in bits
    for i := 1 to |s| do
        h := (h << bits/8) + s[i]
        high := top bits/8 bits of h
        if high ≠ 0 then
            h := h xor (high >> (bits * 3/4))
            h := h & ~high
    return h

== Implementation == The C implementation historically used for the Unix ELF format incorrectly assumes that long is a 32-bit data type; when long is wider than 32 bits, as it is on many 64-bit systems, the code contains a bug. == See also == Non-cryptographic hash functions == References ==
Wikipedia/PJW_hash_function
In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function. Perfect hash functions may be used to implement a lookup table with constant worst-case access time. A perfect hash function can, as any hash function, be used to implement hash tables, with the advantage that no collision resolution has to be implemented. In addition, if the keys are not in the data and if it is known that queried keys will be valid, then the keys do not need to be stored in the lookup table, saving space. A disadvantage of perfect hash functions is that S needs to be known before the function can be constructed. Non-dynamic perfect hash functions need to be re-constructed if S changes. For a frequently changing S, dynamic perfect hash functions may be used at the cost of additional space. The space requirement to store the perfect hash function is in O(n), where n is the number of keys in the structure. The important performance parameters for perfect hash functions are the evaluation time, which should be constant, the construction time, and the representation size. == Application == A perfect hash function with values in a limited range can be used for efficient lookup operations, by placing keys from S (or other associated values) in a lookup table indexed by the output of the function. One can then test whether a key is present in S, or look up a value associated with that key, by looking for it at its cell of the table. Each such lookup takes constant time in the worst case. With perfect hashing, the associated data can be read or written with a single access to the table.
== Performance of perfect hash functions == The important performance parameters for perfect hashing are the representation size, the evaluation time, the construction time, and additionally the range requirement m/n (the average number of buckets per key in the hash table). The evaluation time can be as fast as O(1), which is optimal. The construction time needs to be at least O(n), because each element in S needs to be considered, and S contains n elements. This lower bound can be achieved in practice. The lower bound for the representation size depends on m and n. Let m = (1 + ε)n and let h be a perfect hash function. A good approximation for the lower bound is log₂ e − ε log₂((1 + ε)/ε) bits per element. For minimal perfect hashing, ε = 0, and the lower bound is log₂ e ≈ 1.44 bits per element. == Construction == A perfect hash function for a specific set S that can be evaluated in constant time, and with values in a small range, can be found by a randomized algorithm in a number of operations that is proportional to the size of S. The original construction of Fredman, Komlós & Szemerédi (1984) uses a two-level scheme to map a set S of n elements to a range of O(n) indices, and then map each index to a range of hash values. The first level of their construction chooses a large prime p (larger than the size of the universe from which S is drawn) and a parameter k, and maps each element x of S to the index g(x) = (kx mod p) mod n. If k is chosen randomly, this step is likely to have collisions, but the number of elements ni that are simultaneously mapped to the same index i is likely to be small. The second level of their construction assigns disjoint ranges of O(ni²) integers to each index i.
It uses a second set of linear modular functions, one for each index i, to map each member x of S into the range associated with g(x). As Fredman, Komlós & Szemerédi (1984) show, there exists a choice of the parameter k such that the sum of the lengths of the ranges for the n different values of g(x) is O(n). Additionally, for each value of g(x), there exists a linear modular function that maps the corresponding subset of S into the range associated with that value. Both k, and the second-level functions for each value of g(x), can be found in polynomial time by choosing values randomly until finding one that works. The hash function itself requires storage space O(n) to store k, p, and all of the second-level linear modular functions. Computing the hash value of a given key x may be performed in constant time by computing g(x), looking up the second-level function associated with g(x), and applying this function to x. A modified version of this two-level scheme with a larger number of values at the top level can be used to construct a perfect hash function that maps S into a smaller range of length n + o(n). A more recent method for constructing a perfect hash function is described by Belazzougui, Botelho & Dietzfelbinger (2009) as "hash, displace, and compress". Here a first-level hash function g is also used to map elements onto a range of r integers. An element x ∈ S is stored in the bucket Bg(x). Then, in descending order of size, each bucket's elements are hashed by a hash function of a sequence of independent fully random hash functions (Φ1, Φ2, Φ3, ...), starting with Φ1. If the hash function does not produce any collisions for the bucket, and the resulting values are not yet occupied by other elements from other buckets, the function is chosen for that bucket. If not, the next hash function in the sequence is tested.
To evaluate the perfect hash function h(x) one only has to save the mapping σ of the bucket index g(x) onto the correct hash function in the sequence, resulting in h(x) = Φ_{σ(g(x))}(x), i.e. the function with index σ(g(x)) applied to x. Finally, to reduce the representation size, the values σ(i), 0 ≤ i < r, are compressed into a form that still allows evaluation in O(1). This approach needs linear time in n for construction, and constant evaluation time. The representation size is in O(n), and depends on the achieved range. For example, with m = 1.23n, Belazzougui, Botelho & Dietzfelbinger (2009) achieved a representation size between 3.03 bits/key and 1.40 bits/key for their given example set of 10 million entries, with lower values needing a higher computation time. The space lower bound in this scenario is 0.88 bits/key. === Pseudocode ===

algorithm hash, displace, and compress is
(1) Split S into buckets Bi := g⁻¹({i}) ∩ S, 0 ≤ i < r
(2) Sort buckets Bi in falling order according to size |Bi|
(3) Initialize array T[0...m−1] with 0's
(4) for all i ∈ [r], in the order from (2), do
(5)     for l ← 1, 2, ...
(6)         repeat forming Ki ← {Φl(x) | x ∈ Bi}
(6)         until |Ki| = |Bi| and Ki ∩ {j | T[j] = 1} = ∅
(7)     let σ(i) := the successful l
(8)     for all j ∈ Ki let T[j] := 1
(9) Transform (σ(i)), 0 ≤ i < r, into compressed form, retaining O(1) access.

== Space lower bounds == The use of O(n) words of information to store the function of Fredman, Komlós & Szemerédi (1984) is near-optimal: any perfect hash function that can be calculated in constant time requires at least a number of bits that is proportional to the size of S. For minimal perfect hash functions the information-theoretic space lower bound is log₂ e ≈ 1.44 bits/key. For perfect hash functions, it is first assumed that the range of h is bounded by n as m = (1 + ε)n.
With the formula given by Belazzougui, Botelho & Dietzfelbinger (2009), and for a universe U ⊇ S whose size |U| = u tends towards infinity, the space lower bound is log₂ e − ε log₂((1 + ε)/ε) bits/key, minus log(n) bits overall. == Extensions == === Dynamic perfect hashing === Using a perfect hash function is best in situations where there is a frequently queried large set, S, which is seldom updated. This is because any modification of the set S may cause the hash function to no longer be perfect for the modified set. Solutions which update the hash function any time the set is modified are known as dynamic perfect hashing, but these methods are relatively complicated to implement. === Minimal perfect hash function === A minimal perfect hash function is a perfect hash function that maps n keys to n consecutive integers – usually the numbers from 0 to n − 1 or from 1 to n. A more formal way of expressing this is: Let j and k be elements of some finite set S. Then h is a minimal perfect hash function if and only if h(j) = h(k) implies j = k (injectivity) and there exists an integer a such that the range of h is a..a + |S| − 1. It has been proven that a general-purpose minimal perfect hash scheme requires at least log₂ e ≈ 1.44 bits/key. Assuming that S is a set of size n containing integers in the range [1, 2^o(n)], it is known how to efficiently construct an explicit minimal perfect hash function from S to {1, 2, ..., n} that uses space n log₂ e + o(n) bits and that supports constant evaluation time. In practice, there are minimal perfect hashing schemes that use roughly 1.56 bits/key if given enough time.
=== k-perfect hashing === A hash function is k-perfect if at most k elements from S are mapped onto the same value in the range. The "hash, displace, and compress" algorithm can be used to construct k-perfect hash functions by allowing up to k collisions. The changes necessary to accomplish this are minimal, and are confined to lines (6) and (8) of the adapted pseudocode below:

(4) for all i ∈ [r], in the order from (2), do
(5)     for l ← 1, 2, ...
(6)         repeat forming Ki ← {Φl(x) | x ∈ Bi}
(6)         until |Ki| = |Bi| and Ki ∩ {j | T[j] = k} = ∅
(7)     let σ(i) := the successful l
(8)     for all j ∈ Ki set T[j] ← T[j] + 1

=== Order preservation === A minimal perfect hash function F is order preserving if keys are given in some order a1, a2, ..., an and for any keys aj and ak, j < k implies F(aj) < F(ak). In this case, the function value is just the position of each key in the sorted ordering of all of the keys. A simple implementation of order-preserving minimal perfect hash functions with constant access time is to use an (ordinary) perfect hash function to store a lookup table of the positions of each key. This solution uses O(n log n) bits, which is optimal in the setting where the comparison function for the keys may be arbitrary. However, if the keys a1, a2, ..., an are integers drawn from a universe {1, 2, ..., U}, then it is possible to construct an order-preserving hash function using only O(n log log log U) bits of space. Moreover, this bound is known to be optimal. == Related constructions == While well-dimensioned hash tables have amortized average O(1) time (amortized average constant time) for lookups, insertions, and deletion, most hash table algorithms suffer from possible worst-case times that take much longer.
A worst-case O(1) time (constant time even in the worst case) would be better for many applications (including network router and memory caches).: 41  Few hash table algorithms support worst-case O(1) lookup time (constant lookup time even in the worst case). The few that do include: perfect hashing; dynamic perfect hashing; cuckoo hashing; hopscotch hashing; and extendible hashing.: 42–69  A simple alternative to perfect hashing, which also allows dynamic updates, is cuckoo hashing. This scheme maps keys to two or more locations within a range (unlike perfect hashing which maps each key to a single location) but does so in such a way that the keys can be assigned one-to-one to locations to which they have been mapped. Lookups with this scheme are slower, because multiple locations must be checked, but nevertheless take constant worst-case time. == References == == Further reading == Richard J. Cichelli. Minimal Perfect Hash Functions Made Simple, Communications of the ACM, Vol. 23, Number 1, January 1980. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. MIT Press, 2009. ISBN 978-0262033848. Section 11.5: Perfect hashing, pp. 267, 277–282. Fabiano C. Botelho, Rasmus Pagh and Nivio Ziviani. "Perfect Hashing for Data Management Applications". Fabiano C. Botelho and Nivio Ziviani. "External perfect hashing for very large key sets". 16th ACM Conference on Information and Knowledge Management (CIKM07), Lisbon, Portugal, November 2007. Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. "Monotone minimal perfect hashing: Searching a sorted table with O(1) accesses". In Proceedings of the 20th Annual ACM-SIAM Symposium On Discrete Mathematics (SODA), New York, 2009. ACM Press. Marshall D. Brain and Alan L. Tharp. "Near-perfect Hashing of Large Word Sets". Software—Practice and Experience, vol. 19(10), 967-078, October 1989. John Wiley & Sons. Douglas C. 
Schmidt, GPERF: A Perfect Hash Function Generator, C++ Report, SIGS, Vol. 10, No. 10, November/December, 1998. == External links == gperf is an open source C and C++ perfect hash generator (very fast, but only works for small sets) Minimal Perfect Hashing (bob algorithm) by Bob Jenkins cmph: C Minimal Perfect Hashing Library, open source implementations for many (minimal) perfect hashes (works for big sets) Sux4J: open source monotone minimal perfect hashing in Java MPHSharp: perfect hashing methods in C# BBHash: minimal perfect hash function in header-only C++ Perfect::Hash, perfect hash generator in Perl that makes C code. Has a "prior art" section worth looking at.
Wikipedia/Perfect_hash_function
In Boolean algebra, a parity function is a Boolean function whose value is one if and only if the input vector has an odd number of ones. The parity function of two inputs is also known as the XOR function. The parity function is notable for its role in the theoretical investigation of the circuit complexity of Boolean functions. The output of the parity function is the parity bit. == Definition == The n-variable parity function is the Boolean function f: {0,1}ⁿ → {0,1} with the property that f(x) = 1 if and only if the number of ones in the vector x ∈ {0,1}ⁿ is odd. In other words, f is defined as follows: f(x) = x₁ ⊕ x₂ ⊕ ⋯ ⊕ xₙ, where ⊕ denotes exclusive or. == Properties == Parity only depends on the number of ones and is therefore a symmetric Boolean function. The n-variable parity function and its negation are the only Boolean functions for which all disjunctive normal forms have the maximal number of 2^(n−1) monomials of length n and all conjunctive normal forms have the maximal number of 2^(n−1) clauses of length n. == Computational complexity == Some of the earliest work in computational complexity was the 1961 bound of Bella Subbotovskaya showing that the size of a Boolean formula computing parity must be at least Ω(n^(3/2)). This work uses the method of random restrictions. This exponent of 3/2 has been increased through careful analysis to 1.63 by Paterson and Zwick (1993) and then to 2 by Håstad (1998).
In the early 1980s, Merrick Furst, James Saxe and Michael Sipser, and independently Miklós Ajtai, established super-polynomial lower bounds on the size of constant-depth Boolean circuits for the parity function, i.e., they showed that polynomial-size constant-depth circuits cannot compute the parity function. Similar results were also established for the majority, multiplication and transitive closure functions, by reduction from the parity function. Håstad (1987) established tight exponential lower bounds on the size of constant-depth Boolean circuits for the parity function. Håstad's switching lemma is the key technical tool used for these lower bounds, and Johan Håstad was awarded the Gödel Prize for this work in 1994. The precise result is that depth-k circuits with AND, OR, and NOT gates require size exp(Ω(n^(1/(k−1)))) to compute the parity function. This is asymptotically almost optimal, as there are depth-k circuits computing parity which have size exp(O(n^(1/(k−1)))). == Infinite version == An infinite parity function is a function f: {0,1}^ω → {0,1} mapping every infinite binary string to 0 or 1, having the following property: if w and v are infinite binary strings differing only on a finite number of coordinates, then f(w) = f(v) if and only if w and v differ on an even number of coordinates. Assuming the axiom of choice, it can be proved that parity functions exist and that there are 2^(2^ℵ₀) many of them – as many as the number of all functions from {0,1}^ω to {0,1}.
It is enough to take one representative per equivalence class of the relation ≈ defined as follows: w ≈ v if w and v differ at a finite number of coordinates. Having such representatives, we can map all of them to 0; the rest of the values of f are then deduced unambiguously. Another construction of an infinite parity function can be done using a non-principal ultrafilter U on ω. The existence of non-principal ultrafilters on ω follows from – and is strictly weaker than – the axiom of choice. For any w: ω → {0,1} we consider the set A_w = {n ∈ ω : |{k ≤ n : w(k) = 0}| is even}. The infinite parity function f is defined by mapping w to 0 if and only if A_w is an element of the ultrafilter. It is necessary to assume at least some amount of choice to prove that infinite parity functions exist. If f is an infinite parity function and we consider the inverse image f⁻¹[0] as a subset of the Cantor space {0,1}^ω, then f⁻¹[0] is a non-measurable set and does not have the property of Baire. Without the axiom of choice, it is consistent (relative to ZF) that all subsets of the Cantor space are measurable and have the property of Baire, and thus that no infinite parity function exists; this holds in the Solovay model, for instance.
== See also == Walsh function, a continuous equivalent Parity bit, the output of the function Piling-up lemma, a statistical property for independent inputs Multiway switching, a physical implementation often used to control lighting Related topics: error correction and error detection == References ==
Wikipedia/Parity_function
Random number generation is a process by which, often by means of a random number generator (RNG), a sequence of numbers or symbols is generated that cannot be reasonably predicted better than by random chance. This means that the particular outcome sequence will contain some patterns detectable in hindsight but impossible to foresee. True random number generators can be hardware random-number generators (HRNGs), wherein each generation is a function of the current value of a physical environment's attribute that is constantly changing in a manner that is practically impossible to model. This would be in contrast to so-called "random number generations" done by pseudorandom number generators (PRNGs), which generate numbers that only look random but are in fact predetermined—these generations can be reproduced simply by knowing the state of the PRNG. Various applications of randomness have led to the development of different methods for generating random data. Some of these have existed since ancient times, including well-known examples like the rolling of dice, coin flipping, the shuffling of playing cards, the use of yarrow stalks (for divination) in the I Ching, as well as countless other techniques. Because of the mechanical nature of these techniques, generating large quantities of sufficiently random numbers (important in statistics) required much work and time. Thus, results would sometimes be collected and distributed as random number tables. Several computational methods for pseudorandom number generation exist. All fall short of the goal of true randomness, although they may meet, with varying success, some of the statistical tests for randomness intended to measure how unpredictable their results are (that is, to what degree their patterns are discernible). This generally makes them unusable for applications such as cryptography. 
However, carefully designed cryptographically secure pseudorandom number generators (CSPRNGs) also exist, with special features specifically designed for use in cryptography. == Practical applications and uses == Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable. In applications where unpredictability is the paramount feature, such as security applications, hardware generators are generally preferred over pseudorandom algorithms, where feasible. Pseudorandom number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same random seed. They are also used in cryptography – so long as the seed is secret. The sender and receiver can generate the same set of numbers automatically to use as keys. The generation of pseudorandom numbers is an important and common task in computer programming. While cryptography and certain numerical algorithms require a very high degree of apparent randomness, many other operations only need a modest amount of unpredictability. Some simple examples might be presenting a user with a "random quote of the day", or determining which way a computer-controlled adversary might move in a computer game. Weaker forms of randomness are used in hash algorithms and in creating amortized searching and sorting algorithms. Some applications that appear at first sight to be suitable for randomization are in fact not quite so simple. For instance, a system that "randomly" selects music tracks for a background music system must only appear random, and may even have ways to control the selection of music: a truly random system would have no restriction on the same item appearing two or three times in succession. == True vs.
pseudo-random numbers == There are two principal methods used to generate random numbers. The first method measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. Example sources include measuring atmospheric noise, thermal noise, and other external electromagnetic and quantum phenomena. For example, cosmic background radiation or radioactive decay as measured over short timescales represent sources of natural entropy (as a measure of unpredictability or surprise of the number generation process). The speed at which entropy can be obtained from natural sources is dependent on the underlying physical phenomena being measured. Thus, sources of naturally occurring true entropy are said to be blocking – they are rate-limited until enough entropy is harvested to meet the demand. On some Unix-like systems, including most Linux distributions, the pseudo device file /dev/random will block until sufficient entropy is harvested from the environment. Due to this blocking behavior, large bulk reads from /dev/random, such as filling a hard disk drive with random bits, can often be slow on systems that use this type of entropy source. The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed value or key. As a result, the entire seemingly random sequence can be reproduced if the seed value is known. This type of random number generator is often called a pseudorandom number generator. This type of generator typically does not rely on sources of naturally occurring entropy, though it may be periodically seeded by natural sources. This generator type is non-blocking, so they are not rate-limited by an external event, making large bulk reads a possibility. 
Some systems take a hybrid approach, providing randomness harvested from natural sources when available, and falling back to periodically re-seeded software-based cryptographically secure pseudorandom number generators (CSPRNGs). The fallback occurs when the desired read rate of randomness exceeds the ability of the natural harvesting approach to keep up with the demand. This approach avoids the rate-limited blocking behavior of random number generators based on slower and purely environmental methods. While a pseudorandom number generator based solely on deterministic logic can never be regarded as a true random number source in the purest sense of the word, in practice they are generally sufficient even for demanding security-critical applications. Carefully designed and implemented pseudorandom number generators can be certified for security-critical cryptographic purposes, as is the case with the Yarrow algorithm and Fortuna. The former is the basis of the /dev/random source of entropy on FreeBSD, AIX, macOS, NetBSD, and others. OpenBSD uses a pseudorandom number algorithm known as arc4random. == Generation methods == === Physical methods === The earliest methods for generating random numbers, such as dice, coin flipping and roulette wheels, are still used today, mainly in games and gambling, as they tend to be too slow for most applications in statistics and cryptography. A hardware random number generator can be based on an essentially random atomic or subatomic physical phenomenon whose unpredictability can be traced to the laws of quantum mechanics. Sources of entropy include radioactive decay, thermal noise, shot noise, avalanche noise in Zener diodes, clock drift, the timing of actual movements of a hard disk read-write head, and radio noise. However, physical phenomena and tools used to measure them generally feature asymmetries and systematic biases that make their outcomes not uniformly random.
A randomness extractor, such as a cryptographic hash function, can be used to approach a uniform distribution of bits from a non-uniformly random source, though at a lower bit rate. The advent of wideband photonic entropy sources, such as optical chaos and amplified spontaneous emission noise, has greatly aided the development of physical random number generators. Among them, optical chaos has a high potential to physically produce high-speed random numbers due to its high bandwidth and large amplitude. A prototype of a high-speed, real-time physical random bit generator based on a chaotic laser was built in 2013. Various imaginative ways of collecting this entropic information have been devised. One technique is to run a hash function against a frame of a video stream from an unpredictable source. Lavarand used this technique with images of a number of lava lamps. HotBits measured radioactive decay with Geiger–Müller tubes, while Random.org uses variations in the amplitude of atmospheric noise recorded with a normal radio. Another common entropy source is the behavior of human users of the system. While people are not considered good randomness generators upon request, they generate random behavior quite well in the context of playing mixed strategy games. Some security-related computer software requires the user to make a lengthy series of mouse movements or keyboard inputs to create sufficient entropy needed to generate random keys or to initialize pseudorandom number generators. === Computational methods === Most computer-generated random numbers use PRNGs, which are algorithms that can automatically create long runs of numbers with good random properties, but eventually the sequence repeats (or the memory usage grows without bound). These random numbers are fine in many situations but are not as random as numbers generated from electromagnetic atmospheric noise used as a source of entropy. 
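The randomness-extractor idea mentioned above can be sketched in Python: hash blocks of biased raw bits with SHA-256 to condition them toward uniform output. This is a common heuristic rather than a provable extractor, and the simulated biased source and block size below are illustrative assumptions, not taken from any particular hardware design.

```python
import hashlib
import random

def extract(raw_bits):
    """Condition biased raw bits by hashing them; the output is much
    shorter than the input, reflecting the reduced entropy rate."""
    return hashlib.sha256(bytes(raw_bits)).digest()

# Simulate a heavily biased physical source (about 90% ones).
src = random.Random(0)
raw = [1 if src.random() < 0.9 else 0 for _ in range(4096)]

digest = extract(raw)  # 32 conditioned bytes
ones = sum(bin(b).count("1") for b in digest)  # output bits are roughly balanced
```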
The series of values generated by such algorithms is generally determined by a fixed number called a seed. One of the most common PRNGs is the linear congruential generator, which uses the recurrence {\displaystyle X_{n+1}=(aX_{n}+b)\,{\textrm {mod}}\,m} to generate numbers, where a, b and m are large integers, and {\displaystyle X_{n+1}} is the next number in the series of pseudorandom numbers. The maximum number of numbers the formula can produce is the modulus, m. The recurrence relation can be extended to matrices to have much longer periods and better statistical properties. To avoid certain non-random properties of a single linear congruential generator, several such random number generators with slightly different values of the multiplier coefficient, a, can be used in parallel, with a master random number generator that selects from among the several different generators. A simple pen-and-paper method for generating random numbers is the so-called middle-square method suggested by John von Neumann. While simple to implement, its output is of poor quality. It has a very short period and severe weaknesses, such as the output sequence almost always converging to zero. A recent innovation is to combine the middle square with a Weyl sequence. This method produces high-quality output with a long period. Most computer programming languages include functions or library routines that provide random number generators. They are often designed to provide a random byte or word, or a floating point number uniformly distributed between 0 and 1. The quality, i.e. randomness, of such library functions varies widely, from completely predictable output to cryptographically secure. The default random number generator in many languages, including Python, Ruby, R, IDL and PHP, is based on the Mersenne Twister algorithm and is not sufficient for cryptography purposes, as is explicitly stated in the language documentation. 
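The linear congruential recurrence above can be sketched in a few lines of Python; the multiplier and increment below are the well-known Numerical Recipes constants, chosen here only for illustration.

```python
def lcg(seed, a=1664525, b=1013904223, m=2**32):
    """Yield X_{n+1} = (a*X_n + b) mod m indefinitely."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

gen = lcg(seed=1)
first = [next(gen) for _ in range(3)]

# The same seed always reproduces the same sequence,
# and every output lies in [0, m).
gen2 = lcg(seed=1)
assert [next(gen2) for _ in range(3)] == first
```

Dividing each output by m gives floats in [0, 1), though naive division has precision caveats.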
Such library functions often have poor statistical properties, and some will repeat patterns after only tens of thousands of trials. They are often initialized using a computer's real-time clock as the seed, since such a clock is 64-bit and measures in nanoseconds, far beyond a person's precision. These functions may provide enough randomness for certain tasks (for example, video games) but are unsuitable where high-quality randomness is required, such as in cryptography applications or statistics. Much higher quality random number sources are available on most operating systems; for example /dev/random on various BSD flavors, Linux, Mac OS X, IRIX, and Solaris, or CryptGenRandom for Microsoft Windows. Most programming languages, including those mentioned above, provide a means to access these higher-quality sources. === By humans === Random number generation may also be performed by humans, in the form of collecting various inputs from end users and using them as a randomization source. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence of e.g. digits or letters. They may alternate too much between choices when compared to a good random generator; thus, this approach is not widely used. However, for the very reason that humans perform poorly in this task, human random number generation can be used as a tool to gain insights into brain functions otherwise not accessible. == Post-processing and statistical checks == Even given a source of plausible random numbers (perhaps from a quantum mechanically based hardware generator), obtaining numbers which are completely unbiased takes care. In addition, the behavior of these generators often changes with temperature, power supply voltage, the age of the device, or other outside interference. 
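The higher-quality operating-system sources mentioned above are exposed in most languages; in Python, for instance, the secrets module (backed by the OS CSPRNG, i.e. /dev/urandom or CryptGenRandom) is the recommended interface for keys and tokens, in contrast to the default Mersenne Twister in the random module.

```python
import secrets

key = secrets.token_bytes(16)   # 128 bits of OS-sourced randomness
token = secrets.token_hex(16)   # same amount, as a 32-character hex string
n = secrets.randbelow(100)      # uniform integer in [0, 100), no modulo bias

assert len(key) == 16 and len(token) == 32 and 0 <= n < 100
```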
Generated random numbers are sometimes subjected to statistical tests before use to ensure that the underlying source is still working, and then post-processed to improve their statistical properties. An example would be the TRNG9803 hardware random number generator, which uses an entropy measurement as a hardware test, and then post-processes the random sequence with a shift register stream cipher. It is generally hard to use statistical tests to validate the generated random numbers. Wang and Nicol proposed a distance-based statistical testing technique that is used to identify the weaknesses of several random generators. Li and Wang proposed a method of testing random numbers based on laser chaotic entropy sources using Brownian motion properties. Statistical tests are also used to give confidence that the post-processed final output from a random number generator is truly unbiased, with numerous randomness test suites being developed. == Other considerations == === Reshaping the distribution === ==== Uniform distributions ==== Most random number generators natively work with integers or individual bits, so an extra step is required to arrive at the canonical uniform distribution between 0 and 1. The implementation is not as trivial as dividing the integer by its maximum possible value. Specifically: The integer used in the transformation must provide enough bits for the intended precision. The nature of floating-point math itself means there exists more precision the closer the number is to zero. This extra precision is usually not used due to the sheer number of bits required. Rounding error in division may bias the result. At worst, a supposedly excluded bound may be drawn contrary to expectations based on real-number math. The mainstream algorithm, used by OpenJDK, Rust, and NumPy, is described in a proposal for C++'s STL. It does not use the extra precision and suffers from bias only in the last bit due to round-to-even. 
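The mainstream construction referred to above can be sketched in Python: take 53 random bits (the precision of an IEEE 754 double) and scale by 2**-53, which yields values on an even grid in [0, 1) and never produces the excluded bound 1.0, avoiding the division-rounding pitfalls described in the text.

```python
import random

def uniform01(rng):
    """Canonical uniform double in [0, 1): 53 bits scaled by 2**-53."""
    return rng.getrandbits(53) * 2.0 ** -53

rng = random.Random(123)
xs = [uniform01(rng) for _ in range(100000)]

# Every draw stays strictly below the excluded upper bound.
assert all(0.0 <= x < 1.0 for x in xs)
```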
Other numeric concerns are warranted when shifting this canonical uniform distribution to a different range. A proposed method for the Swift programming language claims to use the full precision everywhere. Uniformly distributed integers are commonly used in algorithms such as the Fisher–Yates shuffle. Again, a naive implementation may induce a modulo bias into the result, so more involved algorithms must be used. A method that nearly never performs division was described in 2018 by Daniel Lemire, with the current state-of-the-art being the arithmetic encoding-inspired 2021 "optimal algorithm" by Stephen Canon of Apple Inc. Most 0 to 1 RNGs include 0 but exclude 1, while others include or exclude both. ==== Other distributions ==== Given a source of uniform random numbers, there are a couple of methods to create a new random source that corresponds to a probability density function. One method, called the inversion method, involves integrating up to an area greater than or equal to the random number (which should be generated between 0 and 1 for proper distributions). A second method, called the acceptance-rejection method, involves choosing an x and y value and testing whether the function of x is greater than the y value. If it is, the x value is accepted. Otherwise, the x value is rejected and the algorithm tries again. As an example for rejection sampling, to generate a pair of statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where r² is χ²-distributed with two degrees of freedom and θ ~ Uniform(0, 2π) (see Box–Muller transform). === Whitening === The outputs of multiple independent RNGs can be combined (for example, using a bit-wise XOR operation) to provide a combined RNG at least as good as the best RNG used. This is referred to as software whitening. Computational and hardware random number generators are sometimes combined to reflect the benefits of both kinds. 
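The acceptance-rejection method described above can be sketched in Python. The triangular density p(x) = 2x on [0, 1] is a hypothetical example, chosen only because its mean (2/3) is easy to check.

```python
import random

def rejection_sample(pdf, lo, hi, pdf_max, rng):
    """Draw (x, y) uniformly in the bounding box [lo, hi] x [0, pdf_max]
    and accept x whenever y falls under the density curve."""
    while True:
        x = rng.uniform(lo, hi)
        y = rng.uniform(0.0, pdf_max)
        if y < pdf(x):
            return x

rng = random.Random(1)
samples = [rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0, rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)  # should be close to 2/3
```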
Computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate true randomness. == Low-discrepancy sequences as an alternative == Some computations making use of a random number generator can be summarized as the computation of a total or average value, such as the computation of integrals by the Monte Carlo method. For such problems, it may be possible to find a more accurate solution by the use of so-called low-discrepancy sequences, also called quasirandom numbers. Such sequences have a definite pattern that fills in gaps evenly, qualitatively speaking; a truly random sequence may, and usually does, leave larger gaps. == Activities and demonstrations == The following sites make available random number samples: The SOCR resource pages contain a number of hands-on interactive activities and demonstrations of random number generation using Java applets. The Quantum Optics Group at the ANU generates random numbers sourced from quantum vacuum. Samples of random numbers are available at their quantum random number generator research page. Random.org makes available random numbers that are sourced from the randomness of atmospheric noise. The Quantum Random Bit Generator Service at the Ruđer Bošković Institute harvests randomness from the quantum process of photonic emission in semiconductors. They supply a variety of ways of fetching the data, including libraries for several programming languages. The Group at the Taiyuan University of Technology generates random numbers sourced from a chaotic laser. Samples of random numbers are available at their physical random number generator service. == Backdoors == Since much cryptography depends on a cryptographically secure random number generator for key and cryptographic nonce generation, if a random number generator can be made predictable, it can be used as backdoor by an attacker to break the encryption. 
The NSA is reported to have inserted a backdoor into the NIST-certified cryptographically secure pseudorandom number generator Dual EC DRBG. If, for example, an SSL connection is created using this random number generator, then according to Matthew Green it would allow the NSA to determine the state of the random number generator, and thereby eventually be able to read all data sent over the SSL connection. Even though it was apparent that Dual_EC_DRBG was a very poor and possibly backdoored pseudorandom number generator long before the NSA backdoor was confirmed in 2013, it had seen significant usage in practice until 2013, for example by the prominent security company RSA Security. There have subsequently been accusations that RSA Security knowingly inserted an NSA backdoor into its products, possibly as part of the Bullrun program. RSA has denied knowingly inserting a backdoor into its products. It has also been theorized that hardware RNGs could be secretly modified to have less entropy than stated, which would make encryption using the hardware RNG susceptible to attack. One such method that has been published works by modifying the dopant mask of the chip, which would be undetectable to optical reverse-engineering. For example, for random number generation in Linux, it is seen as unacceptable to use Intel's RDRAND hardware RNG without mixing the RDRAND output with other sources of entropy to counteract any backdoors in the hardware RNG, especially after the revelation of the NSA Bullrun program. In 2010, a U.S. lottery draw was rigged by the information security director of the Multi-State Lottery Association (MUSL), who surreptitiously installed backdoor malware on the MUSL's secure RNG computer during routine maintenance. Through these rigged draws, he won a total of $16,500,000 over multiple years. == See also == == References == == Further reading == Donald Knuth (1997). "Chapter 3 – Random Numbers". The Art of Computer Programming. Vol. 
2: Seminumerical algorithms (3 ed.). L'Ecuyer, Pierre (2017). "History of Uniform Random Number Generation" (PDF). Proceedings of the 2017 Winter Simulation Conference. IEEE Press. pp. 202–230. L'Ecuyer, Pierre (2012). "Random Number Generation" (PDF). In J. E. Gentle; W. Haerdle; Y. Mori (eds.). Handbook of Computational Statistics: Concepts and Methods. Handbook of Computational Statistics (second ed.). Springer-Verlag. pp. 35–71. doi:10.1007/978-3-642-21551-3_3. hdl:10419/22195. ISBN 978-3-642-21550-6. Kroese, D. P.; Taimre, T.; Botev, Z.I. (2011). "Chapter 1 – Uniform Random Number Generation". Handbook of Monte Carlo Methods. New York: John Wiley & Sons. p. 772. ISBN 978-0-470-17793-8. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Chapter 7. Random Numbers". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. NIST SP800-90A, B, C series on random number generation M. Tomassini; M. Sipper; M. Perrenoud (October 2000). "On the generation of high-quality random numbers by two-dimensional cellular automata". IEEE Transactions on Computers. 49 (10): 1146–1151. doi:10.1109/12.888056. S2CID 10139169. == External links == RANDOM.ORG True Random Number Service Quantum random number generator at ANU Random and Pseudorandom on In Our Time at the BBC jRand a Java-based framework for the generation of simulation sequences, including pseudorandom sequences of numbers Random number generators in NAG Fortran Library Randomness Beacon at NIST, broadcasting full entropy bit-strings in blocks of 512 bits every 60 seconds. Designed to provide unpredictability, autonomy, and consistency. A system call for random numbers: getrandom(), a LWN.net article describing a dedicated Linux system call Statistical Properties of Pseudo Random Sequences and Experiments with PHP and Debian OpenSSL Random Sequence Generator based on Avalanche Noise Cryptographically Enhanced PRNG
Wikipedia/Randomization_function
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher). Key length defines the upper bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length). Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length. == Significance == Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly-used ciphers are based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). 
The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively. A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker. == Key size and encryption system == Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA and Elliptic-curve cryptography [ECC]). They may be grouped according to the central algorithm used (e.g. ECC and Feistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm. The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. 
For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, as of May 2007, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months. The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the foreseeable future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes." The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes. == Brute-force attack == Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entire space of keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to search by brute force, a sufficiently long symmetric key makes this line of attack impractical. With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. 
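The exponential growth of the keyspace can be illustrated with a short calculation; the rate of 10^12 keys per second below is a hypothetical figure chosen for illustration, not a measured one.

```python
def brute_force_years(bits, keys_per_second=1e12):
    """Expected years to search half of a 2**bits keyspace at the
    given (hypothetical) trial rate of keys per second."""
    seconds = 2 ** (bits - 1) / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

# Each added key bit exactly doubles the expected work...
assert brute_force_years(57) == 2 * brute_force_years(56)

# ...so searching a 128-bit keyspace is far beyond any foreseeable
# classical effort, even at a trillion keys per second.
assert brute_force_years(128) > 1e18
```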
However, a quantum computer capable of running Grover's algorithm would be able to search the possible keys more efficiently. A suitably sized quantum computer would reduce a 128-bit key down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports key lengths of 256 bits and longer. == Symmetric algorithm key lengths == IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years". However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. The book Cracking DES (O'Reilly and Associates) tells of the successful ability in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key). The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. 
National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret. In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. As of 2005, 80-bit keys were allowed only until 2010. Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST-approved symmetric encryption algorithms include three-key Triple DES, and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys. == Asymmetric algorithm key lengths == The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future. Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely accepted recommendation of a 1024-bit minimum since at least 2002. 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable sometime between 2006 and 2010, while 2048-bit keys are sufficient until 2030. As of 2020 the largest RSA key publicly known to be cracked is RSA-250 with 829 bits. The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. 
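The RSA-to-symmetric equivalences quoted above can be captured as a small lookup table; the helper name below is illustrative, not a standard API.

```python
# Symmetric-strength bits -> approximately equivalent RSA modulus bits,
# per the NIST figures cited in the text.
RSA_EQUIVALENT_BITS = {80: 1024, 112: 2048, 128: 3072, 256: 15360}

def rsa_bits_for_symmetric(strength):
    """Look up the RSA key size matching a symmetric security level."""
    return RSA_EQUIVALENT_BITS[strength]

# 112-bit strength corresponds to the 2048-bit RSA minimum NIST
# has recommended since 2015.
assert rsa_bits_for_symmetric(112) == 2048
```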
The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key. Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice as many bits as the equivalent symmetric algorithm. A 256-bit Elliptic-curve Diffie–Hellman (ECDH) key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004. The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET; in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information. == Effect of quantum computing attacks on key strength == The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems. Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. 
The implication of this attack is that all data encrypted using current standards-based security systems, such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems, is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest now, decrypt later". Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms. In a 2016 Quantum Computing FAQ, the NSA affirmed: "A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. [...] It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. 
While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. [...] The public-key algorithms (RSA, Diffie-Hellman, [Elliptic-curve Diffie–Hellman] ECDH, and [Elliptic Curve Digital Signature Algorithm] ECDSA) are all vulnerable to attack by a sufficiently large quantum computer. [...] While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. [...] Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. [...] The AES-256 and SHA-384 algorithms are symmetric, and believed to be safe from attack by a large quantum computer." In a 2022 press release, the NSA notified: "A cryptanalytically-relevant quantum computer (CRQC) would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used today. Given foreign pursuits in quantum computing, now is the time to plan, prepare and budget for a transition to [quantum-resistant] QR algorithms to assure sustained protection of [National Security Systems] NSS and related assets in the event a CRQC becomes an achievable reality." 
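The Grover-halving rule discussed in the preceding sections amounts to a one-line calculation: an n-bit symmetric key retains about n/2 bits of security against quantum brute force, so doubling the key length restores the classical rating.

```python
def quantum_security_bits(key_bits):
    """Effective security of an n-bit symmetric key under Grover's
    algorithm: roughly n/2 bits (the BBBV 2^(n/2) lower bound)."""
    return key_bits // 2

# AES-256 retains a 128-bit rating against quantum search,
# while AES-128 drops to 64-bit, roughly DES-equivalent.
assert quantum_security_bits(256) == 128
assert quantum_security_bits(128) == 64
```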
Since September 2022, the NSA has been transitioning from the Commercial National Security Algorithm Suite (now referred to as CNSA 1.0), originally launched in January 2016, to the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), both summarized below: CNSA 2.0 CNSA 1.0 == See also == Key stretching == Notes == == References == == Further reading == Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57. March, 2007 Blaze, Matt; Diffie, Whitfield; Rivest, Ronald L.; et al. "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". January, 1996 Arjen K. Lenstra, Eric R. Verheul: Selecting Cryptographic Key Sizes. J. Cryptology 14(4): 255-293 (2001) — Citeseer link == External links == www.keylength.com: An online keylength calculator Articles discussing the implications of quantum computing NIST cryptographic toolkit Burt Kaliski: TWIRL and RSA key sizes (May 2003)
Wikipedia/Key_space_(cryptography)
In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that it reuses the same intermediate calculations to produce two estimates of different accuracy, allowing for automatic error estimation. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O(h^4) with an error estimator of order O(h^5). By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method that allows for an adaptive stepsize to be determined automatically. == Butcher tableau for Fehlberg's 4(5) method == Any Runge–Kutta method is uniquely identified by its Butcher tableau. The embedded pair proposed by Fehlberg The first row of coefficients at the bottom of the table gives the fifth-order accurate method, and the second row gives the fourth-order accurate method. 
== Implementing an RK4(5) Algorithm == The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α2=1/3) are given in the table below: Fehlberg outlines a method for solving a system of n differential equations of the form: d y i d x = f i ( x , y 1 , y 2 , … , y n ) , i = 1 , 2 , … , n {\displaystyle {\frac {dy_{i}}{dx}}=f_{i}(x,y_{1},y_{2},\ldots ,y_{n}),i=1,2,\ldots ,n} to iteratively solve for y i ( x + h ) , i = 1 , 2 , … , n {\displaystyle y_{i}(x+h),i=1,2,\ldots ,n} where h is an adaptive stepsize to be determined algorithmically. The solution is the weighted average of six increments, where each increment is the product of the size of the interval, h {\textstyle h} , and an estimated slope specified by the function f on the right-hand side of the differential equation. k 1 = h ⋅ f ( x + A ( 1 ) ⋅ h , y ) k 2 = h ⋅ f ( x + A ( 2 ) ⋅ h , y + B ( 2 , 1 ) ⋅ k 1 ) k 3 = h ⋅ f ( x + A ( 3 ) ⋅ h , y + B ( 3 , 1 ) ⋅ k 1 + B ( 3 , 2 ) ⋅ k 2 ) k 4 = h ⋅ f ( x + A ( 4 ) ⋅ h , y + B ( 4 , 1 ) ⋅ k 1 + B ( 4 , 2 ) ⋅ k 2 + B ( 4 , 3 ) ⋅ k 3 ) k 5 = h ⋅ f ( x + A ( 5 ) ⋅ h , y + B ( 5 , 1 ) ⋅ k 1 + B ( 5 , 2 ) ⋅ k 2 + B ( 5 , 3 ) ⋅ k 3 + B ( 5 , 4 ) ⋅ k 4 ) k 6 = h ⋅ f ( x + A ( 6 ) ⋅ h , y + B ( 6 , 1 ) ⋅ k 1 + B ( 6 , 2 ) ⋅ k 2 + B ( 6 , 3 ) ⋅ k 3 + B ( 6 , 4 ) ⋅ k 4 + B ( 6 , 5 ) ⋅ k 5 ) {\displaystyle {\begin{aligned}k_{1}&=h\cdot f(x+A(1)\cdot h,y)\\k_{2}&=h\cdot f(x+A(2)\cdot h,y+B(2,1)\cdot k_{1})\\k_{3}&=h\cdot f(x+A(3)\cdot h,y+B(3,1)\cdot k_{1}+B(3,2)\cdot k_{2})\\k_{4}&=h\cdot f(x+A(4)\cdot h,y+B(4,1)\cdot k_{1}+B(4,2)\cdot k_{2}+B(4,3)\cdot k_{3})\\k_{5}&=h\cdot f(x+A(5)\cdot h,y+B(5,1)\cdot k_{1}+B(5,2)\cdot k_{2}+B(5,3)\cdot k_{3}+B(5,4)\cdot k_{4})\\k_{6}&=h\cdot f(x+A(6)\cdot h,y+B(6,1)\cdot k_{1}+B(6,2)\cdot k_{2}+B(6,3)\cdot k_{3}+B(6,4)\cdot k_{4}+B(6,5)\cdot k_{5})\end{aligned}}} Then the weighted average is: y ( x + h ) = y ( x ) + C H ( 1 ) ⋅ k 1 + C H ( 2 ) ⋅ k 2 + C H ( 3 ) ⋅ k 3 + C H ( 4 ) ⋅ k 4 + C H ( 5 ) ⋅ k 5 + C H ( 6 ) ⋅ k 6
{\displaystyle y(x+h)=y(x)+CH(1)\cdot k_{1}+CH(2)\cdot k_{2}+CH(3)\cdot k_{3}+CH(4)\cdot k_{4}+CH(5)\cdot k_{5}+CH(6)\cdot k_{6}} The estimate of the truncation error is: T E = | C T ( 1 ) ⋅ k 1 + C T ( 2 ) ⋅ k 2 + C T ( 3 ) ⋅ k 3 + C T ( 4 ) ⋅ k 4 + C T ( 5 ) ⋅ k 5 + C T ( 6 ) ⋅ k 6 | {\displaystyle \mathrm {TE} =\left|\mathrm {CT} (1)\cdot k_{1}+\mathrm {CT} (2)\cdot k_{2}+\mathrm {CT} (3)\cdot k_{3}+\mathrm {CT} (4)\cdot k_{4}+\mathrm {CT} (5)\cdot k_{5}+\mathrm {CT} (6)\cdot k_{6}\right|} At the completion of the step, a new stepsize is calculated: h new = 0.9 ⋅ h ⋅ ( ε T E ) 1 / 5 {\displaystyle h_{\text{new}}=0.9\cdot h\cdot \left({\frac {\varepsilon }{TE}}\right)^{1/5}} If T E > ε {\textstyle \mathrm {TE} >\varepsilon } , then replace h {\textstyle h} with h new {\textstyle h_{\text{new}}} and repeat the step. If T E ⩽ ε {\textstyle TE\leqslant \varepsilon } , then the step is completed. Replace h {\textstyle h} with h new {\textstyle h_{\text{new}}} for the next step. The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α2 = 3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: In another table in Fehlberg, coefficients for an RKF4(5) derived by D. Sarafyan are given: == See also == List of Runge–Kutta methods Numerical methods for ordinary differential equations Runge–Kutta methods == Notes == == References == Fehlberg, Erwin (1968) Classical fifth-, sixth-, seventh-, and eighth-order Runge-Kutta formulas with stepsize control. NASA Technical Report 287. https://ntrs.nasa.gov/api/citations/19680027281/downloads/19680027281.pdf Fehlberg, Erwin (1969) Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems. Vol. 315. National aeronautics and space administration. Fehlberg, Erwin (1969). "Klassische Runge-Kutta-Nystrom-Formeln funfter und siebenter Ordnung mit Schrittweiten-Kontrolle". Computing. 
4: 93–106. doi:10.1007/BF02234758. S2CID 38715401. Fehlberg, Erwin (1970) Some experimental results concerning the error propagation in Runge-Kutta type integration formulas. NASA Technical Report R-352. https://ntrs.nasa.gov/api/citations/19700031412/downloads/19700031412.pdf Fehlberg, Erwin (1970). "Klassische Runge-Kutta-Formeln vierter und niedrigerer Ordnung mit Schrittweiten-Kontrolle und ihre Anwendung auf Wärmeleitungsprobleme," Computing (Arch. Elektron. Rechnen), vol. 6, pp. 61–71. doi:10.1007/BF02241732 Hairer, Ernst; Nørsett, Syvert; Wanner, Gerhard (1993). Solving Ordinary Differential Equations I: Nonstiff Problems (Second ed.). Berlin: Springer-Verlag. ISBN 3-540-56670-8. Sarafyan, Diran (1966) Error Estimation for Runge-Kutta Methods Through Pseudo-Iterative Formulas. Technical Report No. 14, Louisiana State University in New Orleans, May 1966. == Further reading == Fehlberg, E (1958). "Eine Methode zur Fehlerverkleinerung beim Runge-Kutta-Verfahren". Zeitschrift für Angewandte Mathematik und Mechanik. 38 (11/12): 421–426. Bibcode:1958ZaMM...38..421F. doi:10.1002/zamm.19580381102. Fehlberg, E (1964). "New high-order Runge-Kutta formulas with step size control for systems of first and second-order differential equations". Zeitschrift für Angewandte Mathematik und Mechanik. 44 (S1): T17 – T29. doi:10.1002/zamm.19640441310. Fehlberg, E (1972). "Klassische Runge-Kutta-Nystrom-Formeln mit Schrittweiten-Kontrolle fur Differentialgleichungen x.. = f(t,x)". Computing. 10: 305–315. doi:10.1007/BF02242243. S2CID 37369149. Fehlberg, E (1975). "Klassische Runge-Kutta-Nystrom-Formeln mit Schrittweiten-Kontrolle fur Differentialgleichungen x.. = f(t,x,x.)". Computing. 14: 371–387. doi:10.1007/BF02253548. S2CID 30533090. Simos, T. E. (1993). "A Runge-Kutta Fehlberg method with phase-lag of order infinity for initial-value problems with oscillating solution". Computers & Mathematics with Applications. 25 (6): 95–101. doi:10.1016/0898-1221(93)90303-D.. 
Handapangoda, C. C.; Premaratne, M.; Yeo, L.; Friend, J. (2008). "Laguerre Runge-Kutta-Fehlberg Method for Simulating Laser Pulse Propagation in Biological Tissue". IEEE Journal of Selected Topics in Quantum Electronics. 1 (14): 105–112. Bibcode:2008IJSTQ..14..105H. doi:10.1109/JSTQE.2007.913971. S2CID 13069335.. Simos, T. E. (1995). "Modified Runge–Kutta–Fehlberg methods for periodic initial-value problems". Japan Journal of Industrial and Applied Mathematics. 12 (1): 109. doi:10.1007/BF03167384. S2CID 120146558.. Sarafyan, D. (1994). "Approximate Solution of Ordinary Differential Equations and Their Systems Through Discrete and Continuous Embedded Runge-Kutta Formulae and Upgrading Their Order". Computers & Mathematics with Applications. 28 (10–12): 353–384. doi:10.1016/0898-1221(94)00201-0. Paul, S.; Mondal, S. P.; Bhattacharya, P. (2016). "Numerical solution of Lotka Volterra prey predator model by using Runge–Kutta–Fehlberg method and Laplace Adomian decomposition method". Alexandria Engineering Journal. 55 (1): 613–617. doi:10.1016/j.aej.2015.12.026.
Wikipedia/Runge–Kutta–Fehlberg_method
In mathematics, the Parker–Sochacki method is an algorithm for solving systems of ordinary differential equations (ODEs), developed by G. Edgar Parker and James Sochacki, of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. == Summary == The Parker–Sochacki method rests on two simple observations: If a set of ODEs has a particular form, then the Picard method can be used to find their solution in the form of a power series. If the ODEs do not have the required form, it is nearly always possible to find an expanded set of equations that do have the required form, such that a subset of the solution is a solution of the original ODEs. Several coefficients of the power series are calculated in turn, a time step is chosen, the series is evaluated at that time, and the process repeats. The end result is a high order piecewise solution to the original ODE problem. The order of the solution desired is an adjustable variable in the program that can change between steps. The order of the solution is limited only by the floating point representation on the machine running the program; in some cases it can be extended by using arbitrary precision floating point numbers or, for special cases, by finding solutions with only integer or rational coefficients. == Advantages == The method requires only addition, subtraction, and multiplication, making it very convenient for high-speed computation. (The only divisions are inverses of small integers, which can be precomputed.) Use of a high order (calculating many coefficients of the power series) is convenient. (Typically a higher order permits a longer time step without loss of accuracy, which improves efficiency.) The order and step size can be easily changed from one step to the next. It is possible to calculate a guaranteed error bound on the solution.
Arbitrary precision floating point libraries allow this method to compute arbitrarily accurate solutions. With the Parker–Sochacki method, information between integration steps is developed at high order. As the Parker–Sochacki method integrates, the program can be designed to save the power series coefficients that provide a smooth solution between points in time. The coefficients can be saved and used so that polynomial evaluation provides the high order solution between steps. With most other classical integration methods, one would have to resort to interpolation to get information between integration steps, leading to an increase of error. There is an a priori error bound for a single step with the Parker–Sochacki method. This allows a Parker–Sochacki program to calculate the step size that guarantees that the error is below any non-zero given tolerance. Using this calculated step size with an error tolerance of less than half of the machine epsilon yields a symplectic integration. == Disadvantages == Most methods for numerically solving ODEs require only the evaluation of derivatives for chosen values of the variables, so systems like MATLAB include implementations of several methods all sharing the same calling sequence. Users can try different methods by simply changing the name of the function called. The Parker–Sochacki method requires more work to put the equations into the proper form, and cannot use the same calling sequence. == References == == External links == Polynomial ODEs – Examples, Solutions, Properties (PDF), retrieved August 27, 2017. A thorough explanation of the paradigm and application of the Parker–Sochacki method Joseph W. Rudmin (1998), "Application of the Parker–Sochacki Method to Celestial Mechanics", Journal of Computational Neuroscience, 27: 115–133, arXiv:1007.1677, doi:10.1007/s10827-008-0131-5. 
A demonstration of the theory and usage of the Parker–Sochacki method, including a solution for the classical Newtonian N-body problem with mutual gravitational attraction. The Modified Picard Method., retrieved November 11, 2013. A collection of papers and some Matlab code.
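As a minimal sketch of the method's core recurrence, consider the already-polynomial ODE y′ = y², whose exact solution with y(0) = 1 is 1/(1 − t). Matching coefficients of tⁿ on both sides gives (n+1)·c₍ₙ₊₁₎ = Σₖ cₖ·c₍ₙ₋ₖ₎ (a Cauchy product), so only additions, multiplications, and divisions by small integers occur. The function names below are illustrative:

```python
def maclaurin_coeffs(y0, order):
    """Maclaurin coefficients c[0..order] of the solution of y' = y^2, y(0) = y0.
    Each coefficient is obtained from the previous ones via the Cauchy product
    (n+1) c_{n+1} = sum_{k=0}^{n} c_k c_{n-k}."""
    c = [float(y0)]
    for n in range(order):
        c.append(sum(c[k] * c[n - k] for k in range(n + 1)) / (n + 1))
    return c

def eval_poly(c, t):
    """Evaluate the truncated series at t by Horner's rule."""
    acc = 0.0
    for coeff in reversed(c):
        acc = acc * t + coeff
    return acc

c = maclaurin_coeffs(1.0, 30)   # for y0 = 1 every coefficient equals 1
print(eval_poly(c, 0.5))        # approaches 1/(1 - 0.5) = 2 as order grows
```

In a full integrator one would choose a step well inside the series' radius of convergence (here |t| < 1), evaluate the polynomial there, and restart the recurrence from the new value, exactly as described in the summary above.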
Wikipedia/Parker–Sochacki_method
In mathematics and numerical analysis, the Nyström method or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into n {\displaystyle n} discrete intervals; quadrature or numerical integration determines the weights and locations of representative points for the integral. The problem becomes a system of linear equations with n {\displaystyle n} equations and n {\displaystyle n} unknowns, and the underlying function is implicitly represented by an interpolation using the chosen quadrature rule. This discrete problem may be ill-conditioned, depending on the original problem and the chosen quadrature rule. Since the linear equations require O ( n 3 ) {\displaystyle O(n^{3})} operations to solve, high-order quadrature rules perform better because low-order quadrature rules require large n {\displaystyle n} for a given accuracy. Gaussian quadrature is normally a good choice for smooth, non-singular problems. == Discretization of the integral == Standard quadrature methods seek to represent an integral as a weighted sum in the following manner: ∫ a b h ( x ) d x ≈ ∑ k = 1 n w k h ( x k ) {\displaystyle \int _{a}^{b}h(x)\;\mathrm {d} x\approx \sum _{k=1}^{n}w_{k}h(x_{k})} where w k {\displaystyle w_{k}} are the weights of the quadrature rule, and points x k {\displaystyle x_{k}} are the abscissas. == Example == Applying this to the inhomogeneous Fredholm equation of the second kind f ( x ) = λ u ( x ) − ∫ a b K ( x , x ′ ) f ( x ′ ) d x ′ {\displaystyle f(x)=\lambda u(x)-\int _{a}^{b}K(x,x')f(x')\;\mathrm {d} x'} , results in f ( x ) ≈ λ u ( x ) − ∑ k = 1 n w k K ( x , x k ) f ( x k ) {\displaystyle f(x)\approx \lambda u(x)-\sum _{k=1}^{n}w_{k}K(x,x_{k})f(x_{k})} . == See also == Boundary element method == References == == Bibliography == Leonard M. Delves & Joan E. Walsh (eds): Numerical Solution of Integral Equations, Clarendon, Oxford, 1974.
Hans-Jürgen Reinhardt: Analysis of Approximation Methods for Differential and Integral Equations, Springer, New York, 1985.
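A concrete sketch of the discretization described in the example above, using Gauss–Legendre quadrature and NumPy for the linear solve. The kernel K(x, x′) = x·x′, λ = 1, and u(x) = 4x/3 are illustrative choices made so that the exact solution of the sample equation is f(x) = x; the function names are not from any particular library:

```python
import numpy as np

def nystrom_solve(K, u, lam, a, b, n):
    """Solve f(x) = lam*u(x) - int_a^b K(x, x') f(x') dx' at Gauss-Legendre
    nodes by replacing the integral with a quadrature sum (Nystrom method)."""
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [a, b].
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    # Collocating at the nodes gives the linear system
    # f_i + sum_j w_j K(x_i, x_j) f_j = lam * u(x_i).
    mat = np.eye(n) + K(x[:, None], x[None, :]) * w[None, :]
    f = np.linalg.solve(mat, lam * u(x))
    return x, f

# Degenerate kernel K(x, x') = x x' with u(x) = 4x/3 and lam = 1;
# for this equation the exact solution is f(x) = x.
x, f = nystrom_solve(lambda x, xp: x * xp, lambda x: 4 * x / 3, 1.0, 0.0, 1.0, 8)
print(np.max(np.abs(f - x)))  # near machine precision: the quadrature is
                              # exact for the polynomial integrand here
```

Because the quadrature integrates the polynomial integrand exactly, the discrete solution agrees with the exact one up to rounding; for general kernels the error instead tracks the accuracy of the chosen quadrature rule, as discussed above.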
Wikipedia/Nyström_method
In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to an initial value problem. It involves finding solutions to the initial value problem for different initial conditions until one finds the solution that also satisfies the boundary conditions of the boundary value problem. In layman's terms, one "shoots" out trajectories in different directions from one boundary until one finds the trajectory that "hits" the other boundary condition. == Mathematical description == Suppose one wants to solve the boundary-value problem y ″ ( t ) = f ( t , y ( t ) , y ′ ( t ) ) , y ( t 0 ) = y 0 , y ( t 1 ) = y 1 . {\displaystyle y''(t)=f(t,y(t),y'(t)),\quad y(t_{0})=y_{0},\quad y(t_{1})=y_{1}.} Let y ( t ; a ) {\displaystyle y(t;a)} solve the initial-value problem y ″ ( t ) = f ( t , y ( t ) , y ′ ( t ) ) , y ( t 0 ) = y 0 , y ′ ( t 0 ) = a . {\displaystyle y''(t)=f(t,y(t),y'(t)),\quad y(t_{0})=y_{0},\quad y'(t_{0})=a.} If y ( t 1 ; a ) = y 1 {\displaystyle y(t_{1};a)=y_{1}} , then y ( t ; a ) {\displaystyle y(t;a)} is also a solution of the boundary-value problem. The shooting method is the process of solving the initial value problem for many different values of a {\displaystyle a} until one finds the solution y ( t ; a ) {\displaystyle y(t;a)} that satisfies the desired boundary conditions. Typically, one does so numerically. The solution(s) correspond to root(s) of F ( a ) = y ( t 1 ; a ) − y 1 . {\displaystyle F(a)=y(t_{1};a)-y_{1}.} To systematically vary the shooting parameter a {\displaystyle a} and find the root, one can employ standard root-finding algorithms like the bisection method or Newton's method. Roots of F {\displaystyle F} and solutions to the boundary value problem are equivalent. If a {\displaystyle a} is a root of F {\displaystyle F} , then y ( t ; a ) {\displaystyle y(t;a)} is a solution of the boundary value problem. 
Conversely, if the boundary value problem has a solution y ( t ) {\displaystyle y(t)} , it is also the unique solution y ( t ; a ) {\displaystyle y(t;a)} of the initial value problem where a = y ′ ( t 0 ) {\displaystyle a=y'(t_{0})} , so a {\displaystyle a} is a root of F {\displaystyle F} . == Etymology and intuition == The term "shooting method" has its origin in artillery. An analogy for the shooting method is to place a cannon at the position y ( t 0 ) = y 0 {\displaystyle y(t_{0})=y_{0}} , then vary the angle a = y ′ ( t 0 ) {\displaystyle a=y'(t_{0})} of the cannon, then fire the cannon until it hits the boundary value y ( t 1 ) = y 1 {\displaystyle y(t_{1})=y_{1}} . Between each shot, the direction of the cannon is adjusted based on the previous shot, so every shot hits closer than the previous one. The trajectory that "hits" the desired boundary value is the solution to the boundary value problem — hence the name "shooting method". == Linear shooting method == The boundary value problem is linear if f has the form f ( t , y ( t ) , y ′ ( t ) ) = p ( t ) y ′ ( t ) + q ( t ) y ( t ) + r ( t ) . {\displaystyle f(t,y(t),y'(t))=p(t)y'(t)+q(t)y(t)+r(t).} In this case, the solution to the boundary value problem is usually given by: y ( t ) = y ( 1 ) ( t ) + y 1 − y ( 1 ) ( t 1 ) y ( 2 ) ( t 1 ) y ( 2 ) ( t ) {\displaystyle y(t)=y_{(1)}(t)+{\frac {y_{1}-y_{(1)}(t_{1})}{y_{(2)}(t_{1})}}y_{(2)}(t)} where y ( 1 ) ( t ) {\displaystyle y_{(1)}(t)} is the solution to the initial value problem: y ( 1 ) ″ ( t ) = p ( t ) y ( 1 ) ′ ( t ) + q ( t ) y ( 1 ) ( t ) + r ( t ) , y ( 1 ) ( t 0 ) = y 0 , y ( 1 ) ′ ( t 0 ) = 0 , {\displaystyle y_{(1)}''(t)=p(t)y_{(1)}'(t)+q(t)y_{(1)}(t)+r(t),\quad y_{(1)}(t_{0})=y_{0},\quad y_{(1)}'(t_{0})=0,} and y ( 2 ) ( t ) {\displaystyle y_{(2)}(t)} is the solution to the initial value problem: y ( 2 ) ″ ( t ) = p ( t ) y ( 2 ) ′ ( t ) + q ( t ) y ( 2 ) ( t ) , y ( 2 ) ( t 0 ) = 0 , y ( 2 ) ′ ( t 0 ) = 1. 
{\displaystyle y_{(2)}''(t)=p(t)y_{(2)}'(t)+q(t)y_{(2)}(t),\quad y_{(2)}(t_{0})=0,\quad y_{(2)}'(t_{0})=1.} See the proof for the precise condition under which this result holds. == Examples == === Standard boundary value problem === A boundary value problem is given as follows by Stoer and Bulirsch (Section 7.3.1). w ″ ( t ) = 3 2 w 2 ( t ) , w ( 0 ) = 4 , w ( 1 ) = 1 {\displaystyle w''(t)={\frac {3}{2}}w^{2}(t),\quad w(0)=4,\quad w(1)=1} The initial value problem w ″ ( t ) = 3 2 w 2 ( t ) , w ( 0 ) = 4 , w ′ ( 0 ) = s {\displaystyle w''(t)={\frac {3}{2}}w^{2}(t),\quad w(0)=4,\quad w'(0)=s} was solved for s = −1, −2, −3, ..., −100, and F(s) = w(1;s) − 1 plotted in the Figure 2. Inspecting the plot of F, we see that there are roots near −8 and −36. Some trajectories of w(t;s) are shown in the Figure 1. Stoer and Bulirsch state that there are two solutions, which can be found by algebraic methods. These correspond to the initial conditions w′(0) = −8 and w′(0) = −35.9 (approximately). === Eigenvalue problem === The shooting method can also be used to solve eigenvalue problems. Consider the time-independent Schrödinger equation for the quantum harmonic oscillator − 1 2 ψ n ″ ( x ) + 1 2 x 2 ψ n ( x ) = E n ψ n ( x ) . {\displaystyle -{\frac {1}{2}}\psi _{n}''(x)+{\frac {1}{2}}x^{2}\psi _{n}(x)=E_{n}\psi _{n}(x).} In quantum mechanics, one seeks normalizable wavefunctions ψ n ( x ) {\displaystyle \psi _{n}(x)} and their corresponding energies subject to the boundary conditions ψ n ( x → + ∞ ) = ψ n ( x → − ∞ ) = 0. {\displaystyle \psi _{n}(x\rightarrow +\infty )=\psi _{n}(x\rightarrow -\infty )=0.} The problem can be solved analytically to find the energies E n = n + 1 / 2 {\displaystyle E_{n}=n+1/2} for n = 0 , 1 , 2 , … {\displaystyle n=0,1,2,\dots } , but also serves as an excellent illustration of the shooting method. 
To apply it, first note some general properties of the Schrödinger equation: If ψ n ( x ) {\displaystyle \psi _{n}(x)} is an eigenfunction, so is C ψ n ( x ) {\displaystyle C\psi _{n}(x)} for any nonzero constant C {\displaystyle C} . The n {\displaystyle n} -th excited state ψ n ( x ) {\displaystyle \psi _{n}(x)} has n {\displaystyle n} roots where ψ n ( x ) = 0 {\displaystyle \psi _{n}(x)=0} . For even n {\displaystyle n} , the n {\displaystyle n} -th excited state ψ n ( x ) = ψ n ( − x ) {\displaystyle \psi _{n}(x)=\psi _{n}(-x)} is symmetric and nonzero at the origin. For odd n {\displaystyle n} , the n {\displaystyle n} -th excited state ψ n ( x ) = − ψ n ( − x ) {\displaystyle \psi _{n}(x)=-\psi _{n}(-x)} is antisymmetric and thus zero at the origin. To find the n {\displaystyle n} -th excited state ψ n ( x ) {\displaystyle \psi _{n}(x)} and its energy E n {\displaystyle E_{n}} , the shooting method is then to: Guess some energy E n {\displaystyle E_{n}} . Integrate the Schrödinger equation. For example, use the central finite difference − 1 2 ψ n i + 1 − 2 ψ n i + ψ n i − 1 Δ x 2 + 1 2 ( x i ) 2 ψ n i = E n ψ n i . {\displaystyle -{\frac {1}{2}}{\frac {\psi _{n}^{i+1}-2\psi _{n}^{i}+\psi _{n}^{i-1}}{{\Delta x}^{2}}}+{\frac {1}{2}}(x^{i})^{2}\psi _{n}^{i}=E_{n}\psi _{n}^{i}.} If n {\displaystyle n} is even, set ψ 0 {\displaystyle \psi _{0}} to some arbitrary number (say ψ n 0 = 1 {\displaystyle \psi _{n}^{0}=1} — the wavefunction can be normalized after integration anyway) and use the symmetric property to find all remaining ψ n i {\displaystyle \psi _{n}^{i}} . If n {\displaystyle n} is odd, set ψ n 0 = 0 {\displaystyle \psi _{n}^{0}=0} and ψ n 1 {\displaystyle \psi _{n}^{1}} to some arbitrary number (say ψ n 1 = 1 {\displaystyle \psi _{n}^{1}=1} — the wavefunction can be normalized after integration anyway) and find all remaining ψ n i {\displaystyle \psi _{n}^{i}} . 
Count the roots of ψ n {\displaystyle \psi _{n}} and refine the guess for the energy E n {\displaystyle E_{n}} . If there are n {\displaystyle n} or fewer roots, the guessed energy is too low, so increase it and repeat the process. If there are more than n {\displaystyle n} roots, the guessed energy is too high, so decrease it and repeat the process. The energy-guessing can be done with the bisection method, and the process can be terminated when the energy difference is sufficiently small. Then one can take any energy in the interval to be the correct energy. == See also == Direct multiple shooting method Computation of radiowave attenuation in the atmosphere == Notes == == References == Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 18.1. The Shooting Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. == External links == Brief Description of ODEPACK (at Netlib; contains LSODE) Shooting method of solving boundary value problems – Notes, PPT, Maple, Mathcad, Matlab, Mathematica at Holistic Numerical Methods Institute
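The standard Stoer–Bulirsch boundary value problem above can be reproduced with a short sketch: RK4 integrates the inner initial value problem, and bisection drives F(s) = w(1; s) − 1 to zero on a bracket around the root near −8. The bracket, step counts, and function names are illustrative choices:

```python
def integrate(s, n=200):
    """RK4-integrate w'' = 1.5 w^2 from t = 0 to t = 1 with w(0) = 4,
    w'(0) = s, written as the first-order system (w, v)' = (v, 1.5 w^2);
    returns w(1)."""
    def rhs(w, v):
        return v, 1.5 * w * w
    h = 1.0 / n
    w, v = 4.0, s
    for _ in range(n):
        k1w, k1v = rhs(w, v)
        k2w, k2v = rhs(w + 0.5 * h * k1w, v + 0.5 * h * k1v)
        k3w, k3v = rhs(w + 0.5 * h * k2w, v + 0.5 * h * k2v)
        k4w, k4v = rhs(w + h * k3w, v + h * k3v)
        w += h * (k1w + 2 * k2w + 2 * k3w + k4w) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return w

def F(s):
    return integrate(s) - 1.0   # mismatch at the far boundary w(1) = 1

# Bisection on the bracket [-9, -7] around the root near s = -8.
lo, hi = -9.0, -7.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0:
        hi = mid
    else:
        lo = mid
s = 0.5 * (lo + hi)
print(s)  # close to -8, matching the exact solution w(t) = 4 / (1 + t)^2
```

Any root-finder could replace the bisection loop, as noted above; bisection is used here only because the plot of F supplies an obvious sign-changing bracket.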
Wikipedia/Shooting_method
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book Institutionum calculi integralis (published 1768–1770). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., predictor–corrector method. == Geometrical description == === Purpose and why it works === Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated. The idea is that while the curve is initially unknown, its starting point, which we denote by A 0 , {\displaystyle A_{0},} is known (see Figure 1). Then, from the differential equation, the slope to the curve at A 0 {\displaystyle A_{0}} can be computed, and so, the tangent line. Take a small step along that tangent line up to a point A 1 . {\displaystyle A_{1}.} Along this small step, the slope does not change too much, so A 1 {\displaystyle A_{1}} will be close to the curve. If we pretend that A 1 {\displaystyle A_{1}} is still on the curve, the same reasoning as for the point A 0 {\displaystyle A_{0}} above can be used. After several steps, a polygonal curve ( A 0 , A 1 , A 2 , A 3 , … {\displaystyle A_{0},A_{1},A_{2},A_{3},\dots } ) is computed. 
In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite. === First-order process === Suppose that the values t 0 {\displaystyle t_{0}} and y ( t 0 ) {\displaystyle y(t_{0})} are given, and that the derivative of y {\displaystyle y} is a given function of t {\displaystyle t} and y {\displaystyle y} , denoted as y ′ ( t ) = f ( t , y ( t ) ) {\displaystyle y'(t)=f{\bigl (}t,y(t){\bigr )}} . Begin the process by setting y 0 = y ( t 0 ) {\displaystyle y_{0}=y(t_{0})} . Next, choose a value h {\displaystyle h} for the size of every step along the t-axis, and set t n = t 0 + n h {\displaystyle t_{n}=t_{0}+nh} (or equivalently t n + 1 = t n + h {\displaystyle t_{n+1}=t_{n}+h} ). Now, the Euler method is used to find y n + 1 {\displaystyle y_{n+1}} from y n {\displaystyle y_{n}} and t n {\displaystyle t_{n}} : y n + 1 = y n + h f ( t n , y n ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).} The value of y n {\displaystyle y_{n}} is an approximation of the solution at time t n {\displaystyle t_{n}} , i.e., y n ≈ y ( t n ) {\displaystyle y_{n}\approx y(t_{n})} . The Euler method is explicit, i.e. the solution y n + 1 {\displaystyle y_{n+1}} is an explicit function of y i {\displaystyle y_{i}} for i ≤ n {\displaystyle i\leq n} . === Higher-order process === While the Euler method integrates a first-order ODE, any ODE of order N {\displaystyle N} can be represented as a system of first-order ODEs.
When given an ODE of order N + 1 {\displaystyle N+1} defined as y ( N + 1 ) ( t ) = f ( t , y ( t ) , y ′ ( t ) , … , y ( N ) ( t ) ) , {\displaystyle y^{(N+1)}(t)=f\left(t,y(t),y'(t),\ldots ,y^{(N)}(t)\right),} as well as h {\displaystyle h} , t 0 {\displaystyle t_{0}} , and y 0 , y 0 ′ , … , y 0 ( N ) {\displaystyle y_{0},y'_{0},\dots ,y_{0}^{(N)}} , we implement the following formula until we reach the approximation of the solution to the ODE at the desired time: y → i + 1 = ( y i + 1 y i + 1 ′ ⋮ y i + 1 ( N − 1 ) y i + 1 ( N ) ) = ( y i + h ⋅ y i ′ y i ′ + h ⋅ y i ″ ⋮ y i ( N − 1 ) + h ⋅ y i ( N ) y i ( N ) + h ⋅ f ( t i , y i , y i ′ , … , y i ( N ) ) ) {\displaystyle {\vec {y}}_{i+1}={\begin{pmatrix}y_{i+1}\\y'_{i+1}\\\vdots \\y_{i+1}^{(N-1)}\\y_{i+1}^{(N)}\end{pmatrix}}={\begin{pmatrix}y_{i}+h\cdot y'_{i}\\y'_{i}+h\cdot y''_{i}\\\vdots \\y_{i}^{(N-1)}+h\cdot y_{i}^{(N)}\\y_{i}^{(N)}+h\cdot f\left(t_{i},y_{i},y'_{i},\ldots ,y_{i}^{(N)}\right)\end{pmatrix}}} These first-order systems can be handled by Euler's method or, in fact, by any other scheme for first-order systems. == First-order example == Given the initial value problem y ′ = y , y ( 0 ) = 1 , {\displaystyle y'=y,\quad y(0)=1,} we would like to use the Euler method to approximate y ( 4 ) {\displaystyle y(4)} . === Using step size equal to 1 (h = 1) === The Euler method is y n + 1 = y n + h f ( t n , y n ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).} So first we must compute f ( t 0 , y 0 ) {\displaystyle f(t_{0},y_{0})} . In this simple differential equation, the function f {\displaystyle f} is defined by f ( t , y ) = y {\displaystyle f(t,y)=y} . We have f ( t 0 , y 0 ) = f ( 0 , 1 ) = 1. {\displaystyle f(t_{0},y_{0})=f(0,1)=1.} By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point ( 0 , 1 ) {\displaystyle (0,1)} .
Recall that the slope is defined as the change in y {\displaystyle y} divided by the change in t {\displaystyle t} , or Δ y Δ t {\textstyle {\frac {\Delta y}{\Delta t}}} . The next step is to multiply the above value by the step size h {\displaystyle h} , which we take equal to one here: h ⋅ f ( y 0 ) = 1 ⋅ 1 = 1. {\displaystyle h\cdot f(y_{0})=1\cdot 1=1.} Since the step size is the change in t {\displaystyle t} , when we multiply the step size and the slope of the tangent, we get a change in y {\displaystyle y} value. This value is then added to the initial y {\displaystyle y} value to obtain the next value to be used for computations. y 0 + h f ( y 0 ) = y 1 = 1 + 1 ⋅ 1 = 2. {\displaystyle y_{0}+hf(y_{0})=y_{1}=1+1\cdot 1=2.} The above steps should be repeated to find y 2 {\displaystyle y_{2}} , y 3 {\displaystyle y_{3}} and y 4 {\displaystyle y_{4}} . y 2 = y 1 + h f ( y 1 ) = 2 + 1 ⋅ 2 = 4 , y 3 = y 2 + h f ( y 2 ) = 4 + 1 ⋅ 4 = 8 , y 4 = y 3 + h f ( y 3 ) = 8 + 1 ⋅ 8 = 16. {\displaystyle {\begin{aligned}y_{2}&=y_{1}+hf(y_{1})=2+1\cdot 2=4,\\y_{3}&=y_{2}+hf(y_{2})=4+1\cdot 4=8,\\y_{4}&=y_{3}+hf(y_{3})=8+1\cdot 8=16.\end{aligned}}} Due to the repetitive nature of this algorithm, it can be helpful to organize computations in a chart form, as seen below, to avoid making errors. The conclusion of this computation is that y 4 = 16 {\displaystyle y_{4}=16} . The exact solution of the differential equation is y ( t ) = e t {\displaystyle y(t)=e^{t}} , so y ( 4 ) = e 4 ≈ 54.598 {\displaystyle y(4)=e^{4}\approx 54.598} . Although the approximation of the Euler method was not very precise in this specific case, particularly due to a large value step size h {\displaystyle h} , its behaviour is qualitatively correct as the figure shows. === Using other step sizes === As suggested in the introduction, the Euler method is more accurate if the step size h {\displaystyle h} is smaller. The table below shows the result with different step sizes. 
The top row corresponds to the example in the previous section, and the second row is illustrated in the figure. The error recorded in the last column of the table is the difference between the exact solution at t = 4 {\displaystyle t=4} and the Euler approximation. In the bottom of the table, the step size is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section Global truncation error for more details. Other methods, such as the midpoint method also illustrated in the figures, behave more favourably: the global error of the midpoint method is roughly proportional to the square of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order. We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, higher-order methods are employed such as Runge–Kutta methods or linear multistep methods, especially if a high accuracy is desired. 
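The worked example and the halving-of-error behaviour described above can be reproduced with a short script (a sketch; the function name is illustrative):

```python
import math

def euler(f, t0, y0, h, t_end):
    """Forward Euler: integrate y' = f(t, y) from t0 to t_end with fixed step h."""
    t, y = t0, y0
    n = round((t_end - t0) / h)
    for _ in range(n):
        y += h * f(t, y)   # y_{n+1} = y_n + h f(t_n, y_n)
        t += h
    return y

f = lambda t, y: y                     # y' = y, y(0) = 1, exact solution e^t
print(euler(f, 0.0, 1.0, 1.0, 4.0))    # 16.0, as in the worked example
for h in (0.25, 0.125, 0.0625):
    err = abs(euler(f, 0.0, 1.0, h, 4.0) - math.exp(4.0))
    print(h, err)                      # halving h roughly halves the error
```

The printed errors shrink roughly in proportion to h, the first-order behaviour discussed above; a second-order method such as the midpoint method would instead quarter the error with each halving.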
== Higher-order example == For this third-order example, assume that the following information is given: y ‴ + 4 t y ″ − t 2 y ′ − ( cos ⁡ t ) y = sin ⁡ t t 0 = 0 y 0 = y ( t 0 ) = 2 y 0 ′ = y ′ ( t 0 ) = − 1 y 0 ″ = y ″ ( t 0 ) = 3 h = 0.5 {\displaystyle {\begin{aligned}&y'''+4ty''-t^{2}y'-(\cos {t})y=\sin {t}\\&t_{0}=0\\&y_{0}=y(t_{0})=2\\&y'_{0}=y'(t_{0})=-1\\&y''_{0}=y''(t_{0})=3\\&h=0.5\end{aligned}}} From this we can isolate y''' to get the equation: f ( t , y , y ′ , y ″ ) = y ‴ = sin ⁡ t + ( cos ⁡ t ) y + t 2 y ′ − 4 t y ″ {\displaystyle f\left(t,y,y',y''\right)=y'''=\sin {t}+(\cos {t})y+t^{2}y'-4ty''} Using that we can get the solution for y → 1 {\displaystyle {\vec {y}}_{1}} : y → 1 = ( y 1 y 1 ′ y 1 ″ ) = ( y 0 + h ⋅ y 0 ′ y 0 ′ + h ⋅ y 0 ″ y 0 ″ + h ⋅ f ( t 0 , y 0 , y 0 ′ , y 0 ″ ) ) = ( 2 + 0.5 ⋅ − 1 − 1 + 0.5 ⋅ 3 3 + 0.5 ⋅ ( sin ⁡ 0 + ( cos ⁡ 0 ) ⋅ 2 + 0 2 ⋅ ( − 1 ) − 4 ⋅ 0 ⋅ 3 ) ) = ( 1.5 0.5 4 ) {\displaystyle {\vec {y}}_{1}={\begin{pmatrix}y_{1}\\y_{1}'\\y_{1}''\end{pmatrix}}={\begin{pmatrix}y_{0}+h\cdot y'_{0}\\y'_{0}+h\cdot y''_{0}\\y''_{0}+h\cdot f\left(t_{0},y_{0},y'_{0},y''_{0}\right)\end{pmatrix}}={\begin{pmatrix}2+0.5\cdot -1\\-1+0.5\cdot 3\\3+0.5\cdot \left(\sin {0}+(\cos {0})\cdot 2+0^{2}\cdot (-1)-4\cdot 0\cdot 3\right)\end{pmatrix}}={\begin{pmatrix}1.5\\0.5\\4\end{pmatrix}}} And using the solution for y → 1 {\displaystyle {\vec {y}}_{1}} , we can get the solution for y → 2 {\displaystyle {\vec {y}}_{2}} : y → 2 = ( y 2 y 2 ′ y 2 ″ ) = ( y 1 + h ⋅ y 1 ′ y 1 ′ + h ⋅ y 1 ″ y 1 ″ + h ⋅ f ( t 1 , y 1 , y 1 ′ , y 1 ″ ) ) = ( 1.5 + 0.5 ⋅ 0.5 0.5 + 0.5 ⋅ 4 4 + 0.5 ⋅ ( sin ⁡ 0.5 + ( cos ⁡ 0.5 ) ⋅ 1.5 + 0.5 2 ⋅ 0.5 − 4 ⋅ 0.5 ⋅ 4 ) ) = ( 1.75 2.5 0.9604... 
) {\displaystyle {\vec {y}}_{2}={\begin{pmatrix}y_{2}\\y_{2}'\\y_{2}''\end{pmatrix}}={\begin{pmatrix}y_{1}+h\cdot y'_{1}\\y'_{1}+h\cdot y''_{1}\\y''_{1}+h\cdot f\left(t_{1},y_{1},y'_{1},y''_{1}\right)\end{pmatrix}}={\begin{pmatrix}1.5+0.5\cdot 0.5\\0.5+0.5\cdot 4\\4+0.5\cdot \left(\sin {0.5}+(\cos {0.5})\cdot 1.5+0.5^{2}\cdot 0.5-4\cdot 0.5\cdot 4\right)\end{pmatrix}}={\begin{pmatrix}1.75\\2.5\\0.9604...\end{pmatrix}}} We can continue this process using the same formula as long as necessary to find whichever y → i {\displaystyle {\vec {y}}_{i}} desired. == Derivation == The Euler method can be derived in a number of ways. (1) Firstly, there is the geometrical description above. (2) Another possibility is to consider the Taylor expansion of the function y {\displaystyle y} around t 0 {\displaystyle t_{0}} : y ( t 0 + h ) = y ( t 0 ) + h y ′ ( t 0 ) + 1 2 h 2 y ″ ( t 0 ) + O ( h 3 ) . {\displaystyle y(t_{0}+h)=y(t_{0})+hy'(t_{0})+{\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).} The differential equation states that y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} . If this is substituted in the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises. The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods. (3) A closely related derivation is to substitute the forward finite difference formula for the derivative, y ′ ( t 0 ) ≈ y ( t 0 + h ) − y ( t 0 ) h {\displaystyle y'(t_{0})\approx {\frac {y(t_{0}+h)-y(t_{0})}{h}}} in the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} . Again, this yields the Euler method. A similar computation leads to the midpoint method and the backward Euler method. (4) Finally, one can integrate the differential equation from t 0 {\displaystyle t_{0}} to t 0 + h {\displaystyle t_{0}+h} and apply the fundamental theorem of calculus to get: y ( t 0 + h ) − y ( t 0 ) = ∫ t 0 t 0 + h f ( t , y ( t ) ) d t . 
{\displaystyle y(t_{0}+h)-y(t_{0})=\int _{t_{0}}^{t_{0}+h}f{\bigl (}t,y(t){\bigr )}\,\mathrm {d} t.} Now approximate the integral by the left-hand rectangle method (with only one rectangle): ∫ t 0 t 0 + h f ( t , y ( t ) ) d t ≈ h f ( t 0 , y ( t 0 ) ) . {\displaystyle \int _{t_{0}}^{t_{0}+h}f{\bigl (}t,y(t){\bigr )}\,\mathrm {d} t\approx hf{\bigl (}t_{0},y(t_{0}){\bigr )}.} Combining both equations, one finds again the Euler method. This line of thought can be continued to arrive at various linear multistep methods. == Local truncation error == The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, y 1 {\displaystyle y_{1}} , and the exact solution at time t 1 = t 0 + h {\displaystyle t_{1}=t_{0}+h} . The numerical solution is given by y 1 = y 0 + h f ( t 0 , y 0 ) . {\displaystyle y_{1}=y_{0}+hf(t_{0},y_{0}).} For the exact solution, we use the Taylor expansion mentioned in the section Derivation above: y ( t 0 + h ) = y ( t 0 ) + h y ′ ( t 0 ) + 1 2 h 2 y ″ ( t 0 ) + O ( h 3 ) . {\displaystyle y(t_{0}+h)=y(t_{0})+hy'(t_{0})+{\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).} The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations: L T E = y ( t 0 + h ) − y 1 = 1 2 h 2 y ″ ( t 0 ) + O ( h 3 ) . {\displaystyle \mathrm {LTE} =y(t_{0}+h)-y_{1}={\tfrac {1}{2}}h^{2}y''(t_{0})+O\left(h^{3}\right).} This result is valid if y {\displaystyle y} has a bounded third derivative. This shows that for small h {\displaystyle h} , the local truncation error is approximately proportional to h 2 {\displaystyle h^{2}} . This makes the Euler method less accurate than higher-order techniques such as Runge–Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size. 
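The quadratic dependence of the local truncation error on the step size can be checked numerically. The sketch below is our own illustration on the model problem y′ = y, y(0) = 1 (so that y″(t₀) = 1); it compares one Euler step with the exact solution e^h:

```python
import math

# Model problem: y' = y, y(0) = 1, so the exact solution is e^t and y''(0) = 1.
t0, y0 = 0.0, 1.0
f = lambda t, y: y

ltes = {}
for h in [0.1, 0.01, 0.001]:
    y1 = y0 + h * f(t0, y0)               # one forward Euler step
    ltes[h] = abs(math.exp(t0 + h) - y1)  # local truncation error
# Each ltes[h] should be close to (1/2) h^2 y''(t0) = h^2 / 2,
# up to a remainder of order h^3.
```

For h = 0.01, for instance, the computed error agrees with h²/2 = 0.00005 to within order h³.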
A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If y {\displaystyle y} has a continuous second derivative, then there exists a ξ ∈ [ t 0 , t 0 + h ] {\displaystyle \xi \in [t_{0},t_{0}+h]} such that L T E = y ( t 0 + h ) − y 1 = 1 2 h 2 y ″ ( ξ ) . {\displaystyle \mathrm {LTE} =y(t_{0}+h)-y_{1}={\tfrac {1}{2}}h^{2}y''(\xi ).} In the above expressions for the error, the second derivative of the unknown exact solution y {\displaystyle y} can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} that y ″ ( t 0 ) = ∂ f ∂ t ( t 0 , y ( t 0 ) ) + ∂ f ∂ y ( t 0 , y ( t 0 ) ) f ( t 0 , y ( t 0 ) ) . {\displaystyle y''(t_{0})={\frac {\partial f}{\partial t}}{\bigl (}t_{0},y(t_{0}){\bigr )}+{\frac {\partial f}{\partial y}}{\bigl (}t_{0},y(t_{0}){\bigr )}\,f{\bigl (}t_{0},y(t_{0}){\bigr )}.} == Global truncation error == The global truncation error is the error at a fixed time t i {\displaystyle t_{i}} , after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be t i − t 0 h {\textstyle {\frac {t_{i}-t_{0}}{h}}} , which is proportional to 1 h {\textstyle {\frac {1}{h}}} , and the error committed in each step is proportional to h 2 {\displaystyle h^{2}} (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to h {\displaystyle h} . This intuitive reasoning can be made precise. 
If the solution y {\displaystyle y} has a bounded second derivative and f {\displaystyle f} is Lipschitz continuous in its second argument, then the global truncation error (denoted as | y ( t i ) − y i | {\displaystyle |y(t_{i})-y_{i}|} ) is bounded by | y ( t i ) − y i | ≤ h M 2 L ( e L ( t i − t 0 ) − 1 ) {\displaystyle |y(t_{i})-y_{i}|\leq {\frac {hM}{2L}}\left(e^{L(t_{i}-t_{0})}-1\right)} where M {\displaystyle M} is an upper bound on the second derivative of y {\displaystyle y} on the given interval and L {\displaystyle L} is the Lipschitz constant of f {\displaystyle f} . More concretely, when y ′ ( t ) = f ( t , y ) {\displaystyle y'(t)=f(t,y)} , one may take L = max ( | d d y [ f ( t , y ) ] | ) {\textstyle L={\text{max}}{\bigl (}|{\frac {d}{dy}}{\bigl [}f(t,y){\bigr ]}|{\bigr )}} (where t {\displaystyle t} is treated as a constant), and M = max ( | d 2 d t 2 [ y ( t ) ] | ) {\textstyle M={\text{max}}{\bigl (}|{\frac {d^{2}}{dt^{2}}}{\bigl [}y(t){\bigr ]}|{\bigr )}} , where y ( t ) {\displaystyle y(t)} is the exact solution, which depends only on t {\displaystyle t} . The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to h {\displaystyle h} . For this reason, the Euler method is said to be first order. === Example === Suppose we have the differential equation y ′ = 1 + ( t − y ) 2 {\displaystyle y'=1+(t-y)^{2}} with exact solution y = t + 1 1 − t {\displaystyle y=t+{\frac {1}{1-t}}} , and we want to find M {\displaystyle M} and L {\displaystyle L} for 2 ≤ t ≤ 3 {\displaystyle 2\leq t\leq 3} . 
L = max ( | d d y [ f ( t , y ) ] | ) = max 2 ≤ t ≤ 3 ( | d d y [ 1 + ( t − y ) 2 ] | ) = max 2 ≤ t ≤ 3 ( | 2 ( t − y ) | ) = max 2 ≤ t ≤ 3 ( | 2 ( t − [ t + 1 1 − t ] ) | ) = max 2 ≤ t ≤ 3 ( | − 2 1 − t | ) = 2 {\displaystyle L={\text{max}}{\bigl (}|{\frac {d}{dy}}{\bigl [}f(t,y){\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|{\frac {d}{dy}}{\bigl [}1+(t-y)^{2}{\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|2(t-y)|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|2(t-[t+{\frac {1}{1-t}}])|{\bigr )}=\max _{2\leq t\leq 3}{\bigl (}|-{\frac {2}{1-t}}|{\bigr )}=2} M = max ( | d 2 d t 2 [ y ( t ) ] | ) = max 2 ≤ t ≤ 3 ( | d 2 d t 2 [ t + 1 1 − t ] | ) = max 2 ≤ t ≤ 3 ( | 2 ( − t + 1 ) 3 | ) = 2 {\displaystyle M={\text{max}}{\bigl (}|{\frac {d^{2}}{dt^{2}}}{\bigl [}y(t){\bigr ]}|{\bigr )}=\max _{2\leq t\leq 3}\left(|{\frac {d^{2}}{dt^{2}}}{\bigl [}t+{\frac {1}{1-t}}{\bigr ]}|\right)=\max _{2\leq t\leq 3}\left(|{\frac {2}{(-t+1)^{3}}}|\right)=2} Thus we can find the error bound at t=2.5 and h=0.5: error bound = h M 2 L ( e L ( t i − t 0 ) − 1 ) = 0.5 ⋅ 2 2 ⋅ 2 ( e 2 ( 2.5 − 2 ) − 1 ) = 0.42957 {\displaystyle {\text{error bound}}={\frac {hM}{2L}}\left(e^{L(t_{i}-t_{0})}-1\right)={\frac {0.5\cdot 2}{2\cdot 2}}\left(e^{2(2.5-2)}-1\right)=0.42957} Notice that t0 is equal to 2 because it is the lower bound for t in 2 ≤ t ≤ 3 {\displaystyle 2\leq t\leq 3} . == Numerical stability == The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation y ′ = − 2.3 y , y ( 0 ) = 1. {\displaystyle y'=-2.3y,\qquad y(0)=1.} The exact solution is y ( t ) = e − 2.3 t {\displaystyle y(t)=e^{-2.3t}} , which decays to zero as t → ∞ {\displaystyle t\to \infty } . 
However, if the Euler method is applied to this equation with step size h = 1 {\displaystyle h=1} , then the numerical solution is qualitatively wrong: It oscillates and grows (see the figure). This is what it means to be unstable. If a smaller step size is used, for instance h = 0.7 {\displaystyle h=0.7} , then the numerical solution does decay to zero. If the Euler method is applied to the linear equation y ′ = k y {\displaystyle y'=ky} , then the numerical solution is unstable if the product h k {\displaystyle hk} is outside the region { z ∈ C | | z + 1 | ≤ 1 } , {\displaystyle {\bigl \{}z\in \mathbf {C} \,{\big |}\,|z+1|\leq 1{\bigr \}},} illustrated on the right. This region is called the (linear) stability region. In the example, k = − 2.3 {\displaystyle k=-2.3} , so if h = 1 {\displaystyle h=1} then h k = − 2.3 {\displaystyle hk=-2.3} which is outside the stability region, and thus the numerical solution is unstable. This limitation — along with its slow convergence of error with h {\displaystyle h} — means that the Euler method is not often used, except as a simple example of numerical integration. Frequently models of physical systems contain terms representing fast-decaying elements (i.e. with large negative exponential arguments). Even when these are not of interest in the overall solution, the instability they can induce means that an exceptionally small timestep would be required if the Euler method is used. == Rounding errors == In step n {\displaystyle n} of the Euler method, the rounding error is roughly of the magnitude ε y n {\displaystyle \varepsilon y_{n}} where ε {\displaystyle \varepsilon } is the machine epsilon. Assuming that the rounding errors are independent random variables, the expected total rounding error is proportional to ε h {\textstyle {\frac {\varepsilon }{\sqrt {h}}}} . Thus, for extremely small values of the step size the truncation error will be small but the effect of rounding error may be big. 
Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method. == Modifications and extensions == A simple modification of the Euler method which eliminates the stability problems noted above is the backward Euler method: y n + 1 = y n + h f ( t n + 1 , y n + 1 ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n+1},y_{n+1}).} This differs from the (standard, or forward) Euler method in that the function f {\displaystyle f} is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula for the backward Euler method has y n + 1 {\displaystyle y_{n+1}} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly. Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method. More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method, which was already mentioned in this article: y n + 1 = y n + h f ( t n + 1 2 h , y n + 1 2 h f ( t n , y n ) ) {\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\tfrac {1}{2}}h,y_{n}+{\tfrac {1}{2}}hf(t_{n},y_{n})\right)} . This leads to the family of Runge–Kutta methods. The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method: y n + 1 = y n + 3 2 h f ( t n , y n ) − 1 2 h f ( t n − 1 , y n − 1 ) . {\displaystyle y_{n+1}=y_{n}+{\tfrac {3}{2}}hf(t_{n},y_{n})-{\tfrac {1}{2}}hf(t_{n-1},y_{n-1}).} This leads to the family of linear multistep methods. There are other modifications which use techniques from compressive sensing to minimize memory usage. == In popular culture == In the film Hidden Figures, Katherine Johnson resorts to the Euler method in calculating the re-entry of astronaut John Glenn from Earth orbit. 
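The stability advantage of the backward Euler update described under Modifications and extensions can be sketched on the stiff test equation y′ = −2.3y, y(0) = 1 from the Numerical stability section. For this linear equation the implicit step can be solved in closed form, so no equation solver is needed (a minimal Python comparison, our own illustration):

```python
# Stiff test problem y' = k*y with k = -2.3, y(0) = 1, and step size h = 1,
# which lies outside the forward Euler stability region (|1 + h*k| = 1.3 > 1).
k, h, steps = -2.3, 1.0, 20

y_fwd = y_bwd = 1.0
for _ in range(steps):
    y_fwd = (1 + h * k) * y_fwd   # forward Euler:  y_{n+1} = (1 + hk) y_n
    y_bwd = y_bwd / (1 - h * k)   # backward Euler: y_{n+1} = y_n / (1 - hk)

# Forward Euler oscillates and grows (|1 + hk|^steps = 1.3^20 is large),
# while backward Euler decays toward zero like the exact solution e^{kt}.
```

The same step size that makes the forward method blow up leaves the backward method qualitatively correct, which is the point of the implicit variant.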
== See also == Crank–Nicolson method Gradient descent similarly uses finite steps, here to find minima of functions List of Runge–Kutta methods Linear multistep method Numerical integration (for calculating definite integrals) Numerical methods for ordinary differential equations == Notes == == References == Atkinson, Kendall A. (1989). An Introduction to Numerical Analysis (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-50023-0. Ascher, Uri M.; Petzold, Linda R. (1998). Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-412-8. Butcher, John C. (2003). Numerical Methods for Ordinary Differential Equations. New York: John Wiley & Sons. ISBN 978-0-471-96758-3. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993). Solving ordinary differential equations I: Nonstiff problems. Berlin, New York: Springer-Verlag. ISBN 978-3-540-56670-0. Iserles, Arieh (1996). A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press. ISBN 978-0-521-55655-2. Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3. Lakoba, Taras I. (2012), Simple Euler method and its modifications (PDF) (Lecture notes for MATH334), University of Vermont, retrieved 29 February 2012 Unni, M P. (2017). "Memory reduction for numerical solution of differential equations using compressive sensing". 2017 IEEE 13th International Colloquium on Signal Processing & its Applications (CSPA). IEEE CSPA. pp. 79–84. doi:10.1109/CSPA.2017.8064928. ISBN 978-1-5090-1184-1. S2CID 13082456. == External links == Media related to Euler method at Wikimedia Commons Euler method implementations in different languages by Rosetta Code "Euler method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In numerical analysis, the Runge–Kutta methods (English: RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta. == The Runge–Kutta method == The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows: d y d t = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle {\frac {dy}{dt}}=f(t,y),\quad y(t_{0})=y_{0}.} Here y {\displaystyle y} is an unknown function (scalar or vector) of time t {\displaystyle t} , which we would like to approximate; we are told that d y d t {\displaystyle {\frac {dy}{dt}}} , the rate at which y {\displaystyle y} changes, is a function of t {\displaystyle t} and of y {\displaystyle y} itself. At the initial time t 0 {\displaystyle t_{0}} the corresponding y {\displaystyle y} value is y 0 {\displaystyle y_{0}} . The function f {\displaystyle f} and the initial conditions t 0 {\displaystyle t_{0}} , y 0 {\displaystyle y_{0}} are given. Now we pick a step-size h > 0 and define: y n + 1 = y n + h 6 ( k 1 + 2 k 2 + 2 k 3 + k 4 ) , t n + 1 = t n + h {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+{\frac {h}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right),\\t_{n+1}&=t_{n}+h\\\end{aligned}}} for n = 0, 1, 2, 3, ..., using k 1 = f ( t n , y n ) , k 2 = f ( t n + h 2 , y n + h k 1 2 ) , k 3 = f ( t n + h 2 , y n + h k 2 2 ) , k 4 = f ( t n + h , y n + h k 3 ) . 
{\displaystyle {\begin{aligned}k_{1}&=\ f(t_{n},y_{n}),\\k_{2}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{1}}{2}}\right),\\k_{3}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{2}}{2}}\right),\\k_{4}&=\ f\!\left(t_{n}+h,y_{n}+hk_{3}\right).\end{aligned}}} (Note: the above equations have different but equivalent definitions in different texts.) Here y n + 1 {\displaystyle y_{n+1}} is the RK4 approximation of y ( t n + 1 ) {\displaystyle y(t_{n+1})} , and the next value ( y n + 1 {\displaystyle y_{n+1}} ) is determined by the present value ( y n {\displaystyle y_{n}} ) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation. k 1 {\displaystyle k_{1}} is the slope at the beginning of the interval, using y {\displaystyle y} (Euler's method); k 2 {\displaystyle k_{2}} is the slope at the midpoint of the interval, using y {\displaystyle y} and k 1 {\displaystyle k_{1}} ; k 3 {\displaystyle k_{3}} is again the slope at the midpoint, but now using y {\displaystyle y} and k 2 {\displaystyle k_{2}} ; k 4 {\displaystyle k_{4}} is the slope at the end of the interval, using y {\displaystyle y} and k 3 {\displaystyle k_{3}} . In averaging the four slopes, greater weight is given to the slopes at the midpoint. If f {\displaystyle f} is independent of y {\displaystyle y} , so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O ( h 5 ) {\displaystyle O(h^{5})} , while the total accumulated error is on the order of O ( h 4 ) {\displaystyle O(h^{4})} . 
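A single RK4 step translates directly into code. The following Python sketch (the helper name rk4_step is ours) also checks the remark above that RK4 reduces to Simpson's rule when f is independent of y, by integrating f(t) = t², for which Simpson's rule is exact:

```python
def rk4_step(f, t, y, h):
    """One step of the classic fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# If f does not depend on y, RK4 is Simpson's rule, which is exact for
# polynomials of degree up to 3; integrating f(t) = t^2 from 0 to 1 in a
# single step therefore gives exactly 1/3.
area = rk4_step(lambda t, y: t * t, 0.0, 0.0, 1.0)
```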
In many practical applications the function f {\displaystyle f} is independent of t {\displaystyle t} (so called autonomous system, or time-invariant system, especially in physics), and their increments are not computed at all and not passed to function f {\displaystyle f} , with only the final formula for t n + 1 {\displaystyle t_{n+1}} used. == Explicit Runge–Kutta methods == The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by y n + 1 = y n + h ∑ i = 1 s b i k i , {\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},} where k 1 = f ( t n , y n ) , k 2 = f ( t n + c 2 h , y n + ( a 21 k 1 ) h ) , k 3 = f ( t n + c 3 h , y n + ( a 31 k 1 + a 32 k 2 ) h ) , ⋮ k s = f ( t n + c s h , y n + ( a s 1 k 1 + a s 2 k 2 + ⋯ + a s , s − 1 k s − 1 ) h ) . {\displaystyle {\begin{aligned}k_{1}&=f(t_{n},y_{n}),\\k_{2}&=f(t_{n}+c_{2}h,y_{n}+(a_{21}k_{1})h),\\k_{3}&=f(t_{n}+c_{3}h,y_{n}+(a_{31}k_{1}+a_{32}k_{2})h),\\&\ \ \vdots \\k_{s}&=f(t_{n}+c_{s}h,y_{n}+(a_{s1}k_{1}+a_{s2}k_{2}+\cdots +a_{s,s-1}k_{s-1})h).\end{aligned}}} (Note: the above equations may have different but equivalent definitions in some texts.) To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). The matrix [aij] is called the Runge–Kutta matrix, while the bi and ci are known as the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher): A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if ∑ i = 1 s b i = 1. {\displaystyle \sum _{i=1}^{s}b_{i}=1.} There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(hp+1). These can be derived from the definition of the truncation error itself. 
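The general explicit scheme above can be implemented as one tableau-driven loop. The following Python sketch (the function name is ours) accepts the coefficients A, b and c directly and reproduces RK4 when given its Butcher tableau; for y′ = y one RK4 step from y = 1 equals the degree-4 Taylor polynomial of e^h:

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau.

    A is strictly lower triangular, given as a list of rows (row i holds
    a_{i1}..a_{i,i-1}); b are the weights and c the nodes (c[0] = 0).
    """
    k = []
    for i in range(len(b)):
        yi = y + h * sum(a * kj for a, kj in zip(A[i], k))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# The RK4 method written as a Butcher tableau.
A = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0.0, 0.5, 0.5, 1.0]

h = 0.1
y1 = explicit_rk_step(lambda t, y: y, 0.0, 1.0, h, A, b, c)
```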
For example, a two-stage method has order 2 if b1 + b2 = 1, b2c2 = 1/2, and b2a21 = 1/2. Note that a popular condition for determining coefficients is ∑ j = 1 i − 1 a i j = c i for i = 2 , … , s . {\displaystyle \sum _{j=1}^{i-1}a_{ij}=c_{i}{\text{ for }}i=2,\ldots ,s.} This condition alone, however, is neither sufficient, nor necessary for consistency. In general, if an explicit s {\displaystyle s} -stage Runge–Kutta method has order p {\displaystyle p} , then it can be proven that the number of stages must satisfy s ≥ p {\displaystyle s\geq p} and if p ≥ 5 {\displaystyle p\geq 5} , then s ≥ p + 1 {\displaystyle s\geq p+1} . However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for p > 6 {\displaystyle p>6} , there is no explicit method with s = p + 1 {\displaystyle s=p+1} stages. Butcher also proved that for p > 7 {\displaystyle p>7} , there is no explicit Runge-Kutta method with p + 2 {\displaystyle p+2} stages. In general, however, it remains an open problem what the precise minimum number of stages s {\displaystyle s} is for an explicit Runge–Kutta method to have order p {\displaystyle p} . Some values which are known are: p 1 2 3 4 5 6 7 8 min s 1 2 3 4 6 7 9 11 {\displaystyle {\begin{array}{c|cccccccc}p&1&2&3&4&5&6&7&8\\\hline \min s&1&2&3&4&6&7&9&11\end{array}}} The provable bound above then imply that we can not find methods of orders p = 1 , 2 , … , 6 {\displaystyle p=1,2,\ldots ,6} that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary. === Examples === The RK4 method falls in this framework. 
Its tableau is A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula y n + 1 = y n + h f ( t n , y n ) {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n})} . This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is === Second-order methods with two stages === An example of a second-order method with two stages is provided by the explicit midpoint method: y n + 1 = y n + h f ( t n + 1 2 h , y n + 1 2 h f ( t n , y n ) ) . {\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {1}{2}}h,y_{n}+{\frac {1}{2}}hf(t_{n},\ y_{n})\right).} The corresponding tableau is The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula y n + 1 = y n + h ( ( 1 − 1 2 α ) f ( t n , y n ) + 1 2 α f ( t n + α h , y n + α h f ( t n , y n ) ) ) . {\displaystyle y_{n+1}=y_{n}+h{\bigl (}(1-{\tfrac {1}{2\alpha }})f(t_{n},y_{n})+{\tfrac {1}{2\alpha }}f(t_{n}+\alpha h,y_{n}+\alpha hf(t_{n},y_{n})){\bigr )}.} Its Butcher tableau is In this family, α = 1 2 {\displaystyle \alpha ={\tfrac {1}{2}}} gives the midpoint method, α = 1 {\displaystyle \alpha =1} is Heun's method, and α = 2 3 {\displaystyle \alpha ={\tfrac {2}{3}}} is Ralston's method. == Use == As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston method. It is given by the tableau with the corresponding equations k 1 = f ( t n , y n ) , k 2 = f ( t n + 2 3 h , y n + 2 3 h k 1 ) , y n + 1 = y n + h ( 1 4 k 1 + 3 4 k 2 ) . 
{\displaystyle {\begin{aligned}k_{1}&=f(t_{n},\ y_{n}),\\k_{2}&=f(t_{n}+{\tfrac {2}{3}}h,\ y_{n}+{\tfrac {2}{3}}hk_{1}),\\y_{n+1}&=y_{n}+h\left({\tfrac {1}{4}}k_{1}+{\tfrac {3}{4}}k_{2}\right).\end{aligned}}} This method is used to solve the initial-value problem d y d t = tan ⁡ ( y ) + 1 , y 0 = 1 , t ∈ [ 1 , 1.1 ] {\displaystyle {\frac {dy}{dt}}=\tan(y)+1,\quad y_{0}=1,\ t\in [1,1.1]} with step size h = 0.025, so the method needs to take four steps. The method proceeds as follows: The numerical solutions correspond to the underlined values. == Adaptive Runge–Kutta methods == Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order p {\displaystyle p} and one with order p − 1 {\displaystyle p-1} . These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method. During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, a step is repeated with a lower step size; if the error is much smaller, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size. The lower-order step is given by y n + 1 ∗ = y n + h ∑ i = 1 s b i ∗ k i , {\displaystyle y_{n+1}^{*}=y_{n}+h\sum _{i=1}^{s}b_{i}^{*}k_{i},} where k i {\displaystyle k_{i}} are the same as for the higher-order method. Then the error is e n + 1 = y n + 1 − y n + 1 ∗ = h ∑ i = 1 s ( b i − b i ∗ ) k i , {\displaystyle e_{n+1}=y_{n+1}-y_{n+1}^{*}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i},} which is O ( h p ) {\displaystyle O(h^{p})} . 
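As a minimal illustration of such an embedded pair, Heun's method (order 2) shares its first stage with the Euler method (order 1), so the error estimate costs no extra evaluations of f. The Python sketch below is our own; the safety factor 0.9 and the clipping bounds in the step-size controller are common but arbitrary choices, not part of the method itself:

```python
def heun_euler_step(f, t, y, h):
    """One step of Heun's method (order 2) with an embedded forward Euler
    (order 1) solution; both reuse the same stage values k1, k2."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2   # Heun: the solution that is advanced
    y_low = y + h * k1               # Euler: used only for the error estimate
    return y_high, abs(y_high - y_low)

def new_step_size(h, err, tol):
    """Elementary step-size controller: scale h so the estimated error
    approaches tol; 0.9 is a customary safety factor."""
    return h * min(2.0, max(0.1, 0.9 * (tol / err) ** 0.5))

# One step for y' = y, y(0) = 1 with h = 0.1.
y1, err = heun_euler_step(lambda t, y: y, 0.0, 1.0, 0.1)
```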
The Butcher tableau for this kind of method is extended to give the values of b i ∗ {\displaystyle b_{i}^{*}} : The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is: However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is: Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4). == Nonconfluent Runge–Kutta methods == A Runge–Kutta method is said to be nonconfluent if all the c i , i = 1 , 2 , … , s {\displaystyle c_{i},\,i=1,2,\ldots ,s} are distinct. == Runge–Kutta–Nyström methods == Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system y ¨ i = f i ( y 1 , y 2 , … , y n ) {\displaystyle {\ddot {y}}_{i}=f_{i}(y_{1},y_{2},\ldots ,y_{n})} with order s {\displaystyle s} is with the form { g i = y m + c i h y ˙ m + h 2 ∑ j = 1 s a i j f ( g j ) , i = 1 , 2 , … , s y m + 1 = y m + h y ˙ m + h 2 ∑ j = 1 s b ¯ j f ( g j ) y ˙ m + 1 = y ˙ m + h ∑ j = 1 s b j f ( g j ) {\displaystyle {\begin{cases}g_{i}=y_{m}+c_{i}h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}a_{ij}f(g_{j}),&i=1,2,\ldots ,s\\y_{m+1}=y_{m}+h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}{\bar {b}}_{j}f(g_{j})\\{\dot {y}}_{m+1}={\dot {y}}_{m}+h\sum _{j=1}^{s}b_{j}f(g_{j})\end{cases}}} which forms a Butcher table with the form c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b ¯ 1 b ¯ 2 … b ¯ s b 1 b 2 … b s = c A b ¯ ⊤ b ⊤ {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &{\bar {b}}_{1}&{\bar {b}}_{2}&\dots &{\bar {b}}_{s}\\&b_{1}&b_{2}&\dots 
&b_{s}\end{array}}={\begin{array}{c|c}\mathbf {c} &\mathbf {A} \\\hline &\mathbf {\bar {b}} ^{\top }\\&\mathbf {b} ^{\top }\end{array}}} Two fourth-order explicit RKN methods are given by the following Butcher tables: c i a i j 3 + 3 6 0 0 0 3 − 3 6 2 − 3 12 0 0 3 + 3 6 0 3 6 0 b i ¯ 5 − 3 3 24 3 + 3 12 1 + 3 24 b i 3 − 2 3 12 1 2 3 + 2 3 12 {\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3+{\sqrt {3}}}{6}}&0&0&0\\{\frac {3-{\sqrt {3}}}{6}}&{\frac {2-{\sqrt {3}}}{12}}&0&0\\{\frac {3+{\sqrt {3}}}{6}}&0&{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5-3{\sqrt {3}}}{24}}&{\frac {3+{\sqrt {3}}}{12}}&{\frac {1+{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3-2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3+2{\sqrt {3}}}{12}}\end{array}}} c i a i j 3 − 3 6 0 0 0 3 + 3 6 2 + 3 12 0 0 3 − 3 6 0 − 3 6 0 b i ¯ 5 + 3 3 24 3 − 3 12 1 − 3 24 b i 3 + 2 3 12 1 2 3 − 2 3 12 {\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3-{\sqrt {3}}}{6}}&0&0&0\\{\frac {3+{\sqrt {3}}}{6}}&{\frac {2+{\sqrt {3}}}{12}}&0&0\\{\frac {3-{\sqrt {3}}}{6}}&0&-{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5+3{\sqrt {3}}}{24}}&{\frac {3-{\sqrt {3}}}{12}}&{\frac {1-{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3+2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3-2{\sqrt {3}}}{12}}\end{array}}} These two schemes also have the symplectic-preserving properties when the original equation is derived from a conservative classical mechanical system, i.e. when f i ( x 1 , … , x n ) = ∂ V ∂ x i ( x 1 , … , x n ) {\displaystyle f_{i}(x_{1},\ldots ,x_{n})={\frac {\partial V}{\partial x_{i}}}(x_{1},\ldots ,x_{n})} for some scalar function V {\displaystyle V} . == Implicit Runge–Kutta methods == All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded. 
This issue is especially important in the solution of partial differential equations. The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form y n + 1 = y n + h ∑ i = 1 s b i k i , {\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},} where k i = f ( t n + c i h , y n + h ∑ j = 1 s a i j k j ) , i = 1 , … , s . {\displaystyle k_{i}=f\left(t_{n}+c_{i}h,\ y_{n}+h\sum _{j=1}^{s}a_{ij}k_{j}\right),\quad i=1,\ldots ,s.} The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix a i j {\displaystyle a_{ij}} of an explicit method is lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b 1 b 2 … b s b 1 ∗ b 2 ∗ … b s ∗ = c A b T {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &b_{1}&b_{2}&\dots &b_{s}\\&b_{1}^{*}&b_{2}^{*}&\dots &b_{s}^{*}\\\end{array}}={\begin{array}{c|c}\mathbf {c} &A\\\hline &\mathbf {b^{T}} \\\end{array}}} See Adaptive Runge-Kutta methods above for the explanation of the b ∗ {\displaystyle b^{*}} row. The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. 
This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases. === Examples === The simplest example of an implicit Runge–Kutta method is the backward Euler method: y n + 1 = y n + h f ( t n + h , y n + 1 ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n}+h,\ y_{n+1}).\,} The Butcher tableau for this is simply: 1 1 1 {\displaystyle {\begin{array}{c|c}1&1\\\hline &1\\\end{array}}} This Butcher tableau corresponds to the formulae k 1 = f ( t n + h , y n + h k 1 ) and y n + 1 = y n + h k 1 , {\displaystyle k_{1}=f(t_{n}+h,\ y_{n}+hk_{1})\quad {\text{and}}\quad y_{n+1}=y_{n}+hk_{1},} which can be re-arranged to get the formula for the backward Euler method listed above. Another example for an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is: 0 0 0 1 1 2 1 2 1 2 1 2 1 0 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&{\frac {1}{2}}&{\frac {1}{2}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&1&0\\\end{array}}} The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods. The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). 
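To illustrate why the extra cost of solving the implicit stage equation can pay off, the following sketch (illustrative Python, not from any particular library; all names are chosen for this example) applies the backward Euler method shown above to the stiff test problem y′ = −50y, solving the scalar implicit equation at each step with Newton's method. An explicit Euler step of the same size diverges, since hλ = −5 lies outside its bounded stability region:

```python
def forward_euler(f, y, t, h):
    return y + h * f(t, y)

def backward_euler(f, dfdy, y, t, h, tol=1e-12, max_iter=50):
    """One backward Euler step: solve g(Y) = Y - y - h*f(t+h, Y) = 0 by Newton's method."""
    Y = y  # initial guess: the previous value
    for _ in range(max_iter):
        g = Y - y - h * f(t + h, Y)
        dg = 1.0 - h * dfdy(t + h, Y)
        Y_new = Y - g / dg
        if abs(Y_new - Y) < tol:
            return Y_new
        Y = Y_new
    return Y

lam = -50.0
f = lambda t, y: lam * y      # stiff linear test problem
dfdy = lambda t, y: lam

h, n = 0.1, 20
ye = yi = 1.0
for k in range(n):
    ye = forward_euler(f, ye, k * h, h)          # h*lam = -5 is outside the stability region
    yi = backward_euler(f, dfdy, yi, k * h, h)   # r(z) = 1/(1 - z): stable for all Re(z) < 0

print(abs(ye))   # explicit: grows like |1 + h*lam|^n = 4**20
print(yi)        # implicit: (1/6)**20, decays like the true solution
```

For this linear problem each implicit step has the closed form y/(1 − hλ); the Newton loop reproduces it and generalizes directly to nonlinear f.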
The method with two stages (and thus order four) has Butcher tableau: 1 2 − 1 6 3 1 4 1 4 − 1 6 3 1 2 + 1 6 3 1 4 + 1 6 3 1 4 1 2 1 2 1 2 + 1 2 3 1 2 − 1 2 3 {\displaystyle {\begin{array}{c|cc}{\frac {1}{2}}-{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}&{\frac {1}{4}}-{\frac {1}{6}}{\sqrt {3}}\\{\frac {1}{2}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {3}}&{\frac {1}{2}}-{\frac {1}{2}}{\sqrt {3}}\end{array}}} === Stability === The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation y ′ = λ y {\displaystyle y'=\lambda y} . A Runge–Kutta method applied to this equation reduces to the iteration y n + 1 = r ( h λ ) y n {\displaystyle y_{n+1}=r(h\lambda )\,y_{n}} , with r given by r ( z ) = 1 + z b T ( I − z A ) − 1 e = det ( I − z A + z e b T ) det ( I − z A ) , {\displaystyle r(z)=1+zb^{T}(I-zA)^{-1}e={\frac {\det(I-zA+zeb^{T})}{\det(I-zA)}},} where e stands for the vector of ones. The function r is called the stability function. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial. The numerical solution to the linear test equation decays to zero if | r(z) | < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.
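The stability function can be evaluated directly from a Butcher tableau. The sketch below (illustrative code; the classical fourth-order tableau is assumed from earlier in the article, and the Gauss–Legendre tableau is transcribed from this section) computes r(z) = 1 + zbᵀ(I − zA)⁻¹e for both an explicit and an implicit method:

```python
import numpy as np

def stability_function(A, b, z):
    """r(z) = 1 + z * b^T (I - z A)^{-1} e for a Runge-Kutta tableau (A, b)."""
    s = len(b)
    e = np.ones(s)
    return 1.0 + z * b @ np.linalg.solve(np.eye(s) - z * A, e)

# Classical explicit RK4: A is strictly lower triangular, so r is a polynomial.
A4 = np.array([[0.0, 0.0, 0.0, 0.0],
               [0.5, 0.0, 0.0, 0.0],
               [0.0, 0.5, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
b4 = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0

z = -1.0
r_rk4 = stability_function(A4, b4, z)
poly = 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24   # truncated Taylor series of e^z
print(r_rk4, poly)    # both equal 0.375

# Two-stage Gauss-Legendre: r is a quotient of degree-2 polynomials.
g = np.sqrt(3) / 6
A2 = np.array([[0.25, 0.25 - g],
               [0.25 + g, 0.25]])
b2 = np.array([0.5, 0.5])
r_gl = stability_function(A2, b2, z)
pade = (1 + z / 2 + z**2 / 12) / (1 - z / 2 + z**2 / 12)
print(r_gl, pade)     # both equal 7/19
```

The rational function recovered for the Gauss–Legendre method is exactly the (2, 2) Padé approximant of e^z, in line with the A-stability discussion that follows.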
If the method has order p, then the stability function satisfies r ( z ) = e z + O ( z p + 1 ) {\displaystyle r(z)={\textrm {e}}^{z}+O(z^{p+1})} as z → 0 {\displaystyle z\to 0} . Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2. The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge–Kutta methods can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two. == B-stability == The A-stability concept for the solution of differential equations is related to the linear autonomous equation y ′ = λ y {\displaystyle y'=\lambda y} . Dahlquist (1963) proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the non-linear system y ′ = f ( y ) {\displaystyle y'=f(y)} , which verifies ⟨ f ( y ) − f ( z ) , y − z ⟩ ≤ 0 {\displaystyle \langle f(y)-f(z),\ y-z\rangle \leq 0} , is called B-stable if this condition implies ‖ y n + 1 − z n + 1 ‖ ≤ ‖ y n − z n ‖ {\displaystyle \|y_{n+1}-z_{n+1}\|\leq \|y_{n}-z_{n}\|} for two numerical solutions. Let B {\displaystyle B} , M {\displaystyle M} and Q {\displaystyle Q} be three s × s {\displaystyle s\times s} matrices defined by B = diag ⁡ ( b 1 , b 2 , … , b s ) , M = B A + A T B − b b T , Q = B A − 1 + A − T B − A − T b b T A − 1 . 
{\displaystyle {\begin{aligned}B&=\operatorname {diag} (b_{1},b_{2},\ldots ,b_{s}),\\[4pt]M&=BA+A^{T}B-bb^{T},\\[4pt]Q&=BA^{-1}+A^{-T}B-A^{-T}bb^{T}A^{-1}.\end{aligned}}} A Runge–Kutta method is said to be algebraically stable if the matrices B {\displaystyle B} and M {\displaystyle M} are both non-negative definite. A sufficient condition for B-stability is: B {\displaystyle B} and Q {\displaystyle Q} are non-negative definite. == Derivation of the Runge–Kutta fourth-order method == In general a Runge–Kutta method of order s {\displaystyle s} can be written as: y t + h = y t + h ⋅ ∑ i = 1 s a i k i + O ( h s + 1 ) , {\displaystyle y_{t+h}=y_{t}+h\cdot \sum _{i=1}^{s}a_{i}k_{i}+{\mathcal {O}}(h^{s+1}),} where: k i = ∑ j = 1 s β i j f ( k j , t n + α i h ) {\displaystyle k_{i}=\sum _{j=1}^{s}\beta _{ij}f(k_{j},\ t_{n}+\alpha _{i}h)} are increments obtained evaluating the derivatives of y t {\displaystyle y_{t}} at the i {\displaystyle i} -th order. We develop the derivation for the Runge–Kutta fourth-order method using the general formula with s = 4 {\displaystyle s=4} evaluated, as explained above, at the starting point, the midpoint and the end point of any interval ( t , t + h ) {\displaystyle (t,\ t+h)} ; thus, we choose: α i β i j α 1 = 0 β 21 = 1 2 α 2 = 1 2 β 32 = 1 2 α 3 = 1 2 β 43 = 1 α 4 = 1 {\displaystyle {\begin{aligned}&\alpha _{i}&&\beta _{ij}\\\alpha _{1}&=0&\beta _{21}&={\frac {1}{2}}\\\alpha _{2}&={\frac {1}{2}}&\beta _{32}&={\frac {1}{2}}\\\alpha _{3}&={\frac {1}{2}}&\beta _{43}&=1\\\alpha _{4}&=1&&\\\end{aligned}}} and β i j = 0 {\displaystyle \beta _{ij}=0} otherwise. 
We begin by defining the following quantities: y t + h 1 = y t + h f ( y t , t ) y t + h 2 = y t + h f ( y t + h / 2 1 , t + h 2 ) y t + h 3 = y t + h f ( y t + h / 2 2 , t + h 2 ) {\displaystyle {\begin{aligned}y_{t+h}^{1}&=y_{t}+hf\left(y_{t},\ t\right)\\y_{t+h}^{2}&=y_{t}+hf\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)\\y_{t+h}^{3}&=y_{t}+hf\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)\end{aligned}}} where y t + h / 2 1 = y t + y t + h 1 2 {\displaystyle y_{t+h/2}^{1}={\dfrac {y_{t}+y_{t+h}^{1}}{2}}} and y t + h / 2 2 = y t + y t + h 2 2 . {\displaystyle y_{t+h/2}^{2}={\dfrac {y_{t}+y_{t+h}^{2}}{2}}.} If we define: k 1 = f ( y t , t ) k 2 = f ( y t + h / 2 1 , t + h 2 ) = f ( y t + h 2 k 1 , t + h 2 ) k 3 = f ( y t + h / 2 2 , t + h 2 ) = f ( y t + h 2 k 2 , t + h 2 ) k 4 = f ( y t + h 3 , t + h ) = f ( y t + h k 3 , t + h ) {\displaystyle {\begin{aligned}k_{1}&=f(y_{t},\ t)\\k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right)\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hk_{3},\ t+h\right)\end{aligned}}} and for the previous relations we can show that the following equalities hold up to O ( h 2 ) {\displaystyle {\mathcal {O}}(h^{2})} : k 2 = f ( y t + h / 2 1 , t + h 2 ) = f ( y t + h 2 k 1 , t + h 2 ) = f ( y t , t ) + h 2 d d t f ( y t , t ) k 3 = f ( y t + h / 2 2 , t + h 2 ) = f ( y t + h 2 f ( y t + h 2 k 1 , t + h 2 ) , t + h 2 ) = f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] k 4 = f ( y t + h 3 , t + h ) = f ( y t + h f ( y t + h 2 k 2 , t + h 2 ) , t + h ) = f ( y t + h f ( y t + h 2 f ( y t + h 2 f ( y t , t ) , t + h 2 ) , t + h 2 ) , t + h ) = f ( y t , t ) + h d d t [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] {\displaystyle {\begin{aligned}k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac 
{h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}f\left(y_{t},\ t\right),\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t},\ t\right)+h{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\right]\end{aligned}}} where: d d t f ( y t , t ) = ∂ ∂ y f ( y t , t ) y ˙ t + ∂ ∂ t f ( y t , t ) = f y ( y t , t ) y ˙ t + f t ( y t , t ) := y ¨ t {\displaystyle {\frac {d}{dt}}f(y_{t},\ t)={\frac {\partial }{\partial y}}f(y_{t},\ t){\dot {y}}_{t}+{\frac {\partial }{\partial t}}f(y_{t},\ t)=f_{y}(y_{t},\ t){\dot {y}}_{t}+f_{t}(y_{t},\ t):={\ddot {y}}_{t}} is the total derivative of f {\displaystyle f} with respect to time. 
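Collecting the four stages k1, ..., k4 defined above into a single step, and anticipating the weights 1/6, 1/3, 1/3, 1/6 obtained from the Taylor comparison later in this derivation, gives the classical fourth-order method. A minimal sketch (illustrative code, names chosen for this example) checks the O(h⁵) local error on y′ = y:

```python
import math

def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step built from the stages k1..k4 above.

    Note the argument order f(y, t), matching the convention f(y_t, t) in this derivation."""
    k1 = f(y, t)
    k2 = f(y + h / 2 * k1, t + h / 2)
    k3 = f(y + h / 2 * k2, t + h / 2)
    k4 = f(y + h * k3, t + h)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda y, t: y              # y' = y, exact solution e^t
h = 0.1
y1 = rk4_step(f, 1.0, 0.0, h)
print(abs(y1 - math.exp(h)))    # local error of order h^5, roughly h^5/120 here
```

For this problem the step reproduces the Taylor polynomial 1 + h + h²/2 + h³/6 + h⁴/24, so the first neglected term is h⁵/120 ≈ 8·10⁻⁸ at h = 0.1.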
If we now express the general formula using what we just derived we obtain: y t + h = y t + h { a ⋅ f ( y t , t ) + b ⋅ [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] + + c ⋅ [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] + + d ⋅ [ f ( y t , t ) + h d d t [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] ] } + O ( h 5 ) = y t + a ⋅ h f t + b ⋅ h f t + b ⋅ h 2 2 d f t d t + c ⋅ h f t + c ⋅ h 2 2 d f t d t + + c ⋅ h 3 4 d 2 f t d t 2 + d ⋅ h f t + d ⋅ h 2 d f t d t + d ⋅ h 3 2 d 2 f t d t 2 + d ⋅ h 4 4 d 3 f t d t 3 + O ( h 5 ) {\displaystyle {\begin{aligned}y_{t+h}={}&y_{t}+h\left\lbrace a\cdot f(y_{t},\ t)+b\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right.+\\&{}+c\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]+\\&{}+d\cdot \left[f(y_{t},\ t)+h{\frac {d}{dt}}\left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f(y_{t},\ t)+\left.{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]\right]\right\rbrace +{\mathcal {O}}(h^{5})\\={}&y_{t}+a\cdot hf_{t}+b\cdot hf_{t}+b\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+c\cdot hf_{t}+c\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+\\&{}+c\cdot {\frac {h^{3}}{4}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot hf_{t}+d\cdot h^{2}{\frac {df_{t}}{dt}}+d\cdot {\frac {h^{3}}{2}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot {\frac {h^{4}}{4}}{\frac {d^{3}f_{t}}{dt^{3}}}+{\mathcal {O}}(h^{5})\end{aligned}}} and comparing this with the Taylor series of y t + h {\displaystyle y_{t+h}} around t {\displaystyle t} : y t + h = y t + h y ˙ t + h 2 2 y ¨ t + h 3 6 y t ( 3 ) + h 4 24 y t ( 4 ) + O ( h 5 ) = = y t + h f ( y t , t ) + h 2 2 d d t f ( y t , t ) + h 3 6 d 2 d t 2 f ( y t , t ) + h 4 24 d 3 d t 3 f ( y t , t ) {\displaystyle {\begin{aligned}y_{t+h}&=y_{t}+h{\dot {y}}_{t}+{\frac {h^{2}}{2}}{\ddot {y}}_{t}+{\frac {h^{3}}{6}}y_{t}^{(3)}+{\frac {h^{4}}{24}}y_{t}^{(4)}+{\mathcal 
{O}}(h^{5})=\\&=y_{t}+hf(y_{t},\ t)+{\frac {h^{2}}{2}}{\frac {d}{dt}}f(y_{t},\ t)+{\frac {h^{3}}{6}}{\frac {d^{2}}{dt^{2}}}f(y_{t},\ t)+{\frac {h^{4}}{24}}{\frac {d^{3}}{dt^{3}}}f(y_{t},\ t)\end{aligned}}} we obtain a system of constraints on the coefficients: { a + b + c + d = 1 1 2 b + 1 2 c + d = 1 2 1 4 c + 1 2 d = 1 6 1 4 d = 1 24 {\displaystyle {\begin{cases}&a+b+c+d=1\\[6pt]&{\frac {1}{2}}b+{\frac {1}{2}}c+d={\frac {1}{2}}\\[6pt]&{\frac {1}{4}}c+{\frac {1}{2}}d={\frac {1}{6}}\\[6pt]&{\frac {1}{4}}d={\frac {1}{24}}\end{cases}}} which when solved gives a = 1 6 , b = 1 3 , c = 1 3 , d = 1 6 {\displaystyle a={\frac {1}{6}},b={\frac {1}{3}},c={\frac {1}{3}},d={\frac {1}{6}}} as stated above. == See also == Euler's method List of Runge–Kutta methods Numerical methods for ordinary differential equations Runge–Kutta method (SDE) General linear methods Lie group integrator == Notes == == References == Runge, Carl David Tolmé (1895), "Über die numerische Auflösung von Differentialgleichungen", Mathematische Annalen, 46 (2), Springer: 167–178, doi:10.1007/BF01446807, S2CID 119924854. Kutta, Wilhelm (1901), "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Zeitschrift für Mathematik und Physik, 46: 435–453. Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8. Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50023-0. Butcher, John C. (May 1963), "Coefficients for the study of Runge-Kutta integration processes", Journal of the Australian Mathematical Society, 3 (2): 185–201, doi:10.1017/S1446788700027932. Butcher, John C. (May 1964), "On Runge-Kutta processes of high order", Journal of the Australian Mathematical Society, 4 (2): 179–194, doi:10.1017/S1446788700023387 Butcher, John C. 
(1975), "A stability property of implicit Runge-Kutta methods", BIT, 15 (4): 358–361, doi:10.1007/bf01931672, S2CID 120854166. Butcher, John C. (2000), "Numerical methods for ordinary differential equations in the 20th century", J. Comput. Appl. Math., 125 (1–2): 1–29, Bibcode:2000JCoAM.125....1B, doi:10.1016/S0377-0427(00)00455-6. Butcher, John C. (2008), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-470-72335-7. Cellier, F.; Kofman, E. (2006), Continuous System Simulation, Springer Verlag, ISBN 0-387-26102-8. Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, hdl:10338.dmlcz/103497, ISSN 0006-3835, S2CID 120241743. Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B. (1977), Computer Methods for Mathematical Computations, Prentice-Hall (see Chapter 6). Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2. Lambert, J.D (1991), Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, ISBN 0-471-92990-5 Kaw, Autar; Kalu, Egwu (2008), Numerical Methods with Applications (1st ed.), autarkaw.com. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 17.1 Runge-Kutta Method", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8. Also, Section 17.2. Adaptive Stepsize Control for Runge-Kutta. 
Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. Tan, Delin; Chen, Zheng (2012), "On A General Formula of Fourth Order Runge-Kutta Method" (PDF), Journal of Mathematical Science & Mathematics Education, 7 (2): 1–10. Advanced Discrete Mathematics, IGNOU reference book (code MCS-033). John C. Butcher: "B-Series: Algebraic Analysis of Numerical Methods", Springer (SSCM, volume 55), ISBN 978-3030709556 (April, 2021). Butcher, J.C. (1985), "The non-existence of ten stage eighth order explicit Runge-Kutta methods", BIT Numerical Mathematics, 25 (3): 521–540, doi:10.1007/BF01935372. Butcher, J.C. (1965), "On the attainable order of Runge-Kutta methods", Mathematics of Computation, 19 (91): 408–417, doi:10.1090/S0025-5718-1965-0179943-X. Curtis, A.R. (1970), "An eighth order Runge-Kutta process with eleven function evaluations per step", Numerische Mathematik, 16 (3): 268–277, doi:10.1007/BF02219778. Cooper, G.J.; Verner, J.H. (1972), "Some Explicit Runge–Kutta Methods of High Order", SIAM Journal on Numerical Analysis, 9 (3): 389–405, Bibcode:1972SJNA....9..389C, doi:10.1137/0709037. Butcher, J.C. (1996), "A History of Runge-Kutta Methods", Applied Numerical Mathematics, 20 (3): 247–260, doi:10.1016/0168-9274(95)00108-5. == External links == "Runge-Kutta method", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Runge–Kutta 4th-Order Method Tracker Component Library Implementation in Matlab — Implements 32 embedded Runge Kutta algorithms in RungeKStep, 24 embedded Runge-Kutta Nyström algorithms in RungeKNystroemSStep and 4 general Runge-Kutta Nyström algorithms in RungeKNystroemGStep.
Wikipedia/Implicit_Runge–Kutta_methods
OpenModelica is a free and open source environment based on the Modelica modeling language for modeling, simulating, optimizing and analyzing complex dynamic systems. This software is actively developed by Open Source Modelica Consortium, a non-profit, non-governmental organization. The Open Source Modelica Consortium is run as a project of RISE SICS East AB in collaboration with Linköping University. OpenModelica is used in academic and industrial environments. Industrial applications include the use of OpenModelica along with proprietary software in the fields of power plant optimization, automotive and water treatment. == Tools and Applications == === OpenModelica Compiler (OMC) === OpenModelica Compiler (OMC) is a Modelica compiler, translating Modelica to C code, with a symbol table containing definitions of classes, functions, and variables. Such definitions can be predefined, user-defined, or obtained from libraries. The compiler also includes a Modelica interpreter for interactive usage and constant expression evaluation. The subsystem also includes facilities for building simulation executables linked with selected numerical ODE or DAE solvers. The OMC is written in MetaModelica, a unified equation-based semantical and mathematical modeling language and is bootstrapped. === OpenModelica Connection Editor (OMEdit) === OpenModelica Connection Editor is an open source graphical user interface for creating, editing and simulating Modelica models in textual and graphical modes. OMEdit communicates with OMC through an interactive API, requests model information and creates models/connection diagrams based on the Modelica annotations. The implementation is based on C++ and the Qt library. === OpenModelica Shell (OMShell) === OpenModelica Shell (OMShell) is an interactive command-line interface that parses and interprets commands and Modelica expressions for evaluation, simulation, plotting, etc. 
The session handler also contains simple history facilities, and completion of file names and certain identifiers in commands. === OpenModelica Notebook (OMNotebook) === OpenModelica Notebook (OMNotebook) is a lightweight Mathematica-style editor for Modelica that implements interactive WYSIWYG realization of Literate Programming, a form of programming where programs are integrated with documentation in the same document. OMNotebook is primarily used for teaching and allows users to mix hierarchically structured text with cells containing Modelica models and expressions. These can be evaluated, simulated and plotted with the results displayed directly in the OMNotebook. === OpenModelica Python Interface (OMPython) === OMPython is a Python interface enabling users to access the modeling and simulation capabilities of OpenModelica from Python. It uses CORBA (omniORB) or ZeroMQ to communicate with the OpenModelica scripting API. === OpenModelica Matlab Interface (OMMatlab) === OMMatlab is a Matlab interface that provides access to the modeling and simulation capabilities of OpenModelica from Matlab. It uses ZeroMQ to communicate with the OpenModelica compiler API. === Modelica Development Tooling (MDT) === MDT is an Eclipse plugin that integrates the OpenModelica compiler with Eclipse. It provides an editor for advanced text-based model editing with code assistance. MDT interacts with the OpenModelica Compiler through an existing CORBA-based API and is used primarily in the development of the OpenModelica compiler. == See also == Modelica Dymola JModelica.org Wolfram SystemModeler SimulationX Simulink == References ==
Wikipedia/OpenModelica
Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used. == Definitions == Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form y ′ = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle y'=f(t,y),\quad y(t_{0})=y_{0}.} The result is approximations for the value of y ( t ) {\displaystyle y(t)} at discrete times t i {\displaystyle t_{i}} : y i ≈ y ( t i ) where t i = t 0 + i h , {\displaystyle y_{i}\approx y(t_{i})\quad {\text{where}}\quad t_{i}=t_{0}+ih,} where h {\displaystyle h} is the time step (sometimes referred to as Δ t {\displaystyle \Delta t} ) and i {\displaystyle i} is an integer. Multistep methods use information from the previous s {\displaystyle s} steps to calculate the next value. In particular, a linear multistep method uses a linear combination of y i {\displaystyle y_{i}} and f ( t i , y i ) {\displaystyle f(t_{i},y_{i})} to calculate the value of y {\displaystyle y} for the desired current step. 
Thus, a linear multistep method is a method of the form y n + s + a s − 1 ⋅ y n + s − 1 + a s − 2 ⋅ y n + s − 2 + ⋯ + a 0 ⋅ y n = h ⋅ ( b s ⋅ f ( t n + s , y n + s ) + b s − 1 ⋅ f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 ⋅ f ( t n , y n ) ) ⇔ ∑ j = 0 s a j y n + j = h ∑ j = 0 s b j f ( t n + j , y n + j ) , {\displaystyle {\begin{aligned}&y_{n+s}+a_{s-1}\cdot y_{n+s-1}+a_{s-2}\cdot y_{n+s-2}+\cdots +a_{0}\cdot y_{n}\\&\qquad {}=h\cdot \left(b_{s}\cdot f(t_{n+s},y_{n+s})+b_{s-1}\cdot f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}\cdot f(t_{n},y_{n})\right)\\&\Leftrightarrow \sum _{j=0}^{s}a_{j}y_{n+j}=h\sum _{j=0}^{s}b_{j}f(t_{n+j},y_{n+j}),\end{aligned}}} with a s = 1 {\displaystyle a_{s}=1} . The coefficients a 0 , … , a s − 1 {\displaystyle a_{0},\dotsc ,a_{s-1}} and b 0 , … , b s {\displaystyle b_{0},\dotsc ,b_{s}} determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method. One can distinguish between explicit and implicit methods. If b s = 0 {\displaystyle b_{s}=0} , then the method is called "explicit", since the formula can directly compute y n + s {\displaystyle y_{n+s}} . If b s ≠ 0 {\displaystyle b_{s}\neq 0} then the method is called "implicit", since the value of y n + s {\displaystyle y_{n+s}} depends on the value of f ( t n + s , y n + s ) {\displaystyle f(t_{n+s},y_{n+s})} , and the equation must be solved for y n + s {\displaystyle y_{n+s}} . Iterative methods such as Newton's method are often used to solve the implicit formula. Sometimes an explicit multistep method is used to "predict" the value of y n + s {\displaystyle y_{n+s}} . That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method. == Examples == Consider for an example the problem y ′ = f ( t , y ) = y , y ( 0 ) = 1. 
{\displaystyle y'=f(t,y)=y,\quad y(0)=1.} The exact solution is y ( t ) = e t {\displaystyle y(t)=e^{t}} . === One-step Euler === A simple numerical method is Euler's method: y n + 1 = y n + h f ( t n , y n ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).} Euler's method can be viewed as an explicit multistep method for the degenerate case of one step. This method, applied with step size h = 1 2 {\displaystyle h={\tfrac {1}{2}}} on the problem y ′ = y {\displaystyle y'=y} , gives the following results: y 1 = y 0 + h f ( t 0 , y 0 ) = 1 + 1 2 ⋅ 1 = 1.5 , y 2 = y 1 + h f ( t 1 , y 1 ) = 1.5 + 1 2 ⋅ 1.5 = 2.25 , y 3 = y 2 + h f ( t 2 , y 2 ) = 2.25 + 1 2 ⋅ 2.25 = 3.375 , y 4 = y 3 + h f ( t 3 , y 3 ) = 3.375 + 1 2 ⋅ 3.375 = 5.0625. {\displaystyle {\begin{aligned}y_{1}&=y_{0}+hf(t_{0},y_{0})=1+{\tfrac {1}{2}}\cdot 1=1.5,\\y_{2}&=y_{1}+hf(t_{1},y_{1})=1.5+{\tfrac {1}{2}}\cdot 1.5=2.25,\\y_{3}&=y_{2}+hf(t_{2},y_{2})=2.25+{\tfrac {1}{2}}\cdot 2.25=3.375,\\y_{4}&=y_{3}+hf(t_{3},y_{3})=3.375+{\tfrac {1}{2}}\cdot 3.375=5.0625.\end{aligned}}} === Two-step Adams–Bashforth === Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method y n + 2 = y n + 1 + 3 2 h f ( t n + 1 , y n + 1 ) − 1 2 h f ( t n , y n ) . {\displaystyle y_{n+2}=y_{n+1}+{\tfrac {3}{2}}hf(t_{n+1},y_{n+1})-{\tfrac {1}{2}}hf(t_{n},y_{n}).} This method needs two values, y n + 1 {\displaystyle y_{n+1}} and y n {\displaystyle y_{n}} , to compute the next value, y n + 2 {\displaystyle y_{n+2}} . However, the initial value problem provides only one value, y 0 = 1 {\displaystyle y_{0}=1} . One possibility to resolve this issue is to use the y 1 {\displaystyle y_{1}} computed by Euler's method as the second value. 
With this choice, the Adams–Bashforth method yields (rounded to four digits): y 2 = y 1 + 3 2 h f ( t 1 , y 1 ) − 1 2 h f ( t 0 , y 0 ) = 1.5 + 3 2 ⋅ 1 2 ⋅ 1.5 − 1 2 ⋅ 1 2 ⋅ 1 = 2.375 , y 3 = y 2 + 3 2 h f ( t 2 , y 2 ) − 1 2 h f ( t 1 , y 1 ) = 2.375 + 3 2 ⋅ 1 2 ⋅ 2.375 − 1 2 ⋅ 1 2 ⋅ 1.5 = 3.7812 , y 4 = y 3 + 3 2 h f ( t 3 , y 3 ) − 1 2 h f ( t 2 , y 2 ) = 3.7812 + 3 2 ⋅ 1 2 ⋅ 3.7812 − 1 2 ⋅ 1 2 ⋅ 2.375 = 6.0234. {\displaystyle {\begin{aligned}y_{2}&=y_{1}+{\tfrac {3}{2}}hf(t_{1},y_{1})-{\tfrac {1}{2}}hf(t_{0},y_{0})=1.5+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1=2.375,\\y_{3}&=y_{2}+{\tfrac {3}{2}}hf(t_{2},y_{2})-{\tfrac {1}{2}}hf(t_{1},y_{1})=2.375+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5=3.7812,\\y_{4}&=y_{3}+{\tfrac {3}{2}}hf(t_{3},y_{3})-{\tfrac {1}{2}}hf(t_{2},y_{2})=3.7812+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 3.7812-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375=6.0234.\end{aligned}}} The exact solution at t = t 4 = 2 {\displaystyle t=t_{4}=2} is e 2 = 7.3891 … {\displaystyle e^{2}=7.3891\ldots } , so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough. == Families of multistep methods == Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs). === Adams–Bashforth methods === The Adams–Bashforth methods are explicit methods. The coefficients are a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} , while the b j {\displaystyle b_{j}} are chosen such that the methods have order s (this determines the methods uniquely). The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 
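These hand computations are straightforward to reproduce numerically. The sketch below (illustrative code) bootstraps y₁ with Euler's method and then iterates the two-step Adams–Bashforth formula on y′ = y with h = 1/2:

```python
import math

f = lambda t, y: y                    # y' = y, y(0) = 1, exact solution e^t
h = 0.5
ys = [1.0]
ys.append(ys[0] + h * f(0.0, ys[0]))  # bootstrap y1 with one Euler step: 1.5

for n in range(3):                    # Adams-Bashforth steps producing y2, y3, y4
    t_n = n * h
    y_next = ys[n + 1] + 1.5 * h * f(t_n + h, ys[n + 1]) - 0.5 * h * f(t_n, ys[n])
    ys.append(y_next)

print(ys)               # [1.0, 1.5, 2.375, 3.78125, 6.0234375]
print(math.exp(2.0))    # exact value at t4 = 2: 7.389...
```

All intermediate values are dyadic rationals, so the computed sequence matches the worked example above exactly (the article rounds 3.78125 to 3.7812 and 6.0234375 to 6.0234).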
103): y n + 1 = y n + h f ( t n , y n ) , (This is the Euler method) y n + 2 = y n + 1 + h ( 3 2 f ( t n + 1 , y n + 1 ) − 1 2 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 16 12 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 55 24 f ( t n + 3 , y n + 3 ) − 59 24 f ( t n + 2 , y n + 2 ) + 37 24 f ( t n + 1 , y n + 1 ) − 9 24 f ( t n , y n ) ) , y n + 5 = y n + 4 + h ( 1901 720 f ( t n + 4 , y n + 4 ) − 2774 720 f ( t n + 3 , y n + 3 ) + 2616 720 f ( t n + 2 , y n + 2 ) − 1274 720 f ( t n + 1 , y n + 1 ) + 251 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+hf(t_{n},y_{n}),\qquad {\text{(This is the Euler method)}}\\y_{n+2}&=y_{n+1}+h\left({\frac {3}{2}}f(t_{n+1},y_{n+1})-{\frac {1}{2}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {23}{12}}f(t_{n+2},y_{n+2})-{\frac {16}{12}}f(t_{n+1},y_{n+1})+{\frac {5}{12}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {55}{24}}f(t_{n+3},y_{n+3})-{\frac {59}{24}}f(t_{n+2},y_{n+2})+{\frac {37}{24}}f(t_{n+1},y_{n+1})-{\frac {9}{24}}f(t_{n},y_{n})\right),\\y_{n+5}&=y_{n+4}+h\left({\frac {1901}{720}}f(t_{n+4},y_{n+4})-{\frac {2774}{720}}f(t_{n+3},y_{n+3})+{\frac {2616}{720}}f(t_{n+2},y_{n+2})-{\frac {1274}{720}}f(t_{n+1},y_{n+1})+{\frac {251}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The coefficients b j {\displaystyle b_{j}} can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 {\displaystyle s-1} such that p ( t n + i ) = f ( t n + i , y n + i ) , for i = 0 , … , s − 1. {\displaystyle p(t_{n+i})=f(t_{n+i},y_{n+i}),\qquad {\text{for }}i=0,\ldots ,s-1.} The Lagrange formula for polynomial interpolation yields p ( t ) = ∑ j = 0 s − 1 ( − 1 ) s − j − 1 f ( t n + j , y n + j ) j ! ( s − j − 1 ) ! h s − 1 ∏ i = 0 i ≠ j s − 1 ( t − t n + i ) . 
{\displaystyle p(t)=\sum _{j=0}^{s-1}{\frac {(-1)^{s-j-1}f(t_{n+j},y_{n+j})}{j!(s-j-1)!h^{s-1}}}\prod _{i=0 \atop i\neq j}^{s-1}(t-t_{n+i}).} The polynomial p is locally a good approximation of the right-hand side of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} that is to be solved, so consider the equation y ′ = p ( t ) {\displaystyle y'=p(t)} instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking y n + s = y n + s − 1 + ∫ t n + s − 1 t n + s p ( t ) d t . {\displaystyle y_{n+s}=y_{n+s-1}+\int _{t_{n+s-1}}^{t_{n+s}}p(t)\,\mathrm {d} t.} The Adams–Bashforth method arises when the formula for p is substituted. The coefficients b j {\displaystyle b_{j}} turn out to be given by b s − j − 1 = ( − 1 ) j j ! ( s − j − 1 ) ! ∫ 0 1 ∏ i = 0 i ≠ j s − 1 ( u + i ) d u , for j = 0 , … , s − 1. {\displaystyle b_{s-j-1}={\frac {(-1)^{j}}{j!(s-j-1)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s-1}(u+i)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s-1.} Replacing f ( t , y ) {\displaystyle f(t,y)} by its interpolant p incurs an error of order h s {\displaystyle h^{s}} , and it follows that the s-step Adams–Bashforth method indeed has order s (Iserles 1996, §2.1). The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977). === Adams–Moulton methods === The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} . Again the b coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. 
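Before turning to the implicit case, note that the integral formula above for the Adams–Bashforth coefficients can be evaluated in exact rational arithmetic. The following sketch (illustrative code using Python's fractions module; helper names are chosen for this example) reproduces the tabulated coefficients:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integral_0_1(p):
    """Integrate a polynomial exactly over [0, 1]."""
    return sum(c / (k + 1) for k, c in enumerate(p))

def adams_bashforth(s):
    """Coefficients [b_{s-1}, ..., b_0] of the s-step method, via the integral formula above."""
    bs = []
    for j in range(s):
        p = [Fraction(1)]
        for i in range(s):
            if i != j:
                p = poly_mul(p, [Fraction(i), Fraction(1)])   # factor (u + i)
        bs.append(Fraction((-1) ** j, factorial(j) * factorial(s - j - 1)) * integral_0_1(p))
    return bs

print(adams_bashforth(1))   # b0 = 1: Euler's method
print(adams_bashforth(2))   # 3/2, -1/2: the two-step method
print(adams_bashforth(3))   # 23/12, -4/3, 5/12: the three-step method above
print(adams_bashforth(4))   # 55/24, -59/24, 37/24, -9/24
```

Because everything is a Fraction, the results match the coefficients listed earlier (e.g. −4/3 is the −16/12 of the three-step formula) with no rounding.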
By removing the restriction that b s = 0 {\displaystyle b_{s}=0} , an s-step Adams–Moulton method can reach order s + 1 {\displaystyle s+1} , while an s-step Adams–Bashforth method has only order s. The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are listed below (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000); the first two methods are the backward Euler method and the trapezoidal rule, respectively: y n = y n − 1 + h f ( t n , y n ) , y n + 1 = y n + 1 2 h ( f ( t n + 1 , y n + 1 ) + f ( t n , y n ) ) , y n + 2 = y n + 1 + h ( 5 12 f ( t n + 2 , y n + 2 ) + 8 12 f ( t n + 1 , y n + 1 ) − 1 12 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 9 24 f ( t n + 3 , y n + 3 ) + 19 24 f ( t n + 2 , y n + 2 ) − 5 24 f ( t n + 1 , y n + 1 ) + 1 24 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 251 720 f ( t n + 4 , y n + 4 ) + 646 720 f ( t n + 3 , y n + 3 ) − 264 720 f ( t n + 2 , y n + 2 ) + 106 720 f ( t n + 1 , y n + 1 ) − 19 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n}&=y_{n-1}+hf(t_{n},y_{n}),\\y_{n+1}&=y_{n}+{\frac {1}{2}}h\left(f(t_{n+1},y_{n+1})+f(t_{n},y_{n})\right),\\y_{n+2}&=y_{n+1}+h\left({\frac {5}{12}}f(t_{n+2},y_{n+2})+{\frac {8}{12}}f(t_{n+1},y_{n+1})-{\frac {1}{12}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {9}{24}}f(t_{n+3},y_{n+3})+{\frac {19}{24}}f(t_{n+2},y_{n+2})-{\frac {5}{24}}f(t_{n+1},y_{n+1})+{\frac {1}{24}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {251}{720}}f(t_{n+4},y_{n+4})+{\frac {646}{720}}f(t_{n+3},y_{n+3})-{\frac {264}{720}}f(t_{n+2},y_{n+2})+{\frac {106}{720}}f(t_{n+1},y_{n+1})-{\frac {19}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points t n − 1 , … , t n − s {\displaystyle t_{n-1},\dots ,t_{n-s}} , as above, but also t n {\displaystyle t_{n}} . The coefficients are given by b s − j = ( − 1 ) j j ! ( s − j ) !
∫ 0 1 ∏ i = 0 i ≠ j s ( u + i − 1 ) d u , for j = 0 , … , s . {\displaystyle b_{s-j}={\frac {(-1)^{j}}{j!(s-j)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s}(u+i-1)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s.} The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1). === Backward differentiation formulas (BDF) === The BDF methods are implicit methods with b s − 1 = ⋯ = b 0 = 0 {\displaystyle b_{s-1}=\cdots =b_{0}=0} and the other coefficients chosen such that the method attains order s (the maximum possible). These methods are especially used for the solution of stiff differential equations. == Analysis == The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability. === Consistency and order === The first question is whether the method is consistent: is the difference equation a s y n + s + a s − 1 y n + s − 1 + a s − 2 y n + s − 2 + ⋯ + a 0 y n = h ( b s f ( t n + s , y n + s ) + b s − 1 f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 f ( t n , y n ) ) , {\displaystyle {\begin{aligned}&a_{s}y_{n+s}+a_{s-1}y_{n+s-1}+a_{s-2}y_{n+s-2}+\cdots +a_{0}y_{n}\\&\qquad {}=h{\bigl (}b_{s}f(t_{n+s},y_{n+s})+b_{s-1}f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}f(t_{n},y_{n}){\bigr )},\end{aligned}}} a good approximation of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} ? 
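The predictor-corrector pairing mentioned above can be sketched in a few lines. This is our own illustration (names and test problem are not from the article): the two-step Adams–Bashforth formula predicts, and the trapezoidal rule (the one-step Adams–Moulton method) corrects, in the classic predict-evaluate-correct (PECE) pattern:

```python
import math

def pece_step(f, t, y_prev, y, h):
    """One PECE step: 2-step Adams-Bashforth predictor,
    trapezoidal-rule (1-step Adams-Moulton) corrector."""
    f_prev, f_curr = f(t - h, y_prev), f(t, y)
    y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)   # predict with AB2
    f_pred = f(t + h, y_pred)                        # evaluate at the prediction
    return y + 0.5 * h * (f_curr + f_pred)           # correct with the trapezoidal rule

# Test problem y' = -y, y(0) = 1, whose exact solution is exp(-t).
f = lambda t, y: -y
h, n = 0.01, 100
ys = [1.0, math.exp(-h)]                 # the second starting value is seeded exactly
for i in range(1, n):
    ys.append(pece_step(f, i * h, ys[i - 1], ys[i], h))
error = abs(ys[n] - math.exp(-1.0))      # global error at t = 1
```

Solving the corrector equation exactly, as Adams did with Newton's method, would give similar second-order accuracy; the PECE form simply avoids the nonlinear solve.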
More precisely, a multistep method is consistent if the local truncation error goes to zero faster than the step size h as h goes to zero, where the local truncation error is defined to be the difference between the result y n + s {\displaystyle y_{n+s}} of the method, assuming that all the previous values y n + s − 1 , … , y n {\displaystyle y_{n+s-1},\ldots ,y_{n}} are exact, and the exact solution of the equation at time t n + s {\displaystyle t_{n+s}} . A computation using Taylor series shows that a linear multistep method is consistent if and only if ∑ k = 0 s − 1 a k = − 1 and ∑ k = 0 s b k = s + ∑ k = 0 s − 1 k a k . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad \sum _{k=0}^{s}b_{k}=s+\sum _{k=0}^{s-1}ka_{k}.} All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2). If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error is of order O ( h p + 1 ) {\displaystyle O(h^{p+1})} as h goes to zero. This is equivalent to the following condition on the coefficients of the methods: ∑ k = 0 s − 1 a k = − 1 and q ∑ k = 0 s k q − 1 b k = s q + ∑ k = 0 s − 1 k q a k for q = 1 , … , p . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad q\sum _{k=0}^{s}k^{q-1}b_{k}=s^{q}+\sum _{k=0}^{s-1}k^{q}a_{k}{\text{ for }}q=1,\ldots ,p.} The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 {\displaystyle s+1} (Hairer, Nørsett & Wanner 1993, §III.2). These conditions are often formulated using the characteristic polynomials ρ ( z ) = z s + ∑ k = 0 s − 1 a k z k and σ ( z ) = ∑ k = 0 s b k z k . 
{\displaystyle \rho (z)=z^{s}+\sum _{k=0}^{s-1}a_{k}z^{k}\quad {\text{and}}\quad \sigma (z)=\sum _{k=0}^{s}b_{k}z^{k}.} In terms of these polynomials, the above condition for the method to have order p becomes ρ ( e h ) − h σ ( e h ) = O ( h p + 1 ) as h → 0. {\displaystyle \rho (e^{h})-h\sigma (e^{h})=O(h^{p+1})\quad {\text{as }}h\to 0.} In particular, the method is consistent if it has order at least one, which is the case if ρ ( 1 ) = 0 {\displaystyle \rho (1)=0} and ρ ′ ( 1 ) = σ ( 1 ) {\displaystyle \rho '(1)=\sigma (1)} . === Stability and convergence === The numerical solution of a one-step method depends on the initial condition y 0 {\displaystyle y_{0}} , but the numerical solution of an s-step method depends on all s starting values, y 0 , y 1 , … , y s − 1 {\displaystyle y_{0},y_{1},\ldots ,y_{s-1}} . It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is zero-stable for a certain differential equation on a given time interval if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than Kε for some value of K which does not depend on the step size h. This is called "zero-stability" because it is enough to check the condition for the differential equation y ′ = 0 {\displaystyle y'=0} (Süli & Mayers 2003, p. 332). If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the root condition is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied (Süli & Mayers 2003, p. 335).
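The order conditions stated above can be checked mechanically in exact rational arithmetic. The sketch below is our own illustration (the function name is not from the article); it takes the coefficient lists a_0, ..., a_{s−1} (with a_s = 1 implied) and b_0, ..., b_s:

```python
from fractions import Fraction as F

def satisfies_order(a, b, p):
    """Return True if the linear multistep method with coefficients
    a_0..a_{s-1} (a_s = 1 implied) and b_0..b_s satisfies the order-p conditions."""
    s = len(a)
    if sum(a) != -1:                                  # consistency: sum of a_k is -1
        return False
    for q in range(1, p + 1):
        lhs = q * sum(F(k) ** (q - 1) * b[k] for k in range(s + 1))
        rhs = F(s) ** q + sum(F(k) ** q * a[k] for k in range(s))
        if lhs != rhs:
            return False
    return True

# 3-step Adams-Bashforth: a = (0, 0, -1), b = (5/12, -16/12, 23/12, 0)
ab3_a = [F(0), F(0), F(-1)]
ab3_b = [F(5, 12), F(-16, 12), F(23, 12), F(0)]
```

Here `satisfies_order(ab3_a, ab3_b, 3)` holds while the order-4 condition fails, confirming that the three-step Adams–Bashforth method has order exactly 3.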
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values y 1 , … , y s − 1 {\displaystyle y_{1},\ldots ,y_{s-1}} all converge to the initial value y 0 {\displaystyle y_{0}} as h → 0 {\displaystyle h\to 0} . Then, the numerical solution converges to the exact solution as h → 0 {\displaystyle h\to 0} if and only if the method is zero-stable. This result is known as the Dahlquist equivalence theorem, named after Germund Dahlquist; this theorem is similar in spirit to the Lax equivalence theorem for finite difference methods. Furthermore, if the method has order p, then the global error (the difference between the numerical solution and the exact solution at a fixed time) is O ( h p ) {\displaystyle O(h^{p})} (Süli & Mayers 2003, p. 340). Furthermore, if the method is convergent, the method is said to be strongly stable if z = 1 {\displaystyle z=1} is the only root of modulus 1. If it is convergent and all roots of modulus 1 are not repeated, but there is more than one such root, it is said to be relatively stable. Note that 1 must be a root for the method to be convergent; thus convergent methods are always one of these two. To assess the performance of linear multistep methods on stiff equations, consider the linear test equation y' = λy. A multistep method applied to this differential equation with step size h yields a linear recurrence relation with characteristic polynomial π ( z ; h λ ) = ( 1 − h λ β s ) z s + ∑ k = 0 s − 1 ( α k − h λ β k ) z k = ρ ( z ) − h λ σ ( z ) . {\displaystyle \pi (z;h\lambda )=(1-h\lambda \beta _{s})z^{s}+\sum _{k=0}^{s-1}(\alpha _{k}-h\lambda \beta _{k})z^{k}=\rho (z)-h\lambda \sigma (z).} This polynomial is called the stability polynomial of the multistep method. 
If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be absolutely stable for that value of hλ. The method is said to be A-stable if it is absolutely stable for all hλ with negative real part. The region of absolute stability is the set of all hλ for which the multistep method is absolutely stable (Süli & Mayers 2003, pp. 347 & 348). For more details, see the section on stiff equations and multistep methods. === Example === Consider the Adams–Bashforth three-step method y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 4 3 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) . {\displaystyle y_{n+3}=y_{n+2}+h\left({23 \over 12}f(t_{n+2},y_{n+2})-{4 \over 3}f(t_{n+1},y_{n+1})+{5 \over 12}f(t_{n},y_{n})\right).} One characteristic polynomial is thus ρ ( z ) = z 3 − z 2 = z 2 ( z − 1 ) {\displaystyle \rho (z)=z^{3}-z^{2}=z^{2}(z-1)} which has roots z = 0 , 1 {\displaystyle z=0,1} , and the conditions above are satisfied. As z = 1 {\displaystyle z=1} is the only root of modulus 1, the method is strongly stable. The other characteristic polynomial is σ ( z ) = 23 12 z 2 − 4 3 z + 5 12 {\displaystyle \sigma (z)={\frac {23}{12}}z^{2}-{\frac {4}{3}}z+{\frac {5}{12}}} == First and second Dahlquist barriers == These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier was proved in Dahlquist (1956) and the second in Dahlquist (1963). === First Dahlquist barrier === The first Dahlquist barrier states that a zero-stable and linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5). 
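The distinction between strong and relative stability drawn above can be observed numerically. The explicit midpoint rule y_{n+2} = y_n + 2h f(t_{n+1}, y_{n+1}) has ρ(z) = z² − 1, so both roots have modulus 1: it is zero-stable but only relatively stable, and on the test equation y' = −y the spurious root acquires modulus greater than one. The following sketch (our own; step size and horizon are illustrative) shows the parasitic mode, seeded only by starting and rounding errors, eventually swamping the decaying solution:

```python
import math

# Explicit midpoint rule applied to y' = -y with exact starting values.
f = lambda t, y: -y
h, n = 0.1, 400
ys = [1.0, math.exp(-h)]
for i in range(n - 1):
    ys.append(ys[i] + 2 * h * f((i + 1) * h, ys[i + 1]))

early_error = abs(ys[20] - math.exp(-20 * h))   # still accurate at t = 2
late_value = ys[-1]                             # growing oscillation dominates by t = 40
```

The spurious root here is −h − √(1 + h²) ≈ −1.105, so the parasitic mode grows by roughly 10% per step and alternates in sign; a strongly stable method such as Adams–Bashforth has no such mode.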
=== Second Dahlquist barrier === The second Dahlquist barrier states that no explicit linear multistep methods are A-stable. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant (Dahlquist 1963, Thm 2.1 and 2.2). == See also == Digital energy gain == References == Bashforth, Francis (1883), An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams, Cambridge{{citation}}: CS1 maint: location missing publisher (link). Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, John Wiley, ISBN 978-0-471-96758-3. Dahlquist, Germund (1956), "Convergence and stability in the numerical integration of ordinary differential equations", Mathematica Scandinavica, 4: 33–53, doi:10.7146/math.scand.a-10454. Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, ISSN 0006-3835, S2CID 120241743. Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag, ISBN 978-0-387-90277-7. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems (2nd ed.), Berlin: Springer Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2. Milne, W. E. 
(1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly, 33 (9), Mathematical Association of America: 455–460, doi:10.2307/2299609, JSTOR 2299609. Moulton, Forest R. (1926), New methods in exterior ballistics, University of Chicago Press. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000), Matematica Numerica, Springer Verlag, ISBN 978-88-470-0077-3. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. == External links == Weisstein, Eric W. "Adams Method". MathWorld.
Wikipedia/Multistep_method
In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in simulations the energy might fluctuate on a short time scale and increase or decrease on a very long time scale due to numerical integration artifacts that arise with the use of a finite time step Δt. This is somewhat similar to the flying ice cube problem, whereby numerical errors in handling equipartition of energy can change vibrational energy into translational energy. More specifically, the energy tends to increase exponentially; its increase can be understood intuitively because each step introduces a small perturbation δv to the true velocity vtrue, which (if uncorrelated with v, which will be true for simple integration methods) results in a second-order increase in the energy E = ∑ m v 2 = ∑ m v true 2 + ∑ m δ v 2 {\displaystyle E=\sum m\mathbf {v} ^{2}=\sum m\mathbf {v} _{\text{true}}^{2}+\sum m\,\delta \mathbf {v} ^{2}} (The cross term in v · δv is zero because of no correlation.) Energy drift - usually damping - is substantial for numerical integration schemes that are not symplectic, such as the Runge-Kutta family. Symplectic integrators usually used in molecular dynamics, such as the Verlet integrator family, exhibit increases in energy over very long time scales, though the error remains roughly constant. These integrators do not in fact reproduce the actual Hamiltonian mechanics of the system; instead, they reproduce a closely related "shadow" Hamiltonian whose value they conserve many orders of magnitude more closely. The accuracy of the energy conservation for the true Hamiltonian is dependent on the time step. The energy computed from the modified Hamiltonian of a symplectic integrator is O ( Δ t p ) {\displaystyle {\mathcal {O}}\left(\Delta t^{p}\right)} from the true Hamiltonian. 
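This contrast is easy to reproduce on a harmonic oscillator with unit mass and frequency. The sketch below is our own illustration, with explicit Euler standing in for a non-symplectic scheme and velocity Verlet for a symplectic one:

```python
def energy(x, v):
    return 0.5 * (v * v + x * x)            # unit-mass, unit-frequency oscillator

def euler_step(x, v, h):                    # explicit Euler: not symplectic
    return x + h * v, v - h * x

def verlet_step(x, v, h):                   # velocity Verlet: symplectic
    v_half = v - 0.5 * h * x                # kick
    x_new = x + h * v_half                  # drift
    return x_new, v_half - 0.5 * h * x_new  # kick

h, steps = 0.05, 2000                       # simulate out to t = 100
xe, ve = 1.0, 0.0
xv, vv = 1.0, 0.0
for _ in range(steps):
    xe, ve = euler_step(xe, ve, h)
    xv, vv = verlet_step(xv, vv, h)

euler_drift = energy(xe, ve) - 0.5          # grows without bound
verlet_error = abs(energy(xv, vv) - 0.5)    # stays bounded and small
```

For explicit Euler on this problem the energy is multiplied by exactly (1 + h²) each step, so it grows exponentially, while the Verlet energy merely oscillates near the value of its shadow Hamiltonian.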
Energy drift is similar to parametric resonance in that a finite, discrete timestepping scheme will result in nonphysical, limited sampling of motions with frequencies close to the frequency of velocity updates. Thus the restriction on the maximum step size that will be stable for a given system is proportional to the period of the fastest fundamental modes of the system's motion. For a motion with a natural frequency ω, artificial resonances are introduced when the frequency of velocity updates, 2 π Δ t {\displaystyle {\frac {2\pi }{\Delta t}}} is related to ω as n m ω = 2 π Δ t {\displaystyle {\frac {n}{m}}\omega ={\frac {2\pi }{\Delta t}}} where n and m are integers describing the resonance order. For Verlet integration, resonances up to the fourth order ( n m = 4 ) {\displaystyle \left({\frac {n}{m}}=4\right)} frequently lead to numerical instability, leading to a restriction on the timestep size of Δ t < 2 ω ≈ 0.225 p {\displaystyle \Delta t<{\frac {\sqrt {2}}{\omega }}\approx 0.225p} where ω is the frequency of the fastest motion in the system and p is its period. The fastest motions in most biomolecular systems involve the motions of hydrogen atoms; it is thus common to use constraint algorithms to restrict hydrogen motion and thus increase the maximum stable time step that can be used in the simulation. However, because the time scales of heavy-atom motions are not widely divergent from those of hydrogen motions, in practice this allows only about a twofold increase in time step. Common practice in all-atom biomolecular simulation is to use a time step of 1 femtosecond (fs) for unconstrained simulations and 2 fs for constrained simulations, although larger time steps may be possible for certain systems or choices of parameters. Energy drift can also result from imperfections in evaluating the energy function, usually due to simulation parameters that sacrifice accuracy for computational speed. 
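The timestep restriction above can also be demonstrated directly: on a unit-frequency harmonic oscillator, velocity Verlet is stable just below Δt = 2/ω and blows up just above it. This is an illustrative sketch with our own function name:

```python
def verlet_energy_after(h, steps=200):
    """Energy of a unit harmonic oscillator after `steps` velocity-Verlet
    steps of size h, starting from x = 1, v = 0 (initial energy 0.5)."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        v_half = v - 0.5 * h * x
        x = x + h * v_half
        v = v_half - 0.5 * h * x
    return 0.5 * (x * x + v * v)

stable = verlet_energy_after(1.9)     # just below the threshold: energy stays bounded
unstable = verlet_energy_after(2.1)   # just above: energy grows explosively
```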
For example, cutoff schemes for evaluating the electrostatic forces introduce systematic errors in the energy with each time step as particles move back and forth across the cutoff radius if sufficient smoothing is not used. Particle mesh Ewald summation is one solution for this effect, but introduces artifacts of its own. Errors in the system being simulated can also induce energy drifts characterized as "explosive" that are not artifacts, but are reflective of the instability of the initial conditions; this may occur when the system has not been subjected to sufficient structural minimization before beginning production dynamics. In practice, energy drift may be measured as a percent increase over time, or as a time needed to add a given amount of energy to the system. The practical effects of energy drift depend on the simulation conditions, the thermodynamic ensemble being simulated, and the intended use of the simulation under study; for example, energy drift has much more severe consequences for simulations of the microcanonical ensemble than the canonical ensemble where the temperature is held constant. However, it has been shown that long microcanonical ensemble simulations can be performed with insignificant energy drift, including those of flexible molecules which incorporate constraints and Ewald summations. Energy drift is often used as a measure of the quality of the simulation, and has been proposed as one quality metric to be routinely reported in a mass repository of molecular dynamics trajectory data analogous to the Protein Data Bank. == References == == Further reading == Sanz-Serna JM, Calvo MP. (1994). Numerical Hamiltonian Problems. Chapman & Hall, London, England.
Wikipedia/Energy_drift
In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form x ¨ = d 2 x d t 2 = A ( x ) , {\displaystyle {\ddot {x}}={\frac {d^{2}x}{dt^{2}}}=A(x),} or equivalently of the form v ˙ = d v d t = A ( x ) , x ˙ = d x d t = v , {\displaystyle {\dot {v}}={\frac {dv}{dt}}=A(x),\qquad {\dot {x}}={\frac {dx}{dt}}=v,} particularly in the case of a dynamical system of classical mechanics. The method is known by different names in different disciplines. In particular, it is similar to the velocity Verlet method, which is a variant of Verlet integration. Leapfrog integration is equivalent to updating positions x ( t ) {\displaystyle x(t)} and velocities v ( t ) = x ˙ ( t ) {\displaystyle v(t)={\dot {x}}(t)} at different interleaved time points, staggered in such a way that they "leapfrog" over each other. Leapfrog integration is a second-order method, in contrast to Euler integration, which is only first-order, yet requires the same number of function evaluations per step. Unlike Euler integration, it is stable for oscillatory motion, as long as the time-step Δ t {\displaystyle \Delta t} is constant, and Δ t < 2 / ω {\displaystyle \Delta t<2/\omega } . Using Yoshida coefficients, applying the leapfrog integrator multiple times with the correct timesteps, a much higher order integrator can be generated. 
== Algorithm == In leapfrog integration, the equations for updating position and velocity are a i = A ( x i ) , v i + 1 / 2 = v i − 1 / 2 + a i Δ t , x i + 1 = x i + v i + 1 / 2 Δ t , {\displaystyle {\begin{aligned}a_{i}&=A(x_{i}),\\v_{i+1/2}&=v_{i-1/2}+a_{i}\,\Delta t,\\x_{i+1}&=x_{i}+v_{i+1/2}\,\Delta t,\end{aligned}}} where x i {\displaystyle x_{i}} is position at step i {\displaystyle i} , v i + 1 / 2 {\displaystyle v_{i+1/2\,}} is the velocity, or first derivative of x {\displaystyle x} , at step i + 1 / 2 {\displaystyle i+1/2\,} , a i = A ( x i ) {\displaystyle a_{i}=A(x_{i})} is the acceleration, or second derivative of x {\displaystyle x} , at step i {\displaystyle i} , and Δ t {\displaystyle \Delta t} is the size of each time step. These equations can be expressed in a form that gives velocity at integer steps as well: x i + 1 = x i + v i Δ t + 1 2 a i Δ t 2 , v i + 1 = v i + 1 2 ( a i + a i + 1 ) Δ t . {\displaystyle {\begin{aligned}x_{i+1}&=x_{i}+v_{i}\,\Delta t+{\tfrac {1}{2}}\,a_{i}\,\Delta t^{\,2},\\v_{i+1}&=v_{i}+{\tfrac {1}{2}}(a_{i}+a_{i+1})\,\Delta t.\end{aligned}}} However, in this synchronized form, the time-step Δ t {\displaystyle \Delta t} must be constant to maintain stability. The synchronised form can be re-arranged to the 'kick-drift-kick' form; v i + 1 / 2 = v i + 1 2 a i Δ t , x i + 1 = x i + v i + 1 / 2 Δ t , v i + 1 = v i + 1 / 2 + 1 2 a i + 1 Δ t , {\displaystyle {\begin{aligned}v_{i+1/2}&=v_{i}+{\tfrac {1}{2}}a_{i}\Delta t,\\[2pt]x_{i+1}&=x_{i}+v_{i+1/2}\Delta t,\\[2pt]v_{i+1}&=v_{i+1/2}+{\tfrac {1}{2}}a_{i+1}\Delta t,\end{aligned}}} which is primarily used where variable time-steps are required. The separation of the acceleration calculation onto the beginning and end of a step means that if time resolution is increased by a factor of two ( Δ t → Δ t / 2 {\displaystyle \Delta t\rightarrow \Delta t/2} ), then only one extra (computationally expensive) acceleration calculation is required. 
One use of this equation is in Newtonian gravity simulations, since in that case the acceleration depends only on the positions of the gravitating masses (and not on their velocities). There are two primary strengths to leapfrog integration when applied to mechanics problems. The first is the time-reversibility of the Leapfrog method. One can integrate forward n steps, and then reverse the direction of integration and integrate backwards n steps to arrive at the same starting position. The second strength is its symplectic nature, which implies that it conserves the (slightly modified; see symplectic integrator) energy of a Hamiltonian dynamical system. This is especially useful when computing orbital dynamics, as many other integration schemes, such as the (order-4) Runge–Kutta method, do not conserve energy and allow the system to drift substantially over time. Because of its time-reversibility, and because it is a symplectic integrator, leapfrog integration is also used in Hamiltonian Monte Carlo, a method for drawing random samples from a probability distribution whose overall normalization is unknown. == Yoshida algorithms == The leapfrog integrator can be converted into higher order integrators using techniques due to Haruo Yoshida. In this approach, the leapfrog is applied over a number of different timesteps. It turns out that when the correct timesteps are used in sequence, the errors cancel and far higher order integrators can be easily produced. === 4th order Yoshida integrator === One step under the 4th order Yoshida integrator requires four intermediary steps. The position and velocity are computed at different times. Only three (computationally expensive) acceleration calculations are required. 
The equations for the 4th order integrator to update position and velocity are x i 1 = x i + c 1 v i Δ t , v i 1 = v i + d 1 a ( x i 1 ) Δ t , x i 2 = x i 1 + c 2 v i 1 Δ t , v i 2 = v i 1 + d 2 a ( x i 2 ) Δ t , x i 3 = x i 2 + c 3 v i 2 Δ t , v i 3 = v i 2 + d 3 a ( x i 3 ) Δ t , x i + 1 ≡ x i 4 = x i 3 + c 4 v i 3 Δ t , v i + 1 ≡ v i 4 = v i 3 {\displaystyle {\begin{aligned}x_{i}^{1}&=x_{i}+c_{1}\,v_{i}\,\Delta t,&v_{i}^{1}&=v_{i}+d_{1}\,a(x_{i}^{1})\,\Delta t,\\x_{i}^{2}&=x_{i}^{1}+c_{2}\,v_{i}^{1}\,\Delta t,&v_{i}^{2}&=v_{i}^{1}+d_{2}\,a(x_{i}^{2})\,\Delta t,\\x_{i}^{3}&=x_{i}^{2}+c_{3}\,v_{i}^{2}\,\Delta t,&v_{i}^{3}&=v_{i}^{2}+d_{3}\,a(x_{i}^{3})\,\Delta t,\\x_{i+1}&\equiv x_{i}^{4}=x_{i}^{3}+c_{4}\,v_{i}^{3}\,\Delta t,&v_{i+1}&\equiv v_{i}^{4}=v_{i}^{3}\\\end{aligned}}} where x i , v i {\displaystyle x_{i},v_{i}} are the starting position and velocity, x i n , v i n {\displaystyle x_{i}^{n},v_{i}^{n}} are intermediary position and velocity at intermediary step n {\displaystyle n} , a ( x i n ) {\displaystyle a(x_{i}^{n})} is the acceleration at the position x i n {\displaystyle x_{i}^{n}} , and x i + 1 , v i + 1 {\displaystyle x_{i+1},v_{i+1}} are the final position and velocity under one 4th order Yoshida step. 
Coefficients ( c 1 , c 2 , c 3 , c 4 ) {\displaystyle (c_{1},c_{2},c_{3},c_{4})} and ( d 1 , d 2 , d 3 ) {\displaystyle (d_{1},d_{2},d_{3})} are given by (see equation (4.6) of Yoshida's paper) w 0 ≡ − 2 3 2 − 2 3 , w 1 ≡ 1 2 − 2 3 , c 1 = c 4 ≡ w 1 2 , c 2 = c 3 ≡ w 0 + w 1 2 , d 1 = d 3 ≡ w 1 , d 2 ≡ w 0 {\displaystyle {\begin{aligned}w_{0}&\equiv -{\frac {\sqrt[{3}]{2}}{2-{\sqrt[{3}]{2}}}},&w_{1}&\equiv {\frac {1}{2-{\sqrt[{3}]{2}}}},\\[1ex]c_{1}&=c_{4}\equiv {\frac {w_{1}}{2}},&c_{2}=c_{3}&\equiv {\frac {w_{0}+w_{1}}{2}},\\[1ex]d_{1}&=d_{3}\equiv w_{1},&d_{2}&\equiv w_{0}\\\end{aligned}}} All intermediary steps form one Δ t {\displaystyle \Delta t} step, which implies that the coefficients sum to one: ∑ i = 1 4 c i = 1 {\textstyle \sum _{i=1}^{4}c_{i}=1} and ∑ i = 1 3 d i = 1 {\textstyle \sum _{i=1}^{3}d_{i}=1} . Note that position and velocity are computed at different times and some intermediary steps are backwards in time. To illustrate this, the numerical values of the c n {\displaystyle c_{n}} coefficients are: c 1 = 0.6756 {\displaystyle c_{1}=0.6756} , c 2 = − 0.1756 {\displaystyle c_{2}=-0.1756} , c 3 = − 0.1756 {\displaystyle c_{3}=-0.1756} , c 4 = 0.6756. {\displaystyle c_{4}=0.6756.} == See also == Numerical methods for ordinary differential equations Symplectic integration Euler integration Verlet integration Runge–Kutta integration == References == == External links == [1], Drexel University Physics
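The update equations above translate directly into code. This is our own sketch (the harmonic-oscillator test problem and names are assumptions, not from the article):

```python
cbrt2 = 2.0 ** (1.0 / 3.0)
w0 = -cbrt2 / (2.0 - cbrt2)
w1 = 1.0 / (2.0 - cbrt2)
c = [w1 / 2.0, (w0 + w1) / 2.0, (w0 + w1) / 2.0, w1 / 2.0]
d = [w1, w0, w1]

def yoshida4_step(A, x, v, dt):
    """One 4th-order Yoshida step for x'' = A(x)."""
    for ci, di in zip(c[:3], d):
        x = x + ci * v * dt          # position update
        v = v + di * A(x) * dt       # velocity update
    return x + c[3] * v * dt, v      # final position update; no fourth kick

# Harmonic oscillator A(x) = -x: the energy error stays at the O(dt^4) level.
x, v, dt = 1.0, 0.0, 0.05
for _ in range(1000):
    x, v = yoshida4_step(lambda q: -q, x, v, dt)
energy_error = abs(0.5 * (x * x + v * v) - 0.5)
```

Note that c[1] and c[2] are negative, matching the backwards-in-time intermediary steps mentioned above.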
Wikipedia/Leapfrog_method
The quantized state systems (QSS) methods are a family of numerical integration solvers based on the idea of state quantization, dual to the traditional idea of time discretization. Unlike traditional numerical solution methods, which approach the problem by discretizing time and solving for the next (real-valued) state at each successive time step, QSS methods keep time as a continuous entity and instead quantize the system's state, solving for the time at which the state deviates from its quantized value by a quantum. They offer several advantages over classical algorithms. Their discrete-event, asynchronous nature makes it natural to model discontinuities in the system. They also allow zero-crossings to be detected and located with explicit algorithms, avoiding the need for iteration; this is especially important for stiff systems, where traditional time-stepping methods incur a heavy computational penalty from the requirement to solve implicitly for the next system state. Finally, QSS methods satisfy remarkable global stability and error bounds, described below, which are not satisfied by classical solution techniques. By their nature, QSS methods are neatly modeled by the DEVS formalism, a discrete-event model of computation, in contrast with traditional methods, which form discrete-time models of the continuous-time system. They have therefore been implemented in PowerDEVS, a simulation engine for such discrete-event systems. == Theoretical properties == In 2001, Ernesto Kofman proved a remarkable property of the quantized-state system simulation method: namely, that when the technique is used to solve a stable linear time-invariant (LTI) system, the global error is bounded by a constant that is proportional to the quantum, but (crucially) independent of the duration of the simulation.
More specifically, for a stable multidimensional LTI system with the state-transition matrix A {\displaystyle A} and input matrix B {\displaystyle B} , it was shown in [CK06] that the absolute error vector e → ( t ) {\displaystyle {\vec {e}}(t)} is bounded above by | e → ( t ) | ≤ | V | | ℜ ( Λ ) − 1 Λ | | V − 1 | Δ Q → + | V | | ℜ ( Λ ) − 1 V − 1 B | Δ u → {\displaystyle \left|{\vec {e}}(t)\right|\leq \left|V\right|\ \left|\Re \left(\Lambda \right)^{-1}\Lambda \right|\ \left|V^{-1}\right|\ \Delta {\vec {Q}}+\left|V\right|\ \left|\Re \left(\Lambda \right)^{-1}V^{-1}B\right|\ \Delta {\vec {u}}} where Δ Q → {\displaystyle \Delta {\vec {Q}}} is the vector of state quanta, Δ u → {\displaystyle \Delta {\vec {u}}} is the vector with quanta adopted in the input signals, V Λ V − 1 = A {\displaystyle V\Lambda V^{-1}=A} is the eigendecomposition or Jordan canonical form of A {\displaystyle A} , and | ⋅ | {\displaystyle \left|\,\cdot \,\right|} denotes the element-wise absolute value operator (not to be confused with the determinant or norm). It is worth noticing that this remarkable error bound comes at a price: the global error for a stable LTI system is also, in a sense, bounded below by the quantum itself, at least for the first-order QSS1 method. This is because, unless the approximation happens to coincide exactly with the correct value (an event which will almost surely not happen), it will simply continue oscillating around the equilibrium, as the state is always (by definition) guaranteed to change by exactly one quantum outside of the equilibrium. Avoiding this condition would require finding a reliable technique for dynamically lowering the quantum in a manner analogous to adaptive stepsize methods in traditional discrete time simulation algorithms. == First-order QSS method – QSS1 == Let an initial value problem be specified as follows. x ˙ ( t ) = f ( x ( t ) , t ) , x ( t 0 ) = x 0 . 
{\displaystyle {\dot {x}}(t)=f(x(t),t),\quad x(t_{0})=x_{0}.} The first-order QSS method, known as QSS1, approximates the above system by x ˙ ( t ) = f ( q ( t ) , t ) , q ( t 0 ) = x 0 . {\displaystyle {\dot {x}}(t)=f(q(t),t),\quad q(t_{0})=x_{0}.} where x {\displaystyle x} and q {\displaystyle q} are related by a hysteretic quantization function q ( t ) = { x ( t ) if | x ( t ) − q ( t − ) | ≥ Δ Q q ( t − ) otherwise {\displaystyle q(t)={\begin{cases}x(t)&{\text{if }}\left|x(t)-q(t^{-})\right|\geq \Delta Q\\q(t^{-})&{\text{otherwise}}\end{cases}}} where Δ Q {\displaystyle \Delta Q} is called a quantum. Notice that this quantization function is hysteretic because it has memory: not only is its output a function of the current state x ( t ) {\displaystyle x(t)} , but it also depends on its old value, q ( t − ) {\displaystyle q(t^{-})} . This formulation therefore approximates the state by a piecewise constant function, q ( t ) {\displaystyle q(t)} , that updates its value as soon as the state deviates from this approximation by one quantum. The multidimensional formulation of this system is almost the same as the single-dimensional formulation above: the k th {\displaystyle k^{\text{th}}} quantized state q k ( t ) {\displaystyle q_{k}(t)} is a function of its corresponding state, x k ( t ) {\displaystyle x_{k}(t)} , and the derivative of the state vector x → ( t ) {\displaystyle {\vec {x}}(t)} is a function of the entire quantized state vector, q → ( t ) {\displaystyle {\vec {q}}(t)} : x → ˙ ( t ) = f ( q → ( t ) , t ) {\displaystyle {\dot {\vec {x}}}(t)=f({\vec {q}}(t),t)} == High-order QSS methods – QSS2 and QSS3 == The second-order QSS method, QSS2, follows the same principle as QSS1, except that it defines q ( t ) {\displaystyle q(t)} as a piecewise linear approximation of the trajectory x ( t ) {\displaystyle x(t)} that updates its trajectory as soon as the two differ from each other by one quantum.
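For a scalar, autonomous right-hand side, QSS1 reduces to a short event loop. The sketch below is our own illustration (names are not from the article, and f is assumed to depend only on the quantized state):

```python
import math

def qss1(f, x0, t_end, dq):
    """Simulate x' = f(q) by QSS1; returns the event times and state values."""
    t, x, q = 0.0, x0, x0
    ts, xs = [t], [x]
    while t < t_end:
        slope = f(q)
        if slope == 0.0:
            break                              # equilibrium: no further events
        t += dq / abs(slope)                   # time for x to drift one quantum from q
        x = q + dq if slope > 0 else q - dq
        q = x                                  # re-quantize; q is piecewise constant
        ts.append(t)
        xs.append(x)
    return ts, xs

# Stable linear test system x' = -x with x(0) = 1 and quantum 0.01.
ts, xs = qss1(lambda q: -q, 1.0, 5.0, 0.01)
```

Consistent with Kofman's bound, the simulated trajectory stays within roughly one quantum of the exact solution e^(−t), and the step size adapts automatically: events bunch up where the solution changes quickly and spread out near the equilibrium.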
The pattern continues for higher-order approximations, which define the quantized state q ( t ) {\displaystyle q(t)} as successively higher-order polynomial approximations of the system's state. It is important to note that, while in principle a QSS method of arbitrary order can be used to model a continuous-time system, it is seldom desirable to use methods of order higher than four, as the Abel–Ruffini theorem implies that the time of the next quantization, t {\displaystyle t} , cannot (in general) be explicitly solved for algebraically when the polynomial approximation is of degree greater than four, and hence must be approximated iteratively using a root-finding algorithm. In practice, QSS2 or QSS3 proves sufficient for many problems and the use of higher-order methods results in little, if any, additional benefit. == Software implementation == QSS methods can be implemented as discrete event systems and simulated in any DEVS simulator. QSS methods constitute the main numerical solver of the PowerDEVS [BK11] software. They have also been implemented as a stand-alone version. == References == [CK06] Francois E. Cellier & Ernesto Kofman (2006). Continuous System Simulation (first ed.). Springer. ISBN 978-0-387-26102-7. [BK11] Bergero, Federico & Kofman, Ernesto (2011). "PowerDEVS: a tool for hybrid system modeling and real-time simulation". Society for Computer Simulation International, San Diego. == External links == Stand-alone implementation of QSS methods PowerDEVS at SourceForge
Wikipedia/Quantized_state_systems_methods
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced. By doing so, one makes an assumption of the unknown (for example, a driver may extrapolate road conditions beyond what is currently visible and these extrapolations may be correct or incorrect). The extrapolation method can be applied in the interior reconstruction problem. == Method == A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods. Crucial questions are, for example, if the data can be assumed to be continuous, smooth, possibly periodic, etc. === Linear === Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x ∗ {\displaystyle x_{*}} to be extrapolated are ( x k − 1 , y k − 1 ) {\displaystyle (x_{k-1},y_{k-1})} and ( x k , y k ) {\displaystyle (x_{k},y_{k})} , linear extrapolation gives the function: y ( x ∗ ) = y k − 1 + x ∗ − x k − 1 x k − x k − 1 ( y k − y k − 1 ) . 
{\displaystyle y(x_{*})=y_{k-1}+{\frac {x_{*}-x_{k-1}}{x_{k}-x_{k-1}}}(y_{k}-y_{k-1}).} (which is identical to linear interpolation if x k − 1 < x ∗ < x k {\displaystyle x_{k-1}<x_{*}<x_{k}} ). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction. === Polynomial === A polynomial curve can be created through the entire known data or just near the end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. === Conic === A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer. === French curve === French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors.
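Returning to the linear case, the two-point formula given earlier is straightforward to evaluate directly; the following Python sketch is illustrative (the function name is an assumption):

```python
# The two-point linear extrapolation formula, as a Python sketch
# (the function name is an illustrative assumption).

def linear_extrapolate(x_prev, y_prev, x_k, y_k, x_star):
    """Extend the line through (x_{k-1}, y_{k-1}) and (x_k, y_k) to x_star."""
    return y_prev + (x_star - x_prev) / (x_k - x_prev) * (y_k - y_prev)

# The line through (1, 2) and (2, 4) has slope 2, so it predicts 6 at x = 3.
print(linear_extrapolate(1.0, 2.0, 2.0, 4.0, 3.0))  # 6.0
```

For x_star between the two data points, the same expression performs linear interpolation, as noted above.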
This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and variant CJD in the UK for a number of years. Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies. === Geometric extrapolation with error prediction === A geometric extrapolation can be created from three points of a sequence together with its "moment" or "index"; this type of extrapolation predicts the next term exactly for a large percentage of sequences in a known series database (OEIS). Example of extrapolation with error prediction: sequence = [ 1 , 2 , 3 , 5 ] {\displaystyle {\text{sequence}}=[1,2,3,5]} f 1 ( x , y ) = x y {\displaystyle {f_{1}(x,y)={\frac {x}{y}}}} d 1 = f 1 ( 3 , 2 ) {\displaystyle d_{1}=f_{1}(3,2)} d 2 = f 1 ( 5 , 3 ) {\displaystyle {d_{2}=f_{1}(5,3)}} m = sequence ( 5 ) {\displaystyle m={\text{sequence}}(5)} n = sequence ( 3 ) {\displaystyle n={\text{sequence}}(3)} f ( m , n , d 1 , d 2 ) = round ( ( n ⋅ d 1 − m ) + ( m ⋅ d 2 ) ) = round ( ( 3 × 1.5 − 5 ) + ( 5 × 1.66 ) ) = 8 {\displaystyle {\begin{aligned}{\text{f}}(m,n,d_{1},d_{2})&={\text{round}}\left((n\cdot d_{1}-m)+(m\cdot d_{2})\right)\\&={\text{round}}\left((3\times 1.5-5)+(5\times 1.66)\right)=8\end{aligned}}} == Quality == Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated. In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces. Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data near x = 0, we may estimate that the function behaves as sin(x) ~ x.
In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily away from the x-axis while sin(x) remains in the interval [−1, 1]. That is, the error increases without bound. Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation. This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors. == In the complex plane == In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable z ^ = 1 / z {\displaystyle {\hat {z}}=1/z} . This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform, however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data. Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region.
Again, analytic continuation can be thwarted by function features that were not evident from the initial data. Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants. == Extrapolation arguments == Extrapolation arguments are informal and unquantified arguments which assert that something is probably true beyond the range of values for which it is known to be true. For example, we believe in the reality of what we see through magnifying glasses because it agrees with what we see with the naked eye but extends beyond it; we believe in what we see through light microscopes because it agrees with what we see through magnifying glasses but extends beyond it; and similarly for electron microscopes. Such arguments are widely used in biology in extrapolating from animal studies to humans and from pilot studies to a broader population. Like slippery slope arguments, extrapolation arguments may be strong or weak depending on such factors as how far the extrapolation goes beyond the known range. == See also == Forecasting Minimum polynomial extrapolation Multigrid method Overfitting Prediction interval Regression analysis Richardson extrapolation Static analysis Trend estimation Extrapolation domain analysis Dead reckoning Interior reconstruction Extreme value theory Interpolation == Notes == == References == Extrapolation Methods. Theory and Practice by C. Brezinski and M. Redivo Zaglia, North-Holland, 1991. Avram Sidi: "Practical Extrapolation Methods: Theory and Applications", Cambridge University Press, ISBN 0-521-66159-5 (2003). Claude Brezinski and Michela Redivo-Zaglia : "Extrapolation and Rational Approximation", Springer Nature, Switzerland, ISBN 9783030584177, (2020).
Wikipedia/Extrapolation_method
In numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type applications, and the modified midpoint method, to obtain numerical solutions to ordinary differential equations (ODEs) with high accuracy and comparatively little computational effort. It is named after Roland Bulirsch and Josef Stoer. It is sometimes called the Gragg–Bulirsch–Stoer (GBS) algorithm because of the importance of a result about the error function of the modified midpoint method, due to William B. Gragg. == Underlying ideas == The idea of Richardson extrapolation is to consider a numerical calculation whose accuracy depends on the used stepsize h as an (unknown) analytic function of the stepsize h, performing the numerical calculation with various values of h, fitting a (chosen) analytic function to the resulting points, and then evaluating the fitting function for h = 0, thus trying to approximate the result of the calculation with infinitely fine steps. Bulirsch and Stoer recognized that using rational functions as fitting functions for Richardson extrapolation in numerical integration is superior to using polynomial functions because rational functions are able to approximate functions with poles rather well (compared to polynomial functions), given that there are enough higher-power terms in the denominator to account for nearby poles. While a polynomial interpolation or extrapolation only yields good results if the nearest pole is rather far outside a circle around the known data points in the complex plane, rational function interpolation or extrapolation can have remarkable accuracy even in the presence of nearby poles. The modified midpoint method by itself is a second-order method, and therefore generally inferior to fourth-order methods like the fourth-order Runge–Kutta method. 
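The combination of these ideas can be illustrated with a small Python sketch. The function names are illustrative assumptions, and polynomial (Neville) extrapolation in h² is used in place of the original rational extrapolation, a simplification that Hairer, Nørsett & Wanner (1993) note loses essentially nothing in practice: the modified midpoint method crosses the interval with increasing numbers of substeps, and the results are extrapolated to stepsize zero.

```python
# A sketch of the Bulirsch-Stoer idea: cross [t0, t0 + H] with the modified
# midpoint method for n = 2, 4, 6, ..., then extrapolate to h -> 0.
# Function names are illustrative assumptions; polynomial (Neville)
# extrapolation in h^2 replaces the original rational extrapolation.

def modified_midpoint(f, t0, y0, H, n):
    """One crossing of [t0, t0 + H] with n substeps of the modified midpoint method."""
    h = H / n
    z_prev, z = y0, y0 + h * f(t0, y0)          # z_0 and z_1
    for m in range(1, n):
        z_prev, z = z, z_prev + 2 * h * f(t0 + m * h, z)
    return 0.5 * (z + z_prev + h * f(t0 + H, z))

def bulirsch_stoer_step(f, t0, y0, H, levels=4):
    """Extrapolate modified midpoint results to h -> 0 (the error is even in h)."""
    ns = [2 * (i + 1) for i in range(levels)]   # n = 2, 4, 6, ...
    hs2 = [(H / n) ** 2 for n in ns]            # extrapolate in powers of h^2
    T = [modified_midpoint(f, t0, y0, H, n) for n in ns]
    for k in range(1, levels):                  # Neville recursion, in place
        for i in range(levels - 1, k - 1, -1):
            T[i] += (T[i] - T[i - 1]) / (hs2[i - k] / hs2[i] - 1)
    return T[-1]

# One Bulirsch-Stoer step for y' = y, y(0) = 1 over H = 1; the exact value is e.
approx = bulirsch_stoer_step(lambda t, y: y, 0.0, 1.0, 1.0)
```

Even with only four extrapolation levels, the result agrees with e to several digits, reflecting the gain of two orders per level described below.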
However, it has the advantage of requiring only one derivative evaluation per substep (asymptotically for a large number of substeps), and, in addition, as discovered by Gragg, the error of a modified midpoint step of size H, consisting of n substeps of size h = H/n each, and expressed as a power series in h, contains only even powers of h. This makes the modified midpoint method extremely useful to the Bulirsch–Stoer method as the accuracy increases two orders at a time when the results of separate attempts to cross the interval H with increasing numbers of substeps are combined. Hairer, Nørsett & Wanner (1993, p. 228), in their discussion of the method, say that rational extrapolation in this case is nearly never an improvement over polynomial interpolation (Deuflhard 1983). Furthermore, the modified midpoint method is a modification of the regular midpoint method to make it more stable, but because of the extrapolation this does not really matter (Shampine & Baca 1983). == References == Deuflhard, Peter (1983), "Order and stepsize control in extrapolation methods", Numerische Mathematik, 41 (3): 399–422, doi:10.1007/BF01418332, ISSN 0029-599X, S2CID 121911947. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.3. Richardson Extrapolation and the Bulirsch-Stoer Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17. Shampine, Lawrence F.; Baca, Lorraine S. (1983), "Smoothing the extrapolated midpoint rule", Numerische Mathematik, 41 (2): 165–175, doi:10.1007/BF01390211, ISSN 0029-599X, S2CID 121097742. 
== External links == ODEX.F, implementation of the Bulirsch–Stoer algorithm by Ernst Hairer and Gerhard Wanner (for other routines and license conditions, see their Fortran and Matlab Codes page). BOOST library, implementation in C++. Apache Commons Math, implementation in Java.
Wikipedia/Bulirsch–Stoer_algorithm
Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used. == Definitions == Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form y ′ = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle y'=f(t,y),\quad y(t_{0})=y_{0}.} The result is approximations for the value of y ( t ) {\displaystyle y(t)} at discrete times t i {\displaystyle t_{i}} : y i ≈ y ( t i ) where t i = t 0 + i h , {\displaystyle y_{i}\approx y(t_{i})\quad {\text{where}}\quad t_{i}=t_{0}+ih,} where h {\displaystyle h} is the time step (sometimes referred to as Δ t {\displaystyle \Delta t} ) and i {\displaystyle i} is an integer. Multistep methods use information from the previous s {\displaystyle s} steps to calculate the next value. In particular, a linear multistep method uses a linear combination of y i {\displaystyle y_{i}} and f ( t i , y i ) {\displaystyle f(t_{i},y_{i})} to calculate the value of y {\displaystyle y} for the desired current step. 
Thus, a linear multistep method is a method of the form y n + s + a s − 1 ⋅ y n + s − 1 + a s − 2 ⋅ y n + s − 2 + ⋯ + a 0 ⋅ y n = h ⋅ ( b s ⋅ f ( t n + s , y n + s ) + b s − 1 ⋅ f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 ⋅ f ( t n , y n ) ) ⇔ ∑ j = 0 s a j y n + j = h ∑ j = 0 s b j f ( t n + j , y n + j ) , {\displaystyle {\begin{aligned}&y_{n+s}+a_{s-1}\cdot y_{n+s-1}+a_{s-2}\cdot y_{n+s-2}+\cdots +a_{0}\cdot y_{n}\\&\qquad {}=h\cdot \left(b_{s}\cdot f(t_{n+s},y_{n+s})+b_{s-1}\cdot f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}\cdot f(t_{n},y_{n})\right)\\&\Leftrightarrow \sum _{j=0}^{s}a_{j}y_{n+j}=h\sum _{j=0}^{s}b_{j}f(t_{n+j},y_{n+j}),\end{aligned}}} with a s = 1 {\displaystyle a_{s}=1} . The coefficients a 0 , … , a s − 1 {\displaystyle a_{0},\dotsc ,a_{s-1}} and b 0 , … , b s {\displaystyle b_{0},\dotsc ,b_{s}} determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method. One can distinguish between explicit and implicit methods. If b s = 0 {\displaystyle b_{s}=0} , then the method is called "explicit", since the formula can directly compute y n + s {\displaystyle y_{n+s}} . If b s ≠ 0 {\displaystyle b_{s}\neq 0} then the method is called "implicit", since the value of y n + s {\displaystyle y_{n+s}} depends on the value of f ( t n + s , y n + s ) {\displaystyle f(t_{n+s},y_{n+s})} , and the equation must be solved for y n + s {\displaystyle y_{n+s}} . Iterative methods such as Newton's method are often used to solve the implicit formula. Sometimes an explicit multistep method is used to "predict" the value of y n + s {\displaystyle y_{n+s}} . That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method. == Examples == Consider for an example the problem y ′ = f ( t , y ) = y , y ( 0 ) = 1. 
{\displaystyle y'=f(t,y)=y,\quad y(0)=1.} The exact solution is y ( t ) = e t {\displaystyle y(t)=e^{t}} . === One-step Euler === A simple numerical method is Euler's method: y n + 1 = y n + h f ( t n , y n ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).} Euler's method can be viewed as an explicit multistep method for the degenerate case of one step. This method, applied with step size h = 1 2 {\displaystyle h={\tfrac {1}{2}}} on the problem y ′ = y {\displaystyle y'=y} , gives the following results: y 1 = y 0 + h f ( t 0 , y 0 ) = 1 + 1 2 ⋅ 1 = 1.5 , y 2 = y 1 + h f ( t 1 , y 1 ) = 1.5 + 1 2 ⋅ 1.5 = 2.25 , y 3 = y 2 + h f ( t 2 , y 2 ) = 2.25 + 1 2 ⋅ 2.25 = 3.375 , y 4 = y 3 + h f ( t 3 , y 3 ) = 3.375 + 1 2 ⋅ 3.375 = 5.0625. {\displaystyle {\begin{aligned}y_{1}&=y_{0}+hf(t_{0},y_{0})=1+{\tfrac {1}{2}}\cdot 1=1.5,\\y_{2}&=y_{1}+hf(t_{1},y_{1})=1.5+{\tfrac {1}{2}}\cdot 1.5=2.25,\\y_{3}&=y_{2}+hf(t_{2},y_{2})=2.25+{\tfrac {1}{2}}\cdot 2.25=3.375,\\y_{4}&=y_{3}+hf(t_{3},y_{3})=3.375+{\tfrac {1}{2}}\cdot 3.375=5.0625.\end{aligned}}} === Two-step Adams–Bashforth === Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method y n + 2 = y n + 1 + 3 2 h f ( t n + 1 , y n + 1 ) − 1 2 h f ( t n , y n ) . {\displaystyle y_{n+2}=y_{n+1}+{\tfrac {3}{2}}hf(t_{n+1},y_{n+1})-{\tfrac {1}{2}}hf(t_{n},y_{n}).} This method needs two values, y n + 1 {\displaystyle y_{n+1}} and y n {\displaystyle y_{n}} , to compute the next value, y n + 2 {\displaystyle y_{n+2}} . However, the initial value problem provides only one value, y 0 = 1 {\displaystyle y_{0}=1} . One possibility to resolve this issue is to use the y 1 {\displaystyle y_{1}} computed by Euler's method as the second value. 
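The Euler iterates above can be reproduced in a few lines of Python (the variable names are illustrative); since f(t, y) = y, each step simply multiplies the current value by 1 + h:

```python
# Reproduce the Euler table for y' = y, y(0) = 1, with step size h = 1/2.
h, y = 0.5, 1.0
iterates = []
for n in range(4):
    y = y + h * y              # Euler update; f(t, y) = y for this problem
    iterates.append(y)
print(iterates)  # [1.5, 2.25, 3.375, 5.0625]
```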
With this choice, the Adams–Bashforth method yields (rounded to four digits): y 2 = y 1 + 3 2 h f ( t 1 , y 1 ) − 1 2 h f ( t 0 , y 0 ) = 1.5 + 3 2 ⋅ 1 2 ⋅ 1.5 − 1 2 ⋅ 1 2 ⋅ 1 = 2.375 , y 3 = y 2 + 3 2 h f ( t 2 , y 2 ) − 1 2 h f ( t 1 , y 1 ) = 2.375 + 3 2 ⋅ 1 2 ⋅ 2.375 − 1 2 ⋅ 1 2 ⋅ 1.5 = 3.7812 , y 4 = y 3 + 3 2 h f ( t 3 , y 3 ) − 1 2 h f ( t 2 , y 2 ) = 3.7812 + 3 2 ⋅ 1 2 ⋅ 3.7812 − 1 2 ⋅ 1 2 ⋅ 2.375 = 6.0234. {\displaystyle {\begin{aligned}y_{2}&=y_{1}+{\tfrac {3}{2}}hf(t_{1},y_{1})-{\tfrac {1}{2}}hf(t_{0},y_{0})=1.5+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1=2.375,\\y_{3}&=y_{2}+{\tfrac {3}{2}}hf(t_{2},y_{2})-{\tfrac {1}{2}}hf(t_{1},y_{1})=2.375+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5=3.7812,\\y_{4}&=y_{3}+{\tfrac {3}{2}}hf(t_{3},y_{3})-{\tfrac {1}{2}}hf(t_{2},y_{2})=3.7812+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 3.7812-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375=6.0234.\end{aligned}}} The exact solution at t = t 4 = 2 {\displaystyle t=t_{4}=2} is e 2 = 7.3891 … {\displaystyle e^{2}=7.3891\ldots } , so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough. == Families of multistep methods == Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs). === Adams–Bashforth methods === The Adams–Bashforth methods are explicit methods. The coefficients are a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} , while the b j {\displaystyle b_{j}} are chosen such that the methods have order s (this determines the methods uniquely). The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 
103): y n + 1 = y n + h f ( t n , y n ) , (This is the Euler method) y n + 2 = y n + 1 + h ( 3 2 f ( t n + 1 , y n + 1 ) − 1 2 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 16 12 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 55 24 f ( t n + 3 , y n + 3 ) − 59 24 f ( t n + 2 , y n + 2 ) + 37 24 f ( t n + 1 , y n + 1 ) − 9 24 f ( t n , y n ) ) , y n + 5 = y n + 4 + h ( 1901 720 f ( t n + 4 , y n + 4 ) − 2774 720 f ( t n + 3 , y n + 3 ) + 2616 720 f ( t n + 2 , y n + 2 ) − 1274 720 f ( t n + 1 , y n + 1 ) + 251 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+hf(t_{n},y_{n}),\qquad {\text{(This is the Euler method)}}\\y_{n+2}&=y_{n+1}+h\left({\frac {3}{2}}f(t_{n+1},y_{n+1})-{\frac {1}{2}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {23}{12}}f(t_{n+2},y_{n+2})-{\frac {16}{12}}f(t_{n+1},y_{n+1})+{\frac {5}{12}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {55}{24}}f(t_{n+3},y_{n+3})-{\frac {59}{24}}f(t_{n+2},y_{n+2})+{\frac {37}{24}}f(t_{n+1},y_{n+1})-{\frac {9}{24}}f(t_{n},y_{n})\right),\\y_{n+5}&=y_{n+4}+h\left({\frac {1901}{720}}f(t_{n+4},y_{n+4})-{\frac {2774}{720}}f(t_{n+3},y_{n+3})+{\frac {2616}{720}}f(t_{n+2},y_{n+2})-{\frac {1274}{720}}f(t_{n+1},y_{n+1})+{\frac {251}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The coefficients b j {\displaystyle b_{j}} can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 {\displaystyle s-1} such that p ( t n + i ) = f ( t n + i , y n + i ) , for i = 0 , … , s − 1. {\displaystyle p(t_{n+i})=f(t_{n+i},y_{n+i}),\qquad {\text{for }}i=0,\ldots ,s-1.} The Lagrange formula for polynomial interpolation yields p ( t ) = ∑ j = 0 s − 1 ( − 1 ) s − j − 1 f ( t n + j , y n + j ) j ! ( s − j − 1 ) ! h s − 1 ∏ i = 0 i ≠ j s − 1 ( t − t n + i ) . 
{\displaystyle p(t)=\sum _{j=0}^{s-1}{\frac {(-1)^{s-j-1}f(t_{n+j},y_{n+j})}{j!(s-j-1)!h^{s-1}}}\prod _{i=0 \atop i\neq j}^{s-1}(t-t_{n+i}).} The polynomial p is locally a good approximation of the right-hand side of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} that is to be solved, so consider the equation y ′ = p ( t ) {\displaystyle y'=p(t)} instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking y n + s = y n + s − 1 + ∫ t n + s − 1 t n + s p ( t ) d t . {\displaystyle y_{n+s}=y_{n+s-1}+\int _{t_{n+s-1}}^{t_{n+s}}p(t)\,\mathrm {d} t.} The Adams–Bashforth method arises when the formula for p is substituted. The coefficients b j {\displaystyle b_{j}} turn out to be given by b s − j − 1 = ( − 1 ) j j ! ( s − j − 1 ) ! ∫ 0 1 ∏ i = 0 i ≠ j s − 1 ( u + i ) d u , for j = 0 , … , s − 1. {\displaystyle b_{s-j-1}={\frac {(-1)^{j}}{j!(s-j-1)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s-1}(u+i)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s-1.} Replacing f ( t , y ) {\displaystyle f(t,y)} by its interpolant p incurs an error of order hs, and it follows that the s-step Adams–Bashforth method has indeed order s (Iserles 1996, §2.1) The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977). === Adams–Moulton methods === The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} . Again the b coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. 
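Before continuing, the integral formula for the Adams–Bashforth coefficients given above can be checked numerically. The following Python sketch (the helper name is an illustrative assumption) expands the product polynomial, integrates it exactly over [0, 1] in rational arithmetic, and recovers the tabulated coefficients:

```python
from fractions import Fraction
from math import factorial

# Exact evaluation of b_{s-j-1} = (-1)^j / (j! (s-j-1)!) * int_0^1 prod_{i != j} (u + i) du.
# The function name is an illustrative assumption.

def adams_bashforth_coeffs(s):
    """Return [b_0, ..., b_{s-1}] for the s-step Adams-Bashforth method."""
    b = [Fraction(0)] * s
    for j in range(s):
        poly = [Fraction(1)]                 # coefficients of prod_{i != j} (u + i)
        for i in range(s):
            if i == j:
                continue
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k] += i * c              # the "+ i" part of (u + i)
                new[k + 1] += c              # the "u" part of (u + i)
            poly = new
        integral = sum(c / (k + 1) for k, c in enumerate(poly))   # over [0, 1]
        b[s - 1 - j] = (-1) ** j * integral / (factorial(j) * factorial(s - j - 1))
    return b

# The three-step method recovers the coefficients 5/12, -16/12, 23/12 listed above.
print(adams_bashforth_coeffs(3))  # [Fraction(5, 12), Fraction(-4, 3), Fraction(23, 12)]
```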
By removing the restriction that b s = 0 {\displaystyle b_{s}=0} , an s-step Adams–Moulton method can reach order s + 1 {\displaystyle s+1} , while an s-step Adams–Bashforth methods has only order s. The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000) listed, where the first two methods are the backward Euler method and the trapezoidal rule respectively: y n = y n − 1 + h f ( t n , y n ) , y n + 1 = y n + 1 2 h ( f ( t n + 1 , y n + 1 ) + f ( t n , y n ) ) , y n + 2 = y n + 1 + h ( 5 12 f ( t n + 2 , y n + 2 ) + 8 12 f ( t n + 1 , y n + 1 ) − 1 12 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 9 24 f ( t n + 3 , y n + 3 ) + 19 24 f ( t n + 2 , y n + 2 ) − 5 24 f ( t n + 1 , y n + 1 ) + 1 24 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 251 720 f ( t n + 4 , y n + 4 ) + 646 720 f ( t n + 3 , y n + 3 ) − 264 720 f ( t n + 2 , y n + 2 ) + 106 720 f ( t n + 1 , y n + 1 ) − 19 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n}&=y_{n-1}+hf(t_{n},y_{n}),\\y_{n+1}&=y_{n}+{\frac {1}{2}}h\left(f(t_{n+1},y_{n+1})+f(t_{n},y_{n})\right),\\y_{n+2}&=y_{n+1}+h\left({\frac {5}{12}}f(t_{n+2},y_{n+2})+{\frac {8}{12}}f(t_{n+1},y_{n+1})-{\frac {1}{12}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {9}{24}}f(t_{n+3},y_{n+3})+{\frac {19}{24}}f(t_{n+2},y_{n+2})-{\frac {5}{24}}f(t_{n+1},y_{n+1})+{\frac {1}{24}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {251}{720}}f(t_{n+4},y_{n+4})+{\frac {646}{720}}f(t_{n+3},y_{n+3})-{\frac {264}{720}}f(t_{n+2},y_{n+2})+{\frac {106}{720}}f(t_{n+1},y_{n+1})-{\frac {19}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points t n − 1 , … , t n − s {\displaystyle t_{n-1},\dots ,t_{n-s}} , as above, but also t n {\displaystyle t_{n}} . The coefficients are given by b s − j = ( − 1 ) j j ! ( s − j ) ! 
∫ 0 1 ∏ i = 0 i ≠ j s ( u + i − 1 ) d u , for j = 0 , … , s . {\displaystyle b_{s-j}={\frac {(-1)^{j}}{j!(s-j)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s}(u+i-1)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s.} The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1). === Backward differentiation formulas (BDF) === The BDF methods are implicit methods with b s − 1 = ⋯ = b 0 = 0 {\displaystyle b_{s-1}=\cdots =b_{0}=0} and the other coefficients chosen such that the method attains order s (the maximum possible). These methods are especially used for the solution of stiff differential equations. == Analysis == The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability. === Consistency and order === The first question is whether the method is consistent: is the difference equation a s y n + s + a s − 1 y n + s − 1 + a s − 2 y n + s − 2 + ⋯ + a 0 y n = h ( b s f ( t n + s , y n + s ) + b s − 1 f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 f ( t n , y n ) ) , {\displaystyle {\begin{aligned}&a_{s}y_{n+s}+a_{s-1}y_{n+s-1}+a_{s-2}y_{n+s-2}+\cdots +a_{0}y_{n}\\&\qquad {}=h{\bigl (}b_{s}f(t_{n+s},y_{n+s})+b_{s-1}f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}f(t_{n},y_{n}){\bigr )},\end{aligned}}} a good approximation of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} ? 
More precisely, a multistep method is consistent if the local truncation error goes to zero faster than the step size h as h goes to zero, where the local truncation error is defined to be the difference between the result y n + s {\displaystyle y_{n+s}} of the method, assuming that all the previous values y n + s − 1 , … , y n {\displaystyle y_{n+s-1},\ldots ,y_{n}} are exact, and the exact solution of the equation at time t n + s {\displaystyle t_{n+s}} . A computation using Taylor series shows that a linear multistep method is consistent if and only if ∑ k = 0 s − 1 a k = − 1 and ∑ k = 0 s b k = s + ∑ k = 0 s − 1 k a k . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad \sum _{k=0}^{s}b_{k}=s+\sum _{k=0}^{s-1}ka_{k}.} All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2). If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error is of order O ( h p + 1 ) {\displaystyle O(h^{p+1})} as h goes to zero. This is equivalent to the following condition on the coefficients of the methods: ∑ k = 0 s − 1 a k = − 1 and q ∑ k = 0 s k q − 1 b k = s q + ∑ k = 0 s − 1 k q a k for q = 1 , … , p . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad q\sum _{k=0}^{s}k^{q-1}b_{k}=s^{q}+\sum _{k=0}^{s-1}k^{q}a_{k}{\text{ for }}q=1,\ldots ,p.} The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 {\displaystyle s+1} (Hairer, Nørsett & Wanner 1993, §III.2). These conditions are often formulated using the characteristic polynomials ρ ( z ) = z s + ∑ k = 0 s − 1 a k z k and σ ( z ) = ∑ k = 0 s b k z k . 
{\displaystyle \rho (z)=z^{s}+\sum _{k=0}^{s-1}a_{k}z^{k}\quad {\text{and}}\quad \sigma (z)=\sum _{k=0}^{s}b_{k}z^{k}.} In terms of these polynomials, the above condition for the method to have order p becomes ρ ( e h ) − h σ ( e h ) = O ( h p + 1 ) as h → 0. {\displaystyle \rho (e^{h})-h\sigma (e^{h})=O(h^{p+1})\quad {\text{as }}h\to 0.} In particular, the method is consistent if it has order at least one, which is the case if ρ ( 1 ) = 0 {\displaystyle \rho (1)=0} and ρ ′ ( 1 ) = σ ( 1 ) {\displaystyle \rho '(1)=\sigma (1)} . === Stability and convergence === The numerical solution of a one-step method depends on the initial condition y 0 {\displaystyle y_{0}} , but the numerical solution of an s-step method depends on all s starting values, y 0 , y 1 , … , y s − 1 {\displaystyle y_{0},y_{1},\ldots ,y_{s-1}} . It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is zero-stable for a certain differential equation on a given time interval, if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than Kε for some value of K which does not depend on the step size h. This is called "zero-stability" because it is enough to check the condition for the differential equation y ′ = 0 {\displaystyle y'=0} (Süli & Mayers 2003, p. 332). If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the root condition is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied (Süli & Mayers 2003, p. 335).
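The root condition just stated can be tested numerically. A sketch using NumPy (the tolerance and the pairwise multiplicity test are ad-hoc choices, not from the literature):

```python
import numpy as np

def satisfies_root_condition(rho, tol=1e-6):
    """rho: coefficients [a_s, ..., a_0] of the first characteristic polynomial.
    True iff every root has modulus <= 1 and each root of modulus 1 is simple."""
    roots = np.roots(rho)
    for i, z in enumerate(roots):
        if abs(z) > 1 + tol:
            return False
        if abs(abs(z) - 1) <= tol:
            # a root on the unit circle must not be (numerically) repeated
            if any(abs(z - w) <= tol for j, w in enumerate(roots) if j != i):
                return False
    return True
```

For example, ρ(z) = z³ − z² (the three-step Adams–Bashforth method) satisfies the condition, while ρ(z) = (z − 1)², with a double root on the unit circle, does not.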
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values y 1 , … , y s − 1 {\displaystyle y_{1},\ldots ,y_{s-1}} all converge to the initial value y 0 {\displaystyle y_{0}} as h → 0 {\displaystyle h\to 0} . Then, the numerical solution converges to the exact solution as h → 0 {\displaystyle h\to 0} if and only if the method is zero-stable. This result is known as the Dahlquist equivalence theorem, named after Germund Dahlquist; this theorem is similar in spirit to the Lax equivalence theorem for finite difference methods. Furthermore, if the method has order p, then the global error (the difference between the numerical solution and the exact solution at a fixed time) is O ( h p ) {\displaystyle O(h^{p})} (Süli & Mayers 2003, p. 340). Furthermore, if the method is convergent, the method is said to be strongly stable if z = 1 {\displaystyle z=1} is the only root of modulus 1. If it is convergent and all roots of modulus 1 are not repeated, but there is more than one such root, it is said to be relatively stable. Note that 1 must be a root for the method to be convergent; thus convergent methods are always one of these two. To assess the performance of linear multistep methods on stiff equations, consider the linear test equation y' = λy. A multistep method applied to this differential equation with step size h yields a linear recurrence relation with characteristic polynomial π ( z ; h λ ) = ( 1 − h λ β s ) z s + ∑ k = 0 s − 1 ( α k − h λ β k ) z k = ρ ( z ) − h λ σ ( z ) . {\displaystyle \pi (z;h\lambda )=(1-h\lambda \beta _{s})z^{s}+\sum _{k=0}^{s-1}(\alpha _{k}-h\lambda \beta _{k})z^{k}=\rho (z)-h\lambda \sigma (z).} This polynomial is called the stability polynomial of the multistep method. 
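The roots of this stability polynomial govern absolute stability (defined next). A sketch computing them with NumPy, using the forward Euler method (ρ(z) = z − 1, σ(z) = 1) as the test case:

```python
import numpy as np

def absolutely_stable(rho, sigma, hlam):
    """True iff all roots of pi(z) = rho(z) - hlam*sigma(z) have modulus < 1.
    rho and sigma are highest-degree-first coefficient lists of equal length."""
    pi = np.array(rho, dtype=complex) - hlam * np.array(sigma, dtype=complex)
    return all(abs(z) < 1 for z in np.roots(pi))

# Forward Euler: rho(z) = z - 1, sigma(z) = 1, so pi(z) has the single root 1 + h*lam.
rho_euler, sigma_euler = [1, -1], [0, 1]
```

For Euler the single root is 1 + hλ, so hλ = −1 lies inside the region of absolute stability and hλ = −3 does not.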
If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be absolutely stable for that value of hλ. The method is said to be A-stable if it is absolutely stable for all hλ with negative real part. The region of absolute stability is the set of all hλ for which the multistep method is absolutely stable (Süli & Mayers 2003, pp. 347 & 348). For more details, see the section on stiff equations and multistep methods. === Example === Consider the Adams–Bashforth three-step method y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 4 3 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) . {\displaystyle y_{n+3}=y_{n+2}+h\left({23 \over 12}f(t_{n+2},y_{n+2})-{4 \over 3}f(t_{n+1},y_{n+1})+{5 \over 12}f(t_{n},y_{n})\right).} One characteristic polynomial is thus ρ ( z ) = z 3 − z 2 = z 2 ( z − 1 ) {\displaystyle \rho (z)=z^{3}-z^{2}=z^{2}(z-1)} which has roots z = 0 , 1 {\displaystyle z=0,1} , and the conditions above are satisfied. As z = 1 {\displaystyle z=1} is the only root of modulus 1, the method is strongly stable. The other characteristic polynomial is σ ( z ) = 23 12 z 2 − 4 3 z + 5 12 {\displaystyle \sigma (z)={\frac {23}{12}}z^{2}-{\frac {4}{3}}z+{\frac {5}{12}}} == First and second Dahlquist barriers == These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier was proved in Dahlquist (1956) and the second in Dahlquist (1963). === First Dahlquist barrier === The first Dahlquist barrier states that a zero-stable and linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5). 
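The order of a method such as the Adams–Bashforth example above can be verified mechanically from the order conditions, in exact arithmetic. A sketch (coefficient layout as in the equations earlier, with a_s = 1 implied; the names are ours):

```python
from fractions import Fraction as F

def method_order(a, b):
    """Largest p for which the order conditions hold, in exact arithmetic.
    a = [a_0, ..., a_{s-1}] (with a_s = 1 implied), b = [b_0, ..., b_s]."""
    s = len(b) - 1
    if sum(a) != -1:                      # consistency requires sum a_k = -1
        return 0
    p, q = 0, 1
    while q <= 2 * s + 2:                 # the order can never exceed 2s anyway
        lhs = q * sum(F(k) ** (q - 1) * bk for k, bk in enumerate(b))
        rhs = F(s) ** q + sum(F(k) ** q * ak for k, ak in enumerate(a))
        if lhs != rhs:
            break
        p, q = q, q + 1
    return p

# Three-step Adams-Bashforth: y_{n+3} = y_{n+2} + h(23/12 f_{n+2} - 4/3 f_{n+1} + 5/12 f_n)
a_ab3 = [F(0), F(0), F(-1)]
b_ab3 = [F(5, 12), F(-4, 3), F(23, 12), F(0)]
```

This confirms order 3 for the three-step Adams–Bashforth method and order 1 for forward Euler (a = [−1], b = [1, 0]), consistent with the first Dahlquist barrier's bound of q for explicit q-step methods.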
=== Second Dahlquist barrier === The second Dahlquist barrier states that no explicit linear multistep methods are A-stable. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant (Dahlquist 1963, Thm 2.1 and 2.2). == See also == Digital energy gain == References == Bashforth, Francis (1883), An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams, Cambridge. Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, John Wiley, ISBN 978-0-471-96758-3. Dahlquist, Germund (1956), "Convergence and stability in the numerical integration of ordinary differential equations", Mathematica Scandinavica, 4: 33–53, doi:10.7146/math.scand.a-10454. Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, ISSN 0006-3835, S2CID 120241743. Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag, ISBN 978-0-387-90277-7. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems (2nd ed.), Berlin: Springer Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2. Milne, W. E.
(1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly, 33 (9), Mathematical Association of America: 455–460, doi:10.2307/2299609, JSTOR 2299609. Moulton, Forest R. (1926), New methods in exterior ballistics, University of Chicago Press. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000), Matematica Numerica, Springer Verlag, ISBN 978-88-470-0077-3. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. == External links == Weisstein, Eric W. "Adams Method". MathWorld.
Wikipedia/Adams–Bashforth_methods
Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation d y d t = f ( t , y ) . {\displaystyle {\frac {dy}{dt}}=f(t,y).} Explicit Runge–Kutta methods take the form y n + 1 = y n + h ∑ i = 1 s b i k i k 1 = f ( t n , y n ) , k 2 = f ( t n + c 2 h , y n + h ( a 21 k 1 ) ) , k 3 = f ( t n + c 3 h , y n + h ( a 31 k 1 + a 32 k 2 ) ) , ⋮ k i = f ( t n + c i h , y n + h ∑ j = 1 i − 1 a i j k j ) . {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i}\\k_{1}&=f(t_{n},y_{n}),\\k_{2}&=f(t_{n}+c_{2}h,y_{n}+h(a_{21}k_{1})),\\k_{3}&=f(t_{n}+c_{3}h,y_{n}+h(a_{31}k_{1}+a_{32}k_{2})),\\&\;\;\vdots \\k_{i}&=f\left(t_{n}+c_{i}h,y_{n}+h\sum _{j=1}^{i-1}a_{ij}k_{j}\right).\end{aligned}}} Stages for implicit methods of s stages take the more general form, with the solution to be found over all s k i = f ( t n + c i h , y n + h ∑ j = 1 s a i j k j ) . {\displaystyle k_{i}=f\left(t_{n}+c_{i}h,y_{n}+h\sum _{j=1}^{s}a_{ij}k_{j}\right).} Each method listed on this page is defined by its Butcher tableau, which puts the coefficients of the method in a table as follows: c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b 1 b 2 … b s {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &b_{1}&b_{2}&\dots &b_{s}\\\end{array}}} For adaptive and implicit methods, the Butcher tableau is extended to give values of b i ∗ {\displaystyle b_{i}^{*}} , and the estimated error is then e n + 1 = h ∑ i = 1 s ( b i − b i ∗ ) k i {\displaystyle e_{n+1}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i}} . == Explicit methods == The explicit methods are those where the matrix [ a i j ] {\displaystyle [a_{ij}]} is lower triangular. === First-order methods === ==== Forward Euler ==== The Euler method is first order. 
The lack of stability and accuracy limits its popularity mainly to use as a simple introductory example of a numeric solution method. 0 0 1 {\displaystyle {\begin{array}{c|c}0&0\\\hline &1\\\end{array}}} === Second-order methods === ==== Generic second-order method ==== Second-order methods can be generically written as follows: 0 0 0 α α 0 1 − 1 2 α 1 2 α {\displaystyle {\begin{array}{c|ccc}0&0&0\\\alpha &\alpha &0\\\hline &1-{\frac {1}{2\alpha }}&{\frac {1}{2\alpha }}\\\end{array}}} with α ≠ 0. ==== Explicit midpoint method ==== The (explicit) midpoint method is a second-order method with two stages (see also the implicit midpoint method below): 0 0 0 1 / 2 1 / 2 0 0 1 {\displaystyle {\begin{array}{c|cc}0&0&0\\1/2&1/2&0\\\hline &0&1\\\end{array}}} ==== Heun's method ==== Heun's method is a second-order method with two stages. It is also known as the explicit trapezoid rule, improved Euler's method, or modified Euler's method: 0 0 0 1 1 0 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&1&0\\\hline &1/2&1/2\\\end{array}}} ==== Ralston's method ==== Ralston's method is a second-order method with two stages and a minimum local error bound: 0 0 0 2 / 3 2 / 3 0 1 / 4 3 / 4 {\displaystyle {\begin{array}{c|cc}0&0&0\\2/3&2/3&0\\\hline &1/4&3/4\\\end{array}}} === Third-order methods === ==== Generic third-order method ==== Third-order methods can be generically written as follows: 0 0 0 0 α α 0 0 β β α β − 3 α ( 1 − α ) ( 3 α − 2 ) − β α β − α ( 3 α − 2 ) 0 1 − 3 α + 3 β − 2 6 α β 3 β − 2 6 α ( β − α ) 2 − 3 α 6 β ( β − α ) {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\\alpha &\alpha &0&0\\\beta &{\frac {\beta }{\alpha }}{\frac {\beta -3\alpha (1-\alpha )}{(3\alpha -2)}}&-{\frac {\beta }{\alpha }}{\frac {\beta -\alpha }{(3\alpha -2)}}&0\\\hline &1-{\frac {3\alpha +3\beta -2}{6\alpha \beta }}&{\frac {3\beta -2}{6\alpha (\beta -\alpha )}}&{\frac {2-3\alpha }{6\beta (\beta -\alpha )}}\\\end{array}}} with α ≠ 0, α ≠ 2⁄3, β ≠ 0, and α ≠ β. 
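Each explicit tableau on this page plugs into the same generic stepper. A sketch for scalar problems (the helper names are ours), instantiated here with Heun's method from above:

```python
def rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step for scalar y' = f(t, y), given a
    strictly lower triangular tableau A with weights b and nodes c."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Heun's method (explicit trapezoid) written as a tableau:
A_heun = [[0.0, 0.0], [1.0, 0.0]]
b_heun = [0.5, 0.5]
c_heun = [0.0, 1.0]
```

For y′ = y, y(0) = 1 and h = 0.1, one Heun step gives 1 + 0.1·(1 + 1.1)/2 = 1.105.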
==== Kutta's third-order method ==== 0 0 0 0 1 / 2 1 / 2 0 0 1 − 1 2 0 1 / 6 2 / 3 1 / 6 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1/2&1/2&0&0\\1&-1&2&0\\\hline &1/6&2/3&1/6\\\end{array}}} ==== Heun's third-order method ==== 0 0 0 0 1 / 3 1 / 3 0 0 2 / 3 0 2 / 3 0 1 / 4 0 3 / 4 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1/3&1/3&0&0\\2/3&0&2/3&0\\\hline &1/4&0&3/4\\\end{array}}} ==== Ralston's third-order method ==== Ralston's third-order method has a minimum local error bound and is used in the embedded Bogacki–Shampine method. 0 0 0 0 1 / 2 1 / 2 0 0 3 / 4 0 3 / 4 0 2 / 9 1 / 3 4 / 9 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1/2&1/2&0&0\\3/4&0&3/4&0\\\hline &2/9&1/3&4/9\\\end{array}}} ==== Van der Houwen's/Wray's third-order method ==== 0 0 0 0 8 / 15 8 / 15 0 0 2 / 3 1 / 4 5 / 12 0 1 / 4 0 3 / 4 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\8/15&8/15&0&0\\2/3&1/4&5/12&0\\\hline &1/4&0&3/4\\\end{array}}} ==== Third-order Strong Stability Preserving Runge-Kutta (SSPRK3) ==== 0 0 0 0 1 1 0 0 1 / 2 1 / 4 1 / 4 0 1 / 6 1 / 6 2 / 3 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1&1&0&0\\1/2&1/4&1/4&0\\\hline &1/6&1/6&2/3\\\end{array}}} === Fourth-order methods === ==== Classic fourth-order method ==== The "original" Runge–Kutta method. 0 0 0 0 0 1 / 2 1 / 2 0 0 0 1 / 2 0 1 / 2 0 0 1 0 0 1 0 1 / 6 1 / 3 1 / 3 1 / 6 {\displaystyle {\begin{array}{c|cccc}0&0&0&0&0\\1/2&1/2&0&0&0\\1/2&0&1/2&0&0\\1&0&0&1&0\\\hline &1/6&1/3&1/3&1/6\\\end{array}}} ==== 3/8-rule fourth-order method ==== This method doesn't have as much notoriety as the "classic" method, but is just as classic because it was proposed in the same paper (Kutta, 1901). 0 0 0 0 0 1 / 3 1 / 3 0 0 0 2 / 3 − 1 / 3 1 0 0 1 1 − 1 1 0 1 / 8 3 / 8 3 / 8 1 / 8 {\displaystyle {\begin{array}{c|cccc}0&0&0&0&0\\1/3&1/3&0&0&0\\2/3&-1/3&1&0&0\\1&1&-1&1&0\\\hline &1/8&3/8&3/8&1/8\\\end{array}}} ==== Ralston's fourth-order method ==== This fourth order method has minimum truncation error. 
0 0 0 0 0 2 5 2 5 0 0 0 14 − 3 5 16 − 2 889 + 1 428 5 1 024 3 785 − 1 620 5 1 024 0 0 1 − 3 365 + 2 094 5 6 040 − 975 − 3 046 5 2 552 467 040 + 203 968 5 240 845 0 263 + 24 5 1 812 125 − 1000 5 3 828 3 426 304 + 1 661 952 5 5 924 787 30 − 4 5 123 {\displaystyle {\begin{array}{c|cccc}0&0&0&0&0\\{\frac {2}{5}}&{\frac {2}{5}}&0&0&0\\{\frac {14-3{\sqrt {5}}}{16}}&{\frac {-2\,889+1\,428{\sqrt {5}}}{1\,024}}&{\frac {3\,785-1\,620{\sqrt {5}}}{1\,024}}&0&0\\1&{\frac {-3\,365+2\,094{\sqrt {5}}}{6\,040}}&{\frac {-975-3\,046{\sqrt {5}}}{2\,552}}&{\frac {467\,040+203\,968{\sqrt {5}}}{240\,845}}&0\\\hline &{\frac {263+24{\sqrt {5}}}{1\,812}}&{\frac {125-1000{\sqrt {5}}}{3\,828}}&{\frac {3\,426\,304+1\,661\,952{\sqrt {5}}}{5\,924\,787}}&{\frac {30-4{\sqrt {5}}}{123}}\\\end{array}}} === Fifth-order methods === ==== Nyström's fifth-order method ==== This fifth-order method was a correction of the one proposed originally by Kutta's work. 0 0 0 0 0 0 0 1 3 1 3 0 0 0 0 0 2 5 4 25 6 25 0 0 0 0 1 1 4 − 3 15 4 0 0 0 2 3 2 27 10 9 − 50 81 8 81 0 0 4 5 2 25 12 25 2 15 8 75 0 0 23 192 0 125 192 0 − 27 64 125 192 {\displaystyle {\begin{array}{c|cccccc}0&0&0&0&0&0&0\\{\frac {1}{3}}&{\frac {1}{3}}&0&0&0&0&0\\{\frac {2}{5}}&{\frac {4}{25}}&{\frac {6}{25}}&0&0&0&0\\1&{\frac {1}{4}}&-3&{\frac {15}{4}}&0&0&0\\{\frac {2}{3}}&{\frac {2}{27}}&{\frac {10}{9}}&-{\frac {50}{81}}&{\frac {8}{81}}&0&0\\{\frac {4}{5}}&{\frac {2}{25}}&{\frac {12}{25}}&{\frac {2}{15}}&{\frac {8}{75}}&0&0\\\hline &{\frac {23}{192}}&0&{\frac {125}{192}}&0&-{\frac {27}{64}}&{\frac {125}{192}}\\\end{array}}} == Embedded methods == The embedded methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step, and, as a result, allow the error to be controlled with an adaptive stepsize. This is done by having two methods in the tableau, one with order p and one with order p-1.
The lower-order step is given by y n + 1 ∗ = y n + h ∑ i = 1 s b i ∗ k i , {\displaystyle y_{n+1}^{*}=y_{n}+h\sum _{i=1}^{s}b_{i}^{*}k_{i},} where the k i {\displaystyle k_{i}} are the same as for the higher order method. Then the error is e n + 1 = y n + 1 − y n + 1 ∗ = h ∑ i = 1 s ( b i − b i ∗ ) k i , {\displaystyle e_{n+1}=y_{n+1}-y_{n+1}^{*}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i},} which is O ( h p ) {\displaystyle O(h^{p})} . The Butcher Tableau for this kind of method is extended to give the values of b i ∗ {\displaystyle b_{i}^{*}} c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b 1 b 2 … b s b 1 ∗ b 2 ∗ … b s ∗ {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &b_{1}&b_{2}&\dots &b_{s}\\&b_{1}^{*}&b_{2}^{*}&\dots &b_{s}^{*}\\\end{array}}} === Heun–Euler === The simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is: 0 1 1 1 / 2 1 / 2 1 0 {\displaystyle {\begin{array}{c|cc}0&\\1&1\\\hline &1/2&1/2\\&1&0\end{array}}} The error estimate is used to control the stepsize. === Fehlberg RK1(2) === The Fehlberg method has two methods of orders 1 and 2. Its extended Butcher Tableau is: The first row of b coefficients gives the second-order accurate solution, and the second row has order one. === Bogacki–Shampine === The Bogacki–Shampine method has two methods of orders 2 and 3. Its extended Butcher Tableau is: The first row of b coefficients gives the third-order accurate solution, and the second row has order two. === Fehlberg === The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4; it is sometimes dubbed RKF45 . 
Its extended Butcher Tableau is: 0 1 / 4 1 / 4 3 / 8 3 / 32 9 / 32 12 / 13 1932 / 2197 − 7200 / 2197 7296 / 2197 1 439 / 216 − 8 3680 / 513 − 845 / 4104 1 / 2 − 8 / 27 2 − 3544 / 2565 1859 / 4104 − 11 / 40 16 / 135 0 6656 / 12825 28561 / 56430 − 9 / 50 2 / 55 25 / 216 0 1408 / 2565 2197 / 4104 − 1 / 5 0 {\displaystyle {\begin{array}{r|ccccc}0&&&&&\\1/4&1/4&&&\\3/8&3/32&9/32&&\\12/13&1932/2197&-7200/2197&7296/2197&\\1&439/216&-8&3680/513&-845/4104&\\1/2&-8/27&2&-3544/2565&1859/4104&-11/40\\\hline &16/135&0&6656/12825&28561/56430&-9/50&2/55\\&25/216&0&1408/2565&2197/4104&-1/5&0\end{array}}} The first row of b coefficients gives the fifth-order accurate solution, and the second row has order four. The coefficients here allow for an adaptive stepsize to be determined automatically. === Cash-Karp === Cash and Karp have modified Fehlberg's original idea. The extended tableau for the Cash–Karp method is The first row of b coefficients gives the fifth-order accurate solution, and the second row has order four. === Dormand–Prince === The extended tableau for the Dormand–Prince method is The first row of b coefficients gives the fifth-order accurate solution, and the second row gives the fourth-order accurate solution. == Implicit methods == === Backward Euler === The backward Euler method is first order. Unconditionally stable and non-oscillatory for linear diffusion problems. 1 1 1 {\displaystyle {\begin{array}{c|c}1&1\\\hline &1\\\end{array}}} === Implicit midpoint === The implicit midpoint method is of second order. It is the simplest method in the class of collocation methods known as the Gauss-Legendre methods. It is a symplectic integrator. 1 / 2 1 / 2 1 {\displaystyle {\begin{array}{c|c}1/2&1/2\\\hline &1\end{array}}} === Crank-Nicolson method === The Crank–Nicolson method corresponds to the implicit trapezoidal rule and is a second-order accurate and A-stable method. 
0 0 0 1 1 / 2 1 / 2 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&1/2&1/2\\\hline &1/2&1/2\\\end{array}}} === Gauss–Legendre methods === These methods are based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method of order four has Butcher tableau: 1 2 − 3 6 1 4 1 4 − 3 6 1 2 + 3 6 1 4 + 3 6 1 4 1 2 1 2 1 2 + 3 2 1 2 − 3 2 {\displaystyle {\begin{array}{c|cc}{\frac {1}{2}}-{\frac {\sqrt {3}}{6}}&{\frac {1}{4}}&{\frac {1}{4}}-{\frac {\sqrt {3}}{6}}\\{\frac {1}{2}}+{\frac {\sqrt {3}}{6}}&{\frac {1}{4}}+{\frac {\sqrt {3}}{6}}&{\frac {1}{4}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&{\frac {1}{2}}+{\frac {\sqrt {3}}{2}}&{\frac {1}{2}}-{\frac {\sqrt {3}}{2}}\\\end{array}}} The Gauss–Legendre method of order six has Butcher tableau: 1 2 − 15 10 5 36 2 9 − 15 15 5 36 − 15 30 1 2 5 36 + 15 24 2 9 5 36 − 15 24 1 2 + 15 10 5 36 + 15 30 2 9 + 15 15 5 36 5 18 4 9 5 18 − 5 6 8 3 − 5 6 {\displaystyle {\begin{array}{c|ccc}{\frac {1}{2}}-{\frac {\sqrt {15}}{10}}&{\frac {5}{36}}&{\frac {2}{9}}-{\frac {\sqrt {15}}{15}}&{\frac {5}{36}}-{\frac {\sqrt {15}}{30}}\\{\frac {1}{2}}&{\frac {5}{36}}+{\frac {\sqrt {15}}{24}}&{\frac {2}{9}}&{\frac {5}{36}}-{\frac {\sqrt {15}}{24}}\\{\frac {1}{2}}+{\frac {\sqrt {15}}{10}}&{\frac {5}{36}}+{\frac {\sqrt {15}}{30}}&{\frac {2}{9}}+{\frac {\sqrt {15}}{15}}&{\frac {5}{36}}\\\hline &{\frac {5}{18}}&{\frac {4}{9}}&{\frac {5}{18}}\\&-{\frac {5}{6}}&{\frac {8}{3}}&-{\frac {5}{6}}\end{array}}} === Diagonally Implicit Runge–Kutta methods === Diagonally Implicit Runge–Kutta (DIRK) formulae have been widely used for the numerical solution of stiff initial value problems; the advantage of this approach is that here the solution may be found sequentially as opposed to simultaneously. The simplest method from this class is the order 2 implicit midpoint method. 
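Whatever the tableau, each implicit stage requires solving an equation for the unknown stage value; for a DIRK method this can be done one stage at a time, as just described. A minimal sketch for the backward Euler method above, using fixed-point iteration (this assumes h times the Lipschitz constant of f is below 1; stiff solvers use a Newton iteration instead):

```python
def backward_euler_step(f, t, y, h, iters=60):
    """One backward Euler step: solve y1 = y + h*f(t + h, y1) by
    fixed-point iteration (converges when h * Lipschitz(f) < 1)."""
    y1 = y
    for _ in range(iters):
        y1 = y + h * f(t + h, y1)
    return y1
```

For y′ = −y the exact implicit update is y_{n+1} = y_n/(1 + h), which decays for every h > 0, consistent with the method's unconditional stability.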
Kraaijevanger and Spijker's two-stage Diagonally Implicit Runge–Kutta method: 1 / 2 1 / 2 0 3 / 2 − 1 / 2 2 − 1 / 2 3 / 2 {\displaystyle {\begin{array}{c|cc}1/2&1/2&0\\3/2&-1/2&2\\\hline &-1/2&3/2\\\end{array}}} Qin and Zhang's two-stage, 2nd order, symplectic Diagonally Implicit Runge–Kutta method: 1 / 4 1 / 4 0 3 / 4 1 / 2 1 / 4 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cc}1/4&1/4&0\\3/4&1/2&1/4\\\hline &1/2&1/2\\\end{array}}} Pareschi and Russo's two-stage 2nd order Diagonally Implicit Runge–Kutta method: x x 0 1 − x 1 − 2 x x 1 2 1 2 {\displaystyle {\begin{array}{c|cc}x&x&0\\1-x&1-2x&x\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\\end{array}}} This Diagonally Implicit Runge–Kutta method is A-stable if and only if x ≥ 1 4 {\textstyle x\geq {\frac {1}{4}}} . Moreover, this method is L-stable if and only if x {\displaystyle x} equals one of the roots of the polynomial x 2 − 2 x + 1 2 {\textstyle x^{2}-2x+{\frac {1}{2}}} , i.e. if x = 1 ± 2 2 {\textstyle x=1\pm {\frac {\sqrt {2}}{2}}} . Qin and Zhang's Diagonally Implicit Runge–Kutta method corresponds to Pareschi and Russo's Diagonally Implicit Runge–Kutta method with x = 1 / 4 {\displaystyle x=1/4} . Two-stage 2nd order Diagonally Implicit Runge–Kutta method: x x 0 1 1 − x x 1 − x x {\displaystyle {\begin{array}{c|cc}x&x&0\\1&1-x&x\\\hline &1-x&x\\\end{array}}} Again, this Diagonally Implicit Runge–Kutta method is A-stable if and only if x ≥ 1 4 {\textstyle x\geq {\frac {1}{4}}} . As the previous method, this method is again L-stable if and only if x {\displaystyle x} equals one of the roots of the polynomial x 2 − 2 x + 1 2 {\textstyle x^{2}-2x+{\frac {1}{2}}} , i.e. if x = 1 ± 2 2 {\textstyle x=1\pm {\frac {\sqrt {2}}{2}}} . This condition is also necessary for 2nd order accuracy. 
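The A-stability and L-stability properties discussed above can be read off a method's stability function R(z), defined by y_{n+1} = R(hλ)y_n on the test equation y′ = λy. A sketch for the Crank–Nicolson (implicit trapezoidal) method from earlier, which is A-stable (|R(z)| < 1 for Re z < 0) but not L-stable (R(z) → −1 as z → −∞):

```python
def cn_stability(z):
    """Stability function of the Crank-Nicolson / implicit trapezoidal method:
    applied to y' = lam*y it gives y_{n+1} = R(h*lam) * y_n with
    R(z) = (1 + z/2) / (1 - z/2)."""
    return (1 + z / 2) / (1 - z / 2)
```

The limit |R(z)| → 1 as Re z → −∞ is why Crank–Nicolson, though A-stable, damps very stiff components only weakly, in contrast to L-stable methods, whose stability function vanishes in that limit.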
Crouzeix's two-stage, 3rd order Diagonally Implicit Runge–Kutta method: 1 2 + 3 6 1 2 + 3 6 0 1 2 − 3 6 − 3 3 1 2 + 3 6 1 2 1 2 {\displaystyle {\begin{array}{c|cc}{\frac {1}{2}}+{\frac {\sqrt {3}}{6}}&{\frac {1}{2}}+{\frac {\sqrt {3}}{6}}&0\\{\frac {1}{2}}-{\frac {\sqrt {3}}{6}}&-{\frac {\sqrt {3}}{3}}&{\frac {1}{2}}+{\frac {\sqrt {3}}{6}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\\end{array}}} Crouzeix's three-stage, 4th order Diagonally Implicit Runge–Kutta method: 1 + α 2 1 + α 2 0 0 1 2 − α 2 1 + α 2 0 1 − α 2 1 + α − ( 1 + 2 α ) 1 + α 2 1 6 α 2 1 − 1 3 α 2 1 6 α 2 {\displaystyle {\begin{array}{c|ccc}{\frac {1+\alpha }{2}}&{\frac {1+\alpha }{2}}&0&0\\{\frac {1}{2}}&-{\frac {\alpha }{2}}&{\frac {1+\alpha }{2}}&0\\{\frac {1-\alpha }{2}}&1+\alpha &-(1+2\,\alpha )&{\frac {1+\alpha }{2}}\\\hline &{\frac {1}{6\alpha ^{2}}}&1-{\frac {1}{3\alpha ^{2}}}&{\frac {1}{6\alpha ^{2}}}\\\end{array}}} with α = 2 3 cos ⁡ π 18 {\textstyle \alpha ={\frac {2}{\sqrt {3}}}\cos {\frac {\pi }{18}}} . Three-stage, 3rd order, L-stable Diagonally Implicit Runge–Kutta method: x x 0 0 1 + x 2 1 − x 2 x 0 1 − 3 x 2 / 2 + 4 x − 1 / 4 3 x 2 / 2 − 5 x + 5 / 4 x − 3 x 2 / 2 + 4 x − 1 / 4 3 x 2 / 2 − 5 x + 5 / 4 x {\displaystyle {\begin{array}{c|ccc}x&x&0&0\\{\frac {1+x}{2}}&{\frac {1-x}{2}}&x&0\\1&-3x^{2}/2+4x-1/4&3x^{2}/2-5x+5/4&x\\\hline &-3x^{2}/2+4x-1/4&3x^{2}/2-5x+5/4&x\\\end{array}}} with x = 0.4358665215 {\displaystyle x=0.4358665215} Nørsett's three-stage, 4th order Diagonally Implicit Runge–Kutta method has the following Butcher tableau: x x 0 0 1 / 2 1 / 2 − x x 0 1 − x 2 x 1 − 4 x x 1 6 ( 1 − 2 x ) 2 3 ( 1 − 2 x ) 2 − 1 3 ( 1 − 2 x ) 2 1 6 ( 1 − 2 x ) 2 {\displaystyle {\begin{array}{c|ccc}x&x&0&0\\1/2&1/2-x&x&0\\1-x&2x&1-4x&x\\\hline &{\frac {1}{6(1-2x)^{2}}}&{\frac {3(1-2x)^{2}-1}{3(1-2x)^{2}}}&{\frac {1}{6(1-2x)^{2}}}\\\end{array}}} with x {\displaystyle x} one of the three roots of the cubic equation x 3 − 3 x 2 / 2 + x / 2 − 1 / 24 = 0 {\displaystyle x^{3}-3x^{2}/2+x/2-1/24=0} . 
The three roots of this cubic equation are approximately x 1 = 1 2 + 1 3 cos ⁡ π 18 = 1.068579021301629 {\textstyle x_{1}={\frac {1}{2}}+{\frac {1}{\sqrt {3}}}\cos {\frac {\pi }{18}}=1.068579021301629} , x 2 = 0.1288864005157204 {\textstyle x_{2}=0.1288864005157204} , and x 3 = 0.3025345781826508 {\textstyle x_{3}=0.3025345781826508} . The root x 1 {\displaystyle x_{1}} gives the best stability properties for initial value problems. Four-stage, 3rd order, L-stable Diagonally Implicit Runge–Kutta method 1 / 2 1 / 2 0 0 0 2 / 3 1 / 6 1 / 2 0 0 1 / 2 − 1 / 2 1 / 2 1 / 2 0 1 3 / 2 − 3 / 2 1 / 2 1 / 2 3 / 2 − 3 / 2 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cccc}1/2&1/2&0&0&0\\2/3&1/6&1/2&0&0\\1/2&-1/2&1/2&1/2&0\\1&3/2&-3/2&1/2&1/2\\\hline &3/2&-3/2&1/2&1/2\\\end{array}}} === Lobatto methods === There are three main families of Lobatto methods, called IIIA, IIIB and IIIC (in classical mathematical literature, the symbols I and II are reserved for two types of Radau methods). These are named after Rehuel Lobatto as a reference to the Lobatto quadrature rule, but were introduced by Byron L. Ehle in his thesis. All are implicit methods of order 2s − 2, and all have c1 = 0 and cs = 1. Unlike explicit methods, these methods can have order greater than the number of stages. Lobatto lived before the classic fourth-order method was popularized by Runge and Kutta. ==== Lobatto IIIA methods ==== The Lobatto IIIA methods are collocation methods.
The second-order method is known as the trapezoidal rule: 0 0 0 1 1 / 2 1 / 2 1 / 2 1 / 2 1 0 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&1/2&1/2\\\hline &1/2&1/2\\&1&0\\\end{array}}} The fourth-order method is given by 0 0 0 0 1 / 2 5 / 24 1 / 3 − 1 / 24 1 1 / 6 2 / 3 1 / 6 1 / 6 2 / 3 1 / 6 − 1 2 2 − 1 2 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1/2&5/24&1/3&-1/24\\1&1/6&2/3&1/6\\\hline &1/6&2/3&1/6\\&-{\frac {1}{2}}&2&-{\frac {1}{2}}\\\end{array}}} These methods are A-stable, but neither L-stable nor B-stable. ==== Lobatto IIIB methods ==== The Lobatto IIIB methods are not collocation methods, but they can be viewed as discontinuous collocation methods (Hairer, Lubich & Wanner 2006, §II.1.4). The second-order method is given by 0 1 / 2 0 1 1 / 2 0 1 / 2 1 / 2 1 0 {\displaystyle {\begin{array}{c|cc}0&1/2&0\\1&1/2&0\\\hline &1/2&1/2\\&1&0\\\end{array}}} The fourth-order method is given by 0 1 / 6 − 1 / 6 0 1 / 2 1 / 6 1 / 3 0 1 1 / 6 5 / 6 0 1 / 6 2 / 3 1 / 6 − 1 2 2 − 1 2 {\displaystyle {\begin{array}{c|ccc}0&1/6&-1/6&0\\1/2&1/6&1/3&0\\1&1/6&5/6&0\\\hline &1/6&2/3&1/6\\&-{\frac {1}{2}}&2&-{\frac {1}{2}}\\\end{array}}} Lobatto IIIB methods are A-stable, but neither L-stable nor B-stable. ==== Lobatto IIIC methods ==== The Lobatto IIIC methods also are discontinuous collocation methods. The second-order method is given by 0 1 / 2 − 1 / 2 1 1 / 2 1 / 2 1 / 2 1 / 2 1 0 {\displaystyle {\begin{array}{c|cc}0&1/2&-1/2\\1&1/2&1/2\\\hline &1/2&1/2\\&1&0\\\end{array}}} The fourth-order method is given by 0 1 / 6 − 1 / 3 1 / 6 1 / 2 1 / 6 5 / 12 − 1 / 12 1 1 / 6 2 / 3 1 / 6 1 / 6 2 / 3 1 / 6 − 1 2 2 − 1 2 {\displaystyle {\begin{array}{c|ccc}0&1/6&-1/3&1/6\\1/2&1/6&5/12&-1/12\\1&1/6&2/3&1/6\\\hline &1/6&2/3&1/6\\&-{\frac {1}{2}}&2&-{\frac {1}{2}}\\\end{array}}} They are L-stable. They are also algebraically stable and thus B-stable, that makes them suitable for stiff problems. 
==== Lobatto IIIC* methods ==== The Lobatto IIIC* methods are also known as Lobatto III methods (Butcher, 2008), Butcher's Lobatto methods (Hairer et al., 1993), and Lobatto IIIC methods (Sun, 2000) in the literature. The second-order method is given by 0 0 0 1 1 0 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&1&0\\\hline &1/2&1/2\\\end{array}}} Butcher's three-stage, fourth-order method is given by 0 0 0 0 1 / 2 1 / 4 1 / 4 0 1 0 1 0 1 / 6 2 / 3 1 / 6 {\displaystyle {\begin{array}{c|ccc}0&0&0&0\\1/2&1/4&1/4&0\\1&0&1&0\\\hline &1/6&2/3&1/6\\\end{array}}} These methods are not A-stable, B-stable or L-stable. The Lobatto IIIC* method for s = 2 {\displaystyle s=2} is sometimes called the explicit trapezoidal rule. ==== Generalized Lobatto methods ==== One can consider a very general family of methods with three real parameters ( α A , α B , α C ) {\displaystyle (\alpha _{A},\alpha _{B},\alpha _{C})} by considering Lobatto coefficients of the form a i , j ( α A , α B , α C ) = α A a i , j A + α B a i , j B + α C a i , j C + α C ∗ a i , j C ∗ {\displaystyle a_{i,j}(\alpha _{A},\alpha _{B},\alpha _{C})=\alpha _{A}a_{i,j}^{A}+\alpha _{B}a_{i,j}^{B}+\alpha _{C}a_{i,j}^{C}+\alpha _{C*}a_{i,j}^{C*}} , where α C ∗ = 1 − α A − α B − α C {\displaystyle \alpha _{C*}=1-\alpha _{A}-\alpha _{B}-\alpha _{C}} . For example, Lobatto IIID family introduced in (Nørsett and Wanner, 1981), also called Lobatto IIINW, are given by 0 1 / 2 1 / 2 1 − 1 / 2 1 / 2 1 / 2 1 / 2 {\displaystyle {\begin{array}{c|cc}0&1/2&1/2\\1&-1/2&1/2\\\hline &1/2&1/2\\\end{array}}} and 0 1 / 6 0 − 1 / 6 1 / 2 1 / 12 5 / 12 0 1 1 / 2 1 / 3 1 / 6 1 / 6 2 / 3 1 / 6 {\displaystyle {\begin{array}{c|ccc}0&1/6&0&-1/6\\1/2&1/12&5/12&0\\1&1/2&1/3&1/6\\\hline &1/6&2/3&1/6\\\end{array}}} These methods correspond to α A = 2 {\displaystyle \alpha _{A}=2} , α B = 2 {\displaystyle \alpha _{B}=2} , α C = − 1 {\displaystyle \alpha _{C}=-1} , and α C ∗ = − 2 {\displaystyle \alpha _{C*}=-2} . The methods are L-stable. 
They are algebraically stable and thus B-stable. === Radau methods === Radau methods are fully implicit methods (matrix A of such methods can have any structure). Radau methods attain order 2s − 1 for s stages. Radau methods are A-stable, but expensive to implement. Also they can suffer from order reduction. ==== Radau IA methods ==== The first order method is similar to the backward Euler method and given by 0 1 1 {\displaystyle {\begin{array}{c|cc}0&1\\\hline &1\\\end{array}}} The third-order method is given by 0 1 / 4 − 1 / 4 2 / 3 1 / 4 5 / 12 1 / 4 3 / 4 {\displaystyle {\begin{array}{c|cc}0&1/4&-1/4\\2/3&1/4&5/12\\\hline &1/4&3/4\\\end{array}}} The fifth-order method is given by 0 1 9 − 1 − 6 18 − 1 + 6 18 3 5 − 6 10 1 9 11 45 + 7 6 360 11 45 − 43 6 360 3 5 + 6 10 1 9 11 45 + 43 6 360 11 45 − 7 6 360 1 9 4 9 + 6 36 4 9 − 6 36 {\displaystyle {\begin{array}{c|ccc}0&{\frac {1}{9}}&{\frac {-1-{\sqrt {6}}}{18}}&{\frac {-1+{\sqrt {6}}}{18}}\\{\frac {3}{5}}-{\frac {\sqrt {6}}{10}}&{\frac {1}{9}}&{\frac {11}{45}}+{\frac {7{\sqrt {6}}}{360}}&{\frac {11}{45}}-{\frac {43{\sqrt {6}}}{360}}\\{\frac {3}{5}}+{\frac {\sqrt {6}}{10}}&{\frac {1}{9}}&{\frac {11}{45}}+{\frac {43{\sqrt {6}}}{360}}&{\frac {11}{45}}-{\frac {7{\sqrt {6}}}{360}}\\\hline &{\frac {1}{9}}&{\frac {4}{9}}+{\frac {\sqrt {6}}{36}}&{\frac {4}{9}}-{\frac {\sqrt {6}}{36}}\\\end{array}}} ==== Radau IIA methods ==== The ci of this method are zeros of d s − 1 d x s − 1 ( x s − 1 ( x − 1 ) s ) {\displaystyle {\frac {d^{s-1}}{dx^{s-1}}}(x^{s-1}(x-1)^{s})} . The first-order method is equivalent to the backward Euler method. 
The third-order method is given by 1 / 3 5 / 12 − 1 / 12 1 3 / 4 1 / 4 3 / 4 1 / 4 {\displaystyle {\begin{array}{c|cc}1/3&5/12&-1/12\\1&3/4&1/4\\\hline &3/4&1/4\\\end{array}}} The fifth-order method is given by 2 5 − 6 10 11 45 − 7 6 360 37 225 − 169 6 1800 − 2 225 + 6 75 2 5 + 6 10 37 225 + 169 6 1800 11 45 + 7 6 360 − 2 225 − 6 75 1 4 9 − 6 36 4 9 + 6 36 1 9 4 9 − 6 36 4 9 + 6 36 1 9 {\displaystyle {\begin{array}{c|ccc}{\frac {2}{5}}-{\frac {\sqrt {6}}{10}}&{\frac {11}{45}}-{\frac {7{\sqrt {6}}}{360}}&{\frac {37}{225}}-{\frac {169{\sqrt {6}}}{1800}}&-{\frac {2}{225}}+{\frac {\sqrt {6}}{75}}\\{\frac {2}{5}}+{\frac {\sqrt {6}}{10}}&{\frac {37}{225}}+{\frac {169{\sqrt {6}}}{1800}}&{\frac {11}{45}}+{\frac {7{\sqrt {6}}}{360}}&-{\frac {2}{225}}-{\frac {\sqrt {6}}{75}}\\1&{\frac {4}{9}}-{\frac {\sqrt {6}}{36}}&{\frac {4}{9}}+{\frac {\sqrt {6}}{36}}&{\frac {1}{9}}\\\hline &{\frac {4}{9}}-{\frac {\sqrt {6}}{36}}&{\frac {4}{9}}+{\frac {\sqrt {6}}{36}}&{\frac {1}{9}}\\\end{array}}} == Notes == == References == Ehle, Byron L. (1969). On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems (PDF) (Thesis). Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Hairer, Ernst; Lubich, Christian; Wanner, Gerhard (2006), Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-30663-4.
Wikipedia/Explicit_Runge–Kutta_methods
In mathematics, differential inclusions are a generalization of the concept of ordinary differential equation of the form d x d t ( t ) ∈ F ( t , x ( t ) ) , {\displaystyle {\frac {dx}{dt}}(t)\in F(t,x(t)),} where F is a multivalued map, i.e. F(t, x) is a set rather than a single point in R d {\displaystyle \mathbb {R} ^{d}} . Differential inclusions arise in many situations including differential variational inequalities, projected dynamical systems, Moreau's sweeping process, linear and nonlinear complementarity dynamical systems, discontinuous ordinary differential equations, switching dynamical systems, and fuzzy set arithmetic. For example, the basic rule for Coulomb friction is that the friction force has magnitude μN in the direction opposite to the direction of slip, where N is the normal force and μ is a constant (the friction coefficient). However, if the slip is zero, the friction force can be any force in the correct plane with magnitude smaller than or equal to μN. Thus, writing the friction force as a function of position and velocity leads to a set-valued function. Beyond taking a set-valued map on the right-hand side, a differential inclusion can also be posed with a fixed subset of a Euclidean space R N {\displaystyle \mathbb {R} ^{N}} for some N ∈ N {\displaystyle N\in \mathbb {N} } , in the following way. Let n ∈ N {\displaystyle n\in \mathbb {N} } and E ⊂ R n × n ∖ { 0 } . {\displaystyle E\subset \mathbb {R} ^{n\times n}\setminus \{0\}.} Our main purpose is to find a W 0 1 , ∞ ( Ω , R n ) {\displaystyle W_{0}^{1,\infty }(\Omega ,\mathbb {R} ^{n})} function u {\displaystyle u} satisfying the differential inclusion D u ∈ E {\displaystyle Du\in E} a.e. in Ω , {\displaystyle \Omega ,} where Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} is an open bounded set. == Theory == Existence theory usually assumes that F(t, x) is an upper hemicontinuous function of x, measurable in t, and that F(t, x) is a closed, convex set for all t and x. 
Existence of solutions for the initial value problem d x d t ( t ) ∈ F ( t , x ( t ) ) , x ( t 0 ) = x 0 {\displaystyle {\frac {dx}{dt}}(t)\in F(t,x(t)),\quad x(t_{0})=x_{0}} for a sufficiently small time interval [t0, t0 + ε), ε > 0 then follows. Global existence can be shown provided F does not allow "blow-up" ( ‖ x ( t ) ‖ → ∞ {\displaystyle \scriptstyle \Vert x(t)\Vert \,\to \,\infty } as t → t ∗ {\displaystyle \scriptstyle t\,\to \,t^{*}} for a finite t ∗ {\displaystyle \scriptstyle t^{*}} ). Existence theory for differential inclusions with non-convex F(t, x) is an active area of research. Uniqueness of solutions usually requires other conditions. For example, suppose F ( t , x ) {\displaystyle F(t,x)} satisfies a one-sided Lipschitz condition: ( x 1 − x 2 ) T ( F ( t , x 1 ) − F ( t , x 2 ) ) ≤ C ‖ x 1 − x 2 ‖ 2 {\displaystyle (x_{1}-x_{2})^{T}(F(t,x_{1})-F(t,x_{2}))\leq C\Vert x_{1}-x_{2}\Vert ^{2}} for some C for all x1 and x2. Then the initial value problem d x d t ( t ) ∈ F ( t , x ( t ) ) , x ( t 0 ) = x 0 {\displaystyle {\frac {dx}{dt}}(t)\in F(t,x(t)),\quad x(t_{0})=x_{0}} has a unique solution. This is closely related to the theory of maximal monotone operators, as developed by Minty and Haïm Brezis. Filippov's theory only allows for discontinuities in the derivative d x d t ( t ) {\displaystyle {\frac {dx}{dt}}(t)} , but allows no discontinuities in the state, i.e. x ( t ) {\displaystyle x(t)} must remain continuous. Schatzman and later Moreau (who gave it the currently accepted name) extended the notion to measure differential inclusion (MDI) in which the inclusion is evaluated by taking the limit from above for x ( t ) {\displaystyle x(t)} . == Applications == Differential inclusions can be used to understand and suitably interpret discontinuous ordinary differential equations, such as arise for Coulomb friction in mechanical systems and ideal switches in power electronics. An important contribution has been made by A. F. 
Filippov, who studied regularizations of discontinuous equations. Further, the technique of regularization was used by N.N. Krasovskii in the theory of differential games. Differential inclusions are also found at the foundation of non-smooth dynamical systems (NSDS) analysis, which is used in the analog study of switching electrical circuits using idealized component equations (for example using idealized, straight vertical lines for the sharply exponential forward and breakdown conduction regions of a diode characteristic) and in the study of certain non-smooth mechanical systems such as stick-slip oscillations in systems with dry friction or the dynamics of impact phenomena. Software that solves NSDS systems exists, such as INRIA's Siconos. When fuzzy-set concepts are introduced into differential inclusions, one obtains the notion of a fuzzy differential inclusion, which has applications in atmospheric dispersion modeling and in cybernetics for medical imaging. == See also == Stiffness, which affects ODEs/DAEs for functions with "sharp turns" and which affects numerical convergence == References == Aubin, Jean-Pierre; Cellina, Arrigo (1984). Differential Inclusions, Set-Valued Maps and Viability Theory. Grundl. der Math. Wiss. Vol. 264. Berlin: Springer. ISBN 9783540131052. Aubin, Jean-Pierre; Frankowska, Hélène (1990). Set-Valued Analysis. Birkhäuser. ISBN 978-0817648473. Deimling, Klaus (1992). Multivalued Differential Equations. Walter de Gruyter. ISBN 978-3110132120. Andres, J.; Górniewicz, Lech (2003). Topological Fixed Point Principles for Boundary Value Problems. Springer. ISBN 978-9048163182. Filippov, A.F. (1988). Differential equations with discontinuous right-hand sides. Kluwer Academic Publishers Group. ISBN 90-277-2699-X.
Wikipedia/Differential_inclusion
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time domain (if applicable) are discretized, or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points. Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of linear equations that can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently, and this, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis. Today, FDMs are one of the most common approaches to the numerical solution of PDE, along with finite element methods. == Derive difference quotient from Taylor's polynomial == For an n-times differentiable function, by Taylor's theorem the Taylor series expansion is given as f ( x 0 + h ) = f ( x 0 ) + f ′ ( x 0 ) 1 ! h + f ( 2 ) ( x 0 ) 2 ! h 2 + ⋯ + f ( n ) ( x 0 ) n ! h n + R n ( x ) , {\displaystyle f(x_{0}+h)=f(x_{0})+{\frac {f'(x_{0})}{1!}}h+{\frac {f^{(2)}(x_{0})}{2!}}h^{2}+\cdots +{\frac {f^{(n)}(x_{0})}{n!}}h^{n}+R_{n}(x),} where n! denotes the factorial of n, and Rn(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. An approximation for the first derivative of the function f can be derived by first truncating the Taylor polynomial plus remainder: f ( x 0 + h ) = f ( x 0 ) + f ′ ( x 0 ) h + R 1 ( x ) . 
{\displaystyle f(x_{0}+h)=f(x_{0})+f'(x_{0})h+R_{1}(x).} Dividing across by h gives: f ( x 0 + h ) h = f ( x 0 ) h + f ′ ( x 0 ) + R 1 ( x ) h {\displaystyle {f(x_{0}+h) \over h}={f(x_{0}) \over h}+f'(x_{0})+{R_{1}(x) \over h}} Solving for f ′ ( x 0 ) {\displaystyle f'(x_{0})} : f ′ ( x 0 ) = f ( x 0 + h ) − f ( x 0 ) h − R 1 ( x ) h . {\displaystyle f'(x_{0})={f(x_{0}+h)-f(x_{0}) \over h}-{R_{1}(x) \over h}.} Assuming that R 1 ( x ) {\displaystyle R_{1}(x)} is sufficiently small, the approximation of the first derivative of f is: f ′ ( x 0 ) ≈ f ( x 0 + h ) − f ( x 0 ) h . {\displaystyle f'(x_{0})\approx {f(x_{0}+h)-f(x_{0}) \over h}.} This is similar to the definition of derivative, which is: f ′ ( x 0 ) = lim h → 0 f ( x 0 + h ) − f ( x 0 ) h . {\displaystyle f'(x_{0})=\lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})}{h}}.} except for the limit towards zero (the method is named after this). == Accuracy and order == The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (no round-off). To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid (see image). This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner. An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. 
That is, it is the quantity f ′ ( x i ) − f i ′ {\displaystyle f'(x_{i})-f'_{i}} if f ′ ( x i ) {\displaystyle f'(x_{i})} refers to the exact value and f i ′ {\displaystyle f'_{i}} to the numerical approximation. The remainder term of the Taylor polynomial can be used to analyze local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f ( x 0 + h ) {\displaystyle f(x_{0}+h)} , which is R n ( x 0 + h ) = f ( n + 1 ) ( ξ ) ( n + 1 ) ! ( h ) n + 1 , x 0 < ξ < x 0 + h , {\displaystyle R_{n}(x_{0}+h)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}(h)^{n+1}\,,\quad x_{0}<\xi <x_{0}+h,} the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that f ( x i ) = f ( x 0 + i h ) {\displaystyle f(x_{i})=f(x_{0}+ih)} , f ( x 0 + i h ) = f ( x 0 ) + f ′ ( x 0 ) i h + f ″ ( ξ ) 2 ! ( i h ) 2 , {\displaystyle f(x_{0}+ih)=f(x_{0})+f'(x_{0})ih+{\frac {f''(\xi )}{2!}}(ih)^{2},} and with some algebraic manipulation, this leads to f ( x 0 + i h ) − f ( x 0 ) i h = f ′ ( x 0 ) + f ″ ( ξ ) 2 ! i h , {\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+{\frac {f''(\xi )}{2!}}ih,} and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is: f ( x 0 + i h ) − f ( x 0 ) i h = f ′ ( x 0 ) + O ( h ) . {\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+O(h).} In this case, the local truncation error is proportional to the step sizes. The quality and duration of simulated FDM solution depends on the discretization equation selection and the step sizes (time and space steps). The data quality and simulation duration increase significantly with smaller step size. 
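The first-order behaviour derived above can be checked numerically. A minimal sketch, assuming the test function f(x) = sin(x) at x0 = 1 (any smooth function would do):

```python
import math

def forward_diff(f, x0, h):
    """Forward-difference quotient (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
exact = math.cos(x0)          # derivative of sin at x0
errors = [abs(forward_diff(math.sin, x0, h) - exact)
          for h in (0.1, 0.01, 0.001)]
# Each tenfold reduction of h cuts the error roughly tenfold: first order.
print(errors[0] / errors[1])  # close to 10
```

The observed ratio near 10 per decade of h is exactly the O(h) local truncation error of the forward-difference formula.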
Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice. However, time steps which are too large may create instabilities and affect the data quality. The von Neumann and Courant-Friedrichs-Lewy criteria are often evaluated to determine the numerical model stability. == Example: ordinary differential equation == For example, consider the ordinary differential equation u ′ ( x ) = 3 u ( x ) + 2. {\displaystyle u'(x)=3u(x)+2.} The Euler method for solving this equation uses the finite difference quotient u ( x + h ) − u ( x ) h ≈ u ′ ( x ) {\displaystyle {\frac {u(x+h)-u(x)}{h}}\approx u'(x)} to approximate the differential equation by first substituting it for u'(x) then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get u ( x + h ) ≈ u ( x ) + h ( 3 u ( x ) + 2 ) . {\displaystyle u(x+h)\approx u(x)+h(3u(x)+2).} The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation. == Example: The heat equation == Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions { U t = U x x U ( 0 , t ) = U ( 1 , t ) = 0 (boundary condition) U ( x , 0 ) = U 0 ( x ) (initial condition) {\displaystyle {\begin{cases}U_{t}=U_{xx}\\U(0,t)=U(1,t)=0&{\text{(boundary condition)}}\\U(x,0)=U_{0}(x)&{\text{(initial condition)}}\end{cases}}} One way to numerically solve this equation is to approximate all the derivatives by finite differences. First partition the domain in space using a mesh x 0 , … , x J {\displaystyle x_{0},\dots ,x_{J}} and in time using a mesh t 0 , … , t N {\displaystyle t_{0},\dots ,t_{N}} . Assume a uniform partition both in space and in time, so the difference between two consecutive space points will be h and between two consecutive time points will be k. 
The points u ( x j , t n ) = u j n {\displaystyle u(x_{j},t_{n})=u_{j}^{n}} will represent the numerical approximation of u ( x j , t n ) . {\displaystyle u(x_{j},t_{n}).} === Explicit method === Using a forward difference at time t n {\displaystyle t_{n}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} (FTCS) gives the recurrence equation: u j n + 1 − u j n k = u j + 1 n − 2 u j n + u j − 1 n h 2 . {\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}.} This is an explicit method for solving the one-dimensional heat equation. One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from the other values this way: u j n + 1 = ( 1 − 2 r ) u j n + r u j − 1 n + r u j + 1 n {\displaystyle u_{j}^{n+1}=(1-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}} where r = k / h 2 . {\displaystyle r=k/h^{2}.} So, with this recurrence relation, and knowing the values at time n, one can obtain the corresponding values at time n+1. u 0 n {\displaystyle u_{0}^{n}} and u J n {\displaystyle u_{J}^{n}} must be replaced by the boundary conditions, in this example they are both 0. This explicit method is known to be numerically stable and convergent whenever r ≤ 1 / 2 {\displaystyle r\leq 1/2} . The numerical errors are proportional to the time step and the square of the space step: Δ u = O ( k ) + O ( h 2 ) {\displaystyle \Delta u=O(k)+O(h^{2})} === Implicit method === Using the backward difference at time t n + 1 {\displaystyle t_{n+1}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} (The Backward Time, Centered Space Method "BTCS") gives the recurrence equation: u j n + 1 − u j n k = u j + 1 n + 1 − 2 u j n + 1 + u j − 1 n + 1 h 2 . {\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}.} This is an implicit method for solving the one-dimensional heat equation. 
One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from solving a system of linear equations: ( 1 + 2 r ) u j n + 1 − r u j − 1 n + 1 − r u j + 1 n + 1 = u j n {\displaystyle (1+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=u_{j}^{n}} The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method as it requires solving a system of numerical equations on each time step. The errors are linear over the time step and quadratic over the space step: Δ u = O ( k ) + O ( h 2 ) . {\displaystyle \Delta u=O(k)+O(h^{2}).} === Crank–Nicolson method === Finally, using the central difference at time t n + 1 / 2 {\displaystyle t_{n+1/2}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} ("CTCS") gives the recurrence equation: u j n + 1 − u j n k = 1 2 ( u j + 1 n + 1 − 2 u j n + 1 + u j − 1 n + 1 h 2 + u j + 1 n − 2 u j n + u j − 1 n h 2 ) . {\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {1}{2}}\left({\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}+{\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}\right).} This formula is known as the Crank–Nicolson method. One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from solving a system of linear equations: ( 2 + 2 r ) u j n + 1 − r u j − 1 n + 1 − r u j + 1 n + 1 = ( 2 − 2 r ) u j n + r u j − 1 n + r u j + 1 n {\displaystyle (2+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=(2-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}} The scheme is always numerically stable and convergent but usually more numerically intensive as it requires solving a system of numerical equations on each time step. The errors are quadratic over both the time step and the space step: Δ u = O ( k 2 ) + O ( h 2 ) . {\displaystyle \Delta u=O(k^{2})+O(h^{2}).} === Comparison === To summarize, usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. 
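These trade-offs can be observed directly. The following is a minimal sketch of only the explicit (FTCS) update for the normalized problem, assuming the initial condition U0(x) = sin(πx), for which the exact solution is e^(−π²t) sin(πx):

```python
import math

# Explicit (FTCS) scheme for U_t = U_xx on [0, 1], U(0, t) = U(1, t) = 0.
# Illustrative choice: U0(x) = sin(pi x), exact solution exp(-pi^2 t) sin(pi x).
J = 20                  # number of space intervals, so h = 1/J
h = 1.0 / J
k = 0.4 * h * h         # time step chosen so that r = k/h^2 = 0.4 <= 1/2 (stable)
r = k / (h * h)

u = [math.sin(math.pi * j * h) for j in range(J + 1)]

t = 0.0
while t < 0.1:
    interior = [(1 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])
                for j in range(1, J)]
    u = [0.0] + interior + [0.0]   # boundary values stay 0
    t += k

# Maximum error against the exact solution at the final time reached.
err = max(abs(u[j] - math.exp(-math.pi ** 2 * t) * math.sin(math.pi * j * h))
          for j in range(J + 1))
print(err < 0.01)  # True: the error is O(k) + O(h^2), small on this grid
```

Raising r above 1/2 in this sketch makes the computed values oscillate and grow, illustrating the stability restriction that the implicit and Crank–Nicolson schemes avoid.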
For larger time steps, the implicit scheme works better since it is less computationally demanding. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. Here is an example. The figures below present the solutions given by the above methods to approximate the heat equation U t = α U x x , α = 1 π 2 , {\displaystyle U_{t}=\alpha U_{xx},\quad \alpha ={\frac {1}{\pi ^{2}}},} with the boundary condition U ( 0 , t ) = U ( 1 , t ) = 0. {\displaystyle U(0,t)=U(1,t)=0.} The exact solution is U ( x , t ) = 1 π 2 e − t sin ⁡ ( π x ) . {\displaystyle U(x,t)={\frac {1}{\pi ^{2}}}e^{-t}\sin(\pi x).} == Example: The Laplace operator == The (continuous) Laplace operator in n {\displaystyle n} -dimensions is given by Δ u ( x ) = ∑ i = 1 n ∂ i 2 u ( x ) {\displaystyle \Delta u(x)=\sum _{i=1}^{n}\partial _{i}^{2}u(x)} . The discrete Laplace operator Δ h u {\displaystyle \Delta _{h}u} depends on the dimension n {\displaystyle n} . In 1D the Laplace operator is approximated as Δ u ( x ) = u ″ ( x ) ≈ u ( x − h ) − 2 u ( x ) + u ( x + h ) h 2 =: Δ h u ( x ) . {\displaystyle \Delta u(x)=u''(x)\approx {\frac {u(x-h)-2u(x)+u(x+h)}{h^{2}}}=:\Delta _{h}u(x)\,.} This approximation is usually expressed via the following stencil Δ h = 1 h 2 [ 1 − 2 1 ] {\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}1&-2&1\end{bmatrix}}} and which represents a symmetric, tridiagonal matrix. For an equidistant grid one gets a Toeplitz matrix. The 2D case shows all the characteristics of the more general n-dimensional case. 
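Before turning to the 2D case, the 1D stencil can be assembled into the symmetric tridiagonal (Toeplitz) matrix mentioned above. A minimal sketch with a hypothetical helper:

```python
def laplacian_1d(n, h):
    """Dense n-by-n matrix for the stencil (1/h^2)[1, -2, 1] on an
    equidistant interior grid (homogeneous Dirichlet values dropped)."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -2.0 / h ** 2
        if i > 0:
            A[i][i - 1] = 1.0 / h ** 2
        if i < n - 1:
            A[i][i + 1] = 1.0 / h ** 2
    return A

# The approximation is exact for quadratics: applying the matrix to
# samples of u(x) = x^2 recovers u'' = 2 at interior rows.
h = 0.1
u = [(j * h) ** 2 for j in range(1, 6)]            # x = 0.1, ..., 0.5
A = laplacian_1d(5, h)
Au = [sum(A[i][j] * u[j] for j in range(5)) for i in range(5)]
print(round(Au[2], 10))  # 2.0
```

In practice such matrices are stored in sparse or banded form rather than densely; the dense layout here is only to make the Toeplitz structure visible.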
Each second partial derivative needs to be approximated similar to the 1D case Δ u ( x , y ) = u x x ( x , y ) + u y y ( x , y ) ≈ u ( x − h , y ) − 2 u ( x , y ) + u ( x + h , y ) h 2 + u ( x , y − h ) − 2 u ( x , y ) + u ( x , y + h ) h 2 = u ( x − h , y ) + u ( x + h , y ) − 4 u ( x , y ) + u ( x , y − h ) + u ( x , y + h ) h 2 =: Δ h u ( x , y ) , {\displaystyle {\begin{aligned}\Delta u(x,y)&=u_{xx}(x,y)+u_{yy}(x,y)\\&\approx {\frac {u(x-h,y)-2u(x,y)+u(x+h,y)}{h^{2}}}+{\frac {u(x,y-h)-2u(x,y)+u(x,y+h)}{h^{2}}}\\&={\frac {u(x-h,y)+u(x+h,y)-4u(x,y)+u(x,y-h)+u(x,y+h)}{h^{2}}}\\&=:\Delta _{h}u(x,y)\,,\end{aligned}}} which is usually given by the following stencil Δ h = 1 h 2 [ 1 1 − 4 1 1 ] . {\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}&1\\1&-4&1\\&1\end{bmatrix}}\,.} === Consistency === Consistency of the above-mentioned approximation can be shown for highly regular functions, such as u ∈ C 4 ( Ω ) {\displaystyle u\in C^{4}(\Omega )} . The statement is Δ u − Δ h u = O ( h 2 ) . {\displaystyle \Delta u-\Delta _{h}u={\mathcal {O}}(h^{2})\,.} To prove this, one needs to substitute Taylor Series expansions up to order 3 into the discrete Laplace operator. === Properties === ==== Subharmonic ==== Similar to continuous subharmonic functions one can define subharmonic functions for finite-difference approximations u h {\displaystyle u_{h}} − Δ h u h ≤ 0 . {\displaystyle -\Delta _{h}u_{h}\leq 0\,.} ==== Mean value ==== One can define a general stencil of positive type via [ α N α W − α C α E α S ] , α i > 0 , α C = ∑ i ∈ { N , E , S , W } α i . 
{\displaystyle {\begin{bmatrix}&\alpha _{N}\\\alpha _{W}&-\alpha _{C}&\alpha _{E}\\&\alpha _{S}\end{bmatrix}}\,,\quad \alpha _{i}>0\,,\quad \alpha _{C}=\sum _{i\in \{N,E,S,W\}}\alpha _{i}\,.} If u h {\displaystyle u_{h}} is (discrete) subharmonic then the following mean value property holds u h ( x C ) ≤ ∑ i ∈ { N , E , S , W } α i u h ( x i ) ∑ i ∈ { N , E , S , W } α i , {\displaystyle u_{h}(x_{C})\leq {\frac {\sum _{i\in \{N,E,S,W\}}\alpha _{i}u_{h}(x_{i})}{\sum _{i\in \{N,E,S,W\}}\alpha _{i}}}\,,} where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type. A similar mean value property also holds for the continuous case. ==== Maximum principle ==== For a (discrete) subharmonic function u h {\displaystyle u_{h}} the following holds max Ω h u h ≤ max ∂ Ω h u h , {\displaystyle \max _{\Omega _{h}}u_{h}\leq \max _{\partial \Omega _{h}}u_{h}\,,} where Ω h , ∂ Ω h {\displaystyle \Omega _{h},\partial \Omega _{h}} are discretizations of the continuous domain Ω {\displaystyle \Omega } , respectively the boundary ∂ Ω {\displaystyle \partial \Omega } . A similar maximum principle also holds for the continuous case. == The SBP-SAT method == The SBP-SAT (summation by parts - simultaneous approximation term) method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high order finite differences. The method is based on finite differences where the differentiation operators exhibit summation-by-parts properties. Typically, these operators consist of differentiation matrices with central difference stencils in the interior with carefully chosen one-sided boundary stencils designed to mimic integration-by-parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are "pulled" towards the desired conditions rather than exactly fulfilled. 
If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODEs will exhibit energy behavior similar to that of the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth-order Runge–Kutta method, is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher order finite difference methods, in contrast to, for example, the injection method, which typically will not be stable if high order differentiation operators are used. == See also == == References == == Further reading == K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, An Introduction. Cambridge University Press, 2005. Autar Kaw and E. Eric Kalu, Numerical Methods with Applications, (2008) [1]. Contains a brief, engineering-oriented introduction to FDM (for ODEs) in Chapter 08.07. John Strikwerda (2004). Finite Difference Schemes and Partial Differential Equations (2nd ed.). SIAM. ISBN 978-0-89871-639-9. Smith, G. D. (1985), Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., Oxford University Press Peter Olver (2013). Introduction to Partial Differential Equations. Springer. Chapter 5: Finite differences. ISBN 978-3-319-02099-0. Randall J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007. Sergey Lemeshevsky, Piotr Matus, Dmitriy Poliakov (Eds): "Exact Finite-Difference Schemes", De Gruyter (2016). DOI: https://doi.org/10.1515/9783110491326. Mikhail Shashkov: Conservative Finite-Difference Methods on General Grids, CRC Press, ISBN 0-8493-7375-1 (1996).
Wikipedia/Finite_Difference_Method
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers, the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above—which follows Euclid's original presentation—may require many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). 
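The contrast between the subtraction-based and remainder-based versions can be made concrete by counting steps; the helper names below are illustrative, not standard routines:

```python
def gcd_steps_subtraction(a, b):
    """Euclid's original form: repeatedly replace the larger number by
    the difference with the smaller, counting the replacement steps."""
    steps = 0
    while a != b:
        a, b = max(a, b) - min(a, b), min(a, b)
        steps += 1
    return a, steps

def gcd_steps_remainder(a, b):
    """The shortcut version: replace (a, b) by (b, a mod b) until b = 0."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return a, steps

print(gcd_steps_subtraction(1071, 462))  # (21, 11): eleven subtractions
print(gcd_steps_remainder(1071, 462))    # (21, 3):  three divisions
```

Each division step bundles an entire run of equal subtractions into one operation, which is why the remainder form is dramatically faster when one number is much larger than the other.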
With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains. == Background: greatest common divisor == The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for GCD include greatest common factor (GCF), highest common factor (HCF), highest common divisor (HCD), and greatest common measure (GCM). 
The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b), although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD. If gcd(a, b) = 1, then a and b are said to be coprime (or relatively prime). This property does not imply that a or b are themselves prime numbers. For example, 6 and 35 factor as 6 = 2 × 3 and 35 = 5 × 7, so they are not prime, but their prime factors are different, so 6 and 35 are coprime, with no common factors other than 1. Let g = gcd(a, b). Since a and b are both multiples of g, they can be written a = mg and b = ng, and there is no larger number G > g for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c. The greatest common divisor can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The GCD g is the largest value of c for which this is possible. For illustration, a 24×60 rectangular area can be divided into a grid of: 1×1 squares, 2×2 squares, 3×3 squares, 4×4 squares, 6×6 squares or 12×12 squares. Therefore, 12 is the GCD of 24 and 60. A 24×60 rectangular area can be divided into a grid of 12×12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5). The greatest common divisor of two numbers a and b is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both a and b. 
For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the GCD of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors (with 3 repeated since 3 × 3 divides both). If two numbers have no common prime factors, their GCD is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility. Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form ua + vb where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of ua + vb). The equivalence of this GCD definition with the other definitions is described below. The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example, gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b). 
Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers. === Procedure === The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers {\displaystyle r_{-2}=a} and {\displaystyle r_{-1}=b} and will eventually terminate with the integer zero: {\displaystyle \{r_{-2}=a,\ r_{-1}=b,\ r_{0},\ r_{1},\ \cdots ,\ r_{n-1},\ r_{n}=0\}} with {\displaystyle r_{k+1}<r_{k}}. The integer {\displaystyle r_{n-1}} will then be the GCD, and we can state {\displaystyle {\text{gcd}}(a,b)=r_{n-1}}. The algorithm constructs each intermediate remainder {\displaystyle r_{k}} via division-with-remainder on the preceding pair {\displaystyle (r_{k-2},\ r_{k-1})}, by finding an integer quotient {\displaystyle q_{k}} so that {\displaystyle r_{k-2}=q_{k}\cdot r_{k-1}+r_{k}{\text{, with }}\ r_{k-1}>r_{k}\geq 0.} Because the sequence of non-negative integers {\displaystyle \{r_{k}\}} is strictly decreasing, it must eventually terminate: each {\displaystyle r_{k}} is a non-negative integer strictly smaller than the preceding {\displaystyle r_{k-1}}, so the sequence cannot decrease forever, and the algorithm will always terminate at the nth step with {\displaystyle r_{n}} equal to zero. To illustrate, suppose the GCD of 1071 and 462 is requested.
The sequence is initially {\displaystyle \{r_{-2}=1071,\ r_{-1}=462\}} and, in order to find {\displaystyle r_{0}}, we need to find integers {\displaystyle q_{0}} and {\displaystyle r_{0}<r_{-1}} such that {\displaystyle 1071=q_{0}\cdot 462+r_{0}}. The quotient is {\displaystyle q_{0}=2} since {\displaystyle 1071=2\cdot 462+147}. This determines {\displaystyle r_{0}=147}, and so the sequence is now {\displaystyle \{1071,\ 462,\ r_{0}=147\}}. The next step is to continue the sequence to find {\displaystyle r_{1}} by finding integers {\displaystyle q_{1}} and {\displaystyle r_{1}<r_{0}} such that {\displaystyle 462=q_{1}\cdot 147+r_{1}}. The quotient is {\displaystyle q_{1}=3} since {\displaystyle 462=3\cdot 147+21}. This determines {\displaystyle r_{1}=21}, and so the sequence is now {\displaystyle \{1071,\ 462,\ 147,\ r_{1}=21\}}. The next step is to continue the sequence to find {\displaystyle r_{2}} by finding integers {\displaystyle q_{2}} and {\displaystyle r_{2}<r_{1}} such that {\displaystyle 147=q_{2}\cdot 21+r_{2}}. The quotient is {\displaystyle q_{2}=7} since {\displaystyle 147=7\cdot 21+0}. This determines {\displaystyle r_{2}=0}, and so the sequence is completed as {\displaystyle \{1071,\ 462,\ 147,\ 21,\ r_{2}=0\}}, as no further non-negative integer smaller than {\displaystyle 0} can be found. The penultimate remainder {\displaystyle 21} is therefore the requested GCD: {\displaystyle {\text{gcd}}(1071,\ 462)=21.} We can generalize slightly by dropping any ordering requirement on the initial two values {\displaystyle a} and {\displaystyle b}.
If {\displaystyle a=b}, the algorithm may continue and trivially find that {\displaystyle {\text{gcd}}(a,\ a)=a}, as the sequence of remainders will be {\displaystyle \{a,\ a,\ 0\}}. If {\displaystyle a<b}, then we can also continue, since {\displaystyle a\equiv 0\cdot b+a}, suggesting the next remainder should be {\displaystyle a} itself, and the sequence is {\displaystyle \{a,\ b,\ a,\ \cdots \}}. Normally, this would be invalid because it breaks the requirement {\displaystyle r_{0}<r_{-1}}, but now we have {\displaystyle a<b} by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate, because the next remainder will always satisfy {\displaystyle r_{0}<b} and everything continues as above. The only modifications that need to be made are that {\displaystyle r_{k}<r_{k-1}} only for {\displaystyle k\geq 0}, and that the sub-sequence of non-negative integers {\displaystyle \{r_{k-1}\}} for {\displaystyle k\geq 0} is strictly decreasing, therefore excluding {\displaystyle a=r_{-2}} from both statements. === Proof of validity === The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder rN−1 is shown to divide both a and b. Since it is a common divisor, it must be less than or equal to the greatest common divisor g. In the second step, it is shown that any common divisor of a and b, including g, must divide rN−1; therefore, g must be less than or equal to rN−1. These two opposite inequalities imply rN−1 = g. To demonstrate that rN−1 divides both a and b (the first step), note that rN−1 divides its predecessor rN−2, since rN−2 = qN rN−1 and the final remainder rN is zero.
rN−1 also divides its next predecessor rN−3, since rN−3 = qN−1 rN−2 + rN−1 and rN−1 divides both terms on the right-hand side of the equation. Iterating the same argument, rN−1 divides all the preceding remainders, including a and b. None of the preceding remainders rN−2, rN−3, etc. divide a and b, since they leave a remainder. Since rN−1 is a common divisor of a and b, rN−1 ≤ g. In the second step, any natural number c that divides both a and b (in other words, any common divisor of a and b) divides the remainders rk. By definition, a and b can be written as multiples of c: a = mc and b = nc, where m and n are natural numbers. Therefore, c divides the initial remainder r0, since r0 = a − q0b = mc − q0nc = (m − q0n)c. An analogous argument shows that c also divides the subsequent remainders r1, r2, etc. Therefore, the greatest common divisor g must divide rN−1, which implies that g ≤ rN−1. Since the first part of the argument showed the reverse (rN−1 ≤ g), it follows that g = rN−1. Thus, g is the greatest common divisor of all the succeeding pairs: g = gcd(a, b) = gcd(b, r0) = gcd(r0, r1) = ... = gcd(rN−2, rN−1) = rN−1. === Worked example === For illustration, the Euclidean algorithm can be used to find the greatest common divisor of a = 1071 and b = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted (q0 = 2), leaving a remainder of 147: 1071 = 2 × 462 + 147. Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted (q1 = 3), leaving a remainder of 21: 462 = 3 × 147 + 21. Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted (q2 = 7), leaving no remainder: 147 = 7 × 21 + 0. Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization above.
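The division steps of this worked example can be traced programmatically; a minimal sketch in Python (euclid_steps is an illustrative name):

```python
def euclid_steps(a, b):
    """Trace the Euclidean algorithm, returning the list of division
    steps (a, b, q, r) with a = q*b + r, together with the GCD."""
    steps = []
    while b != 0:
        q, r = divmod(a, b)          # quotient and remainder in one call
        steps.append((a, b, q, r))
        a, b = b, r                  # the pair shifts down one step
    return steps, a

steps, g = euclid_steps(1071, 462)
assert g == 21
assert steps == [(1071, 462, 2, 147), (462, 147, 3, 21), (147, 21, 7, 0)]
```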
In tabular form, the steps are:

Step k | Equation            | Quotient qk | Remainder rk
0      | 1071 = q0 462 + r0  | 2           | 147
1      | 462 = q1 147 + r1   | 3           | 21
2      | 147 = q2 21 + r2    | 7           | 0

=== Visualization === The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an a×b rectangle with square tiles exactly, where a is the larger of the two numbers. We first attempt to tile the rectangle using b×b square tiles; however, this leaves an r0×b residual rectangle untiled, where r0 < b. We then attempt to tile the residual rectangle with r0×r0 square tiles. This leaves a second residual rectangle r1×r0, which we attempt to tile using r1×r1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21×21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green). === Euclidean division === At every step k, the Euclidean algorithm computes a quotient qk and remainder rk from two numbers rk−1 and rk−2: rk−2 = qk rk−1 + rk, where rk is non-negative and strictly less than the absolute value of rk−1. The theorem which underlies the definition of Euclidean division ensures that such a quotient and remainder always exist and are unique. In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, rk−1 is subtracted from rk−2 repeatedly until the remainder rk is smaller than rk−1. After that, rk and rk−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder.
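The saving from collapsing runs of subtractions into single division steps can be sketched in Python; the step counters are for illustration only:

```python
def gcd_by_subtraction(a, b):
    """Euclid's original form for positive integers: repeatedly subtract
    the smaller value from the larger until the two are equal."""
    steps = 0
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
        steps += 1
    return a, steps

def gcd_by_division(a, b):
    """Each modulo operation collapses a whole run of subtractions
    into a single division step."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return a, steps

assert gcd_by_subtraction(1071, 462) == (21, 11)   # 11 subtractions
assert gcd_by_division(1071, 462) == (21, 3)       # only 3 divisions
```

On (1071, 462) the subtraction form needs 11 steps where the division form needs 3; the gap widens rapidly when one input is much larger than the other.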
Thus the iteration of the Euclidean algorithm becomes simply rk = rk−2 mod rk−1. === Implementations === Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

At the beginning of the kth iteration, the variable b holds the latest remainder rk−1, whereas the variable a holds its predecessor, rk−2. The step b := a mod b is equivalent to the above recursion formula rk ≡ rk−2 mod rk−1. The temporary variable t holds the value of rk−1 while the next remainder rk is being calculated. At the end of the loop iteration, the variable b holds the remainder rk, whereas the variable a holds its predecessor, rk−1. (If negative inputs are allowed, or if the mod function may return negative values, the last line must be replaced with return abs(a).) In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when a = b:

function gcd(a, b)
    while a ≠ b
        if a > b
            a := a − b
        else
            b := b − a
    return a

The variables a and b alternate holding the previous remainders rk−1 and rk−2. Assume that a is larger than b at the beginning of an iteration; then a equals rk−2, since rk−2 > rk−1. During the loop iteration, a is reduced by multiples of the previous remainder b until a is smaller than b. Then a is the next remainder rk. Then b is reduced by multiples of a until it is again smaller than a, giving the next remainder rk+1, and so on. The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd(rN−1, 0) = rN−1.
function gcd(a, b)
    if b = 0
        return a
    else
        return gcd(b, a mod b)

(As above, if negative inputs are allowed, or if the mod function may return negative values, the instruction return a must be replaced by return max(a, −a).) For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21. === Method of least absolute remainders === In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation rk−2 = qk rk−1 + rk assumed that |rk−1| > rk > 0. However, an alternative negative remainder ek can be computed: rk−2 = (qk + 1) rk−1 + ek if rk−1 > 0, or rk−2 = (qk − 1) rk−1 + ek if rk−1 < 0. If rk is replaced by ek whenever |ek| < |rk|, then one gets a variant of the Euclidean algorithm such that |rk| ≤ |rk−1| / 2 at each step. Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for all inputs a and b, the number of steps is minimal if and only if qk is chosen such that {\displaystyle \left|{\frac {r_{k+1}}{r_{k}}}\right|<{\frac {1}{\varphi }}\sim 0.618,} where {\displaystyle \varphi } is the golden ratio. == Historical development == The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's Elements (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers.
But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths a and b corresponds to the greatest length g that measures a and b evenly; in other words, the lengths a and b are both integer multiples of the length g. The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his Elements. The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle. Claude Brezinski, following remarks by Pappus of Alexandria, credits the algorithm to Theaetetus (c. 417 – c. 369 BC). Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book Sunzi Suanjing, the general solution was published by Qin Jiushao in his 1247 book Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections). The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624). 
In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently. In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his Disquisitiones Arithmeticae (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals. Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval. 
The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm. In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called The Game of Euclid, which has an optimal strategy. The players begin with two piles of a and b stones. The players take turns removing m multiples of the smaller pile from the larger. Thus, if the two piles consist of x and y stones, where x is larger than y, the next player can reduce the larger pile from x stones to x − my stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones. == Mathematical applications == === Bézout's identity === Bézout's identity states that the greatest common divisor g of two integers a and b can be represented as a linear sum of the original two numbers a and b. In other words, it is always possible to find integers s and t such that g = sa + tb. The integers s and t can be calculated from the quotients q0, q1, etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, g can be expressed in terms of the quotient qN−1 and the two preceding remainders, rN−2 and rN−3: g = rN−1 = rN−3 − qN−1 rN−2. Those two remainders can be likewise expressed in terms of their quotients and preceding remainders, rN−2 = rN−4 − qN−2 rN−3 and rN−3 = rN−5 − qN−3 rN−4. Substituting these formulae for rN−2 and rN−3 into the first equation yields g as a linear sum of the remainders rN−4 and rN−5. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers a and b are reached: r2 = r0 − q2 r1 r1 = b − q1 r0 r0 = a − q0 b. After all the remainders r0, r1, etc. 
have been substituted, the final equation expresses g as a linear sum of a and b, so that g = sa + tb. The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains. === Principal ideals and related problems === Bézout's identity provides yet another definition of the greatest common divisor g of two numbers a and b. Consider the set of all numbers ua + vb, where u and v are any two integers. Since a and b are both divisible by g, every number in the set is divisible by g. In other words, every number of the set is an integer multiple of g. This is true for every common divisor of a and b. However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing u = s and v = t gives g. A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by g. Conversely, any multiple m of g can be obtained by choosing u = ms and v = mt, where s and t are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by m, mg = msa + mtb. Therefore, the set of all numbers ua + vb is equivalent to the set of multiples m of g. In other words, the set of all possible sums of integer multiples of two numbers (a and b) is equivalent to the set of multiples of gcd(a, b). The GCD is said to be the generator of the ideal of a and b. This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal). Certain problems can be solved using this result. For example, consider two measuring cups of volume a and b. By adding/subtracting u multiples of the first cup and v multiples of the second cup, any volume ua + vb can be measured out. These volumes are all multiples of g = gcd(a, b). 
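The claim that the set of combinations ua + vb is exactly the set of multiples of g can be checked by brute force on a small example (the search window below is arbitrary):

```python
from math import gcd

a, b = 12, 42
g = gcd(a, b)          # g = 6

# All values u*a + v*b for u, v in a small (arbitrary) search window.
combos = {u * a + v * b for u in range(-20, 21) for v in range(-20, 21)}

# Every integer linear combination of a and b is a multiple of g ...
assert all(n % g == 0 for n in combos)
# ... and the smallest positive combination is g itself (Bézout's identity).
assert min(n for n in combos if n > 0) == g
```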
=== Extended Euclidean algorithm === The integers s and t of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm:

sk = sk−2 − qk sk−1
tk = tk−2 − qk tk−1

with the starting values

s−2 = 1, t−2 = 0
s−1 = 0, t−1 = 1.

Using this recursion, Bézout's integers s and t are given by s = sN and t = tN, where N + 1 is the step on which the algorithm terminates with rN+1 = 0. The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step k − 1 of the algorithm; in other words, assume that rj = sj a + tj b for all j less than k. The kth step of the algorithm gives the equation rk = rk−2 − qk rk−1. Since the recursion formula has been assumed to be correct for rk−2 and rk−1, they may be expressed in terms of the corresponding s and t variables: rk = (sk−2 a + tk−2 b) − qk(sk−1 a + tk−1 b). Rearranging this equation yields the recursion formula for step k, as required: rk = sk a + tk b = (sk−2 − qk sk−1) a + (tk−2 − qk tk−1) b. === Matrix method === The integers s and t can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm {\displaystyle {\begin{aligned}a&=q_{0}b+r_{0}\\b&=q_{1}r_{0}+r_{1}\\&\,\,\,\vdots \\r_{N-2}&=q_{N}r_{N-1}+0\end{aligned}}} can be written as a product of 2×2 quotient matrices multiplying a two-dimensional remainder vector {\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}b\\r_{0}\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}q_{1}&1\\1&0\end{pmatrix}}{\begin{pmatrix}r_{0}\\r_{1}\end{pmatrix}}=\cdots =\prod _{i=0}^{N}{\begin{pmatrix}q_{i}&1\\1&0\end{pmatrix}}{\begin{pmatrix}r_{N-1}\\0\end{pmatrix}}\,.} Let M represent the product of all the quotient matrices {\displaystyle \mathbf {M} ={\begin{pmatrix}m_{11}&m_{12}\\m_{21}&m_{22}\end{pmatrix}}=\prod _{i=0}^{N}{\begin{pmatrix}q_{i}&1\\1&0\end{pmatrix}}={\begin{pmatrix}q_{0}&1\\1&0\end{pmatrix}}{\begin{pmatrix}q_{1}&1\\1&0\end{pmatrix}}\cdots {\begin{pmatrix}q_{N}&1\\1&0\end{pmatrix}}\,.} This simplifies the Euclidean algorithm to the form {\displaystyle {\begin{pmatrix}a\\b\end{pmatrix}}=\mathbf {M} {\begin{pmatrix}r_{N-1}\\0\end{pmatrix}}=\mathbf {M} {\begin{pmatrix}g\\0\end{pmatrix}}\,.} To express g as a linear sum of a and b, both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1)N+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M: {\displaystyle {\begin{pmatrix}g\\0\end{pmatrix}}=\mathbf {M} ^{-1}{\begin{pmatrix}a\\b\end{pmatrix}}=(-1)^{N+1}{\begin{pmatrix}m_{22}&-m_{12}\\-m_{21}&m_{11}\end{pmatrix}}{\begin{pmatrix}a\\b\end{pmatrix}}\,.} Since the top equation gives g = (−1)N+1 (m22 a − m12 b), the two integers of Bézout's identity are s = (−1)N+1m22 and t = (−1)Nm12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
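Both the s, t recurrences and the matrix formulation can be sketched in Python (function names are illustrative; the variable names follow the text):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b, using the
    recurrences s_k = s_{k-2} - q_k*s_{k-1}, t_k = t_{k-2} - q_k*t_{k-1}."""
    s_prev, s_cur = 1, 0   # s_{-2}, s_{-1}
    t_prev, t_cur = 0, 1   # t_{-2}, t_{-1}
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        s_prev, s_cur = s_cur, s_prev - q * s_cur
        t_prev, t_cur = t_cur, t_prev - q * t_cur
    return a, s_prev, t_prev

def quotient_matrix_product(a, b):
    """Accumulate the product M of the quotient matrices [[q, 1], [1, 0]];
    M satisfies (a, b)^T = M * (g, 0)^T."""
    m11, m12, m21, m22 = 1, 0, 0, 1          # start from the identity
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        # right-multiply M by [[q, 1], [1, 0]]
        m11, m12 = m11 * q + m12, m11
        m21, m22 = m21 * q + m22, m21
    return (m11, m12, m21, m22), a

g, s, t = extended_gcd(1071, 462)
assert (g, s, t) == (21, -3, 7) and s * 1071 + t * 462 == g

(m11, m12, m21, m22), g2 = quotient_matrix_product(1071, 462)
assert (m11 * g2, m21 * g2) == (1071, 462)   # reconstructs (a, b) from (g, 0)
```

For 1071 and 462 the quotients 2, 3, 7 give M = [[51, 7], [22, 3]], and indeed 51 × 21 = 1071 and 22 × 21 = 462.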
=== Euclid's lemma and unique factorization === Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number L can be written as a product of two factors u and v, that is, L = uv. If another number w also divides L but is coprime with u, then w must divide v, by the following argument: If the greatest common divisor of u and w is 1, then integers s and t can be found such that 1 = su + tw by Bézout's identity. Multiplying both sides by v gives the relation: v = suv + twv = sL + twv Since w divides both terms on the right-hand side, it must also divide the left-hand side, v. This result is known as Euclid's lemma. Specifically, if a prime number divides L, then it must divide at least one factor of L. Conversely, if a number w is coprime to each of a series of numbers a1, a2, ..., an, then w is also coprime to their product, a1 × a2 × ... × an. Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of L into m and n prime factors, respectively L = p1p2...pm = q1q2...qn . Since each prime p divides L by assumption, it must also divide one of the q factors; since each q is prime as well, it must be that p = q. Iteratively dividing by the p factors shows that each p has an equal counterpart q; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below. === Linear Diophantine equations === Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical linear Diophantine equation seeks integers x and y such that ax + by = c where a, b and c are given integers. 
This can be written as an equation for x in modular arithmetic: ax ≡ c mod b. Let g be the greatest common divisor of a and b. Both terms in ax + by are divisible by g; therefore, c must also be divisible by g, or the equation has no solutions. By dividing both sides by c/g, the equation can be reduced to Bézout's identity sa + tb = g, where s and t can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, x1 = s (c/g) and y1 = t (c/g). In general, a linear Diophantine equation has either no solutions or an infinite number of solutions. To find the latter, consider two solutions, (x1, y1) and (x2, y2), where ax1 + by1 = c = ax2 + by2, or equivalently a(x1 − x2) = b(y2 − y1). Dividing by g gives (a/g)(x1 − x2) = (b/g)(y2 − y1); since a/g and b/g are coprime, b/g must divide x1 − x2. Therefore, the smallest difference between two x solutions is b/g, whereas the smallest difference between two y solutions is a/g. Thus, the solutions may be expressed as

x = x1 − bu/g
y = y1 + au/g.

By allowing u to vary over all possible integers, an infinite family of solutions can be generated from a single solution (x1, y1). If the solutions are required to be positive integers (x > 0, y > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system). === Multiplicative inverses and the RSA algorithm === A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic.
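The Diophantine recipe just described, scale the Bézout coefficients by c/g and then shift by multiples of b/g and a/g, can be sketched as follows (solve_diophantine is an illustrative name):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_diophantine(a, b, c, count=3):
    """Return `count` solutions (x, y) of a*x + b*y = c, or None if none exist."""
    g, s, t = extended_gcd(a, b)
    if c % g != 0:
        return None                          # solvable only if g divides c
    x1, y1 = s * (c // g), t * (c // g)      # one particular solution
    # the full family: x = x1 - (b/g)*u, y = y1 + (a/g)*u
    return [(x1 - (b // g) * u, y1 + (a // g) * u) for u in range(count)]

sols = solve_diophantine(6, 15, 9)
assert sols[0] == (-6, 3)
assert all(6 * x + 15 * y == 9 for x, y in sols)
assert solve_diophantine(6, 15, 7) is None   # gcd(6, 15) = 3 does not divide 7
```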
In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime p; using more sophisticated definitions, they can also be defined for any prime power pm. Finite fields are often called Galois fields, and are abbreviated as GF(p) or GF(pm). In such a field with m numbers, every nonzero element a has a unique modular multiplicative inverse, a−1, such that aa−1 = a−1a ≡ 1 mod m. This inverse can be found by solving the congruence equation ax ≡ 1 mod m, or the equivalent linear Diophantine equation ax + my = 1. This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields. === Chinese remainder theorem === Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer x. Instead of representing an integer by its digits, it may be represented by its remainders xi modulo a set of N coprime numbers mi: {\displaystyle {\begin{aligned}x_{1}&\equiv x{\pmod {m_{1}}}\\x_{2}&\equiv x{\pmod {m_{2}}}\\&\,\,\,\vdots \\x_{N}&\equiv x{\pmod {m_{N}}}\,.\end{aligned}}} The goal is to determine x from its N remainders xi. The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus M that is the product of all the individual moduli mi, and define Mi as {\displaystyle M_{i}={\frac {M}{m_{i}}}.} Thus, each Mi is the product of all the moduli except mi. The solution depends on finding N new numbers hi such that {\displaystyle M_{i}h_{i}\equiv 1{\pmod {m_{i}}}\,.} With these numbers hi, any integer x can be reconstructed from its remainders xi by the equation {\displaystyle x\equiv (x_{1}M_{1}h_{1}+x_{2}M_{2}h_{2}+\cdots +x_{N}M_{N}h_{N}){\pmod {M}}\,.} Since these numbers hi are the multiplicative inverses of the Mi, they may be found using Euclid's algorithm as described in the previous subsection. === Stern–Brocot tree === The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree. The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number a/b can be found by computing gcd(a,b) using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child.
The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b. This fact can be used to prove that each positive rational number appears exactly once in this tree. For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice: gcd ( 3 , 4 ) ← = gcd ( 3 , 1 ) → = gcd ( 2 , 1 ) → = gcd ( 1 , 1 ) . {\displaystyle {\begin{aligned}&\gcd(3,4)&\leftarrow \\={}&\gcd(3,1)&\rightarrow \\={}&\gcd(2,1)&\rightarrow \\={}&\gcd(1,1).\end{aligned}}} The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root. === Continued fractions === The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form a b = q 0 + r 0 b b r 0 = q 1 + r 1 r 0 r 0 r 1 = q 2 + r 2 r 1 ⋮ r k − 2 r k − 1 = q k + r k r k − 1 ⋮ r N − 2 r N − 1 = q N . {\displaystyle {\begin{aligned}{\frac {a}{b}}&=q_{0}+{\frac {r_{0}}{b}}\\{\frac {b}{r_{0}}}&=q_{1}+{\frac {r_{1}}{r_{0}}}\\{\frac {r_{0}}{r_{1}}}&=q_{2}+{\frac {r_{2}}{r_{1}}}\\&\,\,\,\vdots \\{\frac {r_{k-2}}{r_{k-1}}}&=q_{k}+{\frac {r_{k}}{r_{k-1}}}\\&\,\,\,\vdots \\{\frac {r_{N-2}}{r_{N-1}}}&=q_{N}\,.\end{aligned}}} The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form a b = q 0 + 1 q 1 + r 1 r 0 . {\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {r_{1}}{r_{0}}}}}\,.} The third equation may be used to substitute the denominator term r1/r0, yielding a b = q 0 + 1 q 1 + 1 q 2 + r 2 r 1 . 
{\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {1}{q_{2}+{\cfrac {r_{2}}{r_{1}}}}}}}\,.} The final ratio of remainders rk/rk−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction a b = q 0 + 1 q 1 + 1 q 2 + 1 ⋱ + 1 q N = [ q 0 ; q 1 , q 2 , … , q N ] . {\displaystyle {\frac {a}{b}}=q_{0}+{\cfrac {1}{q_{1}+{\cfrac {1}{q_{2}+{\cfrac {1}{\ddots +{\cfrac {1}{q_{N}}}}}}}}}=[q_{0};q_{1},q_{2},\ldots ,q_{N}]\,.} In the worked example above, the gcd(1071, 462) was calculated, and the quotients qk were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written 1071 462 = 2 + 1 3 + 1 7 = [ 2 ; 3 , 7 ] {\displaystyle {\frac {1071}{462}}=2+{\cfrac {1}{3+{\cfrac {1}{7}}}}=[2;3,7]} as can be confirmed by calculation. === Factorization algorithms === Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm. == Algorithmic efficiency == The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input (u, v) is bounded by v; later he improved this to v/2 + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. 
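The quotient sequence that drives the algorithm is exactly the continued-fraction expansion just described; a short sketch reproducing the [2; 3, 7] example:

```python
def quotients(a, b):
    """Return the quotients q_k produced by the Euclidean algorithm on (a, b)."""
    qs = []
    while b:
        qs.append(a // b)   # q_k is the integer quotient at step k
        a, b = b, a % b     # replace (a, b) by (b, remainder)
    return qs

print(quotients(1071, 462))  # → [2, 3, 7], i.e. 1071/462 = [2; 3, 7]
```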
Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number h of base-10 digits of the smaller number b. In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also O(h). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as O(h2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also O(h2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD. === Number of steps === The number of steps to calculate the GCD of two natural numbers, a and b, may be denoted by T(a, b). If g is the GCD of a and b, then a = mg and b = ng for two coprime numbers m and n. Then T(a, b) = T(m, n) as may be seen by dividing all the steps in the Euclidean algorithm by g. By the same argument, the number of steps remains the same if a and b are multiplied by a common factor w: T(a, b) = T(wa, wb). Therefore, the number of steps T may vary dramatically between neighboring pairs of numbers, such as T(a, b) and T(a, b + 1), depending on the size of the two GCDs. The recursive nature of the Euclidean algorithm gives another equation T(a, b) = 1 + T(b, r0) = 2 + T(r0, r1) = … = N + T(rN−2, rN−1) = N + 1 where T(x, 0) = 0 by assumption. ==== Worst-case ==== If the Euclidean algorithm requires N steps for a pair of natural numbers a > b > 0, the smallest values of a and b for which this is true are the Fibonacci numbers FN+2 and FN+1, respectively. 
More precisely, if the Euclidean algorithm requires N steps for the pair a > b, then one has a ≥ FN+2 and b ≥ FN+1. This can be shown by induction. If N = 1, b divides a with no remainder; the smallest natural numbers for which this is true are b = 1 and a = 2, which are F2 and F3, respectively. Now assume that the result holds for all values of N up to M − 1. The first step of the M-step algorithm is a = q0b + r0, and the Euclidean algorithm requires M − 1 steps for the pair b > r0. By the induction hypothesis, one has b ≥ FM+1 and r0 ≥ FM. Therefore, a = q0b + r0 ≥ b + r0 ≥ FM+1 + FM = FM+2, which is the desired inequality. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires N steps, then b is greater than or equal to FN+1, which in turn is greater than or equal to φN−1, where φ is the golden ratio. Since b ≥ φN−1, then N − 1 ≤ logφb. Since log10φ > 1/5, (N − 1)/5 < log10φ logφb = log10b. Thus, N ≤ 5 log10b, and the Euclidean algorithm always needs O(h) divisions, where h is the number of digits in the smaller number b. ==== Average ==== The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time T(a) required to calculate the GCD of a given number a and a smaller natural number b chosen with equal probability from the integers 0 to a − 1 T ( a ) = 1 a ∑ 0 ≤ b < a T ( a , b ) . {\displaystyle T(a)={\frac {1}{a}}\sum _{0\leq b<a}T(a,b).} However, since T(a, b) fluctuates dramatically with the GCD of the two numbers, the averaged function T(a) is likewise "noisy".
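The step count T(a, b) and the Fibonacci worst case are easy to check empirically; a minimal sketch (the function name T mirrors the notation above):

```python
def T(a, b):
    """Number of division steps the Euclidean algorithm takes on (a, b)."""
    n = 0
    while b:
        a, b = b, a % b
        n += 1
    return n

# N steps are needed for the pair (F_{N+2}, F_{N+1}); e.g. N = 5 for (13, 8),
# since F_6 = 8 and F_7 = 13.
print(T(13, 8))      # → 5
print(T(1071, 462))  # → 3, matching the quotients 2, 3, 7 of the worked example
```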
To reduce this noise, a second average τ(a) is taken over all numbers coprime with a τ ( a ) = 1 φ ( a ) ∑ 0 ≤ b < a gcd ( a , b ) = 1 T ( a , b ) . {\displaystyle \tau (a)={\frac {1}{\varphi (a)}}\sum _{\begin{smallmatrix}0\leq b<a\\\gcd(a,b)=1\end{smallmatrix}}T(a,b).} There are φ(a) coprime integers less than a, where φ is Euler's totient function. This tau average grows smoothly with a τ ( a ) = 12 π 2 ln ⁡ 2 ln ⁡ a + C + O ( a − 1 / 6 + ε ) {\displaystyle \tau (a)={\frac {12}{\pi ^{2}}}\ln 2\ln a+C+O(a^{-1/6+\varepsilon })} with the residual error being of order a^(−1/6+ε) for arbitrarily small ε > 0. The constant C in this formula is called Porter's constant and equals C = − 1 2 + 6 ln ⁡ 2 π 2 ( 4 γ − 24 π 2 ζ ′ ( 2 ) + 3 ln ⁡ 2 − 2 ) ≈ 1.467 {\displaystyle C=-{\frac {1}{2}}+{\frac {6\ln 2}{\pi ^{2}}}\left(4\gamma -{\frac {24}{\pi ^{2}}}\zeta '(2)+3\ln 2-2\right)\approx 1.467} where γ is the Euler–Mascheroni constant and ζ′ is the derivative of the Riemann zeta function. The leading coefficient (12/π²) ln 2 was determined by two independent methods. Since the first average can be calculated from the tau average by summing over the divisors d of a T ( a ) = 1 a ∑ d ∣ a φ ( d ) τ ( d ) {\displaystyle T(a)={\frac {1}{a}}\sum _{d\mid a}\varphi (d)\tau (d)} it can be approximated by the formula T ( a ) ≈ C + 12 π 2 ln ⁡ 2 ( ln ⁡ a − ∑ d ∣ a Λ ( d ) d ) {\displaystyle T(a)\approx C+{\frac {12}{\pi ^{2}}}\ln 2\,{\biggl (}{\ln a}-\sum _{d\mid a}{\frac {\Lambda (d)}{d}}{\biggr )}} where Λ(d) is the von Mangoldt function. A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n Y ( n ) = 1 n 2 ∑ a = 1 n ∑ b = 1 n T ( a , b ) = 1 n ∑ a = 1 n T ( a ) .
{\displaystyle Y(n)={\frac {1}{n^{2}}}\sum _{a=1}^{n}\sum _{b=1}^{n}T(a,b)={\frac {1}{n}}\sum _{a=1}^{n}T(a).} Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n) Y ( n ) ≈ 12 π 2 ln ⁡ 2 ln ⁡ n + 0.06. {\displaystyle Y(n)\approx {\frac {12}{\pi ^{2}}}\ln 2\ln n+0.06.} === Computational expense per step === In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1 rk−2 = qk rk−1 + rk. The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk rk = rk−2 − qk rk−1. The computational expense of dividing h-bit numbers scales as O(h(ℓ + 1)), where ℓ is the length of the quotient. For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to q subtractions, where q is the quotient. If the ratio of a and b is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient q is approximately ln |u/(u − 1)| where u = (q + 1)². For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. Combining the estimated number of steps with the estimated computational expense per step shows that the running time of Euclid's algorithm grows quadratically (O(h²)) with the average number of digits h in the initial two numbers a and b. Let h0, h1, ..., hN−1 represent the number of digits in the successive remainders r0, r1, ..., rN−1.
Since the number of steps N grows linearly with h, the running time is bounded by O ( ∑ i < N h i ( h i − h i + 1 + 2 ) ) ⊆ O ( h ∑ i < N ( h i − h i + 1 + 2 ) ) ⊆ O ( h ( h 0 + 2 N ) ) ⊆ O ( h 2 ) . {\displaystyle O{\Big (}\sum _{i<N}h_{i}(h_{i}-h_{i+1}+2){\Big )}\subseteq O{\Big (}h\sum _{i<N}(h_{i}-h_{i+1}+2){\Big )}\subseteq O(h(h_{0}+2N))\subseteq O(h^{2}).} === Alternative methods === Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined. One inefficient approach to finding the GCD of two natural numbers a and b is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number b. The number of steps of this approach grows linearly with b, or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers a and b. Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency. The binary GCD algorithm is an efficient alternative that replaces division with faster operations by exploiting the binary representation used by computers. Although it also scales as O(h²), it is generally faster than the Euclidean algorithm on real computers. Additional efficiency can be gleaned by examining only the leading digits of the two numbers a and b. The binary algorithm can be extended to other bases (k-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
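A sketch of the binary (Stein's) GCD algorithm mentioned above, which uses only shifts, comparisons, and subtractions:

```python
def binary_gcd(a, b):
    """GCD of two nonnegative integers via Stein's binary algorithm."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:   # both even: factor out a common 2
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:         # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:     # 2 is not a common factor now, so discard it from b
            b >>= 1
        if a > b:             # keep a <= b, both odd
            a, b = b, a
        b -= a                # difference of two odd numbers is even
    return a << shift         # restore the common factors of 2

print(binary_gcd(1071, 462))  # → 21, agreeing with the division-based algorithm
```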
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as O(h log² h log log h). == Generalizations == Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory. === Rational and real numbers === Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his Elements. The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: a = mg and b = ng, where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that sa + tb = 0. If such an equation is possible, a and b are called commensurable lengths, otherwise they are incommensurable lengths. The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders rk are real numbers, although the quotients qk are integers as before. Second, the algorithm is not guaranteed to end in a finite number N of steps.
If it does, the fraction a/b is a rational number, i.e., the ratio of two integers a b = m g n g = m n , {\displaystyle {\frac {a}{b}}={\frac {mg}{ng}}={\frac {m}{n}},} and can be written as a finite continued fraction [q0; q1, q2, ..., qN]. If the algorithm does not stop, the fraction a/b is an irrational number and can be described by an infinite continued fraction [q0; q1, q2, …]. Examples of infinite continued fractions are the golden ratio φ = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios a/b of two real numbers are irrational. An infinite continued fraction may be truncated at a step k [q0; q1, q2, ..., qk] to yield an approximation to a/b that improves as k is increased. The approximation is described by convergents mk/nk; the numerator and denominators are coprime and obey the recurrence relation m k = q k m k − 1 + m k − 2 n k = q k n k − 1 + n k − 2 , {\displaystyle {\begin{aligned}m_{k}&=q_{k}m_{k-1}+m_{k-2}\\n_{k}&=q_{k}n_{k-1}+n_{k-2},\end{aligned}}} where m−1 = n−2 = 1 and m−2 = n−1 = 0 are the initial values of the recursion. The convergent mk/nk is the best rational number approximation to a/b with denominator nk: | a b − m k n k | < 1 n k 2 . {\displaystyle \left|{\frac {a}{b}}-{\frac {m_{k}}{n_{k}}}\right|<{\frac {1}{n_{k}^{2}}}.} === Polynomials === Polynomials in a single variable x can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial g(x) of two polynomials a(x) and b(x) is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. 
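The quotients and convergents of an irrational number can be sketched together: the first function extracts continued-fraction quotients in floating point (reliable only for the first few terms, since rounding error compounds), and the second implements the convergent recurrence with the initial values stated above:

```python
import math

def cf_quotients(x, n):
    """First n continued-fraction quotients of a positive real x (floating point)."""
    qs = []
    for _ in range(n):
        q = math.floor(x)
        qs.append(q)
        x = 1.0 / (x - q)        # invert the fractional part
    return qs

def convergents(qs):
    """Convergents m_k/n_k of [q0; q1, q2, ...] via the recurrence above."""
    m_prev2, m_prev = 0, 1       # m_{-2}, m_{-1}
    n_prev2, n_prev = 1, 0       # n_{-2}, n_{-1}
    out = []
    for q in qs:
        m = q * m_prev + m_prev2
        n = q * n_prev + n_prev2
        out.append((m, n))
        m_prev2, m_prev = m_prev, m
        n_prev2, n_prev = n_prev, n
    return out

print(cf_quotients(math.sqrt(2), 4))  # → [1, 2, 2, 2]
print(convergents([1, 2, 2, 2]))      # → [(1, 1), (3, 2), (7, 5), (17, 12)]
```

The convergents 3/2, 7/5, 17/12 are the familiar rational approximations to √2, each accurate to better than 1/nk², as the bound above guarantees.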
At each step k, a quotient polynomial qk(x) and a remainder polynomial rk(x) are identified to satisfy the recursive equation r k − 2 ( x ) = q k ( x ) r k − 1 ( x ) + r k ( x ) , {\displaystyle r_{k-2}(x)=q_{k}(x)r_{k-1}(x)+r_{k}(x),} where r−2(x) = a(x) and r−1(x) = b(x). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg[rk(x)] < deg[rk−1(x)]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, a(x) and b(x). For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials a ( x ) = x 4 − 4 x 3 + 4 x 2 − 3 x + 14 = ( x 2 − 5 x + 7 ) ( x 2 + x + 2 ) and b ( x ) = x 4 + 8 x 3 + 12 x 2 + 17 x + 6 = ( x 2 + 7 x + 3 ) ( x 2 + x + 2 ) . {\displaystyle {\begin{aligned}a(x)&=x^{4}-4x^{3}+4x^{2}-3x+14=(x^{2}-5x+7)(x^{2}+x+2)\qquad {\text{and}}\\b(x)&=x^{4}+8x^{3}+12x^{2}+17x+6=(x^{2}+7x+3)(x^{2}+x+2).\end{aligned}}} Dividing a(x) by b(x) yields a remainder r0(x) = x3 + (2/3)x2 + (5/3)x − (2/3). In the next step, b(x) is divided by r0(x) yielding a remainder r1(x) = x2 + x + 2. Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization. Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined. The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. 
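The worked quartic example can be verified with a short sketch using exact rational coefficients (coefficient lists run from the highest degree down; the result is normalized to be monic, matching the factor x² + x + 2):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Long division of polynomial a by b; coefficients highest-degree first."""
    r = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(r) >= len(b):
        c = r[0] / b[0]
        q.append(c)
        pad = [Fraction(0)] * (len(r) - len(b))
        r = [rc - c * bc for rc, bc in zip(r, b + pad)][1:]  # leading term cancels
    return q, r

def poly_gcd(a, b):
    """Monic GCD of two polynomials via the Euclidean algorithm."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b:
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:   # strip leading zeros of the remainder
            r = r[1:]
        a, b = b, r
    return [c / a[0] for c in a]  # normalize so the result is monic

# a(x) = x^4 - 4x^3 + 4x^2 - 3x + 14,  b(x) = x^4 + 8x^3 + 12x^2 + 17x + 6
print(poly_gcd([1, -4, 4, -3, 14], [1, 8, 12, 17, 6]))  # coefficients of x^2 + x + 2
```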
Sturm chains in turn have applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF(p) described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials. === Gaussian integers === The Gaussian integers are complex numbers of the form α = u + vi, where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments. The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, we set r−2 = α and r−1 = β, and the task at each step k is to identify a quotient qk and a remainder rk such that r k = r k − 2 − q k r k − 1 , {\displaystyle r_{k}=r_{k-2}-q_{k}r_{k-1},} where every remainder is strictly smaller than its predecessor: |rk| < |rk−1|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients qk are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another.
To do this, a norm function f(u + vi) = u2 + v2 is defined, which converts every Gaussian integer u + vi into an ordinary integer. After each step k of the Euclidean algorithm, the norm of the remainder f(rk) is smaller than the norm of the preceding remainder, f(rk−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd(α, β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±i. Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined. === Euclidean domains === A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring R and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity. The generalized Euclidean algorithm requires a Euclidean function, i.e., a mapping f from R into the set of nonnegative integers such that, for any two nonzero elements a and b in R, there exist q and r in R such that a = qb + r and f(r) < f(b). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. 
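A sketch of the Gaussian-integer algorithm, with a pair (u, v) standing for u + vi; the quotient rounds each coordinate of the exact ratio αβ̄/N(β) to the nearest integer, which keeps the remainder's norm strictly below that of the divisor:

```python
def g_divmod(a, b):
    """Quotient and remainder for Gaussian integers a, b given as (u, v) pairs."""
    nb = b[0] * b[0] + b[1] * b[1]       # norm of b, assumed nonzero
    x = a[0] * b[0] + a[1] * b[1]        # real part of a * conj(b)
    y = a[1] * b[0] - a[0] * b[1]        # imaginary part of a * conj(b)
    q = ((2 * x + nb) // (2 * nb),       # round x/nb and y/nb to nearest integers
         (2 * y + nb) // (2 * nb))
    r = (a[0] - (q[0] * b[0] - q[1] * b[1]),
         a[1] - (q[0] * b[1] + q[1] * b[0]))
    return q, r

def g_gcd(a, b):
    """GCD of two Gaussian integers, unique up to the units ±1, ±i."""
    while b != (0, 0):
        _, r = g_divmod(a, b)
        a, b = b, r
    return a

# (-1 + 5i) = (1 + i)(2 + 3i) and (3 - i) = (1 + i)(1 - 2i), so the GCD is 1 + i
print(g_gcd((-1, 5), (3, -1)))  # → (1, 1), i.e. 1 + i (up to a unit)
```

Rounding each coordinate to within 1/2 makes the remainder's norm at most half that of the divisor, so termination is guaranteed.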
The basic principle of the generalized algorithm is that each step reduces the Euclidean function f inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member. The fundamental theorem of arithmetic applies to any Euclidean domain: any element of a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFDs are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain. The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω = e^(2iπ/n) is an nth root of 1, that is, ωⁿ = 1. Although this approach succeeds for some values of n (such as n = 3, the Eisenstein integers), in general such numbers do not factor uniquely.
This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals. ==== Unique factorization of quadratic integers ==== The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number ω. Thus, they have the form u + vω, where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then ω = D . {\displaystyle \omega ={\sqrt {D}}.} If, however, D does equal a multiple of four plus one, then ω = 1 + D 2 . {\displaystyle \omega ={\frac {1+{\sqrt {D}}}{2}}.} If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases D = −1 and D = −3 yield the Gaussian integers and Eisenstein integers, respectively. If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with D = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds. === Noncommutative rings === The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = dξ and β = dη for some choice of ξ and η in the ring. 
Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written ρ 0 = α − ψ 0 β = ( ξ − ψ 0 η ) δ , {\displaystyle \rho _{0}=\alpha -\psi _{0}\beta =(\xi -\psi _{0}\eta )\delta ,} where ψ0 represents the quotient and ρ0 the remainder. Here the quotient and remainder are chosen so that (if nonzero) the remainder has N(ρ0) < N(β) for a "Euclidean function" N defined analogously to the Euclidean functions of Euclidean domains in the non-commutative case. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ0. The analogous equation for the left divisors would be ρ 0 = α − β ψ 0 = δ ( ξ − η ψ 0 ) . {\displaystyle \rho _{0}=\alpha -\beta \psi _{0}=\delta (\xi -\eta \psi _{0}).} With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ0 (formally, its Euclidean function or "norm") must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ0, so that the algorithm is guaranteed to terminate. Many results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that Γ right = σ α + τ β . {\displaystyle \Gamma _{\text{right}}=\sigma \alpha +\tau \beta .} The analogous identity for the left GCD is nearly the same: Γ left = α σ + β τ . {\displaystyle \Gamma _{\text{left}}=\alpha \sigma +\beta \tau .} Bézout's identity can be used to solve Diophantine equations.
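For ordinary integers, the Bézout coefficients fall out of the extended Euclidean algorithm; a minimal sketch:

```python
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b (Bezout's identity)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r   # same remainders as the plain algorithm
        old_s, s = s, old_s - q * s   # carry the coefficient of a alongside
        old_t, t = t, old_t - q * t   # carry the coefficient of b alongside
    return old_r, old_s, old_t

g, s, t = ext_gcd(1071, 462)
print(g, s, t)             # → 21 -3 7
print(s * 1071 + t * 462)  # → 21
```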
For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way. == See also == Euclidean rhythm, a method for using the Euclidean algorithm to generate musical rhythms == Notes == == References == == Bibliography == Bueso, José; Gómez-Torrecillas, José; Verschoren, Alain (2003). Algorithmic Methods in Non-Commutative Algebra: Applications to Quantum Groups. Mathematical Modelling: Theory and Applications. Vol. 17. Kluwer Academic Publishers, Dordrecht. doi:10.1007/978-94-017-0285-0. ISBN 1-4020-1402-3. MR 2006329. Cohen, H. (1993). A Course in Computational Algebraic Number Theory. New York: Springer-Verlag. ISBN 0-387-55640-0. Cohn, H. (1980). Advanced Number Theory. New York: Dover. ISBN 0-486-64023-X. Cox, D.; Little, J.; O'Shea, D. (1997). Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra (2nd ed.). Springer-Verlag. ISBN 0-387-94680-2. Crandall, R.; Pomerance, C. (2001). Prime Numbers: A Computational Perspective (1st ed.). New York: Springer-Verlag. ISBN 0-387-94777-9. Lejeune Dirichlet, P. G. (1894). Dedekind, Richard (ed.). Vorlesungen über Zahlentheorie (Lectures on Number Theory) (in German). Braunschweig: Vieweg. LCCN 03005859. OCLC 490186017.. See also Vorlesungen über Zahlentheorie Knuth, D. E. (1997). The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.). Addison–Wesley. ISBN 0-201-89684-2. LeVeque, W. J. (1996) [1977]. Fundamentals of Number Theory. New York: Dover. ISBN 0-486-68906-9. Mollin, R. A. (2008). Fundamental Number Theory with Applications (2nd ed.). Boca Raton: Chapman & Hall/CRC. ISBN 978-1-4200-6659-3. Ore, O. (1948). Number Theory and Its History. New York: McGraw–Hill. Rosen, K. H. (2000). Elementary Number Theory and its Applications (4th ed.). Reading, MA: Addison–Wesley. ISBN 0-201-87073-8. Schroeder, M. (2005). 
Number Theory in Science and Communication (4th ed.). Springer-Verlag. ISBN 0-387-15800-6. Stark, H. (1978). An Introduction to Number Theory. MIT Press. ISBN 0-262-69060-8. Stillwell, J. (1997). Numbers and Geometry. New York: Springer-Verlag. ISBN 0-387-98289-2. Stillwell, J. (2003). Elements of Number Theory. New York: Springer-Verlag. ISBN 0-387-95587-9. Tattersall, J. J. (2005). Elementary Number Theory in Nine Chapters. Cambridge: Cambridge University Press. ISBN 978-0-521-85014-8. == External links == Demonstrations of Euclid's algorithm Weisstein, Eric W. "Euclidean Algorithm". MathWorld. Euclid's Algorithm at cut-the-knot Euclid's algorithm at PlanetMath. The Euclidean Algorithm at MathPages Euclid's Game at cut-the-knot Music and Euclid's algorithm
Kyma is a visual programming language for sound design used by musicians, researchers, and sound designers. In Kyma, a user programs a multiprocessor digital signal processor (DSP) by graphically connecting modules on the display of a Macintosh or Windows computer. == Background == Kyma has characteristics of both object-oriented and functional programming languages. The basic unit in Kyma is the Sound object, not the note of traditional music notation. A Sound is defined as: A Sound atom A unary transform T(s) where s is a Sound An n-ary transform T(s1, s2,.., sn), where s1,s2,..sn are Sounds A Sound atom is a source of audio (like a microphone input or a noise generator), a unary transform modifies its argument (for example, a low-pass filter might take a running average of its input), and an n-ary transform combines two or more Sounds (a Mixer, for example, is defined as the sum of its inputs). == History == The first version of Kyma, which computed digital audio samples on a Macintosh 512K, was written in the Smalltalk programming language in 1986 by Carla Scaletti in Champaign, Illinois. By May 1987, Scaletti had partitioned Kyma into graphics and sound generation engines and ported the sound generation code to a digital signal processor called the Platypus, designed by Lippold Haken and Kurt J. Hebel of the CERL Sound Group. In 1987, Scaletti presented a paper on Kyma and demonstrated live digital sound generation on the Platypus at the International Computer Music Conference, where it was identified by electronic synthesis pioneer Bob Moog as a technology to watch in his conference report for Keyboard Magazine: One new language that acknowledges no distinction between sound synthesis and composition is Kyma, a music composition language for the Macintosh that views all elements in a piece of music, from the structure of a single sound to the structure of the entire composition, as objects to be composed.
When the University of Illinois at Urbana-Champaign eliminated the funding for the PLATO laboratory in 1989, Scaletti and Hebel formed Symbolic Sound Corporation to continue developing Kyma and digital audio signal processing hardware.
== Selected filmography ==
Wall-E
War of the Worlds (2005)
Finding Nemo
Star Wars: Episode II – Attack of the Clones
Star Wars: Episode III – Revenge of the Sith
Master and Commander: The Far Side of the World
== Selected discography ==
Zooma (1999) by John Paul Jones
Movement in Still Life (1999) by BT
The Thunderthief (2001) by John Paul Jones
Emotional Technology (2003) by BT
On An Island (2006) by David Gilmour
Today (2006) by Junkie XL
Unidentified Sound Object (2006) by U.S.O. Project
Recombinant Art 01 Black Swan (2009) by Cristian Vogel
ISAM (2011) by Amon Tobin
The Creation of the Universe by Metal Machine Trio (played by Sarth Calhoun)
GRUIS (2016) by Roland Emile Kuit
Bella's Lullaby Critical Mass Remix (2008) composed by Carter Burwell, prod. by Jason Bentley & Tobias Enhus
== References ==
== External links ==
Official website, Symbolic Sound Corporation
Wikipedia/Kyma_(sound_design_language)
A MIDI controller is any hardware or software that generates and transmits Musical Instrument Digital Interface (MIDI) data to MIDI-enabled devices, typically to trigger sounds and control parameters of an electronic music performance. They most often use a musical keyboard to send data about the pitch of notes to play, although a MIDI controller may trigger lighting and other effects. A wind controller has a sensor that converts breath pressure to volume information and lip pressure to control pitch. Controllers for percussion and stringed instruments exist, as well as specialized and experimental devices. Some MIDI controllers are used in association with specific digital audio workstation software. The original MIDI specification has been extended to include a greater range of control features. == Features == MIDI controllers usually do not create or produce musical sounds by themselves. MIDI controllers typically have some type of interface that the performer presses, strikes, blows or touches. This action generates MIDI data (e.g. notes played and their intensity), which can then be transmitted to a MIDI-compatible sound module or synthesizer using a MIDI cable. The sound module or synthesizer in turn produces a sound that is amplified through a loudspeaker. The most commonly used MIDI controller is the electronic musical keyboard. When the keys are played, the MIDI controller sends MIDI data about the pitch of the note, how hard the note was played and its duration. Other common MIDI controllers are wind controllers, which a musician blows into and presses keys on to transmit MIDI data, and electronic drums. The MIDI controller can be populated with any number of sliders, knobs, buttons, pedals and other sensors, and may or may not include a piano keyboard. Many audio control surfaces are MIDI-based and so are essentially MIDI controllers.
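The note data described above has a simple wire format in the MIDI 1.0 specification: a Note On message is a status byte (0x90 plus the channel) followed by two 7-bit data bytes for note number (pitch) and velocity (how hard the key was struck); duration is not encoded, but implied by a later Note Off. A minimal encoder:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Encode a MIDI 1.0 Note On message: status byte 0x90 | channel,
    then note number and velocity, each in the 7-bit range 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Note Off (status 0x80); the gap between Note On and Note Off
    is what determines the note's duration."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) struck firmly on channel 1 (zero-based 0):
assert note_on(0, 60, 100) == bytes([0x90, 60, 100])
```

The same three-byte shape is what a wind controller or drum pad emits; only how the velocity and pitch values are derived from the sensor differs.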
While the most common use of MIDI controllers is to trigger musical sounds and play musical instruments, MIDI controllers are also used to control other MIDI-compatible devices, such as stage lights, digital audio mixers and complex guitar effects units. == Types (hardware and software) == The following are classes of MIDI (Musical Instrument Digital Interface) controller:
The human interface component of a traditional instrument redesigned as a MIDI control device. The most common type of device in this class is the keyboard controller. Such a device provides a musical keyboard and perhaps other actuators (pitch bend and modulation wheels, for example) but produces no sound on its own. It is intended only to drive other MIDI devices. Percussion controllers such as the Roland Octapad fall into this class, as do a variety of wind controllers and guitar-like controllers such as the SynthAxe.
Electronic musical instruments, including synthesizers, samplers, drum machines, and electronic drums, which are used to perform music in real time and are inherently able to transmit a MIDI data stream of the performance.
Pitch-to-MIDI converters, including guitar/synthesizers, which analyze a pitch and convert it into a MIDI signal. There are several devices that do this for the human voice and for monophonic instruments such as flutes, for example.
Traditional instruments such as drums, acoustic pianos, and accordions which are outfitted with sensors and a computer processor that accepts input from the sensors and transmits real-time performance information as MIDI data. The performance information (e.g., which notes or drums are struck, and how hard) is then sent to a module or computer, which converts the data into sounds (e.g., samples or synthesized sounds).
Sequencers, which store and retrieve MIDI data and send the data to MIDI-enabled instruments in order to reproduce a performance.
MIDI Machine Control (MMC) devices such as recording equipment, which transmit messages to aid in the synchronization of MIDI-enabled devices. For example, a recorder may have a feature to index a recording by measure and beat. The sequencer that it controls would stay synchronized with it as the recorder's transport controls are pushed and corresponding MIDI messages transmitted.
MIDI Show Control (MSC) devices such as show controllers, which transmit messages to aid in the operation and cueing of live theatrical and themed entertainment productions. For example, a variety of show control subsystems such as sound consoles, sound playback controllers, virtual audio matrices and switchers, video playback systems, rigging controllers, pyro and lighting control systems directly respond to MSC commands. However, most standalone generic MSC controllers are intended to actuate a generic computerized show control system that has been carefully programmed to produce the complex desired results that the show demands at each moment of the production.
== Performance controllers == MIDI was designed with keyboards in mind, and any controller that is not a keyboard is considered an "alternative" controller. This was seen as a limitation by composers who were not interested in keyboard-based music, but the standard proved flexible, and MIDI compatibility was introduced to other types of controllers, including guitars, wind instruments and drum machines.: 23  === Keyboards === Keyboards are by far the most common type of MIDI controller. These are available in sizes that range from 25-key, 2-octave models, to full-sized 88-key instruments. Some are keyboard-only controllers, though many include other real-time controllers such as sliders, knobs, and wheels. Commonly, there are also connections for sustain and expression pedals. Most keyboard controllers offer the ability to split the playing area into zones, which can be of any desired size and can overlap with each other.
Each zone can be assigned to a different MIDI channel and can be set to play any desired range of notes. This allows a single playing surface to control a number of different devices.: 79–80  MIDI capabilities can also be built into traditional keyboard instruments, such as grand pianos: 82  and Rhodes pianos. Pedal keyboards can operate the pedal tones of a MIDI organ, or can drive a bass synthesizer. === Wind controllers === Wind controllers allow MIDI parts to be played with the same kind of expression and articulation that is available to players of wind and brass instruments. They allow breath and pitch glide control that provide a more versatile kind of phrasing, particularly when playing sampled or physically modeled wind instrument parts.: 95  A typical wind controller has a sensor that converts breath pressure to volume information and may allow pitch control through a lip pressure sensor and a pitch-bend wheel. Some models include a configurable key layout that can emulate different instruments' fingering systems. Examples of such controllers include Akai's Electronic Wind Instrument (EWI) and Electronic Valve Instrument (EVI). The EWI uses a system of keypads and rollers modeled after a traditional woodwind instrument, while the EVI is based on an acoustic brass instrument, and has three switches that emulate a trumpet's valves.: 320–321  Simpler breath controllers are also available. Unlike wind controllers, they do not trigger notes and are intended for use in conjunction with a keyboard or synthesizer. === Drum and percussion controllers === Keyboards can be used to trigger drum sounds, but are impractical for playing repeated patterns such as rolls, due to the length of key travel. After keyboards, drum pads are the next most significant MIDI performance controllers.: 319–320  Drum controllers may be built into drum machines, may be standalone control surfaces, or may emulate the look and feel of acoustic percussion instruments. 
MIDI triggers can also be installed into acoustic drum and percussion instruments. The pads built into drum machines are typically too small and fragile to be played with sticks, and are played with fingers.: 88  Dedicated drum pads such as the Roland Octapad or the DrumKAT are playable with the hands or with sticks. There are also percussion controllers such as the vibraphone-style MalletKAT,: 88–91  and Marimba Lumina. Pads that can trigger a MIDI device can be homemade from a piezoelectric sensor and a practice pad or other piece of foam rubber. === Stringed instrument controllers === A guitar can be fitted with special pickups that digitize the instrument's output and allow it to play a synthesizer's sounds. These assign a separate MIDI channel for each string, and may give the player the choice of triggering the same sound from all six strings or playing a different sound from each.: 92–93  Some models, such as Yamaha's G10, dispense with the traditional guitar body and replace it with electronics.: 320  Other systems, such as Roland's MIDI pickups, are included with or can be retrofitted to a standard instrument. Max Mathews designed a MIDI violin for Laurie Anderson in the mid-1980s, and MIDI-equipped violas, cellos, contrabasses, and mandolins also exist. Other string controllers such as the Starr Labs Ztar use a combination of fretboard keys and strings to trigger notes without needing a MIDI pickup. === Specialized and experimental controllers === DJ digital controllers may be standalone units or may be integrated with a specific piece of software. These typically respond to MIDI clock sync and provide control over mixing, looping, effects, and sample playback. MIDI triggers attached to shoes or clothing are sometimes used by stage performers. The Kroonde Gamma wireless sensor can capture physical motion as MIDI signals. 
Sensors built into a dance floor at the University of Texas at Austin convert dancers' movements into MIDI messages, and David Rokeby's Very Nervous System art installation created music from the movements of passers-through. Software applications exist which enable the use of iOS devices as gesture controllers. Numerous experimental controllers exist which abandon traditional musical interfaces entirely. These include the gesture-controlled Buchla Thunder, sonomes such as the C-Thru Music Axis, which rearrange the scale tones into an isometric layout, and Haken Audio's keyless, touch-sensitive Continuum playing surface. Experimental MIDI controllers may be created from unusual objects, such as an ironing board with heat sensors installed, or a sofa equipped with pressure sensors. GRIDI is a large scale physical MIDI sequencer with embedded LEDs developed by Yuvi Gerstein in 2015, which uses balls as inputs. The Eigenharp controller is a combination of a breath controller, a configurable series of multi-dimensional control keys, and ribbon controllers designed to control its own virtual instrument software. == Auxiliary controllers == Software synthesizers offer great power and versatility, but some players feel that division of attention between a MIDI keyboard and a computer keyboard and mouse robs some of the immediacy from the playing experience. Devices dedicated to real-time MIDI control provide an ergonomic benefit and can provide a greater sense of connection with the instrument than can an interface that is accessed through a mouse and computer keyboard. Controllers may be general-purpose devices that are designed to work with a variety of equipment, or they may be designed to work with a specific piece of software. Examples of the latter include Akai's APC40 controller or Nakedboards MC-8 for Ableton Live, and Korg's MS-20ic controller which is a reproduction of their MS-20 analog synthesizer. 
The MS-20ic controller includes patch cables that can be used to control signal routing in their virtual reproduction of the MS-20 synthesizer and can also control third-party devices. === Control surfaces === Control surfaces are hardware devices that provide a variety of controls that transmit real-time controller messages. These enable software instruments to be programmed without the discomfort of excessive mouse movements, or adjustment of hardware devices without the need to step through layered menus. Buttons, sliders, and knobs are the most common controllers provided, but rotary encoders, transport controls, joysticks, ribbon controllers, vector touchpads in the style of Korg's Kaoss pad, and optical controllers such as Roland's D-Beam may also be present. Control surfaces may be used for mixing, sequencer automation, turntablism, and lighting control. === Specialized real-time controllers === Audio control surfaces often resemble mixing consoles in appearance, and enable a level of hands-on control for changing parameters such as sound levels and effects applied to individual tracks of a multitrack recording or channels supporting a live performance. MIDI footswitches are commonly used to send MIDI program change commands to effects devices but may be combined with a pedalboard for more detailed adjustment of effects units. Pedals are available in the form of on/off switches, either momentary or latching, or as expression pedals whose position determines the value of a MIDI continuous controller. Drawbar controllers are for use with MIDI and virtual organs. Along with a set of drawbars for timbre control, they may provide controls for standard organ effects such as Leslie speaker speed, vibrato and chorus.
== Use in a data stream == Modifiers such as modulation wheels, pitch bend wheels, sustain pedals, pitch sliders, buttons, knobs, faders, switches, ribbon controllers, etc., alter an instrument's state of operation, and thus can be used to modify sounds or other parameters of music performance in real time via MIDI connections. Some controllers, such as pitch bend, are special. Whereas the data range of most continuous controllers (such as volume, for example) consists of 128 steps ranging in value from 0 to 127, pitch bend data is encoded with 16,384 steps (a 14-bit value). This produces the illusion of a continuously sliding pitch, as in a violin's portamento, rather than a series of zippered steps, such as a guitarist sliding a finger up the frets of a guitar's neck. The original MIDI specification included 128 virtual controller numbers for real-time modifications to live instruments or their audio. MIDI Show Control (MSC) and MIDI Machine Control (MMC) are two separate extensions of the original MIDI spec, expanding the MIDI protocol well beyond its original intentions. == Common products == The most common MIDI controllers encountered are various sizes of MIDI keyboards. A modern controller lacks internal sound generation, instead acting as a primary or secondary input for a synthesizer, digital sampler or a computer running a VST instrument or other software sound generator. Many have several user-definable knobs and slide controls that can control aspects of a synthesizer's sound in real time. Such controllers are much cheaper than a full synthesizer and are increasingly equipped with Universal Serial Bus (USB), which allows connection to a computer without a MIDI interface. Despite not using a MIDI cable directly, software applications recognize such controllers as MIDI devices. In most cases, a USB-equipped controller can draw the necessary power from the USB connection, and does not require an AC adapter when connected to a computer.
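The 14-bit pitch-bend resolution comes from packing the value into two 7-bit data bytes, least significant first, after a status byte of 0xE0 plus the channel; the center value 8192 means "no bend". A minimal encoder per the MIDI 1.0 message format:

```python
def pitch_bend(channel: int, value: int) -> bytes:
    """Encode a MIDI 1.0 Pitch Bend message. The 14-bit value
    (0-16383, center 8192 = no deviation) is split across two
    7-bit data bytes, least significant byte first."""
    assert 0 <= channel <= 15 and 0 <= value <= 16383
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

CENTER = 8192  # wheel at rest: no pitch deviation
assert pitch_bend(0, CENTER) == bytes([0xE0, 0x00, 0x40])
```

Compare this with an ordinary continuous controller message, which carries a single 7-bit data byte and therefore only 128 steps.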
Keyboards range in size from 88 weighted-action keys to portable 25-key models. == References ==
Wikipedia/MIDI_controller
Fortune's algorithm is a sweep line algorithm for generating a Voronoi diagram from a set of points in a plane using O(n log n) time and O(n) space. It was originally published by Steven Fortune in 1986 in his paper "A sweepline algorithm for Voronoi diagrams." == Algorithm description == The algorithm maintains both a sweep line and a beach line, which both move through the plane as the algorithm progresses. The sweep line is a straight line, which we may by convention assume to be vertical and moving left to right across the plane. At any time during the algorithm, the input points left of the sweep line will have been incorporated into the Voronoi diagram, while the points right of the sweep line will not have been considered yet. The beach line is not a straight line, but a complicated, piecewise curve to the left of the sweep line, composed of pieces of parabolas; it divides the portion of the plane within which the Voronoi diagram can be known, regardless of what other points might be right of the sweep line, from the rest of the plane. For each point left of the sweep line, one can define a parabola of points equidistant from that point and from the sweep line; the beach line is the boundary of the union of these parabolas. As the sweep line progresses, the vertices of the beach line, at which two parabolas cross, trace out the edges of the Voronoi diagram. The beach line progresses by keeping the base of each parabola exactly halfway between its defining site (a point already swept over) and the new position of the sweep line. Mathematically, this means each parabola is formed by using the sweep line as the directrix and the input point as the focus. The algorithm maintains as data structures a binary search tree describing the combinatorial structure of the beach line, and a priority queue listing potential future events that could change the beach line structure.
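The focus/directrix definition of each beach-line arc yields a closed-form expression: a point (x, y) lies on the parabola for site (fx, fy) exactly when its distance to the site equals its distance to the vertical sweep line x = d. A minimal sketch (function name is illustrative; a vertical, rightward-moving sweep line is assumed, as above):

```python
def beach_parabola_x(site, directrix_x, y):
    """x-coordinate, at height y, of the parabola of points equidistant
    from `site` (the focus) and the vertical sweep line x = directrix_x
    (the directrix). Solving (x - fx)^2 + (y - fy)^2 = (x - d)^2 for x
    gives x = (d^2 - fx^2 - (y - fy)^2) / (2 (d - fx))."""
    fx, fy = site
    d = directrix_x
    return (d * d - fx * fx - (y - fy) ** 2) / (2.0 * (d - fx))

# At the height of the site itself, the parabola's base lies exactly
# halfway between the site and the sweep line:
assert beach_parabola_x((0.0, 0.0), 2.0, 0.0) == 1.0
```

Intersecting two such parabolas (two sites, same directrix) gives the breakpoints whose traces are the Voronoi edges.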
These events include the addition of another parabola to the beach line (when the sweep line crosses another input point) and the removal of a curve from the beach line (when the sweep line becomes tangent to a circle through some three input points whose parabolas form consecutive segments of the beach line). Each such event may be prioritized by the x-coordinate of the sweep line at the point the event occurs. The algorithm itself then consists of repeatedly removing the next event from the priority queue, finding the changes the event causes in the beach line, and updating the data structures. As there are O(n) events to process (each being associated with some feature of the Voronoi diagram) and O(log n) time to process an event (each consisting of a constant number of binary search tree and priority queue operations) the total time is O(n log n). === Pseudocode === Pseudocode description of the algorithm.
let *(z) be the transformation *(z) = (z_x, z_y + d(z)), where d(z) is the Euclidean distance between z and the nearest site
let T be the "beach line"
let R_p be the region covered by site p
let C_pq be the boundary ray between sites p and q
let S be a set of sites on which this algorithm is to be applied
let p_1, p_2, ..., p_m be the sites extracted from S with minimal y-coordinate, ordered by x-coordinate
let DeleteMin(X) be the act of removing the lowest and leftmost site of X (sort by y unless they're identical, in which case sort by x)
let V be the Voronoi map of S which is to be constructed by this algorithm
Q ← {p_1, p_2, ..., p_m, S}
create initial vertical boundary rays C^0_{p_1 p_2}, C^0_{p_2 p_3}, ..., C^0_{p_(m-1) p_m}
T ← *(R_{p_1}), C^0_{p_1 p_2}, *(R_{p_2}), C^0_{p_2 p_3}, ..., *(R_{p_(m-1)}), C^0_{p_(m-1) p_m}, *(R_{p_m})
while not IsEmpty(Q) do
    p ← DeleteMin(Q)
    case p of
    p is a site in *(V):
        find the occurrence of a region *(R_q) in T containing p, bracketed by C_rq on the left and C_qs on the right
        create new boundary rays C^-_pq and C^+_pq with bases p
        replace *(R_q) with *(R_q), C^-_pq, *(R_p), C^+_pq, *(R_q) in T
        delete from Q any intersection between C_rq and C_qs
        insert into Q any intersection between C_rq and C^-_pq
        insert into Q any intersection between C^+_pq and C_qs
    p is a Voronoi vertex in *(V):
        let p be the intersection of C_qr on the left and C_rs on the right
        let C_uq be the left neighbor of C_qr and let C_sv be the right neighbor of C_rs in T
        if q_y = s_y, create a new boundary ray C^0_qs
        else if p is right of the higher of q and s, create C^+_qs
        else create C^-_qs
        endif
        replace C_qr, *(R_r), C_rs with the newly created C_qs in T
        delete from Q any intersection between C_uq and C_qr
        delete from Q any intersection between C_rs and C_sv
        insert into Q any intersection between C_uq and C_qs
        insert into Q any intersection between C_qs and C_sv
        record p as the summit of C_qr and C_rs and the base of C_qs
        output the boundary segments C_qr and C_rs
    endcase
endwhile
output the remaining boundary rays in T
== Weighted sites and disks ==
=== Additively weighted sites ===
As Fortune describes, a modified version of the sweep line algorithm can be used to construct an additively weighted Voronoi diagram, in which the distance to each site is offset by the weight of the site; this may equivalently be viewed as a Voronoi diagram of a set of disks, centered at the sites with radius equal to the weight of the site. The algorithm has O(n log n) time complexity, where n is the number of sites. Weighted sites may be used to control the areas of the Voronoi cells when using Voronoi diagrams to construct treemaps. In an additively weighted Voronoi diagram, the bisector between sites is in general a hyperbola, in contrast to unweighted Voronoi diagrams and power diagrams of disks, for which it is a straight line.
== References ==
== External links ==
Steven Fortune's C implementation
Fortune's Voronoi algorithm implemented in C++
Fortune's algorithm implemented in JavaScript
Fortune's Algorithm Visualization
Wikipedia/Fortune's_algorithm
In computational geometry, the Bentley–Ottmann algorithm is a sweep line algorithm for listing all crossings in a set of line segments, i.e. it finds the intersection points (or, simply, intersections) of line segments. It extends the Shamos–Hoey algorithm, a similar previous algorithm for testing whether or not a set of line segments has any crossings. For an input consisting of n line segments with k crossings (or intersections), the Bentley–Ottmann algorithm takes time O((n + k) log n). In cases where k = o(n²/log n), this is an improvement on a naïve algorithm that tests every pair of segments, which takes Θ(n²). The algorithm was initially developed by Jon Bentley and Thomas Ottmann (1979); it is described in more detail in the textbooks Preparata & Shamos (1985), O'Rourke (1998), and de Berg et al. (2000). Although asymptotically faster algorithms are now known by Chazelle & Edelsbrunner (1992) and Balaban (1995), the Bentley–Ottmann algorithm remains a practical choice due to its simplicity and low memory requirements. == Overall strategy == The main idea of the Bentley–Ottmann algorithm is to use a sweep line approach, in which a vertical line L moves from left to right (or, e.g., from top to bottom) across the plane, intersecting the input line segments in sequence as it moves. The algorithm is described most easily for inputs in general position, meaning:
No two line segment endpoints or crossings have the same x-coordinate
No line segment endpoint lies upon another line segment
No three line segments intersect at a single point
In such a case, L will always intersect the input line segments in a set of points whose vertical ordering changes only at a finite set of discrete events.
Specifically, a discrete event can either be associated with an endpoint (left or right) of a line-segment or intersection point of two line-segments. Thus, the continuous motion of L can be broken down into a finite sequence of steps, and simulated by an algorithm that runs in a finite amount of time. There are two types of events that may happen during the course of this simulation. When L sweeps across an endpoint of a line segment s, the intersection of L with s is added to or removed from the vertically ordered set of intersection points. These events are easy to predict, as the endpoints are known already from the input to the algorithm. The remaining events occur when L sweeps across a crossing between (or intersection of) two line segments s and t. These events may also be predicted from the fact that, just prior to the event, the points of intersection of L with s and t are adjacent in the vertical ordering of the intersection points. The Bentley–Ottmann algorithm itself maintains data structures representing the current vertical ordering of the intersection points of the sweep line with the input line segments, and a collection of potential future events formed by adjacent pairs of intersection points. It processes each event in turn, updating its data structures to represent the new set of intersection points. == Data structures == In order to efficiently maintain the intersection points of the sweep line L with the input line segments and the sequence of future events, the Bentley–Ottmann algorithm maintains two data structures: A binary search tree (the "sweep line status tree"), containing the set of input line segments that cross L, ordered by the y-coordinates of the points where these segments cross L. The crossing points themselves are not represented explicitly in the binary search tree. The Bentley–Ottmann algorithm will insert a new segment s into this data structure when the sweep line L crosses the left endpoint p of this segment (i.e. 
the endpoint of the segment with the smallest x-coordinate, provided the sweep line L starts from the left, as explained above in this article). The correct position of segment s in the binary search tree may be determined by a binary search, each step of which tests whether p is above or below some other segment that is crossed by L. Thus, an insertion may be performed in logarithmic time. The Bentley–Ottmann algorithm will also delete segments from the binary search tree, and use the binary search tree to determine the segments that are immediately above or below other segments; these operations may be performed using only the tree structure itself without reference to the underlying geometry of the segments. A priority queue (the "event queue"), used to maintain a sequence of potential future events in the Bentley–Ottmann algorithm. Each event is associated with a point p in the plane, either a segment endpoint or a crossing point, and the event happens when line L sweeps over p. Thus, the events may be prioritized by the x-coordinates of the points associated with each event. In the Bentley–Ottmann algorithm, the potential future events consist of line segment endpoints that have not yet been swept over, and the points of intersection of pairs of lines containing pairs of segments that are immediately above or below each other. The algorithm does not need to maintain explicitly a representation of the sweep line L or its position in the plane. Rather, the position of L is represented indirectly: it is the vertical line through the point associated with the most recently processed event. The binary search tree may be any balanced binary search tree data structure, such as a red–black tree; all that is required is that insertions, deletions, and searches take logarithmic time. Similarly, the priority queue may be a binary heap or any other logarithmic-time priority queue; more sophisticated priority queues such as a Fibonacci heap are not necessary. 
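The event queue just described can be implemented with an ordinary binary heap, prioritized by the x-coordinate of each event's point. The sketch below shows the queue alone (the sweep-line status tree is omitted); the event-kind tags and payloads are illustrative, not a fixed format from the source.

```python
import heapq

# Events are (x, kind, payload) tuples; heapq orders them by the
# first element, i.e. the x-coordinate at which the event occurs.
events = []

def schedule(x, kind, payload):
    """Add a potential future event to the queue."""
    heapq.heappush(events, (x, kind, payload))

def next_event():
    """Remove and return the event with minimum x-coordinate."""
    return heapq.heappop(events)

# Endpoint events are known up front from the input; crossing events
# are scheduled (and sometimes deleted) as adjacencies on the sweep
# line change.
schedule(3.0, 'right-endpoint', 's1')
schedule(1.0, 'left-endpoint', 's1')
schedule(2.5, 'crossing', ('s1', 's2'))
assert [next_event()[0] for _ in range(3)] == [1.0, 2.5, 3.0]
```

A plain binary heap gives the required logarithmic-time insert and delete-min; as the text notes, nothing more sophisticated (such as a Fibonacci heap) is needed.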
Note that the space complexity of the priority queue depends on the data structure used to implement it. == Detailed algorithm == The Bentley–Ottmann algorithm performs the following steps.

Initialize a priority queue Q of potential future events, each associated with a point in the plane and prioritized by the x-coordinate of the point. Initially, Q contains an event for each of the endpoints of the input segments.

Initialize a self-balancing binary search tree T of the line segments that cross the sweep line L, ordered by the y-coordinates of the crossing points. Initially, T is empty. (Even though the sweep line L is not explicitly represented, it may be helpful to imagine it as a vertical line which, initially, is at the left of all input segments.)

While Q is nonempty, find and remove the event from Q associated with a point p with minimum x-coordinate. Determine what type of event this is and process it according to the following case analysis:

If p is the left endpoint of a line segment s, insert s into T. Find the line segments r and t that are respectively immediately above and below s in T (if they exist); if the crossing of r and t (the neighbours of s in the status data structure) forms a potential future event in the event queue, remove this possible future event from the event queue. If s crosses r or t, add those crossing points as potential future events in the event queue.

If p is the right endpoint of a line segment s, remove s from T. Find the segments r and t that (prior to the removal of s) were respectively immediately above and below it in T (if they exist). If r and t cross, add that crossing point as a potential future event in the event queue.

If p is the crossing point of two segments s and t (with s below t to the left of the crossing), swap the positions of s and t in T. After the swap, find the segments r and u (if they exist) that are immediately below and above t and s, respectively. Remove any crossing points rs (i.e. a crossing point between r and s) and tu (i.e. a crossing point between t and u) from the event queue, and, if r and t cross or s and u cross, add those crossing points to the event queue.

== Analysis == The algorithm processes one event per segment endpoint or crossing point, in the sorted order of the x {\displaystyle x} -coordinates of these points, as may be proven by induction. This follows because, once the i {\displaystyle i} th event has been processed, the next event (if it is a crossing point) must be a crossing of two segments that are adjacent in the ordering of the segments represented by T {\displaystyle T} , and because the algorithm maintains all crossings between adjacent segments as potential future events in the event queue; therefore, the correct next event will always be present in the event queue. As a consequence, it correctly finds all crossings of input line segments, the problem it was designed to solve. The Bentley–Ottmann algorithm processes a sequence of 2 n + k {\displaystyle 2n+k} events, where n {\displaystyle n} denotes the number of input line segments and k {\displaystyle k} denotes the number of crossings. Each event is processed by a constant number of operations in the binary search tree and the event queue, and (because it contains only segment endpoints and crossings between adjacent segments) the event queue never contains more than 3 n {\displaystyle 3n} events. All operations take time O ( log ⁡ n ) {\displaystyle {\mathcal {O}}(\log n)} . Hence the total time for the algorithm is O ( ( n + k ) log ⁡ n ) {\displaystyle {\mathcal {O}}((n+k)\log n)} . 
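The event loop and case analysis described above can be sketched in Python. This is a teaching sketch under the same general-position assumptions as the description, not a production implementation: the status T is an ordinary Python list (so updates cost linear rather than logarithmic time), and, as in Bentley and Ottmann's original version, stale crossing events are left in the queue and skipped when popped instead of being deleted.

```python
import heapq
from itertools import count

def cross_point(s, t):
    """Crossing point of segments s and t, or None (general position assumed)."""
    (ax, ay), (bx, by) = s
    (cx, cy), (dx, dy) = t
    den = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if den == 0:
        return None  # parallel segments never cross
    u = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / den
    v = ((cx - ax) * (by - ay) - (cy - ay) * (bx - ax)) / den
    if 0 < u < 1 and 0 < v < 1:
        return (ax + u * (bx - ax), ay + u * (by - ay))
    return None

def y_at(seg, x):
    """Height of seg where it crosses the vertical sweep line at x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def bentley_ottmann(segments):
    """Return the set of crossing index pairs among segments ((x1,y1),(x2,y2)), x1 < x2."""
    events, tick = [], count()          # tick breaks heap ties without comparing payloads
    for i, ((x1, _), (x2, _)) in enumerate(segments):
        heapq.heappush(events, (x1, next(tick), ("left", i)))
        heapq.heappush(events, (x2, next(tick), ("right", i)))
    status, found = [], set()           # status: segment indices, ordered bottom-to-top

    def schedule(lo, hi, x):
        # lo is currently just below hi; queue their future crossing, if any.
        p = cross_point(segments[lo], segments[hi])
        if p and p[0] > x:
            heapq.heappush(events, (p[0], next(tick), ("cross", lo, hi)))

    while events:
        x, _, ev = heapq.heappop(events)
        if ev[0] == "left":             # insert, then check the two new adjacencies
            i = ev[1]
            k = 0
            while k < len(status) and y_at(segments[status[k]], x) < y_at(segments[i], x):
                k += 1
            status.insert(k, i)
            if k > 0:
                schedule(status[k - 1], i, x)
            if k + 1 < len(status):
                schedule(i, status[k + 1], x)
        elif ev[0] == "right":          # remove, then check the newly adjacent pair
            k = status.index(ev[1])
            status.pop(k)
            if 0 < k < len(status):
                schedule(status[k - 1], status[k], x)
        else:                            # crossing of lo (below) and hi (above)
            lo, hi = ev[1], ev[2]
            if lo in status:
                k = status.index(lo)
                if k + 1 < len(status) and status[k + 1] == hi:  # skip stale events
                    found.add((min(lo, hi), max(lo, hi)))
                    status[k], status[k + 1] = hi, lo
                    if k > 0:
                        schedule(status[k - 1], hi, x)
                    if k + 2 < len(status):
                        schedule(lo, status[k + 2], x)
    return found
```

On three mutually crossing segments in general position, the sketch reports all three crossing pairs.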
If the crossings found by the algorithm do not need to be stored once they have been found, the space used by the algorithm at any point in time is O ( n ) {\displaystyle {\mathcal {O}}(n)} : each of the n {\displaystyle n} input line segments corresponds to at most one node of the binary search tree T, and as stated above the event queue contains at most 3 n {\displaystyle 3n} elements. This space bound is due to Brown (1981); the original version of the algorithm was slightly different (it did not remove crossing events from Q {\displaystyle Q} when some other event causes the two crossing segments to become non-adjacent), causing it to use more space. Chen & Chan (2003) described a highly space-efficient version of the Bentley–Ottmann algorithm that encodes most of its information in the ordering of the segments in an array representing the input, requiring only O ( log 2 ⁡ n ) {\displaystyle {\mathcal {O}}(\log ^{2}n)} additional memory cells. However, in order to access the encoded information, the algorithm is slowed by a logarithmic factor. == Special position == The algorithm description above assumes that line segments are not vertical, that line segment endpoints do not lie on other line segments, that crossings are formed by only two line segments, and that no two event points have the same x-coordinate. In other words, it does not take corner cases into account; that is, it assumes general position of the endpoints of the input segments. However, these general position assumptions are not reasonable for most applications of line segment intersection. Bentley & Ottmann (1979) suggested perturbing the input slightly to avoid these kinds of numerical coincidences, but did not describe in detail how to perform these perturbations. de Berg et al. (2000) describe in more detail the following measures for handling special-position inputs: Break ties between event points with the same x-coordinate by using the y-coordinate. 
Events with different y-coordinates are handled as before. This modification handles both the problem of multiple event points with the same x-coordinate, and the problem of vertical line segments: the left endpoint of a vertical segment is defined to be the one with the lower y-coordinate, and the steps needed to process such a segment are essentially the same as those needed to process a non-vertical segment with a very high slope. Define a line segment to be a closed set, containing its endpoints. Therefore, two line segments that share an endpoint, or a line segment that contains an endpoint of another segment, both count as an intersection of two line segments. When multiple line segments intersect at the same point, create and process a single event point for that intersection. The updates to the binary search tree caused by this event may involve removing any line segments for which this is the right endpoint, inserting new line segments for which this is the left endpoint, and reversing the order of the remaining line segments containing this event point. The output from the version of the algorithm described by de Berg et al. (2000) consists of the set of intersection points of line segments, labeled by the segments they belong to, rather than the set of pairs of line segments that intersect. A similar approach to degeneracies was used in the LEDA implementation of the Bentley–Ottmann algorithm. == Numerical precision issues == For the correctness of the algorithm, it is necessary to determine without approximation the above-below relations between a line segment endpoint and other line segments, and to correctly prioritize different event points. For this reason it is standard to use integer coordinates for the endpoints of the input line segments, and to represent the rational number coordinates of the intersection points of two segments exactly, using arbitrary-precision arithmetic. 
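In Python, for example, exact rational coordinates for the crossing of two segments with integer endpoints can be obtained with the standard-library fractions module; the helper below is an illustrative sketch (it treats segments as closed sets, so shared endpoints count as intersections).

```python
from fractions import Fraction

def exact_intersection(s, t):
    """Exact crossing point of closed segments s and t with integer endpoints, or None.

    Coordinates are returned as exact rationals (fractions.Fraction), so the
    above/below comparisons and event-queue priorities suffer no rounding error.
    """
    (ax, ay), (bx, by) = s
    (cx, cy), (dx, dy) = t
    den = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if den == 0:
        return None  # parallel (the collinear-overlap case is ignored in this sketch)
    u = Fraction((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx), den)
    v = Fraction((cx - ax) * (by - ay) - (cy - ay) * (bx - ax), den)
    if not (0 <= u <= 1 and 0 <= v <= 1):
        return None
    return (ax + u * (bx - ax), ay + u * (by - ay))
```

Since the inputs are integers, every coordinate produced this way is an exact rational with bounded numerator and denominator, matching the precision discussion below.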
However, it may be possible to speed up the calculations and comparisons of these coordinates by using floating point calculations and testing whether the values calculated in this way are sufficiently far from zero that they may be used without any possibility of error. The exact arithmetic calculations required by a naïve implementation of the Bentley–Ottmann algorithm may require five times as many bits of precision as the input coordinates, but Boissonat & Preparata (2000) describe modifications to the algorithm that reduce the needed amount of precision to twice the number of bits used for the input coordinates. == Faster algorithms == The O(n log n) part of the time bound for the Bentley–Ottmann algorithm is necessary, as there are matching lower bounds for the problem of detecting intersecting line segments in algebraic decision tree models of computation. However, the dependence on k, the number of crossings, can be improved. Clarkson (1988) and Mulmuley (1988) both provided randomized algorithms for constructing the planar graph whose vertices are endpoints and crossings of line segments, and whose edges are the portions of the segments connecting these vertices, in expected time O(n log n + k), and this problem of arrangement construction was solved deterministically in the same O(n log n + k) time bound by Chazelle & Edelsbrunner (1992). However, constructing this arrangement as a whole requires space O(n + k), greater than the O(n) space bound of the Bentley–Ottmann algorithm; Balaban (1995) described a different algorithm that lists all intersections in time O(n log n + k) and space O(n). If the input line segments and their endpoints form the edges and vertices of a connected graph (possibly with crossings), the O(n log n) part of the time bound for the Bentley–Ottmann algorithm may also be reduced. 
As Clarkson, Cole & Tarjan (1992) show, in this case there is a randomized algorithm for solving the problem in expected time O(n log* n + k), where log* denotes the iterated logarithm, a function much more slowly growing than the logarithm. A closely related randomized algorithm of Eppstein, Goodrich & Strash (2009) solves the same problem in time O(n + k log(i)n) for any constant i, where log(i) denotes the function obtained by iterating the logarithm function i times. The first of these algorithms takes linear time whenever k is larger than n by a log(i)n factor, for any constant i, while the second algorithm takes linear time whenever k is smaller than n by a log(i)n factor. Both of these algorithms involve applying the Bentley–Ottmann algorithm to small random samples of the input. == Notes == == References == Balaban, I. J. (1995), "An optimal algorithm for finding segments intersections", Proc. 11th ACM Symp. Computational Geometry, pp. 211–219, doi:10.1145/220279.220302, ISBN 0-89791-724-3, S2CID 6342118. Bartuschka, U.; Mehlhorn, K.; Näher, S. (1997), "A robust and efficient implementation of a sweep line algorithm for the straight line segment intersection problem", in Italiano, G. F.; Orlando, S. (eds.), Proc. Worksh. Algorithm Engineering, archived from the original on 2017-06-06, retrieved 2009-05-27. Bentley, J. L.; Ottmann, T. A. (1979), "Algorithms for reporting and counting geometric intersections", IEEE Transactions on Computers, C-28 (9): 643–647, doi:10.1109/TC.1979.1675432, S2CID 1618521. de Berg, Mark; van Kreveld, Marc; Overmars, Mark; Schwarzkopf, Otfried (2000), "Chapter 2: Line segment intersection", Computational Geometry (2nd ed.), Springer-Verlag, pp. 19–44, ISBN 978-3-540-65620-3. Boissonat, J.-D.; Preparata, F. P. (2000), "Robust plane sweep for intersecting segments" (PDF), SIAM Journal on Computing, 29 (5): 1401–1421, doi:10.1137/S0097539797329373. Brown, K. Q. 
(1981), "Comments on "Algorithms for Reporting and Counting Geometric Intersections"", IEEE Transactions on Computers, C-30 (2): 147, doi:10.1109/tc.1981.6312179, S2CID 206622367. Chazelle, Bernard; Edelsbrunner, Herbert (1992), "An optimal algorithm for intersecting line segments in the plane", Journal of the ACM, 39 (1): 1–54, doi:10.1145/147508.147511, S2CID 785741. Chen, E. Y.; Chan, T. M. (2003), "A space-efficient algorithm for segment intersection", Proc. 15th Canadian Conference on Computational Geometry (PDF). Clarkson, K. L. (1988), "Applications of random sampling in computational geometry, II", Proc. 4th ACM Symp. Computational Geometry, pp. 1–11, doi:10.1145/73393.73394, ISBN 0-89791-270-5, S2CID 15134654. Clarkson, K. L.; Cole, R.; Tarjan, R. E. (1992), "Randomized parallel algorithms for trapezoidal diagrams", International Journal of Computational Geometry and Applications, 2 (2): 117–133, doi:10.1142/S0218195992000081. Corrigendum, 2 (3): 341–343. Eppstein, D.; Goodrich, M.; Strash, D. (2009), "Linear-time algorithms for geometric graphs with sublinearly many crossings", Proc. 20th ACM-SIAM Symp. Discrete Algorithms (SODA 2009), pp. 150–159, arXiv:0812.0893, Bibcode:2008arXiv0812.0893E, doi:10.1137/090759112, S2CID 13044724. Mulmuley, K. (1988), "A fast planar partition algorithm, I", Proc. 29th IEEE Symp. Foundations of Computer Science (FOCS 1988), pp. 580–589, doi:10.1109/SFCS.1988.21974, ISBN 0-8186-0877-3, S2CID 34582594. O'Rourke, J. (1998), "Section 7.7: Intersection of segments", Computational Geometry in C (2nd ed.), Cambridge University Press, pp. 263–265, ISBN 978-0-521-64976-6. Preparata, F. P.; Shamos, M. I. (1985), "Section 7.2.3: Intersection of line segments", Computational Geometry: An Introduction, Springer-Verlag, pp. 278–287, Bibcode:1985cgai.book.....P. Pach, J.; Sharir, M. 
(1991), "On vertical visibility in arrangements of segments and the queue size in the Bentley–Ottmann line sweeping algorithm", SIAM Journal on Computing, 20 (3): 460–470, doi:10.1137/0220029, MR 1094525. Shamos, M. I.; Hoey, Dan (1976), "Geometric intersection problems", 17th IEEE Conf. Foundations of Computer Science (FOCS 1976), pp. 208–215, doi:10.1109/SFCS.1976.16, S2CID 124804. == External links == Smid, Michiel (2003), Computing intersections in a set of line segments: the Bentley–Ottmann algorithm (PDF).
Wikipedia/Bentley–Ottmann_algorithm
Empirical research is research using empirical evidence. It is also a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values some research more than other kinds. Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. Quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions that cannot be studied in laboratory settings, particularly in the social sciences and in education. In some fields, quantitative research may begin with a research question (e.g., "Does listening to vocal music during the learning of a word list have an effect on later memory for these words?") which is tested through experimentation. Usually, the researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed (e.g., "Listening to vocal music has a negative effect on learning a word list."). From these hypotheses, predictions about specific events are derived (e.g., "People who study a word list while listening to vocal music will remember fewer words on a later memory test than people who study a word list in silence."). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing. == History == The experimental method has evolved over the ages, with many scientists contributing to its foundation and development. 
In ancient times, Greek philosophers, such as Aristotle, relied on observation and rational inference in their studies. Aristotle, for example, rejected exclusive reliance on logical deduction, emphasizing the importance of observation in understanding nature. During the Middle Ages, Muslim scientists significantly advanced the experimental method. Jabir ibn Hayyan, known as the father of chemistry, introduced experimental methodology into chemistry and developed chemical processes such as crystallization, calcination, and distillation. He also discovered important acids like sulfuric and nitric acid, expanding the possibilities of chemical experiments. The famous optics scientist Alhazen (Ibn al-Haytham) was among the first to rely on experimentation in studying light and vision. In his book Book of Optics, he employed a scientific method based on observation, experimentation, and mathematical proof, making him a pioneer of the modern scientific method. These scientific approaches were transmitted to Europe through translations, influencing the development of modern scientific methodology. European scientists, such as Francis Bacon, were inspired by the works of Muslim scholars in refining the experimental method. The researcher Robert Briffault, in his book Making of Humanity, states: "It was under their successors at Oxford School (that is, successors to the Muslims of Spain) that Roger Bacon learned Arabic and Arabic Sciences. Neither Roger Bacon nor later namesake has any title to be credited with having introduced the experimental method. Roger Bacon was no more than one of apostles of Muslim Science and Method to Christian Europe". == Terminology == The term empirical was originally used to refer to certain ancient Greek practitioners of medicine who rejected adherence to the dogmatic doctrines of the day, preferring instead to rely on the observation of phenomena as perceived in experience. 
Later empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or in some cases using calibrated scientific instruments. What the early empiricist practitioners and modern empirical research have in common is the dependence on observable data to formulate and test theories and to come to conclusions. == Usage == The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate his/her instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, empirical research describes research that has not been carried out before, together with its results. In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part in judging the merits of research design. Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley. They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research. === Scientific research === Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. 
If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s). The result of empirical research using statistical hypothesis testing is never proof. It can only support a hypothesis, reject it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical evidence (as distinct from empirical research) refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. Following the previous example, observer A might truthfully report that a room is warm, while observer B might truthfully report that the same room is cool, though both observe the same reading on the thermometer. The use of empirical evidence negates this effect of personal (i.e., subjective) experience or time. The contrast between empiricism and rationalism concerns the extent to which gaining knowledge depends on sense experience. According to rationalism, there are several ways in which knowledge and concepts can be gained independently of sense experience. According to empiricism, sense experience is the main source of all knowledge and concepts. In general, rationalists develop their views in two ways. First, they argue that there are cases in which the content of knowledge or concepts outstrips the information that sense experience can provide (Hjørland, 2010, 2). 
Second, rationalists construct accounts of how reasoning provides additional knowledge about a specific or broader domain. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as humans have that knowledge in the first place. At times, empiricists opt for skepticism as an alternative to rationalism: if experience cannot provide the knowledge or concepts that rationalists cite, then those do not exist (Pearce, 2010, 35). Second, empiricists attack the rationalists' accounts of how reasoning could be a source of knowledge or concepts. The overall disagreement between empiricists and rationalists thus concerns how knowledge is gained with respect to the sources of knowledge and concepts. In some cases, this disagreement over how knowledge is gained leads to conflicting responses to other questions as well, such as the nature of warrant or the limits of knowledge and thought. Empiricists share the view that there is no innate knowledge and that knowledge is instead derived from experience, whether reasoned through the mind or sensed through the five senses humans possess (Bernard, 2011, 5). Rationalists, on the other hand, share the view that innate knowledge exists, though they differ over which objects of knowledge are innate. In order to follow rationalism, one must adopt at least one of three claims of the theory: intuition or deduction, innate knowledge, or innate concepts. The further a concept is removed from mental operations and experience, the more plausibly it can be claimed to be innate. 
Conversely, empiricism with regard to a specific subject entails rejection of the corresponding versions of innate knowledge and of intuition or deduction (Weiskopf, 2008, 16). Insofar as concepts and knowledge within the subject area are acknowledged, that knowledge depends chiefly on experience gained through the human senses. == Empirical cycle == A.D. de Groot's empirical cycle:

Observation: The observation of a phenomenon and inquiry concerning its causes.
Induction: The formulation of hypotheses - generalized explanations for the phenomenon.
Deduction: The formulation of experiments that will test the hypotheses (i.e. confirm them if true, refute them if false).
Testing: The procedures by which the hypotheses are tested and data are collected.
Evaluation: The interpretation of the data and the formulation of a theory - an abductive argument that presents the results of the experiment as the most reasonable explanation for the phenomenon.

== See also == Case study Fact Field research Scientific method == References == == External links == The dictionary definition of empirical research at Wiktionary Some Key Concepts for the Design and Review of Empirical Research Archived 2021-04-16 at the Wayback Machine
Wikipedia/Empirical_methods
In computer science, the Krauss wildcard-matching algorithm is a pattern matching algorithm. Based on the wildcard syntax in common use, e.g. in the Microsoft Windows command-line interface, the algorithm provides a non-recursive mechanism for matching patterns in software applications, based on syntax simpler than that typically offered by regular expressions. == History == The algorithm is based on a history of development, correctness and performance testing, and programmer feedback that began with an unsuccessful search for a reliable non-recursive algorithm for matching wildcards. An initial algorithm, implemented in a single while loop, quickly prompted comments from software developers, leading to improvements. Ongoing comments and suggestions culminated in a revised algorithm still implemented in a single while loop but refined based on a collection of test cases and a performance profiler. The experience tuning the single while loop using the profiler prompted development of a two-loop strategy that achieved further performance gains, particularly in situations involving empty input strings or input containing no wildcard characters. The two-loop algorithm is available for use by the open-source software development community, under the terms of the Apache License v. 2.0, and is accompanied by test case code. == Usage == The algorithm made available under the Apache license is implemented in both pointer-based C++ and portable C++ (implemented without pointers). The test case code, also available under the Apache license, can be applied to any algorithm that provides the pattern matching operations below. The implementation as coded is unable to handle multibyte character sets and poses problems when the text being searched may contain multiple incompatible character sets. 
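A minimal non-recursive matcher for this wildcard syntax (an asterisk matches any run of zero or more characters, a question mark matches any single character) can be written with two cursors and a single loop, backtracking to the most recent asterisk on a mismatch. This sketch illustrates the semantics only and is not the Krauss implementation itself:

```python
def wildcard_match(text, pattern):
    """Iterative (non-recursive) wildcard match: '*' = any run, '?' = any one char."""
    t = p = 0
    star = -1        # pattern position of the most recent '*', or -1 if none seen
    mark = 0         # text position to resume from after that '*'
    while t < len(text):
        if p < len(pattern) and pattern[p] in (text[t], "?"):
            t += 1   # literal or '?' match: advance both cursors
            p += 1
        elif p < len(pattern) and pattern[p] == "*":
            star, mark = p, t
            p += 1   # tentatively let '*' match the empty string
        elif star >= 0:
            mark += 1            # backtrack: let the last '*' absorb one more character
            t, p = mark, star + 1
        else:
            return False
    while p < len(pattern) and pattern[p] == "*":
        p += 1       # trailing asterisks may match the empty string
    return p == len(pattern)
```

Like the algorithm described here, this avoids recursion entirely; it also handles only single-byte strings, mirroring the character-set limitation noted above.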
=== Pattern matching operations === The algorithm supports three pattern matching operations: A one-to-one match is performed between the pattern and the source to be checked for a match, with the exception of asterisk (*) or question mark (?) characters in the pattern. An asterisk (*) character matches any sequence of zero or more characters. A question mark (?) character matches any single character. == Examples == *foo* matches any string containing "foo". mini* matches any string that begins with "mini" (including the string "mini" itself). ???* matches any string of three or more characters. == Applications == The original algorithm has been ported to the DataFlex programming language by Larry Heiges for use with the Data Access Worldwide code library. It has been posted on GitHub in modified form as part of a log file reader. The 2014 algorithm is part of the Unreal Model Viewer built into the Epic Games Unreal Engine game engine. == See also == pattern matching glob (programming) wildmat == References ==
Wikipedia/Krauss_matching_wildcards_algorithm
In database management, an aggregate function or aggregation function is a function where multiple values are processed together to form a single summary statistic. Common aggregate functions include average (i.e., arithmetic mean), count, maximum, median, minimum, mode, range, and sum. Others include nanmean (the mean ignoring NaN values, also known as "nil" or "null") and stddev. Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain I and outputs an element of an output domain O. The input and output domains may be the same, such as for SUM, or may be different, such as for COUNT. Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra. The listagg function, as defined in the SQL:2016 standard, aggregates data from multiple rows into a single concatenated string. In the entity relationship diagram, aggregation is represented as seen in Figure 1 with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity. == Decomposable aggregate functions == Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces, and distribute the work, usually computing in parallel, via a divide and conquer algorithm. Some aggregate functions can be computed by computing the aggregate for subsets, and then aggregating these aggregates; examples include COUNT, MAX, MIN, and SUM. In other cases the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). 
In other cases the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include DISTINCT COUNT (Count-distinct problem), MEDIAN, and MODE. Such functions are called decomposable aggregation functions or decomposable aggregate functions. The simplest may be referred to as self-decomposable aggregation functions, which are defined as those functions f such that there is a merge operator ⁠ ⋄ {\displaystyle \diamond } ⁠ such that f ( X ⊎ Y ) = f ( X ) ⋄ f ( Y ) {\displaystyle f(X\uplus Y)=f(X)\diamond f(Y)} where ⁠ ⊎ {\displaystyle \uplus } ⁠ is the union of multisets (see monoid homomorphism). For example, SUM: SUM ⁡ ( x ) = x {\displaystyle \operatorname {SUM} ({x})=x} , for a singleton; SUM ⁡ ( X ⊎ Y ) = SUM ⁡ ( X ) + SUM ⁡ ( Y ) {\displaystyle \operatorname {SUM} (X\uplus Y)=\operatorname {SUM} (X)+\operatorname {SUM} (Y)} , meaning that merge ⁠ ⋄ {\displaystyle \diamond } ⁠ is simply addition. COUNT: COUNT ⁡ ( x ) = 1 {\displaystyle \operatorname {COUNT} ({x})=1} , COUNT ⁡ ( X ⊎ Y ) = COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) {\displaystyle \operatorname {COUNT} (X\uplus Y)=\operatorname {COUNT} (X)+\operatorname {COUNT} (Y)} . MAX: MAX ⁡ ( x ) = x {\displaystyle \operatorname {MAX} ({x})=x} , MAX ⁡ ( X ⊎ Y ) = max ( MAX ⁡ ( X ) , MAX ⁡ ( Y ) ) {\displaystyle \operatorname {MAX} (X\uplus Y)=\max {\bigl (}\operatorname {MAX} (X),\operatorname {MAX} (Y){\bigr )}} . MIN: MIN ⁡ ( x ) = x {\textstyle \operatorname {MIN} ({x})=x} , MIN ⁡ ( X ⊎ Y ) = min ( MIN ⁡ ( X ) , MIN ⁡ ( Y ) ) {\displaystyle \operatorname {MIN} (X\uplus Y)=\min {\bigl (}\operatorname {MIN} (X),\operatorname {MIN} (Y){\bigr )}} . Note that self-decomposable aggregation functions can be combined (formally, taking the product) by applying them separately, so for instance one can compute both the SUM and COUNT at the same time, by tracking two numbers. 
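As a sketch, a self-decomposable aggregate can be modelled as a pair (value on a singleton, merge operator), so that per-chunk results are merged instead of rescanning the whole input; the names below are illustrative:

```python
from functools import reduce

# Each aggregate is (value-on-a-singleton, merge operator "⋄"), matching the
# definition f(X ⊎ Y) = f(X) ⋄ f(Y) for self-decomposable aggregation functions.
AGGREGATES = {
    "SUM":   (lambda x: x, lambda a, b: a + b),
    "COUNT": (lambda x: 1, lambda a, b: a + b),
    "MAX":   (lambda x: x, max),
    "MIN":   (lambda x: x, min),
}

def aggregate(name, chunks):
    """Apply a self-decomposable aggregate to data split into chunks,
    merging per-chunk partial results as a distributed system would."""
    single, merge = AGGREGATES[name]
    partials = [reduce(merge, map(single, chunk)) for chunk in chunks]
    return reduce(merge, partials)

chunks = [[3, 1, 4], [1, 5], [9, 2, 6]]
aggregate("SUM", chunks)    # 31
aggregate("COUNT", chunks)  # 8
```

Because the merge operators are associative, the chunk boundaries (and the order in which partial results arrive) do not affect the final answer.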
More generally, one can define a decomposable aggregation function f as one that can be expressed as the composition of a final function g and a self-decomposable aggregation function h, f = g ∘ h , f ( X ) = g ( h ( X ) ) {\displaystyle f=g\circ h,f(X)=g(h(X))} . For example, AVERAGE=SUM/COUNT and RANGE=MAX−MIN. In the MapReduce framework, these steps are known as InitialReduce (value on individual record/singleton set), Combine (binary merge on two aggregations), and FinalReduce (final function on auxiliary values), and moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step. Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data. For example, it is easy to support COUNT, MAX, MIN, and SUM in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support MEDIAN, as that must be computed for every view separately. == Other decomposable aggregate functions == In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group: the total of values (Σxi = SUM(x)), the number of values (N=COUNT(x)), and the total of squares of the values (Σxi2=SUM(x2)). 
AVG: AVG ⁡ ( X ⊎ Y ) = ( AVG ⁡ ( X ) ∗ COUNT ⁡ ( X ) + AVG ⁡ ( Y ) ∗ COUNT ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)*\operatorname {COUNT} (X)+\operatorname {AVG} (Y)*\operatorname {COUNT} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}} or AVG ⁡ ( X ⊎ Y ) = ( SUM ⁡ ( X ) + SUM ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {SUM} (X)+\operatorname {SUM} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}} or, only if COUNT(X)=COUNT(Y) AVG ⁡ ( X ⊎ Y ) = ( AVG ⁡ ( X ) + AVG ⁡ ( Y ) ) / 2 {\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)+\operatorname {AVG} (Y){\bigr )}/2} SUM(x2): The sum of squares of the values is important in order to calculate the Standard Deviation of groups SUM ⁡ ( X 2 ⊎ Y 2 ) = SUM ⁡ ( X 2 ) + SUM ⁡ ( Y 2 ) {\displaystyle \operatorname {SUM} (X^{2}\uplus Y^{2})=\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2})} STDDEV: For a finite population with equal probabilities at all points, we have STDDEV ⁡ ( X ) = s ( x ) = 1 N ∑ i = 1 N ( x i − x ¯ ) 2 = 1 N ( ∑ i = 1 N x i 2 ) − ( x ¯ ) 2 = SUM ⁡ ( x 2 ) / COUNT ⁡ ( x ) − AVG ⁡ ( x ) 2 {\displaystyle \operatorname {STDDEV} (X)=s(x)={\sqrt {{\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}}={\sqrt {{\frac {1}{N}}\left(\sum _{i=1}^{N}x_{i}^{2}\right)-({\overline {x}})^{2}}}={\sqrt {\operatorname {SUM} (x^{2})/\operatorname {COUNT} (x)-\operatorname {AVG} (x)^{2}}}} This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. 
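For example, keeping the auxiliary triple (sum, sum of squares, count) per group makes both the average and the population standard deviation mergeable; the helper names in this sketch are illustrative:

```python
from math import sqrt

def summarize(xs):
    """Auxiliary triple (sum, sum of squares, count) for one group."""
    return (sum(xs), sum(x * x for x in xs), len(xs))

def merge(a, b):
    """Merge two groups' triples: each component is itself self-decomposable."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def finalize(t):
    """Final function: AVG and population STDDEV from the merged triple,
    using STDDEV = sqrt(SUM(x^2)/COUNT(x) - AVG(x)^2)."""
    s, s2, n = t
    avg = s / n
    return avg, sqrt(s2 / n - avg * avg)

left, right = summarize([2, 4, 4, 4]), summarize([5, 5, 7, 9])
avg, std = finalize(merge(left, right))   # over all eight values at once
```

Here AVG and STDDEV are decomposable but not self-decomposable: the triple, not the final statistic, is what gets merged. (Note that for large values this textbook formula can lose floating-point precision; numerically stabler merge formulas exist.)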
STDDEV ⁡ ( X ⊎ Y ) = SUM ⁡ ( X 2 ⊎ Y 2 ) / COUNT ⁡ ( X ⊎ Y ) − AVG ⁡ ( X ⊎ Y ) 2 {\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {\operatorname {SUM} (X^{2}\uplus Y^{2})/\operatorname {COUNT} (X\uplus Y)-\operatorname {AVG} (X\uplus Y)^{2}}}} STDDEV ⁡ ( X ⊎ Y ) = ( SUM ⁡ ( X 2 ) + SUM ⁡ ( Y 2 ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) − ( ( SUM ⁡ ( X ) + SUM ⁡ ( Y ) ) / ( COUNT ⁡ ( X ) + COUNT ⁡ ( Y ) ) ) 2 {\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {{\bigl (}\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2}){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}-{\bigl (}(\operatorname {SUM} (X)+\operatorname {SUM} (Y))/(\operatorname {COUNT} (X)+\operatorname {COUNT} (Y)){\bigr )}^{2}}}} == See also == Cross-tabulation a.k.a. Contingency table Data drilling Data mining Data processing Extract, transform, load Fold (higher-order function) Group by (SQL), SQL clause OLAP cube Online analytical processing Pivot table Relational algebra Utility functions on indivisible goods#Aggregates of utility functions XML for Analysis AggregateIQ MapReduce == References == == Literature == Grabisch, Michel; Marichal, Jean-Luc; Mesiar, Radko; Pap, Endre (2009). Aggregation functions. Encyclopedia of Mathematics and its Applications. Vol. 127. Cambridge: Cambridge University Press. ISBN 978-0-521-51926-7. Zbl 1196.00002. Oracle Aggregate Functions: MAX, MIN, COUNT, SUM, AVG Examples == External links == Aggregate Functions (Transact-SQL)
Wikipedia/Aggregate_function
In object-oriented computer programming, an extension method is a method added to an object after the original object was compiled. The modified object is often a class, a prototype, or a type. Extension methods are features of some object-oriented programming languages. There is no syntactic difference between calling an extension method and calling a method declared in the type definition. Not all languages implement extension methods in an equally safe manner, however. For instance, languages such as C#, Java (via Manifold, Lombok, or Fluent), and Kotlin do not alter the extended class in any way, because doing so may break class hierarchies and interfere with virtual method dispatching. Instead, these languages strictly implement extension methods statically and use static dispatching to invoke them. == Support in programming languages == Extension methods are features of numerous languages including C#, Java via Manifold or Lombok or Fluent, Gosu, JavaScript, Oxygene, Ruby, Smalltalk, Kotlin, Dart, Visual Basic.NET, and Xojo. In dynamic languages like Python, the concept of an extension method is unnecessary because classes (excluding built-in classes) can be extended without any special syntax (an approach known as "monkey-patching", employed in libraries such as gevent). In VB.NET and Oxygene, they are recognized by the presence of the "extension" keyword or attribute. In Xojo, the "Extends" keyword is used with global methods. In C#, they are implemented as static methods in static classes, with the first argument being of the extended class and preceded by the "this" keyword. In Java, extension methods are added via Manifold, a jar file added to the project's classpath. Similar to C#, a Java extension method is declared static in an @Extension class where the first argument has the same type as the extended class and is annotated with @This. 
Alternatively, the Fluent plugin allows calling any static method as an extension method without using annotations, as long as the method signature matches. In Smalltalk, any code can add a method to any class at any time, by sending a method creation message (such as methodsFor:) to the class the user wants to extend. The Smalltalk method category is conventionally named after the package that provides the extension, surrounded by asterisks. For example, when Etoys application code extends classes in the core library, the added methods are put in the *etoys* category. In Ruby, like Smalltalk, there is no special language feature for extension, as Ruby allows classes to be re-opened at any time with the class keyword to add new methods. The Ruby community often describes an extension method as a kind of monkey patch. There is also a newer feature for adding safe/local extensions to the objects, called Refinements, but it is known to be less used. In Swift, the extension keyword marks a class-like construct that allows the addition of methods, constructors, and fields to an existing class, including the ability to implement a new interface/protocol to the existing class. == Extension methods as enabling feature == Next to extension methods allowing code written by others to be extended as described below, extension methods enable patterns that are useful in their own right as well. The predominant reason why extension methods were introduced was Language Integrated Query (LINQ). Compiler support for extension methods allows deep integration of LINQ with old code just the same as with new code, as well as support for query syntax which for the moment is unique to the primary Microsoft .NET languages. 
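For contrast with the statically dispatched approach, the dynamic extension ("monkey-patching") mentioned above for Python can be sketched as follows; the class and method names are invented for illustration:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

# "Extension method": defined after the class, then attached to it at runtime.
def shout(self):
    return f"HELLO, {self.name.upper()}!"

Greeter.shout = shout  # monkey-patch: the class itself is altered

g = Greeter("world")
print(g.shout())  # HELLO, WORLD!
```

Note that, unlike C# extension methods, this really does modify the class for every user of it, which is exactly the hazard the statically dispatched designs avoid.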
=== Centralize common behavior === However, extension methods allow features to be implemented once in ways that enable reuse without the need for inheritance or the overhead of virtual method invocations, and without requiring implementors of an interface to implement either trivial or woefully complex functionality. A particularly useful scenario is if the feature operates on an interface for which there is no concrete implementation or a useful implementation is not provided by the class library author, as is often the case in libraries that provide developers a plugin architecture or similar functionality. Consider the following code and suppose it is the only code contained in a class library. Nevertheless, every implementor of the ILogger interface will gain the ability to write a formatted string, just by including a using MyCoolLogger statement, without having to implement it themselves and without being required to subclass a class-library-provided implementation of ILogger. === Better loose coupling === Extension methods allow users of class libraries to refrain from ever declaring an argument, variable, or anything else with a type that comes from that library. Construction and conversion of the types used in the class library can be implemented as extension methods. After carefully implementing the conversions and factories, switching from one class library to another can be made as easy as changing the using statement that makes the extension methods available for the compiler to bind to. === Fluent application programmer's interfaces === Extension methods have special use in implementing so-called fluent interfaces. An example is Microsoft's Entity Framework configuration API, which allows one, for example, to write code that resembles regular English as closely as practical. 
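The shape of such a fluent configuration API can be sketched in a few lines (Python, with hypothetical names; Entity Framework's real API is C#):

```python
class EntityConfig:
    """Each method returns self, so calls chain into sentence-like code."""
    def __init__(self, entity):
        self.entity = entity
        self.key = None
        self.table = None

    def has_key(self, key):
        self.key = key
        return self

    def to_table(self, table):
        self.table = table
        return self

config = EntityConfig("TodoList").has_key("TodoListID").to_table("Lists")
print(config.key, config.table)  # TodoListID Lists
```

The entire configuration reads as one left-to-right sentence because every step hands back the object being configured.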
One could argue this is just as well possible without extension methods, but in practice extension methods provide a superior experience because fewer constraints are placed on the class hierarchy to make it work, and read, as desired. The following example uses Entity Framework and configures the TodoList class to be stored in the database table Lists and defines a primary and a foreign key. The code should be understood more or less as: "A TodoList has key TodoListID, its entity set name is Lists, and it has many TodoItems, each of which has a required TodoList". === Productivity === Consider for example IEnumerable and note its simplicity: it declares just one method, yet it is more or less the basis of LINQ. There are many implementations of this interface in Microsoft .NET. It would obviously have been burdensome to require each of these implementations to implement the whole series of methods that are defined in the System.Linq namespace to operate on IEnumerables, even though Microsoft has all the source code. Even worse, this would have required everybody besides Microsoft who was considering using IEnumerable to also implement all those methods, which would have been counterproductive given the widespread use of this very common interface. Instead, by implementing the one method of this interface, LINQ can be used more or less immediately. This is especially true since, in practice, IEnumerable's GetEnumerator method usually just delegates to the GetEnumerator implementation of a private collection, list, or array. 
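The same productivity argument can be observed in Python: a class that implements only the iteration protocol immediately works with a whole library of generic functions (sum, max, sorted, itertools, and so on), much as one GetEnumerator implementation unlocks all of LINQ. The class name here is invented:

```python
import itertools

class Bag:
    """Implements only __iter__, delegating to a private list."""
    def __init__(self, *items):
        self._items = list(items)

    def __iter__(self):
        return iter(self._items)

b = Bag(3, 1, 2)
# All of these generic operations come "for free" from __iter__:
print(sum(b), max(b), sorted(b), list(itertools.islice(b, 2)))
```
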
=== Performance === That said, additional implementations of a feature provided by an extension method can be added to improve performance, or to deal with interfaces that are implemented differently, such as providing the compiler an implementation of IEnumerable specifically for arrays (in System.SZArrayHelper), which it will automatically choose for extension method calls on array-typed references, since their argument (this T[] value) is more specific than that of the extension method with the same name that operates on instances of the IEnumerable interface (this IEnumerable value). === Alleviating the need for a common base class === With generic classes, extension methods allow implementation of behavior that is available for all instantiations of the generic type without requiring them to derive from a common base class, and without restricting the type parameters to a specific inheritance branch. This is a significant advantage: without it, these situations would require a non-generic base class just to implement the shared feature, which in turn forces the generic subclass to perform boxing and/or casts whenever the type used is one of the type arguments. === Conservative use === A note of caution applies to preferring extension methods over other means of achieving reuse and proper object-oriented design. 
Extension methods might 'clutter' the automatic completion features of code editors, such as Visual Studio's IntelliSense. They should therefore either be placed in their own namespace, so that the developer can import them selectively, or be defined on a type specific enough that the method appears in IntelliSense only when really relevant. They can also be hard to find: a developer who expects a method may miss it in IntelliSense because of a missing using statement, since the developer may not associate the method with the class that defines it, or even the namespace in which it lives, but rather with the type that it extends and the namespace that type lives in. == The problem == In programming, situations arise where it is necessary to add functionality to an existing class, for instance by adding a new method. Normally the programmer would modify the existing class's source code, but this forces the programmer to recompile all binaries with these new changes and requires that the programmer be able to modify the class, which is not always possible, for example when using classes from a third-party assembly. This is typically worked around in one of three ways, all of which are somewhat limited and unintuitive: Inherit the class and then implement the functionality in an instance method in the derived class. Implement the functionality in a static method added to a helper class. Use aggregation instead of inheritance. == Current C# solutions == The first option is in principle easier, but it is unfortunately limited by the fact that many classes restrict inheritance of certain members or forbid it completely. This includes sealed classes and the different primitive data types in C# such as int, float and string. 
The second option, on the other hand, does not share these restrictions, but it may be less intuitive as it requires a reference to a separate class instead of using the methods of the class in question directly. As an example, consider the need to extend the string class with a new reverse method whose return value is a string with the characters in reversed order. Because the string class is a sealed type, the method would typically be added to a new utility class in a manner similar to the following: This may, however, become increasingly difficult to navigate as the library of utility methods and classes increases, particularly for newcomers. The location is also less intuitive because, unlike most string methods, it would not be a member of the string class, but in a completely different class altogether. A better syntax would therefore be the following: == Current VB.NET solutions == In most ways, the VB.NET solution is similar to the C# solution above. However, VB.NET has a unique advantage in that it allows members to be passed in to the extension by reference (C# only allows by value), allowing for the following: Because Visual Basic allows the source object to be passed in by reference, it is possible to make changes to the source object directly, without the need to create another variable. It is also more intuitive as it works in a fashion consistent with existing methods of classes. == Extension methods == The new language feature of extension methods in C# 3.0, however, makes the latter code possible. This approach requires a static class and a static method, as follows. In the definition, the modifier 'this' before the first argument specifies that it's an extension method (in this case to the type 'string'). In a call, the first argument is not 'passed in' because it is already known as the 'calling' object (the object before the dot). 
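In Python, where built-in types such as str likewise cannot have methods added (the analogue of being sealed), the helper-function approach remains the idiom; a minimal sketch of the reverse operation:

```python
# Helper-function style: the operation is not a member of str itself,
# so it is called in prefix form, reverse(s), rather than as s.reverse().
def reverse(s: str) -> str:
    return s[::-1]

print(reverse("extension"))  # noisnetxe
```
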
The major difference between calling extension methods and calling static helper methods is that static methods are called in prefix notation, whereas extension methods are called in infix notation. The latter leads to more readable code when the result of one operation is used for another: compare nested prefix calls such as C(B(A(x))) with the chained infix form x.A().B().C(). == Naming conflicts in extension methods and instance methods == In C# 3.0, both an instance method and an extension method with the same signature can exist for a class. In such a scenario, the instance method is preferred over the extension method. Neither the compiler nor the Microsoft Visual Studio IDE warns about the naming conflict. Consider this C# class, where the GetAlphabet() method is invoked on an instance of this class: Result of invoking GetAlphabet() on an instance of AlphabetMaker if only the extension method exists: ABC Result if both the instance method and the extension method exist: abc == See also == UFCS, a way to use free functions as extension methods provided in the D programming language Type classes Anonymous types Lambda expressions Expression trees Runtime alteration Duck typing == References == == External links == Open source collection of C# extension methods libraries. Now archived at Codeplex Extension method in C# Extension methods C# Extension Methods. A collection. extensionmethod.net Large database with C#, Visual Basic, F# and JavaScript extension methods Explanation and code example Defining your own functions in jQuery Uniform function call syntax Extension methods in C# Extension Methods in Java with Manifold Extension Methods in Java with Lombok Extension Methods in Java with Fluent Extension functions in Kotlin
Wikipedia/Extension_methods
In computer programming, a function object is a construct allowing an object to be invoked or called as if it were an ordinary function, usually with the same syntax as an ordinary function call. In some languages, particularly C++, function objects are often called functors (not related to the functional programming concept). == Description == A typical use of a function object is in writing callback functions. A callback in procedural languages, such as C, may be performed by using function pointers. However, it can be difficult or awkward to pass state into or out of the callback function. This restriction also inhibits more dynamic behavior of the function. A function object solves those problems since the function is really a façade for a full object, carrying its own state. Many modern (and some older) languages, e.g. C++, Eiffel, Groovy, Lisp, Smalltalk, Perl, PHP, Python, Ruby, Scala, and many others, support first-class function objects and may even make significant use of them. Functional programming languages additionally support closures, i.e. first-class functions that can 'close over' variables in their surrounding environment at creation time. During compilation, a transformation known as lambda lifting converts the closures into function objects. == In C and C++ == Consider the example of a sorting routine that uses a callback function to define an ordering relation between a pair of items. The following C/C++ program uses function pointers: In C++, a function object may be used instead of an ordinary function by defining a class that overloads the function call operator by defining an operator() member function. In C++, this may appear as follows: Notice that the syntax for providing the callback to the std::sort() function is identical, but an object is passed instead of a function pointer. 
When invoked, the callback function is executed just as any other member function, and therefore has full access to the other members (data or functions) of the object. Of course, this is just a trivial example. To understand what power a functor provides beyond a regular function, consider the common use case of sorting objects by a particular field. In the following example, a functor is used to sort a simple employee database by each employee's ID number. In C++11, the lambda expression provides a more succinct way to do the same thing. It is possible to use function objects in situations other than as callback functions. In this case, the shortened term functor is normally not used for the function object. In addition to class-type functors, other kinds of function objects are also possible in C++. They can take advantage of C++'s member-pointer or template facilities. The expressiveness of templates allows some functional programming techniques to be used, such as defining function objects in terms of other function objects (like function composition). Much of the C++ Standard Template Library (STL) makes heavy use of template-based function objects. Another way to create a function object in C++ is to define a non-explicit conversion function to a function pointer type, a function reference type, or a reference to function pointer type. Assuming the conversion does not discard cv-qualifiers, this allows an object of that type to be used as a function with the same signature as the type it is converted to. Modifying an earlier example to use this, we obtain the following class, whose instances can be called like function pointers: === Maintaining state === Another advantage of function objects is their ability to maintain state that affects operator() between calls. For example, the following code defines a generator counting from 10 upwards and is invoked 11 times. 
In C++14 or later, the example above could be rewritten as: == In C# == In C#, function objects are declared via delegates. A delegate can be declared using a named method or a lambda expression. Here is an example using a named method. Here is an example using a lambda expression. == In D == D provides several ways to declare function objects: Lisp/Python-style via closures or C#-style via delegates, respectively: The difference between a delegate and a closure in D is automatically and conservatively determined by the compiler. D also supports function literals, that allow a lambda-style definition: To allow the compiler to inline the code (see above), function objects can also be specified C++-style via operator overloading: == In Eiffel == In the Eiffel software development method and language, operations and objects are seen always as separate concepts. However, the agent mechanism facilitates the modeling of operations as runtime objects. Agents satisfy the range of application attributed to function objects, such as being passed as arguments in procedural calls or specified as callback routines. The design of the agent mechanism in Eiffel attempts to reflect the object-oriented nature of the method and language. An agent is an object that generally is a direct instance of one of the two library classes, which model the two types of routines in Eiffel: PROCEDURE and FUNCTION. These two classes descend from the more abstract ROUTINE. Within software text, the language keyword agent allows agents to be constructed in a compact form. In the following example, the goal is to add the action of stepping the gauge forward to the list of actions to be executed in the event that a button is clicked. The routine extend referenced in the example above is a feature of a class in a graphical user interface (GUI) library to provide event-driven programming capabilities. In other library classes, agents are seen to be used for different purposes. 
In a library supporting data structures, for example, a class modeling linear structures effects universal quantification with a function for_all of type BOOLEAN that accepts an agent, an instance of FUNCTION, as an argument. So, in the following example, my_action is executed only if all members of my_list contain the character '!': When agents are created, the arguments to the routines they model and even the target object to which they are applied can be either closed or left open. Closed arguments and targets are given values at agent creation time. The assignment of values for open arguments and targets is deferred until some point after the agent is created. The routine for_all expects as an argument an agent representing a function with one open argument or target that conforms to actual generic parameter for the structure (STRING in this example.) When the target of an agent is left open, the class name of the expected target, enclosed in braces, is substituted for an object reference as shown in the text agent {STRING}.has ('!') in the example above. When an argument is left open, the question mark character ('?') is coded as a placeholder for the open argument. The ability to close or leave open targets and arguments is intended to improve the flexibility of the agent mechanism. Consider a class that contains the following procedure to print a string on standard output after a new line: The following snippet, assumed to be in the same class, uses print_on_new_line to demonstrate the mixing of open arguments and open targets in agents used as arguments to the same routine. This example uses the procedure do_all for linear structures, which executes the routine modeled by an agent for each item in the structure. The sequence of three instructions prints the strings in my_list, converts the strings to lowercase, and then prints them again. 
Procedure do_all iterates across the structure executing the routine substituting the current item for either the open argument (in the case of the agents based on print_on_new_line), or the open target (in the case of the agent based on to_lower). Open and closed arguments and targets also allow the use of routines that call for more arguments than are required by closing all but the necessary number of arguments: The Eiffel agent mechanism is detailed in the Eiffel ISO/ECMA standard document. == In Java == Java has no first-class functions, so function objects are usually expressed by an interface with a single method (most commonly the Callable interface), typically with the implementation being an anonymous inner class, or, starting in Java 8, a lambda. For an example from Java's standard library, java.util.Collections.sort() takes a List and a functor whose role is to compare objects in the List. Without first-class functions, the function is part of the Comparator interface. This could be used as follows. In Java 8+, this can be written as: == In JavaScript == In JavaScript, functions are first class objects. JavaScript also supports closures. Compare the following with the subsequent Python example. An example of this in use: == In Julia == In Julia, methods are associated with types, so it is possible to make any arbitrary Julia object "callable" by adding methods to its type. (Such "callable" objects are sometimes called "functors.") An example is this accumulator mutable struct (based on Paul Graham's study on programming language syntax and clarity): Such an accumulator can also be implemented using closure: == In Lisp and Scheme == In Lisp family languages such as Common Lisp, Scheme, and others, functions are objects, just like strings, vectors, lists, and numbers. 
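Eiffel's closed and open arguments correspond to partial application, which Python (whose functions are likewise objects) expresses with functools.partial; the function names below are invented for illustration:

```python
from functools import partial

def label(prefix, text):
    """A two-argument routine; we will 'close' its first argument."""
    return prefix + text

# Close the first argument, leave the second open:
exclaim = partial(label, ">> ")

print([exclaim(s) for s in ["one", "two"]])  # ['>> one', '>> two']
```
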
A closure-constructing operator creates a function object from a part of the program: the part of code given as an argument to the operator is part of the function, and so is the lexical environment: the bindings of the lexically visible variables are captured and stored in the function object, which is more commonly called a closure. The captured bindings play the role of member variables, and the code part of the closure plays the role of the anonymous member function, just like operator () in C++. The closure constructor has the syntax (lambda (parameters ...) code ...). The (parameters ...) part allows an interface to be declared, so that the function takes the declared parameters. The code ... part consists of expressions that are evaluated when the functor is called. Many uses of functors in languages like C++ are simply emulations of the missing closure constructor. Since the programmer cannot directly construct a closure, they must define a class that has all of the necessary state variables, and also a member function. Then, construct an instance of that class instead, ensuring that all the member variables are initialized through its constructor. The values are derived precisely from those local variables that ought to be captured directly by a closure. A function object using the class system in Common Lisp, without closures: Since there is no standard way to make funcallable objects in Common Lisp, we fake it by defining a generic function called FUNCTOR-CALL. This can be specialized for any class whatsoever. The standard FUNCALL function is not generic; it only takes function objects. It is this FUNCTOR-CALL generic function that gives us function objects that can be invoked with almost the same syntax as ordinary functions: FUNCTOR-CALL instead of FUNCALL. Some Lisps provide funcallable objects as a simple extension. 
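A closure-based counter like the MAKE-COUNTER factory discussed below can be rendered in Python, where the returned function captures its state variable directly, with no auxiliary class:

```python
def make_counter(initial_value):
    count = initial_value
    def counter():
        # The closure captures and mutates the enclosing binding.
        nonlocal count
        count += 1
        return count
    return counter

c = make_counter(10)
print([c() for _ in range(3)])  # [11, 12, 13]
```
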
Making objects callable using the same syntax as functions is a fairly trivial business. Making a function call operator work with different kinds of function things, whether they be class objects or closures is no more complicated than making a + operator that works with different kinds of numbers, such as integers, reals or complex numbers. Now, a counter implemented using a closure. This is much more brief and direct. The INITIAL-VALUE argument of the MAKE-COUNTER factory function is captured and used directly. It does not have to be copied into some auxiliary class object through a constructor. It is the counter. An auxiliary object is created, but that happens behind the scenes. Scheme makes closures even simpler, and Scheme code tends to use such higher-order programming somewhat more idiomatically. More than one closure can be created in the same lexical environment. A vector of closures, each implementing a specific kind of operation, can quite faithfully emulate an object that has a set of virtual operations. That type of single dispatch object-oriented programming can be done fully with closures. Thus there exists a kind of tunnel being dug from both sides of the proverbial mountain. Programmers in OOP languages discover function objects by restricting objects to have one main function to do that object's functional purpose, and even eliminate its name so that it looks like the object is being called! While programmers who use closures are not surprised that an object is called like a function, they discover that multiple closures sharing the same environment can provide a complete set of abstract operations like a virtual table for single dispatch type OOP. == In Objective-C == In Objective-C, a function object can be created from the NSInvocation class. Construction of a function object requires a method signature, the target object, and the target selector. 
Here is an example of creating an invocation to the current object's myMethod: An advantage of NSInvocation is that the target object can be modified after creation. A single NSInvocation can be created and then called for each of any number of targets, for instance from an observable object. An NSInvocation can be created from only a protocol, but it is not straightforward. == In Perl == In Perl, a function object can be created either from a class's constructor returning a function closed over the object's instance data, blessed into the class: or by overloading the &{} operator so that the object can be used as a function: In both cases the function object can be used either using the dereferencing arrow syntax $ref->(@arguments): or using the coderef dereferencing syntax &$ref(@arguments): == In PHP == PHP 5.3+ has first-class functions that can be used, e.g., as a parameter to the usort() function: PHP 5.3+ also supports lambda functions and closures. An example of this in use: It is also possible in PHP 5.3+ to make objects invokable by adding a magic __invoke() method to their class: == In PowerShell == In the Windows PowerShell language, a script block is a collection of statements or expressions that can be used as a single unit. A script block can accept arguments and return values. A script block is an instance of a Microsoft .NET Framework type System.Management.Automation.ScriptBlock. == In Python == In Python, functions are first-class objects, just like strings, numbers, lists etc. This feature eliminates the need to write a function object in many cases. Any object with a __call__() method can be called using function-call syntax. 
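For instance, an object with __call__ can serve as an accumulator that keeps its running total between calls (a sketch of the accumulator idea after Paul Graham's accumulator-generator example):

```python
class Accumulator:
    """Callable object: each call adds to, and returns, a running total."""
    def __init__(self, n=0):
        self.n = n

    def __call__(self, amount):
        self.n += amount
        return self.n

acc = Accumulator()
print(acc(10), acc(5))  # 10 15
```
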
An example is this accumulator class (based on Paul Graham's study on programming language syntax and clarity): An example of this in use (using the interactive interpreter): Since functions are objects, they can also be defined locally, given attributes, and returned by other functions, as demonstrated in the following example: == In Ruby == In Ruby, several objects can be considered function objects, in particular Method and Proc objects. Ruby also has two kinds of objects that can be thought of as semi-function objects: UnboundMethod and block. UnboundMethods must first be bound to an object (thus becoming a Method) before they can be used as a function object. Blocks can be called like function objects, but to be used in any other capacity as an object (e.g. passed as an argument) they must first be converted to a Proc. More recently, symbols (accessed via the literal unary indicator :) can also be converted to Procs. Using Ruby's unary & operator, equivalent to calling to_proc on an object and assuming that method exists, the Ruby Extensions Project created a simple hack. Now, method foo can be a function object, i.e. a Proc, via &:foo and used via takes_a_functor(&:foo). Symbol.to_proc was officially added to Ruby on June 11, 2006, during RubyKaigi2006. Because of the variety of forms, the term Functor is not generally used in Ruby to mean a Function object. However, a type of dispatch delegation introduced by the Ruby Facets project is named Functor; its most basic definition is: This usage is more akin to that used by functional programming languages, like ML, and the original mathematical terminology. 
The ML family of functional programming languages uses the term functor to represent a mapping from modules to modules, or from types to types and is a technique for reusing code. Functors used in this manner are analogous to the original mathematical meaning of functor in category theory, or to the use of generic programming in C++, Java or Ada. In Haskell, the term functor is also used for a concept related to the meaning of functor in category theory. In Prolog and related languages, functor is a synonym for function symbol. == See also == Callback (computer science) Closure (computer science) Function pointer Higher-order function Command pattern Currying == Notes == == References == == Further reading == David Vandevoorde & Nicolai M Josuttis (2006). C++ Templates: The Complete Guide, ISBN 0-201-73484-2: Specifically, chapter 22 is devoted to function objects. == External links == Description from the Portland Pattern Repository C++ Advanced Design Issues - Asynchronous C++ Archived 2020-09-22 at the Wayback Machine by Kevlin Henney The Function Pointer Tutorials by Lars Haendel (2000/2001) Article "Generalized Function Pointers" by Herb Sutter Generic Algorithms for Java PHP Functors - Function Objects in PHP What the heck is a functionoid, and why would I use one? (C++ FAQ)
Wikipedia/Function_object
In computer programming, an anamorphism is a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B. Then you apply f to B to get C, and so on until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence. The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type denotes the assignment of a coalgebra to its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as unfolds. The categorical dual (aka opposite) of the anamorphism is the catamorphism. == Anamorphisms in functional programming == In functional programming, an anamorphism is a generalization of the concept of unfolds on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which is parameterized by functions that determine the next single step of the construction. The data type in question is defined as the greatest fixed point ν X . F X of a functor F. By the universal property of final coalgebras, there is a unique coalgebra morphism A → ν X . F X for any other F-coalgebra a : A → F A. Thus, one can define functions from a type A _into_ a coinductive datatype by specifying a coalgebra structure a on A. === Example: Potentially infinite lists === As an example, the type of potentially infinite lists (with elements of a fixed type value) is given as the fixed point [value] = ν X . value × X + 1, i.e. a list consists either of a value and a further list, or it is empty. 
A (pseudo-)Haskell definition might look like this: It is the fixed point of the functor F value, where: One can easily check that indeed the type [value] is isomorphic to F value [value], and thus [value] is the fixed point. (Also note that in Haskell, least and greatest fixed points of functors coincide; therefore inductive lists are the same as coinductive, potentially infinite lists.) The anamorphism for lists (then usually known as unfold) would build a (potentially infinite) list from a state value. Typically, the unfold takes a state value x and a function f that yields either a pair of a value and a new state, or a singleton to mark the end of the list. The anamorphism would then begin with a first seed, compute whether the list continues or ends, and in the case of a nonempty list, prepend the computed value to the recursive call to the anamorphism. A Haskell definition of an unfold, or anamorphism for lists, called ana, is as follows: We can now implement quite general functions using ana, for example a countdown: This function will decrement an integer and output it at the same time, until it is negative, at which point it will mark the end of the list. Correspondingly, ana f 3 will compute the list [2,1,0]. === Anamorphisms on other data structures === An anamorphism can be defined for any recursive type, according to a generic pattern, generalizing the second version of ana for lists. For example, the unfold for the tree data structure is as follows: To better see the relationship between the recursive type and its anamorphism, note that Tree and List can be defined thus: The analogy with ana appears by renaming b in its type: With these definitions, the argument to the constructor of the type has the same type as the return type of the first argument of ana, with the recursive mentions of the type replaced with b.
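The Haskell listings referred to above are not included in this text; as a hedged sketch, the list anamorphism and the countdown example can be rendered in Python (the names ana and countdown follow the article's usage):

```python
def ana(f, seed):
    """List anamorphism (unfold): f maps a state to either None (stop)
    or a pair (value, new_state); the produced values are collected in order."""
    out = []
    while True:
        step = f(seed)
        if step is None:
            return out
        value, seed = step
        out.append(value)

def countdown(x):
    # Decrement and output at the same time; stop once the result is negative.
    y = x - 1
    return None if y < 0 else (y, y)

# ana(countdown, 3) == [2, 1, 0]
```

As in the text, the seed 3 unfolds into the list [2, 1, 0], and a seed of 0 unfolds into the empty list.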
== History == One of the first publications to introduce the notion of an anamorphism in the context of programming was the paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer et al., which was in the context of the Squiggol programming language. == Applications == Functions like zip and iterate are examples of anamorphisms. zip takes a pair of lists, say ['a','b','c'] and [1,2,3] and returns a list of pairs [('a',1),('b',2),('c',3)]. Iterate takes a thing, x, and a function, f, from such things to such things, and returns the infinite list that comes from repeated application of f, i.e. the list [x, (f x), (f (f x)), (f (f (f x))), ...]. To prove this, we can implement both using our generic unfold, ana, using a simple recursive routine: In a language like Haskell, even the abstract functions fold, unfold and ana are merely defined terms, as we have seen from the definitions given above. == Anamorphisms in category theory == In category theory, anamorphisms are the categorical dual of catamorphisms (and catamorphisms are the categorical dual of anamorphisms). That means the following. Suppose (A, fin) is a final F-coalgebra for some endofunctor F of some category into itself. Thus, fin is a morphism from A to FA, and since it is assumed to be final we know that whenever (X, f) is another F-coalgebra (a morphism f from X to FX), there will be a unique homomorphism h from (X, f) to (A, fin), that is a morphism h from X to A such that fin . h = Fh . f. Then for each such f we denote by ana f that uniquely specified morphism h. In other words, we have the following defining relationship, given some fixed F, A, and fin as above: h = a n a f {\displaystyle h=\mathrm {ana} \ f} f i n ∘ h = F h ∘ f {\displaystyle \mathrm {fin} \circ h=Fh\circ f} === Notation === A notation for ana f found in the literature is [ ( f ) ] {\displaystyle [\!(f)\!]} . 
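As a sketch of the Applications examples, zip and iterate can both be written in terms of a generic unfold; here is an illustrative Python version (generator-based, so that iterate's infinite list is representable):

```python
from itertools import islice

def ana(f, seed):
    """Generator-based unfold: yield values while f(seed) keeps producing
    (value, new_seed) pairs; None marks the end of the (possibly infinite) list."""
    while (step := f(seed)) is not None:
        value, seed = step
        yield value

def zip_ana(xs, ys):
    # Seed is the pair of remaining lists; stop when either is exhausted.
    def f(state):
        a, b = state
        if not a or not b:
            return None
        return ((a[0], b[0]), (a[1:], b[1:]))
    return list(ana(f, (xs, ys)))

def iterate_ana(g, x):
    # Infinite stream x, g(x), g(g(x)), ...; the step function never returns None.
    return ana(lambda s: (s, g(s)), x)
```

For example, zip_ana(['a','b','c'], [1,2,3]) gives [('a',1),('b',2),('c',3)], and the first four elements of iterate_ana(doubling, 1) are [1, 2, 4, 8].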
The brackets used are known as lens brackets, after which anamorphisms are sometimes referred to as lenses. == See also == Morphism Morphisms of F-algebras From an initial algebra to an algebra: Catamorphism An anamorphism followed by a catamorphism: Hylomorphism Extension of the idea of catamorphisms: Paramorphism Extension of the idea of anamorphisms: Apomorphism == References == == External links == Anamorphisms in Haskell
Wikipedia/Unfold_(higher-order_function)
In mathematics, an iterated binary operation is an extension of a binary operation on a set S to a function on finite sequences of elements of S through repeated application. Common examples include the extension of the addition operation to the summation operation, and the extension of the multiplication operation to the product operation. Other operations, e.g., the set-theoretic operations union and intersection, are also often iterated, but the iterations are not given separate names. In print, summation and product are represented by special symbols; but other iterated operators often are denoted by larger variants of the symbol for the ordinary binary operator. Thus, the iterations of the four operations mentioned above are denoted ∑ , ∏ , ⋃ , {\displaystyle \sum ,\ \prod ,\ \bigcup ,} and ⋂ {\displaystyle \bigcap } , respectively. More generally, iteration of a binary function is generally denoted by a slash: iteration of f {\displaystyle f} over the sequence ( a 1 , a 2 … , a n ) {\displaystyle (a_{1},a_{2}\ldots ,a_{n})} is denoted by f / ( a 1 , a 2 … , a n ) {\displaystyle f/(a_{1},a_{2}\ldots ,a_{n})} , following the notation for reduce in Bird–Meertens formalism. In general, there is more than one way to extend a binary operation to operate on finite sequences, depending on whether the operator is associative, and whether the operator has identity elements. == Definition == Denote by aj,k, with j ≥ 0 and k ≥ j, the finite sequence of length k − j of elements of S, with members (ai), for j ≤ i < k. Note that if k = j, the sequence is empty. For f : S × S → S, define a new function Fl on finite nonempty sequences of elements of S, where F l ( a 0 , k ) = { a 0 , k = 1 f ( F l ( a 0 , k − 1 ) , a k − 1 ) , k > 1. {\displaystyle F_{l}(\mathbf {a} _{0,k})={\begin{cases}a_{0},&k=1\\f(F_{l}(\mathbf {a} _{0,k-1}),a_{k-1}),&k>1.\end{cases}}} Similarly, define F r ( a 0 , k ) = { a 0 , k = 1 f ( a 0 , F r ( a 1 , k ) ) , k > 1. 
{\displaystyle F_{r}(\mathbf {a} _{0,k})={\begin{cases}a_{0},&k=1\\f(a_{0},F_{r}(\mathbf {a} _{1,k})),&k>1.\end{cases}}} If f has a unique left identity e, the definition of Fl can be modified to operate on empty sequences by defining the value of Fl on an empty sequence to be e (the previous base case on sequences of length 1 becomes redundant). Similarly, Fr can be modified to operate on empty sequences if f has a unique right identity. If f is associative, then Fl equals Fr, and we can simply write F. Moreover, if an identity element e exists, then it is unique (see Monoid). If f is commutative and associative, then F can operate on any non-empty finite multiset by applying it to an arbitrary enumeration of the multiset. If f moreover has an identity element e, then this is defined to be the value of F on an empty multiset. If f is idempotent, then the above definitions can be extended to finite sets. If S is also equipped with a metric, or more generally with a topology that is Hausdorff, so that the concept of a limit of a sequence is defined in S, then an infinite iteration on a countable sequence in S is defined exactly when the corresponding sequence of finite iterations converges. Thus, e.g., if a0, a1, a2, a3, … is an infinite sequence of real numbers, then the infinite product ∏ i = 0 ∞ a i {\textstyle \prod _{i=0}^{\infty }a_{i}} is defined, and equal to lim n → ∞ ∏ i = 0 n a i , {\textstyle \lim \limits _{n\to \infty }\prod _{i=0}^{n}a_{i},} if and only if that limit exists. == Non-associative binary operation == The general, non-associative binary operation is given by a magma. The act of iterating on a non-associative binary operation may be represented as a binary tree. == Notation == Iterated binary operations are used to represent an operation that will be repeated over a set subject to some constraints.
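The left and right iterations Fl and Fr defined above can be sketched in Python for nonempty sequences:

```python
def iter_left(f, seq):
    """F_l: combine from the left, f(...f(f(a0, a1), a2)..., a_{k-1})."""
    acc = seq[0]
    for x in seq[1:]:
        acc = f(acc, x)
    return acc

def iter_right(f, seq):
    """F_r: combine from the right, f(a0, f(a1, ...f(a_{k-2}, a_{k-1})...))."""
    if len(seq) == 1:
        return seq[0]
    return f(seq[0], iter_right(f, seq[1:]))
```

For associative f such as addition the two coincide, which is why one can simply write F; for a non-associative f such as subtraction they generally differ: iter_left gives (10 − 1) − 2 = 7 on [10, 1, 2], while iter_right gives 10 − (1 − 2) = 11.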
Typically the lower bound of a restriction is written under the symbol, and the upper bound over the symbol, though they may also be written as superscripts and subscripts in compact notation. Interpolation is performed over positive integers from the lower to upper bound, to produce the set which will be substituted into the index (below denoted as i ) for the repeated operations. Common notations include the big Sigma (repeated sum) and big Pi (repeated product) notations. ∑ i = 0 n − 1 i = 0 + 1 + 2 + ⋯ + ( n − 1 ) {\displaystyle \sum _{i=0}^{n-1}i=0+1+2+\dots +(n-1)} ∏ i = 0 n − 1 i = 0 × 1 × 2 × ⋯ × ( n − 1 ) {\displaystyle \prod _{i=0}^{n-1}i=0\times 1\times 2\times \dots \times (n-1)} It is possible to specify set membership or other logical constraints in place of explicit indices, in order to implicitly specify which elements of a set shall be used: ∑ x ∈ S x = x 1 + x 2 + x 3 + ⋯ + x n {\displaystyle \sum _{x\in S}x=x_{1}+x_{2}+x_{3}+\dots +x_{n}} Multiple conditions may be written either joined with a logical and or separately: ∑ ( i ∈ 2 N ) ∧ ( i ≤ n ) i = ∑ i ≤ n i ∈ 2 N i = 0 + 2 + 4 + ⋯ + n {\displaystyle \sum _{(i\in 2\mathbb {N} )\wedge (i\leq n)}i=\sum _{\stackrel {i\in 2\mathbb {N} }{i\leq n}}i=0+2+4+\dots +n} Less commonly, any binary operator such as exclusive or ( ⊕ {\displaystyle \oplus } ) or set union ( ∪ {\displaystyle \cup } ) may also be used. For example, if S is a set of logical propositions: ⋀ p ∈ S p = p 1 ∧ p 2 ∧ ⋯ ∧ p N {\displaystyle \bigwedge _{p\in S}p=p_{1}\wedge p_{2}\wedge \dots \wedge p_{N}} which is true iff all of the elements of S are true. == See also == Unary operation Unary function Binary operation Binary function Ternary operation == References == == External links == Bulk action Parallel prefix operation Archived 2013-06-03 at the Wayback Machine Nuprl iterated binary operations
Wikipedia/Iterated_binary_operation
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT). Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations. == Divide and conquer == The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list (see the picture). This approach is known as the merge sort algorithm. The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analogue in numerical computing, the bisection algorithm for root finding). 
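The merge sort scheme described above can be sketched as follows (an illustrative Python version, not tuned for performance):

```python
def merge_sort(xs):
    """Divide the list in half, sort each half recursively, then merge."""
    if len(xs) <= 1:                      # sufficiently simple: solve directly
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    out, i, j = [], 0, 0                  # interleave (merge) the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```

Each level of recursion does work proportional to the list length, and there are about log2 n levels, giving the familiar O(n log n) cost.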
These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class. An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search. == Early historical examples == Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into single subproblems, and indeed can be solved iteratively. Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC. Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC. 
An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later. An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945. Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O ( n log 2 ⁡ 3 ) {\displaystyle O(n^{\log _{2}3})} operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that Ω ( n 2 ) {\displaystyle \Omega (n^{2})} operations would be required for that task. As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929. == Advantages == === Solving difficult problems === Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining sub-problems to the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n {\displaystyle n} to move a tower of height n − 1 {\displaystyle n-1} . === Algorithm efficiency === The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. 
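Karatsuba's three-multiplication scheme mentioned above can be sketched as follows (illustrative Python; the digit-splitting shown is the simplest string-length variant, not an optimized one):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive half-size products."""
    if x < 10 or y < 10:                       # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    p = 10 ** n
    a, b = divmod(x, p)                        # x = a*p + b
    c, d = divmod(y, p)                        # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd    # equals a*d + b*c, with one product
    return ac * p * p + mid * p + bd
```

The trick is that a*d + b*c is recovered from (a+b)(c+d) − ac − bd, so only three half-size multiplications are needed instead of four, which yields the O(n^log2 3) bound.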
It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms. In all these examples, the D&C approach led to an improvement in the asymptotic cost of the solution. For example, if (a) the base cases have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size n {\displaystyle n} , and (b) there is a bounded number p {\displaystyle p} of sub-problems of size ~ n p {\displaystyle {\frac {n}{p}}} at each stage, then the cost of the divide-and-conquer algorithm will be O ( n log p ⁡ n ) {\displaystyle O(n\log _{p}n)} . For other types of divide-and-conquer approaches, running times can also be generalized. For example, when a) the work of splitting the problem and combining the partial solutions take c n {\displaystyle cn} time, where n {\displaystyle n} is the input size and c {\displaystyle c} is some constant; b) when n < 2 {\displaystyle n<2} , the algorithm takes time upper-bounded by c {\displaystyle c} , and c) there are q {\displaystyle q} subproblems where each subproblem has size ~ n 2 {\displaystyle {\frac {n}{2}}} . Then, the running times are as follows: if the number of subproblems q > 2 {\displaystyle q>2} , then the divide-and-conquer algorithm's running time is bounded by O ( n log 2 ⁡ q ) {\displaystyle O(n^{\log _{2}q})} . if the number of subproblems is exactly one, then the divide-and-conquer algorithm's running time is bounded by O ( n ) {\displaystyle O(n)} . If, instead, the work of splitting the problem and combining the partial solutions take c n 2 {\displaystyle cn^{2}} time, and there are 2 subproblems where each has size n 2 {\displaystyle {\frac {n}{2}}} , then the running time of the divide-and-conquer algorithm is bounded by O ( n 2 ) {\displaystyle O(n^{2})} . 
=== Parallelism === Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance because distinct sub-problems can be executed on different processors. === Memory access === Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter. Moreover, D&C algorithms can be designed for important algorithms (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms: they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size; this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine. The same advantage exists with regard to other hierarchical storage systems, such as NUMA or virtual memory, as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels. === Roundoff control === In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method.
For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called pairwise summation that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first and pays the overhead of the recursive calls, it is usually more accurate. == Implementation issues == === Recursion === Divide-and-conquer algorithms are naturally implemented as recursive procedures. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the procedure call stack. A recursive function is a function that calls itself within its definition. === Explicit stack === Divide-and-conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a stack, queue, or priority queue. This approach allows more freedom in the choice of the sub-problem that is to be solved next, a feature that is important in some applications — e.g. in breadth-first recursion and the branch-and-bound method for function optimization. This approach is also the standard solution in programming languages that do not provide support for recursive procedures. === Stack size === In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack, otherwise, the execution may fail because of stack overflow. D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than log 2 ⁡ n {\displaystyle \log _{2}n} nested recursive calls to sort n {\displaystyle n} items. 
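The pairwise summation scheme described earlier can be sketched in a few lines (illustrative Python):

```python
def pairwise_sum(xs):
    """Recursively split the data in half and add the two partial sums."""
    if len(xs) <= 2:
        return sum(xs)          # small base case solved directly
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])
```

It performs the same number of additions as a simple accumulation loop, but the recursion keeps the partial sums balanced in magnitude, which typically reduces floating-point roundoff error.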
Stack overflow may be difficult to avoid when using recursive procedures since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information in the recursion stack than is strictly necessary, such as return address, unchanging parameters, and the internal variables of the procedure. Thus, the risk of stack overflow can be reduced by minimizing the parameters and internal variables of the recursive procedure or by using an explicit stack structure. === Choosing the base cases === In any recursive algorithm, there is considerable freedom in the choice of the base cases, the small subproblems that are solved directly in order to terminate the recursion. Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, a Fast Fourier Transform algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing. On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a hybrid algorithm. This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion. In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. 
For example, in a tree, rather than recursing to a child node and then checking whether it is null, checking for null before recursing avoids half the function calls in some algorithms on binary trees. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack. Thus, for example, many library implementations of quicksort will switch to a simple loop-based insertion sort (or similar) algorithm once the number of items to be sorted is sufficiently small. Note that, if the empty list were the only base case, sorting a list with n {\displaystyle n} entries would entail maximally n {\displaystyle n} quicksort calls that would do nothing but return immediately. Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation. Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for a predetermined set of fixed sizes where the algorithm can be completely unrolled into code that has no recursion, loops, or conditionals (related to the technique of partial evaluation). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes. Source-code generation methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.
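A hybrid quicksort of the kind described above, switching to insertion sort below a cutoff (the cutoff value here is arbitrary and would be tuned in practice), might look like:

```python
def hybrid_sort(xs, cutoff=8):
    """Quicksort-style divide and conquer with an insertion-sort base case."""
    if len(xs) <= cutoff:
        # Insertion sort: cheap for tiny inputs, avoids do-nothing recursive calls.
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return hybrid_sort(less, cutoff) + equal + hybrid_sort(greater, cutoff)
```

This is a sketch of the structure only; library quicksorts sort in place and choose pivots more carefully, but the large-base-case idea is the same.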
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case. === Dynamic programming for overlapping subproblems === For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as memoization. Followed to the limit, it leads to bottom-up divide-and-conquer algorithms such as dynamic programming. == See also == Akra–Bazzi method – Method in computer science Decomposable aggregation function – Type of function in database management "Divide and conquer" – Strategy in politics and sociology Fork–join model – Way of setting up and executing parallel computer programs Master theorem (analysis of algorithms) – Tool for analyzing divide-and-conquer algorithms Mathematical induction – Form of mathematical proof MapReduce – Parallel programming model Heuristic (computer science) – Type of algorithm, produces approximately correct solutions == References ==
Wikipedia/Divide-and-conquer_method
In computer science, the Tak function is a recursive function, named after Ikuo Takeuchi. It is defined as follows: τ ( x , y , z ) = { τ ( τ ( x − 1 , y , z ) , τ ( y − 1 , z , x ) , τ ( z − 1 , x , y ) ) if y < x z otherwise {\displaystyle \tau (x,y,z)={\begin{cases}\tau (\tau (x-1,y,z),\tau (y-1,z,x),\tau (z-1,x,y))&{\text{if }}y<x\\z&{\text{otherwise}}\end{cases}}} This function is often used as a benchmark for languages with optimization for recursion. == tak() vs. tarai() == The original definition by Takeuchi was as follows: tarai is short for たらい回し (tarai mawashi, "to pass around") in Japanese. John McCarthy named this function tak() after Takeuchi. However, in certain later references, the y somehow got turned into the z. This is a small but significant difference, because the original version benefits significantly from lazy evaluation. Though written in exactly the same manner as the others, the Haskell code below runs much faster. One can easily accelerate this function via memoization, yet lazy evaluation still wins. The best known way to optimize tarai is to use a mutually recursive helper function as follows. Here is an efficient implementation of tarai() in C: Note the additional check for (x <= y) before z (the third argument) is evaluated, avoiding unnecessary recursive evaluation. == References == == External links == Weisstein, Eric W. "TAK Function". MathWorld. TAK Function
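A direct transcription of the recurrence above (McCarthy's tak, i.e. the z-returning variant) can be written in Python; memoization is added here as an assumption of this sketch, since the benchmark form deliberately omits it, so that the deeply branching recursion stays fast:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tak(x, y, z):
    """McCarthy's tak: returns z unless y < x, else recurses on three shifted triples."""
    if y < x:
        return tak(tak(x - 1, y, z),
                   tak(y - 1, z, x),
                   tak(z - 1, x, y))
    return z
```

The classic benchmark call tak(18, 12, 6) evaluates to 7.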
Wikipedia/Tak_(function)
Microsoft Developer Network (MSDN) was the division of Microsoft responsible for managing the firm's relationship with developers and testers, such as hardware developers interested in the operating system (OS), and software developers developing on the various OS platforms or using the API or scripting languages of Microsoft's applications. The relationship management was situated in assorted media: web sites, newsletters, developer conferences, trade media, blogs and DVD distribution. Starting in January 2020, the website was fully integrated with Microsoft Docs (itself integrated into Microsoft Learn in 2022). == Websites == MSDN's primary web presence at msdn.microsoft.com was a collection of sites for the developer community that provided information, documentation, and discussion that was authored both by Microsoft and by the community at large. Microsoft later began placing emphasis on incorporation of forums, blogs, library annotations and social bookmarking to make MSDN an open dialog with the developer community rather than a one-way service. The main website, and most of its constituent applications below were available in 56 or more languages. === Library === MSDN Library was a library of official technical documentation intended for independent developers of software for Microsoft Windows. MSDN Library documented the APIs that ship with Microsoft products and also included sample code, technical articles, and other programming information. The library was freely available on the web, with CDs and DVDs of the most recent materials initially issued quarterly as part of an MSDN subscription. However, beginning in 2006, they were available to be freely downloaded from Microsoft Download Center in the form of ISO images. Visual Studio Express edition integrated only with MSDN Express Library, which was a subset of the full MSDN Library, although either edition of the MSDN Library could be freely downloaded and installed standalone. 
In Visual Studio 2010 MSDN Library was replaced with the new Help System, which was installed as a part of Visual Studio 2010 installation. Help Library Manager was used to install Help Content books covering selected topics. In 2016, Microsoft introduced the new technical documentation platform, Microsoft Docs, intended as a replacement of the TechNet and MSDN libraries. Over the next two years, the content of the MSDN Library was gradually migrated into Microsoft Docs. In 2022, Microsoft Docs was itself incorporated into Microsoft Learn. MSDN Library pages now redirect to the corresponding Microsoft Learn pages. ==== Integration with Visual Studio ==== Each edition of MSDN Library could only be accessed with one help viewer (Microsoft Document Explorer or other help viewer), which was integrated with the then current single version or sometimes two versions of Visual Studio. In addition, each new version of Visual Studio did not integrate with an earlier version of MSDN. A compatible MSDN Library was released with each new version of Visual Studio and included on the Visual Studio DVD. As newer versions of Visual Studio were released, newer editions of MSDN Library did not integrate with older Visual Studio versions and did not even include old/obsolete documentation for deprecated or discontinued products. MSDN Library versions could be installed side-by-side, that is, both the older as well as the newer versions of MSDN Library could co-exist. === Forums === MSDN Forums were the web-based forums used by the community to discuss a wide variety of software development topics. MSDN Forums were migrated to an all-new platform during 2008 that provided new features designed to improve efficiency such as inline preview of threads, AJAX filtering, and a slide-up post editor. === Blogs === MSDN blogs was a series of blogs that were hosted under Microsoft's domain blogs.msdn.com. Some blogs were dedicated to a product – e.g. 
Visual Studio, Internet Explorer, PowerShell – or a version of a product – e.g. Windows 7, Windows 8 – while others belonged to a Microsoft employee, e.g. Michael Howard or Raymond Chen. In May 2020, the MSDN and TechNet blogs were closed and the content was archived at Microsoft Docs. === Social bookmarking === Social bookmarking on MSDN Social was first launched in 2008, built on a new web platform that had user-tagging and feeds at its core. The goal of the social bookmarking application was to provide a method whereby members of the developer community could: Contribute to a database of quality links on any topic from across the web. By filtering on one or more tags (e.g. ".net" and "database"), users could discover popular or recent links and subscribe to a feed of those links. Find and follow experts' recommended sites. Each profile page included a feed of the user's contributions. Users could be discovered through a drop-down menu on each bookmark. Demonstrate their expertise through the links displayed in their profile. Store their favorite links online. The initial release of the application provided standard features for the genre, including a bookmarklet and import capabilities. The MSDN web site was also starting to incorporate feeds of social bookmarks from experts and the community, displayed alongside feeds from relevant bloggers. The social bookmarking feature was discontinued on October 1, 2009. === Gallery === MSDN Gallery was a repository of community-authored code samples and projects. Launched in 2008, the site evolved to complement CodePlex, the open-source project hosting site from Microsoft. MSDN Gallery was retired in 2020 and all MSDN pages now redirect to the new code samples experience on Microsoft Learn. == Software subscriptions == MSDN had historically offered a subscription package whereby developers had access and licenses to use nearly all Microsoft software that had ever been released to the public.
Subscriptions were sold on an annual basis, and cost anywhere from US$1,000 to US$6,000 per year per subscription, as it was offered in several tiers. Although in most cases the software itself functioned exactly like the full product, the MSDN end-user license agreement prohibited use of the software in a business production environment. This was a legal restriction, not a technical one. An exception was made for Microsoft Office, allowing personal use even for business purposes without a separate license—but only with the "MSDN Premium Subscription", and even then only "directly related to the design, development and test and/or documentation of software projects." == MSDN Magazine == Microsoft provided editorial content for MSDN Magazine, a monthly publication. The magazine was created as a merger between Microsoft Systems Journal (MSJ) and Microsoft Internet Developer (MIND) magazines in March 2000. MSJ back issues were available online. MSDN Magazine was available as a print magazine in the United States, and online in 11 languages. The last issue of the magazine was released in November 2019. === Microsoft Systems Journal === Microsoft Systems Journal was a bi-monthly Microsoft magazine founded in 1986. == History == MSDN was launched in September 1992 as a quarterly, CD-ROM-based compilation of technical articles, sample code, and software development kits. The first two MSDN CD releases (September 1992 and January 1993) were marked as pre-release discs (P1 and P2, respectively). Disc 3, released in April 1993, was the first full release. In addition to CDs, there was a 16-page tabloid newspaper, Microsoft Developer Network News, edited by Andrew Himes, who had previously been the founding editor of MacTech, the premier Macintosh technology journal. A Level II subscription was added in 1993, which included the MAPI, ODBC, TAPI and VFW SDKs.
MSDN2 was opened in November 2004 as a source for Visual Studio 2005 API information; its most noteworthy difference was updated web site code that conformed better to web standards, finally giving the API browser long-awaited improved support for web browsers other than Internet Explorer. In 2008, the original MSDN cluster was retired and MSDN2 became msdn.microsoft.com. === Dr GUI and the MSDN Writers Team === In 1996, Bob Gunderson began writing a column in Microsoft Developer Network News, edited by Andrew Himes, using the pseudonym "Dr. GUI". The column provided answers to questions submitted by MSDN subscribers. The caricature of Dr. GUI was based on a photo of Gunderson. When he left the MSDN team, Dennis Crain took over the Dr. GUI role and added medical humor to the column. Upon his departure, Dr. GUI became the composite identity of the original group (most notably Paul Johns) of Developer Technology Engineers that provided in-depth technical articles to the Library. The early members included: Bob Gunderson, Dale Rogerson, Rüdiger R. Asche, Ken Lassesen, Nigel Thompson (a.k.a. Herman Rodent), Nancy Cluts, Paul Johns, Dennis Crain, and Ken Bergmann. Nigel Thompson was the development manager for Windows Multimedia Extensions, which originally added multimedia capabilities to Windows. Renan Jeffreis produced the original system (Panda) to publish MSDN on the Internet and in HTML instead of the earlier multimedia viewer engine. Dale Rogerson, Nigel Thompson and Nancy Cluts all published MS Press books while on the MSDN team. As of August 2010, only Dennis Crain and Dale Rogerson remain employed by Microsoft. == See also == DreamSpark Microsoft TechNet Oracle Developers The Code Room MDN Web Docs == References == == External links == Official website (Archive) Archived MSDN and TechNet Blogs
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. 
Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8). == History == The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. 
Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality.
Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene recursion equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementations have been released since 1990.
More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings. == Concepts == A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts. === First-class and higher-order functions === Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d / d x {\displaystyle d/dx} , which returns the derivative of a function f {\displaystyle f} . Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. 
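In a language with first-class functions, currying can be written out directly. A minimal JavaScript sketch (the function names here are illustrative, not from the text):

```javascript
// A curried addition: applying one argument returns a new
// function that awaits the next argument.
const add = (a) => (b) => a + b;

// Applying both arguments, one at a time.
console.log(add(2)(3)); // → 5

// Partial application: fixing the first argument yields a
// reusable single-argument function.
const increment = add(1);
console.log(increment(41)); // → 42
```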
This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. === Pure functions === Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation). While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics. === Recursion === Iteration (looping) in functional languages is usually accomplished via recursion. 
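The caching optimization mentioned above can be made concrete: because a pure function always returns the same result for the same arguments, even a recursive definition can be memoized safely. A JavaScript sketch (`memoize` and `fib` are illustrative names, not a standard library facility):

```javascript
// memoize: wrap a pure single-argument function with a cache.
// This is only sound because the wrapped function has no side
// effects and depends only on its argument.
const memoize = (f) => {
  const cache = new Map();
  return (n) => {
    if (!cache.has(n)) cache.set(n, f(n));
    return cache.get(n);
  };
};

// A pure, recursive Fibonacci; memoizing it collapses the
// exponential call tree into a linear number of evaluations.
const fib = memoize((n) => (n < 2 ? n : fib(n - 1) + fib(n - 2)));

console.log(fib(40)); // → 102334155
```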
Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compilation, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so is safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken Scheme intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will reclaim space, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages.
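The contrast between plain and tail recursion can be sketched in JavaScript (illustrative only; note that most JavaScript engines do not actually perform tail-call optimization, so this only shows the shape of the transformation):

```javascript
// Plain recursion: the multiplication happens AFTER the recursive
// call returns, so every call must keep its stack frame alive.
const factorial = (n) => (n <= 1 ? 1 : n * factorial(n - 1));

// Tail recursion: the recursive call is the last action, carrying
// the running product in an accumulator. A compiler with proper
// tail calls can compile this into a loop using constant space.
const factorialTail = (n, acc = 1) =>
  n <= 1 ? acc : factorialTail(n - 1, n * acc);

console.log(factorial(10));     // → 3628800
console.log(factorialTail(10)); // → 3628800
```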
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. === Strict versus non-strict evaluation === Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression: print length([2+1, 3*2, 1/0, 5-4]) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. 
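In a strict language such as JavaScript, the non-strict behavior described above can be simulated with a standard encoding: wrapping each expression in a thunk (a zero-argument function) and only forcing thunks when their values are needed. A sketch of the article's list example (JavaScript's own 1/0 evaluates to Infinity rather than failing, so an explicit throw models the failing computation):

```javascript
// Each element is a thunk: a delayed, unevaluated expression.
const list = [
  () => 2 + 1,
  () => 3 * 2,
  () => { throw new Error("division by zero"); }, // models 1/0
  () => 5 - 4,
];

// Taking the length never forces the thunks, so it succeeds,
// mirroring lazy evaluation of length([2+1, 3*2, 1/0, 5-4]).
console.log(list.length); // → 4

// Forcing every element, as strict evaluation would, fails on
// the third thunk.
const force = (xs) => xs.map((t) => t());
```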
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them. === Type systems === Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (rejecting some valid programs). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all valid programs at compilation time, at the risk of false negative errors (accepting invalid programs, which are then rejected at runtime, when there is enough information to do so). The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with.
But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified. A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#. === Referential transparency === Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent. Consider the C assignment statement x = x * 10: this changes the value assigned to the variable x. Say the initial value of x was 1; then two consecutive evaluations of the variable x yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. By contrast, a function such as int plusone(int x) { return x + 1; } is referentially transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent.
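The same contrast can be written out in JavaScript (a sketch; `plusone` mirrors the C function above):

```javascript
// Not referentially transparent: the statement's effect depends on
// and changes external state, so "x" cannot be replaced by a value.
let x = 1;
x = x * 10; // x is now 10
x = x * 10; // the "same" statement now yields 100

// Referentially transparent: plusone(5) can be replaced by 6
// anywhere in the program without changing its meaning.
const plusone = (n) => n + 1;
console.log(plusone(5)); // → 6
console.log(plusone(5)); // → 6, always, for the same argument
```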
=== Data structures === Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created. == Comparison to imperative programming == Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. === Imperative vs. functional programming === The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result. 
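The two JavaScript versions were not preserved in this text; a minimal reconstruction of the comparison just described (even numbers multiplied by 10 and summed):

```javascript
const numbers = [1, 2, 3, 4, 5, 6];

// Imperative: a loop mutates an accumulator variable step by step.
let result = 0;
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) {
    result += numbers[i] * 10;
  }
}

// Functional: filter, map and reduce compose pure functions,
// never mutating the input array or any intermediate value.
const result2 = numbers
  .filter((n) => n % 2 === 0)
  .map((n) => n * 10)
  .reduce((acc, n) => acc + n, 0);

console.log(result, result2); // → 120 120
```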
Traditional imperative loop: Functional programming with higher-order functions: Sometimes the abstractions offered by functional programming might lead to the development of more robust code that avoids certain issues that can arise when building upon a large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule). === Simulating state === There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries). Functional languages also simulate state by passing around immutable state values. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations. Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs.
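The state-passing technique described above can be sketched in JavaScript using the bank-account example (the helper names `deposit` and `withdraw` are hypothetical, for illustration):

```javascript
// The account state is never mutated; each operation takes the
// current state and returns a [result, newState] pair.
const deposit = (state, amount) =>
  [amount, { balance: state.balance + amount }];

const withdraw = (state, amount) =>
  amount > state.balance
    ? [null, state] // insufficient funds: state returned unchanged
    : [amount, { balance: state.balance - amount }];

const s0 = { balance: 100 };
const [, s1] = deposit(s0, 50);
const [, s2] = withdraw(s1, 30);

console.log(s2.balance); // → 120
console.log(s0.balance); // → 100 (the old state is left unchanged)
```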
Some modern research languages use effect systems to make the presence of side effects explicit. === Efficiency issues === Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations. Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Even though the copying implicit in updating persistent immutable data structures may seem computationally costly, some functional programming languages, like Clojure, solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability, which involves immutable references and a concept called lifetimes.
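The memory-sharing idea can be illustrated even in plain JavaScript: an immutable-style update builds a new object for the changed path while untouched substructures are shared by reference rather than copied (a sketch, not how Clojure's persistent vectors are actually implemented):

```javascript
const user = Object.freeze({
  name: "Ada",
  address: Object.freeze({ city: "London", zip: "N1" }),
});

// "Update" the name by constructing a new object; the nested
// address object is reused (shared), not copied.
const renamed = Object.freeze({ ...user, name: "Grace" });

console.log(renamed.name);                     // → Grace
console.log(renamed.address === user.address); // → true (shared)
console.log(user.name);                        // → Ada (old version persists)
```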
Immutable data with separation of identity and state, together with shared-nothing schemes, can also be better suited for concurrent and parallel programming, by virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations on immutable data are usually atomic, which allows eliminating the need for locks. This is how, for example, the java.util.concurrent classes are implemented, some of which are immutable variants of corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message-passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir and Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles). ==== Abstraction cost ==== Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. 
Consider, as an example, two ways to check whether 5 is an even number in Clojure: calling the built-in even? predicate, or invoking the underlying Java .equals method directly. When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?. The cost of such abstractions varies between languages: the lo library for Go, for instance, implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining. One distinguishing feature of Rust is zero-cost abstractions, meaning that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime. === Functional programming in non-functional languages === It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first-class functions from their inception. 
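As a small illustration of this style in a mainstream language, the following Python sketch combines first-class functions, a closure, and the built-in map and filter (the helper make_adder is an invented example, not a library function):

```python
def make_adder(n):
    # A closure: `add` captures `n` from the enclosing scope.
    def add(x):
        return x + n
    return add

add3 = make_adder(3)          # functions are values: bound to a name
nums = [1, 2, 3, 4, 5]

# Higher-order built-ins instead of explicit loops:
shifted = list(map(add3, nums))                      # [4, 5, 6, 7, 8]
evens = list(filter(lambda x: x % 2 == 0, shifted))  # [4, 6, 8]
```

The same result could be produced with an imperative for loop; the functional version expresses the transformation as a composition of small reusable pieces.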
Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin. In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements for closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript. == Comparison to logic programming == Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. 
For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. A logic program defining the mother relation can be queried, like a functional program, to generate mothers from children; but it can also be queried backwards to generate children, and it can even be used to generate all instances of the mother relation. Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in nested form, whereas the same definition in relational notation needs to be written in unnested form, where :- means if and the comma means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming; Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. == Applications == === Text editors === Emacs, a highly extensible text editor family, uses its own Lisp dialect for writing plugins. The original author of the most popular Emacs implementation, GNU Emacs and Emacs Lisp, Richard Stallman, considers Lisp one of his favorite programming languages. Helix, since version 24.03, supports previewing the AST as S-expressions, which are also the core feature of the Lisp programming language family. === Spreadsheets === Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. 
Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature. === Microservices === Due to their composability, functional programming paradigms can be suitable for microservices-based architectures. === Academia === Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming. === Industry === Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in Data science, while ClojureScript, Elm or PureScript are some of the functional frontend programming languages used in production. 
Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome and Allegro Lokalnie, the classified-ads platform of Allegro (one of the biggest e-commerce platforms in Poland). Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations, but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory. === Education === Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years. == See also == Eager evaluation Functional reactive programming Inductive functional programming List of functional programming languages List of functional programming topics Nested function Purely functional programming == Notes and references == == Further reading == Abelson, Hal; Sussman, Gerald Jay (1985). Structure and Interpretation of Computer Programs. MIT Press. Bibcode:1985sicp.book.....A. Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998. Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958. Curry, Haskell B.; Hindley, J. 
Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 978-0-7204-2208-5. Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005. Felleisen, Matthias; Findler, Robert; Flatt, Matthew; Krishnamurthi, Shriram (2018). How to Design Programs. MIT Press. Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996. MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990. Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5. O'Sullivan, Brian; Stewart, Don; Goerzen, John (2008). Real World Haskell. O'Reilly. Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996. Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998. Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996. == External links == Ford, Neal. "Functional thinking". Retrieved 2021-11-10. Akhmechet, Slava (2006-06-19). "defmacro – Functional Programming For The Rest of Us". Retrieved 2013-02-24. An introduction Functional programming in Python (by David Mertz): part 1, part 2, part 3
Wikipedia/Functional_language
How to Design Programs (HtDP) is a textbook by Matthias Felleisen, Robert Bruce Findler, Matthew Flatt, and Shriram Krishnamurthi on the systematic design of computer programs. MIT Press published the first edition in 2001, and the second edition in 2018, which is freely available online and in print. The book introduces the concept of a design recipe, a six-step process for creating programs from a problem statement. While the book was originally used along with the education project TeachScheme! (renamed ProgramByDesign), it has been adopted at many colleges and universities for teaching program design principles. According to HtDP, the design process starts with a careful analysis of a problem statement with the goal of extracting a rigorous description of the kinds of data that the desired program consumes and produces. The structure of these data descriptions determines the organization of the program. The book then carefully introduces data forms of progressively growing complexity. It starts with data of atomic forms and then progresses to compound forms, including data that can be arbitrarily large. For each kind of data definition, the book explains how to organize the program in principle, thus enabling a programmer who encounters a new form of data to still construct a program systematically. Like Structure and Interpretation of Computer Programs (SICP), HtDP relies on a variant of the programming language Scheme. It comes with its own integrated development environment (IDE), named DrRacket, which provides a series of teaching languages. The first language supports only functions, atomic data, and simple structures. Each language adds expressive power to the prior one. Except for the largest teaching language, all languages for HtDP are functional programming languages. 
== Pedagogical basis == In the 2004 paper, The Structure and Interpretation of the Computer Science Curriculum, the same authors compared and contrasted the pedagogical focus of How to Design Programs (HtDP) with that of Structure and Interpretation of Computer Programs (SICP). In the 14-page paper, the authors distinguish the pedagogic focus of HtDP from that of SICP, and show how HtDP was designed as a textbook to address some problems that some students and teachers had with SICP. The paper introduces the pedagogical landscape surrounding the publication of SICP. The paper starts with a history and critique of SICP, followed by a description of the goal of the computing curriculum. It then describes the principles of teaching behind HtDP; in particular, the difference between implicit vs. explicit teaching of design principles. It then continues on to describe the role of Scheme and the importance of an ideal programming environment, and concludes with an extensive evaluation of content and student/faculty reaction to experience with SICP vs. HtDP. One of the major focuses of the paper is the emphasis on the difference in required domain knowledge between SICP and HtDP. A chart in the paper compares major exercises in SICP and HtDP, and the related text describes how the exercises in the former require considerably more sophisticated domain knowledge than those of HtDP. The paper continues on to explain why this difference in required domain knowledge has resulted in certain students having confused domain knowledge with program design knowledge. The paper claims the following four major efforts that the authors of HtDP have made to address perceived issues with SICP: HtDP addresses explicitly, rather than implicitly, how programs should be constructed. To make programming easier, the book guides students through five different knowledge levels corresponding to data definition levels of complexity. 
The book's exercises focus on program design guidelines, rather than domain knowledge. The book assumes less domain knowledge than that of SICP. The paper then distinguishes between structural recursion, where the related data definition happens to be self-referential, usually requiring a straightforward design process, and generative recursion, where new problem data is generated in the middle of the problem-solving process and the problem-solving method is re-used, often requiring ad hoc mathematical insight, and stresses how this distinction makes their approach scalable to the object-oriented (OO) world. Finally, the paper concludes with a description of responses from various faculty and students after having used HtDP in the classroom. == References == == External links == Official website, 2018 2nd edition, 2003 1st edition
Wikipedia/How_to_Design_Programs
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. 
Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8). == History == The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. 
Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented the SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. 
Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene recursion equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990. 
More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings. == Concepts == A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts. === First-class and higher-order functions === Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d / d x {\displaystyle d/dx} , which returns the derivative of a function f {\displaystyle f} . Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. 
This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. === Pure functions === Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation). While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics. === Recursion === Iteration (looping) in functional languages is usually accomplished via recursion. 
Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space proportional to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop, and that doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow; when this happens, its garbage collector reclaims the space, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages. 
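The fold ("catamorphism") mentioned above can be sketched in Python; this hand-written left fold is equivalent to the standard functools.reduce and abstracts the common pattern of structural recursion over a list:

```python
def fold(f, acc, xs):
    """Left fold: combine an accumulator with each element in turn.
    Written as a loop, which is exactly the code that tail-recursion
    optimization would produce from the recursive definition."""
    for x in xs:
        acc = f(acc, x)
    return acc

# Many "loops" are instances of fold with different combining functions:
total = fold(lambda a, b: a + b, 0, [1, 2, 3, 4])   # summation -> 10
rev = fold(lambda a, b: [b] + a, [], [1, 2, 3])     # reversal  -> [3, 2, 1]
```

Once the recursion scheme is captured in one place, individual uses only supply the combining function and the initial accumulator.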
Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. === Strict versus non-strict evaluation === Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression: print length([2+1, 3*2, 1/0, 5-4]) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. 
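The division-by-zero example above can be mimicked in Python, which is strict but allows laziness to be modeled explicitly with thunks (zero-argument functions); building the list does not evaluate its elements, so taking its length never triggers the failing computation:

```python
# Each element is wrapped in a thunk, so constructing the list
# performs no arithmetic at all.
items = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]

# Length inspects only the list's structure, never its elements,
# so the division by zero is never executed:
n = len(items)          # 4

# Forcing an element evaluates it on demand:
first = items[0]()      # 3
# items[2]() would raise ZeroDivisionError, but only when forced.
```

Under strict evaluation the elements would be computed at list-construction time and the program would fail before length was ever called.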
The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them. === Type systems === Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, rejecting all invalid programs at compilation time and risking false positive errors, as opposed to the untyped lambda calculus, that accepts all valid programs at compilation time and risks false negative errors, used in Lisp and its variants (such as Scheme), as they reject all invalid programs at runtime when the information is enough to not reject valid programs. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. 
But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified. A limited form of dependent types called generalized algebraic data types (GADT's) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADT's are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#. === Referential transparency === Functional programs do not have assignment statements, that is, the value of a variable in a functional program never changes once defined. This eliminates any chances of side effects because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent. Consider C assignment statement x=x*10, this changes the value assigned to the variable x. Let us say that the initial value of x was 1, then two consecutive evaluations of the variable x yields 10 and 100 respectively. Clearly, replacing x=x*10 with either 10 or 100 gives a program a different meaning, and so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now, consider another function such as int plusone(int x) {return x+1;} is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. 
=== Data structures === Purely functional data structures are often represented in a different way to their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data-structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created. == Comparison to imperative programming == Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side-effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. === Imperative vs. functional programming === The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result. 
Traditional imperative loop: Functional programming with higher-order functions: Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule). === Simulating state === There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries). Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations. Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. 
Some modern research languages use effect systems to make the presence of side effects explicit. === Efficiency issues === Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations. Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Even if the involved copying that may seem implicit when dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability which involves immutable references and a concept called lifetimes. 
Immutable data with separation of identity and state and shared-nothing schemes can also potentially be more well-suited for concurrent and parallel programming by the virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how for example java.util.concurrent classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir or Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles) . ==== Abstraction cost ==== Some functional programming languages might not optimize abstractions such as higher order functions like "map" or "filter" as efficiently as the underlying imperative operations. 
Consider, as an example, the following two ways to check if 5 is an even number in Clojure: When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as: has the mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?. For instance the lo library for Go, which implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining. One distinguishing feature of Rust are zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone Assembly instruction, without the overhead of the loop controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime. === Functional programming in non-functional languages === It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first class functions from their inception. 
Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin. In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript. == Comparison to logic programming == Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. 
For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program: The program can be queried, like a functional program, to generate mothers from children: But it can also be queried backwards, to generate children: It can even be used to generate all instances of the mother relation: Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form: The same definition in relational notation needs to be written in the unnested form: Here :- means if and , means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming: Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. == Applications == === Text editors === Emacs, a highly extensible text editor family uses its own Lisp dialect for writing plugins. The original author of the most popular Emacs implementation, GNU Emacs and Emacs Lisp, Richard Stallman considers Lisp one of his favorite programming languages. Helix, since version 24.03 supports previewing AST as S-expressions, which are also the core feature of the Lisp programming language family. === Spreadsheets === Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. 
Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature. === Microservices === Due to their composability, functional programming paradigms can be suitable for microservices-based architectures. === Academia === Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming. === Industry === Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in Data science, while ClojureScript, Elm or PureScript are some of the functional frontend programming languages used in production. 
Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome or Allegro (one of the biggest e-commerce platforms in Poland)'s classified ads platform Allegro Lokalnie. Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory. === Education === Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years. == See also == Eager evaluation Functional reactive programming Inductive functional programming List of functional programming languages List of functional programming topics Nested function Purely functional programming == Notes and references == == Further reading == Abelson, Hal; Sussman, Gerald Jay (1985). Structure and Interpretation of Computer Programs. MIT Press. Bibcode:1985sicp.book.....A. Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998. Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958. Curry, Haskell B.; Hindley, J. 
Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 978-0-7204-2208-5. Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005. Felleisen, Matthias; Findler, Robert; Flatt, Matthew; Krishnamurthi, Shriram (2018). How to Design Programs. MIT Press. Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996. MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990. Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5. O'Sullivan, Brian; Stewart, Don; Goerzen, John (2008). Real World Haskell. O'Reilly. Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996. Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998. Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996. == External links == Ford, Neal. "Functional thinking". Retrieved 2021-11-10. Akhmechet, Slava (2006-06-19). "defmacro – Functional Programming For The Rest of Us". Retrieved 2013-02-24. An introduction Functional programming in Python (by David Mertz): part 1, part 2, part 3
Wikipedia/Functional_languages
A wrapper function is a function (another word for a subroutine) in a software library or a computer program whose main purpose is to call a second subroutine or a system call with little or no additional computation. Wrapper functions simplify writing computer programs by abstracting the details of a subroutine's implementation. == Purpose == Wrapper functions are a means of delegation and can be used for a number of purposes. === Programming convenience === Wrapper functions simplify writing computer programs. For example, the MouseAdapter and similar classes in the Java AWT library demonstrate this. They are useful in the development of applications that use third-party library functions. A wrapper can be written for each of the third party functions and used in the native application. In case the third party functions change or are updated, only the wrappers in the native application need to be modified as opposed to changing all instances of third party functions in the native application. === Adapting class/object interfaces === Wrapper functions can be used to adapt an existing class or object to have a different interface. This is especially useful when using existing library code. === Code testing === Wrapper functions can be used to write error checking routines for pre-existing system functions without increasing the length of a code by a large amount by repeating the same error check for each call to the function. All calls to the original function can be replaced with calls to the wrapper, allowing the programmer to forget about error checking once the wrapper is written. A test driver is a kind of wrapper function that exercises a code module, typically calling it repeatedly, with different settings or parameters, in order to rigorously pursue each possible path. It is not deliverable code, but it is not throwaway code either, being typically retained for use in regression testing. 
An interface adaptor is a kind of wrapper function that simplifies, tailors, or amplifies the interface to a code module, with the intent of making it more intelligible or relevant to the user. It may rename parameters, combine parameters, set defaults for parameters, and the like. === Multiple inheritance === In a programming language that does not support multiple inheritance of base classes, wrapper functions can be used to simulate it. Below is an example of part of a Java class that "inherits" from LinkedList and HashSet. See method for further implementation details. == Library functions and system calls == Many library functions, such as those in the C Standard Library, act as interfaces for abstraction of system calls. The fork and execve functions in glibc are examples of this. They call the lower-level fork and execve system calls, respectively. This may lead to incorrectly using the terms "system call" and "syscall" to refer to higher-level library calls rather than the similarly named system calls, which they wrap. == Helper function == A helper function is a function which groups parts of computation by assigning descriptive names and allowing for the reuse of the computations. Although not all wrappers are helper functions, all helper functions are wrappers, and a notable use of helper functions—grouping frequently utilized operations—is in dynamic binary translation, in which helper functions of a particular architecture are used in translation of instructions from one instruction set into another. == See also == Wrapper library Driver wrapper Adapter pattern Decorator pattern Delegation (programming) Forwarding (object-oriented programming) Language binding wrapper to another language SWIG automatic wrapper generator Nested function Partial application == References ==
Wikipedia/Wrapper_function
In computer science, a stream is a sequence of potentially unlimited data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches. Streams are processed differently from batch data. Normal functions cannot operate on streams as a whole because they have potentially unlimited data. Formally, streams are codata (potentially unlimited), not data (which is finite). Functions that operate on a stream producing another stream are known as filters and can be connected in pipelines in a manner analogous to function composition. Filters may operate on one item of a stream at a time or may base an item of output on multiple items of input such as a moving average. == Examples == The term "stream" is used in a number of similar ways: "Stream editing", as with sed, awk, and perl. Stream editing processes a file or files, in-place, without having to load the file(s) into a user interface. One example of such use is to do a search and replace on all the files in a directory, from the command line. On Unix and related systems based on the C language, a stream is a source or sink of data, usually individual bytes or characters. Streams are an abstraction used when reading or writing files, or communicating over network sockets. The standard streams are three streams made available to all programs. I/O devices can be interpreted as streams, as they produce or consume potentially unlimited data over time. In object-oriented programming, input streams are generally implemented as iterators. In the Scheme language and some others, a stream is a lazily evaluated or delayed sequence of data elements. A stream can be used similarly to a list, but later elements are only calculated when needed. Streams can therefore represent infinite sequences and series. In the Smalltalk standard library and in other programming languages as well, a stream is an external iterator. 
As in Scheme, streams can represent finite or infinite sequences. Stream processing — in parallel processing, especially in graphic processing, the term stream is applied to hardware as well as software. There it defines the quasi-continuous flow of data that is processed in a dataflow programming language as soon as the program state meets the starting condition of the stream. == Applications == Streams can be used as the underlying data type for channels in interprocess communication. == Other uses == The term "stream" is also applied to file system forks, where multiple sets of data are associated with a single filename. Most often, there is one main stream that makes up the normal file data, while additional streams contain metadata. Here "stream" is used to indicate "variable size data", as opposed to fixed size metadata such as extended attributes, but differs from "stream" as used otherwise, meaning "data available over time, potentially infinite". == See also == Bitstream Codata Data stream Data stream mining Traffic flow (computer networking) Network socket Streaming algorithm Streaming media Stream processing == References == == External links == An Approximate L1-Difference Algorithm for Massive Data Streams, 1995 Feigenbaum et al.
Wikipedia/Stream_(computer_science)
A number of countries have attempted to restrict the import of cryptography tools. == Rationale == Countries may wish to restrict import of cryptography technologies for a number of reasons: Imported cryptography may have backdoors or security holes (e.g. the FREAK vulnerability), intentional or not, which allows the country or group who created the backdoor technology, for example the National Security Agency (NSA), to spy on persons using the imported cryptography; therefore the use of cryptography is restricted to that which the government thinks is safe, or which it develops itself. Citizens can anonymously communicate with each other, preventing any external party from monitoring them. Encrypted transactions may impede external entities to control the conducting of business. Cryptography may sometimes increase levels of privacy within the country beyond what the government wishes. == Status by country == The Electronic Privacy Information Center and Global Internet Liberty Campaign reports use a color code to indicate the level of restriction, with the following meanings: Green: No restriction Yellow: License required for importation Red: Total ban == See also == Export of cryptography == External links == Cryptography and Liberty 1998, GILC Report Crypto-Law survey 2013
Wikipedia/Restrictions_on_the_import_of_cryptography
There are many variants of the counter machine, among them those of Hermes, Ershov, Péter, Minsky, Lambek, Shepherdson and Sturgis, and Schönhage. These are explained below. == The models in more detail == === 1954: Hermes' model === Shepherdson & Sturgis (1963) observe that "the proof of this universality [of digital computers to Turing machines] ... seems to have been first written down by Hermes, who showed in [7--their reference number] how an idealized computer could be programmed to duplicate the behavior of any Turing machine", and: "Kaphengst's approach is interesting in that it gives a direct proof of the universality of present-day digital computers, at least when idealized to the extent of admitting an infinity of storage registers each capable of storing arbitrarily long words". The only two arithmetic instructions are Successor operation Testing two numbers for equality The rest of the operations are transfers from register-to-accumulator or accumulator-to-register or test-jumps. Kaphengst's paper is written in German; Sheperdson and Sturgis's translation uses terms such as "mill" and "orders". The machine contains "a mill" (accumulator). Kaphengst designates his mill/accumulator with the "infinity" symbol but we will use "A" in the following description. It also contains an "order register" ("order" as in "instruction", not as in "sequence"). (This usage came from the Burks–Goldstine–von Neumann (1946) report's description of "...an Electronic Computing Instrument".) The order/instruction register is register "0". And, although not clear from Sheperdson and Sturgis's exposition, the model contains an "extension register" designated by Kaphengst "infinity-prime"; we will use "E". The instructions are stored in the registers: "...so the machine, like an actual computer, is capable of doing arithmetic operations on its own program" (p. 244). Thus this model is actually a random-access machine. 
In the following, "[ r ]" indicates "contents of" register r, etc. Shepherdson & Sturgis (1963) remove the mill/accumulator A and reduce the Kaphengst instructions to register-to-register "copy", arithmetic operation "increment", and "register-to-register compare". Observe that there is no decrement. This model, almost verbatim, is to be found in Minsky (1967); see more in the section below. === 1958: Ershov's class of operator algorithms === Shepherdson & Sturgis (1963) observe that Ershov's model allows for storage of the program in the registers. They assert that Ershov's model is as follows: === 1958: Péter's "treatment" === Shepherdson & Sturgis (1963) observe that Péter's "treatment" (they are not too specific here) has an equivalence to the instructions shown in the following table. They comment specifically about these instructions, that: "from the point of view of proving as quickly as possible the computability of all partial recursive functions Péter's is perhaps the best; for proving their computability by Turing machines a further analysis of the copying operation is necessary along the lines we have taken above." === 1961: Minsky's model of a partial recursive function reduced to a "program" of only two instructions === His inquiry into problems of Emil Post (the tag system) and Hilbert's 10th problem (Hilbert's problems, Diophantine equation) led Minsky to the following definition of: "an interesting basis for recursive function theory involving programs of only the simplest arithmetic operations" (Minsky (1961) p. 437). His "Theorem Ia" asserts that any partial recursive function is represented by "a program operating on two integers S1 and S2 using instructions Ij of the forms (cf. Minsky (1961) p. 
449): The first theorem provides the context for a second "Theorem IIa" that "...represents any partial recursive function by a program operating on one integer S [contained in a single register r1] using instructions Ij of the forms": In this second form the machine uses Gödel numbers to process "the integer S". He asserts that the first machine/model does not need to do this if it has 4 registers available to it. === 1961: Melzak model: a single ternary instruction with addition and proper subtraction === "It is our object to describe a primitive device, to be called a Q-machine, which arrives at effective computability via arithmetic rather than via logic. Its three operations are keeping tally, comparing non-negative integers, and transferring" (Melzak (1961) p. 281) If we use the context of his model, "keeping tally" means "adding by successive increments" (throwing pebbles into) or "subtracting by successive decrements"; transferring means moving (not copying) the contents from hole A to hole B, and comparing numbers is self-evident. This appears to be a blend of the three base models. Melzak's physical model is holes { X, Y, Z, etc. } in the ground together with an unlimited supply of pebbles in a special hole S (Sink or Supply or both? Melzak doesn't say). "The Q-machine consists of an indefinitely large number of locations: S, A1, A2, ..., an indefinitely large supply of counters distributed among these locations, a program, and an operator whose sole purpose is to carry out the instructions. Initially all but a finite number from among the locations ... are empty and each of the remaining ones contains a finite number of counters" (p. 
283, boldface added) The instruction is a single "ternary operation" he calls "XYZ": "XYZ" denotes the operation of Of all the possible operations, some are not allowed, as shown in the table below: Some observations about the Melzak model: === 1961: Lambek "abacus" model: atomizing Melzak's model to X+, X- with test === Original "abacus" model of Lambek (1961): Lambek references Melzak's paper. He atomizes Melzak's single 3-parameter operation (really 4 if we count the instruction addresses) into a 2-parameter increment "X+" and 3-parameter decrement "X-". He also provides both an informal and formal definition of "a program". This form is virtually identical to the Minsky (1961) model, and has been adopted by Boolos, Burgess & Jeffrey (2007, p. 45, Abacus Computability). Abacus model of Boolos, Burgess & Jeffrey: In the various editions beginning with 1970, the authors use the Lambek (1961) model of an "infinite abacus". This series of Wikipedia articles is using their symbolism, e.g. " [ r ] +1 → r" "the contents of register identified as number 'r', plus 1, replaces the contents of [is put into] register number 'r' ". They use Lambek's name "abacus" but follow Melzak's pebble-in-holes model, modified by them to a 'stones-in-boxes' model. Like the original abacus model of Lambek, their model retains the Minsky (1961) use of non-sequential instructions – unlike the "conventional" computer-like default sequential instruction execution, the next instruction Ia is contained within the instruction. Observe, however, that B-B and B-B-J do not use a variable "X" in the mnemonics with a specifying parameter (as shown in the Lambek version), i.e. "X+" and "X-", but rather the instruction mnemonics specify the registers themselves, e.g. 
"2+", or "3-": === 1963: Shepherdson and Sturgis's model === Shepherdson & Sturgis (1963) reference Minsky (1961) as it appeared for them in the form of an MIT Lincoln Laboratory report: In Section 10 we show that theorems (including Minsky's results [21, their reference]) on the computation of partial recursive functions by one or two tapes can be obtained rather easily from one of our intermediate forms. Their model is strongly influenced by the model and the spirit of Hao Wang (1957) and his Wang B-machine (also see Post–Turing machine). They "sum up by saying": ...we have tried to carry a step further the 'rapprochement' between the practical and theoretical aspects of computation suggested and started by Wang. Unlimited Register Machine URM: This, their "most flexible machine... consists of a denumerable sequence of registers numbered 1, 2, 3, ..., each of which can store any natural number...Each particular program, however involves only a finite number of these registers" (p. 219). In other words, the number of registers is potentially infinite, and each register's "size" is infinite. They offer the following instruction set, and the following "Notes": Notes. This set of instructions is chosen for ease of programming the computation of partial recursive functions rather than economy; it is shown in Section 4 that this set is equivalent to a smaller set. There are infinitely many instructions in this list since m, n [ contents of rj, etc.] range over all positive integers. In instructions a, b, c, d the contents of all registers except n are supposed to be left unchanged; in instructions e, f, the contents of all registers are unchanged (p. 219). Indeed, they show how to reduce this set further, to the following (for an infinite number of registers each of infinite size): Limited Register Machine LRM: Here they restrict the machine to a finite number of registers N, but they also allow more registers to "be brought in" or removed if empty (cf. p. 228). 
They show that the remove-register instruction need not require an empty register. Single-Register Machine SRM: Here they are implementing the tag system of Emil Post and thereby allow only writing to the end of the string and erasing from the beginning. This is shown in their Figure 1 as a tape with a read head on the left and a write head on the right, and it can only move the tape right. "A" is their "word" (p. 229): a. P(i) ;add ai to the end of A b. D ;delete the first letter of A f'. Ji[E1] ;If A begins with ai jump to exit 1. They also provide a model as "a stack of cards" with the symbols { 0, 1 } (p. 232 and Appendix C p. 248): add card at top printed 1 add card at top printed 0 remove bottom card; if printed 1 jump to instruction m, else next instruction. === 1967: Minsky's "Simple Universal Base for a Program Computer" === Ultimately, in Problem 11.7-1 Minsky observes that many bases of computation can be formed from a tiny collection: "Many other combinations of operation types [ 0 ], [ ' ], [ - ], [ O- ], [ → ] and [ RPT ] form universal basis. Find some of these basis. Which combinations of three operations are not universal basis? Invent some other operations..." (p. 214) The following are definitions of the various instructions he treats: Minsky (1967) begins with a model that consists of the three operations plus HALT: { [ 0 ], [ ' ], [ - ], [ H ] } He observes that we can dispense with [ 0 ] if we allow for a specific register e.g. w already "empty" (Minsky (1967) p. 206). Later (pages 255ff) he compresses the three { [ 0 ], [ ' ], [ - ] }, into two { [ ' ], [ - ] }. But he admits the model is easier if he adds some [pseudo]-instructions [ O- ] (combined [ 0 ] and [ - ]) and "go(n)". He builds "go(n)" out of the register w pre-set to 0, so that [O-] (w, (n)) is an unconditional jump. In his section 11.5 "The equivalence of Program Machines with General-recursive functions" he introduces two new subroutines: f. [ → ] j. 
[ ≠ ] "Jump unless equal": IF [ rj ] ≠ [ rk ] THEN jump to zth instruction ELSE next instruction. He proceeds to show how to replace the "successor-predecessor" set { [ 0 ], [ ' ], [ - ] } with the "successor-equality" set { [ 0 ], [ ' ], [ ≠ ] }. And then he defines his "REPEAT" [RPT] and shows that we can define any primitive recursive function by the "successor-repeat" set { [ 0 ], [ ' ], [RPT] } (where the range of the [ RPT ] cannot include itself; if it does, we get what is called the mu operator (see also mu recursive functions) (p. 213)): Any general recursive function can be computed by a program computer using only operations [ 0 ], [ ' ], [ RPT ] if we permit a RPT operation to lie in its own range ... [however] in general a RPT operation could not be an instruction in the finite-state part of the machine...[if it were] this might exhaust any particular amount of storage allowed in the finite part of the machine. RPT operations require infinite registers of their own, in general... etc." (p. 214) === 1980: Schönhage's 0-parameter model RAM0 === Schönhage (1980) developed his computational model in the context of a "new" model he called the Storage Modification Machine model (SMM), his variety of pointer machine. His development described a RAM (random-access machine) model with a remarkable instruction set requiring no operands at all, excepting, perhaps, the "conditional jump" (and even that could be achieved without an operand): "...the RAM0 version deserves special attention for its extreme simplicity; its instruction set consists of only a few one-letter codes, without any (explicit) addressing" (p. 494) The way Schönhage did this is of interest. He (i) atomizes the conventional register "address:datum" into its two parts: "address", and "datum", and (ii) generates the "address" in a specific register n to which the finite-state machine instructions (i.e. 
the "machine code") would have access, and (iii) provides an "accumulator" register z where all arithmetic operations are to occur. His particular RAM0 model has only two "arithmetic operations" – "Z" for "set contents of register z to zero", and "A" for "add one to contents of register z". The only access to address-register n is via a copy-from-A-to-N instruction called "set address n". To store a "datum" in accumulator z in a given register, the machine uses the contents of n to specify the register's address and register z to supply the datum to be sent to the register. Peculiarities: A first peculiarity of the Schönhage RAM0 is how it "loads" something into register z: register z first supplies the register-address and then secondly, receives the datum from the register – a form of indirect "load". The second peculiarity is the specification of the COMPARE operation. This is a "jump if accumulator-register z = zero" (not, for example, "compare the contents of z to the contents of the register pointed to by n"). Apparently if the test fails the machine skips over the next instruction, which always must be in the form of "goto λ" where "λ" is the jump-to address. The instruction – "compare contents of z to zero" – is unlike the Schönhage successor-RAM1 model (or any other known successor-models) with the more conventional "compare contents of register z to contents of register a for equality". Primarily for reference – this is a RAM model, not a counter-machine model – the following is the Schönhage RAM0 instruction set: Again, the above instruction set is for a random-access machine, a RAM – a counter machine with indirect addressing; instruction "N" allows for indirect storage of the accumulator, and instruction "L" allows for indirect load of the accumulator. While peculiar, Schönhage's model shows how the conventional counter-machine's "register-to-register" or "read-modify-write" instruction set can be atomized to its simplest 0-parameter form. 
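The 0-parameter style of RAM0 can be made concrete with a small interpreter. The following Python sketch is a reconstruction from the description above; the one-letter opcodes and their exact semantics are an approximation, not Schönhage's precise definitions.

```python
# A sketch of a RAM0-style interpreter, reconstructed from the description
# above. Opcode letters and semantics are approximations, not Schönhage's
# precise definitions.

def ram0(program, mem):
    z, n, pc = 0, 0, 0              # accumulator z, address register n
    while pc < len(program):
        op = program[pc]
        arg = None
        if isinstance(op, tuple):
            op, arg = op
        if op == "Z":   z = 0                   # clear accumulator
        elif op == "A": z += 1                  # successor on z
        elif op == "S": n = z                   # "set address n" from z
        elif op == "N": mem[n] = z              # indirect store via n
        elif op == "L": z = mem.get(z, 0)       # z supplies address, then
                                                # receives the datum
        elif op == "C":                         # if z != 0, skip the next
            if z != 0:                          # instruction (a goto)
                pc += 1
        elif op == "G":                         # unconditional goto
            pc = arg
            continue
        pc += 1
    return z, mem

# Store the value 2 into register 5: build 5 in z, set n, rebuild 2, store.
prog = ["Z", "A", "A", "A", "A", "A", "S", "Z", "A", "A", "N"]
print(ram0(prog, {}))   # -> (2, {5: 2})
```

The demo shows the "read-modify-write" pattern reduced to single-letter steps: every operand is built up in the accumulator by successive increments before it can be used as an address or a datum.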
== References == == Bibliography == Boolos, George; Burgess, John P.; Jeffrey, Richard (2007) [1974]. Computability and Logic (5 ed.). Cambridge, England: Cambridge University Press. ISBN 978-0-521-87752-7. The original Boolos-Jeffrey text has been extensively revised by Burgess: more advanced than an introductory textbook. The "abacus machine" model is extensively developed in Chapter 5, Abacus Computability; it is one of three models extensively treated and compared; the Turing machine (still in Boolos' original 4-tuple form) and recursion are the other two. Cutland, Nigel (1980). Computability: An Introduction to Recursive Function Theory (PDF). Cambridge University Press. ISBN 0521223849. Retrieved 7 November 2023. Fischer, Patrick C.; Meyer, A. R.; Rosenberg, Arnold L. (1968), "Counter machines and counter languages", Mathematical Systems Theory, 2 (3): 265–283, doi:10.1007/bf01694011, MR 0235932, S2CID 13006433. Develops time hierarchy and space hierarchy theorems for counter machines, analogous to the hierarchies for Turing machines. Donald Knuth (1968), The Art of Computer Programming, Second Edition 1973, Addison-Wesley, Reading, Massachusetts. Cf. pages 462–463, where he defines "a new kind of abstract machine or 'automaton' which deals with linked structures." Joachim Lambek (1961, received 15 June 1961), How to Program an Infinite Abacus, Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 295–302. In his Appendix II, Lambek proposes a formal definition of "program". He references Melzak (1961) and Kleene (1952) Introduction to Metamathematics. Z. A. Melzak (1961, received 15 May 1961), An Informal Arithmetical Approach to Computability and Computation, Canadian Mathematical Bulletin, vol. 4, no. 3, September 1961, pages 279–293. Melzak offers no references but acknowledges "the benefit of conversations with Drs. R. Hamming, D. McIlroy and V. Vyssots of the Bell Telephone Laboratories and with Dr. H. Wang of Oxford University." Marvin Minsky (1961). 
"Recursive Unsolvability of Post's Problem of 'Tag' and Other Topics in Theory of Turing Machines". Annals of Mathematics. 74 (3): 437–455. doi:10.2307/1970290. JSTOR 1970290. Marvin Minsky (1967). Computation: Finite and Infinite Machines (1st ed.). Englewood Cliffs, N. J.: Prentice-Hall, Inc. In particular see chapter 11: Models Similar to Digital Computers and chapter 14: Very Simple Bases for Computability. In the former chapter he defines "Program machines" and in the latter chapter he discusses "Universal Program machines with Two Registers" and "...with one register", etc. Shepherdson, John C.; Sturgis, H. E. (1963). "Computability of Recursive Functions". Journal of the ACM. 10 (2): 217–255. doi:10.1145/321160.321170. An extremely valuable reference paper. In their Appendix A the authors cite 4 others with reference to "Minimality of Instructions Used in 4.1: Comparison with Similar Systems". Kaphengst, Heinz, "Eine abstrakte programmgesteuerte Rechenmaschine", Zeitschrift für mathematische Logik und Grundlagen der Mathematik 5 (1959), 366–379. Ershov, A. P., "On operator algorithms" (Russian), Dok. Akad. Nauk 122 (1958), 967–970. English translation, Automat. Express 1 (1959), 20–23. Péter, Rózsa, "Graphschemata und rekursive Funktionen", Dialectica 12 (1958), 373. Hermes, Hans, "Die Universalität programmgesteuerter Rechenmaschinen", Math.-Phys. Semesterberichte (Göttingen) 4 (1954), 42–53. A. Schönhage (1980), Storage Modification Machines, Society for Industrial and Applied Mathematics, SIAM J. Comput. Vol. 9, No. 3, August 1980. Wherein Schönhage shows the equivalence of his SMM with the "successor RAM" (Random Access Machine), etc. Rich Schroeppel, May 1972, "A Two-Counter Machine Cannot Calculate 2^N", Massachusetts Institute of Technology, A. I. Laboratory, Artificial Intelligence Memo #257. The author references Minsky 1967 and notes that "Frances Yao independently proved the non-computability using a similar method in April 1971." 
Peter van Emde Boas, Machine Models and Simulations, pp. 3–66, appearing in: Jan van Leeuwen, ed., Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity, The MIT Press/Elsevier, 1990. ISBN 0-444-88071-2 (volume A). QA 76.H279 1990. van Emde Boas' treatment of SMMs appears on pp. 32–35. This treatment clarifies Schönhage 1980; it closely follows but expands slightly on the Schönhage treatment. Both references may be needed for effective understanding. Hao Wang (1957), A Variant to Turing's Theory of Computing Machines, JACM (Journal of the Association for Computing Machinery) 4: 63–92. Presented at the meeting of the Association, June 23–25, 1954.
Wikipedia/Counter-machine_model
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics. == History == A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets. Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. 
In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. 
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity. == Values == Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits, 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented). Boolean algebra also deals with functions which have their values in the set {0,1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. 
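The arithmetic encodings just described range over only four input pairs, so they can be verified exhaustively; the short Python check below also builds an indicator function for a made-up subset (the sets E and F are illustrative values, not from the text).

```python
# The arithmetic encodings of the Boolean operations over the bits 0 and 1,
# checked exhaustively, plus an indicator function for an example subset.
from itertools import product

for x, y in product((0, 1), repeat=2):
    assert (x and y) == x * y               # conjunction as multiplication
    assert (x or y) == x + y - x * y        # inclusive disjunction
    assert (not x) == 1 - x                 # negation as 1 - x
    assert (x != y) == ((x + y) % 2)        # XOR is addition in GF(2)

# Indicator function of a subset F of E: 1 on F, 0 outside F.
E = {1, 2, 3, 4}                            # example universe (made up)
F = {2, 4}
indicator = {e: 1 if e in F else 0 for e in sorted(E)}
print(indicator)                            # {1: 0, 2: 1, 3: 0, 4: 1}
```

The indicator dictionary is exactly a {0, 1}-valued function on E, illustrating how subsets of a set form an instance of Boolean-valued functions.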
The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables. == Operations == === Basic operations === While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND ( ∧ {\displaystyle \land } ) and OR ( ∨ {\displaystyle \lor } ) and the unary operator NOT ( ¬ {\displaystyle \neg } ), collectively referred to as Boolean operators. Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables. They are used to store either true or false values. The basic operations on Boolean variables x and y are defined as follows: Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows: When used in expressions, the operators are applied according to the precedence rules. As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules. 
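As it happens, Python's `not`, `and`, `or` follow the same precedence ordering (negation binds tightest, then conjunction, then disjunction), so both the precedence rules and the truth tables can be demonstrated directly; this sketch uses the integers 0 and 1 for the truth values:

```python
# Python's `not`, `and`, `or` follow the same precedence as NOT, AND, OR:
# negation binds tightest, then conjunction, then disjunction.
x, y, z = 1, 0, 1
assert (not x and y or z) == (((not x) and y) or z)

# Tabulating the three basic operations for all four input combinations:
from itertools import product

rows = [(x, y, x & y, x | y, 1 - x) for x, y in product((0, 1), repeat=2)]
print("x y | AND OR NOTx")
for x, y, conj, disj, neg in rows:
    print(x, y, "|", conj, disj, neg)
```

Each printed row is one line of the combined truth table: the conjunction column is 1 only in the last row, the disjunction column is 0 only in the first.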
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions: x ∧ y = x y = min ( x , y ) x ∨ y = x + y − x y = x + y ( 1 − x ) = max ( x , y ) ¬ x = 1 − x {\displaystyle {\begin{aligned}x\wedge y&=xy=\min(x,y)\\x\vee y&=x+y-xy=x+y(1-x)=\max(x,y)\\\neg x&=1-x\end{aligned}}} One might consider that only negation and one of the two other operations are basic because of the following identities that allow one to define conjunction in terms of negation and the disjunction, and vice versa (De Morgan's laws): x ∧ y = ¬ ( ¬ x ∨ ¬ y ) x ∨ y = ¬ ( ¬ x ∧ ¬ y ) {\displaystyle {\begin{aligned}x\wedge y&=\neg (\neg x\vee \neg y)\\x\vee y&=\neg (\neg x\wedge \neg y)\end{aligned}}} === Secondary operations === Operations composed from the basic operations include, among others, the following: These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs. Material conditional The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of expression x → y is taken to be that of y (e.g. if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false). Exclusive OR (XOR) The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true (e.g. see table): if both are true then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0. 
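All of these identities can be verified exhaustively over the four possible inputs; a brief Python check, using `&`, `|`, `^` on the bits 0 and 1:

```python
# Exhaustively checking the identities above over the bits 0 and 1:
# min/max forms, De Morgan's laws, material implication, and XOR.
from itertools import product

for x, y in product((0, 1), repeat=2):
    assert (x & y) == min(x, y) == x * y
    assert (x | y) == max(x, y) == x + y - x * y
    assert (x & y) == 1 - ((1 - x) | (1 - y))   # NOT(NOT x OR NOT y)
    assert (x | y) == 1 - ((1 - x) & (1 - y))   # NOT(NOT x AND NOT y)
    implies = (1 - x) | y        # x -> y: true whenever x is false
    assert implies == (y if x else 1)
    assert (x ^ y) == (x + y) % 2               # XOR is addition mod 2
print("all identities hold")
```

Because a Boolean identity in n variables has only 2^n cases, brute-force checking like this is always available, unlike in ordinary algebra.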
Logical equivalence The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. Thus, its counterpart in arithmetic mod 2 is x + y. Equivalence's counterpart in arithmetic mod 2 is x + y + 1. == Laws == A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra). === Monotone laws === Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra: The following laws hold in Boolean algebra, but not in ordinary algebra: Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. 
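Both halves of this comparison are easy to verify mechanically: the extra Boolean laws hold for every assignment over {0, 1} yet fail for other numbers, and the monotonicity property can be checked by raising one input at a time. A sketch:

```python
# Checking the laws over {0, 1}, taking AND = min and OR = max, and
# falsifying the Boolean-only laws in ordinary arithmetic.
from itertools import product

AND, OR = min, max

for x, y in product((0, 1), repeat=2):
    assert AND(x, x) == x                 # idempotence of AND
    assert OR(x, x) == x                  # idempotence of OR
    assert OR(x, AND(x, y)) == x          # absorption 1
    assert AND(x, OR(x, y)) == x          # absorption 2
    # monotonicity: raising an input never drops the output from 1 to 0
    assert AND(1, y) >= AND(x, y) and OR(1, y) >= OR(x, y)

# Read as ordinary arithmetic these laws fail: x * x = x fails at x = 2,
assert 2 * 2 != 2
# and absorption 1 (x + x * y = x) fails at x = y = 1.
assert 1 + 1 * 1 != 1
```

By contrast, negation is not monotone: 1 − x drops from 1 to 0 as x rises, which is exactly the nonmonotonicity discussed next.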
Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows. === Nonmonotone laws === The complement operation is defined by the following two laws. Complementation 1 x ∧ ¬ x = 0 Complementation 2 x ∨ ¬ x = 1 {\displaystyle {\begin{aligned}&{\text{Complementation 1}}&x\wedge \neg x&=0\\&{\text{Complementation 2}}&x\vee \neg x&=1\end{aligned}}} All properties of negation including the laws below follow from the above two laws alone. In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law) Double negation ¬ ( ¬ x ) = x {\displaystyle {\begin{aligned}&{\text{Double negation}}&\neg {(\neg {x})}&=x\end{aligned}}} But whereas ordinary algebra satisfies the two laws ( − x ) ( − y ) = x y ( − x ) + ( − y ) = − ( x + y ) {\displaystyle {\begin{aligned}(-x)(-y)&=xy\\(-x)+(-y)&=-(x+y)\end{aligned}}} Boolean algebra satisfies De Morgan's laws: De Morgan 1 ¬ x ∧ ¬ y = ¬ ( x ∨ y ) De Morgan 2 ¬ x ∨ ¬ y = ¬ ( x ∧ y ) {\displaystyle {\begin{aligned}&{\text{De Morgan 1}}&\neg x\wedge \neg y&=\neg {(x\vee y)}\\&{\text{De Morgan 2}}&\neg x\vee \neg y&=\neg {(x\wedge y)}\end{aligned}}} === Completeness === The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in § Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. 
In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, or even necessarily the most natural given that attention was not paid as to whether some of the axioms followed from others, but there was simply a choice to stop when enough laws had been noticed, treated further in § Axiomatizing Boolean algebra. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent. === Duality principle === Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial. 
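The renaming argument can be replayed in a few lines: composing each operation with the 0 ↔ 1 swap on its inputs and output turns ∧ into ∨ and vice versa, leaving nothing to distinguish the relabeled algebra from the original.

```python
# Swapping 0 <-> 1 and AND <-> OR simultaneously preserves every
# truth table: OR computed on the renamed values is AND on the originals.
from itertools import product

AND, OR = min, max
swap = lambda v: 1 - v    # the renaming 0 <-> 1

for x, y in product((0, 1), repeat=2):
    assert swap(OR(swap(x), swap(y))) == AND(x, y)
    assert swap(AND(swap(x), swap(y))) == OR(x, y)
```

The two assertions are just De Morgan's laws read as a change of names rather than as identities, which is why the duality principle is also called De Morgan duality.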
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change not needed to make as part of this interchange was to complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t. The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.: 21–22  == Diagrammatic representations == === Venn diagrams === A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x corresponds respectively to the values 1 (true) and 0 (false) for variable x. 
The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry. Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. 
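The commutativity and idempotence laws visualized by these diagrams can also be checked exhaustively over the two truth values, since there are only finitely many cases; a minimal sketch:

```python
# Exhaustive check, over {0, 1}, of the laws visualized by the Venn diagrams.
for x in (0, 1):
    assert (x & x) == x                # idempotence of AND
    assert (x | x) == x                # idempotence of OR
    for y in (0, 1):
        assert (x & y) == (y & x)      # commutativity of AND
        assert (x | y) == (y | x)      # commutativity of OR
```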
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x∧y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle. To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged. The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle. === Digital logic gates === Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows: The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. 
For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y. == Boolean algebras == The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. 
Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion. === Concrete Boolean algebras === A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X. (Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations—0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.) Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable. Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide. Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite.
Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers. Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves. === Subsets as bit vectors === A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if ⁠ X = { a , b , c } {\displaystyle X=\{a,b,c\}} ⁠ where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111.
Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]). From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010∧0110 = 0010, 1010∨0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. === Prototypical Boolean algebra === The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation. The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector. The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete. 
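The bitwise operations just described are exactly the bitwise operators of most programming languages; a short sketch of the examples above, using Python integers as 4-bit vectors:

```python
# Bitwise AND, OR, NOT on 4-bit vectors realize intersection, union, and
# complement of the corresponding subsets. Masking with 0b1111 keeps the
# complement within the fixed length of four bits.
WIDTH = 4
MASK = (1 << WIDTH) - 1

a, b = 0b1010, 0b0110
assert a & b == 0b0010            # 1010 AND 0110 = 0010 (intersection)
assert a | b == 0b1110            # 1010 OR 0110 = 1110 (union)
assert (~a) & MASK == 0b0101      # NOT 1010 = 0101 (complement relative to X)
```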
=== Boolean algebras: the definition === The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra. Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. A Boolean algebra is a complemented distributive lattice. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition. === Representable Boolean algebras === Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. 
Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion. A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra. The next question is answered positively as follows. Every Boolean algebra is representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability. The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras. 
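The divisor example above can be checked directly. The sketch below verifies, for n = 30, the two complement laws (with 1 as bottom and n as top) and one distributivity law over all divisors; the helper names are illustrative.

```python
# The positive divisors of a square-free n (here n = 30) under
# gcd (as meet), lcm (as join), and x -> n/x (as complement).
from math import gcd

n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]

def lcm(x, y): return x * y // gcd(x, y)
def comp(x): return n // x        # complement: division into n

for x in divisors:
    assert gcd(x, comp(x)) == 1   # x AND NOT x = bottom (the divisor 1)
    assert lcm(x, comp(x)) == n   # x OR NOT x = top (the divisor n)
    for y in divisors:
        for z in divisors:
            # gcd distributes over lcm
            assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
```

Square-freeness is what makes the complement laws work: for n = 12 the divisor 2 would give gcd(2, 12/2) = 2 ≠ 1.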
== Axiomatizing Boolean algebra == The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based. Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice. By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ( ( a ∣ b ) ∣ c ) ∣ ( a ∣ ( ( a ∣ c ) ∣ a ) ) = c {\displaystyle ((a\mid b)\mid c)\mid (a\mid ((a\mid c)\mid a))=c} is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra. == Propositional logic == Propositional logic is a logical system that is intimately connected to Boolean algebra.
Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra. Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ... Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or T. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions. The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. 
Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. === Applications === One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P. Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. 
The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) === Deductive systems for propositional logic === An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem. ==== Sequent calculus ==== Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. 
The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus. == Applications == Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics. === Computers === In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. 
(Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second. 
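That difference is easy to see on small bit patterns; a brief sketch:

```python
# Numeric addition propagates a carry between bit positions;
# the Boolean operations treat each position independently.
x, y = 0b0110, 0b0011
assert x + y == 0b1001       # carry out of the low positions ripples upward
assert x | y == 0b0111       # bitwise OR: no interaction between positions
assert x ^ y == 0b0101       # XOR is addition with the carries discarded
```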
=== Two-valued logic === Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right. A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low. Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory. Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic.
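These replacement operations can be sketched as follows; the function names are illustrative, and the check confirms that on the endpoints {0, 1} they reduce to the ordinary Boolean operations:

```python
# Fuzzy/probabilistic operations over the unit interval [0, 1]:
# NOT x = 1 - x, AND = multiplication, OR via De Morgan's law.
def f_not(x): return 1 - x
def f_and(x, y): return x * y
def f_or(x, y): return 1 - (1 - x) * (1 - y)   # De Morgan: NOT(NOT x AND NOT y)

# Restricted to {0, 1} these agree with the Boolean operations:
for x in (0, 1):
    for y in (0, 1):
        assert f_and(x, y) == (x & y)
        assert f_or(x, y) == (x | y)

# On intermediate values they behave like probabilities of independent events:
assert f_or(0.5, 0.5) == 0.75
assert abs(f_and(0.2, 0.5) - 0.1) < 1e-9
```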
In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true. === Boolean operations === The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas. ==== Natural language ==== Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these logical connectives often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. 
"Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P," the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them. ==== Digital logic ==== Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0,1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements. ==== Naive set theory ==== Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on. ==== Video cards ==== The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc.
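This compile-time arithmetic is easy to check; a short Python sketch evaluating the raster-operation bytes from the constants above:

```python
# The three masking constants enumerate, one bit per combination, the
# 2^3 = 8 possible (src, dst, msk) input triples, so any Boolean
# expression over them evaluates to the byte naming that ternary operation.
SRC = 0b10101010  # 0xaa
DST = 0b11001100  # 0xcc
MSK = 0b11110000  # 0xf0

rop = (SRC ^ DST) & MSK  # XOR source and destination, then AND with mask
print(hex(rop))          # 0x60
print(hex(SRC ^ DST))    # 0x66
```

Each bit of the resulting byte records the operation's output for one of the eight input combinations, which is exactly the operation's truth table.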
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression. ==== Modeling and CAD ==== Solid modeling systems for computer-aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference: remove the elements of y from those of x. Thus, given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference. ==== Boolean searches ==== Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google. Double quotes are used to combine whitespace-separated words into a single search term.
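The set interpretation of queries can be sketched with Python sets; the page sets below are invented purely for illustration:

```python
# Hypothetical index: which pages match each quoted search term.
term1_pages = {"page_a", "page_b", "page_c"}
term2_pages = {"page_b", "page_c", "page_d"}

and_results = term1_pages & term2_pages   # implicit AND -> intersection
or_results = term1_pages | term2_pages    # OR keyword   -> union
not_results = term1_pages - term2_pages   # prefixed minus -> set difference

print(sorted(and_results))  # ['page_b', 'page_c']
print(sorted(not_results))  # ['page_a']
```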
Whitespace is used to specify logical AND, as it is the default operator for joining search terms: "Search term 1" "Search term 2" The OR keyword is used for logical OR: "Search term 1" OR "Search term 2" A prefixed minus sign is used for logical NOT: "Search term 1" -"Search term 2" == See also == == Notes == == References == == Further reading == Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8. Whitesitt, J. Eldon (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-486-68483-3. Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg, Germany: Physica Verlag. Sikorski, Roman (1969). Boolean Algebras (3 ed.). Berlin, Germany: Springer-Verlag. ISBN 978-0-387-04469-9. Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel. === Historical perspective === Boole, George (1848). "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III: 183–198. Hailperin, Theodore (1986). Boole's logic and probability: a critical exposition from the standpoint of contemporary algebra, logic, and probability theory (2 ed.). Elsevier. ISBN 978-0-444-87952-3. Gabbay, Dov M.; Woods, John, eds. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the History of Logic. Vol. 3. Elsevier. ISBN 978-0-444-51611-4. Several relevant chapters by Hailperin, Valencia, and Grattan-Guinness. Badesa, Calixto (2004). "Chapter 1. Algebra of Classes and Propositional Calculus". The birth of model theory: Löwenheim's theorem in the frame of the theory of relatives. Princeton University Press. ISBN 978-0-691-05853-5. Stanković, Radomir S.; Astola, Jaakko Tapio (2011). From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.).
Berlin & Heidelberg, Germany: Springer-Verlag. pp. xviii + 212. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25. "The Algebra of Logic Tradition" entry by Burris, Stanley in the Stanford Encyclopedia of Philosophy, 21 February 2012 == External links ==
Wikipedia/Boolean_equation
In mathematical logic and computer science, a general recursive function, partial recursive function, or μ-recursive function is a partial function from natural numbers to natural numbers that is "computable" in an intuitive sense – as well as in a formal one. If the function is total, it is also called a total recursive function (sometimes shortened to recursive function). In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every total recursive function is a primitive recursive function—the most famous example is the Ackermann function. Other equivalent classes of functions are the functions of lambda calculus and the functions that can be computed by Markov algorithms. The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R. == Definition == The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the minimization operator μ. The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimization) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimization of the successor function is undefined.
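The two total-preserving closure operators can be sketched as Python higher-order functions (the names compose and prim_rec are ad hoc, not standard notation):

```python
# Ad hoc sketch: compose builds h ∘ (g1, ..., gm); prim_rec builds
# rho(g, h) by iterating the induction step.
def compose(h, *gs):
    return lambda *xs: h(*(g(*xs) for g in gs))

def prim_rec(g, h):
    def f(y, *xs):
        acc = g(*xs)              # f(0, xs)   = g(xs)
        for i in range(y):        # f(i+1, xs) = h(i, f(i, xs), xs)
            acc = h(i, acc, *xs)
        return acc
    return f

succ = lambda x: x + 1                  # successor S
proj = lambda i: lambda *xs: xs[i - 1]  # projection P_i^k (arity implicit)

# Addition via primitive recursion: f(0, a) = a and f(b+1, a) = S(f(b, a)).
add = prim_rec(proj(1), compose(succ, proj(2)))
print(add(3, 4))  # 7
```

Because the loop in prim_rec always runs a fixed number of steps, functions built from total functions with these two operators alone remain total; partiality enters only with the minimization operator.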
The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive, and to be non-primitive. Primitive or "basic" functions: Constant functions Ckn: For each natural number n and every k C n k ( x 1 , … , x k ) = d e f n {\displaystyle C_{n}^{k}(x_{1},\ldots ,x_{k})\ {\stackrel {\mathrm {def} }{=}}\ n} Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Successor function S: S ( x ) = d e f x + 1 {\displaystyle S(x)\ {\stackrel {\mathrm {def} }{=}}\ x+1\,} Projection function P i k {\displaystyle P_{i}^{k}} (also called the Identity function): For all natural numbers i , k {\displaystyle i,k} such that 1 ≤ i ≤ k {\displaystyle 1\leq i\leq k} : P i k ( x 1 , … , x k ) = d e f x i . {\displaystyle P_{i}^{k}(x_{1},\ldots ,x_{k})\ {\stackrel {\mathrm {def} }{=}}\ x_{i}\,.} Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result): Composition operator ∘ {\displaystyle \circ \,} (also called the substitution operator): Given an m-ary function h ( x 1 , … , x m ) {\displaystyle h(x_{1},\ldots ,x_{m})\,} and m k-ary functions g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) {\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})} : h ∘ ( g 1 , … , g m ) = d e f f , where f ( x 1 , … , x k ) = h ( g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) ) . 
{\displaystyle h\circ (g_{1},\ldots ,g_{m})\ {\stackrel {\mathrm {def} }{=}}\ f,\quad {\text{where}}\quad f(x_{1},\ldots ,x_{k})=h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})).} This means that f ( x 1 , … , x k ) {\displaystyle f(x_{1},\ldots ,x_{k})} is defined only if g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) , {\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}),} and h ( g 1 ( x 1 , … , x k ) , … , g m ( x 1 , … , x k ) ) {\displaystyle h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}))} are all defined. Primitive recursion operator ρ: Given the k-ary function g ( x 1 , … , x k ) {\displaystyle g(x_{1},\ldots ,x_{k})\,} and k+2 -ary function h ( y , z , x 1 , … , x k ) {\displaystyle h(y,z,x_{1},\ldots ,x_{k})\,} : ρ ( g , h ) = d e f f where the k+1 -ary function f is defined by f ( 0 , x 1 , … , x k ) = g ( x 1 , … , x k ) f ( S ( y ) , x 1 , … , x k ) = h ( y , f ( y , x 1 , … , x k ) , x 1 , … , x k ) . {\displaystyle {\begin{aligned}\rho (g,h)&\ {\stackrel {\mathrm {def} }{=}}\ f\quad {\text{where the k+1 -ary function }}f{\text{ is defined by}}\\f(0,x_{1},\ldots ,x_{k})&=g(x_{1},\ldots ,x_{k})\\f(S(y),x_{1},\ldots ,x_{k})&=h(y,f(y,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})\,.\end{aligned}}} This means that f ( y , x 1 , … , x k ) {\displaystyle f(y,x_{1},\ldots ,x_{k})} is defined only if g ( x 1 , … , x k ) {\displaystyle g(x_{1},\ldots ,x_{k})} and h ( z , f ( z , x 1 , … , x k ) , x 1 , … , x k ) {\displaystyle h(z,f(z,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})} are defined for all z < y . 
{\displaystyle z<y.} Minimization operator μ: Given a (k+1)-ary function f ( y , x 1 , … , x k ) {\displaystyle f(y,x_{1},\ldots ,x_{k})\,} , the k-ary function μ ( f ) {\displaystyle \mu (f)} is defined by: μ ( f ) ( x 1 , … , x k ) = z ⟺ d e f f ( i , x 1 , … , x k ) > 0 for i = 0 , … , z − 1 and f ( z , x 1 , … , x k ) = 0 {\displaystyle {\begin{aligned}\mu (f)(x_{1},\ldots ,x_{k})=z{\stackrel {\mathrm {def} }{\iff }}\ f(i,x_{1},\ldots ,x_{k})&>0\quad {\text{for}}\quad i=0,\ldots ,z-1\quad {\text{and}}\\f(z,x_{1},\ldots ,x_{k})&=0\quad \end{aligned}}} Intuitively, minimization seeks—beginning the search from 0 and proceeding upwards—the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and μ ( f ) {\displaystyle \mu (f)} is not defined for the argument ( x 1 , … , x k ) . {\displaystyle (x_{1},\ldots ,x_{k}).} While some textbooks use the μ-operator as defined here, others demand that the μ-operator be applied only to total functions f. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's Normal Form Theorem (see below). The only difference is that it becomes undecidable whether a specific function definition defines a μ-recursive function, as it is undecidable whether a computable (i.e. μ-recursive) function is total. The strong equality relation ≃ {\displaystyle \simeq } can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that f ( x 1 , … , x k ) ≃ g ( x 1 , … , x l ) {\displaystyle f(x_{1},\ldots ,x_{k})\simeq g(x_{1},\ldots ,x_{l})} holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.
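The unbounded search can be sketched directly in Python (the helper names are mine; a genuinely undefined value surfaces as a loop that never terminates):

```python
# Sketch of the minimization operator mu: search upwards from 0 for the
# least z with f(z, xs) == 0. If no such z exists, or f is undefined
# somewhere along the way, the loop never returns, mirroring mu(f)'s partiality.
def mu(f):
    def searched(*xs):
        z = 0
        while f(z, *xs) != 0:
            z += 1
        return z
    return searched

# Illustration (not from the article): integer square root as the least
# z whose successor squared exceeds n.
def above_sqrt(z, n):
    return 0 if (z + 1) ** 2 > n else 1

isqrt = mu(above_sqrt)
print(isqrt(10))  # 3
print(isqrt(16))  # 4
```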
== Examples == Examples not involving the minimization operator can be found at Primitive recursive function#Examples. The following examples are intended just to demonstrate the use of the minimization operator; they could also be defined without it, albeit in a more complicated way, since they are all primitive recursive. The following examples define general recursive functions that are not primitive recursive; hence they cannot avoid using the minimization operator. == Total recursive function == A general recursive function is called a total recursive function if it is defined for every input, or, equivalently, if it can be computed by a total Turing machine. There is no way to computably tell whether a given general recursive function is total; see the halting problem. == Equivalence with other models of computability == In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion, as those do not provide a mechanism for "infinite loops" (undefined values). == Normal form theorem == A normal form theorem due to Kleene says that for each k there are primitive recursive functions U ( y ) {\displaystyle U(y)\!} and T ( y , e , x 1 , … , x k ) {\displaystyle T(y,e,x_{1},\ldots ,x_{k})\!} such that for any μ-recursive function f ( x 1 , … , x k ) {\displaystyle f(x_{1},\ldots ,x_{k})\!} with k free variables there is an e such that f ( x 1 , … , x k ) ≃ U ( μ ( T ) ( e , x 1 , … , x k ) ) {\displaystyle f(x_{1},\ldots ,x_{k})\simeq U(\mu (T)(e,x_{1},\ldots ,x_{k}))} . The number e is called an index or Gödel number for the function f.: 52–53  A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.
Minsky observes that the U {\displaystyle U} defined above is in essence the μ-recursive equivalent of the universal Turing machine: To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine. == Symbolism == A number of different symbolisms are used in the literature. An advantage of using the symbolism is that a derivation of a function by "nesting" of the operators one inside the other is easier to write in a compact form. In the following the string of parameters x1, ..., xn is abbreviated as x: Constant function: Kleene uses " Cnq(x) = q " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " constn( x) = n ": e.g. C713 ( r, s, t, u, v, w, x ) = 13 e.g. const13 ( r, s, t, u, v, w, x ) = 13 Successor function: Kleene uses x' and S for "Successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows: S(a) = a + 1 =def a', where 1 =def 0', 2 =def 0'', etc. Identity function: Kleene (1952) uses " Uni " to indicate the identity function over the variables xi; B-B-J use the identity function idni over the variables x1 to xn: Uni( x ) = idni( x ) = xi e.g. U73 = id73 ( r, s, t, u, v, w, x ) = t Composition (Substitution) operator: Kleene uses a bold-face Smn (not to be confused with his S for "successor"!). The superscript "m" refers to the mth function "fm", whereas the subscript "n" refers to the nth variable "xn": If we are given h( x ) = g( f1(x), ... , fm(x) ) h(x) = Snm(g, f1, ...
, fm ) In a similar manner, but without the sub- and superscripts, B-B-J write: h(x) = Cn[g, f1, ..., fm](x) Primitive Recursion: Kleene uses the symbol " Rn(base step, induction step) " where n indicates the number of variables, B-B-J use " Pr(base step, induction step)(x) ". Given: base step: h( 0, x ) = f( x ), and induction step: h( y+1, x ) = g( y, h(y, x), x ) Example: primitive recursion definition of a + b: base step: f( 0, a ) = a = U11(a) induction step: f( b', a ) = ( f( b, a ) )' = g( b, f( b, a ), a ) = g( b, c, a ) = c' = S(U32( b, c, a )) R2{ U11(a), S[ U32( b, c, a ) ] } Pr{ U11(a), S[ U32( b, c, a ) ] } Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice reversal of variables a and b). He starts with 3 initial functions S(a) = a' U11(a) = a U32( b, c, a ) = c and from these defines g(b, c, a) = S(U32( b, c, a )) = c' base step: h( 0, a ) = U11(a) induction step: h( b', a ) = g( b, h( b, a ), a ) He arrives at: a + b = R2[ U11, S31(S, U32) ] == Examples == Fibonacci number McCarthy 91 function == See also == Recursion theory Recursion Recursion (computer science) == References == == External links == Stanford Encyclopedia of Philosophy entry A compiler for transforming a recursive function into an equivalent Turing machine
Wikipedia/Mu_recursive_function
In supergravity and supersymmetric representation theory, Adinkra symbols are a graphical representation of supersymmetric algebras. Mathematically they can be described as colored finite connected simple graphs that are bipartite and n-regular. Their name is derived from the West African Adinkra symbols of the same name, and they were introduced by Michael Faux and Sylvester James Gates in 2004. == Overview == One approach to the representation theory of super Lie algebras is to restrict attention to representations in one space-time dimension and having N {\displaystyle N} supersymmetry generators, i.e., to ( 1 | N ) {\displaystyle (1|N)} superalgebras. In that case, the defining algebraic relationship among the supersymmetry generators reduces to { Q I , Q J } = 2 i δ I J ∂ τ {\displaystyle \{Q_{I},Q_{J}\}=2i\delta _{IJ}\partial _{\tau }} . Here ∂ τ {\displaystyle \partial _{\tau }} denotes partial differentiation along the single space-time coordinate. One simple realization of the ( 1 | 1 ) {\displaystyle (1|1)} algebra consists of a single bosonic field ϕ {\displaystyle \phi } , a fermionic field ψ {\displaystyle \psi } , and a generator Q {\displaystyle Q} which acts as Q ϕ = i ψ {\displaystyle Q\phi =i\psi } , Q ψ = ∂ τ ϕ {\displaystyle Q\psi =\partial _{\tau }\phi } . Since we have just one supersymmetry generator in this case, the superalgebra relation reduces to Q 2 = i ∂ τ {\displaystyle Q^{2}=i\partial _{\tau }} , which is clearly satisfied. We can represent this algebra graphically using one solid vertex, one hollow vertex, and a single colored edge connecting them. == See also == Feynman diagram == References == == External links == http://golem.ph.utexas.edu/category/2007/08/adinkras.html https://www.flickr.com/photos/science_and_thecity/2796684536/ https://www.flickr.com/photos/science_and_thecity/2795836787/ http://www.thegreatcourses.com/courses/superstring-theory-the-dna-of-reality.html
Wikipedia/Adinkra_symbols_(physics)